Altitude simulation facility for testing large space motors
Katz, U.; Lustig, J.; Cohen, Y.; Malkin, I.
1993-02-01
This work describes the design of an altitude simulation facility for testing the AKM motor installed in the 'Ofeq' satellite launcher. The facility, which is controlled by a computer, consists of a diffuser and a single-stage ejector fed with preheated air. The calculations of performance and dimensions of the gas extraction system were conducted according to a one-dimensional analysis. Tests were carried out on a small-scale model of the facility in order to examine the design concept, then the full-scale facility was constructed and operated. There was good agreement among the results obtained from the small-scale facility, from the full-scale facility, and from calculations.
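The abstract does not reproduce the one-dimensional analysis it mentions, but two standard building blocks of such a diffuser/ejector calculation are the isentropic area-Mach relation and the normal-shock pressure recovery. A minimal illustrative sketch (ideal gas with γ = 1.4 assumed; function names are hypothetical, not from the paper):

```python
import math

GAMMA = 1.4  # ratio of specific heats for air (assumption)

def area_ratio(mach, gamma=GAMMA):
    """Isentropic area ratio A/A* for a given Mach number."""
    g = gamma
    term = (2.0 / (g + 1.0)) * (1.0 + 0.5 * (g - 1.0) * mach**2)
    return term ** ((g + 1.0) / (2.0 * (g - 1.0))) / mach

def shock_pressure_ratio(m1, gamma=GAMMA):
    """Static pressure ratio p2/p1 across a normal shock at upstream Mach m1."""
    g = gamma
    return 1.0 + 2.0 * g / (g + 1.0) * (m1**2 - 1.0)

def shock_downstream_mach(m1, gamma=GAMMA):
    """Mach number just downstream of a normal shock."""
    g = gamma
    return math.sqrt(((g - 1.0) * m1**2 + 2.0) / (2.0 * g * m1**2 - (g - 1.0)))
```

For example, a diffuser swallowing a Mach 2 stream sees a static pressure rise of 4.5 across a normal shock, with the flow leaving at roughly Mach 0.58.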
A Steam Jet Plume Simulation in a Large Bulk Space with a System Code MARS
International Nuclear Information System (INIS)
Bae, Sung Won; Chung, Bub Dong
2006-01-01
In May 2002, the OECD-SETH group launched the PANDA Project in order to provide an experimental database for multi-dimensional code assessment. The OECD-SETH group expects the PANDA Project to meet the increasing need for adequate experimental data on the 3D distribution of relevant variables, such as temperature, velocity and steam-air concentrations, measured with sufficient resolution and accuracy. The scope of the PANDA Project is mixture stratification and mixing phenomena in a large bulk space. A total of 24 test series are being performed at PSI, Switzerland. The PANDA facility consists of 2 main large vessels and 1 connection pipe. Within the large vessels, a steam injection nozzle and an outlet vent are arranged for each test case. The tests are categorized into 3 modes, i.e. the high momentum, near-wall plume, and free plume tests. KAERI has participated in the SETH group since 1997 so that the multi-dimensional capability of the MARS code could be assessed and developed. Test 17, the high steam jet injection test, has already been simulated by MARS and shows promising results. Now the test 9 and 9bis cases, which use a low-speed horizontal steam jet flow, have been simulated and investigated.
Large size space construction for space exploitation
Kondyurin, Alexey
2016-07-01
Space exploitation is impossible without large space structures. We need sufficiently large volumes of pressurized, protective frames for crews, passengers and space processing equipment; we should not be size-limited in space. At present the size and mass of space constructions are limited by the capacity of the launch vehicle, which constrains the human exploitation of space and the development of the space industry. Large-size space constructions can be made using the curing technology of fiber-filled composites with a reactive matrix, applied directly in free space. For curing, fabric impregnated with a liquid matrix (a prepreg) is prepared in terrestrial conditions and shipped in a container to orbit. In due time the prepreg is unfolded by inflation. After the polymerization reaction, the durable construction can be fitted out with air, apparatus and life support systems. Our experimental studies of the curing processes in a simulated free-space environment showed that the curing of composites in free space is possible, so large-size space constructions can be developed. Projects for a space station, Moon base, Mars base, mining station, interplanetary spaceship, telecommunication station, space observatory, space factory, antenna dish, radiation shield and solar sail are proposed and overviewed. The study was supported by the Humboldt Foundation, ESA (contract 17083/03/NL/SFe), the NASA stratospheric balloon program and RFBR grants (05-08-18277, 12-08-00970 and 14-08-96011).
Venkatachari, Balaji Shankar; Streett, Craig L.; Chang, Chau-Lyan; Friedlander, David J.; Wang, Xiao-Yen; Chang, Sin-Chung
2016-01-01
Despite decades of development of unstructured mesh methods, high-fidelity time-accurate simulations are still predominantly carried out on structured or unstructured hexahedral meshes by using high-order finite-difference, weighted essentially non-oscillatory (WENO), or hybrid schemes formed by their combinations. In this work, the space-time conservation element solution element (CESE) method is used to simulate several flow problems, including supersonic jet/shock interaction and its impact on launch vehicle acoustics, and direct numerical simulations of turbulent flows using tetrahedral meshes. This paper provides a status report for the continuing development of the CESE numerical and software framework under the Revolutionary Computational Aerosciences (RCA) project. Solution accuracy and large-scale parallel performance of the numerical framework are assessed with the goal of providing a viable paradigm for future high-fidelity flow physics simulations.
International Nuclear Information System (INIS)
Dobranich, Dean; Blanchat, Thomas K.
2008-01-01
Sandia National Laboratories, as a Department of Energy / National Nuclear Security Administration laboratory, has a major responsibility to ensure the safety and security of nuclear weapons. As such, with an experienced research staff, Sandia maintains a spectrum of modeling and simulation capabilities integrated with experimental and large-scale test capabilities. This expertise and these capabilities offer considerable resources for addressing issues of interest to the space power and propulsion communities. This paper presents Sandia's capability to perform thermal qualification (analysis, test, modeling and simulation) using a representative weapon system as an example, demonstrating the potential to support NASA's Lunar Reactor System.
Simpson, R.; Broussely, M.; Edwards, G.; Robinson, D.; Cozzani, A.; Casarosa, G.
2012-07-01
The National Physical Laboratory (NPL) and The European Space Research and Technology Centre (ESTEC) have performed for the first time successful surface temperature measurements using infrared thermal imaging in the ESTEC Large Space Simulator (LSS) under vacuum and with the Sun Simulator (SUSI) switched on during thermal qualification tests of the GAIA Deployable Sunshield Assembly (DSA). The thermal imager temperature measurements, with radiosity model corrections, show good agreement with thermocouple readings on well characterised regions of the spacecraft. In addition, the thermal imaging measurements identified potentially misleading thermocouple temperature readings and provided qualitative real-time observations of the thermal and spatial evolution of surface structure changes and heat dissipation during hot test loadings, which may yield additional thermal and physical measurement information through further research.
An Improved Treatment of AC Space Charge Fields in Large Signal Simulation Codes
National Research Council Canada - National Science Library
Dialetis, D; Chernin, D; Antonsen, Jr., T. M; Levush, B
2006-01-01
An accurate representation of the AC space charge electric field is required in order to be able to predict the performance of linear beam tubes, including TWT's and klystrons, using a steady state...
Kotik, A.; Usyukin, V.; Vinogradov, I.; Arkhipov, M.
2017-11-01
The realization of astrophysical research requires the development of high-sensitivity centimeter-band parabolic space radio telescopes (SRT) with large mirrors. Structurally, an SRT with a mirror larger than 10 m can be realized as a deployable rigid structure; mesh structures of that size do not provide the reflecting-surface accuracy necessary for centimeter-band observations. Such a telescope with a 10 m diameter mirror is currently being developed in Russia within the "SPECTR-R" program. The external dimensions of the telescope exceed the size of existing thermal-vacuum chambers used to verify the SRT reflecting-surface accuracy under the action of space environment factors. Numerical simulation therefore becomes the basis for accepting the adopted designs, and such modeling should rest on experimental characterization of the basic structural materials and elements of the future reflector. This article considers computational modeling of the reflecting-surface deviations of a large deployable centimeter-band space reflector during its orbital operation. The factors that determine the deviations are analyzed, both deterministic (temperature fields) and non-deterministic (telescope manufacturing and installation faults; deformations caused by the behavior of composite materials in space). A finite-element model and a set of methods are developed that allow computational modeling of the reflecting-surface deviations caused by all these factors and account for deviation correction by the spacecraft orientation system. Results of the modeling for two modes of operation (orientation to the Sun) of the SRT are presented.
Electron beam injection and associated phenomena as observed in a large space simulation chamber
International Nuclear Information System (INIS)
Beghin, C.; Arnal, Y.; Delahaye, J.Y.
1982-01-01
This chapter describes an experiment whose main purpose was to perform a simulation under conditions where the ambient neutral and ionized gas, magnetic field strength and layout of the different packages were as close as possible to those anticipated for the First Spacelab Flight (FSLP) mission. Phenomena Induced by Charged Particle Beams (PICPAB) are planned to be investigated during the FSLP using a European payload. The PICPAB experiment consists of two accelerators of electron and ion beams and associated diagnostic instruments, including wave receivers, thermal plasma probes and return-current particle energy analyzers. The main results of the test with the electron beam are reported. Topics considered include the experimental configuration; a transverse dc electric field in the absence of background plasma; a transverse dc electric field in the background plasma; ambient plasma response; a high-frequency electric field; return current characteristics; and collector vs. plasma behavior. The complexity of the beam-plasma-collector-gun system is shown, in which nonlinear processes are generated in several consecutive steps. It is concluded that under the peculiar conditions described (with the beam propagation distance shorter than the first node focalization length, nearly zero pitch-angle injection, and neutral gas pressure ranging from less than 10⁻⁶ up to 10⁻⁴ torr), the beam plasma discharge was never triggered.
Mobile work station concept for assembly of large space structures (zero gravity simulation tests)
Heard, W. L., Jr.; Bush, H. G.; Wallsom, R. E.; Jensen, J. K.
1982-03-01
The concept presented is intended to enhance astronaut assembly of truss structures that are either too large or too complex to fold for efficient Shuttle delivery to orbit. The potential of augmented astronaut assembly is illustrated by applying the results of the tests to a bare-bones assembly of a truss structure. If this structure were assembled from the same nestable struts used in the Mobile Work Station assembly tests, the spacecraft would be 55 meters in diameter and consist of about 500 struts. The struts could be packaged in less than 1/2% of the Shuttle cargo bay volume, would take up approximately 3% of the mass lift capability, and could be assembled in approximately four hours. This assembly concept for erectable structures is not only feasible but could be used to significant economic advantage by permitting the superior packaging of erectable structures to be exploited, thereby reducing expensive Shuttle delivery flights.
Space plasma simulation chamber
International Nuclear Information System (INIS)
1986-01-01
Scientific results of experiments and tests of instruments performed with the Space Plasma Simulation Chamber and its facility are reviewed in the following six categories: 1. Tests of instruments on board rockets, satellites and balloons. 2. Plasma wave experiments. 3. Measurements of plasma particles. 4. Optical measurements. 5. Plasma production. 6. Space plasma simulations. This facility has been managed under the Laboratory Space Plasma Committee since 1969 and used by scientists in cooperative programs with universities and institutes all over the country. A list of publications is attached. (author)
Cannon, R. H., Jr.; Alexander, H.
1985-01-01
A Space Robot Simulator Vehicle (SRSV) was constructed to model a free-flying robot capable of doing construction, manipulation and repair work in space. The SRSV is intended as a test bed for development of dynamic and static control methods for space robots. The vehicle is built around a two-foot-diameter air-cushion vehicle that carries batteries, power supplies, gas tanks, computer, reaction jets and radio equipment. It is fitted with one or two two-link manipulators, which may be of many possible designs, including flexible-link versions. Both the vehicle body and its first arm are nearly complete. Inverse dynamic control of the robot's manipulator has been successfully simulated using equations generated by the dynamic simulation package SDEXACT. In this mode, the position of the manipulator tip is controlled not by fixing the vehicle base through thruster operation, but by controlling the manipulator joint torques to achieve the desired tip motion, while allowing for the free motion of the vehicle base. One of the primary goals is to minimize use of the thrusters in favor of intelligent control of the manipulator. Ways to reduce the computational burden of control are described.
LARGE BUILDING HVAC SIMULATION
The report discusses the monitoring and collection of data relating to indoor pressures and radon concentrations under several test conditions in a large school building in Bartow, Florida. The Florida Solar Energy Center (FSEC) used an integrated computational software, FSEC 3.0...
International Nuclear Information System (INIS)
Friedman, A.; Grote, D.P.
1996-10-01
Under conditions which arise commonly in space-charge-dominated beam applications, the applied focusing, bending, and accelerating fields vary rapidly with axial position, while the self-fields (which are, on average, comparable in strength to the applied fields) vary smoothly. In such cases it is desirable to employ timesteps which advance the particles over distances greater than the characteristic scales over which the applied fields vary. Several related concepts are potentially applicable: sub-cycling of the particle advance relative to the field solution, a higher-order time-advance algorithm, force-averaging by integration along approximate orbits, and orbit-averaging. We report on our investigations into the utility of such techniques for systems typical of those encountered in accelerator studies for heavy-ion beam-driven inertial fusion
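The sub-cycling idea described above can be illustrated with a toy model (a sketch under assumptions, not the authors' accelerator code): the rapidly varying applied field is re-sampled on every particle substep, while the smooth self-field is frozen over the full field step, so the fast field is resolved without extra field solves. All field shapes and names here are hypothetical, with q/m = 1:

```python
import math

def push_subcycled(x, v, t, dt, nsub, e_applied, e_self):
    """Advance (x, v) over one field step dt using nsub particle substeps.

    e_applied(x, t) varies rapidly and is re-evaluated every substep;
    e_self(x) is smooth and is held fixed over the whole step, mimicking
    a self-field solve performed once per dt.
    """
    es = e_self(x)  # self-field frozen for this field step
    h = dt / nsub
    for _ in range(nsub):
        v += h * (e_applied(x, t) + es)  # kick
        x += h * v                       # drift (symplectic Euler)
        t += h
    return x, v, t

def run(nsub, nsteps=10, dt=0.1):
    """Integrate a test particle through a rapidly oscillating applied field."""
    e_applied = lambda x, t: math.cos(50.0 * t)  # fast applied focusing field (toy)
    e_self = lambda x: -0.1 * x                  # smooth self-field (toy)
    x, v, t = 0.0, 1.0, 0.0
    for _ in range(nsteps):
        x, v, t = push_subcycled(x, v, t, dt, nsub, e_applied, e_self)
    return x

coarse = run(nsub=1)        # applied field badly aliased
fine = run(nsub=100)        # sub-cycled: fast field resolved, still 10 field solves
reference = run(nsub=10000)
```

With one substep per field solve the 50 rad/s applied field is aliased; with 100 substeps the orbit approaches the reference while the number of (expensive) field solves stays at ten.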
Directory of Open Access Journals (Sweden)
Jianguang Yue
2018-01-01
In a large spatial structure, the important members are normally of a special type and are key to the safety of the global structure. Because it is difficult for common test methods to realize the complex spatial loading state of a local member, a local-fine finite element model was proposed and a large-space vertical hybrid structure was numerically simulated. The seismic responses of the global structure and of the Y-type S-SRC column were analyzed under El Centro seismic motions with peak accelerations of 35 gal and 220 gal. The numerical model was verified against the results of a seismic shaking-table test of the structure model. The failure mechanism and stiffness damage evolution of the Y-type S-SRC column were analyzed. The calculated results agreed well with the test results, indicating that the local-fine FEM can reflect the mechanical details of local members in a large spatial structure.
Indoor Climate of Large Glazed Spaces
DEFF Research Database (Denmark)
Hendriksen, Ole Juhl; Madsen, Christina E.; Heiselberg, Per
In recent years large glazed spaces have found increased use, both in connection with renovation of buildings and as part of new buildings. One of the objectives is to add an architectural element which combines indoor and outdoor climate. In order to obtain a satisfying indoor climate it is crucial at the design stage to be able to predict the performance regarding thermal comfort and energy consumption. This paper focuses on the practical implementation of Computational Fluid Dynamics (CFD) and the relation to other simulation tools regarding indoor climate.
Large Eddy Simulation of turbulence
International Nuclear Information System (INIS)
Poullet, P.; Sancandi, M.
1994-12-01
Results of Large Eddy Simulation of 3D isotropic homogeneous turbulent flows are presented. A computer code developed on the Connection Machine (CM5) allowed comparison of two turbulent-viscosity models (Smagorinsky and structure function). The influence of the numerical scheme on the energy density spectrum is also studied. [fr]
Large Space Structures Fielding Plan
1991-01-01
… support/safety measures in space will interface. Although these features can be developed to some degree as stated objectives, many must be designed from … continuity. 7. Check system for mechanical continuity. 8. Verify LSS assembly continuity. B. Productivity Measurements: 1. Note duration of assembly activities.
Laboratory simulation of space plasma phenomena*
Amatucci, B.; Tejero, E. M.; Ganguli, G.; Blackwell, D.; Enloe, C. L.; Gillman, E.; Walker, D.; Gatling, G.
2017-12-01
Laboratory devices, such as the Naval Research Laboratory's Space Physics Simulation Chamber, are large-scale experiments dedicated to the creation of large-volume plasmas with parameters realistically scaled to those found in various regions of the near-Earth space plasma environment. Such devices make valuable contributions to the understanding of space plasmas by investigating phenomena under carefully controlled, reproducible conditions, allowing for the validation of theoretical models being applied to space data. By working in collaboration with in situ experimentalists to create realistic conditions scaled to those found during the observations of interest, the microphysics responsible for the observed events can be investigated in detail not possible in space. To date, numerous investigations of phenomena such as plasma waves, wave-particle interactions, and particle energization have been successfully performed in the laboratory. In addition to investigations such as plasma wave and instability studies, the laboratory devices can also make valuable contributions to the development and testing of space plasma diagnostics. One example is the plasma impedance probe developed at NRL. Originally developed as a laboratory diagnostic, the sensor has now been flown on a sounding rocket, is included on a CubeSat experiment, and will be included on the DoD Space Test Program's STP-H6 experiment on the International Space Station. In this presentation, we will describe several examples of the laboratory investigation of space plasma waves and instabilities and diagnostic development. *This work supported by the NRL Base Program.
Effects of Turbine Spacings in Very Large Wind Farms
DEFF Research Database (Denmark)
LES simulations of large wind farms are performed with full aero-elastic Actuator Lines. The simulations investigate the inherent dynamics inside wind farms in the absence of atmospheric turbulence compared to cases with atmospheric turbulence. The resulting low-frequency structures are inherent in wind farms for certain turbine spacings and affect both power production and loads.
Environmental effects and large space systems
Garrett, H. B.
1981-01-01
When planning large-scale operations in space, environmental impact must be considered in addition to radiation, spacecraft charging, contamination, high power and size. Pollution of the atmosphere and space is caused by rocket effluents and by photoelectrons generated by sunlight falling on satellite surfaces; even light pollution may result (the SPS may reflect so much light as to be a nuisance to astronomers). Large (100 km²) structures also will absorb the high-energy particles that impinge on them. Altogether, these effects may drastically alter the Earth's magnetosphere. It is not clear whether these alterations will in any way affect the Earth's surface climate. Large structures will also generate large plasma wakes and waves which may cause interference with communications to the vehicle. A high-energy microwave beam from the SPS will cause ionospheric turbulence, affecting UHF and VHF communications. Although none of these effects may ultimately prove critical, they must be considered in the design of large structures.
Large-scale numerical simulations of plasmas
International Nuclear Information System (INIS)
Hamaguchi, Satoshi
2004-01-01
Recent trends in large-scale simulation of fusion plasmas and processing plasmas are briefly summarized. Many advanced simulation techniques have been developed for fusion plasmas, and some of these techniques are now applied to the analysis of processing plasmas. (author)
Modeling and simulation of large HVDC systems
Energy Technology Data Exchange (ETDEWEB)
Jin, H.; Sood, V.K.
1993-01-01
This paper addresses the complexity and amount of work involved in preparing simulation data and implementing various converter control schemes, and the excessive simulation time involved in modelling and simulation of large HVDC systems. The Power Electronic Circuit Analysis program (PECAN) is used to address these problems, and a large HVDC system with two dc links is simulated using PECAN. A benchmark HVDC system is studied to compare the simulation results with those from other packages. The simulation time and results are provided in the paper.
Vertical integration from the large Hilbert space
Erler, Theodore; Konopka, Sebastian
2017-12-01
We develop an alternative description of the procedure of vertical integration based on the observation that amplitudes can be written in BRST exact form in the large Hilbert space. We relate this approach to the description of vertical integration given by Sen and Witten.
Laboratory simulation of erosion by space plasma
International Nuclear Information System (INIS)
Kristoferson, L.; Fredga, K.
1976-04-01
A laboratory experiment has been made in which a plasma stream collides with targets made of different materials of cosmic interest. The experiment can be viewed as a process simulation of the solar wind particle interaction with solid surfaces in space, e.g. cometary dust. Special interest is given to sputtering of OH and Na. It is shown that the erosion of solid particles in interplanetary space is most likely dominated by sputtering at large heliocentric distances and by sublimation near the sun. The heliocentric distance of the limit between the two regions is determined mainly by the material properties of the eroded surface, e.g. heat of sublimation and sputtering yield, a typical distance being 0.5 AU. It is concluded that the observations of Na in comets at large solar distances, and in some cases also near the sun, are most likely explained by solar wind sputtering. OH emission in space could also be of importance from 'dry', water-free matter by means of molecule sputtering. The observed OH production rates in comets are however too large to be explained in this way and are certainly the result of sublimation and dissociation of H₂O from an icy nucleus. (Auth.)
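The crossover between sublimation-dominated erosion near the sun and sputtering-dominated erosion farther out can be illustrated with a toy calculation. Every constant below is invented for illustration (none is from the paper): solar-wind sputtering falls off as r⁻², while sublimation drops exponentially as the grain temperature falls with distance, so the two rates cross at a distance set by the material properties:

```python
import math

# Toy rate models; all constants are illustrative assumptions, not measured values.
T0 = 300.0      # grain temperature at 1 AU, K (assumed T ∝ r^-1/2)
L_SUB = 6000.0  # heat of sublimation expressed in kelvin (L/k_B), assumed
S0 = 1.0        # sputtering rate at 1 AU, arbitrary units
# Normalization chosen so the two rates cross exactly at 0.5 AU in this toy model:
A_SUB = S0 * 4.0 / math.exp(-L_SUB / (T0 / math.sqrt(0.5)))

def sputtering_rate(r_au):
    """Solar-wind sputtering: proportional to particle flux, ~ r^-2."""
    return S0 / r_au**2

def sublimation_rate(r_au):
    """Arrhenius-type sublimation at grain temperature T(r) = T0 / sqrt(r)."""
    temp = T0 / math.sqrt(r_au)
    return A_SUB * math.exp(-L_SUB / temp)

def crossover_distance(lo=0.1, hi=5.0, iters=60):
    """Bisect for the distance where the sublimation and sputtering rates are equal."""
    f = lambda r: sublimation_rate(r) - sputtering_rate(r)
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

With these made-up numbers the rates cross at 0.5 AU; in the paper the real crossover is set by the measured heat of sublimation and sputtering yield of each material.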
Distributed simulation of large computer systems
International Nuclear Information System (INIS)
Marzolla, M.
2001-01-01
Sequential simulation of large complex physical systems is often regarded as a computationally expensive task. In order to speed-up complex discrete-event simulations, the paradigm of Parallel and Distributed Discrete Event Simulation (PDES) has been introduced since the late 70s. The authors analyze the applicability of PDES to the modeling and analysis of large computer system; such systems are increasingly common in the area of High Energy and Nuclear Physics, because many modern experiments make use of large 'compute farms'. Some feasibility tests have been performed on a prototype distributed simulator
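For context (a sketch, not the authors' prototype), the sequential discrete-event simulation that PDES parallelizes is essentially a loop over a timestamp-ordered event queue; PDES partitions this queue across logical processes and adds synchronization to preserve global timestamp order:

```python
import heapq

class Simulator:
    """Minimal sequential discrete-event simulator (illustrative sketch)."""

    def __init__(self):
        self.clock = 0.0
        self.queue = []  # heap of (timestamp, sequence, handler, payload)
        self.seq = 0     # tie-breaker so the heap never compares handlers

    def schedule(self, delay, handler, payload=None):
        heapq.heappush(self.queue, (self.clock + delay, self.seq, handler, payload))
        self.seq += 1

    def run(self):
        processed = []
        while self.queue:
            self.clock, _, handler, payload = heapq.heappop(self.queue)
            handler(self, payload)        # may schedule further events
            processed.append(self.clock)
        return processed

# Toy model of a compute farm node: a job arrives, is serviced for 2.5 time
# units, then departs (names and numbers are hypothetical).
def arrival(sim, job):
    sim.schedule(2.5, departure, job)

def departure(sim, job):
    pass

sim = Simulator()
sim.schedule(1.0, arrival, "job-1")
sim.schedule(4.0, arrival, "job-2")
times = sim.run()  # events fire strictly in timestamp order
```

The sequential bottleneck is visible here: there is one global clock and one queue, which is exactly what conservative or optimistic PDES protocols relax.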
Large eddy simulations of compressible magnetohydrodynamic turbulence
International Nuclear Information System (INIS)
Grete, Philipp
2016-01-01
subsonic (sonic Mach number Ms ∼ 0.2) to the highly supersonic (Ms ∼ 20) regime, and against other SGS closures. The latter include established closures of eddy-viscosity and scale-similarity type. In all tests and over the entire parameter space, we find that the proposed closures are (significantly) closer to the reference data than the other closures. In the a posteriori tests, we perform large eddy simulations of decaying, supersonic MHD turbulence with initial Ms ∼ 3. We implemented closures of all types, i.e. of eddy-viscosity, scale-similarity and nonlinear type, as an SGS model and evaluated their performance in comparison to simulations without a model (and at higher resolution). We find that the models need to be calculated on a scale larger than the grid scale, e.g. by an explicit filter, to have an influence on the dynamics at all. Furthermore, we show that only the proposed nonlinear closure improves higher-order statistics.
Large eddy simulations of compressible magnetohydrodynamic turbulence
Grete, Philipp
2017-02-01
subsonic (sonic Mach number Ms ≈ 0.2) to the highly supersonic (Ms ≈ 20) regime, and against other SGS closures. The latter include established closures of eddy-viscosity and scale-similarity type. In all tests and over the entire parameter space, we find that the proposed closures are (significantly) closer to the reference data than the other closures. In the a posteriori tests, we perform large eddy simulations of decaying, supersonic MHD turbulence with initial Ms ≈ 3. We implemented closures of all types, i.e. of eddy-viscosity, scale-similarity and nonlinear type, as an SGS model and evaluated their performance in comparison to simulations without a model (and at higher resolution). We find that the models need to be calculated on a scale larger than the grid scale, e.g. by an explicit filter, to have an influence on the dynamics at all. Furthermore, we show that only the proposed nonlinear closure improves higher-order statistics.
Large eddy simulations of compressible magnetohydrodynamic turbulence
Energy Technology Data Exchange (ETDEWEB)
Grete, Philipp
2016-09-09
subsonic (sonic Mach number Ms ∼ 0.2) to the highly supersonic (Ms ∼ 20) regime, and against other SGS closures. The latter include established closures of eddy-viscosity and scale-similarity type. In all tests and over the entire parameter space, we find that the proposed closures are (significantly) closer to the reference data than the other closures. In the a posteriori tests, we perform large eddy simulations of decaying, supersonic MHD turbulence with initial Ms ∼ 3. We implemented closures of all types, i.e. of eddy-viscosity, scale-similarity and nonlinear type, as an SGS model and evaluated their performance in comparison to simulations without a model (and at higher resolution). We find that the models need to be calculated on a scale larger than the grid scale, e.g. by an explicit filter, to have an influence on the dynamics at all. Furthermore, we show that only the proposed nonlinear closure improves higher-order statistics.
Dynamic large eddy simulation: Stability via realizability
Mokhtarpoor, Reza; Heinz, Stefan
2017-10-01
The concept of dynamic large eddy simulation (LES) is highly attractive: such methods can dynamically adjust to changing flow conditions, which is known to be highly beneficial. For example, this avoids the use of empirical, case-dependent approximations (like damping functions). Ideally, dynamic LES should be local in physical space (without involving artificial clipping parameters), and it should be stable for a wide range of simulation time steps, Reynolds numbers, and numerical schemes. These properties are not trivial, and dynamic LES has suffered from such problems for decades. We address these questions by performing dynamic LES of periodic hill flow, including separation, at a high Reynolds number Re = 37 000. For the case considered, the main result of our studies is that it is possible to design LES that has the desired properties. It requires physical consistency: a PDF-realizable and stress-realizable LES model, which requires the inclusion of the turbulent kinetic energy in the LES calculation. LES models that do not honor such physical consistency can become unstable. We do not find support for the previous assumption that long-term correlations of negative dynamic model parameters are responsible for instability. Instead, we conclude that instability is caused by the stable spatial organization of significant unphysical states, which are represented by wall-type gradient streaks of the standard deviation of the dynamic model parameter. The applicability of our realizability stabilization to other dynamic models (including the dynamic Smagorinsky model) is discussed.
Learning from large scale neural simulations
DEFF Research Database (Denmark)
Serban, Maria
2017-01-01
Large-scale neural simulations have the marks of a distinct methodology which can be fruitfully deployed to advance scientific understanding of the human brain. Computer simulation studies can be used to produce surrogate observational data for better conceptual models and new how...
Large Eddy Simulations using oodlesDST
2016-01-01
… Research Agency, DST-Group-TR-3205. The oodlesDST code is based on OpenFOAM software and performs Large Eddy Simulations of … maritime platforms using a variety of simulation techniques … using OpenFOAM software to perform both Reynolds-Averaged Navier-Stokes …
Large space antenna concepts for ESGP
Love, Allan W.
1989-01-01
It is appropriate to note that 1988 marks the 100th anniversary of the birth of the reflector antenna. It was in 1888 that Heinrich Hertz constructed the first one, a parabolic cylinder made of sheet zinc bent to shape and supported by a wooden frame. Hertz demonstrated the existence of the electromagnetic waves that had been predicted theoretically by James Clerk Maxwell some 22 years earlier. In the 100 years since Hertz's pioneering work the field of electromagnetics has grown explosively; one of its technologies is remote sensing of planet Earth by means of electromagnetic waves, using both passive and active sensors located on an Earth Science Geostationary Platform (ESGP). For these purposes some exquisitely sensitive instruments were developed, capable of reaching to the fringes of the known universe, and relying on large reflector antennas to collect the minute signals and direct them to appropriate receiving devices. These antennas are electrically large, with diameters of 3000 to 10,000 wavelengths and with gains approaching 80 to 90 dB. Some of the reflector antennas proposed for ESGP are also electrically large. For example, at 220 GHz a 4-meter reflector is nearly 3000 wavelengths in diameter, and is electrically quite comparable with a number of the millimeter-wave radiotelescopes being built around the world. Its surface must meet stringent requirements on rms smoothness and ability to resist deformation. Here, however, the environmental forces at work are different. There are no varying forces due to wind and gravity, but inertial forces due to mechanical scanning must be reckoned with. With this form of beam scanning, minimizing momentum transfer to the space platform is a problem that demands an answer. Finally, reflector surface distortion due to thermal gradients caused by the solar flux probably represents the most challenging problem to be solved if these Large Space Antennas are to achieve the gain and resolution required of
Large eddy simulation of bundle turbulent flows
International Nuclear Information System (INIS)
Hassan, Y.A.; Barsamian, H.R.
1995-01-01
Large eddy simulation may be defined as simulation of a turbulent flow in which the large scale motions are explicitly resolved while the small scale motions are modeled. This results in a system of equations that requires closure models, which relate the effects of the small scale motions to the large scale motions. Several such models have been developed, the most popular being the Smagorinsky eddy viscosity model. A new model that modifies the Smagorinsky model was recently introduced by Lee. Using both of the above-mentioned closure models, two different geometric arrangements were used in the simulation of turbulent cross flow within rigid tube bundles. An in-line array simulation was performed for a deep bundle (10,816 nodes) as well as an inlet/outlet simulation (57,600 nodes). Comparisons were made to available experimental data. Flow visualization enabled the distinction of different characteristics within the flow, such as jet switching effects in the wake of the bundle flow for the inlet/outlet simulation case, as well as within tube bundles. The results indicate that the large eddy simulation technique is capable of turbulence prediction and may be used as a viable engineering tool with careful consideration of the subgrid scale model. (author)
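The Smagorinsky closure mentioned above models the unresolved stresses through an eddy viscosity proportional to the local strain-rate magnitude. A minimal sketch, assuming the common model constant cs = 0.1 and an illustrative 2-D velocity-gradient input (not the authors' code):

```python
import numpy as np

def smagorinsky_nu_t(dudx, dudy, dvdx, dvdy, delta, cs=0.1):
    """Subgrid eddy viscosity nu_t = (cs*delta)^2 * |S| for a 2-D velocity
    gradient, where |S| = sqrt(2 S_ij S_ij) is the strain-rate magnitude
    and delta is the filter width."""
    s11 = dudx
    s22 = dvdy
    s12 = 0.5 * (dudy + dvdx)          # symmetric off-diagonal strain component
    s_mag = np.sqrt(2.0 * (s11**2 + s22**2 + 2.0 * s12**2))
    return (cs * delta) ** 2 * s_mag
```

For a pure shear field (du/dy = 1) on a unit filter width this gives nu_t = (0.1)^2 = 0.01.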
Optimal control of large space structures via generalized inverse matrix
Nguyen, Charles C.; Fang, Xiaowen
1987-01-01
Independent Modal Space Control (IMSC) is a control scheme that decouples the space structure into n independent second-order subsystems according to n controlled modes and controls each mode independently. It is well known that IMSC eliminates the control and observation spillover caused when the conventional coupled modal control scheme is employed. The independent control of each mode requires that the number of actuators equal the number of modeled modes, which is very high for a faithful modeling of large space structures. A control scheme is proposed that allows one to use a reduced number of actuators to control all modeled modes suboptimally. In particular, the method of generalized inverse matrices is employed to implement the actuators such that the eigenvalues of the closed-loop system are as close as possible to those specified by the optimal IMSC. A computer simulation of the proposed control scheme on a simply supported beam is given.
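The generalized-inverse step can be illustrated in a few lines: given a modal participation matrix B mapping m actuator forces to n > m modal forces, the Moore-Penrose pseudoinverse yields the actuator commands whose modal effect is closest, in the least-squares sense, to what the optimal IMSC would demand. The matrix values below are hypothetical, purely for illustration:

```python
import numpy as np

def reduced_actuator_forces(B, u_desired):
    """Least-squares actuator forces f minimizing ||B f - u_desired||, where
    B (n_modes x n_actuators) maps actuator forces to modal forces."""
    return np.linalg.pinv(B) @ u_desired

# Hypothetical 4-mode model controlled by only 2 actuators.
B = np.array([[1.0, 0.2],
              [0.5, 1.0],
              [0.3, 0.7],
              [0.8, 0.1]])
u_imsc = np.array([1.0, -0.5, 0.2, 0.4])  # modal forces the optimal IMSC would command
f = reduced_actuator_forces(B, u_imsc)    # 2 actuator forces, suboptimal modal control
```

With fewer actuators than modes the commanded modal forces cannot be met exactly; the pseudoinverse picks the best achievable compromise.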
Numerical simulation of large deformation polycrystalline plasticity
International Nuclear Information System (INIS)
Inal, K.; Neale, K.W.; Wu, P.D.; MacEwen, S.R.
2000-01-01
A finite element model based on crystal plasticity has been developed to simulate the stress-strain response of sheet metal specimens in uniaxial tension. Each material point in the sheet is considered to be a polycrystalline aggregate of FCC grains, and the Taylor theory of crystal plasticity is assumed. The numerical analysis incorporates parallel computing features enabling simulations of realistic models with a large number of grains. Simulations have been carried out for the AA3004-H19 aluminium alloy and the results are compared with experimental data. (author)
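Under the Taylor assumption each grain sees the aggregate deformation, and slip activity in an FCC grain is governed by the resolved shear stress on each slip system (Schmid's law). A small sketch of that kernel, with an arbitrarily chosen stress state and slip system for illustration:

```python
import numpy as np

def resolved_shear_stress(sigma, s, m):
    """Schmid resolved shear stress tau = sigma : sym(s (x) m) for a slip
    system with unit slip direction s and unit slip-plane normal m."""
    schmid = 0.5 * (np.outer(s, m) + np.outer(m, s))  # symmetric Schmid tensor
    return float(np.tensordot(sigma, schmid))          # double contraction

# Uniaxial tension along x acting on the FCC system (111)[1-10]:
sigma = np.diag([1.0, 0.0, 0.0])
m = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)
s = np.array([1.0, -1.0, 0.0]) / np.sqrt(2.0)
tau = resolved_shear_stress(sigma, s, m)  # Schmid factor 1/sqrt(6) for this system
```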
Regularization modeling for large-eddy simulation
Geurts, Bernardus J.; Holm, D.D.
2003-01-01
A new modeling approach for large-eddy simulation (LES) is obtained by combining a "regularization principle" with an explicit filter and its inversion. This regularization approach allows a systematic derivation of the implied subgrid model, which resolves the closure problem. The central role of ...
Time simulation of flutter with large stiffness changes
Karpel, Mordechay; Wieseman, Carol D.
1992-01-01
Time simulation of flutter, involving large local structural changes, is formulated with a state-space model that is based on a relatively small number of generalized coordinates. Free-free vibration modes are first calculated for a nominal finite-element model with relatively large fictitious masses located at the area of structural changes. A low-frequency subset of these modes is then transformed into a set of structural modal coordinates with which the entire simulation is performed. These generalized coordinates and the associated oscillatory aerodynamic force coefficient matrices are used to construct an efficient time-domain, state-space model for a basic aeroelastic case. The time simulation can then be performed by simply changing the mass, stiffness, and damping coupling terms when structural changes occur. It is shown that the size of the aeroelastic model required for time simulation with large structural changes at a few a priori known locations is similar to that required for direct analysis of a single structural case. The method is applied to the simulation of an aeroelastic wind-tunnel model. The diverging oscillations are followed by the activation of a tip-ballast decoupling mechanism that stabilizes the system but may cause significant transient overshoots.
Remapping simulated halo catalogues in redshift space
Mead, Alexander; Peacock, John
2014-01-01
We discuss the extension to redshift space of a rescaling algorithm, designed to alter the effective cosmology of a pre-existing simulated particle distribution or catalogue of dark matter haloes. The rescaling approach was initially developed by Angulo & White and was adapted and applied to halo catalogues in real space in our previous work. This algorithm requires no information other than the initial and target cosmological parameters, and it contains no tuned parameters. It is shown here ...
On the possibility of large axion moduli spaces
Energy Technology Data Exchange (ETDEWEB)
Rudelius, Tom [Jefferson Physical Laboratory, Harvard University,Cambridge, MA 02138 (United States)
2015-04-28
We study the diameters of axion moduli spaces, focusing primarily on type IIB compactifications on Calabi-Yau three-folds. In this case, we derive a stringent bound on the diameter in the large volume region of parameter space for Calabi-Yaus with simplicial Kähler cone. This bound can be violated by Calabi-Yaus with non-simplicial Kähler cones, but additional contributions are introduced to the effective action which can restrict the field range accessible to the axions. We perform a statistical analysis of simulated moduli spaces, finding in all cases that these additional contributions restrict the diameter so that these moduli spaces are no more likely to yield successful inflation than those with simplicial Kähler cone or with far fewer axions. Further heuristic arguments for axions in other corners of the duality web suggest that the difficulty observed in http://dx.doi.org/10.1088/1475-7516/2003/06/001 of finding an axion decay constant parametrically larger than M_p applies not only to individual axions, but to the diagonals of axion moduli space as well. This observation is shown to follow from the weak gravity conjecture of http://dx.doi.org/10.1088/1126-6708/2007/06/060, so it likely applies not only to axions in string theory, but also to axions in any consistent theory of quantum gravity.
Direct and large-eddy simulation IX
Kuerten, Hans; Geurts, Bernard; Armenio, Vincenzo
2015-01-01
This volume reflects the state of the art of numerical simulation of transitional and turbulent flows and provides an active forum for discussion of recent developments in simulation techniques and understanding of flow physics. Following the tradition of earlier DLES workshops, these papers address numerous theoretical and physical aspects of transitional and turbulent flows. At an applied level it contributes to the solution of problems related to energy production, transportation, magneto-hydrodynamics and the environment. A special session is devoted to quality issues of LES. The ninth Workshop on 'Direct and Large-Eddy Simulation' (DLES-9) was held in Dresden, April 3-5, 2013, organized by the Institute of Fluid Mechanics at Technische Universität Dresden. This book is of interest to scientists and engineers, both at an early level in their career and at more senior levels.
Field simulations for large dipole magnets
International Nuclear Information System (INIS)
Lazzaro, A.; Cappuzzello, F.; Cunsolo, A.; Cavallaro, M.; Foti, A.; Khouaja, A.; Orrigo, S.E.A.; Winfield, J.S.
2007-01-01
The problem of describing the magnetic field of large bending magnets is addressed in relation to the requirements of modern techniques of trajectory reconstruction. The crucial question of the interpolation and extrapolation of fields known at a discrete number of points is analysed. For this purpose a realistic field model of the large dipole of the MAGNEX spectrometer, obtained with three-dimensional finite-element simulations, is used. The influence of uncertainties in the measured field on the quality of the trajectory reconstruction is treated in detail. General constraints for field measurements in terms of required resolutions, step sizes and precisions are thus extracted.
Large data management and systematization of simulation
International Nuclear Information System (INIS)
Ueshima, Yutaka; Saitho, Kanji; Koga, James; Isogai, Kentaro
2004-01-01
In advanced photon research, large-scale simulations are powerful tools. In numerical experiments, real-time visualization and steering systems are considered promising methods of data analysis. This approach is valid for routine analyses performed once or for short-cycle simulations. In research on an unknown problem, however, the output data must be analyzable many times, because a profitable analysis is rarely achieved on the first attempt. Consequently, output data should be archived so that they can be referenced and analyzed at any time. To support such research, the following automatic functions are needed: transporting data files from the data generator to data storage, analyzing data, tracking the history of data handling, and so on. The Large Data Management system will be a functional, distributed Problem Solving Environment system. (author)
Large Eddy Simulation for Compressible Flows
Garnier, E; Sagaut, P
2009-01-01
Large Eddy Simulation (LES) of compressible flows is still a widely unexplored area of research. The authors, whose books are considered the most relevant monographs in this field, provide the reader with a comprehensive state-of-the-art presentation of the available LES theory and application. This book is a sequel to "Large Eddy Simulation for Incompressible Flows", as most of the research on LES for compressible flows is based on variable density extensions of models, methods and paradigms that were developed within the incompressible flow framework. The book addresses both the fundamentals and the practical industrial applications of LES in order to point out gaps in the theoretical framework as well as to bridge the gap between LES research and the growing need to use it in engineering modeling. After introducing the fundamentals on compressible turbulence and the LES governing equations, the mathematical framework for the filtering paradigm of LES for compressible flow equations is established. Instead ...
1984-01-01
The large space structures technology development missions to be performed on an early manned space station were studied and defined, and the resources needed and the design implications for an early space station to carry out these missions were determined. Emphasis is placed on more detail in mission designs and space station resource requirements.
EFT of large scale structures in redshift space
Lewandowski, Matthew; Senatore, Leonardo; Prada, Francisco; Zhao, Cheng; Chuang, Chia-Hsun
2018-03-01
We further develop the description of redshift-space distortions within the effective field theory of large scale structures. First, we generalize the counterterms to include the effect of baryonic physics and primordial non-Gaussianity. Second, we evaluate the IR resummation of the dark matter power spectrum in redshift space. This requires us to identify a controlled approximation that makes the numerical evaluation straightforward and efficient. Third, we compare the predictions of the theory at one loop with the power spectrum from numerical simulations up to ℓ = 6. We find that the IR resummation allows us to correctly reproduce the baryon acoustic oscillation peak. The k reach (or, equivalently, the precision for a given k) depends on additional counterterms that need to be matched to simulations. Since the nonlinear scale for the velocity is expected to be longer than the one for the overdensity, we consider a minimal and a nonminimal set of counterterms. The quality of our numerical data makes it hard to firmly establish the performance of the theory at high wave numbers. Within this limitation, we find that the theory at redshift z = 0.56 and up to ℓ = 2 matches the data at the percent level approximately up to k ∼ 0.13 h Mpc⁻¹ or k ∼ 0.18 h Mpc⁻¹, depending on the number of counterterms used, with a potentially large improvement over former analytical techniques.
Large scale particle simulations in a virtual memory computer
International Nuclear Information System (INIS)
Gray, P.C.; Million, R.; Wagner, J.S.; Tajima, T.
1983-01-01
Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random access to slow memory, increase the efficiency of the I/O system, and hence reduce the required computing time. (orig.)
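The locality idea above can be sketched directly: bin particles by grid cell and reorder the arrays so that cell-neighbours are also memory-neighbours. This is a generic illustration of the technique, not the authors' code:

```python
import numpy as np

def sort_particles_by_cell(x, y, cell_size, nx):
    """Reorder particle coordinate arrays so particles sharing a grid cell are
    contiguous in memory; sequential passes (e.g. charge accumulation) then
    touch memory pages in order instead of at random."""
    ix = (x / cell_size).astype(int)
    iy = (y / cell_size).astype(int)
    cell = iy * nx + ix                       # flattened cell index
    order = np.argsort(cell, kind="stable")   # stable sort keeps in-cell order
    return x[order], y[order]
```

Only a "nominal amount" of sorting is needed because particles move at most a cell or two per step, so the arrays stay nearly sorted between passes.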
Precision Optical Coatings for Large Space Telescope Mirrors
Sheikh, David
This proposal “Precision Optical Coatings for Large Space Telescope Mirrors” addresses the need to develop and advance the state-of-the-art in optical coating technology. NASA is considering large monolithic mirrors 1 to 8 meters in diameter for future telescopes such as HabEx and LUVOIR. Improved large-area coating processes are needed to meet the future requirements of large astronomical mirrors. In this project, we will demonstrate a broadband reflective coating process for achieving high reflectivity from 90-nm to 2500-nm over a 2.3-meter diameter coating area. The coating process is scalable to larger mirrors, 6+ meters in diameter. We will use a battery-driven coating process to make an aluminum reflector, and a motion-controlled coating technology for depositing protective layers. We will advance the state-of-the-art for coating technology and manufacturing infrastructure to meet the reflectance and wavefront requirements of both HabEx and LUVOIR. Specifically, we will combine the broadband reflective coating designs and processes developed at GSFC and JPL with large-area manufacturing technologies developed at ZeCoat Corporation. Our primary objectives are to: (1) demonstrate an aluminum coating process to create uniform coatings over large areas with near-theoretical aluminum reflectance; (2) demonstrate a motion-controlled coating process to apply very precise 2-nm to 5-nm thick protective/interference layers to large areas; and (3) demonstrate a broadband coating system (90-nm to 2500-nm) over a 2.3-meter coating area and test it against the current coating specifications for LUVOIR/HabEx. We will perform simulated space-environment testing, and we expect to advance the TRL from 3 to >5 in 3 years.
Navigation simulator for the Space Tug vehicle
Colburn, B. K.; Boland, J. S., III; Peters, E. G.
1977-01-01
A general simulation program (GSP) for state estimation of a nonlinear space vehicle flight navigation system is developed and used as a basis for evaluating the performance of a Space Tug navigation system. An explanation of the iterative guidance mode (IGM) guidance law, derivation of the dynamics, coordinate frames and state estimation routines are given in order to clarify the assumptions and approximations made. A number of simulation and analytical studies are used to demonstrate the operation of the Tug system. Included in the simulation studies are (1) initial offset vector parameter study; (2) propagation time vs accuracy; (3) measurement noise parametric study and (4) reduction in computational burden of an on-board implementable scheme. From the results of these studies, conclusions and recommendations concerning future areas of practical and theoretical work are presented.
Large eddy simulation of hydrodynamic cavitation
Bhatt, Mrugank; Mahesh, Krishnan
2017-11-01
Large eddy simulation is used to study sheet to cloud cavitation over a wedge. The mixture of water and water vapor is represented using a homogeneous mixture model. The compressible Navier-Stokes equations for mixture quantities, along with a transport equation for the vapor mass fraction employing finite rate mass transfer between the two phases, are solved using the numerical method of Gnanaskandan and Mahesh. The method is implemented on unstructured grids with parallel MPI capabilities. Flow over a wedge is simulated at Re = 200,000 and the performance of the homogeneous mixture model is analyzed in predicting different regimes of sheet to cloud cavitation, namely incipient, transitory and periodic, as observed in the experimental investigation of Harish et al. This work is supported by the Office of Naval Research.
Large-eddy simulation of contrails
Energy Technology Data Exchange (ETDEWEB)
Chlond, A [Max-Planck-Inst. fuer Meteorologie, Hamburg (Germany)
1998-12-31
A large eddy simulation (LES) model has been used to investigate the role of various external parameters and physical processes in the life-cycle of contrails. The model is applied to conditions that are typical for those under which contrails could be observed, i.e. in an atmosphere which is supersaturated with respect to ice and at a temperature of approximately 230 K or colder. The sensitivity runs indicate that the contrail evolution is controlled primarily by humidity, temperature and static stability of the ambient air and secondarily by the baroclinicity of the atmosphere. Moreover, it turns out that the initial ice particle concentration and radiative processes are of minor importance in the evolution of contrails at least during the 30 minutes simulation period. (author) 9 refs.
Large eddy simulation of breaking waves
DEFF Research Database (Denmark)
Christensen, Erik Damgaard; Deigaard, Rolf
2001-01-01
A numerical model is used to simulate wave breaking, the large scale water motions and turbulence induced by the breaking process. The model consists of a free surface model using the surface markers method combined with a three-dimensional model that solves the flow equations. The turbulence ... The incoming waves are specified by a flux boundary condition. The waves are approaching in the shore-normal direction and are breaking on a plane, constant slope beach. The first few wave periods are simulated by a two-dimensional model in the vertical plane normal to the beach line. The model describes ... the steepening and the overturning of the wave. At a given instant, the model domain is extended to three dimensions, and the two-dimensional flow field develops spontaneously three-dimensional flow features with turbulent eddies. After a few wave periods, stationary (periodic) conditions are achieved ...
Environmental Disturbance Modeling for Large Inflatable Space Structures
National Research Council Canada - National Science Library
Davis, Donald
2001-01-01
Tightening space budgets and stagnating spacelift capabilities are driving the Air Force and other space agencies to focus on inflatable technology as a reliable, inexpensive means of deploying large structures in orbit...
Large Atmospheric Computation on the Earth Simulator: The LACES Project
Directory of Open Access Journals (Sweden)
Michel Desgagné
2006-01-01
The Large Atmospheric Computation on the Earth Simulator (LACES) project is a joint initiative between Canadian and Japanese meteorological services and academic institutions that focuses on the high resolution simulation of Hurricane Earl (1998). The unique aspect of this effort is the extent of the computational domain, which covers all of North America and Europe with a grid spacing of 1 km. The Canadian Mesoscale Compressible Community (MC2) model is shown to parallelize effectively on the Japanese Earth Simulator (ES) supercomputer; however, even using the extensive computing resources of the ES Center (ESC), the full simulation for the majority of Hurricane Earl's lifecycle takes over eight days to perform and produces over 5.2 TB of raw data. Preliminary diagnostics show that the results of the LACES simulation for the tropical stage of Hurricane Earl's lifecycle compare well with available observations for the storm. Further studies involving advanced diagnostics have commenced, taking advantage of the uniquely large spatial extent of the high resolution LACES simulation to investigate multiscale interactions in the hurricane and its environment. It is hoped that these studies will enhance our understanding of processes occurring within the hurricane and between the hurricane and its planetary-scale environment.
Modeling, Analysis, and Optimization Issues for Large Space Structures
Pinson, L. D. (Compiler); Amos, A. K. (Compiler); Venkayya, V. B. (Compiler)
1983-01-01
Topics concerning the modeling, analysis, and optimization of large space structures are discussed including structure-control interaction, structural and structural dynamics modeling, thermal analysis, testing, and design.
Exploring a Large Space of Small Games
DEFF Research Database (Denmark)
Barros, Gabriella; Togelius, Julian
We explore the soundness and playability of randomly generated games expressed in the Video Game Description Language (VGDL). A grammar is defined for VGDL, which is able to express a large variety of simple arcade-like games, and random expansions of this grammar are fed to a VGDL interpreter ... and played with off-the-shelf agents. We see this work as the first step towards generating complete, playable games ...
Large eddy simulation of cavitating flows
Gnanaskandan, Aswin; Mahesh, Krishnan
2014-11-01
Large eddy simulation on unstructured grids is used to study hydrodynamic cavitation. The multiphase medium is represented using a homogeneous equilibrium model that assumes thermal equilibrium between the liquid and the vapor phase. Surface tension effects are ignored and the governing equations are the compressible Navier Stokes equations for the liquid/vapor mixture along with a transport equation for the vapor mass fraction. A characteristic-based filtering scheme is developed to handle shocks and material discontinuities in non-ideal gases and mixtures. A TVD filter is applied as a corrector step in a predictor-corrector approach with the predictor scheme being non-dissipative and symmetric. The method is validated for canonical one dimensional flows and leading edge cavitation over a hydrofoil, and applied to study sheet to cloud cavitation over a wedge. This work is supported by the Office of Naval Research.
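In a homogeneous-equilibrium model of this kind, the transported vapor mass fraction determines the local mixture density. A one-line sketch of that relation (the property values are illustrative, not taken from the paper):

```python
def mixture_density(y_v, rho_l=998.0, rho_v=0.017):
    """Homogeneous mixture density from the vapor mass fraction y_v:
    1/rho = y_v/rho_v + (1 - y_v)/rho_l, with illustrative water/vapor
    densities in kg/m^3 near room temperature."""
    return 1.0 / (y_v / rho_v + (1.0 - y_v) / rho_l)
```

Because the vapor is so much lighter than the liquid, even a tiny vapor mass fraction collapses the mixture density, which is why cavitating regions behave acoustically like a highly compressible medium.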
Scalable space-time adaptive simulation tools for computational electrocardiology
Krause, Dorian; Krause, Rolf
2013-01-01
This work is concerned with the development of computational tools for the solution of reaction-diffusion equations from the field of computational electrocardiology. We designed lightweight spatially and space-time adaptive schemes for large-scale parallel simulations. We propose two different adaptive schemes based on locally structured meshes, managed either via a conforming coarse tessellation or a forest of shallow trees. A crucial ingredient of our approach is a non-conforming mortar ...
Tool Support for Parametric Analysis of Large Software Simulation Systems
Schumann, Johann; Gundy-Burlet, Karen; Pasareanu, Corina; Menzies, Tim; Barrett, Tony
2008-01-01
The analysis of large and complex parameterized software systems, e.g., systems simulation in aerospace, is very complicated and time-consuming due to the large parameter space, and the complex, highly coupled nonlinear nature of the different system components. Thus, such systems are generally validated only in regions local to anticipated operating points rather than through characterization of the entire feasible operational envelope of the system. We have addressed the factors deterring such an analysis with a tool to support envelope assessment: we utilize a combination of advanced Monte Carlo generation with n-factor combinatorial parameter variations to limit the number of cases, but still explore important interactions in the parameter space in a systematic fashion. Additional test-cases, automatically generated from models (e.g., UML, Simulink, Stateflow) improve the coverage. The distributed test runs of the software system produce vast amounts of data, making manual analysis impossible. Our tool automatically analyzes the generated data through a combination of unsupervised Bayesian clustering techniques (AutoBayes) and supervised learning of critical parameter ranges using the treatment learner TAR3. The tool has been developed around the Trick simulation environment, which is widely used within NASA. We will present this tool with a GN&C (Guidance, Navigation and Control) simulation of a small satellite system.
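The n-factor combinatorial idea (here n = 2, i.e. pairwise coverage) can be sketched with a greedy cover over a toy parameter space. This illustrates the principle only; it is not the tool's actual algorithm:

```python
from itertools import combinations, product

def uncovered_pairs(params, cases):
    """Return the set of parameter-value pairs not yet exercised by `cases`."""
    keys = sorted(params)
    need = set()
    for k1, k2 in combinations(keys, 2):
        for v1, v2 in product(params[k1], params[k2]):
            need.add((k1, v1, k2, v2))
    for c in cases:
        for k1, k2 in combinations(keys, 2):
            need.discard((k1, c[k1], k2, c[k2]))
    return need

def greedy_pairwise(params):
    """Greedy 2-factor covering: repeatedly pick, from the full cross product,
    the case covering the most still-uncovered pairs (fine for small spaces)."""
    keys = sorted(params)
    all_cases = [dict(zip(keys, vs)) for vs in product(*(params[k] for k in keys))]
    chosen = []
    while uncovered_pairs(params, chosen):
        chosen.append(max(all_cases,
                          key=lambda c: len(uncovered_pairs(params, chosen))
                                        - len(uncovered_pairs(params, chosen + [c]))))
    return chosen
```

For three binary parameters the full cross product has 8 cases, while pairwise coverage needs only 4; the gap widens dramatically as the parameter space grows, which is the point of n-factor test generation.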
Large-eddy simulations for turbulent flows
International Nuclear Information System (INIS)
Husson, S.
2007-07-01
The aim of this work is to study the impact of thermal gradients on a turbulent channel flow with imposed wall temperatures and friction Reynolds numbers of 180 and 395. In this configuration, temperature variations can be strong and induce significant variations of the fluid properties. We consider the low Mach number equations and carry out large eddy simulations. We first validate our simulations thanks to comparisons of some of our LES results with DNS data. Then, we investigate the influence of the variations of the conductivity and the viscosity and show that we can assume these properties constant only for weak temperature gradients. We also study the thermal sub-grid-scale modelling and find no difference when the sub-grid-scale Prandtl number is taken constant or dynamically calculated. The analysis of the effects of strongly increasing the temperature ratio mainly shows a dissymmetry of the profiles. The physical mechanism responsible of these modifications is explained. Finally, we use semi-local scaling and the Van Driest transformation and we show that they lead to a better correspondence of the low and high temperature ratios profiles. (author)
Large-scale Intelligent Transportation Systems simulation
Energy Technology Data Exchange (ETDEWEB)
Ewing, T.; Canfield, T.; Hannebutte, U.; Levine, D.; Tentner, A.
1995-06-01
A prototype computer system has been developed which defines a high-level architecture for a large-scale, comprehensive, scalable simulation of an Intelligent Transportation System (ITS) capable of running on massively parallel computers and distributed (networked) computer systems. The prototype includes the modelling of instrumented "smart" vehicles with in-vehicle navigation units capable of optimal route planning, and Traffic Management Centers (TMC). The TMC has probe vehicle tracking capabilities (display position and attributes of instrumented vehicles), and can provide 2-way interaction with traffic to provide advisories and link times. Both the in-vehicle navigation module and the TMC feature detailed graphical user interfaces to support human-factors studies. The prototype has been developed on a distributed system of networked UNIX computers but is designed to run on ANL's IBM SP-X parallel computer system for large-scale problems. A novel feature of our design is that vehicles will be represented by autonomous computer processes, each with a behavior model which performs independent route selection and reacts to external traffic events much like real vehicles. With this approach, one will be able to take advantage of emerging massively parallel processor (MPP) systems.
Exploring the large-scale structure of Taylor–Couette turbulence through Large-Eddy Simulations
Ostilla-Mónico, Rodolfo; Zhu, Xiaojue; Verzicco, Roberto
2018-04-01
Large eddy simulations (LES) of Taylor-Couette (TC) flow, the flow between two co-axial and independently rotating cylinders, are performed in an attempt to explore the large-scale axially-pinned structures seen in experiments and simulations. Both static and dynamic LES models are used. The Reynolds number is kept fixed at Re = 3.4 · 10⁴, and the radius ratio η = ri/ro is set to η = 0.909, limiting the effects of curvature and resulting in frictional Reynolds numbers of around Reτ ≈ 500. Four rotation ratios from Rot = ‑0.0909 to Rot = 0.3 are simulated. First, the LES of TC flow is benchmarked for different rotation ratios. Both the Smagorinsky model with a constant of cs = 0.1 and the dynamic model are found to produce reasonable results for no mean rotation and cyclonic rotation, but deviations increase with increasing rotation. This is attributed to the increasingly anisotropic character of the fluctuations. Second, "over-damped" LES, i.e. LES with a large Smagorinsky constant, is performed and is shown to reproduce some features of the large-scale structures, even when the near-wall region is not adequately modeled. This shows the potential for using over-damped LES for fast explorations of the parameter space where large-scale structures are found.
Sensitivity technologies for large scale simulation
International Nuclear Information System (INIS)
Collis, Samuel Scott; Bartlett, Roscoe Ainsworth; Smith, Thomas Michael; Heinkenschloss, Matthias; Wilcox, Lucas C.; Hill, Judith C.; Ghattas, Omar; Berggren, Martin Olof; Akcelik, Volkan; Ober, Curtis Curry; van Bloemen Waanders, Bart Gustaaf; Keiter, Eric Richard
2005-01-01
Sensitivity analysis is critically important to numerous analysis algorithms, including large scale optimization, uncertainty quantification, reduced order modeling, and error estimation. Our research focused on developing tools, algorithms and standard interfaces to facilitate the implementation of sensitivity-type analysis into existing codes and, equally important, on ways to increase the visibility of sensitivity analysis. We attempt to accomplish the first objective through the development of hybrid automatic differentiation tools, standard linear algebra interfaces for numerical algorithms, time domain decomposition algorithms and two-level Newton methods. We attempt to accomplish the second goal by presenting the results of several case studies in which direct sensitivities and adjoint methods have been effectively applied, in addition to an investigation of h-p adaptivity using adjoint-based a posteriori error estimation. A mathematical overview is provided of direct sensitivities and adjoint methods for both steady state and transient simulations. Two case studies are presented to demonstrate the utility of these methods. A direct sensitivity method is implemented to solve a source inversion problem for steady state internal flows subject to convection diffusion. Real time performance is achieved using a novel decomposition into offline and online calculations. Adjoint methods are used to reconstruct initial conditions of a contamination event in an external flow. We demonstrate an adjoint based transient solution. In addition, we investigated time domain decomposition algorithms in an attempt to improve the efficiency of transient simulations. Because derivative calculations are at the root of sensitivity calculations, we have developed hybrid automatic differentiation methods and implemented this approach for shape optimization for gas dynamics using the Euler equations. The hybrid automatic differentiation method was applied to a first ...
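The direct/adjoint distinction described above can be made concrete on a toy constrained problem: for an objective J(u) = cᵀu subject to A(p)u = b, a single adjoint solve gives the gradient dJ/dp without re-solving the forward problem for each parameter. All matrices below are hypothetical, chosen only to illustrate the pattern:

```python
import numpy as np

# Toy problem: J(u) = c^T u subject to A(p) u = b, with A(p) = A0 + p * A1.
A0 = np.array([[4.0, 1.0], [1.0, 3.0]])
A1 = np.array([[1.0, 0.0], [0.0, 2.0]])
b = np.array([1.0, 2.0])
c = np.array([1.0, 1.0])

def J(p):
    """Forward solve followed by objective evaluation."""
    u = np.linalg.solve(A0 + p * A1, b)
    return c @ u

def dJ_dp_adjoint(p):
    """Adjoint gradient: solve A^T lam = c once, then dJ/dp = -lam^T (dA/dp) u."""
    A = A0 + p * A1
    u = np.linalg.solve(A, b)
    lam = np.linalg.solve(A.T, c)   # one adjoint solve, independent of how many parameters
    return -lam @ (A1 @ u)
```

With many parameters p_i, the adjoint variable lam is reused for every dJ/dp_i, which is exactly why adjoint methods scale so well for large-scale optimization.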
GPU-Accelerated Sparse Matrix Solvers for Large-Scale Simulations, Phase I
National Aeronautics and Space Administration — Many large-scale numerical simulations can be broken down into common mathematical routines. While the applications may differ, the need to perform functions such as...
The Space Station as a Construction Base for Large Space Structures
Gates, R. M.
1985-01-01
The feasibility of using the Space Station as a construction site for large space structures is examined. An overview is presented of the results of a program entitled Definition of Technology Development Missions (TDM's) for Early Space Stations - Large Space Structures. The definition of LSS technology development missions must be responsive to the needs of future space missions which require large space structures. Long range plans for space were assembled by reviewing Space System Technology Models (SSTM) and other published sources. Those missions which will use large space structures were reviewed to determine the objectives which must be demonstrated by technology development missions. The three TDM's defined during this study are: (1) a construction storage/hangar facility; (2) a passive microwave radiometer; and (3) a precision optical system.
TESLA: Large Signal Simulation Code for Klystrons
International Nuclear Information System (INIS)
Vlasov, Alexander N.; Cooke, Simon J.; Chernin, David P.; Antonsen, Thomas M. Jr.; Nguyen, Khanh T.; Levush, Baruch
2003-01-01
TESLA (Telegraphist's Equations Solution for Linear Beam Amplifiers) is a new code designed to simulate linear-beam vacuum electronic devices with cavities, such as klystrons, extended interaction klystrons, twistrons, and coupled-cavity amplifiers. The model includes a self-consistent, nonlinear solution of the three-dimensional electron equations of motion and the solution of the time-dependent field equations. The model differs from the conventional particle-in-cell approach in that the field spectrum is assumed to consist of a carrier frequency and its harmonics with slowly varying envelopes. Also, fields in the external cavities are modeled with circuit-like equations and couple to fields in the beam region through boundary conditions on the beam tunnel wall. The model in TESLA is an extension of the model used in the gyrotron code MAGY. The TESLA formulation has been extended to treat the multiple-beam case, in which each beam is transported inside its own tunnel. The beams interact with each other as they pass through the gaps in their common cavities. The interaction is treated by modifying the boundary conditions on the wall of each tunnel to include the effect of adjacent beams as well as the fields excited in each cavity. The extended version of TESLA for the multiple-beam case, TESLA-MB, has been developed for single-processor machines and can run on UNIX machines and on PCs with large memory (above 2 GB). The TESLA-MB algorithm is currently being modified to simulate multiple-beam klystrons on multiprocessor machines using the MPI (Message Passing Interface) environment. The code TESLA has been verified by comparison with MAGIC for single- and multiple-beam cases. The TESLA code and the MAGIC code predict the same power to within 1% for a simple two-cavity klystron design, while the computational time for TESLA is orders of magnitude less than for MAGIC 2D. In addition, TESLA was recently used to model the L-6048 klystron, code
Large scale molecular simulations of nanotoxicity.
Jimenez-Cruz, Camilo A; Kang, Seung-gu; Zhou, Ruhong
2014-01-01
The widespread use of nanomaterials in biomedical applications has been accompanied by an increasing interest in understanding their interactions with tissues, cells, and biomolecules, and in particular, in how they might affect the integrity of cell membranes and proteins. In this mini-review, we present a summary of some of the recent studies on this important subject, especially from the point of view of large-scale molecular simulations. Carbon-based nanomaterials and noble metal nanoparticles are the main focus, with additional discussions on quantum dots and other nanoparticles as well. The driving forces for adsorption of fullerenes, carbon nanotubes, and graphene nanosheets onto proteins or cell membranes are found to be mainly hydrophobic interactions and the so-called π-π stacking (between aromatic rings), while for the noble metal nanoparticles the long-range electrostatic interactions play a bigger role. More interestingly, there is also growing evidence that nanotoxicity can have implications for the de novo design of nanomedicine. For example, the endohedral metallofullerenol Gd@C₈₂(OH)₂₂ is shown to inhibit tumor growth and metastasis by inhibiting the enzyme MMP-9, and graphene is shown to disrupt bacterial cell membranes by insertion/cutting as well as destructive extraction of lipid molecules. These recent findings have provided a better understanding of nanotoxicity at the molecular level and have also suggested therapeutic potential in using the cytotoxicity of nanoparticles against cancer or bacterial cells. © 2014 Wiley Periodicals, Inc.
Large-Eddy Simulation of Subsonic Jets
International Nuclear Information System (INIS)
Vuorinen, Ville; Wehrfritz, Armin; Yu Jingzhou; Kaario, Ossi; Larmi, Martti; Boersma, Bendiks Jan
2011-01-01
The present study deals with the development and validation of a fully explicit, compressible Runge-Kutta-4 (RK4) Navier-Stokes solver in the open-source CFD programming environment OpenFOAM. The background motivation is to shift towards an explicit, density-based solution strategy and thereby avoid the pressure-based algorithms currently provided in the standard OpenFOAM release for Large-Eddy Simulation (LES). This shift is considered necessary in strongly compressible flows when Ma > 0.5. Our application of interest is the pre-mixing stage in direct-injection gas engines, where high injection pressures are typically utilized. First, the developed flow solver is discussed and validated. Then, the implementation of subsonic inflow conditions using a forcing region in combination with a simplified nozzle geometry is discussed and validated. After this, LES of mixing in compressible, round jets at Ma = 0.3, 0.5 and 0.65 are carried out. The corresponding jet Reynolds numbers are Re = 6000, 10000 and 13000. Results for two meshes are presented. The results imply that the present solver produces turbulent structures, resolves a range of turbulent eddy frequencies, and gives mesh-independent results within satisfactory limits for mean flow and turbulence statistics.
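The RK4 time integrator at the heart of such an explicit solver follows the standard four-stage update. The sketch below is a hedged illustration only: it advances a scalar model problem du/dt = -u, not the compressible Navier-Stokes equations, and the function names are invented for this example.

```python
import numpy as np

def rk4_step(f, u, t, dt):
    """Classical four-stage Runge-Kutta update for du/dt = f(t, u)."""
    k1 = f(t, u)
    k2 = f(t + 0.5 * dt, u + 0.5 * dt * k1)
    k3 = f(t + 0.5 * dt, u + 0.5 * dt * k2)
    k4 = f(t + dt, u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

# Model problem: du/dt = -u with u(0) = 1, exact solution exp(-t).
f = lambda t, u: -u
u, t, dt = 1.0, 0.0, 0.1
for _ in range(10):          # integrate to t = 1
    u = rk4_step(f, u, t, dt)
    t += dt
err = abs(u - np.exp(-1.0))  # fourth-order accurate: tiny at dt = 0.1
```

In a density-based flow solver the same update is applied to the vector of conserved variables, with `f` evaluating the spatial flux divergence.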
Large eddy simulation of stably stratified turbulence
International Nuclear Information System (INIS)
Shen Zhi; Zhang Zhaoshun; Cui Guixiang; Xu Chunxiao
2011-01-01
Stably stratified turbulence is a common phenomenon in the atmosphere and ocean. In this paper large eddy simulation is utilized to investigate homogeneous stably stratified turbulence numerically at Reynolds numbers Re = uL/ν = 10²∼10³ and Froude numbers Fr = u/NL = 10⁻²∼10⁰, in which u is the root mean square of the velocity fluctuations, L is the integral scale and N is the Brunt-Väisälä frequency. Three sets of computation cases are designed with different initial conditions, namely isotropic turbulence, Taylor-Green vortex and internal waves, to investigate the statistical properties arising from different origins. The computed horizontal and vertical energy spectra are consistent with observations in the atmosphere and ocean when the composite parameter ReFr² is greater than O(1). It has also been found in this paper that stratified turbulence can develop under different initial velocity conditions and that internal wave energy dominates in the developed stably stratified turbulence.
Early, Derrick A.; Haile, William B.; Turczyn, Mark T.; Griffin, Thomas J. (Technical Monitor)
2001-01-01
NASA Goddard Space Flight Center and the European Space Agency (ESA) conducted a disturbance verification test on a flight Solar Array 3 (SA3) for the Hubble Space Telescope using the ESA Large Space Simulator (LSS) in Noordwijk, the Netherlands. The LSS cyclically illuminated the SA3 to simulate orbital temperature changes in a vacuum environment. Data acquisition systems measured signals from force transducers and accelerometers resulting from thermally induced vibrations of the SA3. The LSS with its seismic mass boundary provided an excellent background environment for this test. This paper discusses the analysis performed on the measured transient SA3 responses and provides a summary of the results.
Benchmarking processes for managing large international space programs
Mandell, Humboldt C., Jr.; Duke, Michael B.
1993-01-01
The relationship between management style and program costs is analyzed to determine the feasibility of financing large international space missions. The incorporation of management systems is considered essential to realizing low-cost spacecraft and planetary surface systems. Several companies were studied, ranging from the large Lockheed 'Skunk Works' to small firms including Space Industries, Inc., Rocket Research Corp., and Orbital Sciences Corp. It is concluded that to lower prices, the ways in which spacecraft and hardware are developed must be changed. Benchmarking of successful low-cost space programs has revealed a number of prescriptive rules for low-cost management, including major changes in the relationships between the public and private sectors.
Nonterrestrial material processing and manufacturing of large space systems
Von Tiesenhausen, G.
1979-01-01
Nonterrestrial processing of materials and manufacturing of large space system components from preprocessed lunar materials at a manufacturing site in space is described. Lunar materials mined and preprocessed at the lunar resource complex will be flown to the space manufacturing facility (SMF), where, together with supplementary terrestrial materials, they will be given final processing and fabricated into space communication systems, solar cell blankets, radio frequency generators, and electrical equipment. Satellite Power System (SPS) material requirements and lunar material availability and utilization are detailed, and the SMF processing, refining, and fabricating facilities, material flow, and manpower requirements are described.
Large-eddy simulations of turbulence
National Research Council Canada - National Science Library
Lesieur, Marcel; Métais, O; Comte, P
2005-01-01
... physical-space models are generally more readily applied, spectral models give insight into the requirements and limitations in subgrid-scale modeling and backscattering. A third special feature ...
Potential large missions enabled by NASA's space launch system
Stahl, H. Philip; Hopkins, Randall C.; Schnell, Andrew; Smith, David A.; Jackman, Angela; Warfield, Keith R.
2016-07-01
Large space telescope missions have always been limited by their launch vehicle's mass and volume capacities. The Hubble Space Telescope (HST) was specifically designed to fit inside the Space Shuttle and the James Webb Space Telescope (JWST) is specifically designed to fit inside an Ariane 5. Astrophysicists desire even larger space telescopes. NASA's "Enduring Quests Daring Visions" report calls for an 8- to 16-m Large UV-Optical-IR (LUVOIR) Surveyor mission to enable ultra-high-contrast spectroscopy and coronagraphy. AURA's "From Cosmic Birth to Living Earth" report calls for a 12-m class High-Definition Space Telescope to pursue transformational scientific discoveries. NASA's "Planning for the 2020 Decadal Survey" calls for a Habitable Exoplanet Imaging (HabEx) and a LUVOIR as well as Far-IR and an X-Ray Surveyor missions. Packaging larger space telescopes into existing launch vehicles is a significant engineering complexity challenge that drives cost and risk. NASA's planned Space Launch System (SLS), with its 8 or 10-m diameter fairings and ability to deliver 35 to 45-mt of payload to Sun-Earth-Lagrange-2, mitigates this challenge by fundamentally changing the design paradigm for large space telescopes. This paper reviews the mass and volume capacities of the planned SLS, discusses potential implications of these capacities for designing large space telescope missions, and gives three specific mission concept implementation examples: a 4-m monolithic off-axis telescope, an 8-m monolithic on-axis telescope and a 12-m segmented on-axis telescope.
Potential Large Decadal Missions Enabled by NASA's Space Launch System
Stahl, H. Philip; Hopkins, Randall C.; Schnell, Andrew; Smith, David Alan; Jackman, Angela; Warfield, Keith R.
2016-01-01
Large space telescope missions have always been limited by their launch vehicle's mass and volume capacities. The Hubble Space Telescope (HST) was specifically designed to fit inside the Space Shuttle and the James Webb Space Telescope (JWST) is specifically designed to fit inside an Ariane 5. Astrophysicists desire even larger space telescopes. NASA's "Enduring Quests Daring Visions" report calls for an 8- to 16-m Large UV-Optical-IR (LUVOIR) Surveyor mission to enable ultra-high-contrast spectroscopy and coronagraphy. AURA's "From Cosmic Birth to Living Earth" report calls for a 12-m class High-Definition Space Telescope to pursue transformational scientific discoveries. NASA's "Planning for the 2020 Decadal Survey" calls for a Habitable Exoplanet Imaging (HabEx) and a LUVOIR as well as Far-IR and an X-Ray Surveyor missions. Packaging larger space telescopes into existing launch vehicles is a significant engineering complexity challenge that drives cost and risk. NASA's planned Space Launch System (SLS), with its 8 or 10-m diameter fairings and ability to deliver 35 to 45-mt of payload to Sun-Earth-Lagrange-2, mitigates this challenge by fundamentally changing the design paradigm for large space telescopes. This paper reviews the mass and volume capacities of the planned SLS, discusses potential implications of these capacities for designing large space telescope missions, and gives three specific mission concept implementation examples: a 4-m monolithic off-axis telescope, an 8-m monolithic on-axis telescope and a 12-m segmented on-axis telescope.
Large-Signal Klystron Simulations Using KLSC
Carlsten, B. E.; Ferguson, P.
1997-05-01
We describe a new, 2-1/2 dimensional, klystron-simulation code, KLSC. This code has a sophisticated input cavity model for calculating the klystron gain with arbitrary input cavity matching and tuning, and is capable of modeling coupled output cavities. We will discuss the input and output cavity models, and present simulation results from a high-power, S-band design. We will use these results to explore tuning issues with coupled output cavities.
Observation and simulation of AGW in Space
Kunitsyn, Vyacheslav; Kholodov, Alexander; Andreeva, Elena; Nesterov, Ivan; Padokhin, Artem; Vorontsov, Artem
2014-05-01
Examples are presented of satellite observations and imaging of AGWs and related phenomena in space: travelling ionospheric disturbances (TIDs). The structure of AGW perturbations was reconstructed by satellite radio tomography (RT) based on the signals of Global Navigation Satellite Systems (GNSS). The experiments use different GNSS, both low-orbiting (Russian Tsikada and American Transit) and high-orbiting (GPS, GLONASS, Galileo, Beidou). Examples of RT imaging of TIDs and AGWs from anthropogenic sources such as ground explosions, rocket launches, and heating of the ionosphere by high-power radio waves are presented. In the latter case, the corresponding AGWs and TIDs were generated in response to modulation of the power of the heating wave. Natural AGW-like wave disturbances are frequently observed in the atmosphere and ionosphere in the form of variations in density and electron concentration. These phenomena are caused by the influence of the near-space environment, the atmosphere, and surface phenomena including long-period vibrations of the Earth's surface, earthquakes, explosions, temperature heating, seiches, tsunami waves, etc. Examples of experimental RT reconstructions of wave disturbances associated with earthquakes and tsunami waves are presented, and RT images of TIDs caused by variations in corpuscular ionization are demonstrated. The results of numerical modeling of AGW generation by some surface and volume sources are discussed. The milli-Hertz AGWs generated by these sources induce perturbations with a typical scale of a few hundred kilometers at the heights of the middle atmosphere and ionosphere. The numerical modeling is based on the solution of the equations of geophysical hydrodynamics. The results of the numerical simulations agree with the observations. The authors acknowledge the support of the Russian Foundation for Basic Research (grants 14-05-00855 and 13-05-01122), grant of the President of Russian Federation MK-2670
Deep Space Navigation and Timing Architecture and Simulation, Phase I
National Aeronautics and Space Administration — Microcosm will develop a deep space navigation and timing architecture and associated simulation, incorporating state-of-the art radiometric, x-ray pulsar, and laser...
25th Space Simulation Conference. Environmental Testing: The Earth-Space Connection
Packard, Edward
2008-01-01
Topics covered include: Methods of Helium Injection and Removal for Heat Transfer Augmentation; The ESA Large Space Simulator Mechanical Ground Support Equipment for Spacecraft Testing; Temperature Stability and Control Requirements for Thermal Vacuum/Thermal Balance Testing of the Aquarius Radiometer; The Liquid Nitrogen System for Chamber A: A Change from Original Forced Flow Design to a Natural Flow (Thermo Siphon) System; Return to Mercury: A Comparison of Solar Simulation and Flight Data for the MESSENGER Spacecraft; Floating Pressure Conversion and Equipment Upgrades of Two 3.5kw, 20k, Helium Refrigerators; Affect of Air Leakage into a Thermal-Vacuum Chamber on Helium Refrigeration Heat Load; Special ISO Class 6 Cleanroom for the Lunar Reconnaissance Orbiter (LRO) Project; A State-of-the-Art Contamination Effects Research and Test Facility Martian Dust Simulator; Cleanroom Design Practices and Their Influence on Particle Counts; Extra Terrestrial Environmental Chamber Design; Contamination Sources Effects Analysis (CSEA) - A Tool to Balance Cost/Schedule While Managing Facility Availability; SES and Acoustics at GSFC; HST Super Lightweight Interchangeable Carrier (SLIC) Static Test; Virtual Shaker Testing: Simulation Technology Improves Vibration Test Performance; Estimating Shock Spectra: Extensions beyond GEVS; Structural Dynamic Analysis of a Spacecraft Multi-DOF Shaker Table; Direct Field Acoustic Testing; Manufacture of Cryoshroud Surfaces for Space Simulation Chambers; The New LOTIS Test Facility; Thermal Vacuum Control Systems Options for Test Facilities; Extremely High Vacuum Chamber for Low Outgassing Processing at NASA Goddard; Precision Cleaning - Path to Premier; The New Anechoic Shielded Chambers Designed for Space and Commercial Applications at LIT; Extraction of Thermal Performance Values from Samples in the Lunar Dust Adhesion Bell Jar; Thermal (Silicon Diode) Data Acquisition System; Aquarius's Instrument Science Data System (ISDS) Automated
Macro Level Simulation Model Of Space Shuttle Processing
2000-01-01
The contents include: 1) Space Shuttle Processing Simulation Model; 2) Knowledge Acquisition; 3) Simulation Input Analysis; 4) Model Applications in Current Shuttle Environment; and 5) Model Applications for Future Reusable Launch Vehicles (RLV's). This paper is presented in viewgraph form.
Large-eddy simulation of maritime deep tropical convection
Directory of Open Access Journals (Sweden)
Peter A Bogenschutz
2009-12-01
This study represents an attempt to apply Large-Eddy Simulation (LES) resolution to simulate deep tropical convection in near equilibrium for 24 hours over an area of about 205 x 205 km², which is comparable to that of a typical horizontal grid cell in a global climate model. The simulation is driven by large-scale thermodynamic tendencies derived from mean conditions during the GATE Phase III field experiment. The LES uses 2048 x 2048 x 256 grid points with horizontal grid spacing of 100 m and vertical grid spacing ranging from 50 m in the boundary layer to 100 m in the free troposphere. The simulation reaches a near-equilibrium deep convection regime in 12 hours. The simulated vertical cloud distribution exhibits a trimodal vertical distribution of deep, middle and shallow clouds similar to that often observed in the Tropics. A sensitivity experiment in which cold pools are suppressed by switching off the evaporation of precipitation results in much lower amounts of shallow and congestus clouds. Unlike the benchmark LES, where the new deep clouds tend to appear along the edges of spreading cold pools, the deep clouds in the no-cold-pool experiment tend to reappear at the sites of the previous deep clouds and tend to be surrounded by extensive areas of sporadic shallow clouds. The vertical velocity statistics of updraft and downdraft cores below 6 km height are compared to aircraft observations made during GATE. The comparison shows generally good agreement, and strongly suggests that the LES simulation can be used as a benchmark to represent the dynamics of tropical deep convection on scales ranging from large turbulent eddies to mesoscale convective systems. The effect of horizontal grid resolution is examined by running the same case with progressively larger grid sizes of 200, 400, 800, and 1600 m. These runs show a reasonable agreement with the benchmark LES in statistics such as convective available potential energy, convective inhibition
Deep Space Storm Shelter Simulation Study
Dugan, Kathryn; Phojanamongkolkij, Nipa; Cerro, Jeffrey; Simon, Matthew
2015-01-01
Missions outside of Earth's magnetic field are impeded by the presence of radiation from galactic cosmic rays and solar particle events. To overcome this issue, NASA's Advanced Exploration Systems Radiation Works Storm Shelter (RadWorks) has been studying different radiation protective habitats to shield against the onset of solar particle event radiation. These habitats have the capability of protecting occupants by utilizing available materials such as food, water, brine, human waste, trash, and non-consumables to build short-term shelters. Protection comes from building a barrier with the materials that dampens the impact of the radiation on astronauts. The goal of this study is to develop a discrete event simulation, modeling a solar particle event and the building of a protective shelter. The main hallway location within a larger habitat similar to the International Space Station (ISS) is analyzed. The outputs from this model are: 1) the total area covered on the shelter by the different materials, 2) the amount of radiation the crew members receive, and 3) the amount of time for setting up the habitat during specific points in a mission given an event occurs.
Nuclear spectroscopy in large shell model spaces: recent advances
International Nuclear Information System (INIS)
Kota, V.K.B.
1995-01-01
Three different approaches are now available for carrying out nuclear spectroscopy studies in large shell model spaces and they are: (i) the conventional shell model diagonalization approach but taking into account new advances in computer technology; (ii) the recently introduced Monte Carlo method for the shell model; (iii) the spectral averaging theory, based on central limit theorems, in indefinitely large shell model spaces. The various principles, recent applications and possibilities of these three methods are described and the similarity between the Monte Carlo method and the spectral averaging theory is emphasized. (author). 28 refs., 1 fig., 5 tabs
A Process for Comparing Dynamics of Distributed Space Systems Simulations
Cures, Edwin Z.; Jackson, Albert A.; Morris, Jeffery C.
2009-01-01
The paper describes a process that was developed for comparing the primary orbital dynamics behavior between space systems distributed simulations. This process is used to characterize and understand the fundamental fidelities and compatibilities of the modeling of orbital dynamics between spacecraft simulations. This is required for high-latency distributed simulations such as NASA's Integrated Mission Simulation and must be understood when reporting results from simulation executions. This paper presents 10 principal comparison tests along with their rationale and examples of the results. The Integrated Mission Simulation (IMSim) (formerly known as the Distributed Space Exploration Simulation (DSES)) is a NASA research and development project focusing on the technologies and processes that are related to the collaborative simulation of complex space systems involved in the exploration of our solar system. Currently, the NASA centers that are actively participating in the IMSim project are the Ames Research Center, the Jet Propulsion Laboratory (JPL), the Johnson Space Center (JSC), the Kennedy Space Center, the Langley Research Center and the Marshall Space Flight Center. In concept, each center participating in IMSim has its own set of simulation models and environment(s). These simulation tools are used to build the various simulation products that are used for scientific investigation, engineering analysis, system design, training, planning, operations and more. Working individually, these production simulations provide important data to various NASA projects.
Large Eddy Simulation of Heat Entrainment Under Arctic Sea Ice
Ramudu, Eshwan; Gelderloos, Renske; Yang, Di; Meneveau, Charles; Gnanadesikan, Anand
2018-01-01
Arctic sea ice has declined rapidly in recent decades. The faster-than-projected retreat suggests that free-running large-scale climate models may not be accurately representing some key processes. The small-scale turbulent entrainment of heat from the mixed layer could be one such process. To better understand this mechanism, we model the Arctic Ocean's Canada Basin, which is characterized by a perennial anomalously warm Pacific Summer Water (PSW) layer residing at the base of the mixed layer and a summertime Near-Surface Temperature Maximum (NSTM) within the mixed layer trapping heat from solar radiation. We use large eddy simulation (LES) to investigate heat entrainment for different ice-drift velocities and different initial temperature profiles. The value of LES is that the resolved turbulent fluxes are greater than the subgrid-scale fluxes for most of our parameter space. The results show that the presence of the NSTM enhances heat entrainment from the mixed layer. Additionally, no PSW heat is entrained within the parameter space considered. We propose a scaling law for the ocean-to-ice heat flux which depends on the initial temperature anomaly in the NSTM layer and the ice-drift velocity. A case study of "The Great Arctic Cyclone of 2012" gives a turbulent heat flux from the mixed layer that is approximately 70% of the total ocean-to-ice heat flux estimated from the PIOMAS model often used for short-term predictions. Present results highlight the need for large-scale climate models to account for the NSTM layer.
Proceedings of the meeting on large scale computer simulation research
International Nuclear Information System (INIS)
2004-04-01
The meeting to summarize the collaboration activities for FY2003 on the Large Scale Computer Simulation Research was held January 15-16, 2004 at Theory and Computer Simulation Research Center, National Institute for Fusion Science. Recent simulation results, methodologies and other related topics were presented. (author)
Large Deployable Reflector (LDR) Requirements for Space Station Accommodations
Crowe, D. A.; Clayton, M. J.; Runge, F. C.
1985-01-01
Top level requirements for assembly and integration of the Large Deployable Reflector (LDR) Observatory at the Space Station are examined. Concepts are currently under study for LDR which will provide a sequel to the Infrared Astronomy Satellite and the Space Infrared Telescope Facility. LDR will provide a spectacular capability over a very broad spectral range. The Space Station will provide an essential facility for the initial assembly and check out of LDR, as well as a necessary base for refurbishment, repair and modification. By providing a manned platform, the Space Station will remove the time constraint on assembly associated with use of the Shuttle alone. Personnel safety during necessary EVA is enhanced by the presence of the manned facility.
Large Deployable Reflector (LDR) requirements for space station accommodations
Crowe, D. A.; Clayton, M. J.; Runge, F. C.
1985-04-01
Top level requirements for assembly and integration of the Large Deployable Reflector (LDR) Observatory at the Space Station are examined. Concepts are currently under study for LDR which will provide a sequel to the Infrared Astronomy Satellite and the Space Infrared Telescope Facility. LDR will provide a spectacular capability over a very broad spectral range. The Space Station will provide an essential facility for the initial assembly and check out of LDR, as well as a necessary base for refurbishment, repair and modification. By providing a manned platform, the Space Station will remove the time constraint on assembly associated with use of the Shuttle alone. Personnel safety during necessary EVA is enhanced by the presence of the manned facility.
Large TileCal magnetic field simulation
International Nuclear Information System (INIS)
Nessi, M.; Bergsma, F.; Vorozhtsov, S.B.; Borisov, O.N.; Lomakina, O.V.; Karamysheva, G.A.; Budagov, Yu.A.
1994-01-01
The ATLAS magnetic field map has been estimated in the presence of the hadron tile calorimeter. This is an important issue in order to quantify the needs for individual PMT shielding, the effect on the scintillator light yield and its implications for the calibration. The field source is based on a central solenoid and 8 superconducting air-core toroidal coils. The maximum induction value in the scintillating tiles does not exceed 6 mT. When an iron plate is used to close the open drawer window, the field inside the PMT near the extended barrel edge does not exceed 0.6 mT. The distribution of ponderomotive forces acting on individual units of the system was estimated. The VF electromagnetic software OPERA-TOSCA and the CERN POISCR code were used for the field simulation of the system. 10 refs., 4 figs
Shell model in large spaces and statistical spectroscopy
International Nuclear Information System (INIS)
Kota, V.K.B.
1996-01-01
For many nuclear structure problems of current interest it is essential to deal with the shell model in large spaces. For this, three different approaches are now in use, two of which are: (i) the conventional shell model diagonalization approach, taking into account new advances in computer technology; (ii) the shell model Monte Carlo method. A brief overview of these two methods is given. Large-space shell model studies raise fundamental questions regarding the information content of the shell model spectrum of complex nuclei. This led to the third approach: the statistical spectroscopy methods. The principles of statistical spectroscopy have their basis in nuclear quantum chaos and they are described (substantiated by large-scale shell model calculations) in some detail. (author)
Enabling parallel simulation of large-scale HPC network systems
International Nuclear Information System (INIS)
Mubarak, Misbah; Carothers, Christopher D.; Ross, Robert B.; Carns, Philip
2016-01-01
With the increasing complexity of today's high-performance computing (HPC) architectures, simulation has become an indispensable tool for exploring the design space of HPC systems, in particular networks. In order to make effective design decisions, simulations of these systems must possess the following properties: (1) have high accuracy and fidelity, (2) produce results in a timely manner, and (3) be able to analyze a broad range of network workloads. Most state-of-the-art HPC network simulation frameworks, however, are constrained in one or more of these areas. In this work, we present a simulation framework for modeling two important classes of networks used in today's IBM and Cray supercomputers: torus and dragonfly networks. We use the Co-Design of Multi-layer Exascale Storage Architecture (CODES) simulation framework to simulate these network topologies at flit-level detail using the Rensselaer Optimistic Simulation System (ROSS) for parallel discrete-event simulation. Our simulation framework meets all the requirements of a practical network simulation and can assist network designers in design space exploration. First, it uses validated and detailed flit-level network models to provide an accurate and high-fidelity network simulation. Second, instead of relying on serial time-stepped or traditional conservative discrete-event simulations that limit simulation scalability and efficiency, we use the optimistic event-scheduling capability of ROSS to achieve efficient and scalable HPC network simulations on today's high-performance cluster systems. Third, our models give network designers a choice in simulating a broad range of network workloads, including HPC application workloads using detailed network traces, an ability that is rarely offered in parallel with high-fidelity network simulations.
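The discrete-event core of such network simulators can be illustrated in a few lines. The sketch below is purely conceptual: it processes timestamped packet-hop events in order from a priority queue, with the function name, `link_delay` parameter, and integer delays all invented for this example. ROSS's optimistic (Time Warp) scheduling, rollback, and flit-level modeling are far beyond this illustration.

```python
import heapq

def simulate(hops, link_delay=2, n_packets=3):
    """Sequential discrete-event sketch: packets traverse `hops` links."""
    events = []                                # min-heap of (time, packet, hop)
    for p in range(n_packets):
        heapq.heappush(events, (p, p, 0))      # packet p injected at time p
    delivered = {}
    while events:
        t, p, hop = heapq.heappop(events)      # always the lowest timestamp
        if hop == hops:
            delivered[p] = t                   # packet reached destination
        else:                                  # schedule the next hop
            heapq.heappush(events, (t + link_delay, p, hop + 1))
    return delivered

print(simulate(hops=4))   # {0: 8, 1: 9, 2: 10}
```

A conservative parallel simulator would partition the event queue across processes and synchronize on lookahead; an optimistic one like ROSS lets each process race ahead and rolls back on out-of-order message arrival.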
Galactic cosmic ray simulation at the NASA Space Radiation Laboratory
Norbury, John W.; Schimmerling, Walter; Slaba, Tony C.; Azzam, Edouard I.; Badavi, Francis F.; Baiocco, Giorgio; Benton, Eric; Bindi, Veronica; Blakely, Eleanor A.; Blattnig, Steve R.; Boothman, David A.; Borak, Thomas B.; Britten, Richard A.; Curtis, Stan; Dingfelder, Michael; Durante, Marco; Dynan, William S.; Eisch, Amelia J.; Elgart, S. Robin; Goodhead, Dudley T.; Guida, Peter M.; Heilbronn, Lawrence H.; Hellweg, Christine E.; Huff, Janice L.; Kronenberg, Amy; La Tessa, Chiara; Lowenstein, Derek I.; Miller, Jack; Morita, Takashi; Narici, Livio; Nelson, Gregory A.; Norman, Ryan B.; Ottolenghi, Andrea; Patel, Zarana S.; Reitz, Guenther; Rusek, Adam; Schreurs, Ann-Sofie; Scott-Carnell, Lisa A.; Semones, Edward; Shay, Jerry W.; Shurshakov, Vyacheslav A.; Sihver, Lembit; Simonsen, Lisa C.; Story, Michael D.; Turker, Mitchell S.; Uchihori, Yukio; Williams, Jacqueline; Zeitlin, Cary J.
2017-01-01
Most accelerator-based space radiation experiments have been performed with single ion beams at fixed energies. However, the space radiation environment consists of a wide variety of ion species with a continuous range of energies. Due to recent developments in beam switching technology implemented at the NASA Space Radiation Laboratory (NSRL) at Brookhaven National Laboratory (BNL), it is now possible to rapidly switch ion species and energies, allowing for the possibility to more realistically simulate the actual radiation environment found in space. The present paper discusses a variety of issues related to implementation of galactic cosmic ray (GCR) simulation at NSRL, especially for experiments in radiobiology. Advantages and disadvantages of different approaches to developing a GCR simulator are presented. In addition, issues common to both GCR simulation and single beam experiments are compared to issues unique to GCR simulation studies. A set of conclusions is presented as well as a discussion of the technical implementation of GCR simulation. PMID:26948012
Extra-large letter spacing improves reading in dyslexia
Zorzi, Marco; Barbiero, Chiara; Facoetti, Andrea; Lonciari, Isabella; Carrozzi, Marco; Montico, Marcella; Bravar, Laura; George, Florence; Pech-Georgel, Catherine; Ziegler, Johannes C.
2012-01-01
Although the causes of dyslexia are still debated, all researchers agree that the main challenge is to find ways that allow a child with dyslexia to read more words in less time, because reading more is undisputedly the most efficient intervention for dyslexia. Sophisticated training programs exist, but they typically target the component skills of reading, such as phonological awareness. After the component skills have improved, the main challenge remains (that is, reading deficits must be treated by reading more—a vicious circle for a dyslexic child). Here, we show that a simple manipulation of letter spacing substantially improved text reading performance on the fly (without any training) in a large, unselected sample of Italian and French dyslexic children. Extra-large letter spacing helps reading, because dyslexics are abnormally affected by crowding, a perceptual phenomenon with detrimental effects on letter recognition that is modulated by the spacing between letters. Extra-large letter spacing may help to break the vicious circle by rendering the reading material more easily accessible. PMID:22665803
Space Science Investigation: NASA ISS Stowage Simulator
Crawford, Gary
2017-01-01
During this internship the opportunity was granted to work with the Integrated, Graphics, Operations and Analysis Laboratory (IGOAL) team. The main assignment was to create 12 achievement patches for the Space Station training simulator called the "NASA ISS Stowage Training Game." This project was built using previous IGOAL developed software. To accomplish this task, Adobe Photoshop and Adobe Illustrator were used to craft the badges and other elements required. Blender, a 3D modeling software, was used to make the required 3D elements. Blender was a useful tool to make things such as a CTB bag for the "No More Bob" patch which shows a gentleman kicking a CTB bag into the distance. It was also used to pose characters to the positions that was optimal for their patches as in the "Station Sanitation" patch which portrays and astronaut waving on a U.S module on a truck. Adobe Illustrator was the main piece of software for this task. It was used to craft the badges and upload them when they were completed. The style of the badges were flat, meaning that they shouldn't look three dimensional in any way, shape or form. Adobe Photoshop was used when any pictures need brightening and was where the texture for the CTB bag was made. In order for the patches to be ready for the game's next major release, they have to go under some critical reviewing, revising and re-editing to make sure the other artists and the rest of the staff are satisfied with the final products. Many patches were created and revamped to meet the flat setting and incorporate suggestions from the IGOAL team. After the three processes were completed, the badges were implemented into the game (reference fig1 for badges). After a month of designing badges, the finished products were placed into the final game build via the programmers. The art was the final piece in showcasing the latest build to the public for testing. Comments from the testers were often exceptional and the feedback on the badges were
Power conditioning for large dc motors for space flight applications
Veatch, Martin S.; Anderson, Paul M.; Eason, Douglas J.; Landis, David M.
1988-01-01
The design and performance of a prototype power-conditioning system for use with large brushless dc motors on NASA space missions are discussed in detail and illustrated with extensive diagrams, drawings, and graphs. The 5-kW 8-phase parallel module evaluated here would be suitable for use in the Space Shuttle Orbiter cargo bay. A current-balancing magnetic assembly with low distributed inductance permits high-speed current switching from a low-voltage bus as well as current balancing between parallel MOSFETs.
Density-functional theory simulation of large quantum dots
Jiang, Hong; Baranger, Harold U.; Yang, Weitao
2003-10-01
Kohn-Sham spin-density functional theory provides an efficient and accurate model to study electron-electron interaction effects in quantum dots, but its application to large systems is a challenge. Here an efficient method for the simulation of quantum dots using density-functional theory is developed; it includes the particle-in-the-box representation of the Kohn-Sham orbitals, an efficient conjugate-gradient method to directly minimize the total energy, a Fourier convolution approach for the calculation of the Hartree potential, and a simplified multigrid technique to accelerate the convergence. We test the methodology in a two-dimensional model system and show that numerical studies of large quantum dots with several hundred electrons become computationally affordable. In the noninteracting limit, the classical dynamics of the system we study can be continuously varied from integrable to fully chaotic. The qualitative difference in the noninteracting classical dynamics has an effect on the quantum properties of the interacting system: integrable classical dynamics leads to higher-spin states and a broader distribution of spacing between Coulomb blockade peaks.
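As an illustration of the Fourier convolution approach mentioned for the Hartree potential, the following NumPy sketch shows the O(N log N) pattern on a toy periodic 2D grid; the smooth short-range kernel is an assumption standing in for the paper's actual Coulomb treatment:

```python
import numpy as np

def convolve_potential(density, kernel):
    """Evaluate V(r) = sum_r' n(r') k(r - r') on a periodic grid via FFTs.
    FFT-based convolution reduces the cost from O(N^2) to O(N log N)."""
    return np.real(np.fft.ifft2(np.fft.fft2(density) * np.fft.fft2(kernel)))

# Toy 2D grid: a single point charge and a hypothetical short-range kernel.
n = 64
density = np.zeros((n, n))
density[n // 2, n // 2] = 1.0
x = np.fft.fftfreq(n) * n              # grid coordinates with periodic wrapping
r2 = x[:, None] ** 2 + x[None, :] ** 2
kernel = np.exp(-r2 / 8.0)             # smooth stand-in for the Coulomb kernel
V = convolve_potential(density, kernel)
```

Because the density is a delta function here, the resulting potential is just the kernel centered on the charge, which makes the convolution easy to verify by eye.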
Large Scale Simulation Platform for NODES Validation Study
Energy Technology Data Exchange (ETDEWEB)
Sotorrio, P. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Qin, Y. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Min, L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2017-04-27
This report summarizes the Large Scale (LS) simulation platform created for the Eaton NODES project. The simulation environment consists of both wholesale market simulator and distribution simulator and includes the CAISO wholesale market model and a PG&E footprint of 25-75 feeders to validate the scalability under a scenario of 33% RPS in California with additional 17% of DERS coming from distribution and customers. The simulator can generate hourly unit commitment, 5-minute economic dispatch, and 4-second AGC regulation signals. The simulator is also capable of simulating greater than 10k individual controllable devices. Simulated DERs include water heaters, EVs, residential and light commercial HVAC/buildings, and residential-level battery storage. Feeder-level voltage regulators and capacitor banks are also simulated for feeder-level real and reactive power management and Vol/Var control.
An FPGA computing demo core for space charge simulation
International Nuclear Information System (INIS)
Wu, Jinyuan; Huang, Yifei
2009-01-01
In accelerator physics, space charge simulation requires a large amount of computing power. In a particle system, each calculation requires time- and resource-consuming operations such as multiplications, divisions, and square roots. Because of the flexibility of field programmable gate arrays (FPGAs), we implemented this task with efficient use of the available computing resources and completely eliminated the non-calculating operations that are indispensable in regular micro-processors (e.g. instruction fetch, instruction decoding, etc.). We designed and tested a 16-bit demo core for computing the Coulomb force in an Altera Cyclone II FPGA device. To save resources, the inverse square-root cube operation in our design is computed using a memory look-up table addressed with the nine to ten most significant non-zero bits. At a 200 MHz internal clock, our demo core reaches a throughput of 200 M pairs/s/core, faster than a typical 2 GHz micro-processor by about a factor of 10. Temperature and power consumption of FPGAs were also lower than those of micro-processors. Fast and convenient, FPGAs can serve as alternatives to time-consuming micro-processors for space charge simulation.
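The look-up-table trick described above can be sketched in software. The following Python fragment is illustrative only: the 10-bit index width and the truncation rounding are assumptions, not the core's exact design. It approximates (r^2)^(-3/2) from the most significant bits of r^2 and uses it for a pairwise Coulomb-style force:

```python
def lut_inv_r3(r2, index_bits=10):
    """Approximate (r^2)^(-3/2) in the spirit of the FPGA core's look-up
    table: keep only the top `index_bits` bits of r^2 as the address."""
    shift = max(r2.bit_length() - index_bits, 0)  # low-order bits the table drops
    truncated = (r2 >> shift) << shift            # value the table entry stands for
    return truncated ** -1.5

def coulomb_force(p1, p2):
    """Force on p1 from p2 for unit charges: F = (p1 - p2) / r^3."""
    d = [a - b for a, b in zip(p1, p2)]
    r2 = sum(c * c for c in d)
    return [c * lut_inv_r3(r2) for c in d]

f = coulomb_force((3, 4, 0), (0, 0, 0))   # r = 5, so F = d / 125
```

When r^2 fits within the index width, the table lookup is exact; for larger separations the truncation error shrinks relative to r^2, which is why addressing by leading bits works well for an inverse power law.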
Visualising very large phylogenetic trees in three dimensional hyperbolic space
Directory of Open Access Journals (Sweden)
Liberles David A
2004-04-01
Background: Common existing phylogenetic tree visualisation tools are not able to display readable trees with more than a few thousand nodes. These existing methodologies are based in two-dimensional space. Results: We introduce the idea of visualising phylogenetic trees in three-dimensional hyperbolic space with the Walrus graph visualisation tool and have developed a conversion tool that enables the conversion of standard phylogenetic tree formats to Walrus' format. With Walrus, it becomes possible to visualise and navigate phylogenetic trees with more than 100,000 nodes. Conclusion: Walrus enables desktop visualisation of very large phylogenetic trees in three-dimensional hyperbolic space. This application is potentially useful for visualisation of the tree of life and for functional genomics derivatives, like The Adaptive Evolution Database (TAED).
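A format converter like the one described must first parse a standard tree format. As a hypothetical minimal sketch (leaf and clade names only, no branch lengths or quoting, unlike full Newick), a recursive-descent parser could be:

```python
def parse_newick(s):
    """Parse a minimal Newick tree (names only, no branch lengths) into
    nested lists, e.g. '((A,B),C);' -> [['A', 'B'], 'C']."""
    pos = 0

    def node():
        nonlocal pos
        if s[pos] == '(':
            pos += 1                      # consume '('
            children = [node()]
            while s[pos] == ',':
                pos += 1                  # consume ',' between siblings
                children.append(node())
            pos += 1                      # consume ')'
            return children
        start = pos
        while s[pos] not in '(),;':
            pos += 1
        return s[start:pos]               # a leaf name

    return node()

tree = parse_newick("((A,B),(C,(D,E)));")
```

A real converter would then walk this structure and emit Walrus' own graph format; the nested-list form is just the intermediate representation.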
Lagrangian space consistency relation for large scale structure
International Nuclear Information System (INIS)
Horn, Bart; Hui, Lam; Xiao, Xiao
2015-01-01
Consistency relations, which relate the squeezed limit of an (N+1)-point correlation function to an N-point function, are non-perturbative symmetry statements that hold even if the associated high momentum modes are deep in the nonlinear regime and astrophysically complex. Recently, Kehagias and Riotto and Peloso and Pietroni discovered a consistency relation applicable to large scale structure. We show that this can be recast into a simple physical statement in Lagrangian space: that the squeezed correlation function (suitably normalized) vanishes. This holds regardless of whether the correlation observables are at the same time or not, and regardless of whether multiple-streaming is present. The simplicity of this statement suggests that an analytic understanding of large scale structure in the nonlinear regime may be particularly promising in Lagrangian space
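The abstract's central statement — that the suitably normalized squeezed correlation function vanishes in Lagrangian space — can be written schematically as follows (the notation is assumed here, not taken from the paper):

```latex
% Squeezed (N+1)-point function with soft mode q, normalized by the
% soft-mode power spectrum, in Lagrangian space:
\lim_{\mathbf{q}\to 0}\,
\frac{\big\langle \delta_L(\mathbf{q})\,
      \mathcal{O}_L(\mathbf{k}_1,\ldots,\mathbf{k}_N)\big\rangle}
     {P_L(q)} \;=\; 0
```

Here \(\delta_L\) denotes the Lagrangian-space density contrast, \(\mathcal{O}_L\) an N-point product of hard modes, and \(P_L(q)\) the soft-mode power spectrum; per the abstract, the statement holds at unequal times and in the presence of multiple streaming.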
Model Experiments for the Determination of Airflow in Large Spaces
DEFF Research Database (Denmark)
Nielsen, Peter V.
Model experiments are one of the methods used for the determination of airflow in large spaces. This paper will discuss the formation of the governing dimensionless numbers. It is shown that experiments with a reduced scale often will necessitate a fully developed turbulence level of the flow. Details of the flow from supply openings are very important for the determination of room air distribution. It is in some cases possible to make a simplified supply opening for the model experiment.
Status Report of Simulated Space Radiation Environment Facility
Energy Technology Data Exchange (ETDEWEB)
Kang, Phil Hyun; Nho, Young Chang; Jeun, Joon Pyo; Choi, Jae Hak; Lim, Youn Mook; Jung, Chan Hee; Jeon, Young Kyu
2007-11-15
The technology for performance testing and improvement of materials that are durable in the space environment is a military-related technology, veiled and strictly regulated in advanced countries such as the US and Russia. This core technology cannot easily be transferred to other countries either. It is therefore the most fundamental and necessary research area for the successful establishment of a space environment system. Since the task of evaluating the effects of space radiation on space materials and components plays an important role in extending satellite lifetime and decreasing the rate of operational failures, it is necessary to establish a simulated space radiation facility and a systematic testing procedure. This report deals with the status of the technology that enables the simulation of space environment effects, including the effect of space radiation on space materials. This information, such as the fundamental knowledge of the space environment and the research status of various countries regarding the simulation of space environment effects on space materials, will be useful for research on the radiation hardness of materials. Furthermore, it will help developers of space materials derive a better choice of materials, reduce the design cycle time, and improve safety.
A Data Management System for International Space Station Simulation Tools
Betts, Bradley J.; DelMundo, Rommel; Elcott, Sharif; McIntosh, Dawn; Niehaus, Brian; Papasin, Richard; Mah, Robert W.; Clancy, Daniel (Technical Monitor)
2002-01-01
Groups associated with the design, operational, and training aspects of the International Space Station make extensive use of modeling and simulation tools. Users of these tools often need to access and manipulate large quantities of data associated with the station, ranging from design documents to wiring diagrams. Retrieving and manipulating this data directly within the simulation and modeling environment can provide substantial benefit to users. An approach for providing these kinds of data management services, including a database schema and class structure, is presented. Implementation details are also provided as a data management system is integrated into the Intelligent Virtual Station, a modeling and simulation tool developed by the NASA Ames Smart Systems Research Laboratory. One use of the Intelligent Virtual Station is generating station-related training procedures in a virtual environment. The data management component allows users to quickly and easily retrieve information related to objects on the station, enhancing their ability to generate accurate procedures. Users can associate new information with objects and have that information stored in a database.
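The abstract mentions a database schema for associating user information with station objects but does not reproduce it. A hypothetical minimal sketch in SQLite — all table and column names invented for illustration — might be:

```python
import sqlite3

# Hypothetical minimal schema: station objects plus user-associated notes.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE station_object (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL,
        kind TEXT NOT NULL          -- e.g. 'module', 'wiring diagram'
    );
    CREATE TABLE annotation (
        id        INTEGER PRIMARY KEY,
        object_id INTEGER NOT NULL REFERENCES station_object(id),
        body      TEXT NOT NULL
    );
""")
conn.execute("INSERT INTO station_object VALUES (1, 'Node 2 hatch', 'module')")
conn.execute("INSERT INTO annotation VALUES (NULL, 1, 'Check seal before procedure')")
rows = conn.execute("""
    SELECT o.name, a.body FROM station_object o
    JOIN annotation a ON a.object_id = o.id
""").fetchall()
```

The join is the operation the abstract describes: given an object selected in the virtual environment, pull back every piece of information users have attached to it.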
Utilization of Large Cohesive Interface Elements for Delamination Simulation
DEFF Research Database (Denmark)
Bak, Brian Lau Verndal; Lund, Erik
2012-01-01
This paper describes the difficulties of utilizing large interface elements in delamination simulation. Solutions to increase the size of applicable interface elements are described, covering numerical integration of the element and modifications of the cohesive law.
Large eddy simulation of premixed and non-premixed combustion
Malalasekera, W; Ibrahim, SS; Masri, AR; Sadasivuni, SK; Gubba, SR
2010-01-01
This paper summarises the authors' experience in using the Large Eddy Simulation (LES) technique for the modelling of premixed and non-premixed combustion. The paper describes the application of the LES-based combustion modelling technique to two well-defined experimental configurations where high quality data are available for validation. The large eddy simulation technique for the modelling of flow and turbulence is based on the solution of governing equations for continuity and momentum in a struct...
On asymptotically efficient simulation of large deviation probabilities.
Dieker, A.B.; Mandjes, M.R.H.
2005-01-01
Consider a family of probabilities for which the decay is governed by a large deviation principle. To find an estimate for a fixed member of this family, one is often forced to use simulation techniques. Direct Monte Carlo simulation, however, is often impractical, particularly if the
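The standard remedy for impractical direct Monte Carlo in rare-event estimation is importance sampling under an exponential change of measure. As an illustration — a textbook Gaussian tail example, not the paper's setting — the tilted estimator below makes a probability of order 1e-5 cheap to estimate:

```python
import math
import random

def direct_mc(a, n, rng):
    """Direct Monte Carlo estimate of P(X > a) for X ~ N(0,1)."""
    return sum(rng.gauss(0, 1) > a for _ in range(n)) / n

def tilted_mc(a, n, rng):
    """Exponentially tilted estimator: sample from N(a,1) and reweight
    each hit by the likelihood ratio exp(-a*x + a^2/2)."""
    total = 0.0
    for _ in range(n):
        x = rng.gauss(a, 1)
        if x > a:
            total += math.exp(-a * x + a * a / 2)
    return total / n

rng = random.Random(42)
est = tilted_mc(4.0, 20000, rng)       # true value is about 3.17e-5
p_direct = direct_mc(4.0, 20000, rng)  # usually sees zero hits at this budget
```

Centering the sampling distribution on the rare threshold is the asymptotically efficient choice for this family, which is exactly the kind of efficiency question the paper studies in general.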
Definition of technology development missions for early space stations: Large space structures
Gates, R. M.; Reid, G.
1984-01-01
The objective studied is the definition of the testbed role of an early Space Station for the construction of large space structures. This is accomplished by defining the LSS technology development missions (TDMs) identified in phase 1. Design and operations trade studies are used to identify the best structural concepts and procedures for each TDM. Details of the TDM designs are then developed along with their operational requirements. Space Station resources required for each mission, both human and physical, are identified. The costs and development schedules for the TDMs provide an indication of the programs needed to develop these missions.
Development of space simulation / net-laboratory system
Usui, H.; Matsumoto, H.; Ogino, T.; Fujimoto, M.; Omura, Y.; Okada, M.; Ueda, H. O.; Murata, T.; Kamide, Y.; Shinagawa, H.; Watanabe, S.; Machida, S.; Hada, T.
A research project for the development of a space simulation / net-laboratory system was approved by the Japan Science and Technology Corporation (JST) in the category of Research and Development for Applying Advanced Computational Science and Technology (ACT-JST) in 2000. This research project, which continues for three years, is a collaboration with an astrophysical simulation group as well as other space simulation groups which use MHD and hybrid models. In this project, we develop a prototype of a unique simulation system which enables us to perform simulation runs by providing or selecting plasma parameters through a Web-based interface on the internet. We are also developing an on-line database system for space simulation from which we will be able to search and extract various information such as simulation methods and programs, manuals, and typical simulation results in graphic or ascii format. This unique system will help simulation beginners start simulation studies without much difficulty or effort, and contribute to the promotion of simulation studies in the STP field. In this presentation, we will report the overview and the current status of the project.
Research of Impact Load in Large Electrohydraulic Load Simulator
Directory of Open Access Journals (Sweden)
Yongguang Liu
2014-01-01
A strong impact load appears in the initial phase when a large electric cylinder is tested in hardware-in-the-loop simulation. In this paper, a mathematical model is built based on AMESim, and the cause of the impact load is investigated by analyzing the trends of parameters in the simulation results. Methods of inhibiting the impact load are presented according to the structural invariance principle and applied to the actual system. The final experimental result indicates that the impact load is suppressed, which provides a good experimental condition for the electric cylinder and promotes the study of large load simulators.
Camera memory study for large space telescope. [charge coupled devices
Hoffman, C. P.; Brewer, J. E.; Brager, E. A.; Farnsworth, D. L.
1975-01-01
Specifications were developed for a memory system to be used as the storage medium for camera detectors on the Large Space Telescope (LST) satellite. Detectors with limited internal storage time, such as intensified charge-coupled devices and silicon intensified targets, are implied. The general characteristics of different approaches to the memory system are reported, with comparisons made within the guidelines set forth for the LST application. Priority ordering of comparisons is on the basis of cost, reliability, power, and physical characteristics. Specific rationales are provided for the rejection of unsuitable memory technologies. A recommended technology was selected and used to establish specifications for a breadboard memory. Procurement scheduling is provided for delivery of system breadboards in 1976, prototypes in 1978, and space-qualified units in 1980.
National Aeronautics and Space Administration — Development of computational infrastructure to support hyper-resolution large-ensemble hydrology simulations from local-to-continental scales A move is currently...
Solid State Large Area Pulsed Solar Simulator for 3-, 4- and 6-Junction Solar Cell Arrays, Phase II
National Aeronautics and Space Administration — The Phase I was successful in delivering a complete prototype of the proposed innovation, an LED-based, solid state, large area, pulsed, solar simulator (ssLAPSS)....
Quality and Reliability of Large-Eddy Simulations
Meyers, Johan; Sagaut, Pierre
2008-01-01
Computational resources have developed to the level that, for the first time, it is becoming possible to apply large-eddy simulation (LES) to turbulent flow problems of realistic complexity. Many examples can be found in technology and in a variety of natural flows. This puts issues related to assessing, assuring, and predicting the quality of LES into the spotlight. Several LES studies have been published in the past, demonstrating a high level of accuracy with which turbulent flow predictions can be attained, without having to resort to the excessive requirements on computational resources imposed by direct numerical simulations. However, the setup and use of turbulent flow simulations requires a profound knowledge of fluid mechanics, numerical techniques, and the application under consideration. The susceptibility of large-eddy simulations to errors in modelling, in numerics, and in the treatment of boundary conditions, can be quite large due to nonlinear accumulation of different contributions over time, ...
Analysis of the Thermo-Elastic Response of Space Reflectors to Simulated Space Environment
Allegri, G.; Ivagnes, M. M.; Marchetti, M.; Poscente, F.
2002-01-01
The evaluation of space environment effects on materials and structures is a key matter in developing a proper design for long-duration missions: since a large part of the satellites operating in the earth orbital environment are employed for telecommunications, the development of space antennas and reflectors featuring high dimensional stability against space environment interactions represents a major challenge for designers. The structural layout of state-of-the-art space antennas and reflectors is very complex, since several different sensitive elements and materials are employed: particular care must be taken in evaluating the actual geometrical configuration of reflectors operating in the space environment, since very limited distortions of the designed layout can produce severe effects on the quality of the signal both received and transmitted, especially for antennas operating at high frequencies. The effects of thermal loads due to direct sunlight exposure and to earth and moon albedo can easily be taken into account employing the standard methods of structural analysis: on the other hand, thermal cycling and exposure to the vacuum environment produce a long-term damage accumulation which affects the whole structure. The typical effects of such exposure are the outgassing of polymeric materials and the contamination of the exposed surfaces, which can sensibly affect the thermo-mechanical properties of the materials themselves and, therefore, the global structural response. The main aim of the present paper is to evaluate the synergistic effects of thermal cycling and of exposure to a high vacuum environment on an innovative antenna developed by Alenia Spazio S.p.A.: to this purpose, both an experimental and a numerical research activity has been developed. A complete prototype of the antenna has been exposed to the space environment simulated by the SAS facility: this latter consists of a high vacuum chamber, equipped with
Investigation of Secondary Neutron Production in Large Space Vehicles for Deep Space
Rojdev, Kristina; Koontz, Steve; Reddell, Brandon; Atwell, William; Boeder, Paul
2016-01-01
Future NASA missions will focus on deep space and Mars surface operations with large structures necessary for transportation of crew and cargo. In addition to the challenges of manufacturing these large structures, there are added challenges from the space radiation environment and its impacts on the crew, electronics, and vehicle materials. Primary radiation from the sun (solar particle events) and from outside the solar system (galactic cosmic rays) interact with materials of the vehicle and the elements inside the vehicle. These interactions lead to the primary radiation being absorbed or producing secondary radiation (primarily neutrons). With all vehicles, the high-energy primary radiation is of most concern. However, with larger vehicles, there is more opportunity for secondary radiation production, which can be significant enough to cause concern. In a previous paper, we embarked upon our first steps toward studying neutron production from large vehicles by validating our radiation transport codes for neutron environments against flight data. The following paper will extend the previous work to focus on the deep space environment and the resulting neutron flux from large vehicles in this deep space environment.
A logistics model for large space power systems
Koelle, H. H.
Space Power Systems (SPS) have to overcome two hurdles: (1) to find an attractive design, manufacturing and assembly concept and (2) to have available a space transportation system that can provide economical logistic support during the construction and operational phases. An initial system feasibility study, some five years ago, was based on a reference system that used terrestrial resources only and relied partially on electric propulsion systems. The conclusion was: it is feasible but not yet economically competitive with other options. This study is based on terrestrial and extraterrestrial resources and on chemical (LH2/LOX) propulsion systems. These engines are available from the Space Shuttle production line and require only small changes. Other so-called advanced propulsion systems investigated did not prove economically superior if lunar LOX is available! We assume that a Shuttle-derived Heavy Lift Launch Vehicle (HLLV) will become available around the turn of the century and that this will be used to establish a research base on the lunar surface. This lunar base has the potential to grow into a lunar factory producing LOX and construction materials, supporting, among other projects, the construction of space power systems in geostationary orbit. A model was developed to simulate the logistics support of such an operation for a 50-year life cycle. After 50 years, 111 SPS units with 5 GW each and an availability of 90% will produce 100 × 5 = 500 GW. The model comprises 60 equations and requires 29 assumptions about the parameters involved. The 60 state variables calculated with the 60 equations mentioned above are given on an annual basis and as averages for the 50-year life cycle. Recycling of defective parts in geostationary orbit is one of the features of the model. The state of the art with respect to SPS technology is introduced as a variable: Mg of mass per MW of electric power delivered. If the space manufacturing facility, a maintenance and repair facility
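The headline figure above can be checked directly; the snippet below simply reproduces the abstract's arithmetic, with 90% availability turning 111 installed units into roughly 100 effective ones:

```python
# Arithmetic from the abstract: 111 SPS units of 5 GW each at 90% availability
# deliver the equivalent of ~100 fully available units, i.e. ~500 GW.
units, unit_power_gw, availability = 111, 5, 0.90
effective_units = units * availability        # 99.9 equivalent units
total_gw = effective_units * unit_power_gw    # 499.5 GW, quoted as 500 GW
```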
SIMON: Remote collaboration system based on large scale simulation
International Nuclear Information System (INIS)
Sugawara, Akihiro; Kishimoto, Yasuaki
2003-01-01
Development of the SIMON (SImulation MONitoring) system is described. SIMON aims to investigate many physical phenomena of tokamak-type nuclear fusion plasma by simulation and to exchange information and carry out joint research with scientists around the world over the internet. The characteristics of SIMON are as follows: 1) reduced simulation load through a trigger sending method, 2) visualization of simulation results and a hierarchical structure of analysis, 3) a reduced number of licenses by using the command line when software is used, 4) improved support for using the network of simulation data output by use of HTML (Hyper Text Markup Language), 5) avoidance of complex built-in work in the client part and 6) small-sized and portable software. The visualization method for large scale simulation, the remote collaboration system based on HTML, the trigger sending method, the hierarchical analytical method, introduction into a three-dimensional electromagnetic transport code, and the technologies of the SIMON system are explained. (S.Y.)
Photoluminescence in large fluence radiation irradiated space silicon solar cells
Energy Technology Data Exchange (ETDEWEB)
Hisamatsu, Tadashi; Kawasaki, Osamu; Matsuda, Sumio [National Space Development Agency of Japan, Tsukuba, Ibaraki (Japan). Tsukuba Space Center; Tsukamoto, Kazuyoshi
1997-03-01
Photoluminescence spectroscopy measurements were carried out for silicon 50{mu}m BSFR space solar cells irradiated with 1 MeV electrons at fluences exceeding 1 x 10{sup 16} e/cm{sup 2} and 10 MeV protons at fluences exceeding 1 x 10{sup 13} p/cm{sup 2}. The results were compared with previous results obtained in a relatively low fluence region, and the radiation-induced defects which cause anomalous degradation of the cell performance in such large fluence regions were discussed. As far as we know, this is the first report presenting PL measurement results at 4.2 K for silicon solar cells irradiated at such large fluences. (author)
Modeling and Simulation of DC Power Electronics Systems Using Harmonic State Space (HSS) Method
DEFF Research Database (Denmark)
Kwon, Jun Bum; Wang, Xiongfei; Bak, Claus Leth
2015-01-01
For the efficiency and simplicity of electric systems, dc-based power electronics systems are widely used in a variety of applications such as electric vehicles, ships, aircraft and also in homes. In these systems, there can be a number of dynamic interactions between loads and other dc-dc....... Based on the state-space averaging and generalized averaging, these also have limitations to show the same results as with the non-linear time domain simulations. This paper presents a modeling and simulation method for a large dc power electronic system by using Harmonic State Space (HSS) modeling...... Through this method, the required computation time and CPU memory for large dc power electronics systems can be reduced. Besides, the achieved results show the same results as with the non-linear time domain simulation, but with a faster simulation time, which is beneficial in a large network....
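The HSS idea described in this abstract can be illustrated with a minimal sketch (our own toy construction, not the paper's converter model): for a scalar linear time-periodic system dx/dt = a(t)x with a(t) = a0 + 2·a1·cos(ωt), truncating the harmonic expansion at order h gives a (2h+1)×(2h+1) matrix built from the Fourier coefficients of a(t) minus the frequency-shift operator.

```python
import numpy as np

# Minimal HSS sketch (illustrative assumptions, not the paper's model):
# dx/dt = a(t) x with a(t) = a0 + 2*a1*cos(w*t).
# The HSS matrix is Toeplitz(Fourier coefficients of a) minus N = diag(j*k*w).

def hss_matrix(a0, a1, w, h):
    n = 2 * h + 1                                  # harmonics -h .. +h
    A = a0 * np.eye(n, dtype=complex)              # 0th Fourier coefficient
    A += a1 * (np.eye(n, k=1) + np.eye(n, k=-1))   # +/-1st coefficients
    N = 1j * w * np.diag(np.arange(-h, h + 1))     # frequency-shift operator
    return A - N

H = hss_matrix(a0=-2.0, a1=0.5, w=2 * np.pi * 50, h=3)
eigs = np.linalg.eigvals(H)
# the trace is preserved: the real parts of the HSS eigenvalues sum to (2h+1)*a0
print(eigs.real.sum())
```

Stability of the periodic system is then read off the real parts of the HSS eigenvalues, in the frequency domain rather than by time-domain simulation.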
A Simulation and Modeling Framework for Space Situational Awareness
International Nuclear Information System (INIS)
Olivier, S.S.
2008-01-01
This paper describes the development and initial demonstration of a new, integrated modeling and simulation framework, encompassing the space situational awareness (SSA) enterprise, for quantitatively assessing the benefit of specific sensor systems, technologies and data analysis techniques. The framework is based on a flexible, scalable architecture to enable efficient, physics-based simulation of the current SSA enterprise, and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel computer systems available, for example, at Lawrence Livermore National Laboratory. The details of the modeling and simulation framework are described, including hydrodynamic models of satellite intercept and debris generation, orbital propagation algorithms, radar cross section calculations, optical brightness calculations, generic radar system models, generic optical system models, specific Space Surveillance Network models, object detection algorithms, orbit determination algorithms, and visualization tools. The use of this integrated simulation and modeling framework on a specific scenario involving space debris is demonstrated.
Simulating cosmic microwave background maps in multiconnected spaces
International Nuclear Information System (INIS)
Riazuelo, Alain; Uzan, Jean-Philippe; Lehoucq, Roland; Weeks, Jeffrey
2004-01-01
This paper describes the computation of cosmic microwave background (CMB) anisotropies in a universe with multiconnected spatial sections and focuses on the implementation of the topology in standard CMB computer codes. The key ingredient is the computation of the eigenmodes of the Laplacian with boundary conditions compatible with the multiconnected space topology. The correlators of the coefficients of the decomposition of the temperature fluctuation in spherical harmonics are computed, and examples are given for spatially flat spaces and one family of spherical spaces, namely, the lens spaces. Under the hypothesis of Gaussian initial conditions, these correlators encode all the topological information of the CMB and suffice to simulate CMB maps.
Manufacturing Process Simulation of Large-Scale Cryotanks
Babai, Majid; Phillips, Steven; Griffin, Brian
2003-01-01
NASA's Space Launch Initiative (SLI) is an effort to research and develop the technologies needed to build a second-generation reusable launch vehicle. It is required that this new launch vehicle be 100 times safer and 10 times cheaper to operate than current launch vehicles. Part of the SLI includes the development of reusable composite and metallic cryotanks. The size of these reusable tanks is far greater than anything ever developed and exceeds the design limits of current manufacturing tools. Several design and manufacturing approaches have been formulated, but many factors must be weighed during the selection process. Among these factors are tooling reachability, cycle times, feasibility, and facility impacts. The manufacturing process simulation capabilities available at NASA's Marshall Space Flight Center have played a key role in down-selecting between the various manufacturing approaches. By creating 3-D manufacturing process simulations, the various approaches can be analyzed in a virtual world before any hardware or infrastructure is built. This analysis can detect and eliminate costly flaws in the various manufacturing approaches. The simulations check for collisions between devices, verify that design limits on joints are not exceeded, and provide cycle times which aid in the development of an optimized process flow. In addition, new ideas and concerns are often raised after seeing the visual representation of a manufacturing process flow. The output of the manufacturing process simulations allows cost and safety comparisons to be performed between the various manufacturing approaches. This output helps determine which manufacturing process options reach the safety and cost goals of the SLI. As part of the SLI, The Boeing Company was awarded a basic period contract to research and propose options for both a metallic and a composite cryotank. Boeing then entered into a task agreement with the Marshall Space Flight Center to provide manufacturing
Remote collaboration system based on large scale simulation
International Nuclear Information System (INIS)
Kishimoto, Yasuaki; Sugahara, Akihiro; Li, J.Q.
2008-01-01
Large scale simulation using super-computers, which generally requires long CPU time and produces large amounts of data, has been extensively studied as a third pillar in various advanced science fields, in parallel to theory and experiment. Such simulation is expected to lead to new scientific discoveries through the elucidation of various complex phenomena which are hardly identified by conventional theoretical and experimental approaches alone. In order to assist such large simulation studies, in which many collaborators working at geographically different places participate and contribute, we have developed a unique remote collaboration system, referred to as SIMON (simulation monitoring system), which is based on a client-server system introducing the idea of up-date processing, in contrast to the widely used post-processing. As a key ingredient, we have developed a trigger method which transmits various requests for up-date processing from the simulation (client) running on a super-computer to a workstation (server). Namely, the simulation running on a super-computer actively controls the timing of up-date processing. The server that has received requests from the ongoing simulation, such as data transfer, data analyses, and visualizations, starts operations according to the requests during the simulation. The server makes the latest results available to web browsers, so that the collaborators can monitor the results at any place and time in the world. By applying the system to a specific simulation project of laser-matter interaction, we have confirmed that the system works well and plays an important role as a collaboration platform on which many collaborators work with one another.
Believability in simplifications of large scale physically based simulation
Han, Donghui; Hsu, Shu-wei; McNamara, Ann; Keyser, John
2013-01-01
We verify two hypotheses which are assumed to be true only intuitively in many rigid body simulations. I: In large scale rigid body simulation, viewers may not be able to perceive distortion incurred by an approximated simulation method. II: Fixing objects under a pile of objects does not affect the visual plausibility. Visual plausibility of scenarios simulated with these hypotheses assumed true is measured using subjective ratings from viewers. As expected, analysis of the results supports the truthfulness of the hypotheses under certain simulation environments. However, our analysis discovered four factors which may affect the authenticity of these hypotheses: the number of collisions simulated simultaneously, the homogeneity of colliding object pairs, the distance from the scene under simulation to the camera position, and the simulation method used. We also try to find an objective metric of visual plausibility from eye-tracking data collected from viewers. Analysis of these results indicates that eye-tracking does not present a suitable proxy for measuring plausibility or distinguishing between types of simulations. © 2013 ACM.
Next Generation Simulation Framework for Robotic and Human Space Missions
Cameron, Jonathan M.; Balaram, J.; Jain, Abhinandan; Kuo, Calvin; Lim, Christopher; Myint, Steven
2012-01-01
The Dartslab team at NASA's Jet Propulsion Laboratory (JPL) has a long history of developing physics-based simulations based on the Darts/Dshell simulation framework that have been used to simulate many planetary robotic missions, such as the Cassini spacecraft and the rovers that are currently driving on Mars. Recent collaboration efforts between the Dartslab team at JPL and the Mission Operations Directorate (MOD) at NASA Johnson Space Center (JSC) have led to significant enhancements to the Dartslab DSENDS (Dynamics Simulator for Entry, Descent and Surface landing) software framework. The new version of DSENDS is now being used for new planetary mission simulations at JPL. JSC is using DSENDS as the foundation for a suite of software known as COMPASS (Core Operations, Mission Planning, and Analysis Spacecraft Simulation) that is the basis for their new human space mission simulations and analysis. In this paper, we will describe the collaborative process with the JPL Dartslab and the JSC MOD team that resulted in the redesign and enhancement of the DSENDS software. We will outline the improvements in DSENDS that simplify creation of new high-fidelity robotic/spacecraft simulations. We will illustrate how DSENDS simulations are assembled and show results from several mission simulations.
Large-scale computing techniques for complex system simulations
Dubitzky, Werner; Schott, Bernard
2012-01-01
Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and
Visualization of the Flux Rope Generation Process Using Large Quantities of MHD Simulation Data
Directory of Open Access Journals (Sweden)
Y Kubota
2013-03-01
We present a new concept of analysis using visualization of large quantities of simulation data. The time development of 3D objects with high temporal resolution provides the opportunity for scientific discovery. We visualize large quantities of simulation data using the visualization application 'Virtual Aurora', based on AVS (Advanced Visual Systems), and the parallel distributed processing of the 'Space Weather Cloud' at NICT, based on Gfarm technology. We introduce two results of high temporal resolution visualization: the magnetic flux rope generation process and dayside reconnection, using a system of magnetic field line tracing.
An optimal beam alignment method for large-scale distributed space surveillance radar system
Huang, Jian; Wang, Dongya; Xia, Shuangzhi
2018-06-01
Large-scale distributed space surveillance radar is very important ground-based equipment for maintaining a complete catalogue of Low Earth Orbit (LEO) space debris. However, because the sites of the distributed radar system are separated by thousands of kilometers, optimally aligning the Transmitting/Receiving (T/R) beams over a vast region using narrow beams poses a special and considerable technical challenge in the space surveillance area. Based on the common coordinate transformation model and the radar beam space model, we present a two-dimensional projection algorithm for the T/R beams using their direction angles, which can visually describe and assess the beam alignment performance. Subsequently, optimal mathematical models for the orientation angle of the antenna array, the site location and the T/R beam coverage are constructed, and the beam alignment parameters are precisely solved. Finally, we conducted optimal beam alignment experiments based on the site parameters of the Air Force Space Surveillance System (AFSSS). The simulation results demonstrate the correctness and effectiveness of our novel method, which can significantly support the construction of LEO space debris surveillance equipment.
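As a toy version of the direction-angle view of beam alignment described above (our own sketch; the pointing values and beamwidth are illustrative assumptions, not figures from the paper), each beam axis can be mapped to a unit vector in local East-North-Up coordinates and T/R alignment checked against an assumed beamwidth:

```python
import numpy as np

# Hypothetical sketch of checking T/R beam alignment via direction angles
# (azimuth/elevation); all numbers below are assumptions for illustration.

def enu_unit_vector(az_deg, el_deg):
    az, el = np.radians(az_deg), np.radians(el_deg)
    return np.array([np.cos(el) * np.sin(az),   # East
                     np.cos(el) * np.cos(az),   # North
                     np.sin(el)])               # Up

def angular_separation_deg(v1, v2):
    return np.degrees(np.arccos(np.clip(np.dot(v1, v2), -1.0, 1.0)))

tx = enu_unit_vector(az_deg=120.0, el_deg=45.0)   # transmit beam axis
rx = enu_unit_vector(az_deg=120.4, el_deg=45.3)   # receive beam axis
sep = angular_separation_deg(tx, rx)
beamwidth_deg = 2.0                               # assumed 3 dB beamwidth
print(f"separation = {sep:.3f} deg, aligned: {sep < beamwidth_deg / 2}")
```

In the paper's setting the interesting part is optimizing site and array orientation so this separation stays small over the whole coverage region; the snippet only shows the per-pointing check.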
Desdemona and a ticket to space; training for space flight in a 3g motion simulator
Wouters, M.
2014-01-01
On October 5, 2013, Marijn Wouters and two other contestants of a nation-wide competition ‘Nederland Innoveert’ underwent a space training exercise. One by one, the trainees were pushed to their limits in the Desdemona motion simulator, an experience that mimicked the Space Expedition Corporation
A future large-aperture UVOIR space observatory: reference designs
Rioux, Norman; Thronson, Harley; Feinberg, Lee; Stahl, H. Philip; Redding, Dave; Jones, Andrew; Sturm, James; Collins, Christine; Liu, Alice
2015-09-01
Our joint NASA GSFC/JPL/MSFC/STScI study team has used community-provided science goals to derive mission needs, requirements, and candidate mission architectures for a future large-aperture, non-cryogenic UVOIR space observatory. We describe the feasibility assessment of system thermal and dynamic stability for supporting coronagraphy. The observatory is in a Sun-Earth L2 orbit providing a stable thermal environment and excellent field of regard. Reference designs include a 36-segment 9.2 m aperture telescope that stows within a five meter diameter launch vehicle fairing. Performance needs developed under the study are traceable to a variety of reference designs including options for a monolithic primary mirror.
Growth Chambers on the International Space Station for Large Plants
Massa, Gioia D.; Wheeler, Raymond M.; Morrow, Robert C.; Levine, Howard G.
2016-01-01
The International Space Station (ISS) now has platforms for conducting research on horticultural plant species under LED (Light Emitting Diode) lighting, and those capabilities continue to expand. The Veggie vegetable production system was deployed to the ISS as an applied research platform for food production in space. Veggie is capable of growing a wide array of horticultural crops. It was designed for low power usage, low launch mass and stowage volume, and minimal crew time requirements. The Veggie flight hardware consists of a light cap containing red (630 nanometers), blue (455 nanometers) and green (530 nanometers) LEDs. Interfacing with the light cap is an extendable bellows/baseplate for enclosing the plant canopy. A second large plant growth chamber, the Advanced Plant Habitat (APH), will fly to the ISS in 2017. APH will be a fully controllable environment for high-quality plant physiological research. APH will control light (quality, level, and timing), temperature, CO2, relative humidity, and irrigation, while scrubbing any cabin or plant-derived ethylene and other volatile organic compounds. Additional capabilities include sensing of leaf temperature and root zone moisture, root zone temperature, and oxygen concentration. The light cap will have red (630 nm), blue (450 nm), green (525 nm), far red (730 nm) and broad spectrum white LEDs (4100K). There will be several internal cameras (visible and IR) to monitor and record plant growth and operations. Veggie and APH are available for research proposals.
Advanced Mirror Technology Development for Very Large Space Telescopes
Stahl, H. P.
2014-01-01
Advanced Mirror Technology Development (AMTD) is a NASA Strategic Astrophysics Technology project to mature to TRL-6 the critical technologies needed to produce 4-m or larger flight-qualified UVOIR mirrors by 2018 so that a viable mission can be considered by the 2020 Decadal Review. The developed mirror technology must enable missions capable of both general astrophysics and ultra-high contrast observations of exoplanets. Just as JWST's architecture was driven by its launch vehicle, a future UVOIR mission's architecture (monolithic, segmented or interferometric) will depend on the capacities of future launch vehicles (and budget). Since we cannot predict the future, we must prepare for all potential futures. Therefore, to provide the science community with options, we are pursuing multiple technology paths. AMTD uses a science-driven systems engineering approach, guided by a Science Advisory Team and a Systems Engineering Team. We derived engineering specifications for potential future monolithic or segmented space telescopes based on science needs and implementation constraints. And we are maturing six inter-linked critical technologies to enable a potential future large aperture UVOIR space telescope: 1) Large-Aperture, Low Areal Density, High Stiffness Mirrors, 2) Support Systems, 3) Mid/High Spatial Frequency Figure Error, 4) Segment Edges, 5) Segment-to-Segment Gap Phasing, and 6) Integrated Model Validation. We are maturing all six technologies simultaneously because all are required to make a primary mirror assembly (PMA); and it is the PMA's on-orbit performance which determines science return. PMA stiffness depends on substrate and support stiffness. The ability to cost-effectively eliminate mid/high spatial figure errors and polish edges depends on substrate stiffness. On-orbit thermal and mechanical performance depends on substrate stiffness, the coefficient of thermal expansion (CTE) and thermal mass. And segment-to-segment phasing depends on substrate and structure stiffness.
Large Scale System Safety Integration for Human Rated Space Vehicles
Massie, Michael J.
2005-12-01
Since the 1960s man has searched for ways to establish a human presence in space. Unfortunately, the development and operation of human spaceflight vehicles carry significant safety risks that are not always well understood. As a result, the countries with human space programs have felt the pain of loss of lives in the attempt to develop human space travel systems. Integrated System Safety is a process developed through years of experience (since before Apollo and Soyuz) as a way to assess the risks involved in space travel and prevent such losses. The intent of Integrated System Safety is to look at an entire program and put together all the pieces in such a way that the risks can be identified, understood and dispositioned by program management. This process has many inherent challenges, and they need to be explored, understood and addressed. In order to prepare truly integrated analyses, safety professionals must gain a level of technical understanding of all of the project's pieces and how they interact. Next, they must find a way to present the analysis so the customer can understand the risks and make decisions about managing them. However, every organization in a large-scale project can have different ideas about what is or is not a hazard, what is or is not an appropriate hazard control, and what is or is not adequate hazard control verification. NASA provides some direction on these topics, but interpretations of those instructions can vary widely. Even more challenging is the fact that every individual and organization involved in a project has a different level of risk tolerance. When the discrete hazard controls of the contracts and agreements cannot be met, additional risk must be accepted. However, when one has left the arena of compliance with the known rules, there can no longer be specific ground rules on which to base a decision as to what is acceptable and what is not. The integrator must find common ground between all parties to achieve
Monte Carlo simulation of continuous-space crystal growth
International Nuclear Information System (INIS)
Dodson, B.W.; Taylor, P.A.
1986-01-01
We describe a method, based on Monte Carlo techniques, of simulating the atomic growth of crystals without the discrete lattice space assumed by conventional Monte Carlo growth simulations. Since no lattice space is assumed, problems involving epitaxial growth, heteroepitaxy, phonon-driven mechanisms, surface reconstruction, and many other phenomena incompatible with the lattice-space approximation can be studied. Also, use of the Monte Carlo method circumvents to some extent the extreme limitations on simulated timescale inherent in crystal-growth techniques which might be proposed using molecular dynamics. The implementation of the new method is illustrated by studying the growth of strained-layer superlattice (SLS) interfaces in two-dimensional Lennard-Jones atomic systems. Despite the extreme simplicity of such systems, the qualitative features of SLS growth seen here are similar to those observed experimentally in real semiconductor systems
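A minimal off-lattice Monte Carlo sketch in the same spirit as the method described above (our own toy parameters, not the authors' implementation): atoms in a 2-D Lennard-Jones system are displaced by continuous random moves accepted with the Metropolis rule, with no lattice sites assumed.

```python
import numpy as np

rng = np.random.default_rng(0)

def lj_energy(pos):
    """Total 2-D Lennard-Jones energy (epsilon = sigma = 1)."""
    d = pos[:, None, :] - pos[None, :, :]
    r2 = (d ** 2).sum(axis=-1)
    iu = np.triu_indices(len(pos), k=1)      # each pair counted once
    inv_r6 = (1.0 / r2[iu]) ** 3
    return float(np.sum(4.0 * (inv_r6 ** 2 - inv_r6)))

def metropolis_relax(pos, beta=5.0, steps=2000, dmax=0.1):
    """Continuous-space Metropolis moves: no discrete lattice is assumed."""
    E = lj_energy(pos)
    for _ in range(steps):
        i = rng.integers(len(pos))
        trial = pos.copy()
        trial[i] += rng.uniform(-dmax, dmax, size=2)   # continuous move
        Et = lj_energy(trial)
        if Et < E or rng.random() < np.exp(-beta * (Et - E)):
            pos, E = trial, Et
    return pos, E

# relax a slightly perturbed four-atom cluster
pos0 = np.array([[0.0, 0.0], [1.1, 0.0], [0.55, 0.95], [1.65, 0.95]])
pos0 += rng.normal(0.0, 0.05, pos0.shape)
pos, E = metropolis_relax(pos0.copy())
print(f"E0 = {lj_energy(pos0):.3f} -> E = {E:.3f}")
```

Because positions are continuous, phenomena such as strain relaxation at a heteroepitaxial interface can emerge, which a lattice-gas Monte Carlo model excludes by construction.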
Simulation of space charge effects in a synchrotron
International Nuclear Information System (INIS)
Machida, Shinji; Ikegami, Masanori
1998-01-01
We have studied space charge effects in a synchrotron with multi-particle tracking in 2-D and 3-D configuration space (4-D and 6-D phase space, respectively). First, we describe the modelling of space charge fields in the simulation and the tracking procedure. Several ways of presenting the tracking results are also mentioned. Secondly, as a demonstration of the simulation study, we discuss how the coherent modes of a beam play a major role in beam stability and the intensity limit: the incoherent tune in a resonance condition should be replaced by the coherent tune. Finally, we consider the coherent motion of a beam core as a driving force of halo formation. The mechanism is familiar in linacs, and we apply it to a synchrotron.
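For scale, the incoherent (direct) space-charge tune shift that such tracking resolves can be estimated with a textbook formula; a back-of-envelope sketch under assumed beam parameters (ours, not the paper's):

```python
import math

# Rough incoherent space-charge tune shift for a round, unbunched proton beam
# (standard estimate; all numbers below are illustrative assumptions):
#   dQ = - N * r0 / (2*pi * eps_n * beta * gamma**2)

r0 = 1.535e-18          # classical proton radius [m]
m_p = 0.938272          # proton rest energy [GeV]

def tune_shift(N, eps_n, Ek):
    gamma = 1.0 + Ek / m_p                       # Ek: kinetic energy [GeV]
    beta = math.sqrt(1.0 - 1.0 / gamma ** 2)
    return -N * r0 / (2.0 * math.pi * eps_n * beta * gamma ** 2)

dQ = tune_shift(N=1e11, eps_n=2.5e-6, Ek=1.0)    # assumed beam parameters
print(f"incoherent tune shift dQ = {dQ:.4f}")
```

The abstract's point is that for coherent beam stability this incoherent shift is not the right quantity in the resonance condition; the coherent tune, obtained from the multi-particle simulation, must be used instead.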
Large Scale Simulations of the Euler Equations on GPU Clusters
Liebmann, Manfred; Douglas, Craig C.; Haase, Gundolf; Horvá th, Zoltá n
2010-01-01
The paper investigates the scalability of a parallel Euler solver, using the Vijayasundaram method, on a GPU cluster with 32 Nvidia Geforce GTX 295 boards. The aim of this research is to enable large scale fluid dynamics simulations with up to one
Large Eddy Simulation of Sydney Swirl Non-Reaction Jets
DEFF Research Database (Denmark)
Yang, Yang; Kær, Søren Knudsen; Yin, Chungen
The Sydney swirl burner non-reaction case was studied using large eddy simulation. The two-point correlation method was introduced and used to estimate grid resolution. Energy spectra and instantaneous pressure and velocity plots were used to identify features in the flow field. By using these methods......, vortex breakdown and precessing vortex core are identified and different flow zones are shown....
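The two-point correlation check for grid resolution can be sketched as follows (a 1-D synthetic illustration under our own assumptions, not the study's actual flow fields): the integral length scale obtained from the autocorrelation of velocity fluctuations should span many grid cells.

```python
import numpy as np

# Estimate the integral length scale from a two-point (auto)correlation;
# a common LES rule of thumb asks for O(10) or more cells per integral scale.

def integral_length_scale(u, dx):
    up = u - u.mean()                    # velocity fluctuations
    n = len(up)
    R = np.correlate(up, up, mode='full')[n - 1:] / (up.var() * n)
    # integrate the correlation up to its first zero crossing
    zero = np.argmax(R < 0) if np.any(R < 0) else len(R)
    return R[:zero].sum() * dx

dx = 0.01
x = np.arange(0.0, 10.0, dx)
# synthetic "velocity": white noise filtered with an exponential kernel
rng = np.random.default_rng(1)
u = np.convolve(rng.normal(size=x.size), np.exp(-x[:100] / 0.5), mode='same')
L = integral_length_scale(u, dx)
print(f"L = {L:.3f} -> about {L / dx:.0f} cells per integral scale")
```

In an actual LES the correlation would be taken between point pairs across the grid in each direction, but the resolution criterion is read off the same ratio L/dx.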
Large interface simulation in an averaged two-fluid code
International Nuclear Information System (INIS)
Henriques, A.
2006-01-01
Different ranges of sizes of interfaces and eddies are involved in multiphase flow phenomena. Classical formalisms focus on a specific range of size. This study presents a Large Interface Simulation (LIS) two-fluid compressible formalism taking into account different sizes of interfaces. As in single-phase Large Eddy Simulation, a filtering process is used to separate Large Interface (LI) simulation from Small Interface (SI) modelization. The LI surface tension force is modelled by adapting the well-known CSF method. The modelling of SI transfer terms is done by calling on classical closure laws of the averaged approach. To simulate LI transfer terms accurately, we develop a LI recognition algorithm based on a dimensionless criterion. The LIS model is applied in a classical averaged two-fluid code. The LI transfer terms modelling and the LI recognition are validated on analytical and experimental tests. A square base basin excited by a horizontal periodic movement is studied with the LIS model. The capability of the model is also shown on the case of the break-up of a bubble in a turbulent liquid flow. The break-up of a large bubble at a grid impact exhibits regime transitions between two different scales of interface, from LI to SI and from PI to LI. (author) [fr]
Large-eddy simulation of highly underexpanded transient gas jets
Vuorinen, V.; Yu, J.; Tirunagari, S.; Kaario, O.; Larmi, M.; Duwig, C.; Boersma, B.J.
2013-01-01
Large-eddy simulations (LES) based on scale-selective implicit filtering are carried out in order to study the effect of nozzle pressure ratios on the characteristics of highly underexpanded jets. Pressure ratios ranging from 4.5 to 8.5 with Reynolds numbers of the order 75,000–140,000 are
Large signal simulation of photonic crystal Fano laser
DEFF Research Database (Denmark)
Zali, Aref Rasoulzadeh; Yu, Yi; Moravvej-Farshi, Mohammad Kazem
2017-01-01
be modulated at frequencies exceeding 1 THz which is much higher than its corresponding relaxation oscillation frequency. Large signal simulation of the Fano laser is also investigated based on pseudorandom bit sequence at 0.5 Tbit/s. It shows eye patterns are open at such high modulation frequency, verifying...
Large eddy simulations of an airfoil in turbulent inflow
DEFF Research Database (Denmark)
Gilling, Lasse; Sørensen, Niels N.
2008-01-01
Wind turbines operate in the turbulent boundary layer of the atmosphere and due to the rotational sampling effect the blades experience a high level of turbulence [1]. In this project the effect of turbulence is investigated by large eddy simulations of the turbulent flow past a NACA 0015 airfoil...
Planetary and Space Simulation Facilities PSI at DLR for Astrobiology
Rabbow, E.; Rettberg, P.; Panitz, C.; Reitz, G.
2008-09-01
Ground based experiments, conducted in the controlled planetary and space environment simulation facilities PSI at DLR, are used to investigate astrobiological questions and to complement the corresponding experiments in LEO, for example on free flying satellites or on space exposure platforms on the ISS. In-orbit exposure facilities can only accommodate a limited number of experiments for exposure to space parameters like high vacuum, intense radiation of galactic and solar origin and microgravity, sometimes also technically adapted to simulate extraterrestrial planetary conditions like those on Mars. Ground based experiments in carefully equipped and monitored simulation facilities allow the investigation of the effects of simulated single environmental parameters and selected combinations on a much wider variety of samples. In PSI at DLR, international science consortia performed astrobiological investigations and space experiment preparations, exposing organic compounds and a wide range of microorganisms, from bacterial spores to complex microbial communities, lichens and even animals like tardigrades, to simulated planetary or space environment parameters in pursuit of exobiological questions on resistance to extreme environments and the origin and distribution of life. The Planetary and Space Simulation Facilities PSI of the Institute of Aerospace Medicine at DLR in Köln, Germany, provide, in 9 modular facilities of varying sizes, high vacuum of controlled residual composition, ionizing radiation from an X-ray tube, polychromatic UV radiation in the range of 170-400 nm, VIS and IR or individual monochromatic UV wavelengths, and temperature regulation from -20°C to +80°C at the sample site, individually or in selected combinations; they are presented here together with selected experiments performed within them.
Real-time simulation of large-scale floods
Liu, Q.; Qin, Y.; Li, G. D.; Liu, Z.; Cheng, D. J.; Zhao, Y. H.
2016-08-01
Given the complexity of the real-time water situation, the real-time simulation of large-scale floods is very important for flood prevention practice. Model robustness and running efficiency are two critical factors in successful real-time flood simulation. This paper proposes a robust, two-dimensional, shallow water model based on the unstructured Godunov-type finite volume method. A robust wet/dry front method is used to enhance numerical stability. An adaptive method is proposed to improve running efficiency. The proposed model is used for large-scale flood simulation on real topography. Results compared to those of MIKE21 show the strong performance of the proposed model.
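A 1-D toy version of such a scheme can make the ingredients concrete (our own sketch; the actual model is 2-D and unstructured): a Godunov-type finite-volume update with a Rusanov flux and a simple wet/dry depth threshold for robustness.

```python
import numpy as np

# 1-D shallow water equations, Rusanov (local Lax-Friedrichs) flux, with a
# wet/dry depth threshold. Illustrative assumptions only, not the paper's code.
g, DRY = 9.81, 1e-6

def step(h, hu, dx, dt):
    u = np.where(h > DRY, hu / np.maximum(h, DRY), 0.0)
    c = np.abs(u) + np.sqrt(g * np.maximum(h, 0.0))     # local wave speed
    U = np.array([h, hu])
    F = np.array([hu, hu * u + 0.5 * g * h ** 2])       # physical flux
    a = np.maximum(c[:-1], c[1:])
    Fi = 0.5 * (F[:, :-1] + F[:, 1:]) - 0.5 * a * (U[:, 1:] - U[:, :-1])
    h[1:-1] -= dt / dx * (Fi[0, 1:] - Fi[0, :-1])
    hu[1:-1] -= dt / dx * (Fi[1, 1:] - Fi[1, :-1])
    dry = h < DRY                                       # wet/dry treatment:
    h[dry], hu[dry] = 0.0, 0.0                          # dry cells carry nothing
    return h, hu

# dam break onto a dry bed
n, dx, dt = 200, 0.5, 0.02
h = np.where(np.arange(n) < n // 2, 2.0, 0.0)
hu = np.zeros(n)
for _ in range(100):
    h, hu = step(h, hu, dx, dt)
print(f"front has advanced to x = {dx * np.max(np.nonzero(h > DRY)):.1f} m")
```

The wet/dry guard (capping the depth used in the velocity division and zeroing momentum in dry cells) is the 1-D analogue of the robust front treatment the abstract refers to; the adaptive efficiency measures are omitted here.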
How to simulate global cosmic strings with large string tension
Energy Technology Data Exchange (ETDEWEB)
Klaer, Vincent B.; Moore, Guy D., E-mail: vklaer@theorie.ikp.physik.tu-darmstadt.de, E-mail: guy.moore@physik.tu-darmstadt.de [Institut für Kernphysik, Technische Universität Darmstadt, Schlossgartenstraße 2, Darmstadt, D-64289 Germany (Germany)
2017-10-01
Global string networks may be relevant in axion production in the early Universe, as well as other cosmological scenarios. Such networks contain a large hierarchy of scales between the string core scale and the Hubble scale, ln( f {sub a} / H ) ∼ 70, which influences the network dynamics by giving the strings large tensions T ≅ π f {sub a} {sup 2} ln( f {sub a} / H ). We present a new numerical approach to simulate such global string networks, capturing the tension without an exponentially large lattice.
Large-signal, dynamic simulation of the slowpoke-3 nuclear heating reactor
International Nuclear Information System (INIS)
Tseng, C.M.; Lepp, R.M.
1983-07-01
A 2 MWt nuclear reactor, called SLOWPOKE-3, is being developed at the Chalk River Nuclear Laboratories (CRNL). This reactor, which is cooled by natural circulation, is designed to produce hot water for commercial space heating and perhaps generate some electricity in remote locations where the costs of alternate forms of energy are high. A large-signal, dynamic simulation of this reactor, without closed-loop control, was developed and implemented on a hybrid computer, using the basic equations of conservation of mass, energy and momentum. The natural circulation of downcomer flow in the pool was simulated using a special filter, capable of modelling various flow conditions. The simulation was then used to study the intermediate and long-term transient response of SLOWPOKE-3 to large disturbances, such as loss of heat sink, loss of regulation, daily load following, and overcooling of the reactor coolant. Results of the simulation show that none of these disturbances produce hazardous transients
Planetary and Space Simulation Facilities (PSI) at DLR
Panitz, Corinna; Rabbow, E.; Rettberg, P.; Kloss, M.; Reitz, G.; Horneck, G.
2010-05-01
The Planetary and Space Simulation facilities at DLR offer the possibility to expose biological and physical samples, individually or integrated into space hardware, to defined and controlled space conditions like ultra high vacuum, low temperature and extraterrestrial UV radiation. An x-ray facility is available for simulating the ionizing radiation component. All of the simulation facilities are required for the preparation of space experiments: - for testing of the newly developed space hardware - for investigating the effect of different space parameters on biological systems as a preparation for the flight experiment - for performing the 'Experiment Verification Tests' (EVT) for the specification of the test parameters - and 'Experiment Sequence Tests' (EST) by simulating sample assemblies, exposure to selected space parameters, and sample disassembly. To test the compatibility of the different biological and chemical systems and their adaptation to the opportunities and constraints of space conditions, a profound ground support program has been developed, among many others for the ESA facilities of the ongoing missions EXPOSE-R and EXPOSE-E on board the International Space Station ISS. Several experiment verification tests (EVTs) and an experiment sequence test (EST) have been conducted in the carefully equipped and monitored planetary and space simulation facilities PSI of the Institute of Aerospace Medicine at DLR in Cologne, Germany. These ground based pre-flight studies allowed the investigation of a much wider variety of samples and the selection of the most promising organisms for the flight experiment. EXPOSE-E was attached to the outer balcony of the European Columbus module of the ISS in February 2008 and stayed for 1.5 years in space; EXPOSE-R was attached to the Russian Svezda module of the ISS in spring 2009 and the mission duration will be approx. 1.5 years. The missions will give new insights into the survivability of terrestrial
Large Eddy Simulation of Turbulent Flows in Wind Energy
DEFF Research Database (Denmark)
Chivaee, Hamid Sarlak
This research is devoted to the Large Eddy Simulation (LES), and, to a lesser extent, wind tunnel measurements of turbulent flows in wind energy. It starts with an introduction to the LES technique associated with the solution of the incompressible Navier-Stokes equations, discretized using a finite......, should the mesh resolution, numerical discretization scheme, time averaging period, and domain size be chosen wisely. A thorough investigation of the wind turbine wake interactions is also conducted and the simulations are validated against available experimental data from external sources. The effect...... Reynolds numbers, and thereafter, the fully-developed infinite wind farm boundary layer simulations are performed. Sources of inaccuracy in the simulations are investigated and it is found that high Reynolds number flows are more sensitive to the choice of the SGS model than their low Reynolds number...
Large-eddy simulation of atmospheric flow over complex terrain
DEFF Research Database (Denmark)
Bechmann, Andreas
2007-01-01
The present report describes the development and validation of a turbulence model designed for atmospheric flows based on the concept of Large-Eddy Simulation (LES). The background for the work is the high Reynolds number k-ε model, which has been implemented on a finite-volume code...... turbulence model is able to handle both engineering and atmospheric flows and can be run in either RANS or LES mode. For LES simulations a time-dependent wind field that accurately represents the turbulent structures of a wind environment must be prescribed at the computational inlet. A method is implemented...... where the turbulent wind field from a separate LES simulation can be used as inflow. To avoid numerical dissipation of turbulence special care is paid to the numerical method, e.g. the turbulence model is calibrated with the specific numerical scheme used. This is done by simulating decaying isotropic...
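Neither of the two LES abstracts above spells out its subgrid-scale (SGS) closure in closed form, so as a hedged illustration only, the classical Smagorinsky model is a common baseline for the SGS eddy viscosity in atmospheric LES (the coefficient range quoted is a typical assumption, not a value taken from these reports):

```latex
\nu_t = (C_s \Delta)^2 \, |\bar{S}|, \qquad
|\bar{S}| = \sqrt{2\,\bar{S}_{ij}\bar{S}_{ij}}, \qquad
\bar{S}_{ij} = \tfrac{1}{2}\left(
  \frac{\partial \bar{u}_i}{\partial x_j}
+ \frac{\partial \bar{u}_j}{\partial x_i}\right)
```

Here Δ is the filter width and C_s is the Smagorinsky constant, typically taken in the range 0.1-0.2 and, as the second abstract notes, calibrated jointly with the numerical scheme.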
Li, Zuqun
2011-01-01
Modeling and simulation play a very important role in mission design. They not only reduce design cost, but also prepare astronauts for their mission tasks. The SISO Smackdown is a simulation event that promotes modeling and simulation in academia. The scenario of this year's Smackdown was to simulate a lunar base supply mission. The mission objective was to transfer Earth supply cargo to a lunar base supply depot and retrieve He-3 to take back to Earth. Federates for this scenario include the environment federate, Earth-Moon transfer vehicle, lunar shuttle, lunar rover, supply depot, mobile ISRU plant, exploratory hopper, and communication satellite. These federates were built by teams from around the world, including teams from MIT, JSC, the University of Alabama in Huntsville, the University of Bordeaux in France, and the University of Genoa in Italy. This paper focuses on the lunar shuttle federate, which was programmed by the USRP intern team from NASA JSC. The shuttle was responsible for providing transportation between lunar orbit and the lunar surface. The lunar shuttle federate was built using the NASA standard simulation package called Trick, and it was extended with HLA functions using TrickHLA. HLA functions of the lunar shuttle federate include sending and receiving interactions, publishing and subscribing attributes, and packing and unpacking fixed-record data. The dynamics model of the lunar shuttle had three degrees of freedom, and the state propagation obeyed two-body dynamics. The descending trajectory of the lunar shuttle was designed by first defining a unique descent orbit in 2D space, and then defining a unique orbit in 3D space under the assumption of a non-rotating Moon. Finally, this assumption was removed to define the initial position of the lunar shuttle so that it starts descending one second after it joins the execution. VPN software from SonicWall was used to connect federates with the RTI during testing
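The abstract states that the shuttle's state propagation obeyed two-body dynamics. The actual Trick models are not reproduced here; the integrator, step size, and orbit values below are illustrative assumptions, but a minimal two-body propagator can be sketched as:

```python
import numpy as np

MU_MOON = 4.9048695e12  # lunar gravitational parameter, m^3/s^2

def two_body_accel(r):
    """Point-mass gravity: a = -mu * r / |r|^3."""
    return -MU_MOON * r / np.linalg.norm(r) ** 3

def rk4_step(r, v, dt):
    """One classical RK4 step of the two-body equations of motion."""
    def deriv(state):
        rr, vv = state[:3], state[3:]
        return np.concatenate([vv, two_body_accel(rr)])
    y = np.concatenate([r, v])
    k1 = deriv(y)
    k2 = deriv(y + 0.5 * dt * k1)
    k3 = deriv(y + 0.5 * dt * k2)
    k4 = deriv(y + dt * k3)
    y = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y[:3], y[3:]

# Circular low lunar orbit (radius is an assumed example value):
# under pure two-body dynamics the orbital radius stays constant.
r = np.array([1.9e6, 0.0, 0.0])                      # m
v = np.array([0.0, np.sqrt(MU_MOON / 1.9e6), 0.0])   # circular speed
for _ in range(600):
    r, v = rk4_step(r, v, 1.0)                       # 600 s of flight
```

A descent trajectory like the one described would then be obtained by applying a retrograde burn (an impulsive velocity change) and propagating the resulting transfer orbit with the same integrator.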
Salama, Farid; Tan, Xiaofeng; Cami, Jan; Biennier, Ludovic; Remy, Jerome
2006-01-01
Polycyclic Aromatic Hydrocarbons (PAHs) are an important and ubiquitous component of carbon-bearing materials in space. A long-standing and major challenge for laboratory astrophysics has been to measure the spectra of large carbon molecules in laboratory environments that mimic (in a realistic way) the physical conditions that are associated with the interstellar emission and absorption regions [1]. This objective has been identified as one of the critical Laboratory Astrophysics objectives to optimize the data return from space missions [2]. An extensive laboratory program has been developed to assess the properties of PAHs in such environments and to describe how they influence the radiation and energy balance in space. We present and discuss the gas-phase electronic absorption spectra of neutral and ionized PAHs measured in the UV-Visible-NIR range in astrophysically relevant environments and discuss the implications for astrophysics [1]. The harsh physical conditions of the interstellar medium characterized by a low temperature, an absence of collisions and strong VUV radiation fields - have been simulated in the laboratory by associating a pulsed cavity ringdown spectrometer (CRDS) with a supersonic slit jet seeded with PAHs and an ionizing, penning-type, electronic discharge. We have measured for the {\\it first time} the spectra of a series of neutral [3,4] and ionized [5,6] interstellar PAHs analogs in the laboratory. An effort has also been attempted to quantify the mechanisms of ion and carbon nanoparticles production in the free jet expansion and to model our simulation of the diffuse interstellar medium in the laboratory [7]. These experiments provide {\\it unique} information on the spectra of free, large carbon-containing molecules and ions in the gas phase. We are now, for the first time, in the position to directly compare laboratory spectral data on free, cold, PAH ions and carbon nano-sized carbon particles with astronomical observations in the
Moehlmann, D.; Kochan, H.
1992-01-01
The Space Simulator of the German Aerospace Research Establishment at Cologne, formerly used for testing satellites, has been, since 1987, the central unit within the research sub-program 'Comet-Simulation' (KOSI). The KOSI team has investigated physical processes relevant to comets and their surfaces. As a byproduct we gained experience in sample handling under simulated space conditions. In broadening the scope of the research activities of the DLR Institute of Space Simulation, an extension to 'Laboratory-Planetology' is planned. Following the KOSI experiments, a Mars surface simulation with realistic minerals and surface soil in a suitable environment (temperature, pressure, and CO2 atmosphere) is foreseen as the next step. Here, our main interest is centered on the thermophysical properties of the Martian surface and on energy transport (and related gas transport) through the surface. These laboratory simulation activities can be related to space missions as typical pre-mission and during-the-mission support of experiment design and operations (simulation in parallel); post-mission experiments for confirmation and interpretation of results are of great value. The physical dimensions of the Space Simulator (a cylinder of about 2.5 m diameter and 5 m length) allow for testing and qualification of experimental hardware under realistic Martian conditions.
Large-Eddy Simulations of Flows in Complex Terrain
Kosovic, B.; Lundquist, K. A.
2011-12-01
Large-eddy simulation as a methodology for numerical simulation of turbulent flows was first developed to study turbulent flows in the atmosphere by Lilly (1967). The first LES were carried out by Deardorff (1970), who used these simulations to study atmospheric boundary layers. Ever since, LES has been extensively used to study canonical atmospheric boundary layers, in most cases flat-plate boundary layers under the assumption of horizontal homogeneity. Carefully designed LES of canonical convective, neutrally stratified, and more recently stably stratified atmospheric boundary layers have contributed significantly to a better understanding of these flows and their parameterizations in large-scale models. These simulations were often carried out using codes specifically designed and developed for large-eddy simulations of horizontally homogeneous flows with periodic lateral boundary conditions. Recent developments in multi-scale numerical simulations of atmospheric flows enable numerical weather prediction (NWP) codes such as ARPS (Chow and Street, 2009), COAMPS (Golaz et al., 2009), and the Weather Research and Forecasting (WRF) model to be used nearly seamlessly across a wide range of atmospheric scales, from synoptic down to turbulent scales in atmospheric boundary layers. Before we can carry out multi-scale simulations of atmospheric flows with confidence, NWP codes must be validated for accurate performance in simulating flows over complex or inhomogeneous terrain. We therefore carry out validation of WRF-LES for simulations of flows over complex terrain using data from the Askervein Hill (Taylor and Teunissen, 1985, 1987) and METCRAX (Whiteman et al., 2008) field experiments. WRF's nesting capability is employed with a one-way nested inner domain that includes the complex terrain representation, while the coarser outer nest is used to spin up fully developed atmospheric boundary layer turbulence and thus accurately represent inflow to the inner domain. LES of a
Heating of large format filters in sub-mm and fir space optics
Baccichet, N.; Savini, G.
2017-11-01
Most FIR and sub-mm space-borne observatories use polymer-based quasi-optical elements such as filters and lenses, due to their high transparency and low absorption in these wavelength ranges. Nevertheless, data from those missions have shown that thermal imbalances in the instrument (not caused by filters) can complicate the data analysis. Consequently, for future, higher-precision instrumentation, further investigation is required into any thermal imbalances embedded in such polymer-based filters. In particular, this paper studies the heating of polymers operating at cryogenic temperatures in space. This phenomenon is an important aspect of their functioning, since the transient emission of unwanted thermal radiation may affect the scientific measurements. To assess this effect, a computer model was developed for polypropylene-based filters and PTFE-based coatings. Specifically, a theoretical model of their thermal properties was created and used in a multi-physics simulation that accounts for conductive and radiative heating effects of large optical elements, the geometry of which was suggested by the large-format-array instruments designed for future space missions. It was found that in the simulated conditions the filter temperature exhibited a time-dependent behaviour modulated by a small-scale fluctuation. Moreover, it was noticed that thermalization was reached only when a low power input was present.
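As a rough illustration of the kind of transient behaviour described (this is not the paper's multi-physics model, and every parameter value here is an assumption), a single filter can be treated as one lumped thermal node with a conductive link to its mount and radiative exchange with its enclosure:

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W m^-2 K^-4

def step_temperature(T, dt, P_in=5e-3, C=2.0, G=1e-3,
                     T_mount=4.0, eps_A=1e-3, T_env=4.0):
    """One explicit-Euler step of the lumped-node energy balance
    C dT/dt = P_in - G*(T - T_mount) - eps*A*sigma*(T^4 - T_env^4).
    All parameter values (heat capacity C, conductance G, absorbed
    power P_in, emissivity-area product eps_A) are illustrative
    assumptions, not figures from the paper."""
    dTdt = (P_in
            - G * (T - T_mount)
            - eps_A * SIGMA * (T**4 - T_env**4)) / C
    return T + dt * dTdt

T = 4.0                           # start at the mount temperature, K
for _ in range(200_000):          # ~50 thermal time constants
    T = step_temperature(T, 0.5)  # march toward steady state
```

With these assumed numbers the radiative term is negligible at cryogenic temperatures, so the filter settles about P_in/G = 5 K above its mount; the transient approach to that level is the sort of time-dependent behaviour the paper characterizes.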
Stray light field dependence for large astronomical space telescopes
Lightsey, Paul A.; Bowers, Charles W.
2017-09-01
Future large astronomical telescopes in space will have architectures that expose the optics to large angular extents of the sky. Options for reducing stray light coming from the sky range from enclosing the telescope in a tubular baffle to having an open telescope structure with a large sunshield to eliminate solar illumination. These two options are considered for an on-axis telescope design to explore stray light considerations. A tubular baffle design will limit the sky exposure to the solid angle of the cone in front of the telescope, set by the aspect ratio of the baffle length to the Primary Mirror (PM) diameter. Illumination from this portion of the sky will be limited to the PM and structures internal to the tubular baffle. Alternatively, an open-structure design will allow a large portion of the sky to directly illuminate the PM and Secondary Mirror (SM), as well as illuminating sunshield and other structure surfaces which will reflect or scatter light onto the PM and SM. Portions of this illumination of the PM and SM will be scattered into the optical train as stray light. A Radiance Transfer Function (RTF) is calculated for the open architecture that gives the ratio of the stray light background radiance in the image contributed by a patch of sky having unit radiance. The full 4π steradians of sky is divided into a grid of patches, with the location of each patch defined in the telescope coordinate system. By rotating the celestial sky radiance maps into the telescope coordinate frame for a given pointing direction of the telescope, the RTF may be applied to the sky brightness and the results integrated to get the total stray light from the sky for that pointing direction. The RTF data generated for the open architecture may be analyzed as a function of the expanding cone angle about the pointing direction. In this manner, the open-architecture data may be used for direct comparison to a tubular baffle design parameterized by the allowed cone angle based on the
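The sky integration described above can be sketched as follows; the patch grid resolution and the constant RTF value used in the check are assumptions for illustration, not values from the paper:

```python
import math

def sky_patches(n_theta=90, n_phi=180):
    """Divide the full sphere into a grid of patches; return a list of
    ((theta, phi), solid_angle) pairs for the patch centres."""
    patches = []
    dth = math.pi / n_theta
    dph = 2.0 * math.pi / n_phi
    for i in range(n_theta):
        th = (i + 0.5) * dth
        domega = math.sin(th) * dth * dph   # patch solid angle, sr
        for j in range(n_phi):
            patches.append(((th, (j + 0.5) * dph), domega))
    return patches

def stray_light(rtf, radiance, patches):
    """Total stray background: sum over sky patches of
    RTF(direction) * sky radiance(direction) * patch solid angle."""
    return sum(rtf(d) * radiance(d) * dom for d, dom in patches)

patches = sky_patches()
total_solid_angle = sum(dom for _, dom in patches)
# Sanity check: a uniform sky with a constant RTF reduces the
# integral to RTF * L * 4*pi.
uniform = stray_light(lambda d: 1e-6, lambda d: 1.0, patches)
```

In the paper's scheme, `radiance` would come from a celestial sky map rotated into the telescope frame for the chosen pointing, and `rtf` from the open-architecture stray light analysis; both callables here are placeholders.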
Saving time in a space-efficient simulation algorithm
Markovski, J.
2011-01-01
We present an efficient algorithm for computing the simulation preorder and equivalence for labeled transition systems. The algorithm improves the time complexity of an existing space-efficient algorithm by employing a variant of the stability condition and exploiting properties of the
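For reference, the relation being computed can be fixed by the naive refinement scheme below; this is the textbook fixpoint definition, not the paper's space-efficient algorithm, and the example transition system is invented for illustration:

```python
def simulation_preorder(states, trans):
    """Compute the (strong) simulation preorder of a labeled transition
    system by fixpoint refinement: start from all pairs and repeatedly
    discard (p, q) unless every labeled move of p can be matched by q
    into a pair still in the relation. Naive but correct."""
    R = {(p, q) for p in states for q in states}
    changed = True
    while changed:
        changed = False
        for (p, q) in list(R):
            ok = all(
                any(a == b and (p2, q2) in R
                    for (b, q2) in trans.get(q, []))
                for (a, p2) in trans.get(p, [])
            )
            if not ok:
                R.discard((p, q))
                changed = True
    return R

# Example: u simulates s (u can match s's a-move), but not vice versa
# (s has no b-move to match u's).
states = {"s", "t", "u", "v", "w"}
trans = {"s": [("a", "t")], "u": [("a", "v"), ("b", "w")]}
sim = simulation_preorder(states, trans)
```

The improved algorithms in the literature avoid re-scanning the whole candidate relation on every pass, which is where the time-complexity gains come from.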
Large-eddy simulation of sand dune morphodynamics
Khosronejad, Ali; Sotiropoulos, Fotis; St. Anthony Falls Laboratory, University of Minnesota Team
2015-11-01
Sand dunes are natural features that form under the complex interaction between turbulent flow and bed morphodynamics. We employ a fully-coupled 3D numerical model (Khosronejad and Sotiropoulos, 2014, Journal of Fluid Mechanics, 753:150-216) to perform high-resolution large-eddy simulations of turbulence and bed morphodynamics in a laboratory-scale mobile-bed channel to investigate the initiation, evolution and quasi-equilibrium of sand dunes (Venditti and Church, 2005, J. Geophysical Research, 110:F01009). We employ a curvilinear immersed boundary method along with convection-diffusion and bed-morphodynamics modules to simulate the suspended-sediment and bed-load transport, respectively. The coupled simulations were carried out on a grid with more than 100 million grid nodes and simulated about 3 hours of physical time of dune evolution. The simulations provide the first complete description of sand dune formation and long-term evolution. The geometric characteristics of the simulated dunes are shown to be in excellent agreement with observed data obtained across a broad range of scales. This work was supported by NSF Grant EAR-0120914 (as part of the National Center for Earth-Surface Dynamics). Computational resources were provided by the University of Minnesota Supercomputing Institute.
Cryogenic techniques for large superconducting magnets in space
International Nuclear Information System (INIS)
Green, M.A.
1988-12-01
A large superconducting magnet is proposed for use in a particle astrophysics experiment, ASTROMAG, which is to be mounted on the United States Space Station. This experiment will have a two-coil superconducting magnet with coils 1.3 to 1.7 meters in diameter. The two-coil magnet will have zero net magnetic dipole moment; the field 15 meters from the magnet will approach the earth's field in low earth orbit. The issue of high-Tc superconductor is discussed in the paper, and the reasons for using conventional niobium-titanium superconductor cooled with superfluid helium are presented. Since the purpose of the magnet is to do particle astrophysics, the superconducting coils must be located close to the charged-particle detectors. The trade-off between the particle physics possible and the cryogenic insulation around the coils is discussed. As a result, the ASTROMAG magnet coils will be operated outside of the superfluid helium storage tank. The fountain-effect pumping system which will be used to cool the coils is described in the report. Two methods for extending the operating life of the superfluid helium dewar are discussed: operation with a third shield cooled to 90 K with a Stirling-cycle cryocooler, and a hybrid cryogenic system with three hydrogen-cooled shields and cryostat-support heat-intercept points. Both of these methods will extend the ASTROMAG cryogenic operating life from 2 years to almost 4 years. 14 refs., 8 figs., 4 tabs
Cryogenic techniques for large superconducting magnets in space
Green, M. A.
1989-01-01
A large superconducting magnet is proposed for use in a particle astrophysics experiment, ASTROMAG, which is to be mounted on the United States Space Station. This experiment will have a two-coil superconducting magnet with coils 1.3 to 1.7 meters in diameter. The two-coil magnet will have zero net magnetic dipole moment; the field 15 meters from the magnet will approach the earth's field in low earth orbit. The issue of high-Tc superconductor is discussed in the paper, and the reasons for using conventional niobium-titanium superconductor cooled with superfluid helium are presented. Since the purpose of the magnet is to do particle astrophysics, the superconducting coils must be located close to the charged-particle detectors. The trade-off between the particle physics possible and the cryogenic insulation around the coils is discussed. As a result, the ASTROMAG magnet coils will be operated outside of the superfluid helium storage tank. The fountain-effect pumping system which will be used to cool the coils is described in the report. Two methods for extending the operating life of the superfluid helium dewar are discussed: operation with a third shield cooled to 90 K with a Stirling-cycle cryocooler, and a hybrid cryogenic system with three hydrogen-cooled shields and cryostat-support heat-intercept points.
Extremophiles survival to simulated space conditions: an astrobiology model study.
Mastascusa, V; Romano, I; Di Donato, P; Poli, A; Della Corte, V; Rotundi, A; Bussoletti, E; Quarto, M; Pugliese, M; Nicolaus, B
2014-09-01
In this work we investigated the ability of four extremophilic microorganisms from the Archaea and Bacteria domains to withstand the space environment by exposing them to extreme conditions of temperature, UV radiation, and desiccation coupled with the low pressure generated in a Mars conditions simulator. All the investigated extremophilic strains (namely Sulfolobus solfataricus, Haloterrigena hispanica, Thermotoga neapolitana and Geobacillus thermantarcticus) showed good resistance to the simulated temperature variation in space; on the other hand, irradiation with UV at 254 nm only slightly affected the growth of H. hispanica, G. thermantarcticus and S. solfataricus; finally, exposure to simulated Mars conditions showed that H. hispanica and G. thermantarcticus were resistant to desiccation and low pressure.
Analyzing large data sets from XGC1 magnetic fusion simulations using Apache Spark
Energy Technology Data Exchange (ETDEWEB)
Churchill, R. Michael [Princeton Plasma Physics Lab. (PPPL), Princeton, NJ (United States)
2016-11-21
Apache Spark is explored as a tool for analyzing large data sets from the magnetic fusion simulation code XGC1. Implementation details of Apache Spark on the NERSC Edison supercomputer are discussed, including binary file reading and parameter setup. Here, an unsupervised machine learning algorithm, k-means clustering, is applied to XGC1 particle distribution function data, showing that highly turbulent spatial regions do not have common coherent structures, but rather broad, ring-like structures in velocity space.
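As a minimal stand-in for the k-means clustering run at scale in Spark, the core step can be sketched with plain Lloyd iterations on a small synthetic data set (the data, dimensionality, and deterministic farthest-point initialisation are assumptions for illustration, not details from the report):

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Plain Lloyd's algorithm: alternate nearest-centroid assignment
    and centroid update. Farthest-point initialisation keeps the demo
    deterministic; production runs (e.g. Spark MLlib) use smarter
    seeding and distribute the assignment step across workers."""
    idx = [0]
    for _ in range(k - 1):
        d = np.min(np.linalg.norm(X[:, None] - X[idx][None], axis=2),
                   axis=1)
        idx.append(int(d.argmax()))          # farthest from chosen set
    centroids = X[idx].astype(float)
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)            # nearest centroid
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return labels, centroids

# Two well-separated blobs standing in for distinct structures in
# velocity space; parameters are illustrative only.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.1, (100, 2)),
               rng.normal(5.0, 0.1, (100, 2))])
labels, centroids = kmeans(X, 2)
```

On the real XGC1 data each sample would be a flattened velocity-space distribution function rather than a 2-D point, but the algorithmic skeleton is the same.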
A Simulation Base Investigation of High Latency Space Systems Operations
Li, Zu Qun; Crues, Edwin Z.; Bielski, Paul; Moore, Michael
2017-01-01
NASA's human space program has developed considerable experience with near-Earth space operations. Although NASA has experience with deep space robotic missions, NASA has little substantive experience with human deep space operations. Even in the Apollo program, the missions lasted only a few weeks and the communication latencies were on the order of seconds. Human missions beyond the relatively close confines of the Earth-Moon system will involve mission durations measured in months and communication latencies measured in minutes. To minimize crew risk and to maximize mission success, NASA needs to develop a better understanding of the implications of these mission durations and communication latencies on vehicle design, mission design, and flight controller interaction with the crew. To begin to address these needs, NASA performed a study using a physics-based subsystem simulation to investigate the interactions between spacecraft crew and a ground-based mission control center for vehicle subsystem operations across long communication delays. The simulation, built with a subsystem modeling tool developed at NASA's Johnson Space Center, models the life support system of a Mars transit vehicle. The simulation contains models of the cabin atmosphere and pressure control system, electrical power system, drinking and waste water systems, internal and external thermal control systems, and crew metabolic functions. The simulation has three interfaces: 1) a real-time crew interface that can be used to monitor and control the vehicle subsystems; 2) a mission control center interface with data transport delays up to 15 minutes each way; 3) a real-time simulation test conductor interface that can be used to insert subsystem malfunctions and observe the interactions between the crew, ground, and simulated vehicle. The study was conducted during the 21st NASA Extreme Environment Mission Operations (NEEMO) mission, between July 18 and August 3, 2016. The NEEMO
Large eddy simulation of a wing-body junction flow
Ryu, Sungmin; Emory, Michael; Campos, Alejandro; Duraisamy, Karthik; Iaccarino, Gianluca
2014-11-01
We present numerical simulations of the wing-body junction flow experimentally investigated by Devenport & Simpson (1990). Wall-junction flows are common in engineering applications, but the relevant flow physics close to the corner region is not well understood. Moreover, the performance of turbulence models for the body-junction case is not well characterized. Motivated by these gaps, we have numerically investigated the case with Reynolds-averaged Navier-Stokes (RANS) and Large Eddy Simulation (LES) approaches. The Vreman model applied for the LES and the SST k-ω model for the RANS simulation are validated, focusing on the ability to predict turbulence statistics near the junction region. Moreover, a sensitivity study of the form of the Vreman model will also be presented. This work is funded under NASA Cooperative Agreement NNX11AI41A (Technical Monitor Dr. Stephen Woodruff)
Lightweight computational steering of very large scale molecular dynamics simulations
International Nuclear Information System (INIS)
Beazley, D.M.
1996-01-01
We present a computational steering approach for controlling, analyzing, and visualizing very large scale molecular dynamics simulations involving tens to hundreds of millions of atoms. Our approach relies on extensible scripting languages and an easy to use tool for building extensions and modules. The system is extremely easy to modify, works with existing C code, is memory efficient, and can be used from inexpensive workstations and networks. We demonstrate how we have used this system to manipulate data from production MD simulations involving as many as 104 million atoms running on the CM-5 and Cray T3D. We also show how this approach can be used to build systems that integrate common scripting languages (including Tcl/Tk, Perl, and Python), simulation code, user extensions, and commercial data analysis packages
Large Eddy Simulations of Severe Convection Induced Turbulence
Ahmad, Nash'at; Proctor, Fred
2011-01-01
Convective storms can pose a serious risk to aviation operations since they are often accompanied by turbulence, heavy rain, hail, icing, lightning, strong winds, and poor visibility. They can cause major delays in air traffic due to the re-routing of flights, and by disrupting operations at the airports in the vicinity of the storm system. In this study, the Terminal Area Simulation System is used to simulate five different convective events ranging from a mesoscale convective complex to isolated storms. The occurrence of convection induced turbulence is analyzed from these simulations. The validation of model results with the radar data and other observations is reported and an aircraft-centric turbulence hazard metric calculated for each case is discussed. The turbulence analysis showed that large pockets of significant turbulence hazard can be found in regions of low radar reflectivity. Moderate and severe turbulence was often found in building cumulus turrets and overshooting tops.
Large Eddy Simulation of Cryogenic Injection Processes at Supercritical Pressure
Oefelein, Joseph C.
2002-01-01
This paper highlights results from the first of a series of hierarchical simulations aimed at assessing the modeling requirements for application of the large eddy simulation technique to cryogenic injection and combustion processes in liquid rocket engines. The focus is on liquid-oxygen-hydrogen coaxial injectors at a condition where the liquid-oxygen is injected at a subcritical temperature into a supercritical environment. For this situation a diffusion dominated mode of combustion occurs in the presence of exceedingly large thermophysical property gradients. Though continuous, these gradients approach the behavior of a contact discontinuity. Significant real gas effects and transport anomalies coexist locally in colder regions of the flow, with ideal gas and transport characteristics occurring within the flame zone. The current focal point is on the interfacial region between the liquid-oxygen core and the coaxial hydrogen jet where the flame anchors itself.
Experimental simulation of microinteractions in large scale explosions
Energy Technology Data Exchange (ETDEWEB)
Chen, X.; Luo, R.; Yuen, W.W.; Theofanous, T.G. [California Univ., Santa Barbara, CA (United States). Center for Risk Studies and Safety
1998-01-01
This paper presents data and analysis of recent experiments conducted in the SIGMA-2000 facility to simulate microinteractions in large scale explosions. Specifically, the fragmentation behavior of a high-temperature molten steel drop under high-pressure (beyond critical) conditions is investigated. The current data demonstrate, for the first time, the effect of high pressure in suppressing the thermal effect of fragmentation under supercritical conditions. The results support the microinteractions idea, and the ESPROSE.m prediction of the fragmentation rate. (author)
Simulation requirements for the Large Deployable Reflector (LDR)
Soosaar, K.
1984-01-01
Simulation tools for the Large Deployable Reflector (LDR) are discussed. These tools are often transfer-function equations; however, transfer functions are inadequate to represent time-varying systems with multiple control systems of overlapping bandwidths characterized by multi-input, multi-output features. Frequency-domain approaches are useful design tools, but a full-up simulation is needed. Because a dedicated computer is needed for the high-frequency, multi-degree-of-freedom components encountered, non-real-time simulation is preferred. Large numerical analysis software programs are useful only to receive inputs and provide output to the next block, and should be kept out of the direct simulation loop. The following blocks make up the simulation. The thermal model block is a classical, non-steady-state heat transfer program. The quasistatic block deals with problems associated with rigid-body control of reflector segments. The steady-state block assembles data into equations of motion and dynamics. A differential ray trace is obtained to establish the change in wave aberrations. The observation scene is described. The focal-plane module converts the photon intensity impinging on it into electron streams or permanent film records.
Large-size space debris flyby in low earth orbits
Baranov, A. A.; Grishko, D. A.; Razoumny, Y. N.
2017-09-01
The analysis of the NORAD catalogue of space objects, executed with respect to the overall sizes of upper stages and last stages of carrier rockets, allows the classification of five groups of large-size space debris (LSSD). These groups are defined according to the proximity of the orbital inclinations of the involved objects. The orbits within a group have various values of deviations in the Right Ascension of the Ascending Node (RAAN). It is proposed to use the portrait of the evolution of the RAAN deviations to clarify the relative spatial distribution of the orbital planes in a group, with the RAAN deviations calculated with respect to the concrete precessing orbital plane of a concrete object. In the case of the first three groups (inclinations i = 71°, i = 74°, i = 81°), the straight lines of the relative RAAN deviations almost never intersect each other, so the simple, successive flyby of a group's elements is effective, but a significant total ΔV is required to form the drift orbits. In the case of the fifth group (Sun-synchronous orbits), these straight lines intersect each other chaotically many times due to the noticeable differences in the values of the semi-major axes and orbital inclinations. The existence of the intersections makes it possible to create a flyby sequence for an LSSD group in which the orbit of one LSSD object simultaneously serves as the drift orbit to attain another LSSD object. This flyby scheme, requiring less ΔV, was called "diagonal." The portrait of the evolution of the RAAN deviations built for the fourth group (studied in this paper) contains both types of lines, so a simultaneous combination of the diagonal and successive flyby schemes is possible. The total ΔV and temporal costs were calculated to cover all the elements of the 4th group. The article also presents results obtained for the flyby problem in the case of all five mentioned LSSD groups. General recommendations are given concerning the required reserve of total
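The differential nodal precession underlying the RAAN deviation portraits is the secular J2 drift of the ascending node. A sketch of the relative drift rate between two group members follows; the orbit values are illustrative, not taken from the catalogue analysis:

```python
import math

MU_E = 398600.4418   # Earth gravitational parameter, km^3/s^2
R_E = 6378.137       # Earth equatorial radius, km
J2 = 1.08262668e-3   # Earth oblateness coefficient

def raan_rate_deg_per_day(a_km, e, inc_deg):
    """Secular nodal regression due to J2:
       dOmega/dt = -1.5 * J2 * n * (Re / p)^2 * cos(i),
    with n the mean motion and p the semi-latus rectum."""
    n = math.sqrt(MU_E / a_km**3)            # mean motion, rad/s
    p = a_km * (1.0 - e**2)
    rate = -1.5 * J2 * n * (R_E / p)**2 * math.cos(math.radians(inc_deg))
    return math.degrees(rate) * 86400.0      # deg/day

# Two debris objects at i = 71 deg whose semi-major axes differ by
# 50 km (assumed example values) drift apart in RAAN at the
# difference of their nodal rates -- the slope of one line in the
# RAAN deviations' evolution portrait.
drift = (raan_rate_deg_per_day(7178.0, 0.0, 71.0)
         - raan_rate_deg_per_day(7228.0, 0.0, 71.0))
```

Because the rate scales with cos(i) and a^(-3.5), groups with nearly equal inclinations and semi-major axes produce the near-parallel lines described for the first three groups, while the spread in both parameters among Sun-synchronous objects produces the chaotic intersections of the fifth group.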
Towards Large Eddy Simulation of gas turbine compressors
McMullan, W. A.; Page, G. J.
2012-07-01
With increasing computing power, Large Eddy Simulation could be a useful simulation tool for gas turbine axial compressor design. This paper outlines a series of simulations performed on compressor geometries, ranging from a Controlled Diffusion Cascade stator blade to the periodic sector of a stage in a 3.5 stage axial compressor. The simulation results show that LES may offer advantages over traditional RANS methods when off-design conditions are considered - flow regimes where RANS models often fail to converge. The time-dependent nature of LES permits the resolution of transient flow structures, and can elucidate new mechanisms of vorticity generation on blade surfaces. It is shown that accurate LES is heavily reliant on both the near-wall mesh fidelity and the ability of the imposed inflow condition to recreate the conditions found in the reference experiment. For components embedded in a compressor this requires the generation of turbulence fluctuations at the inlet plane. A recycling method is developed that improves the quality of the flow in a single stage calculation of an axial compressor, and indicates that future developments in both the recycling technique and computing power will bring simulations of axial compressors within reach of industry in the coming years.
Large-scale particle simulations in a virtual-memory computer
International Nuclear Information System (INIS)
Gray, P.C.; Wagner, J.S.; Tajima, T.; Million, R.
1982-08-01
Virtual memory computers are capable of executing large-scale particle simulations even when the memory requirements exceed the computer core size. The required address space is automatically mapped onto slow disc memory by the operating system. When the simulation size is very large, frequent random accesses to slow memory occur during the charge accumulation and particle pushing processes. Accesses to slow memory significantly reduce the execution rate of the simulation. We demonstrate in this paper that, with the proper choice of sorting algorithm, a nominal amount of sorting to keep physically adjacent particles near particles with neighboring array indices can reduce random accesses to slow memory, increase the efficiency of the I/O system, and hence reduce the required computing time.
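The sorting idea can be illustrated as follows: assign each particle a linear cell index and reorder the particle arrays by that index, so that particles in the same grid cell are contiguous in memory and charge accumulation touches pages sequentially rather than randomly. A hedged sketch of the concept (not the paper's actual algorithm or data layout):

```python
import numpy as np

def sort_particles_by_cell(x, y, nx, ny, dx, dy):
    """Reorder particle coordinate arrays so that particles sharing a grid
    cell become contiguous, improving memory locality during deposition."""
    ix = np.clip((x / dx).astype(int), 0, nx - 1)
    iy = np.clip((y / dy).astype(int), 0, ny - 1)
    cell = iy * nx + ix                      # linear cell index
    order = np.argsort(cell, kind='stable')  # stable keeps in-cell order
    return x[order], y[order], order

# 1000 random particles on a 16x16 unit grid:
rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 1000)
y = rng.uniform(0.0, 1.0, 1000)
xs, ys, order = sort_particles_by_cell(x, y, 16, 16, 1 / 16, 1 / 16)
```

Because the sort only needs to be approximate to restore locality, it can be run infrequently ("a nominal amount of sorting"), amortizing its cost over many push/deposit steps.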
Magnetic Testing, and Modeling, Simulation and Analysis for Space Applications
Boghosian, Mary; Narvaez, Pablo; Herman, Ray
2012-01-01
The Aerospace Corporation (Aerospace) and Lockheed Martin Space Systems (LMSS) participated with the Jet Propulsion Laboratory (JPL) in the implementation of a magnetic cleanliness program for the NASA/JPL JUNO mission. The magnetic cleanliness program was applied from early flight system development up through system-level environmental testing. The JUNO magnetic cleanliness program required setting up a specialized magnetic test facility at LMSS for testing the flight system, and a testing program with a facility at JPL for testing system parts and subsystems. The magnetic modeling, simulation, and analysis capability was set up and performed by Aerospace to provide qualitative and quantitative magnetic assessments of magnetic parts, components, and subsystems prior to, or in lieu of, magnetic tests. Because of the sensitive nature of the fields-and-particles scientific measurements conducted by the JUNO space mission to Jupiter, the imposition of stringent magnetic control specifications required a magnetic control program to ensure that the spacecraft's science magnetometers and plasma wave search coil were not magnetically contaminated by flight system magnetic interference. With Aerospace's magnetic modeling, simulation, and analysis, JPL's system modeling and testing approach, and LMSS's test support, the project achieved a magnetically clean spacecraft cost-effectively. This paper presents lessons learned from the JUNO magnetic testing approach and from Aerospace's modeling, simulation, and analysis activities, which were used to solve problems such as remanent magnetization and the performance of hard and soft magnetic materials within the targeted space system under applied external magnetic fields.
Modeling extreme (Carrington-type) space weather events using three-dimensional MHD code simulations
Ngwira, C. M.; Pulkkinen, A. A.; Kuznetsova, M. M.; Glocer, A.
2013-12-01
There is growing concern over possible severe societal consequences of adverse space weather impacts on man-made technological infrastructure and systems. In the last two decades, significant progress has been made in the modeling of space weather events. Three-dimensional (3-D) global magnetohydrodynamic (MHD) models have been at the forefront of this transition and have played a critical role in advancing our understanding of space weather. However, modeling extreme space weather events is still a major challenge, even for existing global MHD models. In this study, we introduce a specially adapted University of Michigan 3-D global MHD model for simulating extreme space weather events with a ground footprint comparable to, or larger than, that of the Carrington superstorm. Results are presented for an initial simulation run with "very extreme" constructed/idealized solar wind boundary conditions driving the magnetosphere. In particular, we describe the reaction of the magnetosphere-ionosphere system and the associated ground-induced geoelectric field under such extreme driving conditions. We also discuss the results and what they might mean for the accuracy of the simulations. The model is further tested using input data for an observed space weather event to verify the MHD model's consistency and to draw guidance for future work. This extreme space weather MHD model is designed specifically for practical application to the modeling of extreme geomagnetically induced electric fields, which can drive large currents in earth conductors such as power transmission grids.
Simulated Space Environmental Effects on Thin Film Solar Array Components
Finckenor, Miria; Carr, John; SanSoucie, Michael; Boyd, Darren; Phillips, Brandon
2017-01-01
The Lightweight Integrated Solar Array and Transceiver (LISA-T) experiment consists of thin-film, low-mass, low-volume solar panels. Given the variety of thin solar cells and cover materials, and the lack of environmental protection typically afforded by thick coverglasses, a series of tests was conducted in Marshall Space Flight Center's Space Environmental Effects Facility to evaluate the performance of these materials. Candidate thin polymeric films and the nitinol wires used for deployment were also exposed. Simulated space environment exposures were selected based on SSP 30425 rev. B, "Space Station Program Natural Environment Definition for Design," or AIAA Standard S-111A-2014, "Qualification and Quality Requirements for Space Solar Cells." One set of candidate materials was exposed to 5 eV atomic oxygen and concurrent vacuum ultraviolet (VUV) radiation for low Earth orbit simulation. A second set was exposed to 1 MeV electrons. A third set was exposed to 50, 100, 500, and 700 keV protons, and a fourth set was exposed to more than 2,000 hours of near-ultraviolet (NUV) radiation. A final set was rapidly thermally cycled between -55 and +125 °C. This test series provides data supporting enhanced power generation, particularly for small satellites with reduced mass and volume resources. Performance versus mass and cost per watt is discussed.
Psychosocial value of space simulation for extended spaceflight
Kanas, N.
1997-01-01
There have been over 60 studies of Earth-bound activities that can be viewed as simulations of manned spaceflight. These analogs have involved Antarctic and Arctic expeditions, submarines and submersible simulators, land-based simulators, and hypodynamia environments. None of these analogs has accounted for all the variables related to extended spaceflight (e.g., microgravity, long duration, heterogeneous crews), and some of the simulation conditions have been found to be more representative of space conditions than others. A number of psychosocial factors have emerged from the simulation literature that correspond to important issues reported from space. Psychological factors include sleep disorders, alterations in time sense, transcendent experiences, demographic issues, career motivation, homesickness, and increased perceptual sensitivities. Psychiatric factors include anxiety, depression, psychosis, psychosomatic symptoms, emotional reactions related to mission stage, asthenia, and postflight personality and marital problems. Finally, interpersonal factors include tension resulting from crew heterogeneity, decreased cohesion over time, the need for privacy, and issues involving leadership roles and lines of authority. Since future space missions will usually involve heterogeneous crews working on complicated objectives over long periods of time, these features require further study. Socio-cultural factors affecting confined crews (e.g., language and dialect, cultural differences, gender biases) should be explored in order to minimize tension and sustain performance. Career motivation also needs to be examined for the purpose of improving crew cohesion and preventing subgrouping, scapegoating, and territorial behavior. Periods of monotony and reduced activity should be addressed in order to maintain morale, provide meaningful use of leisure time, and prevent negative consequences of low stimulation, such as asthenia and crew member withdrawal.
Parallel continuous simulated tempering and its applications in large-scale molecular simulations
Energy Technology Data Exchange (ETDEWEB)
Zang, Tianwu; Yu, Linglin; Zhang, Chong [Applied Physics Program and Department of Bioengineering, Rice University, Houston, Texas 77005 (United States); Ma, Jianpeng, E-mail: jpma@bcm.tmc.edu [Applied Physics Program and Department of Bioengineering, Rice University, Houston, Texas 77005 (United States); Verna and Marrs McLean Department of Biochemistry and Molecular Biology, Baylor College of Medicine, One Baylor Plaza, BCM-125, Houston, Texas 77030 (United States)
2014-07-28
In this paper, we introduce a parallel continuous simulated tempering (PCST) method for enhanced sampling in the study of large complex systems. It mainly inherits the continuous simulated tempering (CST) method of our previous studies [C. Zhang and J. Ma, J. Chem. Phys. 130, 194112 (2009); C. Zhang and J. Ma, J. Chem. Phys. 132, 244101 (2010)], while adopting the spirit of parallel tempering (PT), or the replica exchange method, by employing multiple copies with different temperature distributions. Unlike conventional PT methods, and despite the large stride of the total temperature range, the PCST method requires very few copies of the simulation, typically 2-3, yet is still capable of maintaining a high exchange rate between neighboring copies. Furthermore, in the PCST method the size of the system does not dramatically affect the number of copies needed, because the exchange rate is independent of the total potential energy, providing an enormous advantage over conventional PT methods in studying very large systems. The sampling efficiency of PCST was tested on the two-dimensional Ising model, a Lennard-Jones liquid, and an all-atom folding simulation of the small globular protein trp-cage in explicit solvent. The results demonstrate that the PCST method significantly improves sampling efficiency compared with other methods and is particularly effective in simulating systems with long relaxation or correlation times. We expect the PCST method to be a good alternative to parallel tempering methods in simulating large systems, such as phase transitions and the dynamics of macromolecules in explicit solvent.
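For contrast with PCST, the conventional parallel-tempering exchange referred to above accepts a configuration swap between two copies with the Metropolis criterion below; because the acceptance depends on the total potential energy difference, the exchange rate collapses for large systems unless many closely spaced temperatures are used. A minimal sketch of that conventional rule (not the PCST exchange weight):

```python
import math
import random

def swap_accept(beta_i, beta_j, e_i, e_j, rng=random.random):
    """Metropolis acceptance for exchanging configurations between two
    replicas at inverse temperatures beta_i and beta_j with potential
    energies e_i and e_j. Accept with probability min(1, exp(delta))."""
    delta = (beta_i - beta_j) * (e_j - e_i)
    return delta >= 0 or rng() < math.exp(delta)
```

For a fixed temperature gap, delta scales with the (extensive) energy difference between replicas, which is why conventional PT needs a number of copies growing with system size, whereas the abstract's energy-independent exchange rate avoids this.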
Interplanetary Transit Simulations Using the International Space Station
Charles, J. B.; Arya, Maneesh
2010-01-01
It has been suggested that the International Space Station (ISS) be utilized to simulate the transit portion of long-duration missions to Mars and near-Earth asteroids (NEA). The ISS offers a unique environment for such simulations, providing researchers with a high-fidelity platform to study, enhance, and validate technologies and countermeasures for these long-duration missions. From a space life sciences perspective, two major categories of human research activities have been identified that will harness the various capabilities of the ISS during the proposed simulations. The first category includes studies that require the use of the ISS, typically because of the need for prolonged weightlessness. The ISS is currently the only available platform capable of providing researchers with access to a weightless environment over an extended duration. In addition, the ISS offers high fidelity for other fundamental space environmental factors, such as isolation, distance, and accessibility. The second category includes studies that do not require use of the ISS in the strictest sense, but can exploit its use to maximize their scientific return more efficiently and productively than in ground-based simulations. In addition to conducting Mars and NEA simulations on the ISS, increasing the current increment duration on the ISS from 6 months to a longer duration will provide opportunities for enhanced and focused research relevant to long-duration Mars and NEA missions. Although it is currently believed that increasing the ISS crew increment duration to 9 or even 12 months will pose little additional risk to crewmembers, additional medical monitoring capabilities may be required beyond those currently used for the ISS operations. The use of the ISS to simulate aspects of Mars and NEA missions seems practical, and it is recommended that planning begin soon, in close consultation with all international partners.
Can we close large prosthetic space with orthodontics?
Mesko, Mauro Elias; Skupien, Jovito Adiel; Valentini, Fernanda; Pereira-Cenci, Tatiana
2013-01-01
For years, the standard treatment for replacing a missing tooth was a fixed dental prosthesis. Currently, implants are indicated to replace missing teeth because of their high clinical success and the advantage of not requiring preparation of the adjacent teeth. Another option for space closure is orthodontics combined with miniscrews for anchorage, which allows better control of the orthodontic biomechanics and, in particular, makes the closure of larger prosthetic spaces possible. This article describes two cases, with indications and a discussion of the advantages and disadvantages of using orthodontics for prosthetic space closure. The cases presented here show that it is possible to close a space when there are available teeth in the adjacent area. It can be concluded that, when a malocclusion is present, there will be a strong tendency to indicate space closure by orthodontic movement, as it preserves the natural teeth and seems a more physiological approach.
Large-eddy simulation of atmospheric flow over complex terrain
Energy Technology Data Exchange (ETDEWEB)
Bechmann, A.
2006-11-15
The present report describes the development and validation of a turbulence model designed for atmospheric flows based on the concept of Large-Eddy Simulation (LES). The starting point for the work is the high-Reynolds-number k-epsilon model, which has been implemented in a finite-volume code for the incompressible Reynolds-averaged Navier-Stokes (RANS) equations. The k-epsilon model is traditionally used for RANS computations, but is here extended to also enable LES. LES can provide detailed descriptions of a wide range of engineering flows at low Reynolds numbers. For atmospheric flows, however, the high Reynolds numbers and the rough surface of the earth pose difficulties normally not compatible with LES. Since these issues are most severe near the surface, they are addressed by handling the near-surface region with RANS and using LES only above this region. With this method, the developed turbulence model can handle both engineering and atmospheric flows and can be run in either RANS or LES mode. For LES, a time-dependent wind field that accurately represents the turbulent structures of a wind environment must be prescribed at the computational inlet. A method is implemented whereby the turbulent wind field from a separate LES can be used as inflow. To avoid numerical dissipation of turbulence, particular attention is paid to the numerical method; e.g., the turbulence model is calibrated against the specific numerical scheme used. This is done by simulating decaying isotropic and homogeneous turbulence. Three atmospheric test cases are investigated in order to validate the behavior of the presented turbulence model. Simulation of the neutral atmospheric boundary layer illustrates the turbulence model's ability to generate and maintain the turbulent structures responsible for boundary-layer transport processes. Velocity and turbulence profiles are in good agreement with measurements. Simulation of the flow over the Askervein hill is also presented.
Large Eddy Simulation of High-Speed, Premixed Ethylene Combustion
Ramesh, Kiran; Edwards, Jack R.; Chelliah, Harsha; Goyne, Christopher; McDaniel, James; Rockwell, Robert; Kirik, Justin; Cutler, Andrew; Danehy, Paul
2015-01-01
A large-eddy simulation / Reynolds-averaged Navier-Stokes (LES/RANS) methodology is used to simulate premixed ethylene-air combustion in a model scramjet designed for dual mode operation and equipped with a cavity for flameholding. A 22-species reduced mechanism for ethylene-air combustion is employed, and the calculations are performed on a mesh containing 93 million cells. Fuel plumes injected at the isolator entrance are processed by the isolator shock train, yielding a premixed fuel-air mixture at an equivalence ratio of 0.42 at the cavity entrance plane. A premixed flame is anchored within the cavity and propagates toward the opposite wall. Near complete combustion of ethylene is obtained. The combustor is highly dynamic, exhibiting a large-scale oscillation in global heat release and mass flow rate with a period of about 2.8 ms. Maximum heat release occurs when the flame front reaches its most downstream extent, as the flame surface area is larger. Minimum heat release is associated with flame propagation toward the cavity and occurs through a reduction in core flow velocity that is correlated with an upstream movement of the shock train. Reasonable agreement between simulation results and available wall pressure, particle image velocimetry, and OH-PLIF data is obtained, but it is not yet clear whether the system-level oscillations seen in the calculations are actually present in the experiment.
Simulations of Large-Area Electron Beam Diodes
Swanekamp, S. B.; Friedman, M.; Ludeking, L.; Smithe, D.; Obenschain, S. P.
1999-11-01
Large-area electron beam diodes are typically used to pump the amplifiers of KrF lasers. Simulations of large-area electron beam diodes using the particle-in-cell code MAGIC3D have shown the electron flow in the diode to be unstable. Since this instability can produce a non-uniform current and energy distribution in the hibachi structure and lasing medium, it can be detrimental to laser efficiency. These results are similar to simulations performed using the ISIS code (M.E. Jones and V.A. Thomas, Proceedings of the 8th International Conference on High-Power Particle Beams, 665 (1990)). We have identified the instability as the so-called "transit-time" instability (C.K. Birdsall and W.B. Bridges, Electrodynamics of Diode Regions (Academic Press, New York, 1966); T.M. Antonsen, W.H. Miner, E. Ott, and A.T. Drobot, Phys. Fluids 27, 1257 (1984)) and have investigated the role of the applied magnetic field and diode geometry. Experiments are underway to characterize the instability on the Nike KrF laser system and will be compared to simulation. Some possible ways to mitigate the instability will also be presented.
A Coordinated Initialization Process for the Distributed Space Exploration Simulation
Crues, Edwin Z.; Phillips, Robert G.; Dexter, Dan; Hasan, David
2007-01-01
A viewgraph presentation on the federate initialization process for the Distributed Space Exploration Simulation (DSES) is described. The topics include: 1) Background: DSES; 2) Simulation requirements; 3) Nine Step Initialization; 4) Step 1: Create the Federation; 5) Step 2: Publish and Subscribe; 6) Step 3: Create Object Instances; 7) Step 4: Confirm All Federates Have Joined; 8) Step 5: Achieve initialize Synchronization Point; 9) Step 6: Update Object Instances With Initial Data; 10) Step 7: Wait for Object Reflections; 11) Step 8: Set Up Time Management; 12) Step 9: Achieve startup Synchronization Point; and 13) Conclusions
Large breast compressions: Observations and evaluation of simulations
Energy Technology Data Exchange (ETDEWEB)
Tanner, Christine; White, Mark; Guarino, Salvatore; Hall-Craggs, Margaret A.; Douek, Michael; Hawkes, David J. [Centre of Medical Image Computing, UCL, London WC1E 6BT, United Kingdom and Computer Vision Laboratory, ETH Zuerich, 8092 Zuerich (Switzerland); Centre of Medical Image Computing, UCL, London WC1E 6BT (United Kingdom); Department of Surgery, UCL, London W1P 7LD (United Kingdom); Department of Imaging, UCL Hospital, London NW1 2BU (United Kingdom); Department of Surgery, UCL, London W1P 7LD (United Kingdom); Centre of Medical Image Computing, UCL, London WC1E 6BT (United Kingdom)
2011-02-15
Purpose: Several methods have been proposed to simulate large breast compressions such as those occurring during x-ray mammography. However, the evaluation of these methods against real data is rare. The aim of this study is to learn more about the deformation behavior of breasts and to assess a simulation method. Methods: Magnetic resonance (MR) images of 11 breasts before and after applying a relatively large in vivo compression in the medial direction were acquired. Nonrigid registration was employed to study the deformation behavior. Optimal material properties for finite element modeling were determined and their prediction performance was assessed. The realism of simulated compressions was evaluated by comparing the breast shapes on simulated and real mammograms. Results: Following image registration, 19 breast compressions from 8 women were studied. An anisotropic deformation behavior, with a reduced elongation in the anterior-posterior direction and an increased stretch in the inferior-superior direction, was observed. Using finite element simulations, the performance of isotropic and transverse isotropic material models to predict the displacement of internal landmarks was compared. Isotropic materials reduced the mean displacement error of the landmarks from 23.3 to 4.7 mm, on average, after optimizing material properties with respect to breast surface alignment and image similarity. Statistically significantly smaller errors were achieved with transverse isotropic materials (4.1 mm, P=0.0045). Homogeneous material models performed substantially worse (transverse isotropic: 5.5 mm; isotropic: 6.7 mm). Of the parameters varied, the amount of anisotropy had the greatest influence on the results. Optimal material properties varied less when grouped by patient rather than by compression magnitude (mean: 0.72 vs 1.44). Employing these optimal materials for simulating mammograms from ten MR breast images of a different cohort resulted in more realistic breast shapes.
Large Eddy Simulation of the spray formation in confinements
International Nuclear Information System (INIS)
Lampa, A.; Fritsching, U.
2013-01-01
Highlights: • Process stability of confined spray processes is affected by the geometric design of the spray confinement. • LES simulations of confined spray flow have been performed successfully. • Clustering of droplets is predicted in the simulations and validated against experiments. • Criteria for specific coherent gas flow patterns and droplet clustering behaviour are found. -- Abstract: The particle and powder properties produced in spray drying processes are influenced by various unsteady transport phenomena in the dispersed multiphase spray flow within a confined spray chamber. In this context, differently scaled spray structures in a confined spray environment have been analyzed in experiments and numerical simulations. The experimental investigations were carried out with Particle Image Velocimetry to determine the velocities of the gas and the discrete phase. Large-Eddy Simulations were set up to predict the transient behaviour of the spray process and have given more insight into the sensitivity of the spray flow structures to the spray chamber design.
Inviscid Wall-Modeled Large Eddy Simulations for Improved Efficiency
Aikens, Kurt; Craft, Kyle; Redman, Andrew
2015-11-01
The accuracy of an inviscid flow assumption for wall-modeled large eddy simulations (LES) is examined because of its ability to reduce simulation costs. This assumption is not generally applicable to wall-bounded flows due to the high velocity gradients found near walls. In wall-modeled LES, however, neither the viscous near-wall region nor the viscous length scales in the outer flow are resolved. Therefore, the viscous terms in the Navier-Stokes equations have little impact on the resolved flowfield. Zero-pressure-gradient flat plate boundary layer results are presented for both viscous and inviscid simulations using a wall model developed previously. The results are very similar and compare favorably to those from another wall model methodology and to experimental data. Furthermore, the inviscid assumption reduces simulation costs by about 25% and 39% for supersonic and subsonic flows, respectively. Future research directions are discussed, as are preliminary efforts to extend the wall model to include the effects of unresolved wall roughness. This work used the Extreme Science and Engineering Discovery Environment (XSEDE), which is supported by National Science Foundation grant number ACI-1053575. Computational resources on TACC Stampede were provided under XSEDE allocation ENG150001.
Large eddy simulation of a fuel rod subchannel
International Nuclear Information System (INIS)
Mayer, Gusztav
2007-01-01
In a VVER-440 reactor the measured outlet temperature is related to fuel limit parameters, and the power upgrading plans for VVER-440 reactors motivated us to obtain more information on the mixing processes in the fuel assemblies. In a VVER-440 rod bundle the fuel rods are arranged in a triangular array. Measurements show (Krauss and Meyer, 1998) that the classical engineering approach, which tries to reduce the characterization of such systems to equivalent (hydraulic-diameter) pipe flows, does not give reasonable results. Due to the different turbulence characteristics, mixing is more intensive in rod bundles than would be expected from equivalent pipe flow correlations. As a possible explanation of the high mixing, secondary flow was deduced from measurements by several experimentalists (Trupp and Azad, 1975). Another candidate to explain the high mixing is the so-called flow pulsation phenomenon (Krauss and Meyer, 1998). In this paper we present subchannel simulations (Mayer et al., 2007) using large eddy simulation (LES) methodology and the lattice Boltzmann method (LBM), without spacers, at a Reynolds number of 21,000. The simulation results are compared with the measurements of Trupp and Azad (1975). The mean axial velocity profile shows good agreement with the measurement data. Secondary flow has been observed directly in the simulation results. Reasonable agreement has been achieved for most Reynolds stresses; nevertheless, the calculated normal stresses show a small but systematic deviation from the measurement data. (author)
26th Space Simulation Conference Proceedings. Environmental Testing: The Path Forward
Packard, Edward A.
2010-01-01
Topics covered include: A Multifunctional Space Environment Simulation Facility for Accelerated Spacecraft Materials Testing; Exposure of Spacecraft Surface Coatings in a Simulated GEO Radiation Environment; Gravity-Offloading System for Large-Displacement Ground Testing of Spacecraft Mechanisms; Microscopic Shutters Controlled by cRIO in Sounding Rocket; Application of a Physics-Based Stabilization Criterion to Flight System Thermal Testing; Upgrade of a Thermal Vacuum Chamber for 20 Kelvin Operations; A New Approach to Improve the Uniformity of Solar Simulator; A Perfect Space Simulation Storm; A Planetary Environmental Simulator/Test Facility; Collimation Mirror Segment Refurbishment inside ESA's Large Space; Space Simulation of the CBERS 3 and 4 Satellite Thermal Model in the New Brazilian 6x8m Thermal Vacuum Chamber; The Certification of Environmental Chambers for Testing Flight Hardware; Space Systems Environmental Test Facility Database (SSETFD), Website Development Status; Wallops Flight Facility: Current and Future Test Capabilities for Suborbital and Orbital Projects; Force Limited Vibration Testing of JWST NIRSpec Instrument Using Strain Gages; Investigation of Acoustic Field Uniformity in Direct Field Acoustic Testing; Recent Developments in Direct Field Acoustic Testing; Assembly, Integration and Test Centre in Malaysia: Integration between Building Construction Works and Equipment Installation; Complex Ground Support Equipment for Satellite Thermal Vacuum Test; Effect of Charging Electron Exposure on 1064nm Transmission through Bare Sapphire Optics and SiO2 over HfO2 AR-Coated Sapphire Optics; Environmental Testing Activities and Capabilities for Turkish Space Industry; Integrated Circuit Reliability Simulation in Space Environments; Micrometeoroid Impacts and Optical Scatter in Space Environment; Overcoming Unintended Consequences of Ambient Pressure Thermal Cycling Environmental Tests; Performance and Functionality Improvements to Next Generation
Accelerating large-scale phase-field simulations with GPU
Directory of Open Access Journals (Sweden)
Xiaoming Shi
2017-10-01
A new package for accelerating large-scale phase-field simulations was developed using GPUs, based on the semi-implicit Fourier method. The package can solve a variety of equilibrium equations with different inhomogeneities, including long-range elastic, magnetostatic, and electrostatic interactions. Using a dedicated algorithm in the Compute Unified Device Architecture (CUDA), the Fourier spectral iterative perturbation method was integrated into the GPU package. The Allen-Cahn equation, the Cahn-Hilliard equation, and a phase-field model with long-range interactions were each solved with the package to test its performance. Comparison of the results between the solver executed on a single CPU and the one on the GPU showed that the GPU version runs about 50 times faster. The present study therefore contributes to the acceleration of large-scale phase-field simulations and provides guidance for experiments to design large-scale functional devices.
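The semi-implicit Fourier update at the heart of such packages is compact enough to sketch. The following NumPy version solves the Allen-Cahn equation on a 2-D periodic grid; it is an illustrative CPU reimplementation of the general scheme, not the GPU package itself, and the mobility, gradient coefficient, time step and grid size are placeholder values.

```python
import numpy as np

def allen_cahn_step(phi, dt, M=1.0, kappa=1.0):
    """One semi-implicit Fourier step for the Allen-Cahn equation
    d(phi)/dt = -M * (f'(phi) - kappa * laplacian(phi)),
    with double-well bulk energy f(phi) = (phi^2 - 1)^2 / 4.
    The stiff Laplacian is treated implicitly in Fourier space;
    the nonlinear term f'(phi) = phi^3 - phi is treated explicitly."""
    n = phi.shape[0]
    k = 2.0 * np.pi * np.fft.fftfreq(n)           # wavenumbers, unit grid spacing
    k2 = k[:, None]**2 + k[None, :]**2            # |k|^2 on the 2-D grid
    phi_hat = np.fft.fft2(phi) - dt * M * np.fft.fft2(phi**3 - phi)
    phi_hat /= 1.0 + dt * M * kappa * k2          # implicit diffusion term
    return np.real(np.fft.ifft2(phi_hat))

# A small random field coarsens toward the two wells phi = +/-1.
rng = np.random.default_rng(0)
phi = 0.1 * rng.standard_normal((64, 64))
for _ in range(200):
    phi = allen_cahn_step(phi, dt=0.1)
```

On a GPU the same update maps naturally onto batched FFTs (e.g. cuFFT under CUDA), which is where speedups of the reported magnitude typically come from.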
Quality and Reliability of Large-Eddy Simulations II
Salvetti, Maria Vittoria; Meyers, Johan; Sagaut, Pierre
2011-01-01
The second Workshop on "Quality and Reliability of Large-Eddy Simulations", QLES2009, was held at the University of Pisa from September 9 to September 11, 2009. Its predecessor, QLES2007, was organized in 2007 in Leuven (Belgium). The focus of QLES2009 was on issues related to predicting, assessing and assuring the quality of LES. The main goal of QLES2009 was to enhance the knowledge on error sources and on their interaction in LES and to devise criteria for the prediction and optimization of simulation quality, by bringing together mathematicians, physicists and engineers and providing a platform specifically addressing these aspects for LES. Contributions were made by leading experts in the field. The present book contains the written contributions to QLES2009 and is divided into three parts, which reflect the main topics addressed at the workshop: (i) SGS modeling and discretization errors; (ii) Assessment and reduction of computational errors; (iii) Mathematical analysis and foundation for SGS modeling.
Large Eddy Simulation (LES) for IC Engine Flows
Directory of Open Access Journals (Sweden)
Kuo Tang-Wei
2013-10-01
Numerical computations are carried out using an engineering-level Large Eddy Simulation (LES) model provided by the commercial CFD code CONVERGE. The analytical framework and experimental setup consist of a single-cylinder engine with a Transparent Combustion Chamber (TCC) under motored conditions. A rigorous working procedure for comparing and analyzing the results from simulation and high-speed Particle Image Velocimetry (PIV) experiments is documented in this work. The following aspects of LES are analyzed using this procedure: the number of cycles required for convergence with adequate accuracy; the effect of mesh size, time step, sub-grid-scale (SGS) turbulence models and boundary condition treatments; and the application of the proper orthogonal decomposition (POD) technique.
Large Eddy Simulation for Incompressible Flows An Introduction
Sagaut, P
2005-01-01
The first and most exhaustive work of its kind devoted entirely to the subject, Large Eddy Simulation presents a comprehensive account and a unified view of this young but very rich discipline. LES is the only efficient technique for approaching high Reynolds numbers when simulating industrial, natural or experimental configurations. The author concentrates on incompressible fluids and chooses his topics in treating with care both the mathematical ideas and their applications. The book addresses researchers as well as graduate students and engineers. The second edition was a greatly enriched version motivated both by the increasing theoretical interest in LES and the increasing number of applications. Two entirely new chapters were devoted to the coupling of LES with multiresolution multidomain techniques and to the new hybrid approaches that relate the LES procedures to the classical statistical methods based on the Reynolds-Averaged Navier-Stokes equations. This 3rd edition adds various sections to the text...
Aero-Acoustic Modelling using Large Eddy Simulation
International Nuclear Information System (INIS)
Shen, W Z; Soerensen, J N
2007-01-01
The splitting technique for aero-acoustic computations is extended to simulate three-dimensional flow and acoustic waves from airfoils. The aero-acoustic model is coupled to a sub-grid-scale turbulence model for Large-Eddy Simulations. In the first test case, the model is applied to compute laminar flow past a NACA 0015 airfoil at a Reynolds number of 800, a Mach number of 0.2 and an angle of attack of 20 deg. The model is then applied to compute turbulent flow past a NACA 0015 airfoil at a Reynolds number of 100 000, a Mach number of 0.2 and an angle of attack of 20 deg. The predicted noise spectrum is compared to experimental data
DataSpaces: An Interaction and Coordination Framework for Coupled Simulation Workflows
International Nuclear Information System (INIS)
Docan, Ciprian; Klasky, Scott A.; Parashar, Manish
2010-01-01
Emerging high-performance distributed computing environments are enabling new end-to-end formulations in science and engineering that involve multiple interacting processes and data-intensive application workflows. For example, current fusion simulation efforts are exploring coupled models and codes that simultaneously simulate separate application processes, such as the core and the edge turbulence, and run on different high performance computing resources. These components need to interact, at runtime, with each other and with services for data monitoring, data analysis and visualization, and data archiving. As a result, they require efficient support for dynamic and flexible couplings and interactions, which remains a challenge. This paper presents DataSpaces, a flexible interaction and coordination substrate that addresses this challenge. DataSpaces essentially implements a semantically specialized virtual shared space abstraction that can be associatively accessed by all components and services in the application workflow. It enables live data to be extracted from running simulation components, indexes this data online, and then allows it to be monitored, queried and accessed by other components and services via the space using semantically meaningful operators. The underlying data transport is asynchronous, low-overhead and largely memory-to-memory. The design, implementation, and experimental evaluation of DataSpaces using a coupled fusion simulation workflow is presented.
Very large eddy simulation of the Red Sea overflow
Ilıcak, Mehmet; Özgökmen, Tamay M.; Peters, Hartmut; Baumert, Helmut Z.; Iskandarani, Mohamed
Mixing between overflows and ambient water masses is a critical problem of deep-water mass formation in the downwelling branch of the meridional overturning circulation of the ocean. Modeling approaches that have been tested so far rely either on algebraic parameterizations in hydrostatic ocean circulation models, or on large eddy simulations that resolve most of the mixing using nonhydrostatic models. In this study, we examine the performance of a set of turbulence closures that have not previously been tested against observational data for overflows. We employ the so-called very large eddy simulation (VLES) technique, which allows the use of k-ɛ models in nonhydrostatic models. This is done by applying a dynamic spatial filtering to the k-ɛ equations. To our knowledge, this is the first time that the VLES approach has been adopted for an ocean modeling problem. The performance of the k-ɛ and VLES models is evaluated by conducting numerical simulations of the Red Sea overflow and comparing them to observations from the Red Sea Outflow Experiment (REDSOX). The computations are constrained to one of the main channels transporting the overflow, which is narrow enough to permit the use of a two-dimensional (and nonhydrostatic) model. A large set of experiments is conducted using different closure models, Reynolds numbers and spatial resolutions. It is found that, when no turbulence closure is used, the basic structure of the overflow, consisting of a well-mixed bottom layer (BL) and an entraining interfacial layer (IL), cannot be reproduced. The k-ɛ model leads to unrealistic thicknesses for both BL and IL, while VLES results in the most realistic reproduction of the REDSOX observations.
Simulated Space Environment Effects on a Candidate Solar Sail Material
Kang, Jin Ho; Bryant, Robert G.; Wilkie, W. Keats; Wadsworth, Heather M.; Craven, Paul D.; Nehls, Mary K.; Vaughn, Jason A.
2017-01-01
For long-duration missions of solar sails, the sail material needs to survive harsh space environments, and the degradation of the sail material controls the operational lifetime. Therefore, understanding the effects of the space environment on the sail membrane is essential for mission success. In this study, we investigated the effects of simulated space-environment exposures, namely ionizing radiation, thermal aging and simulated potential damage, on the mechanical, thermal and optical properties of a commercial off-the-shelf (COTS) polyester solar sail membrane, to assess the degradation mechanisms of a feasible solar sail. The solar sail membrane was exposed to high-energy electrons (about 70 keV and 10 nA/cm²), and the physical properties were characterized. After a dose of about 8.3 Grad, the tensile modulus, tensile strength and failure strain of the sail membrane decreased by about 20-95%. The aluminum reflective layer was damaged and partially delaminated, but it did not show any significant change in solar absorbance or thermal emittance. The effect on the mechanical properties of a pre-cracked sample, simulating potential impact damage of the sail membrane, as well as thermal aging effects on metallized PEN (polyethylene naphthalate) film, will be discussed.
Regional variation of carbonaceous aerosols from space and simulations
Mukai, Sonoyo; Sano, Itaru; Nakata, Makiko; Kokhanovsky, Alexander
2017-04-01
Satellite remote sensing provides systematic monitoring on a global scale. As such, aerosol observation via satellites is known to be useful and effective. However, before attempting to retrieve aerosol properties from satellite data, efficient algorithms for aerosol retrieval need to be considered. The characteristics and distributions of atmospheric aerosols are complicated, owing to both natural factors and human activities. Biomass burning aerosols generated by large-scale forest fires and slash-and-burn agriculture influence the severity of air pollution. Moreover, biomass burning episodes increase with global warming and climate change, and in turn contribute to them. It is worth noting that near-ultraviolet (NUV) measurements are helpful for the detection of carbonaceous particles, which are the main component of aerosols from biomass burning. In this work, improved retrieval algorithms for biomass burning aerosols are demonstrated using measurements from GLI and POLDER-2 on the Japanese short-term mission ADEOS-2 in 2003. The GLI sensor has a 380 nm channel. For the detection of biomass burning episodes, the aerosol optical thickness of carbonaceous aerosols simulated with a numerical model (SPRINTARS) is available, as well as fire products from satellite imagery. Moreover, an algorithm using shorter-wavelength data is available for the detection of absorbing aerosols. An algorithm based on the combined use of near-UV and violet data was introduced in our previous work with ADEOS (Advanced Earth Observing Satellite)-2/GLI measurements [1]. It is well known that a biomass burning plume is a seasonal phenomenon peculiar to a particular region. Hence, the mass concentrations of aerosols are frequently governed by spatial and/or temporal variations of biomass burning plumes. Accordingly, the satellite data sets for the present study are adopted from the viewpoint of investigating regional and seasonal
GLAST, the Gamma-ray Large Area Space Telescope
De Angelis, A
2001-01-01
GLAST, a detector for cosmic gamma rays in the range from 20 MeV to 300 GeV, will be launched in space in 2005. Breakthroughs are expected in particular in the study of particle acceleration mechanisms in space and of gamma ray bursts, and maybe on the search for cold dark matter; but of course the most exciting discoveries could come from the unexpected.
Biased Tracers in Redshift Space in the EFT of Large-Scale Structure
Energy Technology Data Exchange (ETDEWEB)
Perko, Ashley [Stanford U., Phys. Dept.; Senatore, Leonardo [KIPAC, Menlo Park; Jennings, Elise [Chicago U., KICP; Wechsler, Risa H. [Stanford U., Phys. Dept.
2016-10-28
The Effective Field Theory of Large-Scale Structure (EFTofLSS) provides a novel formalism that is able to accurately predict the clustering of large-scale structure (LSS) in the mildly non-linear regime. Here we provide the first computation of the power spectrum of biased tracers in redshift space at one-loop order, and we make the associated code publicly available. We compare the multipoles $\ell=0,2$ of the redshift-space halo power spectrum, together with the real-space matter and halo power spectra, with data from numerical simulations at $z=0.67$. For the samples we compare to, which have number densities of $\bar n=3.8 \cdot 10^{-2}\,(h\ {\rm Mpc}^{-1})^3$ and $\bar n=3.9 \cdot 10^{-4}\,(h\ {\rm Mpc}^{-1})^3$, we find that the calculation at one-loop order matches numerical measurements to within a few percent up to $k\simeq 0.43\ h\ {\rm Mpc}^{-1}$, a significant improvement with respect to former techniques. By performing the so-called IR-resummation, we find that the Baryon Acoustic Oscillation peak is accurately reproduced. Based on the results presented here, long-wavelength statistics that are routinely observed in LSS surveys can finally be computed in the EFTofLSS. The formalism is thus ready to be compared directly to observational data.
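The multipoles $\ell=0,2$ compared in the abstract are Legendre projections of $P(k,\mu)$. As a hedged illustration, the sketch below uses a toy linear spectrum and the classic linear Kaiser factor $(b+f\mu^2)^2$ rather than the one-loop EFTofLSS prediction; the bias and growth-rate values are arbitrary.

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

def multipoles(P_kmu, k, ells=(0, 2), nmu=64):
    """Legendre multipoles P_l(k) = (2l+1)/2 * int_{-1}^{1} P(k,mu) L_l(mu) dmu,
    evaluated with Gauss-Legendre quadrature over mu."""
    mu, w = leggauss(nmu)
    out = {}
    for ell in ells:
        Ll = np.polynomial.legendre.Legendre.basis(ell)(mu)
        out[ell] = 0.5 * (2 * ell + 1) * np.sum(
            w * Ll * P_kmu(k[:, None], mu[None, :]), axis=1)
    return out

# Linear Kaiser model: P(k, mu) = (b + f*mu^2)^2 * P_lin(k)
b, f = 2.0, 0.5
P_lin = lambda k: 1.0 / (1.0 + k**2)            # toy linear spectrum
P_kmu = lambda k, mu: (b + f * mu**2)**2 * P_lin(k)
k = np.linspace(0.01, 0.4, 5)
P = multipoles(P_kmu, k)
```

For the Kaiser model the quadrature reproduces the analytic factors $b^2+\tfrac{2}{3}bf+\tfrac{1}{5}f^2$ (monopole) and $\tfrac{4}{3}bf+\tfrac{4}{7}f^2$ (quadrupole) exactly, which makes a convenient self-check.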
Efficient Neural Network Modeling for Flight and Space Dynamics Simulation
Directory of Open Access Journals (Sweden)
Ayman Hamdy Kassem
2011-01-01
This paper presents an efficient technique for neural-network modeling of flight and space dynamics simulation. The technique frees the neural network designer from guessing the size and structure of the required neural network model and helps to minimize the number of neurons. For linear flight/space dynamics systems, the technique can find the network weights and biases directly by solving a system of linear equations, without the need for training. Nonlinear flight dynamics systems can be easily modeled by training their linearized models while keeping the same network structure. The training is fast, as it uses knowledge of the linear system to speed up the training process. The technique is tested on different flight/space dynamics models and shows promising results.
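The idea of obtaining network weights without iterative training can be sketched for a single linear layer: if the dynamics are linear, the layer's weights and bias follow from one least-squares solve. The state-space matrices below are hypothetical placeholders standing in for a linearized pitch model, not values from the paper.

```python
import numpy as np

def fit_linear_layer(X, Y):
    """Fit weights W and bias b of one linear layer Y ~ X @ W.T + b
    directly by least squares, with no iterative training.
    X: (samples, inputs), Y: (samples, outputs)."""
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])   # augment with a bias column
    coef, *_ = np.linalg.lstsq(Xa, Y, rcond=None)
    W, bias = coef[:-1].T, coef[-1]
    return W, bias

# Hypothetical discrete linear dynamics: x_next = A x + B u
A = np.array([[0.98, 0.10], [-0.05, 0.95]])
B = np.array([[0.0], [0.1]])
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))                   # columns: state (2) + input (1)
Y = X[:, :2] @ A.T + X[:, 2:] @ B.T                 # next state for each sample
W, bias = fit_linear_layer(X, Y)                    # recovers [A | B] and zero bias
```

For a nonlinear model the same structure would be kept and the weights refined by training, as the abstract describes.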
Large-eddy simulation of swirling pulverized-coal combustion
Energy Technology Data Exchange (ETDEWEB)
Hu, L.Y.; Luo, Y.H. [Shanghai Jiaotong Univ. (China). School of Mechanical Engineering; Zhou, L.X.; Xu, C.S. [Tsinghua Univ., Beijing (China). Dept. of Engineering Mechanics
2013-07-01
An Eulerian-Lagrangian large-eddy simulation (LES) with a Smagorinsky-Lilly sub-grid-scale stress model, presumed-PDF fast-chemistry and EBU gas combustion models, and particle devolatilization and particle combustion models is used to study the turbulence and flame structures of swirling pulverized-coal combustion. The LES statistics are validated against the measurements. The instantaneous LES results show that the coherent structures for pulverized-coal combustion are stronger than those for swirling gas combustion. The particles are concentrated in the periphery of the coherent structures. The flame is located in the zone of high vorticity and high particle concentration.
Large Scale Simulations of the Euler Equations on GPU Clusters
Liebmann, Manfred
2010-08-01
The paper investigates the scalability of a parallel Euler solver, using the Vijayasundaram method, on a GPU cluster with 32 Nvidia Geforce GTX 295 boards. The aim of this research is to enable large scale fluid dynamics simulations with up to one billion elements. We investigate communication protocols for the GPU cluster to compensate for the slow Gigabit Ethernet network between the GPU compute nodes and to maintain overall efficiency. A diesel engine intake-port and a nozzle, meshed in different resolutions, give good real world examples for the scalability tests on the GPU cluster.
Large Eddy Simulation of the ventilated wave boundary layer
DEFF Research Database (Denmark)
Lohmann, Iris P.; Fredsøe, Jørgen; Sumer, B. Mutlu
2006-01-01
A Large Eddy Simulation (LES) of (1) a fully developed turbulent wave boundary layer and (2) case 1 subject to ventilation (i.e., suction and injection varying alternately in phase) has been performed, using the Smagorinsky subgrid-scale model to express the subgrid viscosity. Injection slows down the flow in the full vertical extent of the boundary layer, destabilizes the flow and decreases the mean bed shear stress significantly, whereas suction generally speeds up the flow in the full vertical extent of the boundary layer, stabilizes the flow and increases the mean bed shear stress.
Ibrahim, Mohamed; Wickenhauser, Patrick; Rautek, Peter; Reina, Guido; Hadwiger, Markus
2017-01-01
Molecular dynamics (MD) simulations are crucial to investigating important processes in physics and thermodynamics. The simulated atoms are usually visualized as hard spheres with Phong shading, where individual particles and their local density can be perceived well in close-up views. However, for large-scale simulations with 10 million particles or more, the visualization of large fields-of-view usually suffers from strong aliasing artifacts, because the mismatch between data size and output resolution leads to severe under-sampling of the geometry. Excessive super-sampling can alleviate this problem, but is prohibitively expensive. This paper presents a novel visualization method for large-scale particle data that addresses aliasing while enabling interactive high-quality rendering. We introduce the novel concept of screen-space normal distribution functions (S-NDFs) for particle data. S-NDFs represent the distribution of surface normals that map to a given pixel in screen space, which enables high-quality re-lighting without re-rendering particles. In order to facilitate interactive zooming, we cache S-NDFs in a screen-space mipmap (S-MIP). Together, these two concepts enable interactive, scale-consistent re-lighting and shading changes, as well as zooming, without having to re-sample the particle data. We show how our method facilitates the interactive exploration of real-world large-scale MD simulation data in different scenarios.
Multiscale Data Assimilation for Large-Eddy Simulations
Li, Z.; Cheng, X.; Gustafson, W. I., Jr.; Xiao, H.; Vogelmann, A. M.; Endo, S.; Toto, T.
2017-12-01
Large-eddy simulation (LES) is a powerful tool for understanding atmospheric turbulence, boundary layer physics and cloud development, and there is a great need for developing data assimilation methodologies that can constrain LES models. The U.S. Department of Energy Atmospheric Radiation Measurement (ARM) User Facility has been developing the capability to routinely generate ensembles of LES. The LES ARM Symbiotic Simulation and Observation (LASSO) project (https://www.arm.gov/capabilities/modeling/lasso) is generating simulations for shallow convection days at the ARM Southern Great Plains site in Oklahoma. One of the major objectives of LASSO is to develop the capability to observationally constrain LES using a hierarchy of ARM observations. We have implemented a multiscale data assimilation (MSDA) scheme, which allows data assimilation to be applied separately at distinct spatial scales, so that localized observations can be effectively assimilated to constrain the mesoscale fields in the LES domain of about 15 km in width. The MSDA analysis is used to produce the forcing data that drive the LES. With this LES workflow we have examined 13 days with shallow convection selected from the period May-August 2016. We will describe the implementation of MSDA, present LES results, and address challenges and opportunities for applying data assimilation to LES studies.
Large eddy simulation of turbulent and stably-stratified flows
International Nuclear Information System (INIS)
Fallon, Benoit
1994-01-01
The unsteady turbulent flow over a backward-facing step is studied by means of Large Eddy Simulation with a structure-function subgrid model, in both isothermal and stably-stratified configurations. Without stratification, the flow develops highly-distorted Kelvin-Helmholtz billows, which undergo helical pairing, with Λ-shaped vortices shed downstream. We show that forcing injected by recirculation fluctuations governs the development of these oblique-mode instabilities. The statistical results show good agreement with the experimental measurements. For stably-stratified configurations, the flow remains more two-dimensional. We show how, with increasing stratification, the shear-layer growth is frozen by inhibition of the pairing process and then of the Kelvin-Helmholtz instabilities, and by the development of gravity waves or stable density interfaces. Eddy structures of the flow present striking analogies with the stratified mixing layer. Additional computations show the development of secondary Kelvin-Helmholtz instabilities on the vorticity layers between two primary structures. This important mechanism, based on baroclinic effects (horizontal density gradients), constitutes an additional part of the turbulent mixing process. Finally, the feasibility of Large Eddy Simulation is demonstrated for industrial flows by studying a complex stratified cavity. Temperature fluctuations are compared to experimental measurements. We also develop three-dimensional unsteady animations in order to understand and visualize turbulent interactions. (author) [fr]
Large Eddy Simulation of Film-Cooling Jets
Iourokina, Ioulia
2005-11-01
Large Eddy Simulation of inclined jets issuing into a turbulent boundary-layer crossflow has been performed. The simulation models the film-cooling experiments of Pietrzyk et al. (J. of Turb., 1989), consisting of a large plenum feeding an array of jets inclined at 35° to the flat surface with a pitch of 3D and L/D=3.5. The blowing ratio is 0.5 with unity density ratio. The numerical method is a hybrid, combining an external compressible solver with a low-Mach-number code for the plenum and film holes. Vorticity dynamics pertinent to jet-in-crossflow interactions is analyzed and three-dimensional vortical structures are revealed. Turbulence statistics are compared to the experimental data. The turbulence production due to shearing in the crossflow is compared to that within the jet hole. The influence of three-dimensional coherent structures on the wall heat transfer is investigated and strategies to increase film-cooling performance are discussed.
Large Eddy Simulation of Supercritical CO2 Through Bend Pipes
He, Xiaoliang; Apte, Sourabh; Dogan, Omer
2017-11-01
Supercritical carbon dioxide (sCO2) is investigated as a working fluid for power generation in thermal solar, fossil energy and nuclear power plants at high pressures. Severe erosion has been observed in sCO2 test loops, particularly in nozzles, turbine blades and pipe bends. It is hypothesized that complex flow features such as flow separation and property variations may lead to large oscillations in the wall shear stresses and result in material erosion. In this work, large eddy simulations are conducted at different Reynolds numbers (5000, 27,000 and 50,000) to investigate the effect of heat transfer in a 90-degree bend pipe with unit radius of curvature, in order to identify the potential causes of the erosion. The simulation is first performed without heat transfer to validate the flow solver against available experimental and computational studies. Mean flow statistics, turbulent kinetic energy, shear stresses and wall force spectra are computed and compared with available experimental data. The formation of counter-rotating vortices, known as Dean vortices, is observed. Secondary flow patterns and swirl-switching flow motions are identified and visualized. The effects of heat transfer on these flow phenomena are then investigated by applying a constant heat flux at the wall. DOE Fossil Energy Crosscutting Technology Research Program.
Optimizing grade-control drillhole spacing with conditional simulations
Directory of Open Access Journals (Sweden)
Adrian Martínez-Vargas
2017-01-01
This paper summarizes a method to determine the optimum spacing of grade-control drillholes drilled with reverse circulation. The optimum drillhole spacing was defined as the spacing whose cost equals the cost of misclassifying ore and waste in selection mining units (SMU). The cost of misclassification for a given drillhole spacing is equal to the cost of processing waste misclassified as ore (Type I error) plus the value of the ore misclassified as waste (Type II error). Type I and Type II errors were deduced by comparing true and estimated grades at SMUs, in relation to a cutoff grade and assuming free ore selection. True grades at SMUs and grades at drillhole samples were generated with conditional simulations. A set of estimated grades at the SMUs, one per drillhole spacing, was generated with ordinary kriging. This method was used to determine the optimum drillhole spacing in a gold deposit. The results showed that the cost of misclassification is sensitive to extreme block values, which tend to be overrepresented. Capping the SMUs' lost values and implementing diggability constraints were recommended to improve the calculation of total misclassification costs.
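The misclassification costing described above is straightforward to sketch. The function below scores one candidate spacing given simulated true grades and kriged estimates at the SMUs; the grades, processing cost and metal value are hypothetical placeholders, not figures from the paper.

```python
import numpy as np

def misclassification_cost(true_grade, est_grade, cutoff,
                           processing_cost_per_smu, value_per_grade_unit):
    """Cost of ore/waste misclassification for one drillhole spacing.
    Type I : waste processed as ore (estimate >= cutoff, truth < cutoff).
    Type II: ore dumped as waste    (estimate <  cutoff, truth >= cutoff).
    Free ore selection is assumed, as in the paper."""
    type1 = (est_grade >= cutoff) & (true_grade < cutoff)
    type2 = (est_grade < cutoff) & (true_grade >= cutoff)
    cost_type1 = processing_cost_per_smu * np.count_nonzero(type1)  # milling waste
    cost_type2 = value_per_grade_unit * true_grade[type2].sum()     # lost ore value
    return cost_type1 + cost_type2

# Toy illustration with four SMUs (all numbers made up).
true = np.array([0.2, 1.5, 0.8, 2.0])
est = np.array([1.0, 0.5, 0.9, 2.1])
cost = misclassification_cost(true, est, cutoff=0.7,
                              processing_cost_per_smu=10.0,
                              value_per_grade_unit=40.0)
```

Evaluating this cost for each candidate spacing, and finding where it balances the drilling cost of that spacing, reproduces the optimization criterion the abstract describes.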
Numerical techniques for large cosmological N-body simulations
International Nuclear Information System (INIS)
Efstathiou, G.; Davis, M.; Frenk, C.S.; White, S.D.M.
1985-01-01
We describe and compare techniques for carrying out large N-body simulations of the gravitational evolution of clustering in the fundamental cube of an infinite periodic universe. In particular, we consider both particle-mesh (PM) codes and P³M codes in which a higher-resolution force is obtained by direct summation of contributions from neighboring particles. We discuss the mesh-induced anisotropies in the forces calculated by these schemes, and the extent to which they can model the desired 1/r² particle-particle interaction. We also consider how transformation of the time variable can improve the efficiency with which the equations of motion are integrated. We present tests of the accuracy with which the resulting schemes conserve energy and are able to follow individual particle trajectories. We have implemented an algorithm which allows initial conditions to be set up to model any desired spectrum of linear growing-mode density fluctuations. A number of tests demonstrate the power of this algorithm and delineate the conditions under which it is effective. We carry out several test simulations using a variety of techniques in order to show how the results are affected by dynamic range limitations in the force calculations, by boundary effects, by residual artificialities in the initial conditions, and by the number of particles employed. For most purposes cosmological simulations are limited by the resolution of their force calculation rather than by the number of particles they can employ. For this reason, while PM codes are quite adequate to study the evolution of structure on large scales, P³M methods are to be preferred, in spite of their greater cost and complexity, whenever the evolution of small-scale structure is important.
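As a toy illustration of the energy-conservation tests mentioned above, the following sketch integrates a few Plummer-softened point masses with a kick-drift-kick leapfrog (direct summation, G = 1). All values are arbitrary, and this stands in for neither the PM nor the P³M force calculation of the paper.

```python
import numpy as np

def accelerations(pos, mass, soft=0.2):
    """Direct-summation Plummer-softened gravitational accelerations (G = 1)."""
    d = pos[None, :, :] - pos[:, None, :]            # separation vectors r_j - r_i
    r2 = np.sum(d * d, axis=-1) + soft**2
    np.fill_diagonal(r2, np.inf)                     # exclude self-interaction
    return np.sum(mass[None, :, None] * d / r2[..., None]**1.5, axis=1)

def total_energy(pos, vel, mass, soft=0.2):
    """Kinetic plus softened potential energy, consistent with the force law."""
    ke = 0.5 * np.sum(mass * np.sum(vel**2, axis=1))
    d = pos[None, :, :] - pos[:, None, :]
    r = np.sqrt(np.sum(d * d, axis=-1) + soft**2)
    np.fill_diagonal(r, np.inf)
    pe = -0.5 * np.sum(mass[:, None] * mass[None, :] / r)
    return ke + pe

def leapfrog(pos, vel, mass, dt, steps):
    """Kick-drift-kick leapfrog: symplectic, so energy errors stay bounded."""
    acc = accelerations(pos, mass)
    for _ in range(steps):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = accelerations(pos, mass)
        vel += 0.5 * dt * acc
    return pos, vel

rng = np.random.default_rng(2)
pos = rng.standard_normal((8, 3))
vel = 0.1 * rng.standard_normal((8, 3))
mass = np.full(8, 1.0 / 8)
e0 = total_energy(pos, vel, mass)
pos, vel = leapfrog(pos, vel, mass, dt=0.01, steps=200)
e1 = total_energy(pos, vel, mass)
```

Monitoring the relative drift of e1 against e0 is the same kind of diagnostic the paper applies to its PM and P³M schemes, just at trivially small scale.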
Large Eddy Simulation for an inherent boron dilution transient
International Nuclear Information System (INIS)
Jayaraju, S.T.; Sathiah, P.; Komen, E.M.J.; Baglietto, E.
2013-01-01
Highlights: • Large Eddy Simulation is performed for a transient boron dilution scenario in the scaled experimental facility of ROCOM. • Fully conformal polyhedral grid of 14 million cells is created to capture all details of the domain. • Systematic multi-step validation methodology is followed to assess the accuracy of LES model. • For the presently simulated BDT scenario, the LES results lend support to its reliability in consistently predicting the slug transport in the RPV. -- Abstract: The present paper focuses on the validation and applicability of large eddy simulation (LES) to analyze the transport and mixing in the reactor pressure vessel (RPV) during an inherent boron dilution transient (BDT) scenario. Extensive validation data comes from relevant integral tests performed in the scaled ROCOM experimental facility. The modeling of sub-grid scales is based on the WALE model. A fully conformal polyhedral grid of about 15 million cells is constructed to capture all details in the domain, including the complex structures of the lower plenum. Detailed qualitative and quantitative validations are performed by following a systematic multi-step validation methodology. Qualitative comparisons to the experimental data in the cold legs, downcomer and the core inlet showed good predictions by the LES model. Minor deviations seen in the quantitative comparisons are rigorously quantified. A key parameter affecting the core neutron kinetics response is the value of the highest deborated slug concentration that occurs at the core inlet during the transient. Detailed analyses are made at the core inlet to evaluate not only the value of the maximum slug concentration, but also the location and the time at which it occurs during the transient. The relative differences between the ensemble-averaged experimental data and CFD predictions were within the range of relative differences seen within 10 different experimental realizations. For the studied scenario, the
Cardall, Christian Y.; Budiardja, Reuben D.
2018-01-01
The large-scale computer simulation of a system of physical fields governed by partial differential equations requires some means of approximating the mathematical limit of continuity. For example, conservation laws are often treated with a 'finite-volume' approach in which space is partitioned into a large number of small 'cells,' with fluxes through cell faces providing an intuitive discretization modeled on the mathematical definition of the divergence operator. Here we describe and make available Fortran 2003 classes furnishing extensible object-oriented implementations of simple meshes and the evolution of generic conserved currents thereon, along with individual 'unit test' programs and larger example problems demonstrating their use. These classes inaugurate the Mathematics division of our developing astrophysics simulation code GENASIS (General Astrophysical Simulation System), which will be expanded over time to include additional meshing options, mathematical operations, solver types, and solver variations appropriate for many multiphysics applications.
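The cell-and-face discretization described here is easy to demonstrate. GENASIS itself is written in Fortran 2003; the following is a minimal Python sketch of the same finite-volume idea for 1-D linear advection on a periodic mesh, with placeholder grid parameters.

```python
import numpy as np

def finite_volume_step(u, a, dx, dt):
    """One finite-volume update for 1-D linear advection u_t + a u_x = 0
    (a > 0) on a periodic mesh. F[i] is the first-order upwind flux through
    the left face of cell i; each cell mean changes only by the difference
    of its face fluxes, so the total sum(u)*dx is conserved to round-off."""
    F = a * np.roll(u, 1)                    # upwind: face carries left cell's value
    return u - dt / dx * (np.roll(F, -1) - F)

# Advect a square pulse; the integral of u is conserved exactly by construction.
n, dx, a = 100, 0.01, 1.0
u = np.where((np.arange(n) > 40) & (np.arange(n) < 60), 1.0, 0.0)
total0 = u.sum() * dx
for _ in range(50):
    u = finite_volume_step(u, a, dx, dt=0.5 * dx / a)   # CFL number 0.5
total1 = u.sum() * dx
```

With CFL at most 1 the upwind update is a convex combination of neighboring cell means, so the solution also stays within its initial bounds, a discrete analogue of the conservation properties the classes are meant to guarantee.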
Large Scale Beam-beam Simulations for the CERN LHC using Distributed Computing
Herr, Werner; McIntosh, E; Schmidt, F
2006-01-01
We report on a large scale simulation of beam-beam effects for the CERN Large Hadron Collider (LHC). The stability of particles which experience head-on and long-range beam-beam effects was investigated for different optical configurations and machine imperfections. To cover the interesting parameter space required computing resources not available at CERN. The necessary resources were available in the LHC@home project, based on the BOINC platform. At present, this project makes more than 60000 hosts available for distributed computing. We shall discuss our experience using this system during a simulation campaign of more than six months and describe the tools and procedures necessary to ensure consistent results. The results from this extended study are presented and future plans are discussed.
A dynamic globalization model for large eddy simulation of complex turbulent flow
Energy Technology Data Exchange (ETDEWEB)
Choi, Hae Cheon; Park, No Ma; Kim, Jin Seok [Seoul National Univ., Seoul (Korea, Republic of)]
2005-07-01
A dynamic subgrid-scale model is proposed for large eddy simulation of turbulent flows in complex geometry. The eddy viscosity model of Vreman [Phys. Fluids 16, 3670 (2004)] is taken as the base model. A priori tests with the original Vreman model show that it predicts the correct profile of subgrid-scale dissipation in turbulent channel flow, but the optimal model coefficient is far from universal. Dynamic procedures for determining the model coefficient are proposed based on the 'global equilibrium' between the subgrid-scale dissipation and the viscous dissipation. An important feature of the proposed procedures is that the determined model coefficient is globally constant in space but varies in time. Large eddy simulations with the present dynamic model are conducted for forced isotropic turbulence, turbulent channel flow and flow over a sphere, showing excellent agreement with previous results.
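The 'global equilibrium' idea, a single coefficient that is uniform in space but varies in time, can be illustrated schematically. The field names below are assumptions: `pi_kernel` stands in for the base-model (e.g. Vreman) eddy-viscosity kernel and `eps_target` for the dissipation it must balance in a volume-averaged sense; the paper's actual procedure is more involved.

```python
import numpy as np

def global_dynamic_coefficient(pi_kernel, eps_target):
    """Illustrative global-equilibrium closure: pick the one spatially
    uniform coefficient C(t) such that the volume-averaged modeled SGS
    dissipation C * <pi_kernel> matches the target <eps_target>."""
    return eps_target.mean() / pi_kernel.mean()

rng = np.random.default_rng(0)
pi = rng.random((8, 8, 8)) + 0.5          # stand-in for the model kernel field
eps = 0.1 * pi + 0.01 * rng.random((8, 8, 8))
C = global_dynamic_coefficient(pi, eps)
nu_sgs = C * pi                           # eddy-viscosity field, uniform C
```

The point of the globally constant coefficient is robustness: unlike pointwise dynamic procedures, no local averaging or clipping is needed to keep the eddy viscosity well behaved.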
Nesting Large-Eddy Simulations Within Mesoscale Simulations for Wind Energy Applications
Lundquist, J. K.; Mirocha, J. D.; Chow, F. K.; Kosovic, B.; Lundquist, K. A.
2008-12-01
With increasing demand for more accurate atmospheric simulations for wind turbine micrositing, for operational wind power forecasting, and for more reliable turbine design, simulations of atmospheric flow with resolution of tens of meters or finer are required. These time-dependent large-eddy simulations (LES) account for complex terrain and resolve individual atmospheric eddies on length scales smaller than turbine blades. These small-domain, high-resolution simulations are possible with a range of commercial and open-source software, including the Weather Research and Forecasting (WRF) model. In addition to "local" sources of turbulence within an LES domain, changing weather conditions outside the domain can also affect the flow, suggesting that a mesoscale model should provide boundary conditions to the large-eddy simulations. Nesting a large-eddy simulation within a mesoscale model requires nuanced representations of turbulence. Our group has improved WRF's LES capability by implementing the Nonlinear Backscatter and Anisotropy (NBA) subfilter stress model following Kosović (1997) and an explicit filtering and reconstruction technique to compute the Resolvable Subfilter-Scale (RSFS) stresses (following Chow et al., 2005). We have also implemented an immersed boundary method (IBM) in WRF to accommodate complex terrain. These new models improve WRF's LES capabilities over complex terrain and in stable atmospheric conditions. We demonstrate approaches to nesting LES within a mesoscale simulation for farms of wind turbines in hilly regions. Results are sensitive to the nesting method, indicating that care must be taken to provide appropriate boundary conditions and to allow adequate spin-up of turbulence in the LES domain. This work is performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344.
Large eddy simulation of soot evolution in an aircraft combustor
Mueller, Michael E.; Pitsch, Heinz
2013-11-01
An integrated kinetics-based Large Eddy Simulation (LES) approach for soot evolution in turbulent reacting flows is applied to the simulation of a Pratt & Whitney aircraft gas turbine combustor, and the results are analyzed to provide insights into the complex interactions of the hydrodynamics, mixing, chemistry, and soot. The integrated approach includes detailed models for soot, combustion, and the unresolved interactions between soot, chemistry, and turbulence. The soot model is based on the Hybrid Method of Moments and detailed descriptions of soot aggregates and the various physical and chemical processes governing their evolution. The detailed kinetics of jet fuel oxidation and soot precursor formation is described with the Radiation Flamelet/Progress Variable model, which has been modified to account for the removal of soot precursors from the gas-phase. The unclosed filtered quantities in the soot and combustion models, such as source terms, are closed with a novel presumed subfilter PDF approach that accounts for the high subfilter spatial intermittency of soot. For the combustor simulation, the integrated approach is combined with a Lagrangian parcel method for the liquid spray and state-of-the-art unstructured LES technology for complex geometries. Two overall fuel-to-air ratios are simulated to evaluate the ability of the model to make not only absolute predictions but also quantitative predictions of trends. The Pratt & Whitney combustor is a Rich-Quench-Lean combustor in which combustion first occurs in a fuel-rich primary zone characterized by a large recirculation zone. Dilution air is then added downstream of the recirculation zone, and combustion continues in a fuel-lean secondary zone. The simulations show that large quantities of soot are formed in the fuel-rich recirculation zone, and, furthermore, the overall fuel-to-air ratio dictates both the dominant soot growth process and the location of maximum soot volume fraction. At the higher fuel
Robust large-scale parallel nonlinear solvers for simulations.
Energy Technology Data Exchange (ETDEWEB)
Bader, Brett William; Pawlowski, Roger Patrick; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA)
2005-11-01
This report documents research to develop robust and efficient solution techniques for solving large-scale systems of nonlinear equations. The most widely used method for solving systems of nonlinear equations is Newton's method. While much research has been devoted to augmenting Newton-based solvers (usually with globalization techniques), little has been devoted to exploring the application of different models. Our research has been directed at evaluating techniques using different models than Newton's method: a lower order model, Broyden's method, and a higher order model, the tensor method. We have developed large-scale versions of each of these models and have demonstrated their use in important applications at Sandia. Broyden's method replaces the Jacobian with an approximation, allowing codes that cannot evaluate a Jacobian or have an inaccurate Jacobian to converge to a solution. Limited-memory methods, which have been successful in optimization, allow us to extend this approach to large-scale problems. We compare the robustness and efficiency of Newton's method, modified Newton's method, Jacobian-free Newton-Krylov method, and our limited-memory Broyden method. Comparisons are carried out for large-scale applications of fluid flow simulations and electronic circuit simulations. Results show that, in cases where the Jacobian was inaccurate or could not be computed, Broyden's method converged in some cases where Newton's method failed to converge. We identify conditions where Broyden's method can be more efficient than Newton's method. We also present modifications to a large-scale tensor method, originally proposed by Bouaricha, for greater efficiency, better robustness, and wider applicability. Tensor methods are an alternative to Newton-based methods and are based on computing a step based on a local quadratic model rather than a linear model. The advantage of Bouaricha's method is that it can use any
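Broyden's secant update, replacing the Jacobian with a successively corrected approximation, can be sketched as follows. This is a dense textbook version for illustration, not Sandia's limited-memory implementation; a limited-memory variant would store only the rank-one update vectors rather than the full matrix.

```python
import numpy as np

def fd_jacobian(F, x, h=1e-7):
    """Finite-difference Jacobian, used here only to seed the iteration."""
    x = np.asarray(x, dtype=float)
    f0 = F(x)
    J = np.empty((f0.size, x.size))
    for j in range(x.size):
        xp = x.copy()
        xp[j] += h
        J[:, j] = (F(xp) - f0) / h
    return J

def broyden_solve(F, x0, tol=1e-10, max_iter=50):
    """'Good' Broyden method: an approximate Jacobian B is corrected with
    rank-one secant updates, so F never needs an analytic Jacobian."""
    x = np.asarray(x0, dtype=float)
    B = fd_jacobian(F, x)
    f = F(x)
    for _ in range(max_iter):
        if np.linalg.norm(f) < tol:
            break
        s = np.linalg.solve(B, -f)      # quasi-Newton step: B s = -F(x)
        x = x + s
        f_new = F(x)
        y = f_new - f
        # Rank-one update enforcing the secant condition B_new s = y
        B = B + np.outer(y - B @ s, s) / (s @ s)
        f = f_new
    return x

# Example: x0^2 + x1^2 = 2 and x0 = x1, with root (1, 1)
F = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 2.0, x[0] - x[1]])
root = broyden_solve(F, [2.0, 1.0])
```

The appeal, as the report notes, is that codes with an unavailable or inaccurate Jacobian can still converge, at the cost of locally superlinear rather than quadratic convergence.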
University of Central Florida / Deep Space Industries Asteroid Regolith Simulants
Britt, Daniel; Covey, Steven D.; Schultz, Cody
2017-10-01
Introduction: The University of Central Florida (UCF), in partnership with Deep Space Industries (DSI), is working under a NASA Phase 2 SBIR contract to develop and produce a family of asteroid regolith simulants for use in research, engineering, and mission operations testing. We base simulant formulas on the mineralogy, particle size, and physical characteristics of CI, CR, CM, C2, CV, and L-chondrite meteorites. The advantage in simulating meteorites is that the vast majority of meteoritic materials are common rock-forming minerals that are available in commercial quantities. While the formulas are guided by the meteorites, our approach is one of constrained maximization under the limitations of safety, cost, source materials, and ease of handling. In all cases our goal is to deliver a safe, high-fidelity analog at moderate cost. Source Materials, Safety, and Biohazards: A critical factor in any useful simulant is to minimize handling risks from biohazards or toxicity. All the terrestrial materials proposed for these simulants were reviewed for potential toxicity. Of particular interest is the organic component of volatile-rich carbonaceous chondrites, which contain polycyclic aromatic hydrocarbons (PAHs), some of which are known carcinogens and mutagens. Our research suggests that we can maintain rough chemical fidelity by substituting much safer sub-bituminous coal as our organic analog. A second safety consideration is the choice of serpentine-group materials. While most serpentine polymorphs are quite safe, we avoid fibrous chrysotile because it is an asbestos mineral. Terrestrial materials identified as inputs for our simulants are common rock-forming minerals that are available in commercial quantities. These include olivine, pyroxene, plagioclase feldspar, smectite, serpentine, saponite, pyrite, and magnetite, in amounts that are appropriate for each type. For CIs and CRs, the olivines tend to be Fo100, which is rare on Earth. We have substituted Fo90 olivine
Modeling and Simulation for Multi-Mission Space Exploration Vehicle
Chang, Max
2011-01-01
Asteroids and Near-Earth Objects [NEOs] are of great interest for future space missions. The Multi-Mission Space Exploration Vehicle [MMSEV] is being considered for future Near Earth Object missions and requires detailed planning and study of its Guidance, Navigation, and Control [GNC]. A possible mission of the MMSEV to a NEO would be to navigate the spacecraft to a stationary orbit with respect to the rotating asteroid and proceed to anchor into the surface of the asteroid with robotic arms. The Dynamics and Real-Time Simulation [DARTS] laboratory develops reusable models and simulations for the design and analysis of missions. In this paper, the development of guidance and anchoring models are presented together with their role in achieving mission objectives and relationships to other parts of the simulation. One important aspect of guidance is in developing methods to represent the evolution of kinematic frames related to the tasks to be achieved by the spacecraft and its robot arms. In this paper, we compare various types of mathematical interpolation methods for position and quaternion frames. Subsequent work will be on analyzing the spacecraft guidance system with different movements of the arms. With the analyzed data, the guidance system can be adjusted to minimize the errors in performing precision maneuvers.
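Quaternion frame interpolation of the kind compared in the paper can be illustrated with spherical linear interpolation (slerp). This is a generic textbook sketch, not the DARTS implementation:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0, q1.
    Unlike componentwise (linear) interpolation, slerp stays on the unit
    sphere and traverses the arc at constant angular rate."""
    q0 = q0 / np.linalg.norm(q0)
    q1 = q1 / np.linalg.norm(q1)
    dot = np.dot(q0, q1)
    if dot < 0.0:                      # take the shorter great-circle arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                   # nearly parallel: fall back to lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)             # angle between the two quaternions
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

# Halfway between the identity and a 90-degree rotation about z,
# quaternion convention (w, x, y, z)
q_id = np.array([1.0, 0.0, 0.0, 0.0])
q_z90 = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
q_mid = slerp(q_id, q_z90, 0.5)        # a 45-degree rotation about z
```

The trade-off studied in such comparisons is that plain linear interpolation of quaternion components is cheaper but must be renormalized and distorts the angular rate, which matters for precision maneuvers.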
Primary loop simulation of the SP-100 space nuclear reactor
International Nuclear Information System (INIS)
Borges, Eduardo M.; Braz Filho, Francisco A.; Guimaraes, Lamartine N.F.
2011-01-01
Between 1983 and 1992 the SP-100 space nuclear reactor development project for electric power generation in a range of 100 to 1000 kWe was conducted in the USA. Several configurations were studied to satisfy different mission objectives and power systems. In this reactor the heat is generated in a compact core and removed by liquid lithium; the primary loop flows are controlled by thermoelectric electromagnetic (EMTE) pumps, and thermoelectric converters produce direct-current energy. To define the system operation point at nominal operating power, simulation of the thermal-hydraulic components of the space nuclear reactor is necessary. In this paper the BEMTE-3 computer code is used to evaluate the EMTE pump design performance for a thermal-hydraulic primary loop configuration, and to compare the system operation points of the SP-100 reactor at two thermal power levels, with satisfactory results. (author)
Large-scale ground motion simulation using GPGPU
Aoi, S.; Maeda, T.; Nishizawa, N.; Aoki, T.
2012-12-01
Huge computational resources are required to perform large-scale ground motion simulations using the 3-D finite difference method (FDM) for realistic and complex models with high accuracy. Furthermore, thousands of different simulations are necessary to evaluate the variability of the assessment caused by uncertainty in the assumptions of the source models for future earthquakes. To overcome the problem of restricted computational resources, we introduced GPGPU (general-purpose computing on graphics processing units), the technique of using a GPU as an accelerator for computation traditionally carried out by the CPU. We employed the CPU version of GMS (Ground motion Simulator; Aoi et al., 2004) as the original code and implemented the GPU calculation using CUDA (Compute Unified Device Architecture). GMS is a total system for seismic wave propagation simulation based on a 3-D FDM scheme using discontinuous grids (Aoi & Fujiwara, 1999), which includes the solver as well as preprocessor tools (parameter generation tool) and postprocessor tools (filter tool, visualization tool, and so on). The computational model is decomposed in the two horizontal directions and each decomposed model is allocated to a different GPU. We evaluated the performance of our newly developed GPU version of GMS on TSUBAME2.0, one of Japan's fastest supercomputers, operated by the Tokyo Institute of Technology. First, we performed a strong-scaling test using a model with about 22 million grid points and achieved speed-ups of 3.2 and 7.3 times using 4 and 16 GPUs, respectively. Next, we examined a weak-scaling test in which the model sizes (numbers of grid points) are increased in proportion to the degree of parallelism (number of GPUs). The result showed almost perfect linearity up to a simulation with 22 billion grid points using 1024 GPUs, where the calculation speed reached 79.7 TFlops, about 34 times faster than the CPU calculation using the same number
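The kind of stencil kernel that such a GPU port accelerates can be sketched as follows. This is a generic explicit 3-D scalar wave-equation update with periodic boundaries via `numpy`, offered only as an illustration of the data-parallel structure, not the GMS discontinuous-grid scheme:

```python
import numpy as np

def fdm_step(u, u_prev, c, dx, dt):
    """One explicit leapfrog step of the 3-D scalar wave equation on a
    uniform grid. The 7-point Laplacian stencil touches each cell's six
    neighbours; np.roll gives periodic boundaries. Every grid point is
    updated independently, which is what maps well onto a GPU."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0)
           + np.roll(u, 1, 1) + np.roll(u, -1, 1)
           + np.roll(u, 1, 2) + np.roll(u, -1, 2) - 6.0 * u) / dx ** 2
    return 2.0 * u - u_prev + (c * dt) ** 2 * lap

n = 32
u = np.zeros((n, n, n))
u[n // 2, n // 2, n // 2] = 1.0        # point source
u_next = fdm_step(u, u.copy(), c=1.0, dx=1.0, dt=0.5)
```

In a multi-GPU code the domain is split (here it would be along the two horizontal axes, as in the paper) and only the faces of each subdomain need to be exchanged between devices each step.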
Large Eddy Simulations of turbulent flows at supercritical pressure
Energy Technology Data Exchange (ETDEWEB)
Kunik, C.; Otic, I.; Schulenberg, T., E-mail: claus.kunik@kit.edu, E-mail: ivan.otic@kit.edu, E-mail: thomas.schulenberg@kit.edu [Karlsruhe Inst. of Tech. (KIT), Karlsruhe (Germany)]
2011-07-01
A Large Eddy Simulation (LES) method is used to investigate turbulent heat transfer to CO{sub 2} at supercritical pressure for upward flows. At those pressure conditions the fluid undergoes strong variations of fluid properties in a certain temperature range, which can lead to a deterioration of heat transfer (DHT). In this analysis, the LES method is applied on turbulent forced convection conditions to investigate the influence of several subgrid scale models (SGS-model). At first, only velocity profiles of the so-called inflow generator are considered, whereas in the second part temperature profiles of the heated section are investigated in detail. The results are statistically analyzed and compared with DNS data from the literature. (author)
Background simulations for the Large Area Detector onboard LOFT
DEFF Research Database (Denmark)
Campana, Riccardo; Feroci, Marco; Ettore, Del Monte
2013-01-01
[...] and magnetic fields around compact objects and in supranuclear density conditions. Having an effective area of ~10 m² at 8 keV, LOFT will be able to measure with high sensitivity very fast variability in the X-ray fluxes and spectra. A good knowledge of the in-orbit background environment is essential to assess the scientific performance of the mission and optimize the design of its main instrument, the Large Area Detector (LAD). In this paper the results of an extensive Geant4 simulation of the instrument will be discussed, showing the main contributions to the background and the design [...] an anticipated modulation of the background rate as small as 10% over the orbital timescale. The intrinsic photonic origin of the largest background component also allows for efficient modelling, supported by in-flight active monitoring, allowing systematic residuals to be predicted significantly better than [...]
Generating wind fluctuations for Large Eddy Simulation inflow boundary condition
International Nuclear Information System (INIS)
Bekele, S.A.; Hangan, H.
2004-01-01
Large Eddy Simulation (LES) studies of flows over bluff bodies immersed in a boundary-layer wind environment require instantaneous wind characteristics. The influence of the wind environment on the building pressure distribution is a well-established fact in experimental wind engineering. Measured wind data, at full or model scale, are available only at a limited number of points. A method of obtaining instantaneous wind data at all mesh points of the inlet boundary is therefore necessary for LES computation. Herein, previous and new wind inflow generation techniques are presented. The generated wind data are then applied to a LES computation of a channel flow. The characteristics of the generated wind fluctuations in comparison to the measured data, and the properties of the flow fields computed from these two wind data sets, are discussed. (author)
Large eddy simulation of vortex breakdown behind a delta wing
International Nuclear Information System (INIS)
Mary, I.
2003-01-01
A large eddy simulation (LES) of a turbulent flow past a 70° sweep-angle delta wing is performed and compared with wind tunnel experiments. The angle of attack and the Reynolds number based on the root chord are equal to 27° and 1.6×10⁶, respectively. Due to the high value of the Reynolds number and the three-dimensional geometry, the mesh resolution usually required by LES cannot be reached. Therefore a local mesh refinement technique based on semi-structured grids is proposed, and different wall functions are assessed. The goal is to evaluate whether these techniques are sufficient to provide an accurate solution of such a flow on available supercomputers. An implicit Miles model is retained for the subgrid-scale (SGS) modelling because the resolution is too coarse to take advantage of more sophisticated SGS models. The solution sensitivity to grid refinement in the streamwise and wall-normal directions is investigated.
Langevin dynamics simulations of large frustrated Josephson junction arrays
International Nuclear Information System (INIS)
Groenbech-Jensen, N.; Bishop, A.R.; Lomdahl, P.S.
1991-01-01
Long-time Langevin dynamics simulations of large (N x N, N = 128) 2-dimensional arrays of Josephson junctions in a uniformly frustrating external magnetic field are reported. The results demonstrate: (1) relaxation from an initially random flux configuration following a universal fit to a glassy stretched-exponential type of relaxation for intermediate temperatures (0.3 Tc ≲ T ≲ 0.7 Tc), and activated dynamic behavior for T ∼ Tc; (2) a glassy (multi-time, multi-length-scale) voltage response to an applied current. Intrinsic dynamical symmetry breaking induced by boundaries as nucleation sites for flux-lattice defects gives rise to a transverse and noisy voltage response.
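The time stepping behind such simulations can be sketched with a plain Euler-Maruyama update of overdamped Langevin dynamics. The nearest-neighbour sine coupling below is a simplified stand-in for the frustrated Josephson-junction array model; all parameter values are illustrative assumptions.

```python
import numpy as np

def langevin_step(theta, drift, dt, T, rng):
    """One Euler-Maruyama step of overdamped Langevin dynamics,
    d(theta)/dt = drift(theta) + noise, with Gaussian thermal noise of
    variance 2*T*dt (units absorbed into T)."""
    noise = rng.normal(0.0, np.sqrt(2.0 * T * dt), size=theta.shape)
    return theta + drift(theta) * dt + noise

def drift(theta):
    """Deterministic force from nearest-neighbour phase coupling on a
    periodic 2-D grid (frustration field omitted in this sketch)."""
    return sum(np.sin(np.roll(theta, s, ax) - theta)
               for s in (1, -1) for ax in (0, 1))

rng = np.random.default_rng(1)
theta = rng.uniform(-np.pi, np.pi, (16, 16))   # random initial configuration
for _ in range(100):
    theta = langevin_step(theta, drift, dt=0.05, T=0.3, rng=rng)
```

Long-time relaxation studies such as the one above then monitor observables of `theta` (energy, flux configuration, voltage response) over many decades of simulated time.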
Large-Eddy-Simulation of turbulent magnetohydrodynamic flows
Directory of Open Access Journals (Sweden)
Woelck Johannes
2017-01-01
A magnetohydrodynamic turbulent channel flow under the influence of a wall-normal magnetic field is investigated using the Large-Eddy-Simulation technique and a k-equation subgrid-scale model. For this purpose, the new solver MHDpisoFoam is implemented in the OpenFOAM CFD code. The temporal decay of an initial turbulent field for different magnetic parameters is investigated. The rms values of the averaged velocity fluctuations show a similar trend for each coordinate direction: 80% of the fluctuations are damped out in the range 0 < Ha < 75 at Re = 6675. The trend can be approximated via an exponential of the form exp(−a·Ha), where a is a scaling parameter. At higher Hartmann numbers the fluctuations decrease in an almost linear way. The results of this study therefore suggest that it may be possible to construct a general law for the turbulence damping due to the action of magnetic fields.
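An exponential approximation of the form exp(−a·Ha) can be recovered from damping data with a log-linear least-squares fit. A minimal sketch on synthetic stand-in data (the coefficient 0.021 here is an arbitrary example, not the paper's fitted value):

```python
import numpy as np

def fit_exponential_decay(ha, rms):
    """Fit rms ≈ rms0 * exp(-a * Ha) by least squares on log(rms),
    which turns the exponential into a straight line."""
    slope, intercept = np.polyfit(ha, np.log(rms), 1)
    return -slope, np.exp(intercept)   # a, rms0

# Synthetic stand-in for measured fluctuation levels versus Hartmann number
ha = np.linspace(0.0, 75.0, 16)
rms = 1.0 * np.exp(-0.021 * ha)
a, rms0 = fit_exponential_decay(ha, rms)
```

The log transform only works while the data stay clearly positive; the nearly linear decrease reported at higher Hartmann numbers would need a separate fit.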
Large eddy simulation of the flow through a swirl generator
Energy Technology Data Exchange (ETDEWEB)
Conway, Stephen
1998-12-01
The advances made in computer technology over recent years have led to a great increase in the engineering problems that can be studied using CFD. The computation of flows over and through complex geometries at relatively high Reynolds numbers is becoming more common using the Large Eddy Simulation (LES) technique. Direct numerical simulation of such flows is still beyond the capacity of today's fastest supercomputers, requiring excessive computational times and memory. In addition, traditional Reynolds Averaged Navier-Stokes (RANS) methods are known to have limited applicability in a wide range of engineering flow situations. In this thesis LES has been used to simulate the flow through a cascade of guidance vanes, more commonly known as a swirl generator, positioned at the inlet to a gas turbine combustion chamber. This flow case is of interest because of the complex flow phenomena which occur within the swirl generator, which include compressibility effects, different types of flow instabilities, transition, laminar and turbulent separation, and near-wall turbulence. It is also of interest because it fits very well into the range of engineering applications that can be studied using LES. Two computational grids with different resolutions and two subgrid-scale stress models were used in the study. The effects of separation and transition are investigated. A vortex shedding frequency from the guidance vanes is determined, which is seen to be dependent on the angle of incident air flow. Interaction between the movement of the separation region and the shedding frequency is also noted. Such vortex shedding phenomena can directly affect the quality of fuel and air mixing within the combustion chamber and can in some cases induce vibrations in the gas turbine structure. Comparisons are also made between the results obtained using different grid resolutions with implicit and dynamic divergence (DDM) subgrid-scale stress models. 32 refs, 35 figs, 2 tabs
Large Eddy Simulation of Vertical Axis Wind Turbine Wakes
Directory of Open Access Journals (Sweden)
Sina Shamsoddin
2014-02-01
In this study, large eddy simulation (LES) is combined with a turbine model to investigate the wake behind a vertical-axis wind turbine (VAWT) in a three-dimensional turbulent flow. Two methods are used to model the subgrid-scale (SGS) stresses: (a) the Smagorinsky model; and (b) the modulated gradient model. To parameterize the effects of the VAWT on the flow, two VAWT models are developed: (a) the actuator swept-surface model (ASSM), in which the time-averaged turbine-induced forces are distributed on a surface swept by the turbine blades, i.e., the actuator swept surface; and (b) the actuator line model (ALM), in which the instantaneous blade forces are only spatially distributed on lines representing the blades, i.e., the actuator lines. This is the first time that LES has been applied and validated for the simulation of VAWT wakes using either the ASSM or the ALM techniques. In both models, blade-element theory is used to calculate the lift and drag forces on the blades. The results are compared with flow measurements in the wake of a model straight-bladed VAWT, carried out in the Institut de Mécanique Statistique de la Turbulence (IMST) water channel. Different combinations of SGS models with VAWT models are studied, and a fairly good overall agreement between simulation results and measurement data is observed. In general, the ALM is found to better capture the unsteady-periodic nature of the wake and shows a better agreement with the experimental data compared with the ASSM. The modulated gradient model is also found to be a more reliable SGS stress modeling technique, compared with the Smagorinsky model, and it yields reasonable predictions of the mean flow and turbulence characteristics of a VAWT wake using its theoretically-determined model coefficient.
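The blade-element force evaluation that both the ASSM and the ALM rely on can be sketched as follows. The thin-airfoil lift slope and the crude drag polar here are illustrative assumptions; real implementations interpolate tabulated Cl(α), Cd(α) airfoil data.

```python
import numpy as np

def blade_element_force(rho, chord, span, v_rel, alpha, cl_slope=2 * np.pi):
    """Per-element lift and drag from blade-element theory. These forces
    are what an actuator model projects back onto the flow as body
    forces. Assumed closures: thin-airfoil lift Cl = 2*pi*alpha and a
    simple parabolic drag polar."""
    q_area = 0.5 * rho * v_rel ** 2 * chord * span   # dynamic pressure * area
    cl = cl_slope * alpha
    cd = 0.01 + 0.05 * cl ** 2                       # assumed drag polar
    return q_area * cl, q_area * cd                  # lift, drag [N]

# One element in water at a 6-degree angle of attack (illustrative values)
lift, drag = blade_element_force(rho=1000.0, chord=0.05, span=0.1,
                                 v_rel=2.0, alpha=np.radians(6.0))
```

For a VAWT the relative velocity and angle of attack vary continuously around each revolution, which is why the ALM, evaluating these forces instantaneously per blade, captures the unsteady-periodic wake better than the time-averaged ASSM.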
A laser particulate spectrometer for a space simulation facility
Schmitt, R. J.; Boyd, B. A.; Linford, R. M. F.; Richmond, R. G.
1975-01-01
A laser particulate spectrometer (LPS) system was developed to measure the size and speed distributions of particulate contaminants. Detection of the particulates is achieved by means of light scattering and extinction effects using a single laser beam to cover a size range of 0.8 to 275 microns diameter and a speed range of 0.2 to 20 meters/second. The LPS system was designed to operate in the high-vacuum environment of a space simulation chamber with cold shroud temperatures ranging from 77 to 300 K.
Program NAJOCSC and space charge effect simulation in C01
International Nuclear Information System (INIS)
Tang, J.Y.; Chabert, A.; Baron, E.
1999-01-01
During the beam tests of the THI project at GANIL, it was found difficult to increase the beam power above 2 kW at CSS2 extraction. The space charge (S.C.) effect in cyclotrons is suspected to play some role in the phenomenon, especially the longitudinal S.C. effect and the coupling between longitudinal and radial motions. The injector cyclotron C01 is studied, and the role played by the S.C. effect in this cyclotron in the THI case is investigated by a simulation method. (K.A.)
Simulations of space charge neutralization in a magnetized electron cooler
Energy Technology Data Exchange (ETDEWEB)
Gerity, James [Texas A&M]; McIntyre, Peter M. [Texas A&M]; Bruhwiler, David Leslie [RadiaSoft, Boulder]; Hall, Christopher [RadiaSoft, Boulder]; Moens, Vince Jan [Ecole Polytechnique, Lausanne]; Park, Chong Shik [Fermilab]; Stancari, Giulio [Fermilab]
2017-02-02
Magnetized electron cooling at relativistic energies and Ampere scale current is essential to achieve the proposed ion luminosities in a future electron-ion collider (EIC). Neutralization of the space charge in such a cooler can significantly increase the magnetized dynamic friction and, hence, the cooling rate. The Warp framework is being used to simulate magnetized electron beam dynamics during and after the build-up of neutralizing ions, via ionization of residual gas in the cooler. The design follows previous experiments at Fermilab as a verification case. We also discuss the relevance to EIC designs.
Study of Hydrokinetic Turbine Arrays with Large Eddy Simulation
Sale, Danny; Aliseda, Alberto
2014-11-01
Marine renewable energy is advancing towards commercialization, including electrical power generation from ocean, river, and tidal currents. The focus of this work is to develop numerical simulations capable of predicting the power generation potential of hydrokinetic turbine arrays; this includes analysis of unsteady and averaged flow fields, turbulence statistics, and unsteady loadings on turbine rotors and support structures due to interaction with rotor wakes and ambient turbulence. The governing equations of large-eddy simulation (LES) are solved using a finite-volume method, and the presence of the turbine blades is approximated by the actuator-line method, in which hydrodynamic forces are projected onto the flow field as a body force. The actuator-line approach captures helical wake formation, including vortex shedding from individual blades, and the effects of drag and vorticity generation from the rough seabed surface are accounted for by wall models. This LES framework was used to replicate a previous flume experiment consisting of three hydrokinetic turbines tested under various operating conditions and array layouts. Predictions of the power generation, velocity deficit, and turbulence statistics in the wakes are compared between the LES and experimental datasets.
Private ground infrastructures for space exploration missions simulations
Souchier, Alain
2010-06-01
The Mars Society, a private non-profit organisation devoted to promoting the exploration of the red planet, decided to implement simulated Mars habitats at two locations on Earth: in northern Canada on the rim of a meteoritic crater (2000), and in a Utah desert in the US, the location of a past Jurassic sea (2001). These habitats have been built with large similarities to the habitats actually planned for the first Mars exploration missions. Participation is open to everybody, either proposing experiments or wishing only to take part as a crew member. Participants come from different organizations: the Mars Society, universities, and experimenters working with NASA or ESA. The general philosophy of the work conducted is not to do innovative scientific work in the field but to learn how scientific work is affected or modified by the simulation conditions. Outside activities are conducted with simulated spacesuits limiting the experimenter's abilities. Technology and procedure experiments are also conducted, as well as experiments on crew psychology and behaviour.
Large-Eddy Simulation Using Projection onto Local Basis Functions
Pope, S. B.
In the traditional approach to LES for inhomogeneous flows, the resolved fields are obtained by a filtering operation (with filter width Delta). The equations governing the resolved fields are then partial differential equations, which are solved numerically (on a grid of spacing h). For an LES computation of a given magnitude (i.e., given h), there are conflicting considerations in the choice of Delta: to resolve a large range of turbulent motions, Delta should be small; to solve the equations with numerical accuracy, Delta should be large. In the alternative approach advanced here, this conflict is avoided. The resolved fields are defined by projection onto local basis functions, so that the governing equations are ordinary differential equations for the evolution of the basis-function coefficients. There is no issue of numerical spatial discretization errors. A general methodology for modelling the effects of the residual motions is developed. The model is based directly on the basis-function coefficients, and its effect is to smooth the fields where their rates of change are not well resolved by the basis functions. Demonstration calculations are performed for Burgers' equation.
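The projection defining the resolved fields can be illustrated with the simplest local basis, piecewise-constant functions, whose L2-projection coefficients reduce to scaled cell averages. This is a minimal sketch of the projection step only (per-cell Legendre polynomials would generalize it to higher order), not the residual model or the Burgers' equation calculations of the paper.

```python
import numpy as np

def project_onto_local_basis(f, n_cells, x0=0.0, x1=1.0, quad_pts=64):
    """L2-project a function onto an orthonormal piecewise-constant local
    basis (one basis function per cell), using midpoint quadrature. The
    basis-function coefficients, not grid-point values, are the unknowns
    that would then be evolved by ordinary differential equations."""
    edges = np.linspace(x0, x1, n_cells + 1)
    h = edges[1] - edges[0]
    coeffs = np.empty(n_cells)
    for i in range(n_cells):
        xq = edges[i] + h * (np.arange(quad_pts) + 0.5) / quad_pts
        # <f, phi_i> with phi_i = 1/sqrt(h) on cell i, zero elsewhere
        coeffs[i] = f(xq).mean() * np.sqrt(h)
    return coeffs, h

coeffs, h = project_onto_local_basis(np.sin, 32, 0.0, np.pi)
reconstruction = coeffs / np.sqrt(h)   # recovered cell averages of sin
```

Because the resolved field is defined by this projection rather than by pointwise sampling, there is no separate spatial-discretization error to control: the only approximation is the finite basis itself.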
Average accelerator simulation Truebeam using phase space in IAEA format
International Nuclear Information System (INIS)
Santana, Emico Ferreira; Milian, Felix Mas; Paixao, Paulo Oliveira; Costa, Raranna Alves da; Velasco, Fermin Garcia
2015-01-01
This paper uses a computational code for radiation-transport simulation based on the Monte Carlo technique in order to model a linear accelerator used for radiotherapy. This work is the initial step of future proposals aiming to study several radiotherapy treatments of patients, employing computational modeling in cooperation with the institutions UESC, IPEN, UFRJ and COI. The chosen simulation code is GATE/Geant4, and the modeled accelerator is the TrueBeam from Varian. The geometric modeling was based on technical manuals, and the radiation sources on the phase-space files for photons provided by the manufacturer in the IAEA (International Atomic Energy Agency) format. The simulations were carried out under conditions equal to those of the experimental measurements. Photon beams of 6 MV with a 10 × 10 cm field were studied, focused on a water phantom. For validation, depth-dose curves and lateral profiles at different depths from the simulated results were compared with experimental data. The final model of this accelerator will be used in future work involving treatments and real patients. (author)
Evolution of the large Deep Space Network antennas
Imbriale, William A.
1991-12-01
The evolution of the largest antennas of NASA's Deep Space Network (DSN) is described, covering their design, performance analysis, and measurement techniques, from the initial 64-m operation at S-band (2295 MHz) in 1966 through the present Ka-band (32-GHz) operation at 70 m. Although their diameters and mountings differ, these parabolic antennas all employ a Cassegrainian feed system, and each antenna dish surface is constructed of precision-shaped perforated-aluminum panels secured to an open steel framework.
Tradespace investigation of strategic design factors for large space telescopes
Karlow, Brandon; Jewison, Christopher; Sternberg, David; Hall, Sherrie; Golkar, Alessandro
2015-04-01
Future large telescope arrays require careful balancing of satisfaction across the stakeholder community. Development programs usually cannot afford to explicitly address all stakeholder tradeoffs during the conceptual design stage, but rather confine the analysis to performance, cost, and schedule discussions, treating policy and budget as constraints defining the envelope of the investigation. It is therefore of interest to develop an integrated stakeholder analysis approach that explicitly addresses the impact of all stakeholder interactions on the design of large telescope arrays meeting future science and exploration needs. This paper offers a quantitative approach for modeling some of the stakeholder influences relevant to large telescope array designs: the linkages between a given mission and the wider NASA community. The main goal of the analysis is to explore the tradespace of large telescope designs and understand the effects of different design decisions on the stakeholder network. Proposed architectures that offer benefits to existing constellations of systems, institutions, and mission plans are expected to yield political and engineering benefits for NASA stakeholders' wider objectives. If such synergistic architectures are privileged in subsequent analysis, regions of the tradespace that better meet the needs of the wider NASA community can be selected for further development.
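The kind of tradespace enumeration described can be sketched as brute-force scoring of design vectors against stakeholder utility functions (all option names, weights, and utilities below are invented for illustration, not taken from the paper):

```python
from itertools import product

def enumerate_tradespace(design_options, utility_fns):
    """Exhaustively score a tradespace: every combination of design
    choices is evaluated against each stakeholder's utility function
    and ranked by total benefit."""
    names = list(design_options)
    scored = []
    for values in product(*design_options.values()):
        design = dict(zip(names, values))
        score = sum(u(design) for u in utility_fns)
        scored.append((score, design))
    return sorted(scored, key=lambda s: s[0], reverse=True)

# hypothetical telescope-array design variables and stakeholder utilities
options = {"aperture_m": [6, 10, 15], "orbit": ["LEO", "SEL2"]}
stakeholders = [
    lambda d: d["aperture_m"],                       # science: bigger is better
    lambda d: -0.5 * d["aperture_m"],                # cost: bigger is costlier
    lambda d: 2.0 if d["orbit"] == "SEL2" else 0.0,  # ops: prefers Sun-Earth L2
]
best_score, best_design = enumerate_tradespace(options, stakeholders)[0]
```

Real stakeholder models would replace the toy utilities with elicited value functions, but the ranking loop is the same.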
Sampling large random knots in a confined space
International Nuclear Information System (INIS)
Arsuaga, J; Blackstone, T; Diao, Y; Hinson, K; Karadayi, E; Saito, M
2007-01-01
DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order O(n^2). Therefore, two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
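As a rough illustration of the model (an independent sketch, not the authors' code), one can sample a uniform random polygon in the unit cube, project it onto the xy-plane, and count the crossings of the resulting diagram; the count should grow rapidly with the number of vertices:

```python
import numpy as np

def segments_cross(p1, p2, p3, p4):
    """True if segments p1p2 and p3p4 properly intersect (generic position)."""
    def orient(a, b, c):
        return np.sign((b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]))
    return (orient(p1, p2, p3) != orient(p1, p2, p4)
            and orient(p3, p4, p1) != orient(p3, p4, p2))

def crossing_number(n, rng=None):
    """Sample a uniform random polygon of n vertices confined to the unit
    cube, project it onto the xy-plane, and count crossings between
    non-adjacent edges of the resulting diagram."""
    if rng is None:
        rng = np.random.default_rng(0)
    verts = rng.random((n, 3))[:, :2]        # uniform vertices; keep x, y
    crossings = 0
    for i in range(n):
        for j in range(i + 2, n):            # skip the edge itself and its successor
            if i == 0 and j == n - 1:
                continue                     # edges sharing the wrap-around vertex
            a, b = verts[i], verts[(i + 1) % n]
            c, d = verts[j], verts[(j + 1) % n]
            crossings += segments_cross(a, b, c, d)
    return crossings
```

Computing determinants or colorings of the resulting diagrams, as the paper does, requires substantially more knot-theoretic machinery than this sketch.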
Sampling large random knots in a confined space
Arsuaga, J.; Blackstone, T.; Diao, Y.; Hinson, K.; Karadayi, E.; Saito, M.
2007-09-01
DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order O(n^2). Therefore, two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
Sampling large random knots in a confined space
Energy Technology Data Exchange (ETDEWEB)
Arsuaga, J [Department of Mathematics, San Francisco State University, 1600 Holloway Ave, San Francisco, CA 94132 (United States); Blackstone, T [Department of Computer Science, San Francisco State University, 1600 Holloway Ave., San Francisco, CA 94132 (United States); Diao, Y [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Hinson, K [Department of Mathematics and Statistics, University of North Carolina at Charlotte, Charlotte, NC 28223 (United States); Karadayi, E [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States); Saito, M [Department of Mathematics, University of South Florida, 4202 E Fowler Avenue, Tampa, FL 33620 (United States)
2007-09-28
DNA knots formed under extreme conditions of condensation, as in bacteriophage P4, are difficult to analyze experimentally and theoretically. In this paper, we propose to use the uniform random polygon model as a supplementary method to the existing methods for generating random knots in confinement. The uniform random polygon model allows us to sample knots with large crossing numbers and also to generate large diagrammatically prime knot diagrams. We show numerically that uniform random polygons sample knots with large minimum crossing numbers and certain complicated knot invariants (as those observed experimentally). We do this in terms of the knot determinants or colorings. Our numerical results suggest that the average determinant of a uniform random polygon of n vertices grows faster than O(e^{n^2}). We also investigate the complexity of prime knot diagrams. We show rigorously that the probability that a randomly selected 2D uniform random polygon of n vertices is almost diagrammatically prime goes to 1 as n goes to infinity. Furthermore, the average number of crossings in such a diagram is of the order O(n^2). Therefore, two-dimensional uniform random polygons offer an effective way of sampling large (prime) knots, which can be useful in various applications.
International Nuclear Information System (INIS)
Sabati, M; Lauzon, M L; Frayne, R
2003-01-01
Data acquisition using a continuously moving table approach is a method capable of generating large field-of-view (FOV) 3D MR angiograms. However, in order to obtain venous contamination-free contrast-enhanced (CE) MR angiograms in the lower limbs, one of the major challenges is to acquire all necessary k-space data during the restricted arterial phase of the contrast agent. Preliminary investigation on the space-time relationship of continuously acquired peripheral angiography is performed in this work. Deterministic and stochastic undersampled hybrid-space (x, k_y, k_z) acquisitions are simulated for large FOV peripheral runoff studies. Initial results show the possibility of acquiring isotropic large FOV images of the entire peripheral vascular system. An optimal trade-off between the spatial and temporal sampling properties was found that produced a high-spatial resolution peripheral CE-MR angiogram. The deterministic sampling pattern was capable of reconstructing the global structure of the peripheral arterial tree and showed slightly better global quantitative results than stochastic patterns. Optimal stochastic sampling patterns, on the other hand, enhanced small vessels and had more favourable local quantitative results. These simulations demonstrate the complex spatial-temporal relationship when sampling large FOV peripheral runoff studies. They also suggest that more investigation is required to maximize image quality as a function of hybrid-space coverage, acquisition repetition time and sampling pattern parameters.
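The two sampling strategies can be illustrated by generating undersampling masks over the (k_y, k_z) plane (mask sizes and the sampling fraction are illustrative choices, not the study's parameters):

```python
import numpy as np

def ky_kz_masks(n_ky=64, n_kz=64, keep=0.25, seed=0):
    """Deterministic vs. stochastic undersampling masks for a
    hybrid-space (x, k_y, k_z) acquisition.  Both masks keep the same
    fraction of samples: the deterministic one on a regular lattice,
    the stochastic one at uniformly random positions."""
    step = int(round(1 / keep ** 0.5))
    deterministic = np.zeros((n_ky, n_kz), dtype=bool)
    deterministic[::step, ::step] = True     # regular lattice of samples
    rng = np.random.default_rng(seed)
    stochastic = np.zeros((n_ky, n_kz), dtype=bool)
    flat = rng.choice(n_ky * n_kz, size=int(deterministic.sum()), replace=False)
    stochastic.flat[flat] = True             # same budget, random positions
    return deterministic, stochastic

det_mask, sto_mask = ky_kz_masks()
```

In practice the masks would additionally be weighted toward the k-space centre, which dominates image contrast; the sketch only contrasts the lattice and random layouts.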
Large-Eddy Simulation of turbulent vortex shedding
International Nuclear Information System (INIS)
Archambeau, F.
1995-06-01
This thesis documents the development and application of a computational algorithm for Large-Eddy Simulation. Unusually, the method adopts a fully collocated variable storage arrangement and is applicable to complex, non-rectilinear geometries. A Reynolds-averaged Navier-Stokes algorithm has formed the starting point of the development, but has been modified substantially: the spatial approximation of convection is effected by an energy-conserving central-differencing scheme; a second-order time-marching Adams-Bashforth scheme has been introduced; the pressure field is determined by solving the pressure-Poisson equation; this equation is solved either by use of preconditioned Conjugate-Gradient methods or with the Generalised Minimum Residual method; two types of sub-grid scale models have been introduced and examined. The algorithm has been validated by reference to a hierarchy of unsteady flows of increasing complexity starting with unsteady lid-driven cavity flows and ending with 3-D turbulent vortex shedding behind a square prism. In the latter case, for which extensive experimental data are available, special emphasis has been put on examining the dependence of the results on mesh density, near-wall treatment and the nature of the sub-grid-scale model, one of which is an advanced dynamic model. The LES scheme is shown to return time-average and phase-averaged results which agree well with experimental data and which support the view that LES is a promising approach for unsteady flows dominated by large periodic structures. (author)
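The second-order Adams-Bashforth time-marching scheme mentioned above can be illustrated on a scalar ODE (a generic sketch, not the thesis code):

```python
import math

def adams_bashforth2(f, u0, dt, steps):
    """Second-order Adams-Bashforth time marching,
        u^{n+1} = u^n + dt * (3/2 f(u^n) - 1/2 f(u^{n-1})),
    bootstrapped with a single forward-Euler step."""
    u = u0
    f_prev = f(u)
    u = u + dt * f_prev                      # Euler start-up step
    for _ in range(steps - 1):
        f_curr = f(u)
        u = u + dt * (1.5 * f_curr - 0.5 * f_prev)
        f_prev = f_curr
    return u

# exponential decay du/dt = -u integrated to t = 1; exact answer exp(-1)
u_end = adams_bashforth2(lambda u: -u, 1.0, 0.001, 1000)
```

In the LES algorithm this explicit scheme advances the convective terms, while the pressure is obtained implicitly each step from the pressure-Poisson equation.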
Large Eddy Simulation Study for Fluid Disintegration and Mixing
Bellan, Josette; Taskinoglu, Ezgi
2011-01-01
A new modeling approach is based on the concept of large eddy simulation (LES), within which the large scales are computed and the small scales are modeled. The new approach is expected to retain the fidelity of the physics while also being computationally efficient. Typically, only models for the small-scale fluxes of momentum, species, and enthalpy are used to reintroduce into the simulation the physics lost because the computation resolves only the large scales. These models are called subgrid-scale (SGS) models because they operate at a scale smaller than the LES grid. In a previous study of thermodynamically supercritical fluid disintegration and mixing, additional small-scale terms, one in the momentum and one in the energy conservation equation, were identified as requiring modeling. These additional terms were due to the tight coupling between dynamics and real-gas thermodynamics. It was inferred that without the additional term in the momentum equation, the regions of high density-gradient magnitude, experimentally identified as a characteristic feature of these flows, would not be accurately predicted; these regions were experimentally shown to redistribute turbulence in the flow. It was likewise inferred that without the additional term in the energy equation, the heat-flux magnitude could not be accurately predicted; the heat flux to the wall of combustion devices is a crucial quantity that determines the necessary wall material properties. The present work involves situations where only the term in the momentum equation is important. Without this additional term, neither the SGS-flux constant-coefficient Smagorinsky model nor the SGS-flux constant-coefficient Gradient model could reproduce in LES the pressure field or the high density-gradient magnitude regions; the SGS-flux constant-coefficient Scale-Similarity model was the most successful in this endeavor although not
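For reference, the constant-coefficient Smagorinsky SGS model named above computes an eddy viscosity from the resolved strain rate; a minimal 2D sketch (grid layout and coefficient value are illustrative assumptions):

```python
import numpy as np

def smagorinsky_viscosity(u, v, dx, cs=0.17):
    """Constant-coefficient Smagorinsky SGS eddy viscosity on a uniform
    2D grid of spacing dx:
        nu_t = (C_s * Delta)^2 * |S|,   |S| = sqrt(2 S_ij S_ij),
    with the filter width Delta taken equal to dx."""
    dudx, dudy = np.gradient(u, dx)   # np.gradient: derivative along axis 0, then axis 1
    dvdx, dvdy = np.gradient(v, dx)   # arrays are indexed [x, y]
    s11, s22 = dudx, dvdy
    s12 = 0.5 * (dudy + dvdx)
    s_mag = np.sqrt(2.0 * (s11 ** 2 + s22 ** 2 + 2.0 * s12 ** 2))
    return (cs * dx) ** 2 * s_mag
```

For a pure shear flow u = y, v = 0 the strain-rate magnitude is 1 everywhere, so the model returns the constant (C_s Δ)².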
Enhanced 2D-DOA Estimation for Large Spacing Three-Parallel Uniform Linear Arrays
Directory of Open Access Journals (Sweden)
Dong Zhang
2018-01-01
An enhanced two-dimensional direction-of-arrival (2D-DOA) estimation algorithm for large-spacing three-parallel uniform linear arrays (ULAs) is proposed in this paper. First, we use the propagator method (PM) to obtain highly accurate but ambiguous estimates of the directional cosines. Then, we use the relationship between the directional cosines to eliminate the ambiguity. This algorithm not only makes use of all elements of the three-parallel ULAs but also exploits the connection between the directional cosines to improve estimation accuracy. In addition, it retains satisfactory estimation performance when the elevation angle is between 70° and 90°, and it automatically pairs the estimated azimuth and elevation angles. Furthermore, it has low complexity, requiring no eigenvalue decomposition (EVD) or singular value decomposition (SVD) of the covariance matrix. Simulation results demonstrate the effectiveness of the proposed algorithm.
Unified Simulation and Analysis Framework for Deep Space Navigation Design
Anzalone, Evan; Chuang, Jason; Olsen, Carrie
2013-01-01
As the technology that enables advanced deep-space autonomous navigation continues to develop and the requirements for such capability continue to grow, there is a clear need for a modular, expandable simulation framework. The tool's purpose is to address multiple measurement and information sources in order to capture system capability. This is needed to analyze the capability of competing navigation systems as well as to develop system requirements and determine their effect on the sizing of the integrated vehicle. The development of such a framework builds upon Model-Based Systems Engineering techniques to capture the architecture of the navigation system and the possible state measurements and observations that feed into the simulation implementation structure. These models also provide a common environment for capturing an increasingly complex operational architecture involving multiple spacecraft, ground stations, and communication networks. To address these architectural developments, a framework of agent-based modules is implemented to capture the independent operations of individual spacecraft as well as the network interactions among spacecraft. This paper describes the development of this framework and the modeling processes used to capture a deep-space navigation system. Additionally, a sample implementation describing a concept of network-based navigation utilizing digitally transmitted data packets is described in detail. The developed package demonstrates the capability of the modeling framework, including its modularity, its analysis capabilities, and its unification back to the overall system requirements and definition.
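The agent-based, packet-passing idea can be caricatured in a few lines (agent names, the light time, and the packet layout are invented for illustration; this is not the paper's framework):

```python
from dataclasses import dataclass, field

@dataclass
class Spacecraft:
    """Minimal agent for a network-based navigation concept: each
    spacecraft acts independently and logs ranging packets received
    from peers over the network."""
    name: str
    log: list = field(default_factory=list)

    def send_packet(self, peer, send_time):
        # one-way ranging: the receiver stores (sender, t_sent, t_received)
        flight_time = 1.3  # assumed one-way light time between agents, s
        peer.log.append((self.name, send_time, send_time + flight_time))

a, b = Spacecraft("A"), Spacecraft("B")
a.send_packet(b, 10.0)
sender, t_tx, t_rx = b.log[0]
range_km = (t_rx - t_tx) * 299_792.458   # light time converted to distance
```

A real framework would carry clock-bias states and measurement noise on each agent; the sketch only shows the packet exchange that a navigation filter would consume.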
High stability space frame for a large fusion laser
International Nuclear Information System (INIS)
Hurley, C.A.; Myall, J.O.
1975-01-01
The Shiva laser system is a large neodymium-glass laser target-irradiation facility being constructed at LLL to perform laser fusion experiments. A frame is being constructed to support the large number of laser components that make up the Shiva system. Twenty laser chains composed of amplifiers, spatial filters, polarizers, rotators, and mirrors will be arranged in an optimum geometry so that each beam arrives at the target simultaneously and within alignment tolerances. The frame is capable of supporting approximately 600 individual component assemblies while maintaining a tolerance of ±4 μrad rotation between any two points over a period of 100 s. Consideration has been given to the positional stability and support of the components, the geometrical array of stacked beams with respect to the oscillator and target, the routing of utilities (e.g., power cables and cooling-gas pipes), good accessibility for operation and maintenance, and adaptability for change and growth.
Space Debris Attitude Simulation - IOTA (In-Orbit Tumbling Analysis)
Kanzler, R.; Schildknecht, T.; Lips, T.; Fritsche, B.; Silha, J.; Krag, H.
Today, there is little knowledge on the attitude state of decommissioned intact objects in Earth orbit. Observational means have advanced in the past years, but are still limited with respect to an accurate estimate of motion vector orientations and magnitude. Especially for the preparation of Active Debris Removal (ADR) missions as planned by ESA's Clean Space initiative or contingency scenarios for ESA spacecraft like ENVISAT, such knowledge is needed. The In-Orbit Tumbling Analysis tool (IOTA) is a prototype software, currently in development within the framework of ESA's “Debris Attitude Motion Measurements and Modelling” project (ESA Contract No. 40000112447), which is led by the Astronomical Institute of the University of Bern (AIUB). The project goal is to achieve a good understanding of the attitude evolution and the considerable internal and external effects which occur. To characterize the attitude state of selected targets in LEO and GTO, multiple observation methods are combined. Optical observations are carried out by AIUB, Satellite Laser Ranging (SLR) is performed by the Space Research Institute of the Austrian Academy of Sciences (IWF) and radar measurements and signal level determination are provided by the Fraunhofer Institute for High Frequency Physics and Radar Techniques (FHR). Developed by Hyperschall Technologie Göttingen GmbH (HTG), IOTA will be a highly modular software tool to perform short- (days), medium- (months) and long-term (years) propagation of the orbit and attitude motion (six degrees-of-freedom) of spacecraft in Earth orbit. The simulation takes into account all relevant acting forces and torques, including aerodynamic drag, solar radiation pressure, gravitational influences of Earth, Sun and Moon, eddy current damping, impulse and momentum transfer from space debris or micro meteoroid impact, as well as the optional definition of particular spacecraft specific influences like tank sloshing, reaction wheel behaviour
Virtual Reality Simulation of the International Space Welding Experiment
Phillips, James A.
1996-01-01
Virtual Reality (VR) is a set of breakthrough technologies that allow a human being to enter and fully experience a 3-dimensional, computer simulated environment. A true virtual reality experience meets three criteria: (1) It involves 3-dimensional computer graphics; (2) It includes real-time feedback and response to user actions; and (3) It must provide a sense of immersion. Good examples of a virtual reality simulator are the flight simulators used by all branches of the military to train pilots for combat in high performance jet fighters. The fidelity of such simulators is extremely high -- but so is the price tag, typically millions of dollars. Virtual reality teaching and training methods are manifestly effective, and we have therefore implemented a VR trainer for the International Space Welding Experiment. My role in the development of the ISWE trainer consisted of the following: (1) created texture-mapped models of the ISWE's rotating sample drum, technology block, tool stowage assembly, sliding foot restraint, and control panel; (2) developed C code for control panel button selection and rotation of the sample drum; (3) In collaboration with Tim Clark (Antares Virtual Reality Systems), developed a serial interface box for the PC and the SGI Indigo so that external control devices, similar to ones actually used on the ISWE, could be used to control virtual objects in the ISWE simulation; (4) In collaboration with Peter Wang (SFFP) and Mark Blasingame (Boeing), established the interference characteristics of the VIM 1000 head-mounted-display and tested software filters to correct the problem; (5) In collaboration with Peter Wang and Mark Blasingame, established software and procedures for interfacing the VPL DataGlove and the Polhemus 6DOF position sensors to the SGI Indigo serial ports. The majority of the ISWE modeling effort was conducted on a PC-based VR Workstation, described below.
Space headache on Earth: head-down-tilted bed rest studies simulating outer-space microgravity.
van Oosterhout, W P J; Terwindt, G M; Vein, A A; Ferrari, M D
2015-04-01
Headache is a common symptom during space travel, both in isolation and as part of the space motion syndrome. Head-down-tilted bed rest (HDTBR) studies are used to simulate outer-space microgravity on Earth, and allow countermeasure interventions, such as artificial gravity and training protocols, aimed at restoring the physiological changes induced by microgravity. The objectives of this article are to assess headache incidence and characteristics during HDTBR and to evaluate the effects of countermeasures. In a randomized cross-over design by the European Space Agency (ESA), 22 healthy male subjects without a history of primary headache underwent three periods of -6-degree HDTBR. In two of these episodes countermeasure protocols were added, with either centrifugation or aerobic exercise training. Headache occurrence and characteristics were assessed daily using a specially designed questionnaire. In total, 14/22 (63.6%) subjects reported a headache during at least one of the three HDTBR periods: non-specific in 12/14 (85.7%) and migraine in 2/14 (14.4%). The occurrence of headache did not differ between HDTBR with and without countermeasures: 12/22 (54.5%) vs. 8/22 (36.4%) subjects (p = 0.20); 13/109 (11.9%) vs. 36/213 (16.9%) headache days (p = 0.24). During countermeasures, however, headaches were more often mild (p = 0.03) and had fewer associated symptoms (p = 0.008). Simulated microgravity during HDTBR induces headache episodes, mostly on the first day. Countermeasures are useful in reducing headache severity and associated symptoms. A reversible, microgravity-induced cephalic fluid shift may cause headache, also on Earth. HDTBR can be used to study space headache on Earth.
Integrated visualization of simulation results and experimental devices in virtual-reality space
International Nuclear Information System (INIS)
Ohtani, Hiroaki; Ishiguro, Seiji; Shohji, Mamoru; Kageyama, Akira; Tamura, Yuichi
2011-01-01
We succeeded in integrating the visualization of both simulation results and experimental device data in virtual-reality (VR) space using the CAVE system. Simulation results are shown using the Virtual LHD software, which can display magnetic field lines, particle trajectories, and isosurfaces of plasma pressure of the Large Helical Device (LHD), based on data from the magnetohydrodynamic equilibrium simulation. A three-dimensional mouse, or wand, interactively determines the initial position and pitch angle of a drift particle or the starting point of a magnetic field line in the VR space. The trajectory of a particle and the streamline of the magnetic field are calculated using the Runge-Kutta-Huta integration method once the initial condition has been specified. The LHD vessel is visualized from CAD data. Using these results and data, the simulated LHD plasma can be drawn interactively within an objective representation of the LHD experimental vessel. Through this integrated visualization, it is possible to grasp the three-dimensional relationship between the positions of the device and the plasma in VR space, opening a new path for future research. (author)
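Field-line tracing of the kind described can be sketched with the classical fourth-order Runge-Kutta method standing in for the Runge-Kutta-Huta variant (the field model and step size below are illustrative):

```python
import numpy as np

def trace_field_line(b_field, x0, ds, steps):
    """Trace a magnetic field line by integrating dx/ds = B(x)/|B(x)|
    (arc-length parametrization) with classical fourth-order Runge-Kutta."""
    def f(x):
        b = b_field(x)
        return b / np.linalg.norm(b)
    x = np.asarray(x0, dtype=float)
    path = [x.copy()]
    for _ in range(steps):
        k1 = f(x)
        k2 = f(x + 0.5 * ds * k1)
        k3 = f(x + 0.5 * ds * k2)
        k4 = f(x + ds * k3)
        x = x + (ds / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        path.append(x.copy())
    return np.array(path)

# azimuthal field of a straight wire along z: B ~ (-y, x, 0); field lines are circles
line = trace_field_line(lambda x: np.array([-x[1], x[0], 0.0]),
                        [1.0, 0.0, 0.0], ds=0.01, steps=628)
```

Starting on the unit circle and stepping a total arc length of about 2π, the traced line closes on itself, which is a convenient correctness check.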
Blocked inverted indices for exact clustering of large chemical spaces.
Thiel, Philipp; Sach-Peltason, Lisa; Ottmann, Christian; Kohlbacher, Oliver
2014-09-22
The calculation of pairwise compound similarities based on fingerprints is one of the fundamental tasks in chemoinformatics. Methods for efficient calculation of compound similarities are of the utmost importance for various applications like similarity searching or library clustering. With the increasing size of public compound databases, exact clustering of these databases is desirable, but often computationally prohibitively expensive. We present an optimized inverted index algorithm for the calculation of all pairwise similarities on 2D fingerprints of a given data set. In contrast to other algorithms, it neither requires GPU computing nor yields a stochastic approximation of the clustering. The algorithm has been designed to work well with multicore architectures and shows excellent parallel speedup. As an application example of this algorithm, we implemented a deterministic clustering application, which has been designed to decompose virtual libraries comprising tens of millions of compounds in a short time on current hardware. Our results show that our implementation achieves more than 400 million Tanimoto similarity calculations per second on a common desktop CPU. Deterministic clustering of the available chemical space thus can be done on modern multicore machines within a few days.
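The inverted-index approach to all-pairs Tanimoto similarity can be sketched as follows (a simplified, single-threaded illustration; the paper's blocked, multicore implementation is far more elaborate):

```python
from collections import defaultdict

def pairwise_tanimoto(fingerprints, threshold=0.6):
    """All pairwise Tanimoto similarities at or above a threshold,
    computed through an inverted index (bit -> molecules containing
    that bit), so only pairs sharing at least one bit are ever touched.
    fingerprints: list of sets of on-bit positions."""
    index = defaultdict(list)
    for mol_id, bits in enumerate(fingerprints):
        for bit in bits:
            index[bit].append(mol_id)
    common = defaultdict(int)                 # (i, j) -> intersection size
    for mols in index.values():
        for a in range(len(mols)):
            for b in range(a + 1, len(mols)):
                common[(mols[a], mols[b])] += 1
    results = {}
    for (i, j), c in common.items():
        t = c / (len(fingerprints[i]) + len(fingerprints[j]) - c)
        if t >= threshold:
            results[(i, j)] = t
    return results

fps = [{1, 2, 3, 4}, {2, 3, 4, 5}, {10, 11}]
sims = pairwise_tanimoto(fps, threshold=0.5)
```

Molecules sharing no bits (here molecule 2) never generate a candidate pair, which is the source of the speedup on sparse fingerprints.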
Large Energy Development Projects: Lessons Learned from Space and Politics
International Nuclear Information System (INIS)
Schmitt, Harrison H.
2005-01-01
The challenge to the global energy future lies in meeting the needs and aspirations of the ten to twelve billion earthlings who will be on this planet by 2050. At least an eight-fold increase in annual energy production will be required by the middle of this century. The energy sources that can be considered developed and 'in the box' as sources for major increases in supply over the next half century are fossil fuels, nuclear fission, and, to a lesser degree, various forms of direct and stored solar energy and conservation. None of these near-term sources will provide an eight-fold or greater increase in energy supply, for various technical, environmental and political reasons. Only a few potential energy sources that fall 'out of the box' appear worthy of additional consideration as possible contributors to energy demand in 2050 and beyond. These candidates are deuterium-tritium fusion, space solar energy, and lunar helium-3 fusion. The primary advantage that lunar helium-3 fusion will have over other 'out of the box' energy sources in the pre-2050 timeframe is a clear path into the private capital markets. The development and demonstration of new energy sources will require several development paths, each of Apollo-like complexity and each with sub-paths of parallel development for critical functions and components.
Large Scale Simulation of Hydrogen Dispersion by a Stabilized Balancing Domain Decomposition Method
Directory of Open Access Journals (Sweden)
Qing-He Yao
2014-01-01
The dispersion behaviour of leaking hydrogen in a partially open space is simulated by a balancing domain decomposition method in this work. An analogy of the Boussinesq approximation is employed to describe the coupling between the flow field and the concentration field. The linear systems of the Navier-Stokes equations and the convection-diffusion equation are symmetrized by a pressure-stabilized Lagrange-Galerkin method, which enables a balancing domain decomposition method to solve the interface problem of the decomposed system. Numerical results are validated by comparison with experimental data and available numerical results. The dilution effect of ventilation is investigated, especially at the doors, where the flow pattern is complicated and where oscillations appeared in past research reported by other authors. The transient behaviour of hydrogen and the process of accumulation in the partially open space are discussed, and more details are revealed by large-scale computation.
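The Boussinesq-analogy coupling, a buoyancy body force proportional to the local concentration excess, can be written down directly (the coefficient value below is an illustrative placeholder, not taken from the paper):

```python
def boussinesq_body_force(c, c_ref=0.0, beta=0.92, g=9.81):
    """Boussinesq-analogy vertical body force (per unit mass) feeding
    the concentration field back into the momentum equation:
        f_z = g * beta * (c - c_ref),
    positive upward for a hydrogen-rich (lighter-than-air) mixture.
    beta plays the role of the thermal-expansion coefficient in the
    classical Boussinesq approximation."""
    return g * beta * (c - c_ref)

f_z = boussinesq_body_force(0.1)   # force at 10% hydrogen concentration excess
```

This one-way substitution (density variations retained only in the buoyancy term) is what lets the flow and concentration equations share a common incompressible framework.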
Structural-electromagnetic bidirectional coupling analysis of space large film reflector antennas
Zhang, Xinghua; Zhang, Shuxin; Cheng, ZhengAi; Duan, Baoyan; Yang, Chen; Li, Meng; Hou, Xinbin; Li, Xun
2017-10-01
As used for energy transmission, a space large film reflector antenna (SLFRA) is characterized by large size and sustained exposure to high power density. The structural flexibility and the microwave radiation pressure (MRP) lead to the phenomenon of structural-electromagnetic bidirectional coupling (SEBC). In this paper, the SEBC model of the SLFRA is presented; then the deformation induced by the MRP and the corresponding far-field pattern deterioration are simulated. Results show that the direction of the MRP is along the normal of the reflector surface, and its magnitude is proportional to the power density and to the square of the cosine of the incidence angle. For a typical cosine-distributed electric field, the MRP follows a cosine-squared distribution across the diameter. The maximum deflections of the SLFRA increase linearly with increasing microwave power density and with the square of the reflector diameter, and vary inversely with the film thickness. When the reflector diameter reaches 100 m and the microwave power density exceeds 10² W/cm², the gain loss of the 6.3-μm-thick reflector exceeds 0.75 dB. When the MRP-induced deflection degrades the reflector performance, the SEBC should be taken into account.
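The stated MRP behaviour matches the textbook radiation-pressure relation for a reflecting surface, sketched here (the reflectivity and the normal-incidence example are assumptions for illustration):

```python
from math import cos, radians

C = 2.998e8  # speed of light, m/s

def microwave_radiation_pressure(power_density_w_m2, incidence_deg, reflectivity=1.0):
    """Radiation pressure (N/m^2) on a reflecting surface, acting along
    the surface normal:
        p = (1 + R) * S * cos^2(theta) / c,
    i.e. proportional to the power density S and to the squared cosine
    of the incidence angle, as stated in the abstract."""
    return (1.0 + reflectivity) * power_density_w_m2 * cos(radians(incidence_deg)) ** 2 / C

# 10^2 W/cm^2 = 10^6 W/m^2 at normal incidence on a perfect reflector
p = microwave_radiation_pressure(1e6, 0.0)
```

Even at this power density the pressure is only a few mN/m², but acting over a 100-m membrane of micrometre thickness it produces the deflections the paper analyzes.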
Large-Eddy Simulation of turbulent vortex shedding
Energy Technology Data Exchange (ETDEWEB)
Archambeau, F
1995-06-01
This thesis documents the development and application of a computational algorithm for Large-Eddy Simulation. Unusually, the method adopts a fully collocated variable storage arrangement and is applicable to complex, non-rectilinear geometries. A Reynolds-averaged Navier-Stokes algorithm has formed the starting point of the development, but has been modified substantially: the spatial approximation of convection is effected by an energy-conserving central-differencing scheme; a second-order time-marching Adams-Bashforth scheme has been introduced; the pressure field is determined by solving the pressure-Poisson equation; this equation is solved either by use of preconditioned Conjugate-Gradient methods or with the Generalised Minimum Residual method; two types of sub-grid scale models have been introduced and examined. The algorithm has been validated by reference to a hierarchy of unsteady flows of increasing complexity starting with unsteady lid-driven cavity flows and ending with 3-D turbulent vortex shedding behind a square prism. In the latter case, for which extensive experimental data are available, special emphasis has been put on examining the dependence of the results on mesh density, near-wall treatment and the nature of the sub-grid-scale model, one of which is an advanced dynamic model. The LES scheme is shown to return time-average and phase-averaged results which agree well with experimental data and which support the view that LES is a promising approach for unsteady flows dominated by large periodic structures. (author) 87 refs.
Commercial applications of large-scale Research and Development computer simulation technologies
International Nuclear Information System (INIS)
Kuok Mee Ling; Pascal Chen; Wen Ho Lee
1998-01-01
The potential commercial applications of two large-scale R and D computer simulation technologies are presented. One such technology is based on the numerical solution of the hydrodynamics equations, and is embodied in the two-dimensional Eulerian code EULE2D, which solves the hydrodynamic equations with various models for the equation of state (EOS), constitutive relations and fracture mechanics. EULE2D is an R and D code originally developed to design and analyze conventional munitions for anti-armor penetrations such as shaped charges, explosive formed projectiles, and kinetic energy rods. Simulated results agree very well with actual experiments. A commercial application presented here is the design and simulation of shaped charges for oil and gas well bore perforation. The other R and D simulation technology is based on the numerical solution of Maxwell's partial differential equations of electromagnetics in space and time, and is implemented in the three-dimensional code FDTD-SPICE, which solves Maxwell's equations in the time domain with finite-differences in the three spatial dimensions and calls SPICE for information when nonlinear active devices are involved. The FDTD method has been used in the radar cross-section modeling of military aircraft and many other electromagnetic phenomena. The coupling of the FDTD method with SPICE, a popular circuit and device simulation program, provides a powerful tool for the simulation and design of microwave and millimeter-wave circuits containing nonlinear active semiconductor devices. A commercial application of FDTD-SPICE presented here is the simulation of a two-element active antenna system. The simulation results and the experimental measurements are in excellent agreement. (Author)
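The FDTD half of this technology marches Maxwell's curl equations on a staggered (Yee) grid. A bare-bones 1-D vacuum sketch in normalized units, with Courant number 1 and a soft Gaussian source, illustrates the leapfrog update; all grid sizes and source parameters are invented for the sketch:

```python
import numpy as np

def fdtd_1d(nz=200, nt=300, src=50):
    """Minimal 1-D FDTD (Yee) march in vacuum, normalized units.

    With Courant number 1 the 1-D scheme is exactly dispersionless,
    so the injected pulse travels one cell per time step."""
    ez = np.zeros(nz)   # electric field at integer grid points
    hy = np.zeros(nz)   # magnetic field at half grid points
    for n in range(nt):
        hy[:-1] += ez[1:] - ez[:-1]                 # update H from curl E
        ez[1:] += hy[1:] - hy[:-1]                  # update E from curl H
        ez[src] += np.exp(-((n - 30) / 8.0) ** 2)   # soft Gaussian source
    return ez
```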
Mathematics of large eddy simulation of turbulent flows
Energy Technology Data Exchange (ETDEWEB)
Berselli, L.C. [Pisa Univ. (Italy). Dept. of Applied Mathematics "U. Dini"]; Iliescu, T. [Virginia Polytechnic Inst. and State Univ., Blacksburg, VA (United States). Dept. of Mathematics]; Layton, W.J. [Pittsburgh Univ., PA (United States). Dept. of Mathematics]
2006-07-01
Large eddy simulation (LES) is a method of scientific computation seeking to predict the dynamics of organized structures in turbulent flows by approximating local, spatial averages of the flow. Since its birth in 1970, LES has undergone an explosive development and has matured into a highly-developed computational technology. It uses the tools of turbulence theory and the experience gained from practical computation. This book focuses on the mathematical foundations of LES and its models and provides a connection between the powerful tools of applied mathematics, partial differential equations and LES. Thus, it is concerned with fundamental aspects not treated so deeply in the other books in the field, aspects such as well-posedness of the models, their energy balance and the connection to the Leray theory of weak solutions of the Navier-Stokes equations. The authors give a mathematically informed and detailed treatment of an interesting selection of models, focusing on issues connected with understanding and expanding the correctness and universality of LES. This volume offers a useful entry point into the field for PhD students in applied mathematics, computational mathematics and partial differential equations. Non-mathematicians will appreciate it as a reference that introduces them to current tools and advances in the mathematical theory of LES. (orig.)
Simulation of fatigue crack growth under large scale yielding conditions
Schweizer, Christoph; Seifert, Thomas; Riedel, Hermann
2010-07-01
A simple mechanism-based model for fatigue crack growth assumes a linear correlation between the cyclic crack-tip opening displacement (ΔCTOD) and the crack growth increment (da/dN). The objective of this work is to compare analytical estimates of ΔCTOD with results of numerical calculations under large-scale yielding conditions and to verify the physical basis of the model by comparing the predicted and the measured evolution of the crack length in a 10%-chromium steel. The material is described by a rate-independent cyclic plasticity model with power-law hardening and Masing behavior. During the tension-going part of the cycle, nodes at the crack tip are released such that the crack growth increment corresponds approximately to the crack-tip opening. The finite element analysis, performed in ABAQUS, is continued until a stabilized value of ΔCTOD is reached. The analytical model contains an interpolation formula for the J-integral, which is generalized to account for cyclic loading and crack closure. The simulated and estimated values of ΔCTOD are reasonably consistent. The predicted crack length evolution is found to be in good agreement with the behavior of microcracks observed in a 10%-chromium steel.
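The core of such a model — integrate da/dN = β·ΔCTOD cycle by cycle — can be sketched with a simple small-scale-yielding estimate ΔCTOD ≈ d_n·ΔJ/σ_y and ΔJ ≈ ΔK²/E (the paper's generalized large-scale-yielding interpolation formula is replaced here by this textbook estimate, and all material numbers are illustrative assumptions):

```python
import numpy as np

def delta_ctod(a, dsigma, sigma_y, E, dn=0.5):
    # Small-scale-yielding estimate (assumption, not the paper's formula):
    # DeltaK = dsigma * sqrt(pi a), DeltaJ = DeltaK**2 / E,
    # DeltaCTOD = dn * DeltaJ / sigma_y.
    dK = dsigma * np.sqrt(np.pi * a)
    dJ = dK ** 2 / E
    return dn * dJ / sigma_y

def grow_crack(a0, cycles, dsigma, sigma_y, E, beta=1.0):
    """Integrate da/dN = beta * DeltaCTOD one cycle at a time."""
    a = np.empty(cycles + 1)
    a[0] = a0
    for n in range(cycles):
        a[n + 1] = a[n] + beta * delta_ctod(a[n], dsigma, sigma_y, E)
    return a
```

Because ΔCTOD is proportional to a here, the crack length grows exponentially with cycle number, the qualitative behavior expected of short cracks under constant-amplitude loading.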
Contextual Compression of Large-Scale Wind Turbine Array Simulations
Energy Technology Data Exchange (ETDEWEB)
Gruchalla, Kenny M [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Brunhart-Lupo, Nicholas J [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Potter, Kristin C [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Clyne, John [National Center for Atmospheric Research (NCAR)]
2017-12-04
Data sizes are becoming a critical issue, particularly for HPC applications. We have developed a user-driven lossy wavelet-based storage model to facilitate the analysis and visualization of large-scale wind turbine array simulations. The model stores data as heterogeneous blocks of wavelet coefficients, providing high-fidelity access to user-defined data regions believed to be the most salient, while providing lower-fidelity access to less salient regions on a block-by-block basis. In practice, by retaining the wavelet coefficients as a function of feature saliency, we have seen data reductions in excess of 94 percent, while retaining lossless information in the turbine-wake regions most critical to analysis and providing enough (low-fidelity) contextual information in the upper atmosphere to track incoming coherent turbulent structures. Our contextual wavelet compression approach has allowed us to deliver interactive visual analysis while giving the user control over where data loss, and thus reduced accuracy, occurs in the analysis. We argue this reduced but contextualized representation is a valid approach and encourages contextual data management.
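The block-wise idea — full fidelity where a block is flagged salient, aggressive wavelet thresholding elsewhere — can be sketched with a single-level 2-D Haar transform. The NREL model's wavelet basis and block bookkeeping are more elaborate; everything below is an illustrative assumption:

```python
import numpy as np

def haar2d(x):
    """Single-level orthonormal 2-D Haar transform (even-sized 2-D array)."""
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2.0)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2.0)
    cols = np.hstack([lo, hi])
    lo = (cols[0::2, :] + cols[1::2, :]) / np.sqrt(2.0)
    hi = (cols[0::2, :] - cols[1::2, :]) / np.sqrt(2.0)
    return np.vstack([lo, hi])

def ihaar2d(c):
    """Exact inverse of haar2d."""
    n, m = c.shape
    lo, hi = c[: n // 2], c[n // 2:]
    cols = np.empty((n, m))
    cols[0::2] = (lo + hi) / np.sqrt(2.0)
    cols[1::2] = (lo - hi) / np.sqrt(2.0)
    lo, hi = cols[:, : m // 2], cols[:, m // 2:]
    x = np.empty((n, m))
    x[:, 0::2] = (lo + hi) / np.sqrt(2.0)
    x[:, 1::2] = (lo - hi) / np.sqrt(2.0)
    return x

def compress_block(block, salient, keep_frac=0.1):
    """Keep every coefficient of a salient block; elsewhere keep only the
    largest keep_frac of coefficients as low-fidelity context."""
    c = haar2d(block)
    if not salient:
        thr = np.quantile(np.abs(c), 1.0 - keep_frac)
        c = np.where(np.abs(c) >= thr, c, 0.0)
    return ihaar2d(c)
```

Salient blocks round-trip losslessly (up to float error), while non-salient blocks come back as a cheap low-frequency approximation.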
Large-eddy simulations of unidirectional water flow over dunes
Grigoriadis, D. G. E.; Balaras, E.; Dimas, A. A.
2009-06-01
The unidirectional, subcritical flow over fixed dunes is studied numerically using large-eddy simulation, while the immersed boundary method is implemented to incorporate the bed geometry. Results are presented for a typical dune shape and two Reynolds numbers, Re = 17,500 and Re = 93,500, on the basis of bulk velocity and water depth. The numerical predictions of velocity statistics at the low Reynolds number are in very good agreement with available experimental data. A primary recirculation region develops downstream of the dune crest at both Reynolds numbers, while a secondary region develops at the toe of the dune crest only for the low Reynolds number. Downstream of the reattachment point, on the dune stoss, the turbulence intensity in the developing boundary layer is weaker than in comparable equilibrium boundary layers. Coherent vortical structures are identified using the fluctuating pressure field and the second invariant of the velocity gradient tensor. Vorticity is primarily generated at the dune crest in the form of spanwise "roller" structures. Roller structures dominate the flow dynamics near the crest, and are responsible for perturbing the boundary layer downstream of the reattachment point, which leads to the formation of "horseshoe" structures. Horseshoe structures dominate the near-wall dynamics after the reattachment point, do not rise to the free surface, and are distorted by the shear layer of the next crest. The occasional interaction between roller and horseshoe structures generates tube-like "kolk" structures, which rise to the free surface and persist for a long time before attenuating.
MAGNETIC NULL POINTS IN KINETIC SIMULATIONS OF SPACE PLASMAS
International Nuclear Information System (INIS)
Olshevsky, Vyacheslav; Innocenti, Maria Elena; Cazzola, Emanuele; Lapenta, Giovanni; Deca, Jan; Divin, Andrey; Peng, Ivy Bo; Markidis, Stefano
2016-01-01
We present a systematic attempt to study magnetic null points and the associated magnetic energy conversion in kinetic particle-in-cell simulations of various plasma configurations. We address three-dimensional simulations performed with the semi-implicit kinetic electromagnetic code iPic3D in different setups: variations of a Harris current sheet, dipolar and quadrupolar magnetospheres interacting with the solar wind, and a relaxing turbulent configuration with multiple null points. Spiral nulls are more likely to be created in space plasmas: in all our simulations except the lunar magnetic anomaly (LMA) and quadrupolar mini-magnetosphere cases, the number of spiral nulls prevails over the number of radial nulls by a factor of 3–9. We show that magnetic nulls often do not indicate the regions of intensive energy dissipation. Energy dissipation events caused by topological bifurcations at radial nulls are rather rare and short-lived. The so-called X-lines formed by the radial nulls in the Harris current sheet and LMA simulations are rather stable and do not exhibit any energy dissipation. Energy dissipation is more powerful in the vicinity of spiral nulls enclosed by magnetic flux ropes with strong currents at their axes (their cross sections resemble 2D magnetic islands). These null lines, reminiscent of Z-pinches, efficiently dissipate magnetic energy due to secondary instabilities such as the two-stream or kinking instability, accompanied by changes in magnetic topology. Current enhancements accompanied by spiral nulls may signal magnetic energy conversion sites in observational data.
Simulating large-scale spiking neuronal networks with NEST
Schücker, Jannis; Eppler, Jochen Martin
2014-01-01
The Neural Simulation Tool NEST [1, www.nest-simulator.org] is the simulator for spiking neural network models of the HBP that focuses on the dynamics, size and structure of neural systems rather than on the exact morphology of individual neurons. Its simulation kernel is written in C++ and it runs on computing hardware ranging from simple laptops to clusters and supercomputers with thousands of processor cores. The development of NEST is coordinated by the NEST Initiative [www.nest-initiative.or...
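NEST's own modeling interface is not reproduced here; as a hedged illustration of the kind of dynamics such a kernel integrates, a toy leaky integrate-and-fire (LIF) network with random connectivity fits in a few lines of plain NumPy. All parameters are invented for the sketch and do not correspond to any NEST defaults:

```python
import numpy as np

def simulate_lif(n=100, steps=1000, dt=0.1, tau=10.0, v_th=1.0,
                 v_reset=0.0, i_drive=1.2, w=0.02, p_conn=0.1, seed=0):
    """Toy LIF network, forward-Euler integration.

    Membrane dynamics dv/dt = (i_drive - v)/tau, threshold-and-reset
    spiking, and instantaneous excitatory kicks from presynaptic spikes."""
    rng = np.random.default_rng(seed)
    conn = rng.random((n, n)) < p_conn       # conn[i, j]: synapse j -> i
    v = rng.uniform(0.0, v_th, n)            # random initial potentials
    spikes = []                              # spike raster, one list per step
    for _ in range(steps):
        fired = v >= v_th
        spikes.append(np.flatnonzero(fired))
        v[fired] = v_reset                   # reset neurons that spiked
        v += (dt / tau) * (i_drive - v) + w * conn[:, fired].sum(axis=1)
    return spikes
```

With a suprathreshold drive (i_drive > v_th) every neuron fires tonically, so the raster is guaranteed to contain spikes.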
Energy Technology Data Exchange (ETDEWEB)
Murakami, Y.; Shi, B. [Geological Survey of Japan, Tsukuba (Japan); Matsushima, J. [The University of Tokyo, Tokyo (Japan). Faculty of Engineering
1997-05-27
Large deformation of the crust is generated by relatively large displacement of the media on both sides of a fault. In the conventional finite element method, faults are dealt with by special elements called joint elements, but joint elements, which are microscopic in width, become numerically unstable if a large shear displacement is imposed. Therefore, by introducing the master-slave (MO) method used for contact analysis in the metal processing field, a large-deformation simulator was developed for analyzing diastrophism including large displacement along the fault. Analysis examples are shown for the case where the upper basement and lower basement are relatively dislocated with the fault as a boundary. The bottom surface and right end boundary of the lower basement are fixed boundaries. The left end boundary of the lower basement is fixed, and to the left end boundary of the upper basement a horizontal speed of 3×10⁻⁷ m/s was given. In accordance with the horizontal movement of the upper basement, the boundary surface deformed greatly. Stress is almost at right angles to the boundary surface. As to the analysis of faults by the MO method, it has so far been applied to a single simple fault, but should be extended to many faults in the future. 13 refs., 2 figs.
Directory of Open Access Journals (Sweden)
Young Tae Chae
2016-06-01
A calibrated building simulation model was developed to assess the energy performance of a large historic research building. The complexity of space functions and operational conditions, with limited availability of energy meters, makes it hard to understand the end-use energy consumption in detail and to identify appropriate retrofitting options for reducing energy consumption and greenhouse gas (GHG) emissions. An energy simulation model was developed to study the energy usage patterns not only at the building level, but also for the internal thermal zones and system operations. The model was validated using site measurements of energy usage and a detailed audit of the internal load conditions, system operation, and space programs to minimize the discrepancy between the documented status and actual operational conditions. Based on the results of the calibrated model and end-use energy consumption, the study proposed potential energy conservation measures (ECMs) for the building envelope, HVAC system operational methods, and system replacement. It also evaluated each ECM from the perspective of both energy and utility cost saving potentials to support retrofit decision making. The study shows that the energy consumption of the building was highly dominated by the thermal requirements of laboratory spaces. Among the ECMs, the demand-management option of overriding the setpoint temperature is the most cost-effective measure.
Nuclear EMP simulation for large-scale urban environments. FDTD for electrically large problems.
Energy Technology Data Exchange (ETDEWEB)
Smith, William S. [Los Alamos National Laboratory; Bull, Jeffrey S. [Los Alamos National Laboratory; Wilcox, Trevor [Los Alamos National Laboratory; Bos, Randall J. [Los Alamos National Laboratory; Shao, Xuan-Min [Los Alamos National Laboratory; Goorley, John T. [Los Alamos National Laboratory; Costigan, Keeley R. [Los Alamos National Laboratory
2012-08-13
In case of a terrorist nuclear attack in a metropolitan area, EMP measurement could provide: (1) a prompt confirmation of the nature of the explosion (chemical or nuclear) for emergency response; and (2) characterization parameters of the device (reaction history, yield) for technical forensics. However, the urban environment could affect the fidelity of the prompt EMP measurement (as well as all other types of prompt measurement): (1) the nuclear EMP wavefront would no longer be coherent, due to incoherent production, attenuation, and propagation of gammas and electrons; and (2) EMP propagation from the source region outward would undergo complicated transmission, reflection, and diffraction processes. EMP simulation for an electrically large urban environment: (1) a coupled MCNP/FDTD (finite-difference time-domain Maxwell solver) approach; and (2) FDTD tends to be limited to problems that are not 'too' large compared to the wavelengths of interest because of numerical dispersion and anisotropy. We use a higher-order, low-dispersion, isotropic FDTD algorithm for EMP propagation.
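The dispersion limitation mentioned above follows from the FDTD dispersion relation; in 1-D it reads sin(ωΔt/2) = S·sin(kΔx/2), with S the Courant number, and gives a phase-velocity error that grows as the grid coarsens. A sketch of this estimate for the standard second-order scheme (not the higher-order LANL algorithm) is:

```python
import numpy as np

def fdtd_phase_velocity_error(points_per_wavelength, courant=0.5):
    """Relative phase-velocity error of the standard 1-D FDTD scheme.

    Solves the dispersion relation sin(w dt / 2) = S sin(k dx / 2)
    for the numerical wavenumber and returns vp/c - 1 (negative:
    the numerical wave lags the physical one)."""
    k_exact_dx = 2.0 * np.pi / points_per_wavelength  # exact k * dx
    w_dt = courant * k_exact_dx                       # w dt = S k dx (c = 1)
    k_num_dx = 2.0 * np.arcsin(np.sin(w_dt / 2.0) / courant)
    vp_over_c = w_dt / (courant * k_num_dx)           # (w / k_num) / c
    return vp_over_c - 1.0
```

At 10 points per wavelength and S = 0.5 the numerical wave is slower than light by roughly a percent, which is why electrically large domains accumulate unacceptable phase error with the standard scheme.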
Space-Charge Simulation of Integrable Rapid Cycling Synchrotron
Energy Technology Data Exchange (ETDEWEB)
Eldred, Jeffery [Fermilab; Valishev, Alexander [Fermilab
2017-05-01
Integrable optics is an innovation in particle accelerator design that enables strong nonlinear focusing without generating parametric resonances. We use a Synergia space-charge simulation to investigate the application of integrable optics to a high-intensity hadron ring that could replace the Fermilab Booster. We find that incorporating integrability into the design suppresses the beam halo generated by a mismatched KV beam. Our integrable rapid cycling synchrotron (iRCS) design includes other features of modern ring design such as low momentum compaction factor and harmonically canceling sextupoles. Experimental tests of high-intensity beams in integrable lattices will take place over the next several years at the Fermilab Integrable Optics Test Accelerator (IOTA) and the University of Maryland Electron Ring (UMER).
Characteristics of Tornado-Like Vortices Simulated in a Large-Scale Ward-Type Simulator
Tang, Zhuo; Feng, Changda; Wu, Liang; Zuo, Delong; James, Darryl L.
2018-02-01
Tornado-like vortices are simulated in a large-scale Ward-type simulator to further advance the understanding of such flows, and to facilitate future studies of tornado wind loading on structures. Measurements of the velocity fields near the simulator floor and the resulting floor surface pressures are interpreted to reveal the mean and fluctuating characteristics of the flow as well as the characteristics of the static-pressure deficit. We focus on the manner in which the swirl ratio and the radial Reynolds number affect these characteristics. The transition of the tornado-like flow from a single-celled vortex to a dual-celled vortex with increasing swirl ratio and the impact of this transition on the flow field and the surface-pressure deficit are closely examined. The mean characteristics of the surface-pressure deficit caused by tornado-like vortices simulated at a number of swirl ratios compare well with the corresponding characteristics recorded during full-scale tornadoes.
Large-eddy simulation of unidirectional turbulent flow over dunes
Omidyeganeh, Mohammad
We performed large eddy simulation of the flow over a series of two- and three-dimensional dune geometries at laboratory scale using the Lagrangian dynamic eddy-viscosity subgrid-scale model. First, we studied the flow over a standard 2D transverse dune geometry, then bedform three-dimensionality was imposed. Finally, we investigated the turbulent flow over barchan dunes. The results are validated by comparison with simulations and experiments for the 2D dune case, while the results of the 3D dunes are validated qualitatively against experiments. The flow over transverse dunes separates at the dune crest, generating a shear layer that plays a crucial role in the transport of momentum and energy, as well as the generation of coherent structures. Spanwise vortices are generated in the separated shear layer; as they are advected, they undergo lateral instabilities and develop into horseshoe-like structures and finally reach the surface. The ejection that occurs between the legs of the vortex creates the upwelling and downdrafting events on the free surface known as "boils". The three-dimensional separation of flow at the crestline alters the distribution of wall pressure, which may cause secondary flow across the stream. The mean flow is characterized by a pair of counter-rotating streamwise vortices, with core radii of the order of the flow depth. Staggering the crestlines alters the secondary motion; two pairs of streamwise vortices appear (a strong one, centred about the lobe, and a weaker one, coming from the previous dune, centred around the saddle). The flow over barchan dunes presents significant differences from that over transverse dunes. The flow near the bed, upstream of the dune, diverges from the centerline plane; the flow close to the centerline plane separates at the crest and reattaches on the bed. Away from the centerline plane and along the horns, flow separation occurs intermittently. The flow in the separation bubble is routed towards the horns and leaves
Energy Technology Data Exchange (ETDEWEB)
Lartigue, G.
2004-11-15
The new European laws on pollutant emissions impose more and more constraints on engine manufacturers. This is particularly true for gas turbine manufacturers, who must design engines operating with very fuel-lean mixtures. Doing so, pollutant formation is significantly reduced, but the problem of combustion stability arises. Indeed, combustion regimes with a large excess of air are naturally more sensitive to combustion instabilities. Numerical prediction of these instabilities is thus a key issue for many industries involved in energy production. This thesis work aims to show that recent numerical tools are now able to predict these combustion instabilities. In particular, the Large Eddy Simulation method, when implemented in a compressible CFD code, is able to take into account the main processes involved in combustion instabilities, such as acoustics and flame/vortex interaction. This work describes a new formulation of a Large Eddy Simulation numerical code that makes it possible to account very precisely for the thermodynamics and chemistry that are essential in combustion phenomena. A validation of this work is presented in a complex geometry (the PRECCINSTA burner). Our numerical results are successfully compared with experimental data gathered at DLR Stuttgart (Germany). Moreover, a detailed analysis of the acoustics in this configuration is presented, as well as its interaction with the combustion. For this acoustics analysis, another CERFACS code has been extensively used, the Helmholtz solver AVSP. (author)
WRF nested large-eddy simulations of deep convection during SEAC4RS
Heath, Nicholas K.; Fuelberg, Henry E.; Tanelli, Simone; Turk, F. Joseph; Lawson, R. Paul; Woods, Sarah; Freeman, Sean
2017-04-01
Large-eddy simulations (LES) and observations are often combined to increase our understanding and improve the simulation of deep convection. This study evaluates a nested LES method that uses the Weather Research and Forecasting (WRF) model and, specifically, tests whether the nested LES approach is useful for studying deep convection during a real-world case. The method was applied on 2 September 2013, a day of continental convection that occurred during the Studies of Emissions and Atmospheric Composition, Clouds and Climate Coupling by Regional Surveys (SEAC4RS) campaign. Mesoscale WRF output (1.35 km grid length) was used to drive a nested LES with 450 m grid spacing, which then drove a 150 m domain. Results reveal that the 450 m nested LES reasonably simulates observed reflectivity distributions and aircraft-observed in-cloud vertical velocities during the study period. However, when examining convective updrafts, reducing the grid spacing to 150 m worsened results. We find that the simulated updrafts in the 150 m run become too diluted by entrainment, thereby generating updrafts that are weaker than observed. Lastly, the 450 m simulation is combined with observations to study the processes forcing strong midlevel cloud/updraft edge downdrafts that were observed on 2 September. Results suggest that these strong downdrafts are forced by evaporative cooling due to mixing and by perturbation pressure forces acting to restore mass continuity around neighboring updrafts. We conclude that the WRF nested LES approach, with further development and evaluation, could potentially provide an effective method for studying deep convection in real-world cases.
Thermal System Upgrade of the Space Environment Simulation Test Chamber
Desai, Ashok B.
1997-01-01
The paper deals with the refurbishing and upgrade of the thermal system for the existing thermal vacuum test facility, the Space Environment Simulator, at NASA's Goddard Space Flight Center. The chamber is the largest such facility at the center. This upgrade is the third phase of the long-range upgrade of the chamber that has been underway for the last few years. The first phase dealt with its vacuum system, the second phase involved the GHe subsystem. The paper describes the design-philosophy options considered for the thermal system; the approaches taken and methodology applied in evaluating the remaining "life" in the chamber shrouds and related equipment by conducting special tests and studies; the feasibility and extent of automation, using computer interfaces and Programmable Logic Controllers in the control system; and finally, the matching of the old components to the new ones into an integrated, highly reliable and cost-effective thermal system for the facility. This is a multi-year project that has just started, and the paper deals mainly with the plans and approaches to implement the project successfully within schedule and costs.
Ngwira, Chigomezyo M.; Pulkkinen, Antti; Kuznetsova, Maria M.; Glocer, Alex
2014-06-01
There is a growing concern over possible severe societal consequences related to adverse space weather impacts on man-made technological infrastructure. In the last two decades, significant progress has been made toward the first-principles modeling of space weather events, and three-dimensional (3-D) global magnetohydrodynamics (MHD) models have been at the forefront of this transition, thereby playing a critical role in advancing our understanding of space weather. However, the modeling of extreme space weather events is still a major challenge even for the modern global MHD models. In this study, we introduce a specially adapted University of Michigan 3-D global MHD model for simulating extreme space weather events with a Dst footprint comparable to the Carrington superstorm of September 1859, based on the estimate by Tsurutani et al. (2003). Results are presented for a simulation run with "very extreme" constructed/idealized solar wind boundary conditions driving the magnetosphere. In particular, we describe the reaction of the magnetosphere-ionosphere system and the associated induced geoelectric field on the ground to such extreme driving conditions. The model setup is further tested using input data for an observed space weather event, the Halloween storm of October 2003, to verify the MHD model consistency and to draw additional guidance for future work. This extreme space weather MHD model setup is designed specifically for practical application to the modeling of extreme geomagnetically induced electric fields, which can drive large currents in ground-based conductor systems such as power transmission grids. Therefore, our ultimate goal is to explore the level of geoelectric fields that can be induced from an assumed storm of the reported magnitude, i.e., Dst ≈ −1600 nT.
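Downstream of such MHD runs, the induced geoelectric field is commonly estimated with the plane-wave method: E(ω) = Z(ω)B(ω)/μ₀, with surface impedance Z(ω) = √(iωμ₀/σ) for a uniform half-space. A minimal sketch follows; the uniform-Earth conductivity is an assumption of the sketch (operational tools use layered-Earth impedances), and it is not the authors' code:

```python
import numpy as np

def geoelectric_field(b, dt, sigma=1e-3):
    """Surface geoelectric field (V/m) from a magnetic time series (T)
    via the plane-wave method over a uniform half-space.

    E(w) = Z(w) B(w) / mu0, with Z(w) = sqrt(i w mu0 / sigma)."""
    mu0 = 4e-7 * np.pi
    B = np.fft.rfft(b)                                  # to frequency domain
    w = 2.0 * np.pi * np.fft.rfftfreq(len(b), d=dt)     # angular frequencies
    Z = np.sqrt(1j * w * mu0 / sigma)                   # surface impedance
    return np.fft.irfft(Z * B / mu0, n=len(b))          # back to time domain
```

For a 100 nT variation with a 10-minute period over ground of 10⁻³ S/m, this yields geoelectric fields on the order of a few tenths of a V/km.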
A survey of modelling methods for high-fidelity wind farm simulations using large eddy simulation
DEFF Research Database (Denmark)
Breton, Simon-Philippe; Sumner, J.; Sørensen, Jens Nørkær
2017-01-01
surveys the most common schemes available to model the rotor, atmospheric conditions and terrain effects within current state-of-the-art LES codes, of which an overview is provided. A summary of the experimental research data available for validation of LES codes within the context of single and multiple......Large eddy simulations (LES) of wind farms have the capability to provide valuable and detailed information about the dynamics of wind turbine wakes. For this reason, their use within the wind energy research community is on the rise, spurring the development of new models and methods. This review...
Shimojo, Fuyuki; Hattori, Shinnosuke; Kalia, Rajiv K.; Kunaseth, Manaschai; Mou, Weiwei; Nakano, Aiichiro; Nomura, Ken-ichi; Ohmura, Satoshi; Rajak, Pankaj; Shimamura, Kohei; Vashishta, Priya
2014-05-01
We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786 432 cores for a 50.3 × 10⁶-atom SiC system. As a test of production runs, LDC-DFT-based QMD simulation involving 16 661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of techniques
Representative elements: A step to large-scale fracture system simulation
International Nuclear Information System (INIS)
Clemo, T.M.
1987-01-01
Large-scale simulation of flow and transport in fractured media requires the development of a technique to represent the effect of a large number of fractures. Representative elements are used as a tool to model a subset of a fracture system as a single distributed entity. Representative elements are part of a modeling concept called dual permeability. Dual-permeability modeling combines discrete fracture simulation of the most important fractures with distributed modeling of the less important fractures of a fracture system. This study investigates the use of stochastic analysis to determine properties of representative elements. Given an assumption of fully developed laminar flow, the net fracture conductivities and hence flow velocities can be determined from descriptive statistics of fracture spacing, orientation, aperture, and extent. The distribution of physical characteristics about their mean leads to a distribution of the associated conductivities. The variance of hydraulic conductivity induces dispersion into the transport process. Simple fracture systems are treated to demonstrate the usefulness of stochastic analysis. Explicit equations for the conductivity of an element are developed and the dispersion characteristics are shown. Explicit formulation of the hydraulic conductivity and transport dispersion reveals the dependence of these important characteristics on the parameters used to describe the fracture system. Understanding these dependencies will help to focus efforts to identify the characteristics of fracture systems. Simulations of stochastically generated fracture sets do not provide this explicit functional dependence on the fracture system parameters. 12 refs., 6 figs.
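The stochastic step — mapping a distribution of fracture apertures to a distribution of element conductivities — can be sketched with the parallel-plate cubic law, K = ρg b³/(12 μ s) for aperture b and mean fracture spacing s. The lognormal aperture model and water properties below are assumptions of the sketch, not taken from the report:

```python
import numpy as np

def element_conductivity(n_fracs, mean_aperture, cv, spacing, seed=0):
    """Monte Carlo mean and spread of element hydraulic conductivity [m/s]
    for parallel-plate fractures obeying the cubic law.

    Apertures are lognormal with the requested mean and coefficient of
    variation cv (an assumed distribution)."""
    rng = np.random.default_rng(seed)
    rho_g, mu = 9810.0, 1e-3          # water: rho*g [N/m^3], viscosity [Pa s]
    s2 = np.log(1.0 + cv ** 2)        # variance of the underlying normal
    b = rng.lognormal(np.log(mean_aperture) - s2 / 2.0, np.sqrt(s2), n_fracs)
    k = rho_g * b ** 3 / (12.0 * mu * spacing)  # cubic law per fracture
    return k.mean(), k.std()
```

Because conductivity scales with b³, even a modest spread in aperture inflates the mean conductivity well above the value computed from the mean aperture, the effect that feeds dispersion in the transport process.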
Large-Eddy Simulation of Internal Flow through Human Vocal Folds
Lasota, Martin; Šidlof, Petr
2018-06-01
The phonatory process occurs when air is expelled from the lungs through the glottis and the pressure drop causes flow-induced oscillations of the vocal folds. The flow fields created in phonation are highly unsteady, and coherent vortex structures are also generated. For accuracy it is essential to compute on a humanlike computational domain with an appropriate mathematical model. The work deals with numerical simulation of air flow within the space between the plicae vocales and plicae vestibulares. In addition to the dynamic width of the rima glottidis, where the sound is generated, the lateral ventriculus laryngis and sacculus laryngis are included in the computational domain as well. The paper presents results from OpenFOAM obtained with large-eddy simulation using second-order finite volume discretization of the incompressible Navier-Stokes equations. Large-eddy simulations with different subgrid-scale models are executed on a structured mesh. In these cases, only subgrid-scale models that represent turbulence via a turbulent viscosity and the Boussinesq approximation are used in the subglottal and supraglottal areas of the larynx.
Large-Eddy Simulation of Internal Flow through Human Vocal Folds
Directory of Open Access Journals (Sweden)
Lasota Martin
2018-01-01
The phonatory process occurs when air is expelled from the lungs through the glottis and the pressure drop causes flow-induced oscillations of the vocal folds. The flow fields created in phonation are highly unsteady, and coherent vortex structures are also generated. For accuracy it is essential to compute on a humanlike computational domain with an appropriate mathematical model. The work deals with numerical simulation of air flow within the space between the plicae vocales and plicae vestibulares. In addition to the dynamic width of the rima glottidis, where the sound is generated, the lateral ventriculus laryngis and sacculus laryngis are included in the computational domain as well. The paper presents results from OpenFOAM obtained with large-eddy simulation using second-order finite volume discretization of the incompressible Navier-Stokes equations. Large-eddy simulations with different subgrid-scale models are executed on a structured mesh. In these cases, only subgrid-scale models that represent turbulence via a turbulent viscosity and the Boussinesq approximation are used in the subglottal and supraglottal areas of the larynx.
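The subgrid-scale closure named in this record, a turbulent-viscosity (Boussinesq-type) model of the Smagorinsky family, can be sketched compactly. This is a generic illustration, not the OpenFOAM implementation used in the paper; the model constant and the sample velocity gradient are arbitrary:

```python
import numpy as np

def smagorinsky_nu_t(grad_u, delta, c_s=0.1):
    """Subgrid-scale turbulent viscosity nu_t = (C_s * delta)^2 * |S|,
    where |S| = sqrt(2 S_ij S_ij) and S is the resolved strain-rate tensor."""
    S = 0.5 * (grad_u + grad_u.T)          # symmetric strain-rate tensor, 1/s
    S_mag = np.sqrt(2.0 * np.sum(S * S))   # characteristic strain rate, 1/s
    return (c_s * delta) ** 2 * S_mag      # m^2/s

# Illustrative resolved velocity gradient (1/s) on a cell of filter width 1 mm
grad_u = np.array([[100.0,  50.0, 0.0],
                   [  0.0, -100.0, 0.0],
                   [  0.0,   0.0, 0.0]])
nu_t = smagorinsky_nu_t(grad_u, delta=1e-3)
print(f"nu_t = {nu_t:.3e} m^2/s")
```

The eddy viscosity grows with the square of the filter width, which is why grid resolution in the narrow rima glottidis strongly affects how much of the turbulence is modeled rather than resolved.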
The large dimension limit of a small black hole instability in anti-de Sitter space
Herzog, Christopher P.; Kim, Youngshin
2018-02-01
We study the dynamics of a black hole in an asymptotically AdS_d × S^d space-time in the limit of a large number of dimensions, d → ∞. Such a black hole is known to become dynamically unstable below a critical radius. We derive the dispersion relation for the quasinormal mode that governs this instability in an expansion in 1/d. We also provide a full nonlinear analysis of the instability at leading order in 1/d. We find solutions that resemble the lumpy black spots and black belts previously constructed numerically for small d, breaking the SO(d+1) rotational symmetry of the sphere down to SO(d). We are also able to follow the time evolution of the instability. Due possibly to limitations in our analysis, our time dependent simulations do not settle down to stationary solutions. This work has relevance for strongly interacting gauge theories; through the AdS/CFT correspondence, the special case d = 5 corresponds to maximally supersymmetric Yang-Mills theory on a spatial S^3 in the microcanonical ensemble and in a strong coupling and large number of colors limit.
Capabilities of a Laser Guide Star for a Large Segmented Space Telescope
Clark, James R.; Carlton, Ashley; Douglas, Ewan S.; Males, Jared R.; Lumbres, Jennifer; Feinberg, Lee; Guyon, Olivier; Marlow, Weston; Cahoy, Kerri L.
2018-01-01
Large segmented mirror telescopes are planned for future space telescope missions such as LUVOIR (Large UV Optical Infrared Surveyor) to enable the improvement in resolution and contrast necessary to directly image Earth-like exoplanets, in addition to making contributions to general astrophysics. The precision surface control of these complex, large optical systems, which may have over a hundred meter-sized segments, is a challenge. Our initial simulations show that imaging a star of 2nd magnitude or brighter with a Zernike wavefront sensor should relax the segment stability requirements by factors between 10 and 50, depending on the wavefront control strategy. Fewer than fifty stars brighter than magnitude 2 can be found in the sky. A laser guide star (LGS) on a companion spacecraft will allow the telescope to target a dimmer science star and achieve wavefront control to the required stability without requiring slew or repointing maneuvers. We present initial results for one possible mission architecture, with an LGS flying at 100,000 km range from the large telescope in an L2 halo orbit. The LGS can be accommodated in a 6U CubeSat bus, but may require an extended period of time to transition between targets and match velocities with the telescope (e.g. 6 days to transit 10 degrees). If the LGS uses monopropellant propulsion, it must use at least a 27U bus to achieve the same delta-V capability, but can transition between targets much more rapidly. The mission design and flight parameters are being refined. A low-cost prototype mission (e.g. between a small satellite in LEO and an LGS in GEO) to validate the feasibility is in development.
Space Geodetic Technique Co-location in Space: Simulation Results for the GRASP Mission
Kuzmicz-Cieslak, M.; Pavlis, E. C.
2011-12-01
The Global Geodetic Observing System-GGOS places very stringent requirements on the accuracy and stability of future realizations of the International Terrestrial Reference Frame (ITRF): an origin definition at 1 mm or better at epoch and a temporal stability on the order of 0.1 mm/y, with similar numbers for the scale (0.1 ppb) and orientation components. These goals were derived from the requirements of Earth science problems that are currently the international community's highest priority. None of the geodetic positioning techniques can achieve this goal alone. This is due in part to the non-observability of certain attributes from a single technique. Another limitation is imposed by the extent and uniformity of the tracking network and the schedule of observational availability and number of suitable targets. The final limitation derives from the difficulty of "tying" the reference points of each technique at the same site to an accuracy that will support the GGOS goals. The future GGOS network will address decisively the ground segment and, to a certain extent, the space segment requirements. The JPL-proposed multi-technique mission GRASP (Geodetic Reference Antenna in Space) attempts to resolve the accurate tie between techniques, using their co-location in space onboard a well-designed spacecraft equipped with GNSS receivers, an SLR retroreflector array, a VLBI beacon and a DORIS system. Using the anticipated system performance for all four techniques at the time the GGOS network is completed (ca. 2020), we generated a number of simulated data sets for the development of a TRF. Our simulation studies examine the degree to which GRASP can improve the inter-technique "tie" issue compared to the classical approach, and the likely modus operandi for such a mission. The success of the examined scenarios is judged by the quality of the origin and scale definition of the resulting TRF.
Research and development at the Marshall Space Flight Center Neutral Buoyancy Simulator
Kulpa, Vygantas P.
1987-01-01
The Neutral Buoyancy Simulator (NBS), a facility designed to imitate zero-gravity conditions, was used to test the Experimental Assembly of Structures in Extravehicular Activity (EASE) and the Assembly Concept for Construction of Erectable Space Structures (ACCESS). Neutral Buoyancy Simulator applications and operations; early space structure research; development of the EASE/ACCESS experiments; and improvement of NBS simulation are summarized.
Space Situational Awareness of Large Numbers of Payloads From a Single Deployment
Segerman, A.; Byers, J.; Emmert, J.; Nicholas, A.
2014-09-01
The nearly simultaneous deployment of a large number of payloads from a single vehicle presents a new challenge for space object catalog maintenance and space situational awareness (SSA). Following two cubesat deployments last November, it took five weeks to catalog the resulting 64 orbits. The upcoming Kicksat mission will present an even greater SSA challenge, with its deployment of 128 chip-sized picosats. Although all of these deployments are in short-lived orbits, future deployments will inevitably occur at higher altitudes, with a longer-term threat of collision with active spacecraft. With such deployments, individual scientific payload operators require rapid, precise knowledge of their satellites' locations. Following the first November launch, the cataloguing did not initially associate a payload with each orbit, leaving this to the satellite operators. For short-duration missions, the time required to identify an experiment's specific orbit may easily be a large fraction of the spacecraft's lifetime. For a Kicksat-type deployment, present tracking cannot collect enough observations to catalog each small object. The current approach is to treat the chip cloud as a single catalog object. However, the cloud dissipates into multiple subclouds and, ultimately, tiny groups of untrackable chips. One response to this challenge may be to mandate installation of a transponder on each spacecraft. Directional transponder transmission detections could be used as angle observations for orbit cataloguing. Of course, such an approach would only be employable with cooperative spacecraft. In other cases, a probabilistic association approach may be useful, with the goal being to establish the probability of an element being at a given point in space. This would permit more reliable assessment of the probability of collision of active spacecraft with any cloud element. This paper surveys the cataloguing challenges presented by large-scale deployments of small spacecraft.
Cloud-enabled large-scale land surface model simulations with the NASA Land Information System
Duffy, D.; Vaughan, G.; Clark, M. P.; Peters-Lidard, C. D.; Nijssen, B.; Nearing, G. S.; Rheingrover, S.; Kumar, S.; Geiger, J. V.
2017-12-01
Developed by the Hydrological Sciences Laboratory at NASA Goddard Space Flight Center (GSFC), the Land Information System (LIS) is a high-performance software framework for terrestrial hydrology modeling and data assimilation. LIS provides the ability to integrate satellite and ground-based observational products and advanced modeling algorithms to extract land surface states and fluxes. Through a partnership with the National Center for Atmospheric Research (NCAR) and the University of Washington, the LIS model is currently being extended to include the Structure for Unifying Multiple Modeling Alternatives (SUMMA). With the addition of SUMMA in LIS, meaningful simulations containing a large multi-model ensemble will be enabled and can provide advanced probabilistic continental-domain modeling capabilities at spatial scales relevant for water managers. The resulting LIS/SUMMA application framework is difficult for non-experts to install due to the large number of dependencies on specific versions of operating systems, libraries, and compilers. This has created a significant barrier to entry for domain scientists who are interested in using the software on their own systems or in the cloud. In addition, the requirement to support multiple run-time environments across the LIS community has created a significant burden on the NASA team. To overcome these challenges, LIS/SUMMA has been deployed using Linux containers, which allow an entire software package along with all dependencies to be installed within a working runtime environment, and Kubernetes, which orchestrates the deployment of a cluster of containers. Within a cloud environment, users can now easily create a cluster of virtual machines and run large-scale LIS/SUMMA simulations. Installations that have taken weeks and months can now be performed in minutes. This presentation will discuss the steps required to create a cloud-enabled large-scale simulation, present examples of its use, and
3D Simulations of Space Charge Effects in Particle Beams
Energy Technology Data Exchange (ETDEWEB)
Adelmann, A
2002-10-01
For the first time, it is possible to calculate the complicated three-dimensional proton accelerator structures at the Paul Scherrer Institut (PSI). Under consideration are external and self effects, arising from guiding and space-charge forces. This thesis has as its theme the design, implementation and validation of a tracking program for charged particles in accelerator structures. This work forms part of the discipline of Computational Science and Engineering (CSE), more specifically computational accelerator modelling. The physical model is based on the collisionless Vlasov-Maxwell theory, justified by the low density (∼ 10^9 protons/cm^3) of the beam and of the residual gas. The probability of large-angle scattering between the protons and the residual gas is then sufficiently low, as can be estimated by considering the mean free path and the total distance a particle travels in the accelerator structure. (author)
3D Simulations of Space Charge Effects in Particle Beams
International Nuclear Information System (INIS)
Adelmann, A.
2002-10-01
For the first time, it is possible to calculate the complicated three-dimensional proton accelerator structures at the Paul Scherrer Institut (PSI). Under consideration are external and self effects, arising from guiding and space-charge forces. This thesis has as its theme the design, implementation and validation of a tracking program for charged particles in accelerator structures. This work forms part of the discipline of Computational Science and Engineering (CSE), more specifically computational accelerator modelling. The physical model is based on the collisionless Vlasov-Maxwell theory, justified by the low density (∼ 10^9 protons/cm^3) of the beam and of the residual gas. The probability of large-angle scattering between the protons and the residual gas is then sufficiently low, as can be estimated by considering the mean free path and the total distance a particle travels in the accelerator structure. (author)
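The low-density argument in this abstract is simple arithmetic on the mean free path, λ = 1/(nσ), compared with the total path length. The cross-section and path length below are placeholders chosen only to show the shape of the estimate; they are not the thesis's values:

```python
# Rough estimate justifying a collisionless (Vlasov) treatment:
# mean free path lambda = 1 / (n * sigma) versus total path length travelled.
n = 1e9 * 1e6          # density: 10^9 protons/cm^3 converted to protons/m^3
sigma = 1e-28          # assumed large-angle scattering cross-section, m^2 (illustrative)
path_length = 1e5      # assumed total distance travelled in the machine, m (illustrative)

mean_free_path = 1.0 / (n * sigma)             # m
collision_probability = path_length / mean_free_path

print(f"mean free path ~ {mean_free_path:.1e} m")
print(f"collision probability over the machine ~ {collision_probability:.1e}")
```

With these illustrative numbers the mean free path dwarfs the travelled distance by many orders of magnitude, which is the sense in which collisions can be neglected.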
Extraterrestrial processing and manufacturing of large space systems. Volume 3: Executive summary
Miller, R. H.; Smith, D. B. S.
1979-01-01
Facilities and equipment are defined for refining to commercial grade the lunar material that is delivered to a 'space manufacturing facility' in beneficiated, primary-processed quality. The manufacturing facilities and the equipment for producing elements of large space systems from these materials are also defined, and programmatic assessments of the concepts are provided. In-space production processes for solar cells (by vapor deposition) and arrays, structures and joints, conduits, waveguides, RF equipment, radiators, wire cables, converters, and others are described.
Extraterrestrial processing and manufacturing of large space systems, volume 1, chapters 1-6
Miller, R. H.; Smith, D. B. S.
1979-01-01
Space program scenarios for the production of large space structures from lunar materials are defined. The concept of the space manufacturing facility (SMF) is presented. The manufacturing processes and equipment for the SMF are defined and conceptual layouts are described for the production of solar cells and arrays, structures and joints, conduits, waveguides, RF equipment, radiators, wire cables, and converters. A 'reference' SMF was designed and its operational requirements are described.
Low-Power Large-Area Radiation Detector for Space Science Measurements
National Aeronautics and Space Administration — The objective of this task is to develop a low-power, large-area detectors from SiC, taking advantage of very low thermal noise characteristics and high radiation...
DEFF Research Database (Denmark)
Kwon, Jun Bum; Wang, Xiongfei; Blaabjerg, Frede
2017-01-01
For the efficiency and simplicity of electric systems, the dc power electronic systems are widely used in a variety of applications such as electric vehicles, ships, aircraft and also in homes. In these systems, there could be a number of dynamic interactions and frequency coupling between network...... with different switching frequency or harmonics from ac-dc converters makes that harmonics and frequency coupling are both problems of ac system and challenges of dc system. This paper presents a modeling and simulation method for a large dc power electronic system by using Harmonic State Space (HSS) modeling...
Gleam: the GLAST Large Area Telescope Simulation Framework
Boinee, P; De Angelis, Alessandro; Favretto, Dario; Frailis, Marco; Giannitrapani, Riccardo; Milotti, Edoardo; Longo, Francesco; Brigida, Monica; Gargano, Fabio; Giglietto, Nicola; Loparco, Francesco; Mazziotta, Mario Nicola; Cecchi, Claudia; Lubrano, Pasquale; Pepe, Monica; Baldini, Luca; Cohen-Tanugi, Johann; Kuss, Michael; Latronico, Luca; Omodei, Nicola; Spandre, Gloria; Bogart, Joanne R.; Dubois, Richard; Kamae, Tune; Rochester, Leon; Usher, Tracy; Burnett, Thompson H.; Robinson, Sean M.; Bastieri, Denis; Rando, Riccardo
2003-01-01
This paper presents the simulation of the GLAST high energy gamma-ray telescope. The simulation package, written in C++, is based on the Geant4 toolkit, and it is integrated into a general framework used to process events. A detailed simulation of the electronic signals inside Silicon detectors has been provided and it is used for the particle tracking, which is handled by a dedicated software. A unique repository for the geometrical description of the detector has been realized using the XML language and a C++ library to access this information has been designed and implemented.
Simulating Space Radiation-Induced Breast Tumor Incidence Using Automata.
Heuskin, A C; Osseiran, A I; Tang, J; Costes, S V
2016-07-01
Estimating cancer risk from space radiation has been an ongoing challenge for decades, primarily because most of the reported epidemiological data on radiation-induced risks are derived from studies of atomic bomb survivors, who were exposed to an acute dose of gamma rays instead of chronic high-LET cosmic radiation. In this study, we introduce a formalism using cellular automata to model the long-term effects of ionizing radiation in the human breast for different radiation qualities. We first validated and tuned parameters for an automata-based two-stage clonal expansion model simulating the age dependence of spontaneous breast cancer incidence in an unexposed U.S. population. We then tested the impact of radiation perturbation in the model by modifying parameters to reflect both targeted and nontargeted radiation effects. Targeted effects (TE) reflect the immediate impact of radiation on a cell's DNA, with classic end points being gene mutations and cell death. They are well known and are directly derived from experimental data. In contrast, nontargeted effects (NTE) are persistent and affect both damaged and undamaged cells, are nonlinear with dose, and are not well characterized in the literature. In this study, we introduced TE in our model and compared predictions against epidemiologic data of the atomic bomb survivor cohort. TE alone are not sufficient to induce enough cancer. NTE independent of dose and lasting ∼100 days postirradiation need to be added to accurately predict the dose dependence of breast cancer induced by gamma rays. Finally, by integrating experimental relative biological effectiveness (RBE) for TE and keeping NTE (i.e., radiation-induced genomic instability) constant with dose and LET, the model predicts that the RBE for breast cancer induced by cosmic radiation would be maximum at 220 keV/μm. This approach lays the groundwork for further investigation into the impact of chronic low-dose exposure, inter-individual variation and more complex space radiation
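The two-stage clonal expansion idea used above (initiation of normal cells, clonal growth of the initiated population, malignant transformation) can be sketched as a toy stochastic simulation. All rates below are invented for illustration, not the tuned parameters of the study, and the deterministic-growth shortcut here is a simplification of the cellular automaton:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative parameters (not the paper's fitted values)
mu1 = 1e-9          # initiation rate per normal cell per day
growth = 3e-4       # net clonal expansion rate (birth - death) per day
mu2 = 1e-6          # malignant transformation rate per initiated cell per day
n_normal = 1e7      # normal stem cells at risk
dt, years = 10.0, 80.0

def age_at_transformation():
    """Age (years) at first malignant transformation, or None within 80 y."""
    initiated, t = 0.0, 0.0
    while t < years * 365:
        # deterministic growth of the initiated pool (toy simplification)
        initiated += (n_normal * mu1 + initiated * growth) * dt
        # stochastic first transformation, hazard mu2 * initiated
        if rng.random() < 1.0 - np.exp(-mu2 * initiated * dt):
            return t / 365.0
        t += dt
    return None

ages = [a for a in (age_at_transformation() for _ in range(200)) if a is not None]
print(f"{len(ages)} of 200 simulated subjects; median age {np.median(ages):.0f} y")
```

The point of the sketch is qualitative: because the initiated pool grows roughly exponentially, the hazard rises steeply with age, reproducing the strong age dependence of spontaneous incidence that the paper's automaton is tuned against.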
Parallel Earthquake Simulations on Large-Scale Multicore Supercomputers
Wu, Xingfu; Duan, Benchun; Taylor, Valerie
2011-01-01
In seismically active regions, such as California and Japan, scientists have been using numerical simulations to study earthquake rupture propagation along faults and seismic wave propagation in the surrounding media on ever-advancing modern computers over the past several decades. In particular
Monte Carlo simulations for the space radiation superconducting shield project (SR2S).
Vuolo, M; Giraudo, M; Musenich, R; Calvelli, V; Ambroglini, F; Burger, W J; Battiston, R
2016-02-01
Astronauts on long-duration deep-space missions will be exposed for a long time to galactic cosmic rays (GCR) and solar particle events (SPE). The exposure to space radiation could lead to both acute and late effects in the crew members, and well-defined countermeasures do not exist nowadays. The simplest solution, given by optimized passive shielding, is not able to reduce the dose deposited by GCRs below the actual dose limits; therefore other solutions, such as active shielding employing superconducting magnetic fields, are under study. In the framework of the EU FP7 SR2S Project (Space Radiation Superconducting Shield), a toroidal magnetic system based on MgB2 superconductors has been analyzed through detailed Monte Carlo simulations using the Geant4 interface GRAS. Spacecraft and magnets were modeled together with a simplified mechanical structure supporting the coils. Radiation transport through magnetic fields and materials was simulated for a deep-space mission scenario, considering for the first time the effect of secondary particles produced in the passage of space radiation through the active shielding and spacecraft structures. When the structures supporting the active shielding systems and the habitat are modeled, the radiation protection efficiency of the magnetic field decreases severely compared to that reported in previous studies, in which only the magnetic field was modeled around the crew. This is due to the large production of secondary radiation taking place in the material surrounding the habitat. Copyright © 2016 The Committee on Space Research (COSPAR). Published by Elsevier Ltd. All rights reserved.
Rare event simulation in finite-infinite dimensional space
International Nuclear Information System (INIS)
Au, Siu-Kui; Patelli, Edoardo
2016-01-01
Modern engineering systems are becoming increasingly complex. Assessing their risk by simulation is intimately related to the efficient generation of rare failure events. Subset Simulation is an advanced Monte Carlo method for risk assessment and it has been applied in different disciplines. Pivotal to its success is the efficient generation of conditional failure samples, which is generally non-trivial. Conventionally, an independent-component Markov Chain Monte Carlo (MCMC) algorithm is used, which is applicable to high dimensional problems (i.e., a large number of random variables) without suffering from the ‘curse of dimensionality’. Experience suggests that the algorithm may perform even better for high dimensional problems. Motivated by this, for any given problem we construct an equivalent problem where each random variable is represented by an arbitrary (hence possibly infinite) number of ‘hidden’ variables. We study analytically the limiting behavior of the algorithm as the number of hidden variables increases indefinitely. This leads to a new algorithm that is more generic and offers greater flexibility and control. It coincides with an algorithm recently suggested by independent researchers, where a joint Gaussian distribution is imposed between the current sample and the candidate. The present work provides theoretical reasoning and insights into the algorithm.
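The limiting proposal described at the end of the abstract, in which the candidate is jointly Gaussian with the current sample, can be sketched as a preconditioned Crank-Nicolson-type step in standard normal space. The failure domain and correlation value below are toy choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

def gaussian_candidate(x, rho):
    """Candidate jointly Gaussian with the current sample x:
    E[x'] = rho * x, Cov[x'] = (1 - rho^2) I, so the standard normal
    target is preserved exactly, independent of dimension."""
    return rho * x + np.sqrt(1.0 - rho**2) * rng.standard_normal(x.shape)

# In Subset Simulation the candidate is accepted only if it remains in the
# current conditional failure subset; here F = {sum(x) > threshold} as a toy.
def mcmc_step(x, threshold, rho=0.8):
    y = gaussian_candidate(x, rho)
    return y if y.sum() > threshold else x   # reject -> repeat current sample

x = np.full(1000, 0.1)            # a seed lying inside the toy failure domain
for _ in range(100):
    x = mcmc_step(x, threshold=50.0)
print(f"chain stays in F: sum(x) = {x.sum():.1f} > 50")
```

Because the proposal leaves the 1000-dimensional standard normal invariant by construction, the acceptance behavior does not degrade as dimension grows, which is the dimension-robustness property the abstract emphasizes.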
Continuum Vlasov Simulation in Four Phase-space Dimensions
Cohen, B. I.; Banks, J. W.; Berger, R. L.; Hittinger, J. A.; Brunner, S.
2010-11-01
In the VALHALLA project, we are developing scalable algorithms for the continuum solution of the Vlasov-Maxwell equations in two spatial and two velocity dimensions. We use fourth-order temporal and spatial discretizations of the conservative form of the equations and a finite-volume representation to enable adaptive mesh refinement and nonlinear oscillation control [1]. The code has been implemented with and without adaptive mesh refinement, and with electromagnetic and electrostatic field solvers. A goal is to study the efficacy of continuum Vlasov simulations in four phase-space dimensions for laser-plasma interactions. We have verified the code in examples such as the two-stream instability, the weak beam-plasma instability, Landau damping, electron plasma waves with electron trapping and nonlinear frequency shifts [2] extended from 1D to 2D propagation, and light wave propagation. We will report progress on code development, computational methods, and physics applications. This work was performed under the auspices of the U.S. DOE by LLNL under contract no. DE-AC52-07NA27344. This work was funded by the Lab. Dir. Res. and Dev. Prog. at LLNL under project tracking code 08-ERD-031. [1] J.W. Banks and J.A.F. Hittinger, to appear in IEEE Trans. Plas. Sci. (Sept., 2010). [2] G.J. Morales and T.M. O'Neil, Phys. Rev. Lett. 28, 417 (1972); R. L. Dewar, Phys. Fluids 15, 712 (1972).
Characteristics and prediction of sound level in extra-large spaces
Wang, C.; Ma, H.; Wu, Y.; Kang, J.
2018-01-01
This paper aims to examine sound fields in extra-large spaces, which are defined in this paper as spaces used by people, with a volume approximately larger than 125,000 m^3 and an absorption coefficient less than 0.7. In such spaces, inhomogeneous reverberant energy caused by uneven early reflections with increasing volume has a significant effect on sound fields. Measurements were conducted in four spaces to examine the attenuation of the total and reverberant energy with increasing source-receiv...
Real time simulation of large systems on mini-computer
International Nuclear Information System (INIS)
Nakhle, Michel; Roux, Pierre.
1979-01-01
Most simulation languages will only accept an explicit formulation of differential equations, and logical variables hold no special status therein. The integration step of the usual methods is limited by the smallest time constant of the model submitted. The NEPTUNIX 2 simulation software has a language that accepts implicit equations and an integration method whose variable step is not limited by the time constants of the model. This, together with strong optimization of the execution time and memory use of the generated code, makes NEPTUNIX 2 a basic tool for simulation on mini-computers. Since the logical variables are specific entities under centralized control, correct processing of discontinuities and synchronization with a real process are feasible. NEPTUNIX 2 is the industrial version of NEPTUNIX 1.
Curran, R. T.; Hornfeck, W. A.
1972-01-01
The functional requirements for the design of an interpretive simulator for the space ultrareliable modular computer (SUMC) are presented. A review of applicable existing computer simulations is included along with constraints on the SUMC simulator functional design. Input requirements, output requirements, and language requirements for the simulator are discussed in terms of a SUMC configuration which may vary according to the application.
World, We Have Problems: Simulation for Large Complex, Risky Projects, and Events
Elfrey, Priscilla
2010-01-01
Prior to a spacewalk during the NASA STS-129 mission in November 2009, Columbia Broadcasting System (CBS) correspondent William Harwood reported astronauts "were awakened again", as they had been the day previously. Fearing something not properly connected was causing a leak, the crew, both on the ground and in space, stopped and checked everything. The alarm proved false. The crew did complete its work ahead of schedule, but the incident reminds us that correctly connecting hundreds and thousands of entities, subsystems and systems, finding leaks, loosening stuck valves, and adding replacements to very large complex systems over time does not occur magically. Everywhere, major projects present similar pressures. Lives are at risk. Responsibility is heavy. Large natural and human-created disasters introduce parallel difficulties as people work across the boundaries of their countries, disciplines, languages, and cultures, with known immediate dangers as well as the unexpected. NASA has long accepted that when humans have to go where humans cannot go, simulation is the sole solution. The Agency uses simulation to achieve consensus, reduce ambiguity and uncertainty, understand problems, make decisions, support design, do planning and troubleshooting, as well as for operations, training, testing, and evaluation. Simulation is at the heart of all such complex systems, products, projects, programs, and events. Difficult, hazardous short- and, especially, long-term activities have a persistent need for simulation, from the first insight into a possibly workable idea or answer until the final report, perhaps beyond our lifetime, is put in the archive. With simulation we create a common mental model, try out breakdowns of machinery or teamwork, and find opportunity for improvement. Lifecycle simulation proves to be increasingly important as risks and consequences intensify. Across the world, disasters are increasing. We anticipate more of them, as the results of global warming
Impact of large-scale tides on cosmological distortions via redshift-space power spectrum
Akitsu, Kazuyuki; Takada, Masahiro
2018-03-01
Although large-scale perturbations beyond a finite-volume survey region are not direct observables, they affect measurements of the clustering statistics of small-scale (subsurvey) perturbations in large-scale structure, compared with the ensemble average, via the mode-coupling effect. In this paper we show that a large-scale tide induced by scalar perturbations causes apparent anisotropic distortions in the redshift-space power spectrum of galaxies, in a way that depends on the alignment between the tide, the wave vector of the small-scale modes, and the line-of-sight direction. Using the perturbation theory of structure formation, we derive a response function of the redshift-space power spectrum to a large-scale tide. We then investigate the impact of the large-scale tide on estimation of cosmological distances and the redshift-space distortion parameter via the measured redshift-space power spectrum for a hypothetical large-volume survey, based on the Fisher matrix formalism. To do this, we treat the large-scale tide as a signal, rather than an additional source of statistical errors, and show that the degradation in each parameter is restored if we can employ a prior on the rms amplitude expected for the standard cold dark matter (CDM) model. We also discuss whether the large-scale tide can be constrained at an accuracy better than the CDM prediction, if the effects up to a larger wave number in the nonlinear regime can be included.
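The Fisher-matrix step of such a forecast is generic: F_ij = Σ (∂P/∂θ_i)(∂P/∂θ_j)/σ_P², summed over bins, with marginalized errors read off from F⁻¹. The Kaiser-like toy spectrum, parameter values and error model below are illustrative stand-ins, not the survey setup of the paper:

```python
import numpy as np

# Toy Fisher forecast for theta = (b, f): a Kaiser-like anisotropic spectrum
# P(k, mu) = (b + f mu^2)^2 k^{-1.5}, observed with assumed fractional errors.
k = np.linspace(0.01, 0.2, 50)
mu = np.linspace(0.0, 1.0, 10)
K, MU = np.meshgrid(k, mu)

def power(b, f):
    return (b + f * MU**2) ** 2 * K ** -1.5

theta0 = np.array([2.0, 0.5])           # fiducial bias and distortion parameter
sigma_P = 0.05 * power(*theta0)         # assumed 5% error per (k, mu) bin

# numerical derivatives dP/dtheta_i (central differences)
eps = 1e-5
derivs = []
for i in range(2):
    tp, tm = theta0.copy(), theta0.copy()
    tp[i] += eps
    tm[i] -= eps
    derivs.append((power(*tp) - power(*tm)) / (2 * eps))

# F_ij = sum over bins of dP_i * dP_j / sigma_P^2; errors from the inverse
F = np.array([[np.sum(derivs[i] * derivs[j] / sigma_P**2) for j in range(2)]
              for i in range(2)])
errors = np.sqrt(np.diag(np.linalg.inv(F)))
print("marginalized 1-sigma errors on (b, f):", errors)
```

The mu-dependence of the spectrum is what breaks the degeneracy between the two parameters, mirroring how the alignment dependence of the tidal distortion makes it separable from the isotropic signal in the paper's analysis.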
Analysis of Waves in Space Plasma (WISP) near field simulation and experiment
Richie, James E.
1992-01-01
The WISP payload scheduler for a 1995 space transportation system (shuttle flight) will include a large power transmitter on board at a wide range of frequencies. The levels of electromagnetic interference/electromagnetic compatibility (EMI/EMC) must be addressed to insure the safety of the shuttle crew. This report is concerned with the simulation and experimental verification of EMI/EMC for the WISP payload in the shuttle cargo bay. The simulations have been carried out using the method of moments for both thin wires and patches to stimulate closed solids. Data obtained from simulation is compared with experimental results. An investigation of the accuracy of the modeling approach is also included. The report begins with a description of the WISP experiment. A description of the model used to simulate the cargo bay follows. The results of the simulation are compared to experimental data on the input impedance of the WISP antenna with the cargo bay present. A discussion of the methods used to verify the accuracy of the model is shown to illustrate appropriate methods for obtaining this information. Finally, suggestions for future work are provided.
Hot air impingement on a flat plate using Large Eddy Simulation (LES) technique
Plengsa-ard, C.; Kaewbumrung, M.
2018-01-01
Hot gas jets impinging on a flat plate generate very high heat transfer coefficients in the impingement zone. The magnitude of the heat transfer prediction near the stagnation point is important, and accurate heat flux distributions are needed. This research studies the heat transfer and flow field resulting from a single hot air jet impinging on a wall. The simulation is carried out using the commercial computational fluid dynamics (CFD) code FLUENT. A Large Eddy Simulation (LES) approach with a subgrid-scale Smagorinsky-Lilly model is presented. The classical Werner-Wengle wall model is used to compute the predicted velocity and temperature near the walls. The Smagorinsky constant in the turbulence model is set to 0.1 and is kept constant throughout the investigation. A hot gas jet impinging on a flat plate with a constant surface temperature is chosen to validate the predicted heat flux results against experimental data. The jet Reynolds number is 20,000, with a fixed jet-to-plate spacing of H/D = 2.0. The Nusselt number on the impingement surface is calculated. As predicted by the wall model, the instantaneous computed Nusselt numbers agree fairly well with experimental data. The largest values of the calculated Nusselt number occur near the stagnation point and decrease monotonically in the wall jet region. Contour plots of instantaneous wall heat flux on the flat plate are also captured by the LES simulation.
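As a quick check of the quantities involved, the local Nusselt number follows from the wall heat flux via Nu = hD/k with h = q''/(T_jet − T_wall). A minimal sketch (all numerical values are illustrative, not taken from the paper):

```python
def nusselt_number(q_wall, t_jet, t_wall, d_jet, k_air):
    """Local Nusselt number from wall heat flux.

    Nu = h * D / k, with the convective coefficient h = q'' / (T_jet - T_wall).
    """
    h = q_wall / (t_jet - t_wall)   # convective coefficient, W m^-2 K^-1
    return h * d_jet / k_air

# Hypothetical values for a hot jet: 5 kW/m^2 wall flux, 100 K jet-wall
# temperature difference, 20 mm nozzle, k_air at film temperature
nu = nusselt_number(q_wall=5000.0, t_jet=400.0, t_wall=300.0,
                    d_jet=0.02, k_air=0.03)
```

With these placeholder numbers the local Nusselt number evaluates to about 33.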
Simulated effects of host fish distribution on juvenile unionid mussel dispersal in a large river
Daraio, J.A.; Weber, L.J.; Zigler, S.J.; Newton, T.J.; Nestler, J.M.
2012-01-01
Larval mussels (Family Unionidae) are obligate parasites on fish; after excystment from their host, the juveniles are transported with the flow. We know relatively little about the mechanisms that affect dispersal and subsequent settlement of juvenile mussels in large rivers. We used a three-dimensional hydrodynamic model of a reach of the Upper Mississippi River with stochastic Lagrangian particle tracking to simulate juvenile dispersal. Sensitivity analyses were used to determine the importance of excystment location in two-dimensional space (lateral and longitudinal) and to assess the effects of vertical location (depth in the water column) on dispersal distances and juvenile settling distributions. In our simulations, more than 50% of juvenile mussels settled on the river bottom within 500 m of their point of excystment, regardless of the vertical location of the fish in the water column. Dispersal distances were most variable in environments with higher velocity and high velocity gradients, such as along channel margins, near the channel bed, or where river bed morphology caused large changes in hydraulics. Both the mean and the variance of dispersal distance were greater when juvenile excystment occurred in areas where the vertical velocity (w) was positive (indicating an upward velocity) than where w was negative. Juvenile dispersal distance is likely to be more variable for mussel species whose hosts inhabit areas with steep velocity gradients (e.g., channel margins) than for those whose hosts generally inhabit low-flow environments (e.g., impounded areas).
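The stochastic Lagrangian tracking described above can be sketched as a random walk with advection and settling. The flow velocity, turbulent diffusivity, and settling velocity below are illustrative placeholders, not the study's calibrated hydrodynamic fields:

```python
import random

def track_juvenile(u=0.5, w_settle=0.002, depth=2.0, z0=1.0, dt=1.0,
                   d_turb=0.01, max_steps=100000, rng=None):
    """Track one juvenile mussel until it reaches the bed (z <= 0).

    Horizontal motion: advection at streamwise velocity u (m/s).
    Vertical motion: settling plus a Gaussian random walk standing in
    for turbulent diffusion (step std = sqrt(2 * D * dt)).
    Returns the downstream distance travelled before settling.
    """
    rng = rng or random.Random(0)
    sigma = (2.0 * d_turb * dt) ** 0.5
    x, z = 0.0, z0
    for _ in range(max_steps):
        x += u * dt                                   # downstream advection
        z += -w_settle * dt + rng.gauss(0.0, sigma)   # settling + diffusion
        z = min(z, depth)                             # cap at the water surface
        if z <= 0.0:                                  # settled on the bed
            break
    return x

# A small ensemble of independent realizations
dists = [track_juvenile(rng=random.Random(s)) for s in range(10)]
```

Repeating this for many particles released at different excystment locations yields the settling distributions analyzed in the study.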
International Nuclear Information System (INIS)
Bogacz, S.A.; Griffin, J.E.; Khiari, F.Z.
1988-05-01
Excitation of large-amplitude coherent dipole bunch oscillations by beam-induced voltages in spurious narrow resonances is simulated using a longitudinal phase-space tracking code (ESME). Simulation of the developing instability in a high-intensity proton beam driven by a spurious parasitic resonance of the rf cavities allows one to estimate the final longitudinal emittance of the beam at the end of the cycle, which puts serious limitations on the machine performance. The growth of the coupled bunch modes is significantly enhanced if a gap of missing bunches is present, an inherent feature of high-intensity proton machines. A strong transient excitation of the parasitic resonance by the Fourier components of the beam spectrum resulting from the presence of the gap is suggested as a possible mechanism of this enhancement. 10 refs., 4 figs., 1 tab
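The core of longitudinal phase-space tracking of this kind is the repeated application of an area-preserving kick-drift map, one per machine turn. A minimal single-particle sketch for a stationary rf bucket, with an illustrative synchrotron tune (this is a generic toy map, not ESME itself, which tracks full bunches with collective beam-induced voltages):

```python
import math

def track(phi0, de0, nu_s=0.01, turns=1000):
    """Kick-drift tracking of one particle in a stationary rf bucket.

    phi: rf phase deviation (rad); de: scaled energy deviation.
    The two-step update is area-preserving (symplectic); nu_s sets the
    small-amplitude synchrotron tune, i.e. oscillations per ~1/nu_s turns.
    """
    a = 2.0 * math.pi * nu_s
    phi, de = phi0, de0
    for _ in range(turns):
        de -= a * math.sin(phi)   # rf kick
        phi += a * de             # drift (phase slip from energy error)
    return phi, de

# One small-amplitude synchrotron period is ~1/nu_s = 100 turns
phi_f, de_f = track(0.1, 0.0, turns=100)
```

After 100 turns the particle returns close to its starting point, as expected for a small-amplitude synchrotron oscillation.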
Near-Space TOPSAR Large-Scene Full-Aperture Imaging Scheme Based on Two-Step Processing
Directory of Open Access Journals (Sweden)
Qianghui Zhang
2016-07-01
Free of the constraints of orbital mechanics, weather conditions and minimum antenna area, synthetic aperture radar (SAR) carried on a near-space platform is better suited to sustained large-scene imaging than its spaceborne and airborne counterparts. Terrain observation by progressive scans (TOPS), a novel wide-swath imaging mode that allows the SAR beam to scan along the azimuth, can reduce the echo-acquisition time for a large scene. Thus, near-space TOPS-mode SAR (NS-TOPSAR) provides a new opportunity for sustained large-scene imaging. An efficient full-aperture imaging scheme for NS-TOPSAR is proposed in this paper. In this scheme, two-step processing (TSP) is first adopted to eliminate the Doppler aliasing of the echo. The data are then focused in the two-dimensional frequency domain (FD) based on Stolt interpolation. Finally, a modified TSP (MTSP) is performed to remove the azimuth aliasing. Simulations are presented to demonstrate the validity of the proposed imaging scheme for near-space large-scene imaging applications.
Large eddy simulation of turbulent mixing in a T-junction
International Nuclear Information System (INIS)
Kim, Jung Woo
2010-12-01
In this report, large eddy simulation was performed to further improve our understanding of the physics of turbulent mixing in a T-junction, which has recently been regarded as one of the most important problems in nuclear thermal-hydraulics safety. The large eddy simulation technique and the other numerical methods used in this study are presented in Sec. 2, the numerical results obtained from the large eddy simulation are described in Sec. 3, and a summary is given in Sec. 4
James Webb Space Telescope Optical Simulation Testbed: Segmented Mirror Phase Retrieval Testing
Laginja, Iva; Egron, Sylvain; Brady, Greg; Soummer, Remi; Lajoie, Charles-Philippe; Bonnefois, Aurélie; Long, Joseph; Michau, Vincent; Choquet, Elodie; Ferrari, Marc; Leboulleux, Lucie; Mazoyer, Johan; N’Diaye, Mamadou; Perrin, Marshall; Petrone, Peter; Pueyo, Laurent; Sivaramakrishnan, Anand
2018-01-01
The James Webb Space Telescope (JWST) Optical Simulation Testbed (JOST) is a hardware simulator designed to produce JWST-like images. A model of the JWST three-mirror anastigmat is realized with three lenses in the form of a Cooke triplet, which provides JWST-like optical quality over a field equivalent to a NIRCam module, and an Iris AO segmented mirror with hexagonal elements stands in for the JWST segmented primary. This setup successfully produces images extremely similar to NIRCam images from cryotesting in terms of PSF morphology and sampling relative to the diffraction limit. The testbed is used for staff training of the wavefront sensing and control (WFS&C) team and for independent analysis of WFS&C scenarios for JWST. Algorithms like geometric phase retrieval (GPR) that may be used in flight, as well as potential upgrades to JWST WFS&C, will be explored. We report on the current status of the testbed after alignment, implementation of the segmented mirror, and testing of phase retrieval techniques. This optical bench complements other work at the Makidon laboratory at the Space Telescope Science Institute, including the investigation of coronagraphy for segmented-aperture telescopes. Beyond JWST, we intend to use JOST for WFS&C studies for future large segmented space telescopes such as LUVOIR.
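To illustrate what an iterative phase-retrieval test involves, the classic Gerchberg-Saxton algorithm alternates amplitude constraints between the pupil and focal planes. This is a generic sketch on a synthetic circular pupil, not the testbed's geometric phase-retrieval implementation:

```python
import numpy as np

def gerchberg_saxton(pupil_amp, focal_amp, n_iter=200, seed=0):
    """Classic Gerchberg-Saxton phase retrieval (illustration only).

    Recovers a pupil-plane phase consistent with measured pupil- and
    focal-plane amplitudes by alternating FFT projections.
    """
    rng = np.random.default_rng(seed)
    phase = rng.uniform(-np.pi, np.pi, pupil_amp.shape)  # random initial guess
    field = pupil_amp * np.exp(1j * phase)
    for _ in range(n_iter):
        focal = np.fft.fft2(field)
        focal = focal_amp * np.exp(1j * np.angle(focal))   # impose focal amplitude
        field = np.fft.ifft2(focal)
        field = pupil_amp * np.exp(1j * np.angle(field))   # impose pupil amplitude
    return np.angle(field)

# Synthetic test case: a circular pupil with a known smooth aberration
n = 64
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
pupil = (x ** 2 + y ** 2 < (n // 4) ** 2).astype(float)
true_phase = 0.5 * np.sin(2 * np.pi * x / n) * pupil
focal_amp = np.abs(np.fft.fft2(pupil * np.exp(1j * true_phase)))
est = gerchberg_saxton(pupil, focal_amp)
```

The retrieved phase reproduces the measured focal-plane amplitude to within a small residual, which is the convergence criterion used in practice.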
Simulation of space-charge effects in an ungated GEM-based TPC
Energy Technology Data Exchange (ETDEWEB)
Böhmer, F.V., E-mail: felix.boehmer@tum.de; Ball, M.; Dørheim, S.; Höppner, C.; Ketzer, B.; Konorov, I.; Neubert, S.; Paul, S.; Rauch, J.; Vandenbroucke, M.
2013-08-11
A fundamental limit to the application of Time Projection Chambers (TPCs) in high-rate experiments is the accumulation of slowly drifting ions in the active gas volume, which compromises the homogeneity of the drift field and hence the detector resolution. Conventionally, this problem is overcome by the use of ion-gating structures. This method, however, introduces large dead times and restricts trigger rates to a few hundred per second. The ion gate can be eliminated from the setup by the use of Gas Electron Multiplier (GEM) foils for gas amplification, which intrinsically suppress the backflow of ions. This makes the continuous operation of a TPC at high rates feasible. In this work, Monte Carlo simulations of the buildup of ion space charge in a GEM-based TPC and the correction of the resulting drift distortions are discussed, based on realistic numbers for the ion backflow in a triple-GEM amplification stack. A TPC in the future P̄ANDA experiment at FAIR serves as an example for the experimental environment. The simulations show that space charge densities up to 65 fC cm⁻³ are reached, leading to electron drift distortions of up to 10 mm. The application of a laser calibration system to correct these distortions is investigated. Based on full simulations of the detector physics and response, we show that it is possible to correct for the drift distortions and to maintain the good momentum resolution of the GEM-TPC.
3D space combat simulation game with artificial intelligence
Pernička, Václav
2013-01-01
The goal of this thesis is to design and implement a 3D space shooter with artificial intelligence. The thesis includes a theoretical analysis of space shooters, types of artificial intelligence, and assumptions important for development in 3D space. The game also includes a simple AI-controlled player.
A research on the excavation, support, and environment control of large scale underground space
Energy Technology Data Exchange (ETDEWEB)
Kang, Pil Chong; Kwon, Kwang Soo; Jeong, So Keul [Korea Institute of Geology Mining and Materials, Taejon (Korea, Republic of)
1995-12-01
With the growing necessity of underground space due to the shortage of above-ground space, the size and shape of underground structures tend to become complex and diverse. This complexity and variety force the development of new techniques for rock mass classification, excavation and support of underground space, and monitoring and control of the underground environment. All these techniques should be applied together to make underground space comfortable. To achieve this, efforts have been made in five areas: research on underground space design and stability analysis; techniques for rock excavation by controlled blasting; development of a monitoring system to forecast the rock behaviour of underground space; an environment inspection system for closed spaces; and dynamic analysis of airflow and environmental control in large geo-spaces. The five main achievements are improvement of the existing structural analysis program (EXCRACK) to consider the deformation and failure characteristics of rock joints; development of a new blasting design (SK-cut); prediction of ground vibration through a newly proposed wave propagation equation; development and in-situ application of a rock mass deformation monitoring system and data acquisition software; and trial manufacture of the environment inspection system for closed spaces. Applying these techniques to the development of underground space should prevent industrial disasters, cut construction costs, localize monitoring-system technology, improve tunnel stability, curtail royalty payments, and upgrade domestic technologies. (Abstract Truncated)
International Nuclear Information System (INIS)
Surzhikov, S.
2012-01-01
Graphical abstract: It has been shown that different coupled vibrational dissociation models, when applied to coupled radiative gasdynamic problems for large space vehicles, exert a noticeable effect on the radiative heating of the surface at orbital entry at high altitudes (h ⩾ 70 km). This influence decreases with decreasing space vehicle size. The figure shows translational (solid lines) and vibrational (dashed lines) temperatures in the shock layer with (circle markers) and without (triangle markers) radiative-gasdynamic interaction for one trajectory point of an entering space vehicle. Highlights: ► Nonequilibrium dissociation processes affect the radiative heating of space vehicles (SV). ► The radiation gasdynamic interaction enhances this influence. ► This influence increases with increasing SV size. - Abstract: The radiative aerothermodynamics of large-scale space vehicles is considered for Earth orbital entry at zero angle of attack. A brief description is given of the radiative gasdynamic model of a physically and chemically nonequilibrium, viscous, heat-conductive and radiative gas of complex chemical composition. Radiation gasdynamic (RadGD) interaction in the high-temperature shock layer is studied by means of numerical experiment. It is shown that radiation-gasdynamic coupling for orbital space vehicles of large size is important for the high-altitude part of the entry trajectory. It is demonstrated that the use of different models of coupled vibrational dissociation (CVD) under conditions of RadGD interaction gives rise to temperature variation in the shock layer and, as a result, leads to significant variation of the radiative heating of the space vehicle.
National Aeronautics and Space Administration — TRS Technologies proposes innovative hybrid electrostatic/flextensional membrane deformable mirror capable of large amplitude aberration correction for large...
Modeling and Simulation Techniques for Large-Scale Communications Modeling
National Research Council Canada - National Science Library
Webb, Steve
1997-01-01
.... Tests of random number generators were also developed and applied to CECOM models. It was found that synchronization of random number strings in simulations is easy to implement and can provide significant savings for making comparative studies. If synchronization is in place, then statistical experiment design can be used to provide information on the sensitivity of the output to input parameters. The report concludes with recommendations and an implementation plan.
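The benefit of synchronized random number strings for comparative studies can be shown in a few lines: two policies evaluated on common random numbers give a much lower-variance difference estimate than the same policies on independent streams. The inventory model below is a made-up example, not one of the CECOM models:

```python
import random
import statistics

def shortfall(stock, rng, n=1000):
    """Mean unmet demand over n periods for a given stock level."""
    return statistics.fmean(max(0.0, rng.gauss(10.0, 3.0) - stock)
                            for _ in range(n))

def compare(stock_a, stock_b, seed, synchronized):
    """Shortfall difference between two policies, with or without
    synchronized (common) random number streams."""
    rng_a = random.Random(seed)
    rng_b = random.Random(seed if synchronized else seed + 12345)
    return shortfall(stock_a, rng_a) - shortfall(stock_b, rng_b)

# Replicate the comparison 30 times each way
sync = [compare(11.0, 12.0, s, True) for s in range(30)]
indep = [compare(11.0, 12.0, s, False) for s in range(30)]
```

With common random numbers both policies see identical demand sequences, so sampling noise cancels in the difference; the spread of `sync` is several times smaller than that of `indep`, which is exactly the saving the report describes.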
Entropic Lattice Boltzmann: an implicit Large-Eddy Simulation?
Tauzin, Guillaume; Biferale, Luca; Sbragaglia, Mauro; Gupta, Abhineet; Toschi, Federico; Ehrhardt, Matthias; Bartel, Andreas
2017-11-01
We study the modeling of turbulence implied by the unconditionally stable Entropic Lattice Boltzmann Method (ELBM). We first focus on 2D homogeneous turbulence, for which we conduct numerical simulations for a wide range of relaxation times τ. For these simulations, we analyze the effective viscosity obtained by numerically differentiating the kinetic energy and enstrophy balance equations averaged over sub-domains of the computational grid. We aim at understanding the behavior of the implied sub-grid scale model and verify a formulation previously derived using Chapman-Enskog expansion. These ELBM benchmark simulations are thus useful to understand the range of validity of ELBM as a turbulence model. Finally, we will discuss an extension of the previously obtained results to the 3D case. Supported by the European Union's Framework Programme for Research and Innovation Horizon 2020 (2014-2020) under the Marie Sklodowska-Curie Grant Agreement No. 642069 and by the European Research Council under the ERC Grant Agreement No. 339032.
Parallel Earthquake Simulations on Large-Scale Multicore Supercomputers
Wu, Xingfu
2011-01-01
Earthquakes are one of the most destructive natural hazards on our planet Earth. Huge earthquakes striking offshore may cause devastating tsunamis, as evidenced by the 11 March 2011 Japan (moment magnitude Mw9.0) and the 26 December 2004 Sumatra (Mw9.1) earthquakes. Earthquake prediction (in terms of the precise time, place, and magnitude of a coming earthquake) is arguably unfeasible in the foreseeable future. To mitigate seismic hazards from future earthquakes in earthquake-prone areas, such as California and Japan, scientists have been using numerical simulations to study earthquake rupture propagation along faults and seismic wave propagation in the surrounding media on ever-advancing modern computers over the past several decades. In particular, ground motion simulations for past and future (possible) significant earthquakes have been performed to understand factors that affect ground shaking in populated areas, and to provide ground shaking characteristics and synthetic seismograms for emergency preparation and design of earthquake-resistant structures. These simulation results can guide the development of more rational seismic provisions, leading to safer, more efficient, and economical structures in earthquake-prone regions.
Statistics of LES simulations of large wind farms
DEFF Research Database (Denmark)
Andersen, Søren Juhl; Sørensen, Jens Nørkær; Mikkelsen, Robert Flemming
2016-01-01
The statistical moments appear to collapse, and hence the turbulence inside large wind farms can potentially be scaled accordingly. The thrust coefficient is estimated by two different reference velocities and the generic CT expression by Frandsen. A reference velocity derived from the power production is shown to give very good agreement and furthermore enables very good estimation of the thrust force using only the steady CT-curve, even for very short time samples. Finally, the effective turbulence inside large wind farms and the equivalent loads are examined.
Influence of grid aspect ratio on planetary boundary layer turbulence in large-eddy simulations
Directory of Open Access Journals (Sweden)
S. Nishizawa
2015-10-01
We examine the influence of the grid aspect ratio of horizontal to vertical grid spacing on turbulence in the planetary boundary layer (PBL) in a large-eddy simulation (LES). In order to clarify these influences and distinguish them from other artificial effects caused by numerical schemes, we used a fully compressible meteorological LES model with a fully explicit scheme of temporal integration. The influences are investigated with a series of sensitivity tests with parameter sweeps of spatial resolution and grid aspect ratio. We confirmed that the mixing length of the eddy viscosity and diffusion due to sub-grid-scale turbulence plays an essential role in reproducing the theoretical −5/3 slope of the energy spectrum. If we define the filter length in LES modeling based on consideration of the numerical scheme, and introduce a corrective factor for the grid aspect ratio into the mixing length, the theoretical slope of the energy spectrum can be obtained; otherwise, spurious energy piling appears at high wave numbers. We also found that the grid aspect ratio influences the turbulent statistics, especially the skewness of the vertical velocity near the top of the PBL, which becomes spuriously large at large aspect ratios, even if a reasonable spectrum is obtained.
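One published form of such an aspect-ratio corrective factor is that of Scotti, Meneveau and Lilly (1993), which multiplies the isotropic filter width (ΔxΔyΔz)^(1/3). The sketch below uses that factor for illustration; it is not necessarily the formulation adopted in the paper above:

```python
import math

def mixing_length(dx, dy, dz, cs=0.1):
    """Smagorinsky mixing length on an anisotropic grid.

    Applies the Scotti-Meneveau-Lilly (1993) correction
    f = cosh(sqrt(4/27 * (ln(a1)^2 - ln(a1)ln(a2) + ln(a2)^2)))
    to the isotropic filter width, where a1 <= a2 <= 1 are the grid
    aspect ratios relative to the largest spacing.
    """
    delta = (dx * dy * dz) ** (1.0 / 3.0)
    d = sorted((dx, dy, dz))
    a1, a2 = d[0] / d[2], d[1] / d[2]
    f = math.cosh(math.sqrt(4.0 / 27.0 *
        (math.log(a1) ** 2 - math.log(a1) * math.log(a2) + math.log(a2) ** 2)))
    return cs * delta * f

# Isotropic grid: the correction factor is exactly 1
iso = mixing_length(10.0, 10.0, 10.0)
# Typical flat PBL grid (horizontal spacing 10x the vertical): factor > 1
flat = mixing_length(100.0, 100.0, 10.0)
```

Without such a factor, the effective mixing length on a flat grid is too small for the horizontally resolved motions, which is one route to the spurious energy piling at high wave numbers noted above.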
Implementation of an Open-Scenario, Long-Term Space Debris Simulation Approach
Nelson, Bron; Yang Yang, Fan; Carlino, Roberto; Dono Perez, Andres; Faber, Nicolas; Henze, Chris; Karacalioglu, Arif Goktug; O'Toole, Conor; Swenson, Jason; Stupl, Jan
2015-01-01
This paper provides a status update on the implementation of a flexible, long-term space debris simulation approach. The motivation is to build a tool that can assess the long-term impact of various options for debris-remediation, including the LightForce space debris collision avoidance concept that diverts objects using photon pressure [9]. State-of-the-art simulation approaches that assess the long-term development of the debris environment use either completely statistical approaches, or they rely on large time steps on the order of several days if they simulate the positions of single objects over time. They cannot be easily adapted to investigate the impact of specific collision avoidance schemes or de-orbit schemes, because the efficiency of a collision avoidance maneuver can depend on various input parameters, including ground station positions and orbital and physical parameters of the objects involved in close encounters (conjunctions). Furthermore, maneuvers take place on timescales much smaller than days. For example, LightForce only changes the orbit of a certain object (aiming to reduce the probability of collision), but it does not remove entire objects or groups of objects. In the same sense, it is also not straightforward to compare specific de-orbit methods in regard to potential collision risks during a de-orbit maneuver. To gain flexibility in assessing interactions with objects, we implement a simulation that includes every tracked space object in Low Earth Orbit (LEO) and propagates all objects with high precision and variable time-steps as small as one second. It allows the assessment of the (potential) impact of physical or orbital changes to any object. The final goal is to employ a Monte Carlo approach to assess the debris evolution during the simulation time-frame of 100 years and to compare a baseline scenario to debris remediation scenarios or other scenarios of interest. To populate the initial simulation, we use the entire space
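The propagation core such a tool needs can be sketched with point-mass gravity and a small fixed step (the real simulation adds perturbation forces, all tracked LEO objects, conjunction screening, and variable steps down to one second; the orbit below is illustrative):

```python
import math

MU = 398600.4418  # Earth's gravitational parameter, km^3 s^-2

def rk4_step(state, dt):
    """One RK4 step of planar two-body motion; state = (x, y, vx, vy) in km, km/s."""
    def f(s):
        x, y, vx, vy = s
        r3 = (x * x + y * y) ** 1.5
        return (vx, vy, -MU * x / r3, -MU * y / r3)
    k1 = f(state)
    k2 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k1)))
    k3 = f(tuple(s + 0.5 * dt * k for s, k in zip(state, k2)))
    k4 = f(tuple(s + dt * k for s, k in zip(state, k3)))
    return tuple(s + dt * (a + 2 * b + 2 * c + d) / 6.0
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Circular orbit at ~400 km altitude, propagated one full period in 1 s steps
r0 = 6778.0                                    # orbit radius, km
state = (r0, 0.0, 0.0, math.sqrt(MU / r0))     # circular speed ~7.67 km/s
period = 2.0 * math.pi * math.sqrt(r0 ** 3 / MU)
whole = int(period)
for _ in range(whole):
    state = rk4_step(state, 1.0)
state = rk4_step(state, period - whole)        # fractional final step
```

After one period the object returns to its starting position to sub-kilometer accuracy, which is the kind of per-object precision needed before conjunction geometry can be assessed meaningfully.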
Desert Cyanobacteria under simulated space and Martian conditions
Billi, D.; Ghelardini, P.; Onofri, S.; Cockell, C. S.; Rabbow, E.; Horneck, G.
2008-09-01
The environment in space and on planets such as Mars can be lethal to living organisms, and high levels of tolerance to desiccation, cold and radiation are needed for survival: rock-inhabiting cyanobacteria belonging to the genus Chroococcidiopsis can fulfil these requirements [1]. These cyanobacteria constantly appear in the most extreme and dry habitats on Earth, including the McMurdo Dry Valleys (Antarctica) and the Atacama Desert (Chile), which are considered the closest terrestrial analogs of two Martian environmental extremes: cold and aridity. In their natural environment, these cyanobacteria occupy the last refuges for life inside porous rocks or at the stone-soil interface, where they survive in a dry, dormant state for prolonged periods. How desert strains of Chroococcidiopsis can dry without dying is only partially understood, even though experimental evidence supports the existence of an interplay between mechanisms that avoid (or limit) DNA damage and mechanisms that repair it: i) desert strains of Chroococcidiopsis mend genome fragmentation induced by ionizing radiation [2]; ii) desiccation survivors protect their genome from complete fragmentation; iii) in the dry state they show survival of an unattenuated Martian UV flux greater than that of Bacillus subtilis spores [3], and even though they die following atmospheric entry after having orbited the Earth for 16 days [4], they survive simulated shock pressures up to 10 GPa [5]. Recently, additional experiments were carried out at the German Aerospace Center (DLR) in Cologne (Germany) in order to identify suitable biomarkers for investigating the survival of Chroococcidiopsis cells present in lichen-dominated communities, in view of their direct, long-term space exposure on the International Space Station (ISS) in the framework of the LIchens and Fungi Experiments (LIFE, EXPOSE-EuTEF, ESA). Multilayers of dried cells of strains CCMEE 134 (Beacon Valley, Antarctica) and CCMEE 123 (coastal desert, Chile), shielded by
Large Blast and Thermal Simulator Reflected Wave Eliminator Study
1990-03-01
it delays the passage of this wave through the test section until after the test is complete. The required length of extra duct depends on the strength... tube axis, which acts like an additional contraction effect since Se = Sj/[Cq sin(aj)]. The extra area is illustrated best by plotting (Se-Ae)/Ac versus... "Simulation de Choc et de Souffle. Compensateur d'Ondes de Détente de Bouche pour Tube à Choc de 2400 mm de Diamètre de Veine. Description, Compte-Rendu"
Automatic Measurement in Large-Scale Space with the Laser Theodolite and Vision Guiding Technology
Directory of Open Access Journals (Sweden)
Bin Wu
2013-01-01
Multi-theodolite intersection measurement is a traditional approach to coordinate measurement in large-scale space. However, the procedure of manual labeling and aiming results in a low level of automation and low measuring efficiency, and the measurement accuracy is easily affected by manual aiming error. Building on traditional theodolite measuring methods, this paper introduces the principle of vision measurement and presents a novel automatic measurement method for large-scale space and large workpieces (equipment) that combines laser theodolite measuring with vision guiding technologies. The measuring mark is established on the surface of the measured workpiece by a collimating laser that is coaxial with the sight axis of the theodolite, so cooperation targets or manual marks are no longer needed. With the theoretical model data and multiresolution visual imaging and tracking technology, the method realizes automatic, quick, and accurate measurement of large workpieces in large-scale space. Meanwhile, the impact of human error is reduced and the measuring efficiency is improved. This method therefore has significant ramifications for the measurement of large workpieces, such as measuring the geometric appearance characteristics of ships, large aircraft, and spacecraft, and deformation monitoring of large buildings and dams.
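The underlying multi-theodolite intersection reduces to triangulating a point from two angular sightings: each station defines a sight ray from its measured azimuth and elevation, and the target is taken as the midpoint of the common perpendicular of the two rays. A minimal sketch with a hypothetical station geometry:

```python
import numpy as np

def direction(azimuth, elevation):
    """Unit sight-axis vector from azimuth and elevation readings (radians)."""
    return np.array([np.cos(azimuth) * np.cos(elevation),
                     np.sin(azimuth) * np.cos(elevation),
                     np.sin(elevation)])

def intersect(p1, d1, p2, d2):
    """Midpoint of the common perpendicular of two sight rays p + t*d,
    i.e. the least-squares intersection of two (possibly skew) lines."""
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    w = p1 - p2
    denom = a * c - b * b            # zero only for parallel sight axes
    t1 = (b * (d2 @ w) - c * (d1 @ w)) / denom
    t2 = (a * (d2 @ w) - b * (d1 @ w)) / denom
    return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

# Hypothetical setup: two stations 10 m apart sighting a target at (3, 4, 5)
target = np.array([3.0, 4.0, 5.0])
p1, p2 = np.zeros(3), np.array([10.0, 0.0, 0.0])
d1 = direction(np.arctan2(4.0, 3.0), np.arctan2(5.0, np.hypot(3.0, 4.0)))
v = target - p2
d2 = direction(np.arctan2(v[1], v[0]), np.arctan2(v[2], np.hypot(v[0], v[1])))
estimate = intersect(p1, d1, p2, d2)
```

With real angle readings the two rays are slightly skew, and the midpoint of their common perpendicular is the standard least-squares point estimate.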
Image-based Exploration of Iso-surfaces for Large Multi- Variable Datasets using Parameter Space.
Binyahib, Roba S.
2013-05-13
With an increase in processing power, more complex simulations have resulted in larger data size, with higher resolution and more variables. Many techniques have been developed to help the user to visualize and analyze data from such simulations. However, dealing with a large amount of multivariate data is challenging, time-consuming and often requires high-end clusters. Consequently, novel visualization techniques are needed to explore such data. Many users would like to visually explore their data and change certain visual aspects without the need to use special clusters or having to load a large amount of data. This is the idea behind explorable images (EI). Explorable images are a novel approach that provides limited interactive visualization without the need to re-render from the original data [40]. In this work, the concept of EI has been used to create a workflow that deals with explorable iso-surfaces for scalar fields in a multivariate, time-varying dataset. As a pre-processing step, a set of iso-values for each scalar field is inferred and extracted from a user-assisted sampling technique in time-parameter space. These iso-values are then used to generate iso-surfaces that are then pre-rendered (from a fixed viewpoint) along with additional buffers (i.e. normals, depth, values of other fields, etc.) to provide a compressed representation of iso-surfaces in the dataset. We present a tool that at run-time allows the user to interactively browse and calculate a combination of iso-surfaces superimposed on each other. The result is the same as calculating multiple iso-surfaces from the original data but without the memory and processing overhead. Our tool also allows the user to change the (scalar) values superimposed on each of the surfaces, modify their color map, and interactively re-light the surfaces. We demonstrate the effectiveness of our approach over a multi-terabyte combustion dataset. We also illustrate the efficiency and accuracy of our
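The re-lighting part of such a pipeline can be sketched directly: once normals and a coverage mask are stored per pixel at render time, shading can be recomputed for a new light direction without touching the original volume. This is a simplified Lambertian sketch; the buffer layout is an assumption for illustration:

```python
import numpy as np

def relight(normals, mask, light_dir, base_color):
    """Re-light a pre-rendered iso-surface image from its stored normal buffer.

    normals: (H, W, 3) unit surface normals captured at render time
    mask:    (H, W) booleans marking pixels covered by the surface
    No access to the original volume data is needed.
    """
    l = np.asarray(light_dir, dtype=float)
    l /= np.linalg.norm(l)
    lambert = np.clip(normals @ l, 0.0, 1.0)   # N . L per pixel, clamped
    img = np.zeros(normals.shape[:2] + (3,))
    img[mask] = lambert[mask][:, None] * np.asarray(base_color)
    return img

# Tiny synthetic buffer: a flat surface patch facing +z
normals = np.zeros((4, 4, 3))
normals[..., 2] = 1.0
mask = np.ones((4, 4), bool)
img = relight(normals, mask, light_dir=(0.0, 0.0, 1.0), base_color=(1.0, 0.5, 0.2))
```

The same stored buffers support swapping color maps or superimposed field values per pixel, which is what makes the pre-rendered representation "explorable".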
SparseLeap: Efficient Empty Space Skipping for Large-Scale Volume Rendering
Hadwiger, Markus
2017-08-28
Recent advances in data acquisition produce volume data of very high resolution and large size, such as terabyte-sized microscopy volumes. These data often contain many fine and intricate structures, which pose huge challenges for volume rendering, and make it particularly important to efficiently skip empty space. This paper addresses two major challenges: (1) The complexity of large volumes containing fine structures often leads to highly fragmented space subdivisions that make empty regions hard to skip efficiently. (2) The classification of space into empty and non-empty regions changes frequently, because the user or the evaluation of an interactive query activate a different set of objects, which makes it unfeasible to pre-compute a well-adapted space subdivision. We describe the novel SparseLeap method for efficient empty space skipping in very large volumes, even around fine structures. The main performance characteristic of SparseLeap is that it moves the major cost of empty space skipping out of the ray-casting stage. We achieve this via a hybrid strategy that balances the computational load between determining empty ray segments in a rasterization (object-order) stage, and sampling non-empty volume data in the ray-casting (image-order) stage. Before ray-casting, we exploit the fast hardware rasterization of GPUs to create a ray segment list for each pixel, which identifies non-empty regions along the ray. The ray-casting stage then leaps over empty space without hierarchy traversal. Ray segment lists are created by rasterizing a set of fine-grained, view-independent bounding boxes. Frame coherence is exploited by re-using the same bounding boxes unless the set of active objects changes. We show that SparseLeap scales better to large, sparse data than standard octree empty space skipping.
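The per-pixel ray segment list idea can be illustrated in one dimension: merge the [t0, t1) occupancy intervals produced by the rasterized bounding boxes, then sample only inside them, leaping over empty space with no per-step hierarchy traversal. This is a schematic sketch, not SparseLeap's GPU implementation:

```python
import math

def merge_segments(segments):
    """Merge overlapping [t0, t1) occupancy intervals along one ray — the
    per-pixel 'ray segment list' produced by rasterizing bounding boxes."""
    merged = []
    for t0, t1 in sorted(segments):
        if merged and t0 <= merged[-1][1]:
            merged[-1] = (merged[-1][0], max(merged[-1][1], t1))
        else:
            merged.append((t0, t1))
    return merged

def march(segments, t_max, dt, sample):
    """Sample the volume only inside non-empty segments; the gaps between
    segments are leapt over entirely."""
    hits = []
    for t0, t1 in merge_segments(segments):
        start, end = max(t0, 0.0), min(t1, t_max)
        n = max(0, math.ceil((end - start) / dt - 1e-12))  # round-off guard
        hits.extend(sample(start + i * dt) for i in range(n))
    return hits

# Two boxes cover [0.1, 0.3) and [0.25, 0.45); the rest of [0, 1) is skipped
samples = march([(0.1, 0.3), (0.25, 0.45)], t_max=1.0, dt=0.05,
                sample=lambda t: t)
```

A naive marcher over [0, 1) would take 20 samples; here only the 7 samples inside the merged occupied interval are evaluated, which is the saving SparseLeap realizes per pixel.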
Directory of Open Access Journals (Sweden)
Alison Heppenstall
2016-01-01
Cities are complex systems comprising many interacting parts. How we simulate and understand causality in urban systems is continually evolving. Over the last decade the agent-based modeling (ABM) paradigm has provided a new lens for understanding the effects of interactions of individuals and how, through such interactions, macro structures emerge, both in the social and physical environment of cities. However, this paradigm has been hindered by limited computational power and a lack of large fine-scale datasets. Within the last few years we have witnessed a massive increase in computational processing power and storage, combined with the onset of Big Data. Today geographers find themselves in a data-rich era. We now have access to a variety of data sources (e.g., social media, mobile phone data) that tell us how, and when, individuals are using urban spaces. These data raise several questions: can we effectively use them to understand and model cities as complex entities? How well have ABM approaches lent themselves to simulating the dynamics of urban processes? What has been, or will be, the influence of Big Data on increasing our ability to understand and simulate cities? What is the appropriate level of spatial analysis and time frame to model urban phenomena? Within this paper we discuss these questions using several examples of ABM applied to urban geography to begin a dialogue about the utility of ABM for urban modeling. The arguments that the paper raises are applicable across the wider research environment where researchers are considering using this approach.
Development of automation and robotics for space via computer graphic simulation methods
Fernandez, Ken
1988-01-01
A robot simulation system has been developed to perform automation and robotics system design studies. The system uses a procedure-oriented solid modeling language to produce a model of the robotic mechanism. The simulator generates the kinematics, inverse kinematics, dynamics, control, and real-time graphic simulations needed to evaluate the performance of the model. Simulation examples are presented, including simulation of the Space Station and the design of telerobotics for the Orbital Maneuvering Vehicle.
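As a small illustration of the kinematics such a simulator must generate, the following sketch computes forward and inverse kinematics for a hypothetical two-link planar arm. These are the standard textbook relations, not the paper's procedure-oriented modeling language:

```python
import math

# Forward and inverse kinematics of a hypothetical two-link planar arm
# with link lengths l1, l2 and joint angles theta1, theta2 (radians).

def forward_kinematics(theta1, theta2, l1=1.0, l2=1.0):
    """End-effector position (x, y) for the given joint angles."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def inverse_kinematics(x, y, l1=1.0, l2=1.0):
    """Joint angles (elbow-down solution) reaching point (x, y)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)
    theta2 = math.acos(max(-1.0, min(1.0, c2)))     # clamp for round-off
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2
```

Round-tripping a pose through both functions recovers the original joint angles, which is the basic consistency check a kinematics module must pass.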
Simulating Coupling Complexity in Space Plasmas: First Results from a new code
Kryukov, I.; Zank, G. P.; Pogorelov, N. V.; Raeder, J.; Ciardo, G.; Florinski, V. A.; Heerikhuisen, J.; Li, G.; Petrini, F.; Shematovich, V. I.; Winske, D.; Shaikh, D.; Webb, G. M.; Yee, H. M.
2005-12-01
The development of codes that embrace 'coupling complexity' via the self-consistent incorporation of multiple physical scales and multiple physical processes in models has been identified by the NRC Decadal Survey in Solar and Space Physics as a crucial development in simulation/modeling technology for the coming decade. The National Science Foundation, through its Information Technology Research (ITR) Program, is supporting our efforts to develop a new class of computational code for plasmas and neutral gases that integrates multiple scales and multiple physical processes and descriptions. We are developing a highly modular, parallelized, scalable code that incorporates multiple scales by synthesizing 3 simulation technologies: 1) computational fluid dynamics (hydrodynamics or magnetohydrodynamics, MHD) for the large-scale plasma; 2) direct Monte Carlo simulation of atoms/neutral gas; and 3) transport code solvers to model highly energetic particle distributions. We are constructing the code so that a fourth simulation technology, hybrid simulations for microscale structures and particle distributions, can be incorporated in future work, but for the present, this aspect will be addressed at a test-particle level. This synthesis will provide a computational tool that will advance our understanding of the physics of neutral and charged gases enormously. Besides making major advances in basic plasma physics and neutral gas problems, this project will address 3 Grand Challenge space physics problems that reflect our research interests: 1) To develop a temporal global heliospheric model which includes the interaction of solar and interstellar plasma with neutral populations (hydrogen, helium, etc., and dust), test-particle kinetic pickup ion acceleration at the termination shock, anomalous cosmic ray production, interaction with galactic cosmic rays, while incorporating the time variability of the solar wind and the solar cycle. 2) To develop a coronal
Directory of Open Access Journals (Sweden)
Zhang Guowei
2014-01-01
Based on a full-scale bookcase fire experiment, a fire development model is proposed for the whole process of localized fires in large-space buildings. We found that for localized fires in large-space buildings full of wooden combustible materials, the fire growth phase can be simplified into a t² fire with a fire growth coefficient of 0.0346 kW/s². FDS technology is applied to study the smoke temperature curve for a 2 MW to 25 MW fire occurring within a large space with a height of 6 m to 12 m and a building area of 1,500 m² to 10,000 m², based on the proposed fire development model. Through the analysis of smoke temperature in various fire scenarios, a new approach is proposed to predict the smoke temperature curve. Meanwhile, a modified model of steel temperature development in localized fire is built. In the modified model, the localized fire source is treated as a point fire source to evaluate the net heat flux from the flame to the steel. The steel temperature curve in the whole process of a localized fire can thereby be accurately predicted. These conclusions could provide a valuable reference for fire simulation, hazard assessment, and fire protection design.
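The simplified growth phase reported above, a t² fire with coefficient 0.0346 kW/s², can be evaluated directly. This sketch computes the heat release rate and the time needed to reach a given design fire size:

```python
# t-squared fire growth: heat release rate Q(t) = alpha * t^2, using the
# 0.0346 kW/s^2 growth coefficient reported from the bookcase experiment.

ALPHA = 0.0346  # kW/s^2, fire growth coefficient

def heat_release_rate(t):
    """Heat release rate in kW at time t seconds after ignition."""
    return ALPHA * t * t

def time_to_reach(q_kw):
    """Time in seconds for the growing fire to reach q_kw kilowatts."""
    return (q_kw / ALPHA) ** 0.5
```

With these relations, the 2 MW lower bound of the simulated fire range is reached roughly four minutes (about 240 s) after ignition.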
Large-Eddy Simulations of Reacting Liquid Spray
Lederlin, Thomas; Sanjose, Marlene; Gicquel, Laurent; Cuenot, Benedicte; Pitsch, Heinz; Poinsot, Thierry
2008-11-01
Numerical simulation, which is commonly used in many stages of aero-engine design, still has to demonstrate its predictive capability for two-phase reacting flows. This study is a collaboration between Stanford University and CERFACS to perform LES of a realistic spray combustor installed at ONERA, Toulouse. The experimental configuration is computed on the same unstructured mesh with two different solvers: Stanford's CDP code and CERFACS's AVBP code. CDP uses a low-Mach, variable-density solver with implicit time advancement. Droplets are tracked in a Lagrangian point-particle framework. The combustion model uses a flamelet approach, based on two transported scalars, mixture fraction and reaction progress variable. AVBP is a fully compressible solver with explicit time advancement. The liquid phase is described with an Eulerian method. The flame-turbulence interaction is modeled using a dynamically-thickened flame. Results are compared with experimental data for three regimes: purely gaseous non-reacting flow, non-reacting flow with evaporating droplets, and reacting flow with droplets. Both simulations show good agreement with experimental data and also highlight the differences and relative advantages of the numerical methods.
Simulation of the space debris environment in LEO using a simplified approach
Kebschull, Christopher; Scheidemann, Philipp; Hesselbach, Sebastian; Radtke, Jonas; Braun, Vitali; Krag, H.; Stoll, Enrico
2017-01-01
Several numerical approaches exist to simulate the evolution of the space debris environment. These simulations usually rely on the propagation of a large population of objects in order to determine the collision probability for each object. Explosion and collision events are triggered randomly using a Monte-Carlo (MC) approach. Thus, in different scenarios, different objects are fragmented and contribute to different realizations of the space debris environment. The results of the single Monte-Carlo runs therefore represent the whole spectrum of possible evolutions of the space debris environment. For the comparison of different scenarios, in general the average of all MC runs together with its standard deviation is used. This method is computationally very expensive due to the propagation of thousands of objects over long timeframes and the application of the MC method. At the Institute of Space Systems (IRAS) a model capable of describing the evolution of the space debris environment has been developed and implemented. The model is based on source and sink mechanisms, where yearly launches as well as collisions and explosions are considered as sources. The natural decay and post mission disposal measures are the only sink mechanisms. This method reduces the computational costs tremendously. In order to achieve this benefit a few simplifications have been applied. The approach of the model partitions the Low Earth Orbit (LEO) region into altitude shells. Only two kinds of objects are considered, intact bodies and fragments, which are also divided into diameter bins. As an extension to a previously presented model the eccentricity has additionally been taken into account with 67 eccentricity bins. While a set of differential equations has been implemented in a generic manner, the Euler method was chosen to integrate the equations for a given time span. For this paper parameters have been derived so that the model is able to reflect the results of the numerical MC
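The source/sink idea can be caricatured in a few lines. The sketch below uses a single altitude shell, two object classes (intact bodies and fragments), and entirely hypothetical coefficients for launches, decay, and collisions; only the structure, balance equations integrated with the explicit Euler method, mirrors the model described above:

```python
# Deliberately simplified source/sink debris model (hypothetical
# coefficients, not the IRAS model): one altitude shell, intact bodies N
# and fragments F. Launches are a source, atmospheric decay a sink, and
# collisions convert intact bodies into fragments.

def evolve(years, dt=1.0, launches=80.0, decay_n=0.02, decay_f=0.05,
           collision_rate=1e-6, frags_per_collision=100.0):
    n, f = 3000.0, 10000.0                           # initial populations
    for _ in range(int(years / dt)):
        collisions = collision_rate * n * f          # pairwise collision term
        dn = launches - decay_n * n - collisions     # intact-body balance
        df = frags_per_collision * collisions - decay_f * f
        n += dt * dn                                 # explicit Euler step
        f += dt * df
    return n, f
```

Even this toy version reproduces the qualitative feedback the full model resolves: the collision term grows with the fragment population, accelerating the loss of intact bodies.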
Energy Technology Data Exchange (ETDEWEB)
Priere, C
2005-01-15
Nowadays, environmental and economic constraints require considerable research effort from the gas turbine industry, aimed at lowering pollutant emissions and fuel consumption. These efforts are essential to satisfy the continued growth of energy production and to comply with stringent environmental legislation. Recorded progress is linked to mixing enhancement in combustors running at lean premixed operating points. Indeed, industry pays close attention to mixing enhancement, and in recent years efforts have concentrated on the dilution of fresh and burned gases. The Jet In Cross Flow (JICF) constitutes a representative case for furthering this research effort. It has been widely studied both experimentally and numerically, and is particularly well suited for the evaluation of Large Eddy Simulation (LES). This approach, in which large-scale phenomena are naturally accounted for in the governing equations while the small scales are modelled, offers the means to predict such flows well. The main objective of this work is to gauge and enhance the quality of LES predictions in JICF configurations by means of numerical tools developed in the compressible AVBP code. Physical and numerical parameters of the JICF modelling are considered, and strategies able to enhance the quality of LES results are proposed. The configurations studied in this work are the following: influence of the boundary conditions and jet injection system on a free JICF; study of a static mixing device in an industrial gas turbine chamber; study of a JICF configuration representing a dilution zone in low-emissions combustors. (author)
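The LES division of labor mentioned above, resolving the large scales while modeling the small ones, is commonly closed with an eddy viscosity. The sketch below implements the classic Smagorinsky closure on a 2-D uniform grid; it is a generic textbook model, not the specific subgrid machinery of the AVBP code:

```python
import numpy as np

# Smagorinsky subgrid closure: nu_t = (Cs * Delta)^2 * |S|, with the
# filter width Delta taken equal to the grid spacing dx. Axis 0 is y,
# axis 1 is x. Illustrative only.

def smagorinsky_viscosity(u, v, dx, cs=0.17):
    """Eddy viscosity nu_t on a 2-D grid from velocity fields u, v."""
    dudy, dudx = np.gradient(u, dx)   # np.gradient returns axis-0 first
    dvdy, dvdx = np.gradient(v, dx)
    # strain-rate magnitude |S| = sqrt(2 S_ij S_ij)
    s_mag = np.sqrt(2.0 * dudx**2 + 2.0 * dvdy**2 + (dudy + dvdx)**2)
    return (cs * dx) ** 2 * s_mag
```

For a uniform shear layer the strain-rate magnitude is constant, so the eddy viscosity reduces to (Cs Δ)², a quick sanity check for any implementation.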
Cosmological special relativity the large scale structure of space, time and velocity
Carmeli, Moshe
1997-01-01
This book deals with special relativity theory and its application to cosmology. It presents Einstein's theory of space and time in detail, and describes the large scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of the space at the early universe is derived, both from the cosmological transformation. The book will be of interest to cosmologists, astrophysicists, theoretical
Cosmological special relativity the large scale structure of space, time and velocity
Carmeli, Moshe
2002-01-01
This book presents Einstein's theory of space and time in detail, and describes the large-scale structure of space, time and velocity as a new cosmological special relativity. A cosmological Lorentz-like transformation, which relates events at different cosmic times, is derived and applied. A new law of addition of cosmic times is obtained, and the inflation of the space at the early universe is derived, both from the cosmological transformation. The relationship between cosmic velocity, acceleration and distances is given. In the appendices gravitation is added in the form of a cosmological g
DEFF Research Database (Denmark)
Yang, Yang; Kær, Søren Knudsen
2012-01-01
The flow structure of one isothermal swirling case in the Sydney swirl flame database was studied using two numerical methods. Results from the Reynolds-averaged Navier-Stokes (RANS) approach and large eddy simulation (LES) were compared with experimental measurements. The simulations were applied...
Large Eddy Simulation of stratified flows over structures
Brechler J.; Fuka V.
2013-01-01
We tested the ability of the LES model CLMM (Charles University Large-Eddy Microscale Model) to model stratified flow around three-dimensional hills. We compared quantities such as the height of the dividing streamline, the recirculation zone length, and the length of the lee waves with experiments by Hunt and Snyder [3] and numerical computations by Ding, Calhoun and Street [5]. The results mostly agreed with the references, but some important differences are present.
Large-eddy simulation of the temporal mixing layer using the Clark model
Vreman, A.W.; Geurts, B.J.; Kuerten, J.G.M.
1996-01-01
The Clark model for the turbulent stress tensor in large-eddy simulation is investigated from a theoretical and computational point of view. In order to be applicable to compressible turbulent flows, the Clark model has been reformulated. Actual large-eddy simulation of a weakly compressible,
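The incompressible form of the Clark (gradient) model is compact enough to write out: τ_ij ≈ (Δ²/12) ∂u_i/∂x_k ∂u_j/∂x_k. The sketch below evaluates it on a 2-D uniform grid with the filter width Δ set to the grid spacing; the paper's compressible reformulation is not reproduced here:

```python
import numpy as np

# Clark (gradient) subgrid stress model, incompressible 2-D form:
# tau_ij ~ (Delta^2 / 12) * du_i/dx_k * du_j/dx_k. Axis 0 is y, axis 1 is x.
# Illustrative sketch, not the paper's compressible reformulation.

def clark_stress(u, v, dx):
    """Return tau_xx, tau_xy, tau_yy on the grid (filter width = dx)."""
    dudy, dudx = np.gradient(u, dx)
    dvdy, dvdx = np.gradient(v, dx)
    c = dx * dx / 12.0
    tau_xx = c * (dudx * dudx + dudy * dudy)
    tau_xy = c * (dudx * dvdx + dudy * dvdy)
    tau_yy = c * (dvdx * dvdx + dvdy * dvdy)
    return tau_xx, tau_xy, tau_yy
```

Unlike eddy-viscosity closures, the Clark model produces the full stress tensor directly from resolved velocity gradients, which is why it is attractive for a priori testing.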
LaWen Hollingsworth; James Menakis
2010-01-01
This project mapped wildland fire potential (WFP) for the conterminous United States using the large fire simulation system developed for the Fire Program Analysis (FPA) System. The large fire simulation system, referred to here as LFSim, consists of modules for weather generation, fire occurrence, fire suppression, and fire growth modeling. Weather was generated with...
Fast Simulation of Large-Scale Floods Based on GPU Parallel Computing
Qiang Liu; Yi Qin; Guodong Li
2018-01-01
Computing speed is a significant issue of large-scale flood simulations for real-time response to disaster prevention and mitigation. Even today, most of the large-scale flood simulations are generally run on supercomputers due to the massive amounts of data and computations necessary. In this work, a two-dimensional shallow water model based on an unstructured Godunov-type finite volume scheme was proposed for flood simulation. To realize a fast simulation of large-scale floods on a personal...
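A Godunov-type finite volume scheme of the kind mentioned above is easiest to see in one dimension. The sketch below advances the 1-D shallow water equations one step with a first-order Rusanov (local Lax-Friedrichs) flux; it is a serial, structured-grid caricature of the paper's unstructured 2-D GPU scheme, with boundary cells simply held fixed:

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def shallow_water_step(h, hu, dx, dt):
    """One first-order finite-volume step for the 1-D shallow water
    equations. h: water depth per cell, hu: discharge per cell."""
    u = hu / h
    c = np.sqrt(G * h)                       # gravity wave speed
    # physical fluxes F = (hu, hu^2 + g h^2 / 2)
    f1 = hu
    f2 = hu * u + 0.5 * G * h * h
    # Rusanov interface fluxes between cells i and i+1
    a = np.maximum(np.abs(u[:-1]) + c[:-1], np.abs(u[1:]) + c[1:])
    F1 = 0.5 * (f1[:-1] + f1[1:]) - 0.5 * a * (h[1:] - h[:-1])
    F2 = 0.5 * (f2[:-1] + f2[1:]) - 0.5 * a * (hu[1:] - hu[:-1])
    # conservative update of interior cells (boundaries held fixed)
    h_new, hu_new = h.copy(), hu.copy()
    h_new[1:-1] -= dt / dx * (F1[1:] - F1[:-1])
    hu_new[1:-1] -= dt / dx * (F2[1:] - F2[:-1])
    return h_new, hu_new
```

Because every cell update depends only on its two neighboring interface fluxes, this structure parallelizes naturally, which is what makes the GPU implementation in the paper effective.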
Interactive computer graphics and its role in control system design of large space structures
Reddy, A. S. S. R.
1985-01-01
This paper attempts to show the relevance of interactive computer graphics in the design of control systems that maintain the attitude and shape of large space structures to accomplish the required mission objectives. The typical phases of control system design, starting from the physical model and including modeling of the dynamics, modal analysis, and control system design methodology, are reviewed, and the need for interactive computer graphics is demonstrated. Typical constituent parts of large space structures, such as free-free beams and free-free plates, are used to demonstrate the complexity of the control system design and the effectiveness of interactive computer graphics.
Mud pressure simulation on large horizontal directional drilling
Energy Technology Data Exchange (ETDEWEB)
Placido, Rafael R.; Avesani Neto, Jose O.; Martins, Pedro R.R.; Rocha, Ronaldo [Instituto de Pesquisas Tecnologicas do Estado de Sao Paulo (IPT), Sao Paulo, SP (Brazil)
2009-07-01
Horizontal Directional Drilling (HDD) is being extensively used in Brazil for installation of oil and gas pipelines. This trenchless technology is currently used in crossings of water bodies, environmentally sensitive areas, densely populated areas, areas prone to mass movement, and anywhere the traditional technology is not suitable because of the risks. One of the unwanted effects of HDD is collapse of the soil surrounding the bore-hole, leading to loss of fluid. This can result in problems such as reduced drilling efficiency, ground heave, damage to structures, fluid infiltration, and other environmental problems. This paper presents four simulations of down-hole fluid pressures, representing two different geometrical characteristics of the drilling and two different soils. The results showed that greater depths are needed in longer drillings to avoid ground rupture; thus, the end section of the drilling often represents the critical stage. (author)
Methodology for analysis and simulation of large multidisciplinary problems
Russell, William C.; Ikeda, Paul J.; Vos, Robert G.
1989-01-01
The Integrated Structural Modeling (ISM) program is being developed for the Air Force Weapons Laboratory and will be available for Air Force work. Its goal is to provide a design, analysis, and simulation tool intended primarily for directed energy weapons (DEW), kinetic energy weapons (KEW), and surveillance applications. The code is designed to run on DEC (VMS and UNIX), IRIS, Alliant, and Cray hosts. Several technical disciplines are included in ISM, namely structures, controls, optics, thermal, and dynamics. Four topics from the broad ISM goal are discussed. The first is project configuration management and includes two major areas: the software and database arrangement and the system model control. The second is interdisciplinary data transfer and refers to exchange of data between various disciplines such as structures and thermal. Third is a discussion of the integration of component models into one system model, i.e., multiple discipline model synthesis. Last is a presentation of work on a distributed processing computing environment.
Cyclic loading of simulated fault gouge to large strains
Jones, Lucile M.
1980-04-01
As part of a study of the mechanics of simulated fault gouge, deformation of Kayenta Sandstone (24% initial porosity) was observed in triaxial stress tests through several stress cycles. Between 50- and 300-MPa effective pressure the specimens deformed stably without stress drops and with deformation occurring throughout the sample. At 400-MPa effective pressure the specimens underwent strain softening with the deformation occurring along one plane. However, the difference in behavior seems to be due to the density variation at different pressures rather than to the difference in pressure. After peak stress was reached in each cycle, the samples dilated such that the volumetric strain and the linear strain maintained a constant ratio (approximately 0.1) at all pressures. The behavior was independent of the number of stress cycles to linear strains up to 90% and was in general agreement with laws of soil behavior derived from experiments conducted at low pressure (below 5 MPa).
Numerical simulations of a large scale oxy-coal burner
Energy Technology Data Exchange (ETDEWEB)
Chae, Taeyoung [Korea Institute of Industrial Technology, Cheonan (Korea, Republic of). Energy System R and D Group; Sungkyunkwan Univ., Suwon (Korea, Republic of). School of Mechanical Engineering; Park, Sanghyun; Ryu, Changkook [Sungkyunkwan Univ., Suwon (Korea, Republic of). School of Mechanical Engineering; Yang, Won [Korea Institute of Industrial Technology, Cheonan (Korea, Republic of). Energy System R and D Group
2013-07-01
Oxy-coal combustion is one of the promising carbon dioxide capture and storage (CCS) technologies; it uses oxygen and recirculated CO₂ as an oxidizer instead of air. Due to the differences in physical properties between CO₂ and N₂, oxy-coal combustion requires development of the burner and boiler based on a fundamental understanding of the flame shape, temperature, radiation, and heat flux. For the design of a new oxy-coal combustion system, computational fluid dynamics (CFD) is an essential tool to evaluate detailed combustion characteristics and supplement experimental results. In this study, CFD analysis was performed to understand the combustion characteristics inside a tangential-vane swirl type 30 MW coal burner for air-mode and oxy-mode operations. In oxy-mode operations, various compositions of the primary and secondary oxidizers were assessed, depending on the recirculation ratio of the flue gas. For the simulations, devolatilization of coal and char burnout by O₂, CO₂ and H₂O were predicted with a Lagrangian particle tracking method considering the size distribution of the pulverized coal and turbulent dispersion. The radiative heat transfer was solved by employing the discrete ordinates method with the weighted sum of gray gases model (WSGGM) optimized for oxy-coal combustion. In the simulation results for oxy-mode operation, the reduced swirl strength of the secondary oxidizer increased the flame length due to the lower specific volume of CO₂ compared with N₂. The flame length was also sensitive to the flow rate of the primary oxidizer. The absence of N₂ in the oxidizer suppresses thermal NOₓ formation, making NOₓ lower in oxy-mode than in air-mode. The predicted results showed similar trends to the measured temperature profiles for various oxidizer compositions. Further numerical investigations are required to improve the burner design combined with more detailed experimental results.
A Path Space Extension for Robust Light Transport Simulation
DEFF Research Database (Denmark)
Hachisuka, Toshiya; Pantaleoni, Jacopo; Jensen, Henrik Wann
2012-01-01
We present a new sampling space for light transport paths that makes it possible to describe Monte Carlo path integration and photon density estimation in the same framework. A key contribution of our paper is the introduction of vertex perturbations, which extends the space of paths with loosely...
Electrothermal Simulation of Large-Area Semiconductor Devices
Directory of Open Access Journals (Sweden)
C Kirsch
2017-06-01
The lateral charge transport in thin-film semiconductor devices is affected by the sheet resistance of the various layers. This may lead to a non-uniform current distribution across a large-area device, resulting in inhomogeneous luminance, for example, as observed in organic light-emitting diodes (Neyts et al., 2006). The resistive loss in electrical energy is converted into thermal energy via Joule heating, which results in a temperature increase inside the device. On the other hand, the charge transport properties of the device materials are also temperature-dependent, such that we are facing a two-way coupled electrothermal problem. It has been demonstrated that adding thermal effects to an electrical model significantly changes the results (Slawinski et al., 2011). We present a mathematical model for the steady-state distribution of the electric potential and of the temperature across one electrode of a large-area semiconductor device, as well as numerical solutions obtained using the finite element method.
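The two-way coupling can be illustrated with a zero-dimensional caricature: a sheet whose electrical conductance rises with temperature dissipates Joule heat, which raises its temperature in turn. All constants below are illustrative, and the fixed-point iteration stands in for the paper's coupled finite element solve:

```python
# 0-D caricature of a two-way coupled electrothermal problem
# (illustrative constants, not the paper's FEM model): conductance
# G(T) = g0 * (1 + alpha*(T - T_amb)), Joule heating P = G V^2,
# thermal balance T = T_amb + R_th * P. Fixed-point iteration finds
# the self-consistent operating point.

def electrothermal_operating_point(v=5.0, g0=0.1, alpha=0.01,
                                   r_th=2.0, t_amb=300.0, tol=1e-10):
    """Return (temperature in K, dissipated power in W) at self-consistency."""
    t = t_amb
    for _ in range(1000):
        g = g0 * (1.0 + alpha * (t - t_amb))  # conductance rises with T
        p = g * v * v                          # Joule heating
        t_new = t_amb + r_th * p               # thermal balance
        if abs(t_new - t) < tol:
            return t_new, p
        t = t_new
    raise RuntimeError("no self-consistent solution (thermal runaway?)")
```

For these constants the electrothermal feedback raises the operating temperature by about 5.3 K over the purely electrical answer; with a stronger temperature coefficient the iteration diverges, the 0-D analogue of thermal runaway.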
Wind Data Analysis and Wind Flow Simulation Over Large Areas
Directory of Open Access Journals (Sweden)
Terziev Angel
2014-03-01
Increasing the share of renewable energy sources is one of the core policies of the European Union, because this energy is essential in reducing greenhouse gas emissions and securing energy supplies. Currently, the share of wind energy among all renewable energy sources is relatively low. The choice of location for a wind farm installation strongly depends on the wind potential, so accurate assessment of the wind potential is extremely important. In the present paper an analysis is made of the impact of the principal parameters on the determination of wind energy potential for relatively large areas. The analysis considers the type of measurements (short- and long-term on-site measurements), the type of instrumentation, and the terrain roughness factor. A study of the impact of turbulence on the wind flow distribution over complex terrain is presented, based on real on-site data collected by meteorological tall towers installed in the northern part of Bulgaria. By means of CFD-based software, a wind map is developed for relatively large areas. Different turbulence models were tested in the numerical calculations, and recommendations are given for the use of specific models in flow modeling over complex terrain. The role of each parameter in wind map development is assessed. Different approaches for determining wind energy potential based on the previously developed wind map are presented.
Large eddy simulation and direct numerical simulation of high speed turbulent reacting flows
Adumitroaie, V.; Frankel, S. H.; Madnia, C. K.; Givi, P.
The objective of this research is to make use of Large Eddy Simulation (LES) and Direct Numerical Simulation (DNS) for the computational analysis of high speed reacting flows. Our efforts in the first phase of this research, conducted within the past three years, have been directed at several issues pertaining to the intricate physics of turbulent reacting flows. In our previous 5 semi-annual reports submitted to NASA LaRC, as well as several technical papers in archival journals, the results of our investigations have been fully described. In this progress report, which is different in format compared to our previous documents, we focus only on the issue of LES. The reason for doing so is that LES is the primary issue of interest to our Technical Monitor and that our other findings were needed to support the activities conducted under this prime issue. The outcomes of our related investigations, nevertheless, are included in the appendices accompanying this report. The relevance of the materials in these appendices is, therefore, discussed only briefly within the body of the report. Here, results are presented of a priori and a posteriori analyses for validity assessments of assumed Probability Density Function (PDF) methods as potential subgrid scale (SGS) closures for LES of turbulent reacting flows. Simple non-premixed reacting systems involving an isothermal reaction of the type A + B → Products under both chemical equilibrium and non-equilibrium conditions are considered. A priori analyses are conducted of a homogeneous box flow, and a spatially developing planar mixing layer, to investigate the performance of the Pearson family of PDFs as SGS models. A posteriori analyses are conducted of the mixing layer using a hybrid one-equation Smagorinsky/PDF SGS closure. The Smagorinsky closure augmented by the solution of the subgrid turbulent kinetic energy (TKE) equation is employed to account for hydrodynamic fluctuations, and the PDF is employed for modeling the
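The assumed-PDF idea can be illustrated with a beta PDF, one member (Pearson Type I) of the Pearson family mentioned above: the subgrid distribution of a mixture fraction Z is parameterized by its resolved mean and variance, and filtered quantities follow by integrating against it. The sketch below uses simple midpoint quadrature and is illustrative only, not the report's implementation:

```python
import math

# Assumed beta-PDF closure sketch: the subgrid distribution of mixture
# fraction Z on (0, 1) is a beta PDF matched to the resolved mean and
# variance; filtered quantities are integrals of f(Z) against that PDF.

def beta_params(mean, var):
    """Shape parameters (a, b) of a beta PDF with the given mean/variance.
    Requires 0 < var < mean * (1 - mean)."""
    gamma = mean * (1.0 - mean) / var - 1.0
    return mean * gamma, (1.0 - mean) * gamma

def filtered(func, mean, var, n=2000):
    """Approximate the integral of func(Z) * beta_pdf(Z) over (0, 1)
    by midpoint quadrature."""
    a, b = beta_params(mean, var)
    norm = math.gamma(a) * math.gamma(b) / math.gamma(a + b)  # beta function
    total = 0.0
    for i in range(n):
        z = (i + 0.5) / n
        pdf = z ** (a - 1.0) * (1.0 - z) ** (b - 1.0) / norm
        total += func(z) * pdf / n
    return total
```

A quick consistency check: filtering Z itself must return the prescribed mean, and filtering Z² must return variance plus mean squared.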
Energy Technology Data Exchange (ETDEWEB)
Burgwinkel, Paul; Vreydal, Daniel; Eltaliawi, Gamil; Vijayakumar, Nandhakumar [RWTH Aachen (DE). Inst. fuer Maschinentechnik der Rohstoffindustrie (IMR)
2010-09-15
For the first time the Co-simulation method was successfully used for full representation of a large belt conveyor for an open cast mine in a simulation model at the Institute for Mechanical Engineering in the Raw Materials Industry at Rhineland-Westphalia Technological University in Aachen. The aim of this project was the development of an electro-mechanical simulation model, which represents all components of a large belt conveyor from the drive motor to the conveyor belt in one simulation model and thus makes the interactions between the individual assemblies verifiable by calculations. With the aid of the developed model it was possible to determine critical operating speeds of the represented large belt conveyor and derive suitable measures to combat undesirable resonance states in the drive assembly. Furthermore it was possible to clarify the advantage of the full numerical representation of an electromechanical drive system. (orig.)
Large anterior temporal Virchow-Robin spaces: unique MR imaging features
Energy Technology Data Exchange (ETDEWEB)
Lim, Anthony T. [Monash University, Neuroradiology Service, Monash Imaging, Monash Health, Melbourne, Victoria (Australia); Chandra, Ronil V. [Monash University, Neuroradiology Service, Monash Imaging, Monash Health, Melbourne, Victoria (Australia); Monash University, Department of Surgery, Faculty of Medicine, Nursing and Health Sciences, Melbourne (Australia); Trost, Nicholas M. [St Vincent's Hospital, Neuroradiology Service, Melbourne (Australia); McKelvie, Penelope A. [St Vincent's Hospital, Anatomical Pathology, Melbourne (Australia); Stuckey, Stephen L. [Monash University, Neuroradiology Service, Monash Imaging, Monash Health, Melbourne, Victoria (Australia); Monash University, Southern Clinical School, Faculty of Medicine, Nursing and Health Sciences, Melbourne (Australia)
2015-05-01
Large Virchow-Robin (VR) spaces may mimic cystic tumor. The anterior temporal subcortical white matter is a recently described preferential location, with only 18 reported cases. Our aim was to identify unique MR features that could increase prospective diagnostic confidence. Thirty-nine cases were identified between November 2003 and February 2014. Demographic, clinical data and the initial radiological report were retrospectively reviewed. Two neuroradiologists reviewed all MR imaging; a neuropathologist reviewed histological data. Median age was 58 years (range 24-86 years); the majority (69 %) was female. There were no clinical symptoms that could be directly referable to the lesion. Two thirds were considered to be VR spaces on the initial radiological report. Mean maximal size was 9 mm (range 5-17 mm); majority (79 %) had perilesional T2 or fluid-attenuated inversion recovery (FLAIR) hyperintensity. The following were identified as potential unique MR features: focal cortical distortion by an adjacent branch of the middle cerebral artery (92 %), smaller adjacent VR spaces (26 %), and a contiguous cerebrospinal fluid (CSF) intensity tract (21 %). Surgery was performed in three asymptomatic patients; histopathology confirmed VR spaces. Unique MR features were retrospectively identified in all three patients. Large anterior temporal lobe VR spaces commonly demonstrate perilesional T2 or FLAIR signal and can be misdiagnosed as cystic tumor. Potential unique MR features that could increase prospective diagnostic confidence include focal cortical distortion by an adjacent branch of the middle cerebral artery, smaller adjacent VR spaces, and a contiguous CSF intensity tract. (orig.)
Experimental simulations of beam propagation over large distances in a compact linear Paul trap
International Nuclear Information System (INIS)
Gilson, Erik P.; Chung, Moses; Davidson, Ronald C.; Dorf, Mikhail; Efthimion, Philip C.; Majeski, Richard
2006-01-01
The Paul Trap Simulator Experiment (PTSX) is a compact laboratory experiment that places the physicist in the frame of reference of a long, charged-particle bunch coasting through a kilometers-long magnetic alternating-gradient (AG) transport system. The transverse dynamics of particles in both systems are described by similar equations, including nonlinear space-charge effects. The time-dependent voltages applied to the PTSX quadrupole electrodes are equivalent to the axially oscillating magnetic fields applied in the AG system. Experiments concerning the quiescent propagation of intense beams over large distances can then be performed in a compact and flexible facility. An understanding and characterization of the conditions required for quiescent beam transport, minimum halo particle generation, and precise beam compression and manipulation techniques are essential, as accelerators and transport systems demand that ever-increasing amounts of space charge be transported. Application areas include ion-beam-driven high energy density physics, high energy and nuclear physics accelerator systems, etc. One-component cesium plasmas have been trapped in PTSX that correspond to normalized beam intensities, ŝ = ωp²(0)/2ωq², up to 80% of the space-charge limit where self-electric forces balance the applied focusing force. Here, ωp(0) = [nb(0)eb²/(mbε0)]^(1/2) is the on-axis plasma frequency, and ωq is the smooth-focusing frequency associated with the applied focusing field. Plasmas in PTSX with values of ŝ that are 20% of the limit have been trapped for times corresponding to equivalent beam propagation over 10 km. Results are presented for experiments in which the amplitude of the quadrupole focusing lattice is modified as a function of time. It is found that instantaneous changes in lattice amplitude can be detrimental to transverse confinement of the charge bunch.
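As a quick illustration of the normalized-intensity parameter defined in this abstract, the sketch below evaluates ŝ from assumed plasma parameters. The density and focusing frequency are hypothetical placeholders chosen only to land near the 80%-of-limit regime quoted above; they are not measured PTSX values.

```python
import math

EPS0 = 8.8541878128e-12              # vacuum permittivity, F/m
E_CHARGE = 1.602176634e-19           # elementary charge, C
M_CS = 132.905 * 1.66053906660e-27   # cesium ion mass, kg

def plasma_frequency(n0, q=E_CHARGE, m=M_CS):
    """On-axis plasma frequency: omega_p(0) = sqrt(n_b(0) e_b^2 / (m_b eps0))."""
    return math.sqrt(n0 * q * q / (m * EPS0))

def normalized_intensity(n0, omega_q, q=E_CHARGE, m=M_CS):
    """Normalized beam intensity s_hat = omega_p^2(0) / (2 omega_q^2);
    s_hat -> 1 at the space-charge limit, where the self-field cancels
    the applied focusing force."""
    return plasma_frequency(n0, q, m) ** 2 / (2.0 * omega_q ** 2)

# Hypothetical numbers for a singly ionized cesium plasma:
s_hat = normalized_intensity(n0=4.8e11, omega_q=2 * math.pi * 1.0e4)
```

With these placeholder values the intensity comes out near ŝ ≈ 0.8, i.e. the strongly space-charge-dominated regime the experiment reports reaching.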
Unified Approach to Modeling and Simulation of Space Communication Networks and Systems
Barritt, Brian; Bhasin, Kul; Eddy, Wesley; Matthews, Seth
2010-01-01
Network simulator software tools are often used to model the behaviors and interactions of applications, protocols, packets, and data links in terrestrial communication networks. Other software tools that model the physics, orbital dynamics, and RF characteristics of space systems have matured to allow for rapid, detailed analysis of space communication links. However, the absence of a unified toolset that integrates the two modeling approaches has encumbered the systems engineers tasked with the design, architecture, and analysis of complex space communication networks and systems. This paper presents the unified approach and describes the motivation, challenges, and our solution: the customization of the network simulator to integrate with astronautical analysis software tools for high-fidelity end-to-end simulation. Keywords: space; communication; systems; networking; simulation; modeling; QualNet; STK; integration; space networks
The seesaw space, a vector space to identify and characterize large-scale structures at 1 AU
Lara, A.; Niembro, T.
2017-12-01
We introduce the seesaw space, an orthonormal space formed by the local and the global fluctuations of any of the four basic solar parameters: velocity, density, magnetic field, and temperature, at any heliospheric distance. The fluctuations compare the standard deviation of a three-hour moving average of the parameter against its running average over a month (considered the local fluctuations) and over a year (the global fluctuations). We created this new vector space to identify the arrival of transients at any spacecraft without the need for an observer. We applied our method to the one-minute resolution data of the WIND spacecraft from 1996 to 2016. To study the behavior of the seesaw norms in terms of the solar cycle, we computed annual histograms and fitted piecewise functions formed by two log-normal distributions, and observed that one of the distributions is due to large-scale structures while the other is due to the ambient solar wind. The norm values at which the piecewise functions change vary with the solar cycle. We compared the seesaw norms of each of the basic parameters at the arrival of coronal mass ejections, co-rotating interaction regions, and sector boundaries reported in the literature. High seesaw norms are due to large-scale structures. We found three critical values of the norms that can be used to determine the arrival of coronal mass ejections. We also present general comparisons of the norms during the two maxima and the minimum of the solar cycle, and the differences in the norms due to large-scale structures in each period.
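The seesaw construction can be sketched in a few lines. The window lengths, the exact fluctuation definition, and the Euclidean norm below are illustrative assumptions; the paper's own definitions may differ in detail.

```python
import math
import statistics

def running_mean(series, window):
    """Running average with truncation at the edges (simplified)."""
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - window // 2), min(len(series), i + window // 2 + 1)
        out.append(sum(series[lo:hi]) / (hi - lo))
    return out

def fluctuation(series, short, long):
    """Standard deviation of a short window, taken about a long running average."""
    base = running_mean(series, long)
    out = []
    for i in range(len(series)):
        lo, hi = max(0, i - short // 2), min(len(series), i + short // 2 + 1)
        out.append(statistics.pstdev([series[j] - base[i] for j in range(lo, hi)]))
    return out

def seesaw_norms(series, short=3, month=30, year=365):
    """Norm in the (local, global) fluctuation plane for one solar-wind parameter."""
    local = fluctuation(series, short, month)
    glob = fluctuation(series, short, year)
    return [math.hypot(a, b) for a, b in zip(local, glob)]
```

A quiet interval yields norms near zero, while a sharp jump in the parameter (e.g. a shock at the arrival of a coronal mass ejection) produces a localized spike in the norm, which is what makes an observer-free detection threshold possible.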
Prediction of Thermal Environment in a Large Space Using Artificial Neural Network
Directory of Open Access Journals (Sweden)
Hyun-Jung Yoon
2018-02-01
Since the thermal environment of large space buildings such as stadiums can vary depending on the location of the stands, it is important to divide them into different zones and evaluate their thermal environments separately. The thermal environment can be evaluated using physical values measured with sensors, but the occupant density of stadium stands is high, which limits the locations available for installing sensors. As a way to overcome this limitation, we propose a method to predict the thermal environment of each zone in a large space. We divided the six key thermal factors affecting the thermal environment in a large space into predicted factors (indoor air temperature, mean radiant temperature, and clothing) and fixed factors (air velocity, metabolic rate, and relative humidity). Using artificial neural network (ANN) models, with the outdoor air temperature and the surface temperature of the interior walls around the stands as input data, we developed a method to predict the three thermal factors. Learning and verification datasets were established using STAR CCM+ (2016.10, Siemens PLM Software, Plano, TX, USA). An analysis of each model's prediction results showed that the prediction accuracy increased with the number of learning data points. The thermal environment evaluation process developed in this study can be used to control heating, ventilation, and air conditioning (HVAC) facilities for each zone in a large space building, given sufficient learning by the ANN models at the building testing or evaluation stage.
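To illustrate the regression task described above, the sketch below trains a minimal one-hidden-layer network on synthetic data in which a normalized "indoor temperature" is a simple blend of outdoor and wall-surface temperatures. The architecture, learning rate, and synthetic target are assumptions for illustration, not the paper's actual model (which was trained on STAR CCM+ data).

```python
import math
import random

random.seed(0)

def init_net(n_in, n_hidden):
    """Small one-hidden-layer network: tanh hidden units, linear output."""
    w1 = [[random.uniform(-0.5, 0.5) for _ in range(n_in)] for _ in range(n_hidden)]
    w2 = [random.uniform(-0.5, 0.5) for _ in range(n_hidden)]
    return w1, w2

def forward(x, w1, w2):
    h = [math.tanh(sum(w * xi for w, xi in zip(row, x))) for row in w1]
    return sum(v * hj for v, hj in zip(w2, h)), h

def train(data, n_hidden=8, lr=0.05, epochs=800):
    """Plain stochastic gradient descent on squared error."""
    w1, w2 = init_net(len(data[0][0]), n_hidden)
    for _ in range(epochs):
        for x, target in data:
            y, h = forward(x, w1, w2)
            err = y - target
            for j in range(n_hidden):
                grad_hj = err * w2[j] * (1.0 - h[j] ** 2)  # back-prop through tanh
                w2[j] -= lr * err * h[j]
                for i, xi in enumerate(x):
                    w1[j][i] -= lr * grad_hj * xi
    return w1, w2

# Synthetic stand-in: normalized indoor temperature as a blend of the
# outdoor air temperature and the interior-wall surface temperature.
data = [([o / 10.0, s / 10.0], 0.3 * o / 10.0 + 0.7 * s / 10.0)
        for o in range(5) for s in range(5)]
w1, w2 = train(data)
```

After training, the network reproduces the synthetic mapping closely; in the paper's setting the same structure would be fitted per zone against CFD-generated learning data.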
Simulating Nonlinear Dynamics of Deployable Space Structures, Phase I
National Aeronautics and Space Administration — To support NASA's vital interest in developing much larger solar array structures over the next 20 years, MotionPort LLC's Phase I SBIR project will strengthen...
Modeling a Large Data Acquisition Network in a Simulation Framework
AUTHOR|(INSPIRE)INSPIRE-00337030; The ATLAS collaboration; Froening, Holger; Garcia, Pedro Javier; Vandelli, Wainer
2015-01-01
The ATLAS detector at CERN records particle collision “events” delivered by the Large Hadron Collider. Its data-acquisition system is a distributed software system that identifies, selects, and stores interesting events in near real-time, with an aggregate throughput of several tens of GB/s. It is executed on a farm of roughly 2000 commodity worker nodes communicating via TCP/IP on an Ethernet network. Event data fragments are received from the many detector readout channels and are buffered, collected together, analyzed, and either stored permanently or discarded. This system, and data-acquisition systems in general, are sensitive to the latency of the data transfer from the readout buffers to the worker nodes. Challenges affecting this transfer include the many-to-one communication pattern and the inherently bursty nature of the traffic. In this paper we introduce the main performance issues brought about by this workload, focusing in particular on the so-called TCP incast pathol...
Coarse-Grain Bandwidth Estimation Techniques for Large-Scale Space Network
Cheung, Kar-Ming; Jennings, Esther
2013-01-01
In this paper, we describe a top-down analysis and simulation approach to size the bandwidths of a store-and-forward network for a given network topology, a mission traffic scenario, and a set of data types with different latency requirements. We use these techniques to estimate the wide area network (WAN) bandwidths of the ground links for different architecture options of the proposed Integrated Space Communication and Navigation (SCaN) Network.
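A coarse-grain version of this sizing exercise can be sketched as follows. The flow model and the "drain each flow within its window plus latency budget" rule are simplifying assumptions for illustration, not the actual SCaN traffic model; the traffic numbers are hypothetical.

```python
def required_bandwidth_bps(flows):
    """Coarse-grain link sizing for a store-and-forward network.

    flows: list of (volume_bits, generation_window_s, latency_budget_s).
    Each flow's volume must be delivered within its generation window plus
    its latency budget, so it contributes a sustained rate of
    volume / (window + budget); a shared trunk carries the sum.
    """
    return sum(vol / (window + budget) for vol, window, budget in flows)

# Hypothetical mission traffic: bulk science with a relaxed latency budget,
# plus low-rate housekeeping telemetry with a tight one.
wan_bps = required_bandwidth_bps([
    (8e9, 3600.0, 7200.0),   # 1 GB of science per hour, 2 h delivery budget
    (1e6, 1.0, 1.0),         # 1 Mb of telemetry per second, 1 s budget
])
```

The point of the top-down approach is that latency-tolerant bulk data can be smoothed over long windows, so the WAN bandwidth is driven by the sum of sustained rates rather than by instantaneous peaks.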
Yuen, Vincent K.
1989-01-01
The Systems Engineering Simulator has addressed the major issues in providing visual data to its real-time man-in-the-loop simulations. Out-the-window views and CCTV views are provided by three scene systems to give the astronauts their real-world views. To expand the window coverage for the Space Station Freedom workstation a rotating optics system is used to provide the widest field of view possible. To provide video signals to as many viewpoints as possible, windows and CCTVs, with a limited amount of hardware, a video distribution system has been developed to time-share the video channels among viewpoints at the selection of the simulation users. These solutions have provided the visual simulation facility for real-time man-in-the-loop simulations for the NASA space program.
Lattice models for large-scale simulations of coherent wave scattering
Wang, Shumin; Teixeira, Fernando L.
2004-01-01
Lattice approximations for partial differential equations describing physical phenomena are commonly used for the numerical simulation of many problems otherwise intractable by pure analytical approaches. The discretization inevitably leads to many of the original symmetries being broken or modified. In the case of Maxwell’s equations, for example, the invariance and isotropy of the speed of light in vacuum are invariably lost because of the so-called grid dispersion. Since it is a cumulative effect, grid dispersion is particularly harmful to the accuracy of large-scale simulations of scattering problems. Grid dispersion is usually combated either by increasing the lattice resolution or by employing higher-order schemes with larger stencils for the space and time derivatives. Both alternatives lead to increased computational cost to simulate a problem of a given physical size. Here, we introduce a general approach to develop lattice approximations with reduced grid-dispersion error for a given stencil (and hence at no additional computational cost). The present approach is based on first obtaining stencil coefficients in the Fourier domain that minimize the maximum grid-dispersion error for wave propagation in all directions (minimax sense). The resulting coefficients are then expanded into a Taylor series in terms of the frequency variable and incorporated into time-domain (update) equations after an inverse Fourier transformation. Maximally flat (Butterworth) or Chebyshev filters are subsequently used to minimize the wave-speed variations for a given frequency range of interest. The use of such filters also allows for the adjustment of the grid-dispersion characteristics so as to minimize not only the local dispersion error but also the accumulated phase error in a frequency range of interest.
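The grid-dispersion effect is easy to quantify for the standard second-order scheme. The sketch below evaluates the numerical phase velocity of a 1D FDTD lattice, a simplified stand-in for the multi-dimensional, minimax-optimized stencils the paper develops.

```python
import math

def numerical_phase_velocity(points_per_wavelength, courant=0.5):
    """Numerical phase velocity (as a fraction of c) for the standard
    second-order 1D FDTD scheme, from its exact dispersion relation:
        sin(w dt / 2) / (c dt) = sin(k dx / 2) / dx
    """
    k_dx = 2.0 * math.pi / points_per_wavelength   # k * dx
    s = courant                                    # Courant number c * dt / dx
    w_dt = 2.0 * math.asin(s * math.sin(k_dx / 2.0))
    return (w_dt / s) / k_dx                       # (w / k) / c
```

At 10 points per wavelength the wave travels about 1% slow, and the error shrinks roughly quadratically with resolution. Because the phase error accumulates with propagation distance, even a 1% velocity error becomes a full cycle of phase slip after ~100 wavelengths, which is why large-scale scattering simulations need either finer grids, larger stencils, or the optimized coefficients proposed in the paper.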
A large-eddy simulation based power estimation capability for wind farms over complex terrain
Senocak, I.; Sandusky, M.; Deleon, R.
2017-12-01
There has been an increasing interest in predicting wind fields over complex terrain at the micro-scale for resource assessment, turbine siting, and power forecasting. These capabilities are made possible by advancements in computational speed from a new generation of computing hardware, numerical methods, and physics modelling. The micro-scale wind prediction model presented in this work is based on the large-eddy simulation paradigm with surface-stress parameterization. The complex terrain is represented using an immersed-boundary method that takes into account the parameterization of the surface stresses. Governing equations of incompressible fluid flow are solved using a projection method with second-order accurate schemes in space and time. We use actuator disk models with rotation to simulate the influence of turbines on the wind field. Data regarding power production from individual turbines are mostly restricted because of the proprietary nature of the wind energy business. Most studies report the percentage drop of power relative to power from the first row. There have been different approaches to predicting power production. Some studies simply report the available upstream wind power, some estimate power production using power curves available from turbine manufacturers, and some estimate power as torque multiplied by rotational speed. In the present work, we propose a black-box approach that considers a control volume around a turbine and estimates the power extracted from the turbine based on the conservation-of-energy principle. We applied our wind power prediction capability to wind farms over flat terrain, such as the wind farm in Mower County, Minnesota and the Horns Rev offshore wind farm in Denmark. The results from these simulations are in good agreement with published data. We also estimate power production from a hypothetical wind farm in a complex terrain region and identify potential zones suitable for wind power production.
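The control-volume idea in the last paragraph follows directly from conservation of energy. The sketch below assumes uniform inflow/outflow profiles and a disk-averaged mass flux, which are simplifications of what an LES would actually integrate over the control-volume faces.

```python
import math

def control_volume_power(rho, rotor_area, u_in, u_out):
    """Power extracted by a turbine, estimated from the kinetic-energy
    deficit across a control volume enclosing it:
        P = (mdot / 2) * (u_in^2 - u_out^2),
    with the mass flux evaluated at the disk, u_disk = (u_in + u_out) / 2.
    """
    u_disk = 0.5 * (u_in + u_out)
    mdot = rho * rotor_area * u_disk
    return 0.5 * mdot * (u_in ** 2 - u_out ** 2)

# Sanity check against classical actuator-disk theory: u_out = u_in / 3
# should reproduce the Betz power coefficient 16/27.
p = control_volume_power(rho=1.225, rotor_area=math.pi * 50.0 ** 2,
                         u_in=10.0, u_out=10.0 / 3.0)
```

In the simulations themselves, `u_in` and `u_out` would come from face-averaged LES velocities around each turbine rather than from an idealized wake, which is what makes the estimate "black-box".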
Visual Data-Analytics of Large-Scale Parallel Discrete-Event Simulations
Energy Technology Data Exchange (ETDEWEB)
Ross, Caitlin; Carothers, Christopher D.; Mubarak, Misbah; Carns, Philip; Ross, Robert; Li, Jianping Kelvin; Ma, Kwan-Liu
2016-11-13
Parallel discrete-event simulation (PDES) is an important tool in the codesign of extreme-scale systems because PDES provides a cost-effective way to evaluate designs of high-performance computing systems. Optimistic synchronization algorithms for PDES, such as Time Warp, allow events to be processed without global synchronization among the processing elements. A rollback mechanism is provided when events are processed out of timestamp order. Although optimistic synchronization protocols enable the scalability of large-scale PDES, the performance of the simulations must be tuned to reduce the number of rollbacks and provide an improved simulation runtime. To enable efficient large-scale optimistic simulations, one has to gain insight into the factors that affect the rollback behavior and simulation performance. We developed a tool for ROSS model developers that gives them detailed metrics on the performance of their large-scale optimistic simulations at varying levels of simulation granularity. Model developers can use this information for parameter tuning of optimistic simulations in order to achieve better runtime and fewer rollbacks. In this work, we instrument the ROSS optimistic PDES framework to gather detailed statistics about the simulation engine. We have also developed an interactive visualization interface that uses the data collected by the ROSS instrumentation to understand the underlying behavior of the simulation engine. The interface connects real time to virtual time in the simulation and provides the ability to view simulation data at different granularities. We demonstrate the usefulness of our framework by performing a visual analysis of the dragonfly network topology model provided by the CODES simulation framework built on top of ROSS. The instrumentation needs to minimize overhead in order to accurately collect data about the simulation performance. To ensure that the instrumentation does not introduce unnecessary overhead, we perform a
Simulation of transients with space-dependent feedback by coarse mesh flux expansion method
International Nuclear Information System (INIS)
Langenbuch, S.; Maurer, W.; Werner, W.
1975-01-01
For the simulation of the time-dependent behaviour of large LWR cores, even the most efficient finite-difference (FD) methods require a prohibitive amount of computing time in order to achieve results of acceptable accuracy. Static coarse-mesh (CM) solutions computed with a mesh size corresponding to the fuel element structure (about 20 cm) are at least as accurate as FD solutions computed with about 5 cm mesh size. For 3d calculations this results in a reduction of storage requirements by a factor of 60 and of computing costs by a factor of 40, relative to FD methods. These results have been obtained for pure neutronic calculations, where feedback is not taken into account. In this paper it is demonstrated that the method retains its accuracy in kinetic calculations as well, even in the presence of strongly space-dependent feedback. (orig./RW)
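The storage factor quoted above follows directly from the mesh-size ratio; a one-line check makes this explicit. (The computing-cost factor of 40 additionally reflects per-point work and time-step counts, which this sketch does not model.)

```python
def point_count_ratio(fine_cm, coarse_cm, dims=3):
    """Ratio of grid-point counts between a fine FD mesh and a coarse mesh
    covering the same domain in `dims` dimensions."""
    return (coarse_cm / fine_cm) ** dims

# 20 cm coarse mesh vs. 5 cm FD mesh in 3D: (20/5)^3 = 64 points per
# coarse-mesh node, consistent with the roughly 60-fold storage reduction.
ratio = point_count_ratio(5.0, 20.0)
```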
Simulating and assessing boson sampling experiments with phase-space representations
Opanchuk, Bogdan; Rosales-Zárate, Laura; Reid, Margaret D.; Drummond, Peter D.
2018-04-01
The search for new, application-specific quantum computers designed to outperform any classical computer is driven by the ending of Moore's law and the quantum advantages potentially obtainable. Photonic networks are promising examples, with experimental demonstrations and potential for obtaining a quantum computer to solve problems believed classically impossible. This introduces a challenge: how does one design or understand such photonic networks? One must be able to calculate observables using general methods capable of treating arbitrary inputs, dissipation, and noise. We develop complex phase-space software for simulating these photonic networks, and apply this to boson sampling experiments. Our techniques give sampling errors orders of magnitude lower than experimental correlation measurements for the same number of samples. We show that these techniques remove systematic errors in previous algorithms for estimating correlations, with large improvements in errors in some cases. In addition, we obtain a scalable channel-combination strategy for assessment of boson sampling devices.
Prime focus architectures for large space telescopes: reduce surfaces to save cost
Breckinridge, J. B.; Lillie, C. F.
2016-07-01
Conceptual architectures are now being developed to identify future directions for post-JWST large space telescope systems operating in the UV, optical, and near-IR regions of the spectrum. Here we show that the cost of optical surfaces within large-aperture telescope/instrument systems can exceed $100M per reflection when expressed in terms of the aperture increase needed to overcome internal absorption loss. We recommend a program in innovative optical design to minimize the number of surfaces by considering multiple functions for mirrors. An example is given using Rowland circle imaging spectrometer systems for UV space science. With few exceptions, current space telescope architectures are based on systems optimized for ground-based astronomy. Both HST and JWST are classical "Cassegrain" telescopes derived from the ground-based tradition of co-locating the massive primary mirror and the instruments at the same end of the metrology structure. This requirement derives from the dual need to minimize observatory dome size and cost in the presence of the Earth's 1-g gravitational field. Space telescopes, however, function in the zero gravity of space, and the 1-g constraint is relieved to the advantage of astronomers. Here we suggest that a prime-focus large-aperture telescope system in space may potentially have higher transmittance, better pointing, improved thermal and structural control, less internal polarization, and broader wavelength coverage than Cassegrain telescopes. An example is given showing how UV astronomy telescopes use single optical elements for multiple functions and therefore have a minimum number of reflections.
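The cost argument can be made concrete with a small sketch: each mirror of reflectivity r passes a fraction r of the light, so N surfaces pass r^N, and recovering the lost photons requires scaling the collecting area by r^-N, i.e. the diameter by r^(-N/2). The reflectivity value below is a hypothetical placeholder, not a figure from the paper.

```python
def diameter_scale(n_surfaces, reflectivity):
    """Diameter growth needed to offset absorption over n reflective surfaces:
    throughput r^N is compensated by collecting area, which scales as D^2."""
    return reflectivity ** (-n_surfaces / 2.0)

# With a hypothetical 90%-reflective coating (plausible in the far UV, where
# coatings are lossy), removing two surfaces from a design shrinks the
# required aperture diameter by about 11%.
scale = diameter_scale(2, 0.90)
```

Since mirror and structure costs grow steeply with aperture diameter, even a ~10% diameter saving per pair of eliminated surfaces translates into the large per-reflection cost figures quoted above.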
Energy Technology Data Exchange (ETDEWEB)
Sidles, John A; Jacky, Jonathan P [Department of Orthopaedics and Sports Medicine, Box 356500, School of Medicine, University of Washington, Seattle, WA, 98195 (United States); Garbini, Joseph L; Malcomb, Joseph R; Williamson, Austin M [Department of Mechanical Engineering, University of Washington, Seattle, WA 98195 (United States); Harrell, Lee E [Department of Physics, US Military Academy, West Point, NY 10996 (United States); Hero, Alfred O [Department of Electrical Engineering, University of Michigan, MI 49931 (United States); Norman, Anthony G [Department of Bioengineering, University of Washington, Seattle, WA 98195 (United States)], E-mail: sidles@u.washington.edu
2009-06-15
Practical recipes are presented for simulating high-temperature and nonequilibrium quantum spin systems that are continuously measured and controlled. The notion of a spin system is broadly conceived, in order to encompass macroscopic test masses as the limiting case of large-j spins. The simulation technique has three stages: first the deliberate introduction of noise into the simulation, then the conversion of that noise into an equivalent continuous measurement and control process, and finally, projection of the trajectory onto state-space manifolds having reduced dimensionality and possessing a Kaehler potential of multilinear algebraic form. These state-spaces can be regarded as ruled algebraic varieties upon which a projective quantum model order reduction (MOR) is performed. The Riemannian sectional curvature of ruled Kaehlerian varieties is analyzed, and proved to be non-positive upon all sections that contain a rule. These manifolds are shown to contain Slater determinants as a special case and their identity with Grassmannian varieties is demonstrated. The resulting simulation formalism is used to construct a positive P-representation for the thermal density matrix. Single-spin detection by magnetic resonance force microscopy (MRFM) is simulated, and the data statistics are shown to be those of a random telegraph signal with additive white noise. Larger-scale spin-dust models are simulated, having no spatial symmetry and no spatial ordering; the high-fidelity projection of numerically computed quantum trajectories onto low dimensionality Kaehler state-space manifolds is demonstrated. The reconstruction of quantum trajectories from sparse random projections is demonstrated, the onset of Donoho-Stodden breakdown at the Candes-Tao sparsity limit is observed, a deterministic construction for sampling matrices is given and methods for quantum state optimization by Dantzig selection are given.
International Nuclear Information System (INIS)
Zoubian, Julien
2012-01-01
The observations of the supernovae, the cosmic microwave background, and more recently the measurements of baryon acoustic oscillations and weak lensing effects converge to a Lambda CDM model, with an accelerating expansion of the present-day Universe. This model needs two dark components to fit the observations: dark matter and dark energy. Two approaches seem particularly promising for measuring both the geometry of the Universe and the growth of dark matter structures: the analysis of the weak distortions of distant galaxies by gravitational lensing, and the study of baryon acoustic oscillations. Both methods require very large sky surveys of several thousand square degrees. In the context of the spectroscopic survey of the EUCLID space mission, dedicated to the study of the dark side of the Universe, I developed a pixel simulation tool for analyzing instrumental performance. The proposed method can be summarized in three steps. The first step is to simulate the observables, i.e. mainly the sources on the sky. I worked out a new method, adapted for spectroscopic simulations, which makes it possible to mock an existing galaxy survey while ensuring that the distributions of the spectral properties of the galaxies are representative of current observations, in particular the distribution of the emission lines. The second step is to simulate the instrument and produce images equivalent to the expected real images. Based on the pixel simulator of the HST, I developed a new tool to compute the images of the spectroscopic channel of EUCLID. The new simulator has the particularity of being able to simulate PSFs with various energy distributions and detectors with different pixels. The last step is the estimation of the performance of the instrument. Based on existing tools, I set up a pipeline for image processing and performance measurement. My main results were: 1) to validate the method by simulating an existing galaxy survey, the WISP survey, 2) to determine the
Large-scale numerical simulations of star formation put to the test
DEFF Research Database (Denmark)
Frimann, Søren; Jørgensen, Jes Kristian; Haugbølle, Troels
2016-01-01
(SEDs), calculated from large-scale numerical simulations, to observational studies, thereby aiding in both the interpretation of the observations and in testing the fidelity of the simulations. Methods: The adaptive mesh refinement code, RAMSES, is used to simulate the evolution of a 5 pc × 5 pc × 5 pc... to calculate the evolutionary tracers Tbol and Lsmm/Lbol. It is shown that, while the observed distributions of the tracers are well matched by the simulation, they generally do a poor job of tracking the protostellar ages. Disks form early in the simulation, with 40% of the Class 0 protostars being encircled by one...
Large-Scale Demonstration of Liquid Hydrogen Storage with Zero Boiloff for In-Space Applications
Hastings, L. J.; Bryant, C. B.; Flachbart, R. H.; Holt, K. A.; Johnson, E.; Hedayat, A.; Hipp, B.; Plachta, D. W.
2010-01-01
Cryocooler and passive insulation technology advances have substantially improved prospects for zero-boiloff cryogenic storage. Therefore, a cooperative effort by NASA's Ames Research Center, Glenn Research Center, and Marshall Space Flight Center (MSFC) was implemented to develop zero-boiloff concepts for in-space cryogenic storage. Described herein is one program element - a large-scale, zero-boiloff demonstration using the MSFC multipurpose hydrogen test bed (MHTB). A commercial cryocooler was interfaced with an existing MHTB spray bar mixer and insulation system in a manner that enabled a balance between incoming and extracted thermal energy.
Major technological innovations introduced in the large antennas of the Deep Space Network
Imbriale, W. A.
2002-01-01
The NASA Deep Space Network (DSN) is the largest and most sensitive scientific, telecommunications and radio navigation network in the world. Its principal responsibilities are to provide communications, tracking, and science services to most of the world's spacecraft that travel beyond low Earth orbit. The network consists of three Deep Space Communications Complexes. Each of the three complexes consists of multiple large antennas equipped with ultra sensitive receiving systems. A centralized Signal Processing Center (SPC) remotely controls the antennas, generates and transmits spacecraft commands, and receives and processes the spacecraft telemetry.
An optimum organizational structure for a large earth-orbiting multidisciplinary Space Base
Ragusa, J. M.
1973-01-01
The purpose of this exploratory study was to identify an optimum hypothetical organizational structure for a large earth-orbiting multidisciplinary research and applications (R&A) Space Base manned by a mixed crew of technologists. Since such a facility does not presently exist, in situ empirical testing was not possible. Study activity was, therefore, concerned with the identification of a desired organizational structural model rather than the empirical testing of it. The essential finding of this research was that a four-level project type 'total matrix' model will optimize the efficiency and effectiveness of Space Base technologists.
Hopkins, Randall C.; Capizzo, Peter; Fincher, Sharon; Hornsby, Linda S.; Jones, David
2010-01-01
The Advanced Concepts Office at Marshall Space Flight Center completed a brief spacecraft design study for the 8-meter monolithic Advanced Technology Large Aperture Space Telescope (ATLAST-8m). This spacecraft concept provides all power, communication, telemetry, avionics, guidance and control, and thermal control for the observatory, and inserts the observatory into a halo orbit about the second Sun-Earth Lagrange point. The multidisciplinary design team created a simple spacecraft design that enables component and science instrument servicing, employs articulating solar panels for help with momentum management, and provides precise pointing control while at the same time fast slewing for the observatory.
Space Weathering Evolution on Airless Bodies - Laboratory Simulations with Olivine
Czech Academy of Sciences Publication Activity Database
Kohout, Tomáš; Čuda, J.; Bradley, T.; Britt, D.; Filip, J.; Tuček, J.; Malina, O.; Kašlík, J.; Šišková, K.; Zbořil, R.
2013-01-01
Roč. 45, č. 9 (2013), s. 25-26 ISSN 0002-7537. [Annual meeting of the Division for Planetary Sciences of the American Astronomical Society /45./. 06.10.2013-11.10.2013, Denver] Institutional support: RVO:67985831 Keywords : space weathering * asteroid * Moon * olivine Subject RIV: BN - Astronomy, Celestial Mechanics, Astrophysics http://aas.org/files/resources/dps_abstract_book.pdf
Space-Charge-Limited Emission Models for Particle Simulation
Verboncoeur, J. P.; Cartwright, K. L.; Murphy, T.
2004-11-01
Space-charge-limited (SCL) emission of electrons from various materials is a common method of generating the high-current beams required to drive high power microwave (HPM) sources. In the SCL emission process, sufficient space charge is extracted from a surface, often of complicated geometry, to drive the electric field normal to the surface close to zero. The emitted current is dominated by space-charge effects as well as ambient fields near the surface. In this work, we consider computational models for the macroscopic SCL emission process, including application of Gauss's law and the Child-Langmuir law for space-charge-limited emission. Models are described for ideal conductors, lossy conductors, and dielectrics. Also considered is the discretization of these models, and the implications for the emission physics. Previous work on primary and dual-cell emission models [Watrous et al., Phys. Plasmas 8, 289-296 (2001)] is reexamined, and aspects of the performance, including fidelity and noise properties, are improved. Models for one-dimensional diodes are considered, as well as multidimensional emitting surfaces, which include corners and transverse fields.
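The Child-Langmuir law invoked above has a closed form for the ideal planar diode. As a minimal sketch (not the multidimensional discretized models the abstract describes), the space-charge-limited current density can be computed as:

```python
import math

EPS0 = 8.8541878128e-12     # vacuum permittivity (F/m)
E_CHARGE = 1.602176634e-19  # elementary charge (C)
M_E = 9.1093837015e-31      # electron mass (kg)

def child_langmuir_current_density(voltage, gap):
    """Child-Langmuir space-charge-limited current density (A/m^2) for an
    ideal planar vacuum diode: J = (4*eps0/9) * sqrt(2e/m) * V^(3/2) / d^2."""
    return (4.0 * EPS0 / 9.0) * math.sqrt(2.0 * E_CHARGE / M_E) \
        * voltage**1.5 / gap**2
```

For example, 1 kV across a 1 cm gap gives a current density on the order of a few hundred A/m^2; the multidimensional emission models discussed in the abstract must additionally handle corners and transverse fields, which the planar formula ignores.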
Very large virtual compound spaces: construction, storage and utility in drug discovery.
Peng, Zhengwei
2013-09-01
Recent activities in the construction, storage and exploration of very large virtual compound spaces are reviewed in this report. As expected, the systematic exploration of compound spaces at the highest resolution (individual atoms and bonds) is intrinsically intractable. By contrast, by staying within a finite number of reactions and a finite number of reactants or fragments, several virtual compound spaces have been constructed in a combinatorial fashion with sizes ranging from 10^11 to 10^20 compounds. Multiple search methods have been developed to perform searches (e.g. similarity, exact and substructure) into those compound spaces without the need for full enumeration. The up-front investment spent on synthetic feasibility during the construction of some of those virtual compound spaces enables a wider adoption by medicinal chemists to design and synthesize important compounds for drug discovery. Recent activities in the area of exploring virtual compound spaces via the evolutionary approach based on Genetic Algorithm also suggest a positive shift of focus from method development to workflow, integration and ease of use, all of which are required for this approach to be widely adopted by medicinal chemists.
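The combinatorial sizes quoted above (10^11 to 10^20 compounds) follow directly from multiplying reactant-pool sizes per reaction scheme and summing over schemes. A hypothetical sketch of that counting (the function and pool sizes are illustrative, not from any cited system):

```python
import math

def virtual_space_size(reaction_schemes):
    """Total number of virtual products for a set of combinatorial reaction
    schemes, where each scheme is a tuple of reactant-pool sizes."""
    return sum(math.prod(pools) for pools in reaction_schemes)
```

A single two-component coupling with two pools of 10^5 reactants already yields 10^10 virtual products, which is why such spaces are searched without full enumeration.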
Directory of Open Access Journals (Sweden)
Suozhu Wang
2014-02-01
The large eddy simulation (LES) of a spatially evolving supersonic boundary layer transition over a flat plate with freestream Mach number 4.5 is performed in the present work. The Favre-filtered Navier-Stokes equations are used to simulate large scales, while a dynamic mixed subgrid-scale (SGS) model is used to simulate the subgrid stress. The convective terms are discretized with a fifth-order upwind compact difference scheme, while a sixth-order symmetric compact difference scheme is employed for the diffusive terms. The basic mean flow is obtained from the similarity solution of the compressible laminar boundary layer. In order to ensure the transition from the initial laminar flow to fully developed turbulence, a pair of oblique first-mode perturbations is imposed on the inflow boundary. The whole process of the spatial transition is obtained from the simulation. Through the space-time average, the variations of typical statistical quantities are analyzed. It is found that the distributions of turbulent Mach number, root-mean-square (rms) fluctuation quantities, and Reynolds stresses along the wall-normal direction at different streamwise locations exhibit self-similarity in the fully developed turbulent region. Finally, the onset and development of large-scale coherent structures through the transition process are depicted.
Simulation analysis of photometric data for attitude estimation of unresolved space objects
Du, Xiaoping; Gou, Ruixin; Liu, Hao; Hu, Heng; Wang, Yang
2017-10-01
The attitude information acquisition of unresolved space objects, such as micro-nano satellites and GEO objects under the way of ground-based optical observations, is a challenge to space surveillance. In this paper, a useful method is proposed to estimate the SO attitude state according to the simulation analysis of photometric data in different attitude states. The object shape model was established and the parameters of the BRDF model were determined, then the space object photometric model was established. Furthermore, the photometric data of space objects in different states are analyzed by simulation and the regular characteristics of the photometric curves are summarized. The simulation results show that the photometric characteristics are useful for attitude inversion in a unique way. Thus, a new idea is provided for space object identification in this paper.
Energy Technology Data Exchange (ETDEWEB)
NONE
2003-03-01
Joint meeting of the 6th Simulation Science Symposium and the NIFS Collaboration Research 'Large Scale Computer Simulation' was held on December 12-13, 2002 at the National Institute for Fusion Science, with the aim of promoting interdisciplinary collaborations in various fields of computer simulation. The meeting, attended by more than 40 people, consisted of 11 invited and 22 contributed papers, whose topics extended not only to fusion science but also to related fields such as astrophysics, earth science, fluid dynamics, molecular dynamics, computer science, etc. (author)
Language Simulations: The Blending Space for Writing and Critical Thinking
Kovalik, Doina L.; Kovalik, Ludovic M.
2007-01-01
This article describes a language simulation involving six distinct phases: an in-class quick response, a card game, individual research, a classroom debate, a debriefing session, and an argumentative essay. An analysis of student artifacts--quick-response writings and final essays, respectively, both addressing the definition of liberty in a…
Harvey, Jason; Moore, Michael
2013-01-01
The General-Use Nodal Network Solver (GUNNS) is a modeling software package that combines nodal analysis and the hydraulic-electric analogy to simulate fluid, electrical, and thermal flow systems. GUNNS is developed by L-3 Communications under the TS21 (Training Systems for the 21st Century) project for NASA Johnson Space Center (JSC), primarily for use in space vehicle training simulators at JSC. It has sufficient compactness and fidelity to model the fluid, electrical, and thermal aspects of space vehicles in real-time simulations running on commodity workstations, for vehicle crew and flight controller training. It has a reusable and flexible component and system design, and a Graphical User Interface (GUI), providing capability for rapid GUI-based simulator development, ease of maintenance, and associated cost savings. GUNNS is optimized for NASA's Trick simulation environment, but can be run independently of Trick.
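GUNNS's core idea, nodal analysis under the hydraulic-electric analogy, reduces a flow network to a linear system G p = q relating a conductance matrix, node potentials (pressure, voltage, or temperature), and injected flows. A minimal sketch of that solution step under stated assumptions (this is not GUNNS's actual API):

```python
import numpy as np

def solve_nodal_network(n_nodes, links, injections, ref=0):
    """Solve G @ p = q for node potentials, given links as
    (node_a, node_b, conductance) and flow injections per node.
    Node `ref` is held at the reference (ground) potential."""
    G = np.zeros((n_nodes, n_nodes))
    for a, b, g in links:
        # each link adds conductance on the diagonal, negative off-diagonal
        G[a, a] += g
        G[b, b] += g
        G[a, b] -= g
        G[b, a] -= g
    keep = [i for i in range(n_nodes) if i != ref]
    p = np.zeros(n_nodes)
    p[keep] = np.linalg.solve(G[np.ix_(keep, keep)],
                              np.asarray(injections, float)[keep])
    return p

# Two conductances of 2.0 in series; inject 1.0 unit of flow at node 2.
potentials = solve_nodal_network(3, [(0, 1, 2.0), (1, 2, 2.0)], [0.0, 0.0, 1.0])
```

The same assembly works whether the links are fluid conductors, resistors, or thermal paths, which is the appeal of the analogy for a single generic solver.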
Parallel simulation of tsunami inundation on a large-scale supercomputer
Oishi, Y.; Imamura, F.; Sugawara, D.
2013-12-01
An accurate prediction of tsunami inundation is important for disaster mitigation purposes. One approach is to approximate the tsunami wave source through an instant inversion analysis using real-time observation data (e.g., Tsushima et al., 2009) and then use the resulting wave source data in an instant tsunami inundation simulation. However, a bottleneck of this approach is the large computational cost of the non-linear inundation simulation; the computational power of recent massively parallel supercomputers is therefore needed to enable faster-than-real-time execution of a tsunami inundation simulation. Parallel computers have become approximately 1000 times faster in 10 years (www.top500.org), and so it is expected that very fast parallel computers will be more and more prevalent in the near future. Therefore, it is important to investigate how to efficiently conduct a tsunami simulation on parallel computers. In this study, we are targeting very fast tsunami inundation simulations on the K computer, currently the fastest Japanese supercomputer, which has a theoretical peak performance of 11.2 PFLOPS. One computing node of the K computer consists of 1 CPU with 8 cores that share memory, and the nodes are connected through a high-performance torus-mesh network. The K computer is designed for distributed-memory parallel computation, so we have developed a parallel tsunami model. Our model is based on the TUNAMI-N2 model of Tohoku University, which is based on a leap-frog finite difference method. A grid nesting scheme is employed to apply high-resolution grids only at the coastal regions. To balance the computation load of each CPU in the parallelization, CPUs are first allocated to each nested layer in proportion to the number of grid points of the nested layer. Using the CPUs allocated to each layer, 1-D domain decomposition is performed on each layer. In the parallel computation, three types of communication are necessary: (1) communication to adjacent neighbours for the
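The load-balancing step described above (CPUs allocated to each nested layer in proportion to its grid-point count) can be sketched with a largest-remainder apportionment; this is an illustration of the idea, not the authors' actual code:

```python
def allocate_cpus(grid_points, total_cpus):
    """Apportion CPUs to nested grid layers in proportion to their
    grid-point counts, using the largest-remainder method so the
    allocations sum exactly to total_cpus."""
    total = float(sum(grid_points))
    shares = [g / total * total_cpus for g in grid_points]
    alloc = [int(s) for s in shares]  # floor of each proportional share
    # hand remaining CPUs to the layers with the largest fractional remainders
    order = sorted(range(len(shares)), key=lambda i: shares[i] - alloc[i],
                   reverse=True)
    for i in order[: total_cpus - sum(alloc)]:
        alloc[i] += 1
    return alloc
```

Each layer would then be split among its allocated CPUs by the 1-D domain decomposition the abstract describes.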
Hybrid Reynolds-Averaged/Large Eddy Simulation of the Flow in a Model SCRamjet Cavity Flameholder
Baurle, R. A.
2016-01-01
Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. Experimental data available for this configuration include velocity statistics obtained from particle image velocimetry. Several turbulence models were used for the steady-state Reynolds-averaged simulations, which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged/large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to not only assess the performance of the hybrid Reynolds-averaged/large eddy simulation modeling approach in a flowfield of interest to the scramjet research community, but to also begin to understand how this capability can best be used to augment standard Reynolds-averaged simulations. The numerical errors were quantified for the steady-state simulations, and at least qualitatively assessed for the scale-resolving simulations, prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results displayed a high degree of variability when comparing the flameholder fuel distributions obtained from each turbulence model. This prompted the consideration of applying the higher-fidelity scale-resolving simulations as a surrogate "truth" model to calibrate the Reynolds-averaged closures in a non-reacting setting prior to their use for the combusting simulations. In general, the Reynolds-averaged velocity profile predictions at the lowest fueling level matched the particle imaging measurements almost as well as was observed for the non-reacting condition. However, the velocity field predictions proved to be more sensitive to the flameholder fueling rate than was indicated in the measurements.
Implementation of a Large Eddy Simulation Method Applied to Recirculating Flow in a Ventilated Room
DEFF Research Database (Denmark)
Davidson, Lars
In the present work Large Eddy Simulations are presented. The flow in a ventilated enclosure is studied. We use an explicit, two-step time-advancement scheme where the pressure is solved from a Poisson equation.
Some thoughts on the management of large, complex international space ventures
Lee, T. J.; Kutzer, Ants; Schneider, W. C.
1992-01-01
Management issues relevant to the development and deployment of large international space ventures are discussed with particular attention given to previous experience. Management approaches utilized in the past are labeled as either simple or complex, and signs of efficient management are examined. Simple approaches include those in which experiments and subsystems are developed for integration into spacecraft, and the Apollo-Soyuz Test Project is given as an example of a simple multinational approach. Complex approaches include those for ESA's Spacelab Project and the Space Station Freedom in which functional interfaces cross agency and political boundaries. It is concluded that individual elements of space programs should be managed by individual participating agencies, and overall configuration control is coordinated by level with a program director acting to manage overall objectives and project interfaces.
Understanding Large-scale Structure in the SSA22 Protocluster Region Using Cosmological Simulations
Topping, Michael W.; Shapley, Alice E.; Steidel, Charles C.; Naoz, Smadar; Primack, Joel R.
2018-01-01
We investigate the nature and evolution of large-scale structure within the SSA22 protocluster region at z = 3.09 using cosmological simulations. A redshift histogram constructed from current spectroscopic observations of the SSA22 protocluster reveals two separate peaks at z = 3.065 (blue) and z = 3.095 (red). Based on these data, we report updated overdensity and mass calculations for the SSA22 protocluster. We find δ_b,gal = 4.8 ± 1.8 and δ_r,gal = 9.5 ± 2.0 for the blue and red peaks, respectively, and δ_t,gal = 7.6 ± 1.4 for the entire region. These overdensities correspond to masses of M_b = (0.76 ± 0.17) × 10^15 h^-1 M_⊙, M_r = (2.15 ± 0.32) × 10^15 h^-1 M_⊙, and M_t = (3.19 ± 0.40) × 10^15 h^-1 M_⊙ for the blue, red, and total peaks, respectively. We use the Small MultiDark Planck (SMDPL) simulation to identify comparably massive z ∼ 3 protoclusters, and uncover the underlying structure and ultimate fate of the SSA22 protocluster. For this analysis, we construct mock redshift histograms for each simulated z ∼ 3 protocluster, quantitatively comparing them with the observed SSA22 data. We find that the observed double-peaked structure in the SSA22 redshift histogram corresponds not to a single coalescing cluster, but rather to the proximity of a ∼10^15 h^-1 M_⊙ protocluster and at least one >10^14 h^-1 M_⊙ cluster progenitor. Such associations in the SMDPL simulation are easily understood within the framework of hierarchical clustering of dark matter halos. We finally find that the opportunity to observe such a phenomenon is incredibly rare, with an occurrence rate of 7.4 h^3 Gpc^-3. Based on data obtained at the W.M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California, and the National Aeronautics and Space Administration, and was made possible by the generous financial support of the W.M. Keck Foundation.
Navigating the Problem Space: The Medium of Simulation Games in the Teaching of History
McCall, Jeremiah
2012-01-01
Simulation games can play a critical role in enabling students to navigate the problem spaces of the past while simultaneously critiquing the models designers offer to represent those problem spaces. There is much to be gained through their use. This includes rich opportunities for students to engage the past as independent historians; to consider…
Khosronejad, Ali; Sotiropoulos, Fotis; Stony Brook University Team
2016-11-01
We present coupled flow and morphodynamic simulations of extreme flooding in a 3 km long and 300 m wide reach of the Mississippi River in Minnesota, which includes three islands and hydraulic structures. We employ the large-eddy simulation (LES) and bed-morphodynamic modules of the VFS-Geophysics model to investigate the flow and bed evolution of the river during a 500 year flood. The coupling of the two modules is carried out via a fluid-structure interaction approach using a nested domain approach to enhance the resolution of bridge scour predictions. The geometrical data of the river, islands and structures are obtained from LiDAR, sub-aqueous sonar and in-situ surveying to construct a digital map of the river bathymetry. Our simulation results for the bed evolution of the river reveal complex sediment dynamics near the hydraulic structures. The numerically captured scour depth near some of the structures reaches a maximum of about 10 m. The data-driven simulation strategy we present in this work exemplifies a practical simulation-based engineering approach to investigate the resilience of infrastructure to extreme flood events in intricate field-scale riverine systems. This work was funded by a Grant from the Minnesota Dept. of Transportation.
Advanced UVOIR Mirror Technology Development (AMTD) for Very Large Space Telescopes
Stahl, H. Philip; Smith, W. Scott; Mosier, Gary; Abplanalp, Laura; Arnold, William
2014-01-01
The ASTRO2010 Decadal Survey stated that an advanced large-aperture ultraviolet, optical, near-infrared (UVOIR) telescope is required to enable the next generation of compelling astrophysics and exoplanet science, and that present technology is not mature enough to affordably build and launch any potential UVOIR mission concept. AMTD builds on the state of the art (SOA) defined by over 30 years of monolithic and segmented ground- and space-telescope mirror technology to mature six key technologies. AMTD is deliberately pursuing multiple design paths to provide the science community with options to enable either large-aperture monolithic or segmented mirrors, with clear engineering metrics traceable to science requirements.
Design and analysis of throttle orifice applying to small space with large pressure drop
International Nuclear Information System (INIS)
Li Yan; Lu Daogang; Zeng Xiaokang
2013-01-01
Throttle orifices are widely used in various pipe systems of nuclear power plants. Improper placement of orifices would aggravate the vibration of the pipe with strong noise, damaging the structure of the pipe and the integrity of the system. In this paper, the effects of orifice diameter, thickness, eccentric distance and chamfering on the throttling are analyzed using CFD software. Based on that, we propose that the throttle orifices suited to a small space with a large pressure drop are multiple eccentric orifices. The results show that multiple eccentric orifices can effectively restrain cavitation and flash distillation while generating a large pressure drop. (authors)
Yang, Eui-Hyeok; Shcheglov, Kirill
2002-01-01
Future concepts of ultra large space telescopes include segmented silicon mirrors and inflatable polymer mirrors. Primary mirrors for these systems cannot meet optical surface figure requirements and are likely to generate over several microns of wavefront error. In order to correct for these large wavefront errors, high-stroke optical-quality deformable mirrors are required. JPL has recently developed a new technology for transferring an entire wafer-level mirror membrane from one substrate to another. A thin membrane, 100 mm in diameter, has been successfully transferred without using adhesives or polymers. The measured peak-to-valley surface error of a transferred and patterned membrane (1 mm x 1 mm x 0.016 mm) is only 9 nm. The mirror element actuation principle is based on a piezoelectric unimorph. A voltage applied to the piezoelectric layer induces stress in the longitudinal direction, causing the film to deform and pull on the mirror connected to it. The advantage of this approach is that the small longitudinal strains obtainable from a piezoelectric material at modest voltages are thus translated into large vertical displacements. Modeling is performed for a unimorph membrane consisting of a clamped rectangular membrane with a PZT layer with variable dimensions. The membrane transfer technology is combined with the piezoelectric unimorph actuator concept to constitute a compact deformable mirror device with large-stroke actuation of a continuous mirror membrane, resulting in compact AO systems for use in ultra large space telescopes.
Phases of a stack of membranes in a large number of dimensions of configuration space
Borelli, M. E.; Kleinert, H.
2001-05-01
The phase diagram of a stack of tensionless membranes with nonlinear curvature energy and vertical harmonic interaction is calculated exactly in a large number of dimensions of configuration space. At low temperatures, the system forms a lamellar phase with spontaneously broken translational symmetry in the vertical direction. At a critical temperature, the stack disorders vertically in a meltinglike transition. The critical temperature is determined as a function of the interlayer separation l.
Evaluation of linear DC motor actuators for control of large space structures
Ide, Eric Nelson
1988-01-01
This thesis examines the use of a linear DC motor as a proof mass actuator for the control of large space structures. A model for the actuator, including the current and force compensation used, is derived. Because of the force compensation, the actuator is unstable when placed on a structure. Relative position feedback is used for actuator stabilization. This method of compensation couples the actuator to the mast in a feedback configuration. Three compensator designs are prop...
Matsuda, K.; Onishi, R.; Takahashi, K.
2017-12-01
Urban high temperatures due to the combined influence of global warming and urban heat islands increase the risk of heat stroke. Greenery is one possible countermeasure for mitigating the heat environment, since the transpiration and shading effect of trees can reduce the air temperature and the radiative heat flux. In order to formulate effective measures, it is important to estimate the influence of the greenery on the heat stroke risk. In this study, we have developed a tree-crown-resolving large-eddy simulation (LES) model that is coupled with a three-dimensional radiative transfer (3DRT) model. The Multi-Scale Simulator for the Geoenvironment (MSSG) is used for performing building- and tree-crown-resolving LES. The 3DRT model is implemented in the MSSG so that the 3DRT is calculated repeatedly during the time integration of the LES. We have confirmed that the computational time for the 3DRT model is negligibly small compared with that for the LES, and that the accuracy of the 3DRT model is sufficiently high to evaluate the radiative heat flux at the pedestrian level. The present model is applied to the analysis of the heat environment in an actual urban area around the Tokyo Bay area, covering 8 km × 8 km with a 5-m grid mesh, in order to confirm its feasibility. The results show that the wet-bulb globe temperature (WBGT), which is an indicator of the heat stroke risk, is predicted with sufficiently high accuracy to evaluate the influence of tree crowns on the heat environment. In addition, by comparing with a case without the greenery in the Tokyo Bay area, we have confirmed that the greenery increases the low-WBGT areas in major pedestrian spaces by a factor of 3.4. This indicates that the present model can predict the greenery effect on the urban heat environment quantitatively.
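The WBGT indicator used above combines wet-bulb, globe, and dry-bulb temperatures. The standard outdoor weighting (ISO 7243) is a plausible post-processing step for such a simulation, sketched here as an assumption; the abstract does not state which WBGT variant the authors compute:

```python
def wbgt_outdoor(t_wet_bulb, t_globe, t_air):
    """Outdoor wet-bulb globe temperature (deg C), ISO 7243 weighting:
    0.7 * natural wet-bulb + 0.2 * globe + 0.1 * dry-bulb (air)."""
    return 0.7 * t_wet_bulb + 0.2 * t_globe + 0.1 * t_air
```

Note how strongly the globe temperature term responds to shading: reducing the radiative load on the globe thermometer lowers WBGT even at a fixed air temperature, which is the mechanism by which tree crowns enlarge the low-WBGT pedestrian areas.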
Large-scale micromagnetics simulations with dipolar interaction using all-to-all communications
Directory of Open Access Journals (Sweden)
Hiroshi Tsukahara
2016-05-01
We implement on our micromagnetics simulator low-complexity parallel fast-Fourier-transform algorithms, which reduce the frequency of all-to-all communications from six to two times. Almost all the computation time of a micromagnetics simulation is taken up by the calculation of the magnetostatic field, which can be calculated using the fast Fourier transform method. The results show that the simulation time is decreased with good scalability, even when the micromagnetics simulation is performed using 8192 physical cores. This high parallelization effect enables large-scale micromagnetics simulations using over one billion to be performed. Because massively parallel computing is needed to simulate the magnetization dynamics of real permanent magnets composed of many micron-sized grains, it is expected that our simulator will reveal how magnetization dynamics influences the coercivity of the permanent magnet.
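The magnetostatic-field step that dominates the run time is a long-range convolution evaluated with FFTs. A minimal 1-D sketch of the zero-padded FFT convolution is given below; the real simulator works in 3-D with distributed transforms, which is where the all-to-all communications arise:

```python
import numpy as np

def field_via_fft(moments, kernel):
    """Evaluate the long-range field H[i] = sum_j K[i-j] * m[j] on a 1-D
    grid by zero-padding both arrays to length 2n, so the circular FFT
    convolution reproduces the linear (non-periodic) one."""
    n = len(moments)
    m_hat = np.fft.rfft(np.concatenate([moments, np.zeros(n)]))
    k_hat = np.fft.rfft(np.concatenate([kernel, np.zeros(n)]))
    # pointwise product in Fourier space, then transform back and truncate
    return np.fft.irfft(m_hat * k_hat, 2 * n)[:n]
```

This turns an O(n^2) direct sum into O(n log n) transforms, which is why the dipolar interaction is tractable at all at billion-cell scales.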
Soni, Rahul Kumar; De, Ashoke
2018-05-01
The present study primarily focuses on the effect of the jet spacing and strut geometry on the evolution and structure of the large-scale vortices, which play a key role in the mixing characteristics of turbulent supersonic flows. Numerically simulated results corresponding to varying parameters such as strut geometry and jet spacing (Xn = nDj such that n = 2, 3, and 5) for a square jet of height Dj = 0.6 mm are presented in the current study. The work also investigates the presence of local quasi-two-dimensionality for the X2 (2Dj) jet spacing; the same does not hold for larger jet spacings. Further, the tapered strut (TS) section is modified into the straight strut (SS) for investigation, where a remarkable difference in flow physics is revealed between the two configurations for similar jet spacing (X2: 2Dj). The instantaneous density and vorticity contours reveal structures of varying scales undergoing different evolution for the different configurations. The effect of local spanwise rollers is clearly manifested in the mixing efficiency and the jet spreading rate. The SS configuration exhibits excellent near-field mixing behavior amongst all the arrangements. However, in the case of the TS cases, only the X2 (2Dj) configuration performs better, due to the presence of local spanwise rollers. The qualitative and quantitative analysis reveals that near-field mixing is strongly affected by the two-dimensional rollers, while the early onset of the wake mode is another crucial parameter for improved mixing. Modal decomposition performed for the SS arrangement sheds light onto the spatial and temporal coherence of the structures, where the most dominant structures are found to be the von Kármán street vortices in the wake region.
arXiv Stochastic locality and master-field simulations of very large lattices
Lüscher, Martin
2018-01-01
In lattice QCD and other field theories with a mass gap, the field variables in distant regions of a physically large lattice are only weakly correlated. Accurate stochastic estimates of the expectation values of local observables may therefore be obtained from a single representative field. Such master-field simulations potentially allow very large lattices to be simulated, but require various conceptual and technical issues to be addressed. In this talk, an introduction to the subject is provided and some encouraging results of master-field simulations of the SU(3) gauge theory are reported.
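The master-field idea, a volume average over one large field with an error bar obtained from the decay of correlations, can be sketched in 1-D. This is an illustration of the statistical principle under stated assumptions, not Lüscher's actual implementation:

```python
import numpy as np

def master_field_estimate(field, max_lag):
    """Estimate <O> from a single 1-D field by a volume average. The error
    estimate uses the autocovariance summed up to max_lag, exploiting
    stochastic locality: correlations decay with distance, so widely
    separated points act as nearly independent samples."""
    x = np.asarray(field, dtype=float)
    n = len(x)
    mean = x.mean()
    d = x - mean
    var = d @ d / n
    # integrated autocovariance, truncated at max_lag beyond which
    # correlations are assumed negligible (the mass gap guarantees decay)
    cov_sum = var + 2.0 * sum((d[:-t] @ d[t:]) / n for t in range(1, max_lag + 1))
    return mean, np.sqrt(max(cov_sum, 0.0) / n)
```

For an uncorrelated field the error reduces to the familiar sqrt(var/n); a finite correlation length inflates it by the integrated autocovariance, which is why a single sufficiently large lattice can replace an ensemble of configurations.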
Development of the simulation package 'ELSES' for extra-large-scale electronic structure calculation
International Nuclear Information System (INIS)
Hoshi, T; Fujiwara, T
2009-01-01
An early-stage version of the simulation package 'ELSES' (extra-large-scale electronic structure calculation) is developed for simulating the electronic structure and dynamics of large systems, particularly nanometer-scale and ten-nanometer-scale systems (see www.elses.jp). Input and output files are written in the extensible markup language (XML) style for general users. Related pre-/post-simulation tools are also available. A practical workflow and an example are described. A test calculation for the GaAs bulk system is shown, to demonstrate that the present code can handle systems with more than one atom species. Several future aspects are also discussed.
Baurle, R. A.
2015-01-01
Steady-state and scale-resolving simulations have been performed for flow in and around a model scramjet combustor flameholder. The cases simulated corresponded to those used to examine this flowfield experimentally using particle image velocimetry. A variety of turbulence models were used for the steady-state Reynolds-averaged simulations which included both linear and non-linear eddy viscosity models. The scale-resolving simulations used a hybrid Reynolds-averaged / large eddy simulation strategy that is designed to be a large eddy simulation everywhere except in the inner portion (log layer and below) of the boundary layer. Hence, this formulation can be regarded as a wall-modeled large eddy simulation. This effort was undertaken to formally assess the performance of the hybrid Reynolds-averaged / large eddy simulation modeling approach in a flowfield of interest to the scramjet research community. The numerical errors were quantified for both the steady-state and scale-resolving simulations prior to making any claims of predictive accuracy relative to the measurements. The steady-state Reynolds-averaged results showed a high degree of variability when comparing the predictions obtained from each turbulence model, with the non-linear eddy viscosity model (an explicit algebraic stress model) providing the most accurate prediction of the measured values. The hybrid Reynolds-averaged/large eddy simulation results were carefully scrutinized to ensure that even the coarsest grid had an acceptable level of resolution for large eddy simulation, and that the time-averaged statistics were acceptably accurate. The autocorrelation and its Fourier transform were the primary tools used for this assessment. The statistics extracted from the hybrid simulation strategy proved to be more accurate than the Reynolds-averaged results obtained using the linear eddy viscosity models. However, there was no predictive improvement noted over the results obtained from the explicit
Jumbo Space Environment Simulation and Spacecraft Charging Chamber Characterization
2015-04-09
probes for Jumbo. Both probes are produced by Trek Inc. Trek probe model 370 is capable of -3 to 3 kV and has an extremely fast, 50 µs/kV response to ... changing surface potentials. Trek probe 341B is capable of -20 to 20 kV with a 200 µs/kV response time. During our charging experiments the probe sits ... unlimited. 12 REFERENCES [1] R. D. Leach and M. B. Alexander, "Failures and anomalies attributed to spacecraft charging," NASA RP-1375, Marshall Space
Simulating the Effect of Space Vehicle Environments on Directional Solidification of a Binary Alloy
Westra, D. G.; Heinrich, J. C.; Poirier, D. R.
2003-01-01
Space microgravity missions are designed to provide a microgravity environment for scientific experiments, but these missions cannot provide a perfect environment, due to vibrations caused by crew activity, on-board experiments, support systems (pumps, fans, etc.), periodic orbital maneuvers, and water dumps. Therefore, it is necessary to predict the impact of these vibrations on space experiments, prior to performing them. Simulations were conducted to study the effect of the vibrations on the directional solidification of a dendritic alloy. Finite element calculations were done with a simulator based on a continuum model of dendritic solidification, using the Fractional Step Method (FSM). The FSM splits the solution of the momentum equation into two steps: the viscous intermediate step, which does not enforce continuity; and the inviscid projection step, which calculates the pressure and enforces continuity. The FSM provides significant computational benefits for predicting flows in a directionally solidified alloy, compared to other methods presently employed, because of the efficiency gains in the uncoupled solution of velocity and pressure. A numerical difficulty, present whether finite elements or finite differences are used, arises when the interdendritic liquid reaches the eutectic temperature and concentration. When a node reaches the eutectic temperature, it is assumed that the solidification of the eutectic liquid continues at constant temperature until all the eutectic is solidified. With this approach, solidification is not achieved continuously across an element; rather, the element is not considered solidified until the eutectic isotherm overtakes the top nodes. For microgravity simulations, where the convection is driven by shrinkage, this introduces large variations in the fluid velocity. When the eutectic isotherm reaches a node, all the eutectic must be solidified in a short period, causing an abrupt increase in velocity. To overcome this difficulty, we employed a scheme to numerically predict a more accurate value
Validation of Varian TrueBeam electron phase–spaces for Monte Carlo simulation of MLC-shaped fields
International Nuclear Information System (INIS)
Lloyd, Samantha A. M.; Gagne, Isabelle M.; Zavgorodni, Sergei; Bazalova-Carter, Magdalena
2016-01-01
Purpose: This work evaluates Varian’s electron phase–space sources for Monte Carlo simulation of the TrueBeam for modulated electron radiation therapy (MERT) and combined, modulated photon and electron radiation therapy (MPERT) where fields are shaped by the photon multileaf collimator (MLC) and delivered at 70 cm SSD. Methods: Monte Carlo simulations performed with EGSnrc-based BEAMnrc/DOSXYZnrc and PENELOPE-based PRIMO are compared against diode measurements for 5 × 5, 10 × 10, and 20 × 20 cm² MLC-shaped fields delivered with 6, 12, and 20 MeV electrons at 70 cm SSD (jaws set to 40 × 40 cm²). Depth dose curves and profiles are examined. In addition, EGSnrc-based simulations of relative output as a function of MLC-field size and jaw position are compared against ion chamber measurements for MLC-shaped fields between 3 × 3 and 25 × 25 cm² and jaw positions that range from the MLC-field size to 40 × 40 cm². Results: Percent depth dose curves generated by BEAMnrc/DOSXYZnrc and PRIMO agree with measurement within 2%, 2 mm except for PRIMO’s 12 MeV, 20 × 20 cm² field, where 90% of dose points agree within 2%, 2 mm. Without the distance to agreement, differences between measurement and simulation are as large as 7.3%. Characterization of simulated dose parameters such as FWHM, penumbra width, and depths of 90%, 80%, 50%, and 20% dose agree within 2 mm of measurement for all fields except for the FWHM of the 6 MeV, 20 × 20 cm² field, which falls within 2 mm distance to agreement. Differences between simulation and measurement exist in the profile shoulders and penumbra tails, in particular for 10 × 10 and 20 × 20 cm² fields of 20 MeV electrons, where both sets of simulated data fall short of measurement by as much as 3.5%. BEAMnrc/DOSXYZnrc simulated outputs agree with measurement within 2.3% except for 6 MeV MLC-shaped fields. Discrepancies here are as great as 5.5%. Conclusions: TrueBeam electron phase–spaces available from Varian have been validated for Monte Carlo simulation of MLC-shaped electron fields.
Validation of Varian TrueBeam electron phase–spaces for Monte Carlo simulation of MLC-shaped fields
Energy Technology Data Exchange (ETDEWEB)
Lloyd, Samantha A. M. [Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8P 3P6 5C2 (Canada); Gagne, Isabelle M., E-mail: imgagne@bccancer.bc.ca; Zavgorodni, Sergei [Department of Medical Physics, BC Cancer Agency–Vancouver Island Centre, Victoria, British Columbia V8R 6V5, Canada and Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 3P6 5C2 (Canada); Bazalova-Carter, Magdalena [Department of Physics and Astronomy, University of Victoria, Victoria, British Columbia V8W 3P6 5C2 (Canada)
2016-06-15
Purpose: This work evaluates Varian’s electron phase–space sources for Monte Carlo simulation of the TrueBeam for modulated electron radiation therapy (MERT) and combined, modulated photon and electron radiation therapy (MPERT) where fields are shaped by the photon multileaf collimator (MLC) and delivered at 70 cm SSD. Methods: Monte Carlo simulations performed with EGSnrc-based BEAMnrc/DOSXYZnrc and PENELOPE-based PRIMO are compared against diode measurements for 5 × 5, 10 × 10, and 20 × 20 cm² MLC-shaped fields delivered with 6, 12, and 20 MeV electrons at 70 cm SSD (jaws set to 40 × 40 cm²). Depth dose curves and profiles are examined. In addition, EGSnrc-based simulations of relative output as a function of MLC-field size and jaw position are compared against ion chamber measurements for MLC-shaped fields between 3 × 3 and 25 × 25 cm² and jaw positions that range from the MLC-field size to 40 × 40 cm². Results: Percent depth dose curves generated by BEAMnrc/DOSXYZnrc and PRIMO agree with measurement within 2%, 2 mm except for PRIMO’s 12 MeV, 20 × 20 cm² field, where 90% of dose points agree within 2%, 2 mm. Without the distance to agreement, differences between measurement and simulation are as large as 7.3%. Characterization of simulated dose parameters such as FWHM, penumbra width, and depths of 90%, 80%, 50%, and 20% dose agree within 2 mm of measurement for all fields except for the FWHM of the 6 MeV, 20 × 20 cm² field, which falls within 2 mm distance to agreement. Differences between simulation and measurement exist in the profile shoulders and penumbra tails, in particular for 10 × 10 and 20 × 20 cm² fields of 20 MeV electrons, where both sets of simulated data fall short of measurement by as much as 3.5%. BEAMnrc/DOSXYZnrc simulated outputs agree with measurement within 2.3% except for 6 MeV MLC-shaped fields. Discrepancies here are as great as 5.5%. Conclusions: TrueBeam electron phase–spaces available from Varian have been validated for Monte Carlo simulation of MLC-shaped electron fields.
Simulation of space charge effects and transition crossing in the Fermilab Booster
International Nuclear Information System (INIS)
Lucas, P.; MacLachlan, J.
1987-03-01
The longitudinal phase space program ESME, modified for space charge and wall impedance effects, has been used to simulate transition crossing in the Fermilab Booster. The simulations yield results in reasonable quantitative agreement with measured parameters. They further indicate that a transition jump scheme currently under construction will significantly reduce emittance growth, while attempts to alter machine impedance are less obviously beneficial. In addition to presenting results, this paper points out a serious difficulty, related to statistical fluctuations, in the space charge calculation. False indications of emittance growth can appear if care is not taken to minimize this problem
Realizability conditions for the turbulent stress tensor in large-eddy simulation
Vreman, A.W.; Geurts, Bernardus J.; Kuerten, Johannes G.M.
1994-01-01
The turbulent stress tensor in large-eddy simulation is examined from a theoretical point of view. Realizability conditions for the components of this tensor are derived, which hold if and only if the filter function is positive. The spectral cut-off, one of the filters frequently used in large-eddy simulation, is not positive and therefore does not guarantee realizability of the modeled stresses.
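The realizability conditions on the turbulent stress tensor amount to symmetric positive semi-definiteness of its components (non-negative diagonal entries, the Schwarz inequality on off-diagonal entries, and a non-negative determinant). The following is our own numerical illustration of the check, not code from the paper:

```python
import numpy as np

def is_realizable(tau, tol=1e-12):
    """A modeled turbulent stress tensor is realizable when it is symmetric
    positive semi-definite, which bundles the component conditions
    tau_ii >= 0, tau_ij^2 <= tau_ii * tau_jj, and det(tau) >= 0."""
    tau = np.asarray(tau, dtype=float)
    if not np.allclose(tau, tau.T):
        return False
    # Eigenvalues of a symmetric matrix are real; PSD means all >= 0.
    return bool(np.all(np.linalg.eigvalsh(tau) >= -tol))

good = np.array([[2.0, 1.0, 0.0],
                 [1.0, 2.0, 0.0],
                 [0.0, 0.0, 1.0]])
bad = np.array([[1.0, 2.0, 0.0],   # off-diagonal too large: 2^2 > 1*1
                [2.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
print(is_realizable(good), is_realizable(bad))  # True False
```

A subgrid model that violates these conditions can, for example, produce a negative generalized turbulent kinetic energy, which is unphysical.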
Sensitivity of the scale partition for variational multiscale large-eddy simulation of channel flow
Holmen, J.; Hughes, T.J.R.; Oberai, A.A.; Wells, G.N.
2004-01-01
The variational multiscale method has been shown to perform well for large-eddy simulation (LES) of turbulent flows. The method relies upon a partition of the resolved velocity field into large- and small-scale components. The subgrid model then acts only on the small scales of motion, unlike
Directory of Open Access Journals (Sweden)
Guangtao Zhang
2015-01-01
In the field of hydropower station transient process simulation (HSTPS), the characteristic graph-based iterative hydroturbine model (CGIHM) has been widely used when large-disturbance hydroturbine modeling is involved. However, with this model, iteration must be used to calculate speed and pressure, and slow convergence or non-convergence may be encountered for reasons such as a special characteristic graph profile, an inappropriate iterative algorithm, or an inappropriate interpolation algorithm. Other conventional large-disturbance hydroturbine models also have disadvantages and are difficult to use widely in HSTPS. Therefore, to obtain an accurate simulation result, a simple method for hydroturbine modeling is proposed. By this method, both the initial operating point and the transfer coefficients of the linear hydroturbine model keep changing during simulation. Hence, it can reflect the nonlinearity of the hydroturbine and be used for Francis turbine simulation under large-disturbance conditions. To validate the proposed method, both large-disturbance and small-disturbance simulations of a single hydrounit supplying a resistive, isolated load were conducted. The simulation results were shown to be consistent with those of field tests. Consequently, the proposed method is an attractive option for HSTPS involving Francis turbine modeling under large-disturbance conditions.
Kijvikai, Kittinut; Laguna, M. Pilar; de la Rosette, Jean
2006-01-01
We describe our technique for large renal vein control in the limited dissected space during laparoscopic nephrectomy. This technique is a simple, inexpensive and reliable method, especially for large and short renal vein ligation
Behavior of ionic conducting IPN actuators in simulated space conditions
Fannir, Adelyne; Plesse, Cédric; Nguyen, Giao T. M.; Laurent, Elisabeth; Cadiergues, Laurent; Vidal, Frédéric
2016-04-01
The presentation focuses on the performance of flexible all-polymer electroactive actuators under space-hazardous environmental factors in laboratory conditions. These bending actuators are based on high molecular weight nitrile butadiene rubber (NBR), a poly(ethylene oxide) (PEO) derivative, and poly(3,4-ethylenedioxythiophene) (PEDOT). The electroactive PEDOT is embedded within the PEO/NBR membrane, which is subsequently swollen with an ionic liquid as electrolyte. Actuators have been submitted to thermal cycling tests between -25 and 60 °C under vacuum (2.4×10⁻⁸ mbar) and to ionizing gamma radiation at a level of 210 rad/h for 100 h. Actuators have been characterized before and after this simulated space-environment ageing. In particular, the viscoelastic properties and mechanical resistance of the materials have been determined by dynamic mechanical analysis and tensile tests. The evolution of the actuation properties, such as the strain and the output force, has been characterized as well. The long-term vacuum exposure, the freezing temperatures and the gamma radiation do not significantly affect the thermomechanical properties of the conducting IPN actuators. Only a slight decrease in actuation performance has been observed.
International Nuclear Information System (INIS)
Norman, A.; Boyd, J.; Davies, G.; Flumerfelt, E.; Herner, K.; Mayer, N.; Mhashilhar, P.; Tamsett, M.; Timm, S.
2015-01-01
Modern long-baseline neutrino experiments, like the NOvA experiment at Fermilab, require large-scale, compute-intensive simulations of their neutrino beam fluxes and of backgrounds induced by cosmic rays. The amount of simulation required to keep the systematic uncertainties in the simulation from dominating the final physics results is often 10x to 100x that of the actual detector exposure. For the first physics results from NOvA this has meant the simulation of more than 2 billion cosmic ray events in the far detector and more than 200 million NuMI beam spill simulations. Performing simulation at these high statistics levels has been made possible for NOvA through the use of the Open Science Grid and through large-scale runs on commercial clouds like Amazon EC2. We detail the challenges in performing large-scale simulation in these environments and how the computing infrastructure for the NOvA experiment has been adapted to seamlessly support the running of different simulation and data processing tasks on these resources. (paper)
Prasad, K.
2017-12-01
Atmospheric transport is usually performed with weather models, e.g., the Weather Research and Forecasting (WRF) model, which employs a parameterized turbulence model and does not resolve the fine-scale dynamics generated by the flow around the buildings and features comprising a large city. The NIST Fire Dynamics Simulator (FDS) is a computational fluid dynamics model that utilizes large eddy simulation methods to model flow around buildings at length scales much smaller than is practical with models like WRF. FDS has the potential to evaluate the impact of complex topography on near-field dispersion and mixing that is difficult to simulate with a mesoscale atmospheric model. A methodology has been developed to couple the FDS model with WRF mesoscale transport models. The coupling is based on nudging the FDS flow field towards that computed by WRF, and is currently limited to one-way coupling performed in an off-line mode. This approach allows the FDS model to operate as a sub-grid scale model within a WRF simulation. To test and validate the coupled FDS-WRF model, the methane leak from the Aliso Canyon underground storage facility was simulated. Large eddy simulations were performed over the complex topography of various natural gas storage facilities, including Aliso Canyon, Honor Rancho and MacDonald Island, at 10 m horizontal and vertical resolution. The goals of these simulations included improving and validating transport models as well as testing leak hypotheses. Forward simulation results were compared with aircraft- and tower-based in-situ measurements as well as methane plumes observed using the NASA Airborne Visible InfraRed Imaging Spectrometer (AVIRIS) and the next-generation instrument AVIRIS-NG. Comparison of simulation results with measurement data demonstrates the capability of the coupled FDS-WRF models to accurately simulate the transport and dispersion of methane plumes over urban domains. Simulated integrated methane enhancements will be presented and
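The nudging coupling described above can be sketched in a few lines. The Newtonian-relaxation form below, with a relaxation time tau, is a common formulation and is our assumption for illustration, not the exact NIST implementation:

```python
import numpy as np

def nudge(u_fds, u_wrf, dt, tau):
    """One explicit Euler step of Newtonian relaxation (nudging):
        du/dt = -(u_fds - u_wrf) / tau
    pulling the fine-scale (FDS) field toward the mesoscale (WRF) field.
    tau controls how strongly the LES is constrained by the driver model."""
    return u_fds + dt * (u_wrf - u_fds) / tau

u = np.array([0.0, 2.0, 4.0])       # FDS cell velocities (m/s)
target = np.array([1.0, 1.0, 1.0])  # co-located, interpolated WRF velocities (m/s)
for _ in range(1000):
    u = nudge(u, target, dt=1.0, tau=50.0)
print(u)  # relaxed essentially onto the WRF field
```

Because the coupling is one-way and off-line, the WRF fields act only as a time-dependent target; the FDS turbulence resolved around buildings is free to develop between nudging updates.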
International Nuclear Information System (INIS)
Liu, H.
1996-01-01
Computer simulations using the multi-particle code PARMELA with a three-dimensional point-by-point space charge algorithm have turned out to be very helpful in supporting injector commissioning and operations at Thomas Jefferson National Accelerator Facility (Jefferson Lab, formerly called CEBAF). However, this algorithm, which defines a typical N² problem in CPU-time scaling, is very time-consuming when N, the number of macro-particles, is large. Therefore, it is attractive to use massively parallel processors (MPPs) to speed up the simulations. Motivated by this, the authors modified the space charge subroutine to use the MPPs of the Cray T3D. The techniques used to parallelize and optimize the code on the T3D are discussed in this paper. The performance of the code on the T3D is examined in comparison with a Parallel Vector Processing supercomputer, the Cray C90, and an HP 735/15 high-end workstation
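The point-by-point space charge algorithm is an O(N²) pairwise sum over macro-particles, which is exactly why it both dominates CPU time and parallelizes well (each particle's field is an independent reduction). A minimal serial sketch, not the PARMELA code; the unit constants and the softening parameter are illustrative assumptions:

```python
import numpy as np

def space_charge_field(positions, charges, eps=1e-9):
    """Direct O(N^2) pairwise Coulomb field at each macro-particle
    (unit constants; eps softens close encounters between macro-particles)."""
    n = len(positions)
    E = np.zeros_like(positions)
    for i in range(n):
        r = positions[i] - positions           # vectors from all particles to i
        d2 = np.sum(r * r, axis=1) + eps       # softened squared distances
        d2[i] = np.inf                         # exclude self-interaction
        E[i] = np.sum(charges[:, None] * r / d2[:, None] ** 1.5, axis=0)
    return E

pos = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
q = np.ones(2)
print(space_charge_field(pos, q))  # equal and opposite fields on a symmetric pair
```

On an MPP the outer loop over `i` is simply divided among processors, each computing the field for its share of the particles against the full (broadcast) position array, which is essentially the strategy that makes the T3D port attractive.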
Riquelme, Mario; Quataert, Eliot; Verscharen, Daniel
2018-02-01
We use particle-in-cell (PIC) simulations of a collisionless, electron-ion plasma with a decreasing background magnetic field, B, to study the effect of velocity-space instabilities on the viscous heating and thermal conduction of the plasma. If |B| decreases, the adiabatic invariance of the magnetic moment gives rise to pressure anisotropies with p∥,j > p⊥,j (p∥,j and p⊥,j represent the pressure of species j (electron or ion) parallel and perpendicular to B). Linear theory indicates that, for sufficiently large anisotropies, different velocity-space instabilities can be triggered. These instabilities in principle have the ability to pitch-angle scatter the particles, limiting the growth of the anisotropies. Our simulations focus on the nonlinear, saturated regime of the instabilities. This is done through the permanent decrease of |B| by an imposed plasma shear. We show that, in the regime 2 ≲ β_j ≲ 20 (β_j ≡ 8πp_j/|B|²), the saturated ion and electron pressure anisotropies are controlled by the combined effect of the oblique ion firehose and the fast magnetosonic/whistler instabilities. These instabilities grow preferentially on the scale of the ion Larmor radius, and make Δp_e/p∥,e ≈ Δp_i/p∥,i (where Δp_j = p⊥,j − p∥,j). We also quantify the thermal conduction of the plasma by directly calculating the mean free path of electrons, λ_e, along the mean magnetic field, finding that λ_e depends strongly on whether |B| decreases or increases. Our results can be applied in studies of low-collisionality plasmas such as the solar wind, the intracluster medium, and some accretion disks around black holes.
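For illustration (our sketch, not from the paper), the parallel plasma beta and the standard fluid firehose threshold from textbook linear theory can be evaluated directly. The threshold form p⊥/p∥ > 1 − 2/β∥ is an assumption taken from that linear theory, not a result quoted by the abstract:

```python
import numpy as np

def beta_parallel(p_par, B):
    """Parallel plasma beta, beta_par = 8*pi*p_par / |B|^2 (Gaussian units)."""
    return 8.0 * np.pi * p_par / np.dot(B, B)

def firehose_stable(p_perp, p_par, B):
    """Approximate fluid firehose criterion: the plasma is stable while
    p_perp/p_par > 1 - 2/beta_par; crossing it allows the instability to
    grow and pitch-angle scatter particles, capping the anisotropy."""
    return p_perp / p_par > 1.0 - 2.0 / beta_parallel(p_par, B)

# Example: |B|^2 = 8*pi so that beta_par equals p_par numerically
B = np.array([np.sqrt(8.0 * np.pi), 0.0, 0.0])
print(firehose_stable(3.0, 4.0, B), firehose_stable(1.0, 4.0, B))  # True False
```

In the shear-driven runs described above it is the oblique firehose and fast magnetosonic/whistler branches, rather than this simple parallel criterion, that set the saturated anisotropy, but the scaling with 1/β is the same qualitative behavior.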
An Engineering Design Reference Mission for a Future Large-Aperture UVOIR Space Observatory
Thronson, Harley A.; Bolcar, Matthew R.; Clampin, Mark; Crooke, Julie A.; Redding, David; Rioux, Norman; Stahl, H. Philip
2016-01-01
From the 2010 NRC Decadal Survey and the NASA Thirty-Year Roadmap, Enduring Quests, Daring Visions, to the recent AURA report, From Cosmic Birth to Living Earths, multiple community assessments have recommended development of a large-aperture UVOIR space observatory capable of achieving a broad range of compelling scientific goals. Of these priority science goals, the most technically challenging is the search for spectroscopic biomarkers in the atmospheres of exoplanets in the solar neighborhood. Here we present an engineering design reference mission (EDRM) for the Advanced Technology Large-Aperture Space Telescope (ATLAST), which was conceived from the start as capable of breakthrough science paired with an emphasis on cost control and cost effectiveness. An EDRM allows the engineering design trade space to be explored in depth to determine what are the most demanding requirements and where there are opportunities for margin against requirements. Our joint NASA GSFC/JPL/MSFC/STScI study team has used community-provided science goals to derive mission needs, requirements, and candidate mission architectures for a future large-aperture, non-cryogenic UVOIR space observatory. The ATLAST observatory is designed to operate at a Sun-Earth L2 orbit, which provides a stable thermal environment and excellent field of regard. Our reference designs have emphasized a serviceable 36-segment 9.2 m aperture telescope that stows within a five-meter diameter launch vehicle fairing. As part of our cost-management effort, this particular reference mission builds upon the engineering design for JWST. Moreover, it is scalable to a variety of launch vehicle fairings. Performance needs developed under the study are traceable to a variety of additional reference designs, including options for a monolithic primary mirror.
Simulation and analysis of tape spring for deployed space structures
Chang, Wei; Cao, DongJing; Lian, MinLong
2018-03-01
The tape spring has the configuration of an open cylindrical shell, and the mechanical properties of the structure are significantly affected by changes in its geometrical parameters, yet there are few studies on the influence of these parameters on the mechanical properties of the tape spring. The bending process of a single tape spring was simulated with simulation software. The variations of the critical moment, unfolding moment, and maximum strain energy during bending were investigated, and the effects of different section radius angles, thicknesses, and lengths on the driving capability of the single tape spring were studied. Results show that the driving capability and disturbance-resisting capacity grow with increasing section radius angle during bending of the single tape spring. On the other hand, these capabilities decrease with increasing length of the single tape spring. Finally, the driving capability and disturbance-resisting capacity grow with increasing thickness. The research has reference value for improving the kinematic accuracy and reliability of deployable structures.
International Nuclear Information System (INIS)
BEEBE - WANG, J.; LUCCIO, A.U.; D IMPERIO, N.; MACHIDA, S.
2002-01-01
Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problems, the most effective way of investigating its effects is by computer simulation. In recent years, many space charge simulation methods have been developed and incorporated in various 2D or 3D multi-particle-tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed
Energy Technology Data Exchange (ETDEWEB)
BEEBE - WANG,J.; LUCCIO,A.U.; D IMPERIO,N.; MACHIDA,S.
2002-06-03
Space charge in high intensity beams is an important issue in accelerator physics. Due to the complexity of the problems, the most effective way of investigating its effects is by computer simulation. In recent years, many space charge simulation methods have been developed and incorporated in various 2D or 3D multi-particle-tracking codes. It has become necessary to benchmark these methods against each other, and against experimental results. As part of a global effort, we present our initial comparison of the space charge methods incorporated in the simulation codes ORBIT++, ORBIT and SIMPSONS. In this paper, the methods included in these codes are overviewed. The simulation results are presented and compared. Finally, from this study, the advantages and disadvantages of each method are discussed.
Simulation analysis of impulse characteristics of space debris irradiated by multi-pulse laser
Lin, Zhengguo; Jin, Xing; Chang, Hao; You, Xiangyu
2018-02-01
Cleaning space debris with lasers is a hot topic in the field of space security research, and impulse characteristics are the basis of laser debris removal. In order to study the impulse characteristics of rotating irregular space debris irradiated by a multi-pulse laser, an impulse calculation method for rotating space debris under multi-pulse irradiation is established based on the area matrix method. The calculation method for the impulse and impulsive moment under multi-pulse irradiation is given, and the calculation process for the total impulse is analyzed. With a typical non-planar debris object (a cube) as an example, the impulse characteristics of space debris irradiated by a multi-pulse laser are simulated and analyzed. The effects of initial angular velocity, spot size and pulse frequency on the impulse characteristics are investigated.
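The per-pulse bookkeeping behind such a calculation reduces to summing momentum contributions over the illuminated facets of the debris, then summing over the pulse train as the body rotates. The sketch below is our simplified illustration, not the authors' area-matrix method; the constant momentum-coupling coefficient C_m and uniform fluence are assumptions:

```python
import numpy as np

def pulse_impulse(c_m, fluence, areas, normals):
    """Impulse vector from one pulse: each illuminated surface element of
    area A receives |J| = C_m * fluence * A directed along the inward
    normal (ablation recoil pushes against the outward surface normal)."""
    J = np.zeros(3)
    for area, n in zip(areas, normals):
        J += c_m * fluence * area * (-np.asarray(n, dtype=float))
    return J

def total_impulse(c_m, fluence, areas_per_pulse, normals_per_pulse):
    """Total impulse over a pulse train; the illuminated areas and normals
    may change between pulses as the debris rotates."""
    return sum(pulse_impulse(c_m, fluence, a, n)
               for a, n in zip(areas_per_pulse, normals_per_pulse))

# One cube face (area 1 m^2, outward normal +z) hit by two identical pulses
J = total_impulse(1e-5, 10.0, [[1.0], [1.0]], [[[0.0, 0.0, 1.0]]] * 2)
print(J)  # net impulse along -z
```

The impulsive moment follows the same pattern with each facet contribution replaced by r × J, where r is the facet centroid relative to the center of mass; that is where the rotation-dependent geometry enters.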
Electrical behaviour of a silicone elastomer under simulated space environment
International Nuclear Information System (INIS)
Roggero, A; Dantras, E; Paulmier, T; Rejsek-Riba, V; Tonon, C; Dagras, S; Balcon, N; Payan, D
2015-01-01
The electrical behavior of a space-used silicone elastomer was characterized using surface potential decay and dynamic dielectric spectroscopy techniques. In both cases, the dielectric manifestation of the glass transition (dipole orientation) and a charge transport phenomenon were observed. An unexpected linear increase of the surface potential with temperature was observed around T g in thermally stimulated potential decay experiments, due to molecular mobility limiting dipolar orientation on the one hand, and to 3D thermal expansion reducing the material's capacitance on the other. At higher temperatures, the charge transport process, believed to be thermally activated electron hopping with an activation energy of about 0.4 eV, was studied with and without the silica and iron oxide fillers present in the commercial material. These fillers were found to play a preponderant role in the low-frequency electrical conductivity of this silicone elastomer, probably through a Maxwell-Wagner-Sillars relaxation phenomenon. (paper)
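An activation energy of 0.4 eV implies a strong temperature sensitivity of the hopping conductivity. As a quick illustrative calculation (the Arrhenius form is standard for thermally activated hopping; the pre-exponential factor is an arbitrary placeholder, not a measured value):

```python
import numpy as np

K_B_EV = 8.617e-5  # Boltzmann constant in eV/K

def hopping_conductivity(T, sigma0, ea_ev=0.4):
    """Thermally activated hopping: sigma(T) = sigma0 * exp(-Ea / (kB*T)).
    Ea = 0.4 eV is the activation energy reported above; sigma0 is an
    assumed pre-exponential factor."""
    return sigma0 * np.exp(-ea_ev / (K_B_EV * T))

# Conductivity ratio for a 30 K increase near room temperature
ratio = hopping_conductivity(330.0, 1.0) / hopping_conductivity(300.0, 1.0)
print(ratio)  # roughly a fourfold increase
```

This steep exponential dependence is why the charge transport contribution only becomes visible above the glass transition in the potential-decay experiments.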
Monte Carlo simulations of Microdosimetry for Space Research at FAIR
International Nuclear Information System (INIS)
Burigo, Lucas; Pshenichnov, Igor; Mishustin, Igor; Bleicher, Marcus
2013-01-01
The exposure to high charge and energy (HZE) particles is one of the major concerns for humans during their missions in space. As radiation effects essentially depend on the charge, mass and energy of cosmic-ray particles, the radiation quality has to be investigated, e.g. by means of microdosimetry measurements on board a spacecraft. We benchmark the electromagnetic models of the Geant4 toolkit against microdosimetry data obtained with a walled Tissue Equivalent Proportional Counter (TEPC) in beams of HZE particles. Our MCHIT model is able to reproduce, in general, the response functions and microdosimetry variables for nuclear beams from He to Fe with energies of 80-400 MeV per nucleon.
Large-signal analysis of DC motor drive system using state-space averaging technique
International Nuclear Information System (INIS)
Bekir Yildiz, Ali
2008-01-01
The analysis of a separately excited DC motor driven by a DC-DC converter is realized by using the state-space averaging technique. Firstly, a general and unified large-signal averaged circuit model for DC-DC converters is given. The method converts power electronic systems, which are periodically time-variant because of their switching operation, into unified, time-independent systems. Using the averaged circuit model enables us to combine the different topologies of converters. Thus, all analysis and design processes concerning the DC motor can be easily realized by using the unified averaged model, which is valid over the whole switching period. Some large-signal variations such as the speed and current of the DC motor, the steady-state analysis, and the large-signal and small-signal transfer functions are easily obtained by using the averaged circuit model
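The averaging step itself is a duty-cycle-weighted combination of the state matrices of the switch-on and switch-off topologies. A generic two-state sketch (illustrative matrices, not the paper's specific converter/motor model):

```python
import numpy as np

def averaged_model(A_on, B_on, A_off, B_off, d):
    """State-space averaging over one switching period: the 'on' and 'off'
    topologies are weighted by the duty cycle d, yielding a single
    time-independent large-signal model dx/dt = A x + B u."""
    return d * A_on + (1 - d) * A_off, d * B_on + (1 - d) * B_off

def steady_state(A, B, u):
    """Steady state of dx/dt = A x + B u, i.e. x_ss = -A^{-1} B u."""
    return -np.linalg.solve(A, B @ u)

# Generic example: input only reaches the states during the 'on' interval
A_on = np.array([[-1.0, 0.0], [0.0, -2.0]])
B_on = np.array([[1.0], [0.0]])
A_off = A_on.copy()
B_off = np.zeros((2, 1))
A, B = averaged_model(A_on, B_on, A_off, B_off, d=0.5)
print(steady_state(A, B, np.array([1.0])))  # ~[0.5, 0.]
```

Small-signal transfer functions then follow by linearizing the averaged model about this steady state with respect to perturbations in the duty cycle and input.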
Analyzing Damping Vibration Methods of Large-Size Space Vehicles in the Earth's Magnetic Field
Directory of Open Access Journals (Sweden)
G. A. Shcheglov
2016-01-01
It is known that most of today's space vehicles comprise large antennas, which are bracket-attached to the vehicle body. Dimensions of reflector antennas may be 30-50 m, and the weight of such constructions can reach approximately 200 kg. Since the antenna dimensions are significantly larger than the size of the vehicle body and the points attaching the brackets to the space vehicle have low stiffness, conventional dampers may be inefficient. The paper proposes to consider damping the antenna through its interaction with the Earth's magnetic field. A simple dynamic model of a space vehicle equipped with a large-size structure is built: the space vehicle is a parallelepiped to which the antenna is attached through a beam. To solve the model problems, a simplified model of the Earth's magnetic field was used: uniform, with intensity lines parallel to each other and perpendicular to the plane of the antenna. The paper considers two layouts of coils with respect to the antenna, namely: a vertical one, in which the axis of the magnetic dipole is perpendicular to the antenna plane, and a horizontal one, in which the axis of the magnetic dipole lies in the antenna plane. It also explores two ways of magnetically damping the oscillations: through a controlled current supplied from the power system of the space vehicle, and through the self-induction current in the coil. Thus, four objectives were formulated. For each task, an oscillation equation was formulated; then the ratio of oscillation amplitudes and the decay time were estimated. It was found that each task requires certain parameters either of the antenna itself (its dimensions and moment of inertia) or of the coil and, respectively, of the current supplied from the space vehicle. For each task, the ranges of these parameters within which vibration damping is efficient were found. Based on the analysis of the tasks, the conclusion can be drawn that a specialized control system
Alberts, Samantha J.
The investigation of microgravity fluid dynamics emerged out of necessity with the advent of space exploration. In particular, capillary research took a leap forward in the 1960s with regard to liquid settling and interfacial dynamics. Due to inherent temperature variations in large spacecraft liquid systems, such as fuel tanks, forces develop on gas-liquid interfaces which induce thermocapillary flows. To date, thermocapillary flows have been studied in small, idealized research geometries, usually under terrestrial conditions. The 1 to 3 m lengths in current and future large tanks and hardware are designed based on hardware rather than research, which leaves spaceflight systems designers without the technological tools to effectively create safe and efficient designs. This thesis focused on the design and feasibility of a large length-scale thermocapillary flow experiment, which utilizes temperature variations to drive a flow. The design of a helical channel geometry ranging from 1 to 2.5 m in length permits a large length-scale thermocapillary flow experiment to fit in a seemingly small International Space Station (ISS) facility such as the Fluids Integrated Rack (FIR). An initial investigation determined that the proposed experiment produces measurable data while adhering to the FIR facility limitations. The computational portion of this thesis focused on the investigation of functional geometries of fuel tanks and depots using Surface Evolver. This work outlines the design of a large length-scale thermocapillary flow experiment for the ISS FIR. The results from this work improve the understanding of thermocapillary flows and thus improve the technological tools for predicting heat and mass transfer in large length-scale thermocapillary flows. Without the tools to understand the thermocapillary flows in these systems, engineers are forced to design larger, heavier vehicles to assure safety and mission success.
ROSA-IV Large Scale Test Facility (LSTF) system description for second simulated fuel assembly
International Nuclear Information System (INIS)
1990-10-01
The ROSA-IV Program's Large Scale Test Facility (LSTF) is a test facility for integral simulation of thermal-hydraulic response of a pressurized water reactor (PWR) during small break loss-of-coolant accidents (LOCAs) and transients. In this facility, the PWR core nuclear fuel rods are simulated using electric heater rods. The simulated fuel assembly which was installed during the facility construction was replaced with a new one in 1988. The first test with this second simulated fuel assembly was conducted in December 1988. This report describes the facility configuration and characteristics as of this date (December 1988) including the new simulated fuel assembly design and the facility changes which were made during the testing with the first assembly as well as during the renewal of the simulated fuel assembly. (author)
Preliminary results on the dynamics of large and flexible space structures in Halo orbits
Colagrossi, Andrea; Lavagna, Michèle
2017-05-01
The global exploration roadmap suggests, among other ambitious future space programmes, a possible manned outpost in lunar vicinity, to support surface operations and further astronaut training for longer and deeper space missions and transfers. In particular, a Lagrangian point orbit location - in the Earth-Moon system - is suggested for a manned cis-lunar infrastructure; a proposal which opens an interesting field of study from the astrodynamics perspective. Literature offers a wide set of scientific research done on orbital dynamics under the Three-Body Problem modelling approach, while less of it includes the attitude dynamics modelling as well. However, whenever a large space structure (ISS-like) is considered, not only the coupled orbit-attitude dynamics should be modelled to run more accurate analyses, but the structural flexibility should be included too. The paper, starting from the well-known Circular Restricted Three-Body Problem formulation, presents some preliminary results obtained by adding a coupled orbit-attitude dynamical model and the effects due to the large structure flexibility. In addition, the most relevant perturbing phenomena, such as the Solar Radiation Pressure (SRP) and the fourth-body (Sun) gravity, are included in the model as well. A multi-body approach has been preferred to represent possible configurations of the large cis-lunar infrastructure: interconnected simple structural elements - such as beams, rods or lumped masses linked by springs - build up the space segment. To better investigate the relevance of the flexibility effects, the lumped parameters approach is compared with a distributed parameters semi-analytical technique. A sensitivity analysis of system dynamics, with respect to different configurations and mechanical properties of the extended structure, is also presented, in order to highlight drivers for the lunar outpost design. Furthermore, a case study for a large and flexible space structure in Halo orbits around
Gambicorti, Lisa; D'Amato, Francesco; Vettore, Christian; Duò, Fabrizio; Guercia, Alessio; Patauner, Christian; Biasi, Roberto; Lisi, Franco; Riccardi, Armando; Gallieni, Daniele; Lazzarini, Paolo; Tintori, Matteo; Zuccaro Marchi, Alessandro; Pereira do Carmo, Joao
2017-11-01
The aim of this work is to describe the latest results of new technological concepts for Large Aperture Telescopes Technology (LATT) using thin deployable lightweight active mirrors. This technology is developed under the European Space Agency (ESA) Technology Research Program and can be exploited in all the applications based on the use of primary mirrors of space telescopes with large aperture, segmented lightweight telescopes with wide Field of View (FOV) and low f/#, and LIDAR telescopes. The reference mission application is a potential future ESA mission, related to a space borne DIAL (Differential Absorption Lidar) instrument operating around 935.5 nm with the goal of measuring water vapor profiles in the atmosphere. An Optical BreadBoard (OBB) for LATT has been designed for investigating and testing two critical aspects of the technology: 1) control accuracy in mirror surface shaping, and 2) mirror survivability to launch. The aim is to evaluate the effective performance of the long-stroke smart actuators used for the mirror control and to demonstrate the effectiveness and reliability of the electrostatic locking (EL) system to restrain the thin shell on the mirror backup structure during launch. The paper presents a comprehensive vision of the breadboard, focusing on how the requirements have driven the design of the whole system and of the various subsystems. The manufacturing process of the thin shell is also presented.
Towse, Clare-Louise; Akke, Mikael; Daggett, Valerie
2017-04-27
Molecular dynamics (MD) simulations contain considerable information with regard to the motions and fluctuations of a protein, the magnitude of which can be used to estimate conformational entropy. Here we survey conformational entropy across protein fold space using the Dynameomics database, which represents the largest existing data set of protein MD simulations for representatives of essentially all known protein folds. We provide an overview of MD-derived entropies accounting for all possible degrees of dihedral freedom on an unprecedented scale. Although different side chains might be expected to impose varying restrictions on the conformational space that the backbone can sample, we found that the backbone entropy and side chain size are not strictly coupled. An outcome of these analyses is the Dynameomics Entropy Dictionary, the contents of which have been compared with entropies derived by other theoretical approaches and experiment. As might be expected, the conformational entropies scale linearly with the number of residues, demonstrating that conformational entropy is an extensive property of proteins. The calculated conformational entropies of folding agree well with previous estimates. Detailed analysis of specific cases identifies deviations in conformational entropy from the average values that highlight how conformational entropy varies with sequence, secondary structure, and tertiary fold. Notably, α-helices have lower entropy on average than do β-sheets, and both are lower than coil regions.
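The dihedral-based conformational entropies surveyed here reduce, in essence, to evaluating −R Σ p ln p over the populated torsion-angle bins of each dihedral. A minimal illustrative sketch of such an estimate for a single dihedral (not the Dynameomics pipeline; the bin count and the sampled distributions are arbitrary choices):

```python
import numpy as np

R = 8.314  # gas constant, J/(mol K)

def dihedral_entropy(angles_deg, n_bins=36):
    """Estimate the conformational entropy (J/mol/K) of one dihedral
    from its sampled angle distribution, via S = -R * sum(p * ln p)."""
    counts, _ = np.histogram(angles_deg, bins=n_bins, range=(-180.0, 180.0))
    p = counts / counts.sum()
    p = p[p > 0]  # empty bins contribute 0 * ln 0 -> 0
    return -R * np.sum(p * np.log(p))

# A dihedral locked in one rotamer well (helix-like) has lower
# entropy than one sampling its full range (coil-like).
rng = np.random.default_rng(0)
narrow = rng.normal(-60.0, 5.0, 10_000)      # single well
broad = rng.uniform(-180.0, 180.0, 10_000)   # fully flexible
assert dihedral_entropy(narrow) < dihedral_entropy(broad)
```

The uniform case approaches the maximum R ln(36) for 36 bins, consistent with coil regions showing higher entropy than helices in the survey above.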
Heap, Sara; Folta, David; Gong, Qian; Howard, Joseph; Hull, Tony; Purves, Lloyd
2016-08-01
Large astronomical missions are usually general-purpose telescopes with a suite of instruments optimized for different wavelength regions, spectral resolutions, etc. Their end-to-end (E2E) simulations are typically photons-in to flux-out calculations made to verify that each instrument meets its performance specifications. In contrast, smaller space missions are usually single-purpose telescopes, and their E2E simulations start with the scientific question to be answered and end with an assessment of the effectiveness of the mission in answering the scientific question. Thus, E2E simulations for small missions consist of a longer string of calculations than those for large missions, as they include not only the telescope and instrumentation, but also the spacecraft, orbit, and external factors such as coordination with other telescopes. Here, we illustrate the strategy and organization of small-mission E2E simulations using the Galaxy Evolution Spectroscopic Explorer (GESE) as a case study. GESE is an Explorer/Probe-class space mission concept with the primary aim of understanding galaxy evolution. Operation of a small survey telescope in space like GESE is usually simpler than operation of large telescopes driven by the varied scientific programs of the observers or by transient events. Nevertheless, both types of telescopes share two common challenges: maximizing the integration time on target, while minimizing operation costs including communication costs and staffing on the ground. We show in the case of GESE how these challenges can be met through a custom orbit and a system design emphasizing simplification and leveraging information from ground-based telescopes.
International Nuclear Information System (INIS)
Candel, A.; Kabel, A.; Ko, K.; Lee, L.; Li, Z.; Limborg, C.; Ng, C.; Prudencio, E.; Schussman, G.; Uplenchwar, R.
2007-01-01
Over the past years, SLAC's Advanced Computations Department (ACD) has developed the parallel finite element (FE) particle-in-cell code Pic3P (Pic2P) for simulations of beam-cavity interactions dominated by space-charge effects. As opposed to standard space-charge dominated beam transport codes, which are based on the electrostatic approximation, Pic3P (Pic2P) includes space-charge, retardation and boundary effects as it self-consistently solves the complete set of Maxwell-Lorentz equations using higher-order FE methods on conformal meshes. Use of efficient, large-scale parallel processing allows for the modeling of photoinjectors with unprecedented accuracy, aiding the design and operation of the next-generation of accelerator facilities. Applications to the Linac Coherent Light Source (LCLS) RF gun are presented
Jennings, Esther H.; Nguyen, Sam P.; Wang, Shin-Ywan; Woo, Simon S.
2008-01-01
NASA's planned Lunar missions will involve multiple NASA centers, each with a specific role and specialization. In this vision, the Constellation program (CxP)'s Distributed System Integration Laboratories (DSIL) architecture consists of multiple System Integration Labs (SILs), with simulators, emulators, test labs and control centers interacting with each other over a broadband network to perform test and verification for mission scenarios. To support the end-to-end simulation and emulation effort of NASA's exploration initiatives, different NASA centers are interconnected to participate in distributed simulations. Currently, DSIL has interconnections among the following NASA centers: Johnson Space Center (JSC), Kennedy Space Center (KSC), Marshall Space Flight Center (MSFC) and Jet Propulsion Laboratory (JPL). Through interconnections and interactions among different NASA centers, critical resources and data can be shared, while independent simulations can be performed simultaneously at different NASA locations, to effectively utilize the simulation and emulation capabilities at each center. Furthermore, the development of DSIL can maximally leverage the existing project simulation and testing plans. In this work, we describe the specific role and development activities at JPL for the Space Communications and Navigation Network (SCaN) simulator using the Multi-mission Advanced Communications Hybrid Environment for Test and Evaluation (MACHETE) tool to simulate communications effects among mission assets. Using MACHETE, different space network configurations among spacecraft and ground systems with various parameter sets can be simulated. Data necessary for tracking, navigation, and guidance of spacecraft such as the Crew Exploration Vehicle (CEV), Crew Launch Vehicle (CLV), and Lunar Relay Satellite (LRS), together with orbit calculation data, are disseminated to different NASA centers and updated periodically using the High Level Architecture (HLA). In
Unique Programme of Indian Centre for Space Physics using large rubber Balloons
Chakrabarti, Sandip Kumar; Sarkar, Ritabrata; Bhowmick, Debashis; Chakraborty, Subhankar
Indian Centre for Space Physics (ICSP) has developed a unique capability to pursue space-based studies at a very low cost. Here, large rubber balloons are sent to near space (~40 km) with payloads of less than 4 kg. These payloads can be cosmic ray detectors, X-ray detectors, or muon detectors, apart from a communication device, GPS, and nine-degrees-of-freedom measurement capability. With two balloons in an orbiter-launcher configuration, ICSP has been able to conduct long-duration flights of up to 12 hours. ICSP has so far sent 56 Dignity missions to near space and obtained cosmic ray and muon variations on a regular basis, as well as dynamical spectra of solar flares and gamma-ray bursts, apart from other usual parameters such as wind velocity components and temperature and pressure variations. Since all the payloads are retrieved by parachutes, the cost per mission remains very low, typically around USD 1000. The preparation time is short, and no special launching area is required. In principle, such experiments can be conducted on a daily basis if need be. Presently, we are also incorporating studies related to Earth system science, such as ozone, aerosols and micro-meteorites.
Colagrossi, Andrea; Lavagna, Michèle
2018-03-01
A space station in the vicinity of the Moon can be exploited as a gateway for future human and robotic exploration of the solar system. The natural location for a space system of this kind is about one of the Earth-Moon libration points. The study addresses the dynamics during rendezvous and docking operations with a very large space infrastructure in an EML2 Halo orbit. The model takes into account the coupling effects between the orbital and the attitude motion in a circular restricted three-body problem environment. The flexibility of the system is included, and the interaction between the modes of the structure and those related with the orbital motion is investigated. A lumped parameter technique is used to represent the flexible dynamics. The parameters of the space station are kept as generic as possible, in a way to delineate a global scenario of the mission. However, the developed model can be tuned and updated according to the information that will be available in the future, when the whole system will be defined with a higher level of precision.
Initial condition effects on large scale structure in numerical simulations of plane mixing layers
McMullan, W. A.; Garrett, S. J.
2016-01-01
In this paper, Large Eddy Simulations are performed on the spatially developing plane turbulent mixing layer. The simulated mixing layers originate from initially laminar conditions. The focus of this research is on the effect of the nature of the imposed fluctuations on the large-scale spanwise and streamwise structures in the flow. Two simulations are performed; one with low-level three-dimensional inflow fluctuations obtained from pseudo-random numbers, the other with physically correlated fluctuations of the same magnitude obtained from an inflow generation technique. Where white-noise fluctuations provide the inflow disturbances, no spatially stationary streamwise vortex structure is observed, and the large-scale spanwise turbulent vortical structures grow continuously and linearly. These structures are observed to have a three-dimensional internal geometry with branches and dislocations. Where physically correlated fluctuations provide the inflow disturbances, a "streaky" streamwise structure that is spatially stationary is observed, with the large-scale turbulent vortical structures growing with the square root of time. These large-scale structures are quasi-two-dimensional, on top of which the secondary structure rides. The simulation results are discussed in the context of the varying interpretations of mixing layer growth that have been postulated. Recommendations are made concerning the data required from experiments in order to produce accurate numerical simulation recreations of real flows.
A change of coordinates on the large phase space of quantum cohomology
International Nuclear Information System (INIS)
Kabanov, A.
2001-01-01
The Gromov-Witten invariants of a smooth, projective variety V, when twisted by the tautological classes on the moduli space of stable maps, give rise to a family of cohomological field theories and endow the base of the family with coordinates. We prove that the potential functions associated to the tautological ψ classes (the large phase space) and the κ classes are related by a change of coordinates which generalizes a change of basis on the ring of symmetric functions. Our result is a generalization of the work of Manin-Zograf who studied the case where V is a point. We utilize this change of variables to derive the topological recursion relations associated to the κ classes from those associated to the ψ classes. (orig.)
On the rejection-based algorithm for simulation and analysis of large-scale reaction networks
Energy Technology Data Exchange (ETDEWEB)
Thanh, Vo Hong, E-mail: vo@cosbi.eu [The Microsoft Research-University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Zunino, Roberto, E-mail: roberto.zunino@unitn.it [Department of Mathematics, University of Trento, Trento (Italy); Priami, Corrado, E-mail: priami@cosbi.eu [The Microsoft Research-University of Trento Centre for Computational and Systems Biology, Piazza Manifattura 1, Rovereto 38068 (Italy); Department of Mathematics, University of Trento, Trento (Italy)
2015-06-28
Stochastic simulation for in silico studies of large biochemical networks requires a great amount of computational time. We recently proposed a new exact simulation algorithm, called the rejection-based stochastic simulation algorithm (RSSA) [Thanh et al., J. Chem. Phys. 141(13), 134116 (2014)], to improve simulation performance by postponing and collapsing as much as possible the propensity updates. In this paper, we analyze the performance of this algorithm in detail, and improve it for simulating large-scale biochemical reaction networks. We also present a new algorithm, called simultaneous RSSA (SRSSA), which generates many independent trajectories simultaneously for the analysis of the biochemical behavior. SRSSA improves simulation performance by utilizing a single data structure across simulations to select reaction firings and forming trajectories. The memory requirement for building and storing the data structure is thus independent of the number of trajectories. The updating of the data structure when needed is performed collectively in a single operation across the simulations. The trajectories generated by SRSSA are exact and independent of each other by exploiting the rejection-based mechanism. We test our new improvement on real biological systems with a wide range of reaction networks to demonstrate its applicability and efficiency.
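The rejection mechanism underlying RSSA can be illustrated on a toy birth-death system: exact propensities are replaced by cheap upper bounds computed from a fluctuation interval around the state, candidates are accepted with probability a/ā, and the exact propensity is only looked up at the acceptance test. A sketch of the idea (not the authors' RSSA code; the interval width and rate constants are arbitrary):

```python
import random

def rejection_ssa(x0, k_prod, k_deg, t_end, delta=0.2, seed=1):
    """Toy rejection-based SSA for a birth-death system
    (0 -> X at rate k_prod; X -> 0 at rate k_deg * x).
    Propensities are bounded using a fluctuation interval [lo, hi]
    around x; exact propensities are evaluated only at the
    acceptance test, and bounds are refreshed only when x leaves
    its interval (the propensity-update postponement idea)."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    lo, hi = max(0, int(x * (1 - delta))), int(x * (1 + delta)) + 1
    while True:
        a_ub = (k_prod, k_deg * hi)          # propensity upper bounds
        t += rng.expovariate(a_ub[0] + a_ub[1])
        if t >= t_end:
            return x
        r = rng.random() * (a_ub[0] + a_ub[1])
        j = 0 if r < a_ub[0] else 1          # candidate reaction
        a_true = k_prod if j == 0 else k_deg * x
        if rng.random() * a_ub[j] < a_true:  # accept with prob a/a_ub
            x += 1 if j == 0 else -1
            if not (lo <= x <= hi):          # refresh bounds only now
                lo = max(0, int(x * (1 - delta)))
                hi = int(x * (1 + delta)) + 1

# The stationary mean of this process is k_prod / k_deg.
x_final = rejection_ssa(10, 10.0, 1.0, 50.0)
assert x_final >= 0
```

The thinning construction keeps the simulation exact: each reaction's effective firing rate equals its true propensity, while the exact propensities are recomputed only when the state leaves its interval.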
Robust mode space approach for atomistic modeling of realistically large nanowire transistors
Huang, Jun Z.; Ilatikhameneh, Hesameddin; Povolotskyi, Michael; Klimeck, Gerhard
2018-01-01
Nanoelectronic transistors have reached 3D length scales in which the number of atoms is countable. Truly atomistic device representations are needed to capture the essential functionalities of the devices. Atomistic quantum transport simulations of realistically extended devices are, however, computationally very demanding. The widely used mode space (MS) approach can significantly reduce the numerical cost, but a good MS basis is usually very hard to obtain for atomistic full-band models. In this work, a robust and parallel algorithm is developed to optimize the MS basis for atomistic nanowires. This enables engineering-level, reliable tight binding non-equilibrium Green's function simulation of nanowire metal-oxide-semiconductor field-effect transistor (MOSFET) with a realistic cross section of 10 nm × 10 nm using a small computer cluster. This approach is applied to compare the performance of InGaAs and Si nanowire n-type MOSFETs (nMOSFETs) with various channel lengths and cross sections. Simulation results with full-band accuracy indicate that InGaAs nanowire nMOSFETs have no drive current advantage over their Si counterparts for cross sections up to about 10 nm × 10 nm.
Energy Technology Data Exchange (ETDEWEB)
Vervisch, Luc; Domingo, Pascale; Lodato, Guido [CORIA - CNRS and INSA de Rouen, Technopole du Madrillet, BP 8, 76801 Saint-Etienne-du-Rouvray (France); Veynante, Denis [EM2C - CNRS and Ecole Centrale Paris, Grande Voie des Vignes, 92295 Chatenay-Malabry (France)
2010-04-15
Large-Eddy Simulation (LES) provides space-filtered quantities to compare with measurements, which usually have been obtained using a different filtering operation; hence, numerical and experimental results can be examined side-by-side in a statistical sense only. Instantaneous, space-filtered and statistically time-averaged signals feature different characteristic length-scales, which can be combined in dimensionless ratios. From two canonical manufactured turbulent solutions, a turbulent flame and a passive scalar turbulent mixing layer, the critical values of these ratios under which measured and computed variances (resolved plus sub-grid scale) can be compared without resorting to additional residual terms are first determined. It is shown that actual Direct Numerical Simulation can hardly accommodate a sufficiently large range of length-scales to perform statistical studies of LES filtered reactive scalar-fields energy budget based on sub-grid scale variances; an estimation of the minimum Reynolds number allowing for such DNS studies is given. From these developments, a reliability mesh criterion emerges for scalar LES and scaling for scalar sub-grid scale energy is discussed. (author)
Large-size deployable construction heated by solar irradiation in free space
Pestrenina, Irena; Kondyurin, Alexey; Pestrenin, Valery; Kashin, Nickolay; Naymushin, Alexey
Large-size deployable construction in free space with subsequent direct curing was invented more than fifteen years ago (Briskman et al., 1997; Kondyurin, 1998). It raised a number of scientific problems, one of which is the possibility of using solar energy to initiate the curing reaction. This paper is devoted to investigating the curing process under solar irradiation during a space flight in Earth orbits. A rotation of the construction is considered; this motion can provide the optimal temperature distribution in the construction that is required for the polymerization reaction. The cylindrical construction of 80 m length with two hemispherical ends of 10 m radius is considered. The wall of the construction, a 10 mm carbon fiber/epoxy matrix composite, is irradiated by heat flux from the sun and radiates heat from the external surface according to the Stefan-Boltzmann law. The stage of the polymerization reaction is calculated as a function of temperature and time, based on laboratory experiments with certified composite materials for space exploitation. The curing kinetics of the composite is calculated for different inclination Low Earth Orbits (300 km altitude) and Geostationary Earth Orbit (40000 km altitude). The results show that • the curing process depends strongly on the Earth orbit and the rotation of the construction; • the optimal flight orbit and rotation can be found to provide the thermal regime that is sufficient for the complete curing of the considered construction. The study is supported by RFBR grant No.12-08-00970-a. 1. Briskman V., A. Kondyurin, K. Kostarev, V. Leontyev, M. Levkovich, A. Mashinsky, G. Nechitailo, T. Yudina, Polymerization in microgravity as a new process in space technology, Paper No IAA-97-IAA.12.1.07, 48th International Astronautical Congress, October 6-10, 1997, Turin, Italy. 2. Kondyurin A.V., Building the shells of large space stations by the polymerisation of epoxy composites in open space, Int. Polymer Sci. and Technol., v.25, N4
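The wall temperature history that drives the cure comes from a heat balance between absorbed solar flux and Stefan-Boltzmann radiation to space. A lumped sketch of that balance for a unit area of wall (illustrative parameter values, not those of the paper's orbit-resolved calculation):

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
SOLAR = 1361.0     # solar constant near Earth, W/m^2

def wall_temperature(t_end_s, alpha=0.9, eps=0.85, sun_frac=0.25,
                     heat_cap=16e3, T0=250.0, dt=1.0):
    """Lumped heat balance for 1 m^2 of composite wall: absorbed
    solar flux in (sun_frac crudely accounts for rotation and
    self-shadowing) versus eps*sigma*T^4 radiated to space.
    heat_cap is the areal heat capacity in J/(m^2 K), roughly a
    10 mm composite wall; explicit Euler integration."""
    T = T0
    for _ in range(int(t_end_s / dt)):
        q_net = alpha * SOLAR * sun_frac - eps * SIGMA * T ** 4
        T += dt * q_net / heat_cap
    return T

# The wall relaxes toward the radiative-equilibrium temperature:
T_eq = (0.9 * SOLAR * 0.25 / (0.85 * SIGMA)) ** 0.25   # ~282 K
```

Varying sun_frac with orbit and rotation, as the paper does, is what shifts this equilibrium into or out of the range needed for complete cure.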
Heavy-Ion Collimation at the Large Hadron Collider Simulations and Measurements
AUTHOR|(CDS)2083002; Wessels, Johannes Peter; Bruce, Roderik
The CERN Large Hadron Collider (LHC) stores and collides proton and $^{208}$Pb$^{82+}$ beams of unprecedented energy and intensity. Thousands of superconducting magnets, operated at 1.9 K, guide the very intense and energetic particle beams, which have a large potential for destruction. This implies the demand for a multi-stage collimation system to provide protection from beam-induced quenches or even hardware damage. In heavy-ion operation, ion fragments with significant rigidity offsets can still scatter out of the collimation system. When they irradiate the superconducting LHC magnets, the latter risk to quench (lose their superconducting property). These secondary collimation losses can potentially impose a limitation for the stored heavy-ion beam energy. Therefore, their distribution in the LHC needs to be understood by sophisticated simulations. Such simulation tools must accurately simulate the particle motion of many different nuclides in the magnetic LHC lattice and simulate their interaction with t...
Federal Laboratory Consortium — The Space Power Facility (SPF) houses the world's largest space environment simulation chamber, measuring 100 ft. in diameter by 122 ft. high. In this chamber, large...
Chatterjee, Tanmoy; Peet, Yulia T.
2018-03-01
Length scales of eddies involved in the power generation of infinite wind farms are studied by analyzing the spectra of the turbulent flux of mean kinetic energy (MKE) from large eddy simulations (LES). Large-scale structures an order of magnitude bigger than the turbine rotor diameter (D) are shown to have a substantial contribution to wind power. Varying dynamics in the intermediate scales (D to 10D) are also observed from a parametric study involving interturbine distances and hub height of the turbines. Further insight into the eddies responsible for the power generation has been provided from the scaling analysis of two-dimensional premultiplied spectra of MKE flux. The LES code is developed in a high Reynolds number near-wall modeling framework, using the open-source spectral element code Nek5000, and the wind turbines have been modelled using a state-of-the-art actuator line model. The LES of infinite wind farms have been validated against statistical results from the previous literature. The study is expected to improve our understanding of the complex multiscale dynamics in the domain of large wind farms and identify the length scales that contribute to the power. This information can be useful for design of wind farm layout and turbine placement that take advantage of the large-scale structures contributing to wind turbine power.
Analysis of large optical ground stations for deep-space optical communications
Garcia-Talavera, M. Reyes; Rivera, C.; Murga, G.; Montilla, I.; Alonso, A.
2017-11-01
Inter-satellite and ground-to-satellite optical communications have been successfully demonstrated over more than a decade with several experiments, the most recent being NASA's lunar mission Lunar Atmospheric Dust Environment Explorer (LADEE). The technology is in a mature stage that allows optical communications to be considered a high-capacity solution for future deep-space communications [1][2], where there is an increasing demand on downlink data rate to improve science return. To serve these deep-space missions, suitable optical ground stations (OGS) have to be developed providing large collecting areas. The design of such OGSs must face both technical and cost constraints in order to achieve an optimum implementation. To that end, different approaches have already been proposed and analyzed, namely, a large telescope based on a segmented primary mirror, telescope arrays, and even the combination of RF and optical receivers in modified versions of existing Deep-Space Network (DSN) antennas [3][4][5]. Array architectures have been proposed to relax some requirements, acting as one of the key drivers of the present study. The advantages offered by the array approach are attained at the expense of adding subsystems. Critical issues identified for each implementation include their inherent efficiency and losses, as well as their performance under high-background conditions, and the acquisition, pointing, tracking, and synchronization capabilities. It is worth noticing that, due to the photon-counting nature of detection, the system performance is not solely given by the signal-to-noise ratio parameter. To start the analysis, first the main implications of the deep-space scenarios are summarized, since they are the driving requirements to establish the technical specifications for the large OGS. Next, both the main characteristics of the OGS and the potential configuration approaches are presented, getting deeper into key subsystems with strong impact on the
Concept for a power system controller for large space electrical power systems
Lollar, L. F.; Lanier, J. R., Jr.; Graves, J. R.
1981-01-01
The development of technology for a fail-operational power system controller (PSC) utilizing microprocessor technology for managing the distribution and power processor subsystems of a large multi-kW space electrical power system is discussed. The specific functions which must be performed by the PSC, the best microprocessor available to do the job, and the feasibility, cost savings, and applications of a PSC were determined. A limited-function breadboard version of a PSC was developed to demonstrate the concept and potential cost savings.
A flat array large telescope concept for use on the moon, earth, and in space
Woodgate, Bruce E.
1991-01-01
An astronomical optical telescope concept is described which can provide very large collecting areas, of order 1000 sq m. This is an order of magnitude larger than the new generation of telescopes now being designed and built. Multiple gimballed flat mirrors direct the beams from a celestial source into a single telescope of the same aperture as each flat mirror. Multiple images of the same source are formed at the telescope focal plane. A beam combiner collects these images and superimposes them into a single image, onto a detector or spectrograph aperture. This telescope could be used on the earth, the moon, or in space.
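The collecting-area arithmetic behind the concept is straightforward: each gimballed flat, matching the single telescope's aperture, contributes one full aperture of light to the combined image. A sketch with illustrative numbers (the mirror count and diameter are assumptions, not values from the paper):

```python
import math

def effective_area(n_flats, flat_diameter_m):
    """Total collecting area of n_flats gimballed flat mirrors,
    each with a circular aperture matching the single telescope's,
    whose superimposed images are combined onto one detector."""
    return n_flats * math.pi * (flat_diameter_m / 2.0) ** 2

# e.g. about 100 flats of 3.6 m aperture reach the ~1000 m^2 regime
area = effective_area(100, 3.6)
assert 1000.0 < area < 1100.0
```

This is why the concept scales: collecting area grows linearly with mirror count while the imaging telescope itself stays at the size of one flat.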
Piomelli, Ugo; Zang, Thomas A.; Speziale, Charles G.; Lund, Thomas S.
1990-01-01
An eddy viscosity model based on the renormalization group theory of Yakhot and Orszag (1986) is applied to the large-eddy simulation of transition in a flat-plate boundary layer. The simulation predicts with satisfactory accuracy the mean velocity and Reynolds stress profiles, as well as the development of the important scales of motion. The evolution of the structures characteristic of the nonlinear stages of transition is also predicted reasonably well.
Establishment of DNS database in a turbulent channel flow by large-scale simulations
Abe, Hiroyuki; Kawamura, Hiroshi; 阿部 浩幸; 河村 洋
2008-01-01
In the present study, we establish statistical DNS (Direct Numerical Simulation) database in a turbulent channel flow with passive scalar transport at high Reynolds numbers and make the data available at our web site (http://murasun.me.noda.tus.ac.jp/turbulence/). The established database is reported together with the implementation of large-scale simulations, representative DNS results and results on turbulence model testing using the DNS data.
VerHulst, Claire; Meneveau, Charles
2014-02-01
In this study, we address the question of how kinetic energy is entrained into large wind turbine arrays and, in particular, how large-scale flow structures contribute to such entrainment. Previous research has shown this entrainment to be an important limiting factor in the performance of very large arrays where the flow becomes fully developed and there is a balance between the forcing of the atmospheric boundary layer and the resistance of the wind turbines. Given the high Reynolds numbers and domain sizes on the order of kilometers, we rely on wall-modeled large eddy simulation (LES) to simulate turbulent flow within the wind farm. Three-dimensional proper orthogonal decomposition (POD) analysis is then used to identify the most energetic flow structures present in the LES data. We quantify the contribution of each POD mode to the kinetic energy entrainment and its dependence on the layout of the wind turbine array. The primary large-scale structures are found to be streamwise, counter-rotating vortices located above the height of the wind turbines. While the flow is periodic, the geometry is not invariant to all horizontal translations due to the presence of the wind turbines and thus POD modes need not be Fourier modes. Differences of the obtained modes with Fourier modes are documented. Some of the modes are responsible for a large fraction of the kinetic energy flux to the wind turbine region. Surprisingly, more flow structures (POD modes) are needed to capture at least 40% of the turbulent kinetic energy, for which the POD analysis is optimal, than are needed to capture at least 40% of the kinetic energy flux to the turbines. For comparison, we consider the cases of aligned and staggered wind turbine arrays in a neutral atmospheric boundary layer as well as a reference case without wind turbines. While the general characteristics of the flow structures are robust, the net kinetic energy entrainment to the turbines depends on the presence and relative
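The snapshot POD used in such analyses amounts to an SVD of the mean-subtracted data matrix, with squared singular values giving the energy of each mode. A generic sketch on synthetic data (not the authors' wind-farm pipeline; the test field is an arbitrary coherent wave plus noise):

```python
import numpy as np

def pod_modes(snapshots):
    """Snapshot POD: rows are spatial points, columns are time
    snapshots. Returns the spatial modes (columns of U) and the
    fraction of fluctuating energy captured by each mode."""
    mean = snapshots.mean(axis=1, keepdims=True)
    fluct = snapshots - mean                    # remove mean field
    modes, sing, _ = np.linalg.svd(fluct, full_matrices=False)
    energy = sing**2 / np.sum(sing**2)          # modal energy fractions
    return modes, energy

# Synthetic field: one dominant coherent structure plus weak noise.
rng = np.random.default_rng(0)
x = np.linspace(0, 2 * np.pi, 64)
t = np.linspace(0, 10, 200)
data = np.outer(np.sin(x), np.cos(2 * t)) + 0.01 * rng.standard_normal((64, 200))
modes, energy = pod_modes(data)
assert energy[0] > 0.9   # the coherent structure dominates
```

As the study notes, ranking modes by energy (as POD does) need not match their ranking by kinetic energy flux to the turbines, which is why many more modes are needed to capture the energy than the flux.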
Development of a large scale Chimera grid system for the Space Shuttle Launch Vehicle
Pearce, Daniel G.; Stanley, Scott A.; Martin, Fred W., Jr.; Gomez, Ray J.; Le Beau, Gerald J.; Buning, Pieter G.; Chan, William M.; Chiu, Ing-Tsau; Wulf, Armin; Akdag, Vedat
1993-01-01
The application of CFD techniques to large problems has dictated the need for large team efforts. This paper offers an opportunity to examine the motivations, goals, needs, problems, as well as the methods, tools, and constraints that defined NASA's development of a 111 grid/16 million point grid system model for the Space Shuttle Launch Vehicle. The Chimera approach used for domain decomposition encouraged separation of the complex geometry into several major components, each of which was modeled by an autonomous team. ICEM-CFD, a CAD based grid generation package, simplified the geometry and grid topology definition by providing mature CAD tools and patch independent meshing. The resulting grid system has, on average, a four inch resolution along the surface.
Large-Scale Brain Simulation and Disorders of Consciousness. Mapping Technical and Conceptual Issues
Directory of Open Access Journals (Sweden)
Michele Farisco
2018-04-01
Modeling and simulations have gained a leading position in contemporary attempts to describe, explain, and quantitatively predict the human brain’s operations. Computer models are highly sophisticated tools developed to achieve an integrated knowledge of the brain with the aim of overcoming the actual fragmentation resulting from different neuroscientific approaches. In this paper we investigate the plausibility of simulation technologies for emulation of consciousness and the potential clinical impact of large-scale brain simulation on the assessment and care of disorders of consciousness (DOCs), e.g., coma, vegetative state/unresponsive wakefulness syndrome, and minimally conscious state. Notwithstanding their technical limitations, we suggest that simulation technologies may offer new solutions to old practical problems, particularly in clinical contexts. We take DOCs as an illustrative case, arguing that the simulation of neural correlates of consciousness is potentially useful for improving treatments of patients with DOCs.
Large-scale simulations with distributed computing: Asymptotic scaling of ballistic deposition
International Nuclear Information System (INIS)
Farnudi, Bahman; Vvedensky, Dimitri D
2011-01-01
Extensive kinetic Monte Carlo simulations are reported for ballistic deposition (BD) in (1 + 1) dimensions. The large system sizes L observed for the onset of asymptotic scaling (L ≅ 2¹²) explain the widespread discrepancies in previous reports for exponents of BD in one and likely in higher dimensions. The exponents obtained directly from our simulations, α = 0.499 ± 0.004 and β = 0.336 ± 0.004, capture the exact values α = 1/2 and β = 1/3 for the one-dimensional Kardar-Parisi-Zhang equation. An analysis of our simulations suggests a criterion for identifying the onset of true asymptotic scaling, which enables a more informed evaluation of exponents for BD in higher dimensions. These simulations were made possible by the Simulation through Social Networking project at the Institute for Advanced Studies in Basic Sciences in 2007, which was re-launched in November 2010.
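The ballistic deposition rule itself is simple to state: a particle dropped on a random column sticks at first contact, so the new column height is the maximum of the two neighbours' heights and the column's own height plus one. A minimal (1 + 1)-dimensional sketch with periodic boundaries (the system size and particle count here are illustrative, far below the L ≅ 2¹² onset reported above):

```python
import numpy as np

def ballistic_deposition(L, n_drops, seed=0):
    """Deposit n_drops particles on a 1D substrate of L sites with
    periodic boundaries; a particle sticks at first contact, i.e. the
    new height is max(h[left], h[site] + 1, h[right])."""
    rng = np.random.default_rng(seed)
    h = np.zeros(L, dtype=np.int64)
    for site in rng.integers(0, L, size=n_drops):
        left, right = (site - 1) % L, (site + 1) % L
        h[site] = max(h[left], h[site] + 1, h[right])
    return h

h = ballistic_deposition(256, 50_000)
w = h.std()  # interface width; its scaling with L and time gives alpha and beta
print(float(w))
```

Measuring the width w for a range of system sizes and deposition times, and fitting the saturation and growth regimes, is how the exponents α and β above are extracted.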
Dynamics Modeling and Simulation of Large Transport Airplanes in Upset Conditions
Foster, John V.; Cunningham, Kevin; Fremaux, Charles M.; Shah, Gautam H.; Stewart, Eric C.; Rivers, Robert A.; Wilborn, James E.; Gato, William
2005-01-01
As part of NASA's Aviation Safety and Security Program, research has been in progress to develop aerodynamic modeling methods for simulations that accurately predict the flight dynamics characteristics of large transport airplanes in upset conditions. The motivation for this research stems from the recognition that simulation is a vital tool for addressing loss-of-control accidents, including applications to pilot training, accident reconstruction, and advanced control system analysis. The ultimate goal of this effort is to contribute to the reduction of the fatal accident rate due to loss-of-control. Research activities have involved accident analyses, wind tunnel testing, and piloted simulation. Results have shown that significant improvements in simulation fidelity for upset conditions, compared to current training simulations, can be achieved using state-of-the-art wind tunnel testing and aerodynamic modeling methods. This paper provides a summary of research completed to date and includes discussion on key technical results, lessons learned, and future research needs.
International Nuclear Information System (INIS)
Jamali, J.; Aghajafari, R.; Moini, R.; Sadeghi, H.
2002-01-01
A time-domain approach is presented to calculate electromagnetic fields inside a large Electromagnetic Pulse (EMP) simulator. This type of EMP simulator is used for studying the effect of electromagnetic pulses on electrical apparatus in various structures such as vehicles, aeroplanes, etc. The simulator consists of three planar transmission lines. To solve the problem, we first model the metallic structure of the simulator as a grid of conducting wires. The numerical solution of the governing electric field integral equation is then obtained using the method of moments in the time domain. To demonstrate the accuracy of the model, we consider a typical EMP simulator. The comparison of our results with those obtained experimentally in the literature validates the model introduced in this paper.
Non-Markovian closure models for large eddy simulations using the Mori-Zwanzig formalism
Parish, Eric J.; Duraisamy, Karthik
2017-01-01
This work uses the Mori-Zwanzig (M-Z) formalism, a concept originating from nonequilibrium statistical mechanics, as a basis for the development of coarse-grained models of turbulence. The mechanics of the generalized Langevin equation (GLE) are considered, and insight gained from the orthogonal dynamics equation is used as a starting point for model development. A class of subgrid models is considered which represent nonlocal behavior via a finite memory approximation [Stinis, arXiv:1211.4285 (2012)], the length of which is determined using a heuristic that is related to the spectral radius of the Jacobian of the resolved variables. The resulting models are intimately tied to the underlying numerical resolution and are capable of approximating non-Markovian effects. Numerical experiments on the Burgers equation demonstrate that the M-Z-based models can accurately predict the temporal evolution of the total kinetic energy and the total dissipation rate at varying mesh resolutions. The trajectory of each resolved mode in phase space is accurately predicted for cases where the coarse graining is moderate. Large eddy simulations (LESs) of homogeneous isotropic turbulence and the Taylor-Green Vortex show that the M-Z-based models are able to provide excellent predictions, accurately capturing the subgrid contribution to energy transfer. Last, LESs of fully developed channel flow demonstrate the applicability of M-Z-based models to nondecaying problems. It is notable that the form of the closure is not imposed by the modeler, but is rather derived from the mathematics of the coarse graining, highlighting the potential of M-Z-based techniques to define LES closures.
Energy Technology Data Exchange (ETDEWEB)
Seeliger, Andreas; Vreydal, Daniel; Eltaliawi, Gamil; Vijayakumar, Nandhakumar [Technische Hochschule Aachen (Germany). Lehrstuhl und Inst. fuer Bergwerks- und Huettenmaschinenkunde
2009-04-28
The aim of the GrobaDyn research project is the complete modelling of a large conveyor system. With the aid of the model, a possible conversion of the existing constant-speed drives to variable-speed drives can be simulated in advance of the planning phase, any resonance phenomena within the operating speed range analysed, and, if necessary, counter-measures taken. (orig.)
The politics of space mining - An account of a simulation game
Paikowsky, Deganit; Tzezana, Roey
2018-01-01
Celestial bodies like the Moon and asteroids contain materials and precious metals, which are valuable for human activity on Earth and beyond. Space mining has been mainly relegated to the realm of science fiction and has not been treated seriously by the international community. Private industry is starting to mobilize towards space mining, and success on this front would have a major impact on all nations. We present in this paper a review of current space mining ventures, and the international legislation, which could stand in their way - or aid them in their mission. Following that, we present the results of a role-playing simulation in which the role of several important nations was played by students of international relations. The results of the simulation are used as a basis for forecasting the potential initial responses of the nations of the world to a successful space mining operation in the future.
Cousineau, Sarah M
2005-01-01
Space charge effects are a major contributor to beam halo and emittance growth leading to beam loss in high intensity, low energy accelerators. As future accelerators strive towards unprecedented levels of beam intensity and beam loss control, a more comprehensive understanding of space charge effects is required. A wealth of simulation tools have been developed for modeling beams in linacs and rings, and with the growing availability of high-speed computing systems, computationally expensive problems that were inconceivable a decade ago are now being handled with relative ease. This has opened the field for realistic simulations of space charge effects, including detailed benchmarks with experimental data. A great deal of effort is being focused in this direction, and several recent benchmark studies have produced remarkably successful results. This paper reviews the achievements in space charge benchmarking in the last few years, and discusses the challenges that remain.
Cowen, Benjamin
2011-01-01
Simulations are essential for engineering design. These virtual realities provide characteristic data to scientists and engineers in order to understand the details and complications of the desired mission. A standard development simulation package known as Trick is used in developing a source code to model a component (federate in HLA terms). The runtime executive is integrated into an HLA based distributed simulation. TrickHLA is used to extend a Trick simulation for a federation execution, develop a source code for communication between federates, as well as foster data input and output. The project incorporates international cooperation along with team collaboration. Interactions among federates occur throughout the simulation, thereby relying on simulation interoperability. Participants communicated throughout the semester to work out how to implement this data exchange. The NASA intern team is designing a Lunar Rover federate and a Lunar Shuttle federate. The Lunar Rover federate supports transportation across the lunar surface and is essential for fostering interactions with other federates on the lunar surface (Lunar Shuttle, Lunar Base Supply Depot and Mobile ISRU Plant) as well as transporting materials to the desired locations. The Lunar Shuttle federate transports materials to and from lunar orbit. Materials that it takes to the supply depot include fuel and cargo necessary to continue moon-base operations. This project analyzes modeling and simulation technologies as well as simulation interoperability. Each team from participating universities will work on and engineer their own federate(s) to participate in the SISO Spring 2011 Workshop SIW Smackdown in Boston, Massachusetts. This paper will focus on the Lunar Rover federate.
Johannes, Bernd; Salnitski, Vyacheslav; Soll, Henning; Rauch, Melina; Hoermann, Hans-Juergen
For the evaluation of an operator's skill reliability indicators of work quality as well as of psychophysiological states during the work have to be considered. The herein presented methodology and measurement equipment were developed and tested in numerous terrestrial and space experiments using a simulation of a spacecraft docking on a space station. However, in this study the method was applied to a comparable terrestrial task—the flight simulator test (FST) used in the DLR selection procedure for ab initio pilot applicants for passenger airlines. This provided a large amount of data for a statistical verification of the space methodology. For the evaluation of the strain level of applicants during the FST psychophysiological measurements were used to construct a "psychophysiological arousal vector" (PAV) which is sensitive to various individual reaction patterns of the autonomic nervous system to mental load. Its changes and increases will be interpreted as "strain". In the first evaluation study, 614 subjects were analyzed. The subjects first underwent a calibration procedure for the assessment of their autonomic outlet type (AOT) and on the following day they performed the FST, which included three tasks and was evaluated by instructors applying well-established and standardized rating scales. This new method will possibly promote a wide range of other future applications in aviation and space psychology.
Overview of Small and Large-Scale Space Solar Power Concepts
Potter, Seth; Henley, Mark; Howell, Joe; Carrington, Connie; Fikes, John
2006-01-01
An overview of space solar power studies performed at the Boeing Company under contract with NASA will be presented. The major concepts to be presented are: 1. Power Plug in Orbit: this is a spacecraft that collects solar energy and distributes it to users in space using directed radio frequency or optical energy. Our concept uses solar arrays having the same dimensions as ISS arrays, but are assumed to be more efficient. If radiofrequency wavelengths are used, it will necessitate that the receiving satellite be equipped with a rectifying antenna (rectenna). For optical wavelengths, the solar arrays on the receiving satellite will collect the power. 2. Mars Clipper I Power Explorer: this is a solar electric Mars transfer vehicle to support human missions. A near-term precursor could be a high-power radar mapping spacecraft with self-transport capability. Advanced solar electric power systems and electric propulsion technology constitute viable elements for conducting human Mars missions that are roughly comparable in performance to similar missions utilizing alternative high thrust systems, with the one exception being their inability to achieve short Earth-Mars trip times. 3. Alternative Architectures: this task involves investigating alternatives to the traditional solar power satellite (SPS) to supply commercial power from space for use on Earth. Four concepts were studied: two using photovoltaic power generation, and two using solar dynamic power generation, with microwave and laser power transmission alternatives considered for each. All four architectures use geostationary orbit. 4. Cryogenic Propellant Depot in Earth Orbit: this concept uses large solar arrays (producing perhaps 600 kW) to electrolyze water launched from Earth, liquefy the resulting hydrogen and oxygen gases, and store them until needed by spacecraft. 5. Beam-Powered Lunar Polar Rover: a lunar rover powered by a microwave or laser beam can explore permanently shadowed craters near the lunar
Theory and Simulation of the Physics of Space Charge Dominated Beams
International Nuclear Information System (INIS)
Haber, Irving
2002-01-01
This report describes modeling of intense electron and ion beams in the space charge dominated regime. Space charge collective modes play an important role in the transport of intense beams over long distances. These modes were first observed in particle-in-cell simulations. The work presented here is closely tied to the University of Maryland Electron Ring (UMER) experiment and has application to accelerators for heavy ion beam fusion
Design and simulation of betavoltaic battery using large-grain polysilicon
International Nuclear Information System (INIS)
Yao, Shulin; Song, Zijun; Wang, Xiang; San, Haisheng; Yu, Yuxi
2012-01-01
In this paper, we present the design and simulation of a p–n junction betavoltaic battery based on large-grain polysilicon. Using Monte Carlo simulation, the average penetration depth was obtained, from which the optimal depletion region width was designed. The carrier transport model of large-grain polysilicon is used to determine the diffusion length of minority carriers. By optimizing the doping concentration, a maximum power conversion efficiency of 0.90% can be achieved with a 10 mCi/cm² Ni-63 source radiation. - Highlights: ► Ni-63 is employed as the pure beta radioisotope source. ► The planar p–n junction betavoltaic battery is based on large-grain polysilicon. ► The carrier transport model of large-grain polysilicon is used to determine the diffusion length of minority carriers. ► The average penetration depth was obtained by using the Monte Carlo method.
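The role Monte Carlo sampling plays here can be illustrated with a toy estimate of the average penetration depth. The exponential absorption law and the attenuation coefficient below are simplifying assumptions chosen for illustration; the paper's simulation tracks full electron trajectories in polysilicon.

```python
import numpy as np

# Toy Monte Carlo estimate of the mean penetration depth of beta
# electrons, assuming (for illustration only) an exponential
# absorption law with an assumed attenuation coefficient mu.
rng = np.random.default_rng(1)
mu = 3.0  # assumed attenuation coefficient, 1/um (illustrative value)
depths = rng.exponential(scale=1.0 / mu, size=100_000)  # sampled depths, um
mean_depth = float(depths.mean())
print(f"mean penetration depth ~ {mean_depth:.3f} um")
```

In the paper's workflow, the analogous averaged depth is what fixes the optimal depletion region width of the p–n junction.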
Monte Carlo simulation of a medical linear accelerator for generation of phase spaces
International Nuclear Information System (INIS)
Oliveira, Alex C.H.; Santana, Marcelo G.; Lima, Fernando R.A.; Vieira, Jose W.
2013-01-01
Radiotherapy uses various techniques and equipment for local treatment of cancer. The equipment most often used in radiotherapy for patient irradiation are linear accelerators (Linacs), which produce beams of X-rays in the range 5-30 MeV. Among the many algorithms developed over recent years for evaluation of dose distributions in radiotherapy planning, the algorithms based on Monte Carlo methods have proven to be very promising in terms of accuracy by providing more realistic results. The MC methods allow simulating the transport of ionizing radiation in complex configurations, such as detectors, Linacs, phantoms, etc. The MC simulations for applications in radiotherapy are divided into two parts. In the first, the simulation of the production of the radiation beam by the Linac is performed and then the phase space is generated. The phase space contains information such as energy, position, direction, etc. of millions of particles (photons, electrons, positrons). In the second part, the simulation of the transport of particles (sampled from the phase space) in certain configurations of irradiation field is performed to assess the dose distribution in the patient (or phantom). The objective of this work is to create a computational model of a 6 MeV Linac using the MC code Geant4 for generation of phase spaces. From the phase space, information was obtained to assess beam quality (photon and electron spectra and two-dimensional distribution of energy) and to analyze the physical processes involved in producing the beam. (author)
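A phase space in this sense is just a large table of particle records. A minimal in-memory sketch using a NumPy structured array; the field layout is illustrative (not the IAEA phase-space file format), and the sampled spectrum is a random placeholder rather than Geant4 output:

```python
import numpy as np

# Each record stores a particle crossing the scoring plane: type,
# energy, position, and direction cosines (field names illustrative).
phsp_dtype = np.dtype([
    ("ptype", "i1"),                          # 0=photon, 1=electron
    ("energy", "f4"),                         # MeV
    ("x", "f4"), ("y", "f4"),                 # cm, position on the plane
    ("u", "f4"), ("v", "f4"), ("w", "f4"),    # direction cosines
])

rng = np.random.default_rng(0)
n = 1000
phsp = np.zeros(n, dtype=phsp_dtype)
phsp["ptype"] = rng.choice([0, 1], size=n, p=[0.95, 0.05])
phsp["energy"] = rng.uniform(0.0, 6.0, size=n)   # placeholder 6 MeV spectrum
theta = rng.uniform(0, 0.05, size=n)             # near-forward directions
phi = rng.uniform(0, 2 * np.pi, size=n)
phsp["u"] = np.sin(theta) * np.cos(phi)
phsp["v"] = np.sin(theta) * np.sin(phi)
phsp["w"] = np.cos(theta)

# Photon energy spectrum, of the kind used for beam-quality checks
photons = phsp[phsp["ptype"] == 0]
hist, edges = np.histogram(photons["energy"], bins=12, range=(0, 6))
```

The second simulation stage described above would then read such records back and transport each particle through the irradiation geometry.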
Directory of Open Access Journals (Sweden)
Xingchu Gong
A design space approach was applied to optimize the extraction process of Danhong injection. Dry matter yield and the yields of five active ingredients were selected as process critical quality attributes (CQAs). Extraction number, extraction time, and the mass ratio of water and material (W/M ratio) were selected as critical process parameters (CPPs). Quadratic models between CPPs and CQAs were developed with determination coefficients higher than 0.94. Active ingredient yields and dry matter yield increased as the extraction number increased. Monte Carlo simulation with models established using a stepwise regression method was applied to calculate the probability-based design space. Step length showed little effect on the calculation results. A higher simulation number led to results with lower dispersion. Data generated in a Monte Carlo simulation following a normal distribution led to a design space with a smaller size. An optimized calculation condition was obtained with 10,000 simulation times, 0.01 calculation step length, a significance level value of 0.35 for adding or removing terms in a stepwise regression, and a normal distribution for data generation. The design space with a probability higher than 0.95 to attain the CQA criteria was calculated and verified successfully. Normal operating ranges of 8.2-10 g/g of W/M ratio, 1.25-1.63 h of extraction time, and two extractions were recommended. The optimized calculation conditions can conveniently be used in design space development for other pharmaceutical processes.
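The probability-based design space calculation can be sketched as follows: evaluate a fitted quadratic model over a grid of CPP settings, perturb each prediction with model error drawn from a normal distribution, and retain the settings whose probability of meeting the CQA criterion exceeds 0.95. The model coefficients, error level, and acceptance limit below are made-up illustrations, not the paper's fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)

def cqa_yield(wm_ratio, time_h):
    """Assumed quadratic model: a yield as a function of W/M ratio and
    extraction time (coefficients invented for illustration)."""
    return (20 + 3.0 * wm_ratio + 8.0 * time_h
            - 0.12 * wm_ratio**2 - 2.0 * time_h**2)

def attain_probability(wm_ratio, time_h, sigma=1.5, n_sim=10_000, limit=40.0):
    """Monte Carlo: add normally distributed model error and count how
    often the predicted yield meets the criterion."""
    pred = cqa_yield(wm_ratio, time_h) + rng.normal(0.0, sigma, n_sim)
    return float(np.mean(pred >= limit))

# Scan the operating region on a grid; keep points with P >= 0.95
grid = [(w, t) for w in np.arange(6, 12.1, 0.5)
               for t in np.arange(1.0, 2.01, 0.25)]
design_space = [(w, t) for w, t in grid if attain_probability(w, t) >= 0.95]
print(len(design_space), "of", len(grid), "grid points inside the design space")
```

The grid step plays the role of the paper's "calculation step length", and n_sim its "simulation times"; finer steps and more simulations trade computation for smoother design-space boundaries.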
A Coordinated Initialization Process for the Distributed Space Exploration Simulation (DSES)
Phillips, Robert; Dexter, Dan; Hasan, David; Crues, Edwin Z.
2007-01-01
This document describes the federate initialization process that was developed at the NASA Johnson Space Center with the H-II Transfer Vehicle Flight Controller Trainer (HTV FCT) simulations and refined in the Distributed Space Exploration Simulation (DSES). These simulations use the High Level Architecture (HLA) IEEE 1516 to provide the communication and coordination between the distributed parts of the simulation. The purpose of the paper is to describe a generic initialization sequence that can be used to create a federate that can: 1. Properly initialize all HLA objects, object instances, interactions, and time management 2. Check for the presence of all federates 3. Coordinate startup with other federates 4. Robustly initialize and share initial object instance data with other federates.
An IBM PC-based math model for space station solar array simulation
Emanuel, E. M.
1986-01-01
This report discusses and documents the design, development, and verification of a microcomputer-based solar cell math model for simulating the Space Station's solar array Initial Operational Capability (IOC) reference configuration. The array model is developed utilizing a linear solar cell dc math model requiring only five input parameters: short circuit current, open circuit voltage, maximum power voltage, maximum power current, and orbit inclination. The accuracy of this model is investigated using actual solar array on orbit electrical data derived from the Solar Array Flight Experiment/Dynamic Augmentation Experiment (SAFE/DAE), conducted during the STS-41D mission. This simulator provides real-time simulated performance data during the steady state portion of the Space Station orbit (i.e., array fully exposed to sunlight). Eclipse to sunlight transients and shadowing effects are not included in the analysis, but are discussed briefly. Integrating the Solar Array Simulator (SAS) into the Power Management and Distribution (PMAD) subsystem is also discussed.
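One plausible reading of a "linear solar cell dc math model" built from these inputs is a piecewise-linear I-V curve: one segment from the short-circuit point to the maximum-power point, a second segment down to open circuit. The cell values below are illustrative, and this is a sketch of the idea, not the report's exact formulation:

```python
import numpy as np

# Illustrative cell parameters (assumed values, not Space Station data)
ISC, VOC = 2.7, 0.60   # short-circuit current (A), open-circuit voltage (V)
VMP, IMP = 0.50, 2.5   # maximum-power point (V, A)

def cell_current(v):
    """Piecewise-linear I-V curve: one line from (0, ISC) to the
    maximum-power point, a second line down to (VOC, 0)."""
    v = np.asarray(v, dtype=float)
    seg1 = ISC + (IMP - ISC) / VMP * v
    seg2 = IMP * (VOC - v) / (VOC - VMP)
    return np.where(v <= VMP, seg1, np.clip(seg2, 0.0, None))

v = np.linspace(0.0, VOC, 7)
p = v * cell_current(v)
print(float(p.max()))  # peak power lands near VMP * IMP
```

Scaling such a cell curve by the series/parallel layout of the array, and adjusting ISC for sun incidence on the given orbit inclination, gives a steady-state array simulator of the kind described.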
International Nuclear Information System (INIS)
Coleman, Brittany; Ostanek, Jason; Heinzel, John
2016-01-01
Highlights: • Finite element analysis to evaluate heat sinks for large format li-ion batteries. • Solid metal heat sink and composite heat sink (metal filler and wax). • Transient simulations show response from rest to steady-state with normal load. • Transient simulations of two different failure modes were considered. • Significance of spacing, material properties, interface quality, and phase change. - Abstract: Thermal management is critical for large-scale, shipboard energy storage systems utilizing lithium-ion batteries. In recent years, there has been growing research in thermal management of lithium-ion battery modules. However, there is little information available on the minimum cell-to-cell spacing limits for indirect, liquid cooled modules when considering heat release during a single cell failure. For this purpose, a generic four-cell module was modeled using finite element analysis to determine the sensitivity of module temperatures to cell spacing. Additionally, the effects of different heat sink materials and interface qualities were investigated. Two materials were considered, a solid aluminum block and a metal/wax composite block. Simulations were run for three different transient load profiles. The first profile simulates sustained high-rate operation, where the system begins at rest and generates heat continuously until it reaches steady state. Then, two failure-mode simulations were conducted to investigate block performance during a slow and a fast exothermic reaction, respectively. Results indicate that composite materials can perform well under normal operation and provide some protection against single cell failure; although, for very compact designs, the amount of wax available to absorb heat is reduced and the effectiveness of the phase change material is diminished. The aluminum block design performed well under all conditions, and showed that heat generated during a failure is quickly dissipated to the coolant, even under the
Large-scale simulations of plastic neural networks on neuromorphic hardware
Directory of Open Access Journals (Sweden)
James Courtney Knight
2016-04-01
SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real-time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 20,000 neurons and 51,200,000 plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, if it is to match the run-time of our SpiNNaker simulation, the supercomputer system uses approximately more power. This suggests that cheaper, more power efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.
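The core trick of an event-based implementation can be shown on a single exponentially decaying trace: rather than updating the state every time-step, the decay between two events is applied in closed form. This generic sketch (time constant and unit increment are arbitrary choices) illustrates the idea; the actual BCPNN state variables and update rules are considerably more involved:

```python
import numpy as np

TAU = 20.0  # ms, assumed trace time constant (illustrative)

def advance(trace, t_last, t_now, spike=False):
    """Decay the trace analytically from t_last to t_now, then add the
    unit spike increment if an event occurred at t_now."""
    trace = trace * np.exp(-(t_now - t_last) / TAU)
    if spike:
        trace += 1.0
    return trace, t_now

# Process a sparse spike train event by event, with no per-time-step work
trace, t = 0.0, 0.0
for t_spike in [5.0, 12.0, 40.0]:
    trace, t = advance(trace, t, t_spike, spike=True)
print(trace)
```

Because the state is only touched when a spike arrives, the cost scales with the number of events rather than with simulated time, which is what makes the approach attractive on memory- and compute-limited hardware.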
International Nuclear Information System (INIS)
Garcia-Vela, A.
2002-01-01
A new quantum-type phase-space distribution is proposed in order to sample initial conditions for classical trajectory simulations. The phase-space distribution is obtained as the modulus of a quantum phase-space state of the system, defined as the direct product of the coordinate and momentum representations of the quantum initial state. The distribution is tested by sampling initial conditions which reproduce the initial state of the Ar-HCl cluster prepared by ultraviolet excitation, and by simulating the photodissociation dynamics by classical trajectories. The results are compared with those of a wave packet calculation, and with a classical simulation using an initial phase-space distribution recently suggested. A better agreement is found between the classical and the quantum predictions with the present phase-space distribution, as compared with the previous one. This improvement is attributed to the fact that the phase-space distribution propagated classically in this work resembles more closely the shape of the wave packet propagated quantum mechanically
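The sampling idea, taking the product of the coordinate- and momentum-space probability densities as a classical phase-space distribution, can be illustrated for the simplest case of a harmonic-oscillator ground state, where both densities are Gaussian (units with ħ = m = ω = 1 assumed; the Ar-HCl calculation in the paper is of course far richer):

```python
import numpy as np

# |psi(x)|^2 and |phi(p)|^2 for the harmonic ground state are both
# Gaussians of variance 1/2 in these units; sample x and p independently.
rng = np.random.default_rng(0)
n_traj = 50_000
sigma_x = np.sqrt(0.5)   # width of the coordinate density
sigma_p = np.sqrt(0.5)   # width of the momentum density

x0 = rng.normal(0.0, sigma_x, n_traj)   # initial positions
p0 = rng.normal(0.0, sigma_p, n_traj)   # initial momenta

# Mean classical energy of the ensemble; for this distribution it
# reproduces the quantum zero-point energy 0.5 (in hbar*omega units).
E = 0.5 * p0**2 + 0.5 * x0**2
print(float(E.mean()))
```

Each (x0, p0) pair then seeds one classical trajectory, and ensemble averages over the trajectories are compared against the quantum wave-packet results.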
Churchfield, Matthew J; Li, Ye; Moriarty, Patrick J
2013-02-28
This paper presents our initial work in performing large-eddy simulations of tidal turbine array flows. First, a horizontally periodic precursor simulation is performed to create turbulent flow data. Then those data are used as inflow into a tidal turbine array two rows deep and infinitely wide. The turbines are modelled using rotating actuator lines, and the finite-volume method is used to solve the governing equations. In studying the wakes created by the turbines, we observed that the vertical shear of the inflow combined with wake rotation causes lateral wake asymmetry. Also, various turbine configurations are simulated, and the total power production relative to isolated turbines is examined. We found that staggering consecutive rows of turbines in the simulated configurations allows the greatest efficiency using the least downstream row spacing. Counter-rotating consecutive downstream turbines in a non-staggered array shows a small benefit. This work has identified areas for improvement. For example, using a larger precursor domain would better capture elongated turbulent structures, and including salinity and temperature equations would account for density stratification and its effect on turbulence. Additionally, the wall shear stress modelling could be improved, and more array configurations could be examined.
Energy Technology Data Exchange (ETDEWEB)
Yuan, Haomin; Solberg, Jerome; Merzari, Elia; Kraus, Adam; Grindeanu, Iulian
2017-10-01
This paper describes a numerical study of flow-induced vibration in a helical coil steam generator experiment conducted at Argonne National Laboratory in the 1980s. In the experiment, a half-scale sector model of a steam generator helical coil tube bank was subjected to still and flowing air and water, and the vibrational characteristics were recorded. The research detailed in this document utilizes the multi-physics simulation toolkit SHARP developed at Argonne National Laboratory, in cooperation with Lawrence Livermore National Laboratory, to simulate the experiment. SHARP uses the spectral element code Nek5000 for fluid dynamics analysis and the finite element code DIABLO for structural analysis. The flow around the coil tubes is modeled in Nek5000 by using a large eddy simulation turbulence model. Transient pressure data on the tube surfaces is sampled and transferred to DIABLO for the structural simulation. The structural response is simulated in DIABLO via an implicit time-marching algorithm and a combination of continuum elements and structural shells. Tube vibration data (acceleration and frequency) are sampled and compared with the experimental data. Currently, only one-way coupling is used, which means that pressure loads from the fluid simulation are transferred to the structural simulation but the resulting structural displacements are not fed back to the fluid simulation
A Novel CPU/GPU Simulation Environment for Large-Scale Biologically-Realistic Neural Modeling
Directory of Open Access Journals (Sweden)
Roger V Hoang
2013-10-01
Computational Neuroscience is an emerging field that provides unique opportunities to study complex brain structures through realistic neural simulations. However, as biological details are added to models, the execution time for the simulation becomes longer. Graphics Processing Units (GPUs) are now being utilized to accelerate simulations due to their ability to perform computations in parallel. As such, they have shown significant improvement in execution time compared to Central Processing Units (CPUs). Most neural simulators utilize either multiple CPUs or a single GPU for better performance, but still show limitations in execution time when biological details are not sacrificed. Therefore, we present a novel CPU/GPU simulation environment for large-scale biological networks, the NeoCortical Simulator version 6 (NCS6). NCS6 is a free, open-source, parallelizable, and scalable simulator, designed to run on clusters of multiple machines, potentially with high performance computing devices in each of them. It has built-in leaky-integrate-and-fire (LIF) and Izhikevich (IZH) neuron models, but users also have the capability to design their own plug-in interface for different neuron types as desired. NCS6 is currently able to simulate one million cells and 100 million synapses in quasi real time by distributing data across these heterogeneous clusters of CPUs and GPUs.
International Nuclear Information System (INIS)
Doroshkevich, A.G.; Kotok, E.V.; Novikov, I.D.; Polyudov, A.N.; Shandarin, S.F.; Sigov, Y.S.
1980-01-01
The results of a numerical experiment are given that describe the non-linear stages of the development of perturbations in gravitating matter density in the expanding Universe. This process simulates the formation of the large-scale structure of the Universe from an initially almost homogeneous medium. In the one- and two-dimensional cases of this numerical experiment, the evolution of a system of 4096 point masses interacting only gravitationally was studied with periodic boundary conditions (simulating infinite space). The initial conditions were chosen according to the theory of the evolution of small perturbations in the expanding Universe. The results of the numerical experiments are systematically compared with the approximate analytic theory. The calculations show that in the case of collisionless particles, as well as in the gas-dynamic case, a cellular structure appears at the non-linear stage for adiabatic perturbations. The greater part of the matter is in thin layers that separate vast regions of low density. In a Robertson-Walker universe the cellular structure exists for a finite time and then fragments into a few compact objects. In the open Universe the cellular structure also exists if the amplitude of the initial perturbations is large enough, but its subsequent disruption is more difficult because of the overly rapid expansion of the Universe: the large-scale structure is frozen. (author)
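The one-dimensional case described above can be illustrated with a toy sheet model: between parallel mass sheets the gravitational force is independent of distance, so each sheet's acceleration depends only on how many sheets lie on either side of it. This is a minimal sketch under that simplification, not the comoving periodic scheme of the paper:

```python
import numpy as np

def accelerations_1d(x, g=1.0):
    """Acceleration of 1-D mass sheets: in one dimension the force between
    two sheets is independent of separation, so sheet i feels
    g * (number of sheets to its right - number to its left)."""
    order = np.argsort(x)
    rank = np.empty_like(order)
    rank[order] = np.arange(len(x))  # position index of each sheet
    n = len(x)
    return g * ((n - 1 - rank) - rank)

def leapfrog(x, v, dt, steps, box=1.0):
    """Kick-drift-kick integration with periodic wrapping of positions."""
    for _ in range(steps):
        v = v + 0.5 * dt * accelerations_1d(x)
        x = (x + dt * v) % box
        v = v + 0.5 * dt * accelerations_1d(x)
    return x, v

# 16 sheets, initially at rest: they collapse toward thin dense layers.
x0 = np.linspace(0.05, 0.95, 16)
v0 = np.zeros(16)
x1, v1 = leapfrog(x0, v0, dt=1e-3, steps=100)
```

Because the pairwise forces are equal and opposite, total momentum is conserved, which is a useful sanity check on any such integrator.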
Space charge and magnet error simulations for the SNS accumulator ring
International Nuclear Information System (INIS)
Beebe-Wang, J.; Fedotov, A.V.; Wei, J.; Machida, S.
2000-01-01
The effects of space charge forces and magnet errors in the beam of the Spallation Neutron Source (SNS) accumulator ring are investigated. In this paper, the focus is on the emittance growth and halo/tail formation in the beam due to space charge with and without magnet errors. The beam properties of different particle distributions resulting from various injection painting schemes are investigated. Different working points in the design of SNS accumulator ring lattice are compared. The simulations in close-to-resonance condition in the presence of space charge and magnet errors are presented. (author)
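Emittance growth of the kind studied here is usually quantified through the statistical (RMS) emittance of the particle distribution. A minimal sketch of that diagnostic, applied to an illustrative uncorrelated Gaussian beam (not SNS data):

```python
import numpy as np

def rms_emittance(x, xp):
    """Statistical (RMS) emittance from particle positions x and
    divergences xp; emittance growth and halo formation show up as an
    increase of this quantity along the ring."""
    dx = x - x.mean()
    dxp = xp - xp.mean()
    return np.sqrt(dx.var() * dxp.var() - np.mean(dx * dxp) ** 2)

# Illustrative uncorrelated Gaussian beam: emittance ~ sigma_x * sigma_xp.
rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, 200_000)
xp = rng.normal(0.0, 2.0, 200_000)
eps = rms_emittance(x, xp)
```

For an uncorrelated beam the emittance reduces to the product of the two RMS widths; the cross term subtracts out any linear correlation introduced, for example, by a mismatched lattice.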
International Nuclear Information System (INIS)
Battaglia, N.; Trac, H.; Cen, R.; Loeb, A.
2013-01-01
We present a new method for modeling inhomogeneous cosmic reionization on large scales. Utilizing high-resolution radiation-hydrodynamic simulations with 2048³ dark matter particles, 2048³ gas cells, and 17 billion adaptive rays in an L = 100 Mpc h⁻¹ box, we show that the density and reionization redshift fields are highly correlated on large scales (≳ 1 Mpc h⁻¹). This correlation can be statistically represented by a scale-dependent linear bias. We construct a parametric function for the bias, which is then used to filter any large-scale density field to derive the corresponding spatially varying reionization redshift field. The parametric model has three free parameters that can be reduced to one free parameter when we fit the two bias parameters to simulation results. We can differentiate degenerate combinations of the bias parameters by combining results for the global ionization histories and correlation length between ionized regions. Unlike previous semi-analytic models, the evolution of the reionization redshift field in our model is directly compared cell by cell against simulations and performs well in all tests. Our model maps the high-resolution, intermediate-volume radiation-hydrodynamic simulations onto lower-resolution, larger-volume N-body simulations (≳ 2 Gpc h⁻¹) in order to make mock observations and theoretical predictions
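The filtering step described above, multiplying each Fourier mode of the density field by a scale-dependent linear bias, can be sketched as follows. The bias form b(k) and its parameter values here are hypothetical stand-ins for the fitted parametric function of the paper:

```python
import numpy as np

def bias_filter(density, box_size, b0=0.6, k0=0.2, alpha=2.0):
    """Map a large-scale density field to a reionization-redshift-like
    field by scaling each Fourier mode with a scale-dependent bias.
    b(k) = b0 / (1 + k/k0)**alpha is an illustrative form only."""
    n = density.shape[0]
    delta_k = np.fft.rfftn(density)
    # Angular wavenumbers on the (real-to-complex) FFT grid.
    k = 2 * np.pi * np.fft.fftfreq(n, d=box_size / n)
    kz = 2 * np.pi * np.fft.rfftfreq(n, d=box_size / n)
    kx, ky, kzz = np.meshgrid(k, k, kz, indexing="ij")
    kmag = np.sqrt(kx**2 + ky**2 + kzz**2)
    b = b0 / (1.0 + kmag / k0) ** alpha
    return np.fft.irfftn(delta_k * b, s=density.shape)

field = np.ones((8, 8, 8))          # trivial uniform test field
out = bias_filter(field, box_size=100.0)
```

For a uniform field only the k = 0 mode is non-zero, so the output is simply the input scaled by b(0) = b0, a convenient check on the normalization.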
Large surface scintillators as base of impact point detectors and their application in Space Weather
Ayuso, Sindulfo; Medina, José; Gómez-Herrero, Raul; José Blanco, Juan; García-Tejedor, Ignacio; García-Población, Oscar; Díaz-Romeral, Gonzalo
2016-04-01
The use of a pile of two 100 cm x 100 cm x 5 cm BC-400 organic scintillators is proposed as a ground-based cosmic-ray detector able to provide directional information on the incident muons. The challenge is to obtain, in real time, the muon impact point on the scintillator and its arrival direction using as few photomultiplier tubes (PMTs) as possible. The instrument is based on the dependence of light attenuation on the distance traversed in each scintillator. For the time being, four photomultiplier tubes gather the light through the lateral faces (100 cm x 5 cm) of the scintillator. Several experiments have already been carried out. The results show that the data contain information about the muon trajectory through the scintillator. This information can be extracted using the pulse heights collected by the PMTs working in coincidence mode. The reliability and accuracy of the results strongly depend on the number of PMTs used and, mainly, on their appropriate geometrical arrangement with regard to the scintillator. In order to determine the optimal position and the minimum number of PMTs required, a Monte Carlo simulation code has been developed. Preliminary experimental and simulation results are presented and the potential of the system for space weather monitoring is discussed.
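The reconstruction principle, inferring the impact point from the attenuation-induced asymmetry of the pulse heights, can be sketched for a single bar read out at both ends. The exponential model and the attenuation length below are illustrative assumptions, not BC-400 specifications:

```python
import math

def pmt_signals(x, length=100.0, lam=50.0, light=1.0):
    """Light reaching the two PMTs at opposite ends of a bar when a muon
    deposits `light` at position x (cm), assuming simple exponential
    attenuation with an illustrative attenuation length lam (cm)."""
    return light * math.exp(-x / lam), light * math.exp(-(length - x) / lam)

def reconstruct_x(s_left, s_right, length=100.0, lam=50.0):
    """Invert the model: the log-ratio of the two pulse heights is
    linear in the impact position x."""
    return 0.5 * (length - lam * math.log(s_left / s_right))

s_left, s_right = pmt_signals(30.0)   # muon hits 30 cm from the left PMT
x_rec = reconstruct_x(s_left, s_right)
```

With more PMTs and two stacked scintillators, the same log-ratio idea generalizes to a 2-D impact point per layer, from which the arrival direction follows.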
Creech, Angus; Früh, Wolf-Gerrit; Maguire, A. Eoghan
2015-05-01
We present here a computational fluid dynamics (CFD) simulation of Lillgrund offshore wind farm, which is located in the Øresund Strait between Sweden and Denmark. The simulation combines a dynamic representation of wind turbines embedded within a large-eddy simulation CFD solver and uses hr-adaptive meshing to increase or decrease mesh resolution where required. This allows the resolution of both large-scale flow structures around the wind farm, and the local flow conditions at individual turbines; consequently, the response of each turbine to local conditions can be modelled, as well as the resulting evolution of the turbine wakes. This paper provides a detailed description of the turbine model which simulates the interaction between the wind, the turbine rotors, and the turbine generators by calculating the forces on the rotor, the body forces on the air, and instantaneous power output. This model was used to investigate a selection of key wind speeds and directions, investigating cases where a row of turbines would be fully aligned with the wind or at specific angles to the wind. Results shown here include presentations of the spin-up of turbines, the observation of eddies moving through the turbine array, meandering turbine wakes, and an extensive wind farm wake several kilometres in length. The key measurement available for cross-validation with operational wind farm data is the power output from the individual turbines, where the effect of unsteady turbine wakes on the performance of downstream turbines was a main point of interest. The results from the simulations were compared to the performance measurements from the real wind farm to provide a firm quantitative validation of this methodology. Having achieved good agreement between the model results and actual wind farm measurements, the potential of the methodology to provide a tool for further investigations of engineering and atmospheric science problems is outlined.
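The rotor forces and power computed by such a turbine model rest on actuator-disc momentum theory, which can be sketched as follows. The rotor radius and induction factor are illustrative values, not Lillgrund turbine data:

```python
import math

def actuator_disc(u_inf, a, rho=1.225, radius=46.5):
    """One-dimensional actuator-disc estimate of rotor thrust (N) and
    power (W) from free-stream speed u_inf (m/s) and axial induction a."""
    area = math.pi * radius ** 2
    ct = 4.0 * a * (1.0 - a)          # thrust coefficient
    cp = 4.0 * a * (1.0 - a) ** 2     # power coefficient (Betz limit 16/27)
    thrust = 0.5 * rho * area * u_inf ** 2 * ct
    power = 0.5 * rho * area * u_inf ** 3 * cp
    return thrust, power

# a = 1/3 is the Betz-optimal induction factor.
thrust, power = actuator_disc(u_inf=8.0, a=1.0 / 3.0)
```

In an embedded LES model the thrust appears as a body force on the air, which is what generates the wake deficits that downstream turbines then respond to.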
WENESSA, Wide Eye-Narrow Eye Space Simulation for Situational Awareness
Albarait, O.; Payne, D. M.; LeVan, P. D.; Luu, K. K.; Spillar, E.; Freiwald, W.; Hamada, K.; Houchard, J.
In an effort to achieve timelier indications of anomalous object behaviors in geosynchronous earth orbit, a Planning Capability Concept (PCC) for a “Wide Eye-Narrow Eye” (WE-NE) telescope network has been established. The PCC addresses the problem of providing continuous and operationally robust, layered and cost-effective, Space Situational Awareness (SSA) that is focused on monitoring deep space for anomalous behaviors. It does this by first detecting the anomalies with wide field of regard systems, and then providing reliable handovers for detailed observational follow-up by another optical asset. WENESSA will explore the added value of such a system to the existing Space Surveillance Network (SSN). The study will assess and quantify the degree to which the PCC completely fulfills, or improves or augments, these deep space knowledge deficiencies relative to current operational systems. In order to improve organic simulation capabilities, we will explore options for the federation of diverse community simulation approaches, while evaluating the efficiencies offered by a network of small and larger aperture, ground-based telescopes. Existing Space Modeling and Simulation (M&S) tools designed for evaluating WENESSA-like problems will be taken into consideration as we proceed in defining and developing the tools needed to perform this study, leading to the creation of a unified Space M&S environment for the rapid assessment of new capabilities. The primary goal of this effort is to perform a utility assessment of the WE-NE concept. The assessment will explore the mission utility of various WE-NE concepts in discovering deep space anomalies in concert with the SSN. The secondary goal is to generate an enduring modeling and simulation environment to explore the utility of future proposed concepts and supporting technologies. Ultimately, our validated simulation framework would support the inclusion of other ground- and space-based SSA assets through integrated
Large-scale agent-based social simulation : A study on epidemic prediction and control
Zhang, M.
2016-01-01
Large-scale agent-based social simulation is gradually proving to be a versatile methodological approach for studying human societies, which could make contributions from policy making in social science, to distributed artificial intelligence and agent technology in computer science, and to theory
Simulations of muon-induced neutron flux at large depths underground
International Nuclear Information System (INIS)
Kudryavtsev, V.A.; Spooner, N.J.C.; McMillan, J.E.
2003-01-01
The production of neutrons by cosmic-ray muons at large depths underground is discussed. The most recent versions of the muon propagation code MUSIC and the particle transport code FLUKA are used to evaluate muon and neutron fluxes. The results of simulations are compared with experimental data
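The depth dependence of the residual muon flux that drives underground neutron production is commonly described by a two-exponential depth-intensity relation. A sketch with illustrative fit constants (not values taken from this paper):

```python
import math

def muon_intensity(depth_kmwe, i0=67.97e-6, i1=2.071e-6,
                   l0=0.285, l1=0.698):
    """Vertical muon intensity (cm^-2 s^-1 sr^-1) versus depth in
    km water-equivalent, using the common two-exponential
    depth-intensity parameterization with illustrative constants."""
    return (i0 * math.exp(-depth_kmwe / l0)
            + i1 * math.exp(-depth_kmwe / l1))

# The flux falls steeply: a few km w.e. suppresses muons by orders
# of magnitude, which is why deep sites matter for rare-event searches.
shallow, deep = muon_intensity(1.0), muon_intensity(6.0)
```

Codes such as MUSIC propagate muons through rock to obtain this spectrum and flux; FLUKA then transports the muons and their secondaries to yield the neutron production rate.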
Comparison of Large Eddy Simulations of a convective boundary layer with wind LIDAR measurements
DEFF Research Database (Denmark)
Pedersen, Jesper Grønnegaard; Kelly, Mark C.; Gryning, Sven-Erik
2012-01-01
Vertical profiles of the horizontal wind speed and of the standard deviation of vertical wind speed from Large Eddy Simulations of a convective atmospheric boundary layer are compared to wind LIDAR measurements up to 1400 m. Fair agreement regarding both types of profiles is observed only when...
Synthetic atmospheric turbulence and wind shear in large eddy simulations of wind turbine wakes
DEFF Research Database (Denmark)
Keck, Rolf-Erik; Mikkelsen, Robert Flemming; Troldborg, Niels
2014-01-01
, superimposed on top of a mean deterministic shear layer consistent with that used in the IEC standard for wind turbine load calculations. First, the method is evaluated by running a series of large-eddy simulations in an empty domain, where the imposed turbulence and wind shear is allowed to reach a fully...
Large-scale tropospheric transport in the Chemistry-Climate Model Initiative (CCMI) simulations
Orbe, Clara; Yang, Huang; Waugh, Darryn W.; Zeng, Guang; Morgenstern, Olaf; Kinnison, Douglas E.; Lamarque, Jean-Francois; Tilmes, Simone; Plummer, David A.; Scinocca, John F.; Josse, Beatrice; Marecal, Virginie; Jöckel, Patrick; Oman, Luke D.; Strahan, Susan E.; Deushi, Makoto; Tanaka, Taichu Y.; Yoshida, Kohei; Akiyoshi, Hideharu; Yamashita, Yousuke; Stenke, Andreas; Revell, Laura; Sukhodolov, Timofei; Rozanov, Eugene; Pitari, Giovanni; Visioni, Daniele; Stone, Kane A.; Schofield, Robyn; Banerjee, Antara
2018-05-01
Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry-Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.
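The mean-age diagnostic behind these transport timescales can be illustrated with an idealized, linearly growing inert tracer, for which the mean age reduces to the time lag between the source-region and remote time series. A minimal sketch with synthetic data (the lag-fitting approach and numbers here are illustrative, not the paper's method):

```python
import numpy as np

def mean_age_from_lag(t, chi_source, chi_remote, max_lag=5.0, dlag=0.01):
    """Best-fit time lag (years) between a source-region tracer series
    and a remote one; for an inert, linearly growing tracer this lag
    equals the mean transport age."""
    lags = np.arange(0.0, max_lag, dlag)
    errs = [np.mean((np.interp(t - lag, t, chi_source) - chi_remote) ** 2)
            for lag in lags]
    return lags[int(np.argmin(errs))]

t = np.linspace(0.0, 20.0, 2001)   # years
chi_src = 1.0 * t                  # linearly growing tracer at the source
chi_arctic = 1.0 * (t - 2.1)       # remote series, lagged by 2.1 years
age = mean_age_from_lag(t, chi_src, chi_arctic)
```

A 40 % spread in this quantity between models, as reported above, translates directly into different arrival times of midlatitude pollution at the Arctic.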
Large-scale tropospheric transport in the Chemistry–Climate Model Initiative (CCMI) simulations
Directory of Open Access Journals (Sweden)
C. Orbe
2018-05-01
Full Text Available Understanding and modeling the large-scale transport of trace gases and aerosols is important for interpreting past (and projecting future) changes in atmospheric composition. Here we show that there are large differences in the global-scale atmospheric transport properties among the models participating in the IGAC SPARC Chemistry–Climate Model Initiative (CCMI). Specifically, we find up to 40 % differences in the transport timescales connecting the Northern Hemisphere (NH) midlatitude surface to the Arctic and to Southern Hemisphere high latitudes, where the mean age ranges between 1.7 and 2.6 years. We show that these differences are related to large differences in vertical transport among the simulations, in particular to differences in parameterized convection over the oceans. While stronger convection over NH midlatitudes is associated with slower transport to the Arctic, stronger convection in the tropics and subtropics is associated with faster interhemispheric transport. We also show that the differences among simulations constrained with fields derived from the same reanalysis products are as large as (and in some cases larger than) the differences among free-running simulations, most likely due to larger differences in parameterized convection. Our results indicate that care must be taken when using simulations constrained with analyzed winds to interpret the influence of meteorology on tropospheric composition.
Wind Energy-Related Atmospheric Boundary Layer Large-Eddy Simulation Using OpenFOAM: Preprint
Energy Technology Data Exchange (ETDEWEB)
Churchfield, M.J.; Vijayakumar, G.; Brasseur, J.G.; Moriarty, P.J.
2010-08-01
This paper develops and evaluates the performance of a large-eddy simulation (LES) solver in computing the atmospheric boundary layer (ABL) over flat terrain under a variety of stability conditions, ranging from shear driven (neutral stratification) to moderately convective (unstable stratification).
Modeling and analysis of large-eddy simulations of particle-laden turbulent boundary layer flows
Rahman, Mustafa M.
2017-01-05
We describe a framework for the large-eddy simulation of solid particles suspended and transported within an incompressible turbulent boundary layer (TBL). For the fluid phase, the large-eddy simulation (LES) of the incompressible turbulent boundary layer employs a stretched spiral vortex subgrid-scale model and a virtual wall model similar to the work of Cheng, Pullin & Samtaney (J. Fluid Mech., 2015). This LES model is virtually parameter free and involves no active filtering of the computed velocity field. Furthermore, a recycling method to generate turbulent inflow is implemented. For the particle phase, the direct quadrature method of moments (DQMOM) is chosen, in which the weights and abscissas of the quadrature approximation are tracked directly rather than the moments themselves. The numerical method in this framework is based on a fractional-step method with an energy-conservative fourth-order finite difference scheme on a staggered mesh. This code is parallelized based on the standard message passing interface (MPI) protocol and is designed for distributed-memory machines. It is proposed to utilize this framework to examine transport of particles in very large-scale simulations. The solver is validated using the well-known Taylor–Green vortex case. A large-scale sandstorm case is simulated and the altitude variations of number density along with its fluctuations are quantified.
Hernandez Perez, F.E.
2011-01-01
Hydrogen (H2) enrichment of hydrocarbon fuels in lean premixed systems is desirable since it can lead to a progressive reduction in greenhouse-gas emissions, while paving the way towards pure hydrogen combustion. In recent decades, large-eddy simulation (LES) has emerged as a promising tool to
Vreman, A.W.; Oijen, van J.A.; Goey, de L.P.H.; Bastiaans, R.J.M.
2009-01-01
Large-eddy simulation (LES) of turbulent combustion with premixed flamelets is investigated in this paper. The approach solves the filtered Navier-Stokes equations supplemented with two transport equations, one for the mixture fraction and another for a progress variable. The LES premixed flamelet
Large-eddy simulation with accurate implicit subgrid-scale diffusion
B. Koren (Barry); C. Beets
1996-01-01
A method for large-eddy simulation is presented that does not use an explicit subgrid-scale diffusion term. Subgrid-scale effects are modelled implicitly through an appropriate monotone (in the sense of Spekreijse 1987) discretization method for the advective terms. Special attention is
Investigation of wake interaction using full-scale lidar measurements and large eddy simulation
DEFF Research Database (Denmark)
Machefaux, Ewan; Larsen, Gunner Chr.; Troldborg, Niels
2016-01-01
dynamics flow solver, using large eddy simulation and fully turbulent inflow. The rotors are modelled using the actuator disc technique. A mutual validation of the computational fluid dynamics model with the measurements is conducted for a selected dataset, where wake interaction occurs. This validation...
Large shear deformation of particle gels studied by Brownian Dynamics simulations
Rzepiela, A.A.; Opheusden, van J.H.J.; Vliet, van T.
2004-01-01
Brownian Dynamics (BD) simulations have been performed to study structure and rheology of particle gels under large shear deformation. The model incorporates soft spherical particles, and reversible flexible bond formation. Two different methods of shear deformation are discussed, namely affine and
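The affine shear method mentioned above displaces every particle in proportion to its position along the gradient direction before the Brownian update. A minimal sketch (simple x-z shear with plain periodic wrapping rather than full Lees-Edwards boundaries, and no bond forces):

```python
import numpy as np

rng = np.random.default_rng(0)

def bd_step_affine(pos, box, gamma_dot, dt, d0=1.0):
    """One Brownian-dynamics step under affine simple shear: each
    particle is displaced in x proportionally to its z coordinate
    (the imposed velocity gradient), then receives an uncorrelated
    Brownian kick with diffusion coefficient d0."""
    pos = pos.copy()
    pos[:, 0] += gamma_dot * dt * pos[:, 2]                    # affine shear
    pos += rng.normal(0.0, np.sqrt(2.0 * d0 * dt), pos.shape)  # Brownian kick
    return pos % box                                           # periodic wrap

box = 10.0
pos0 = rng.uniform(0.0, box, (100, 3))
pos1 = bd_step_affine(pos0, box, gamma_dot=0.5, dt=1e-3)
```

In a non-affine scheme, by contrast, only the boundaries move and the deformation propagates into the gel through the interparticle bonds, which is why the two methods can probe different rheological responses.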