WorldWideScience

Sample records for spacecraft source modeling

  1. The Solar Energetic Particle Event of 2010 August 14: Connectivity with the Solar Source Inferred from Multiple Spacecraft Observations and Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Lario, D.; Kwon, R.-Y.; Raouafi, N. E. [The Johns Hopkins University, Applied Physics Laboratory, 11100 Johns Hopkins Road, Laurel, MD 20723 (United States); Richardson, I. G.; Thompson, B. J.; Rosenvinge, T. T. von; Mays, M. L.; Mäkelä, P. A.; Xie, H.; Thakur, N. [NASA Goddard Space Flight Center, Greenbelt, MD 20771 (United States); Bain, H. M. [Space Sciences Laboratory, UC Berkeley, 7 Gauss Way, Berkeley, CA 94720-7450 (United States); Zhang, M.; Zhao, L. [Department of Physics and Space Sciences, Florida Institute of Technology, Melbourne, FL (United States); Cane, H. V. [Department of Mathematics and Physics, University of Tasmania, Hobart (Australia); Papaioannou, A. [Institute for Astronomy, Astrophysics, Space Applications and Remote Sensing, National Observatory of Athens, GR-15 236 Penteli (Greece); Riley, P., E-mail: david.lario@jhuapl.edu [Predictive Science, 9990 Mesa Rim Road, Suite 170, San Diego, CA 92121 (United States)

    2017-03-20

    We analyze one of the first solar energetic particle (SEP) events of solar cycle 24 observed at widely separated spacecraft in order to assess the reliability of models currently used to determine the connectivity between the sources of SEPs at the Sun and spacecraft in the inner heliosphere. This SEP event was observed on 2010 August 14 by near-Earth spacecraft, STEREO-A (∼80° west of Earth) and STEREO-B (∼72° east of Earth). In contrast to near-Earth spacecraft, the footpoints of the nominal magnetic field lines connecting STEREO-A and STEREO-B with the Sun were separated from the region where the parent fast halo coronal mass ejection (CME) originated by ∼88° and ∼47° in longitude, respectively. We discuss the properties of the phenomena associated with this solar eruption. Extreme ultraviolet and white-light images are used to specify the extent of the associated CME-driven coronal shock. We then assess whether the SEPs observed at the three heliospheric locations were accelerated by this shock or whether transport mechanisms in the corona and/or interplanetary space provide an alternative explanation for the arrival of particles at the poorly connected spacecraft. A possible scenario consistent with the observations indicates that the observation of SEPs at STEREO-B and near Earth resulted from particle injection by the CME shock onto the field lines connecting to these spacecraft, whereas SEPs reached STEREO-A mostly via cross-field diffusive transport processes. The successes, limitations, and uncertainties of the methods used to resolve the connection between the acceleration sites of SEPs and the spacecraft are evaluated.
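
    As a rough illustration of the "nominal magnetic field line" connectivity discussed above, the sketch below computes the textbook Parker-spiral footpoint offset for an observer, assuming a constant radial solar wind speed and rigid solar rotation; the formula and numbers are generic assumptions, not the coronal/heliospheric modeling actually used in the paper.

```python
import numpy as np

def parker_footpoint_offset_deg(r_au, v_sw_kms):
    """Longitudinal offset between a spacecraft and its nominal Parker-spiral
    footpoint at the Sun, assuming a constant radial solar wind speed and rigid
    solar rotation (sidereal period ~25.4 days). The footpoint lies westward of
    the observer by this angle; the sign convention depends on the longitude
    system used. This is only the textbook estimate, not the modeling applied
    in the paper.
    """
    omega_sun = 2.0 * np.pi / (25.38 * 86400.0)   # solar rotation rate [rad/s]
    r_m = r_au * 1.496e11                          # heliocentric distance [m]
    v_m = v_sw_kms * 1.0e3                         # solar wind speed [m/s]
    return np.degrees(omega_sun * r_m / v_m)

# Example: a 1 AU observer in a 400 km/s wind maps back ~60 degrees in longitude.
print(f"{parker_footpoint_offset_deg(1.0, 400.0):.1f} deg")
```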

  2. Spacecraft Internal Acoustic Environment Modeling

    Science.gov (United States)

    Chu, Shao-Sheng R.; Allen, Christopher S.

    2010-01-01

    Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. This paper describes the implementation of acoustic modeling for design purposes by incrementally increasing model fidelity and validating the accuracy of the model while predicting the noise of sources under various conditions. During FY 07, a simple-geometry Statistical Energy Analysis (SEA) model was developed and validated using a physical mockup and acoustic measurements. A process for modeling the effects of absorptive wall treatments and the resulting reverberation environment was developed. During FY 08, a model with more complex and representative geometry of the Orion Crew Module (CM) interior was built, and noise predictions based on input noise sources were made. A corresponding physical mockup was also built. Measurements were made inside this mockup and compared with the model, showing excellent agreement. During FY 09, the fidelity of the mockup and the corresponding model were increased incrementally by including a simple ventilation system. The airborne noise contribution of the fans was measured using a sound intensity technique, since the sound power levels were not known beforehand. This is opposed to earlier studies in which Reference Sound Sources (RSS) with known sound power levels were used. Comparisons of the modeling results with the measurements in the mockup showed excellent agreement. During FY 10, the fidelity of the mockup and the model were further increased by including an ECLSS (Environmental Control and Life Support System) wall, associated closeout panels, and the gap between the ECLSS wall and the mockup wall. The effects of sealing the gap and adding sound-absorptive treatment to the ECLSS wall were also modeled and validated.

  3. Spacecraft Internal Acoustic Environment Modeling

    Science.gov (United States)

    Chu, Shao-Sheng R.; Allen, Christopher S.

    2009-01-01

    Acoustic modeling can be used to identify key noise sources, determine/analyze sub-allocated requirements, keep track of the accumulation of minor noise sources, and predict vehicle noise levels at various stages in vehicle development, first with estimates of noise sources, later with experimental data. In FY09, the physical mockup developed in FY08, with interior geometric shape similar to the Orion CM (Crew Module) IML (Interior Mold Line), was used to validate SEA (Statistical Energy Analysis) acoustic model development with realistic ventilation fan sources. The sound power levels of these sources were unknown a priori, as opposed to previous studies in which an RSS (Reference Sound Source) with known sound power level was used. The modeling results were evaluated based on comparisons to measurements of sound pressure levels over a wide frequency range, including the frequency range where SEA gives good results. Sound intensity measurement was performed over a rectangular-shaped grid system enclosing the ventilation fan source. Sound intensities were measured at the top, front, back, right, and left surfaces of the grid system. Sound intensity at the bottom surface was not measured, but sound blocking material was placed under the bottom surface to reflect most of the incident sound energy back to the remaining measured surfaces. Integrating the measured sound intensities over the measured surfaces yields the estimated sound power of the source. The reverberation time T60 of the mockup interior had been modified to match reverberation levels of the ISS US Lab interior for the speech frequency bands, i.e., 0.5, 1, 2, and 4 kHz, by attaching appropriately sized Thinsulate sound absorption material to the interior wall of the mockup. Sound absorption of Thinsulate was modeled using three methods: the Sabine equation with the measured mockup interior reverberation time T60, a layup model based on past impedance tube testing, and a layup model plus air absorption correction. The evaluation/validation was
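
    The two measurement steps described above, integrating sound intensity over the grid surfaces to estimate the source sound power and sizing absorption with the Sabine equation, can be sketched as follows; all areas, intensities, volumes, and target reverberation times here are illustrative placeholders, not the mockup values.

```python
import math

# Hypothetical average normal sound intensity [W/m^2] and area [m^2] for each
# face of the rectangular measurement grid; the bottom face was blocked with
# reflective material rather than measured, so it is omitted here.
faces = {
    "top":   (2.1e-4, 0.36),
    "front": (1.5e-4, 0.24),
    "back":  (1.4e-4, 0.24),
    "left":  (1.2e-4, 0.18),
    "right": (1.3e-4, 0.18),
}

sound_power = sum(I * A for I, A in faces.values())    # W
Lw = 10.0 * math.log10(sound_power / 1.0e-12)          # dB re 1 pW
print(f"Estimated sound power level: {Lw:.1f} dB")

# Sabine estimate of the absorption needed to reach a target reverberation time:
# T60 = 0.161 * V / A_sab, with V in m^3 and A_sab in m^2 sabins.
V = 17.0           # mockup interior volume (assumed)
T60_target = 0.35  # target reverberation time in a speech band (assumed)
A_sab = 0.161 * V / T60_target
print(f"Required absorption: {A_sab:.2f} m^2 sabins")
```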

  4. Modeling SMAP Spacecraft Attitude Control Estimation Error Using Signal Generation Model

    Science.gov (United States)

    Rizvi, Farheen

    2016-01-01

    Two ground simulation software packages are used to model the SMAP spacecraft dynamics. The CAST software uses a higher fidelity model than the ADAMS software. The ADAMS software models the spacecraft plant, controller, and actuator models, and assumes a perfect sensor and estimator model. In this simulation study, the spacecraft dynamics results from the ADAMS software are used since the CAST software is unavailable. The main source of spacecraft dynamics error in the higher fidelity CAST software is the estimation error. A signal generation model is developed to capture the effect of this estimation error in the overall spacecraft dynamics. Then, this signal generation model is included in the ADAMS software spacecraft dynamics estimate such that the results are similar to CAST. This signal generation model has characteristics (mean, variance, and power spectral density) similar to those of the true CAST estimation error. In this way, the ADAMS software can still be used while capturing the higher fidelity spacecraft dynamics modeling from the CAST software.
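
    A minimal sketch of what such a signal generation model can look like, assuming the estimation error is represented as low-pass-filtered white noise rescaled to a target mean and variance; the corner frequency and statistics below are placeholders rather than SMAP values.

```python
import numpy as np
from scipy import signal

def estimation_error_signal(n, dt, mean, variance, corner_hz, seed=0):
    """Generate a surrogate attitude-estimation-error time series: white noise
    shaped by a first-order low-pass filter (corner frequency chosen to mimic
    the PSD of the reference error), then rescaled to the target mean and
    variance. All parameters here are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    b, a = signal.butter(1, corner_hz, btype="low", fs=1.0 / dt)  # shaping filter
    shaped = signal.lfilter(b, a, white)
    shaped = (shaped - shaped.mean()) / shaped.std()              # normalize
    return mean + np.sqrt(variance) * shaped

err = estimation_error_signal(n=10000, dt=0.1, mean=0.0, variance=1e-8, corner_hz=0.05)
f, pxx = signal.welch(err, fs=10.0)   # check the resulting power spectral density
```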

  5. Computational Model for Spacecraft/Habitat Volume

    Data.gov (United States)

    National Aeronautics and Space Administration — Please note that funding to Dr. Simon Hsiang, a critical co-investigator for the development of the Spacecraft Optimization Layout and Volume (SOLV) model, was...

  6. Parameter Estimation of Spacecraft Fuel Slosh Model

    Science.gov (United States)

    Gangadharan, Sathya; Sudermann, James; Marlowe, Andrea; Njengam, Charles

    2004-01-01

    Fuel slosh in the upper stages of a spinning spacecraft during launch has been a long-standing concern for the success of a space mission. Energy loss through the movement of the liquid fuel in the fuel tank affects the gyroscopic stability of the spacecraft and leads to nutation (wobble), which can cause devastating control issues. The rate at which nutation develops, defined by the Nutation Time Constant (NTC), can be tedious to calculate and largely inaccurate if done during the early stages of spacecraft design. Pure analytical means of predicting the influence of onboard liquids have generally failed. A strong need exists to identify and model the conditions of resonance between nutation motion and liquid modes and to understand the general characteristics of the liquid motion that causes the problem in spinning spacecraft. A 3-D computerized model of the fuel slosh that accounts for any resonant modes found in the experimental testing will allow for increased accuracy in the overall modeling process. Development of a more accurate model of the fuel slosh currently lies in a more generalized 3-D computerized model incorporating masses, springs, and dampers. Parameters describing the model include the inertia tensor of the fuel, spring constants, and damper coefficients. Refining and understanding the effects of these parameters allow for a more accurate simulation of fuel slosh. The current research will focus on developing models of different complexity and estimating the model parameters that will ultimately provide a more realistic prediction of the Nutation Time Constant obtained through simulation.
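
    The parameter-estimation step, fitting the equivalent spring constant and damper coefficient of a mass-spring-damper slosh analog to measured response data, can be sketched as below; the decay data are synthesized with assumed values purely to show the fitting mechanics, not the actual test data.

```python
import numpy as np
from scipy.optimize import curve_fit

def damped_response(t, amp, zeta, wn, phi):
    """Free-decay response of an equivalent spring-mass-damper slosh mode."""
    wd = wn * np.sqrt(1.0 - zeta**2)
    return amp * np.exp(-zeta * wn * t) * np.cos(wd * t + phi)

# "Measured" decay data would come from slosh testing; here it is synthesized
# with known values (hypothetical numbers) purely to demonstrate the fit.
m_slosh = 1.2                                   # slosh mass [kg], assumed known
t = np.linspace(0.0, 20.0, 2000)
x_meas = damped_response(t, 0.01, 0.05, 4.0, 0.0) + 1e-4 * np.random.randn(t.size)

popt, _ = curve_fit(damped_response, t, x_meas, p0=[0.01, 0.1, 3.0, 0.0])
amp, zeta, wn, phi = popt
k_eq = m_slosh * wn**2                          # equivalent spring constant
c_eq = 2.0 * m_slosh * zeta * wn                # equivalent damper coefficient
print(f"k = {k_eq:.3f} N/m, c = {c_eq:.4f} N*s/m")
```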

  7. Modeling the fundamental characteristics and processes of the spacecraft functioning

    Science.gov (United States)

    Bazhenov, V. I.; Osin, M. I.; Zakharov, Y. V.

    1986-01-01

    The fundamental aspects of modeling spacecraft characteristics by computational means are considered. Particular attention is devoted to design studies, the description of the physical appearance of the spacecraft, and simulated modeling of spacecraft systems. The fundamental questions of organizing on-the-ground spacecraft testing and the methods of mathematical modeling are also presented.

  8. Streamlined Modeling for Characterizing Spacecraft Anomalous Behavior

    Science.gov (United States)

    Klem, B.; Swann, D.

    2011-09-01

    Anomalous behavior of on-orbit spacecraft can often be detected using passive, remote sensors which measure electro-optical signatures that vary in time and spectral content. Analysts responsible for assessing spacecraft operational status and detecting detrimental anomalies using non-resolved imaging sensors are often presented with various sensing and identification issues. Modeling and measuring spacecraft self-emission and reflected radiant intensity when the radiation patterns exhibit a time-varying reflective glint superimposed on an underlying diffuse signal contribute to assessment of spacecraft behavior in two ways: (1) providing information on body component orientation and attitude; and (2) detecting changes in surface material properties due to the space environment. Simple convex and cube-shaped spacecraft, designed to operate without protruding solar panel appendages, may require an enhanced level of preflight characterization to support interpretation of the various physical effects observed during on-orbit monitoring. This paper describes selected portions of the signature database generated using streamlined signature modeling and simulations of basic geometry shapes apparent to non-imaging sensors. With this database, summarization of key observable features for such shapes as spheres, cylinders, flat plates, cones, and cubes in specific spectral bands that include the visible, mid-wave, and long-wave infrared provides the analyst with input to the decision process algorithms contained in the overall sensing and identification architectures. The models typically utilize baseline materials such as Kapton, paints, aluminum surface end plates, and radiators, along with solar cell representations covering the cylindrical and side portions of the spacecraft. Multiple space- and ground-based sensors are assumed to be located at key locations to describe the comprehensive multi-viewing aspect scenarios that can result in significant specular reflection

  9. Sources of Sodium in the Lunar Exosphere: Modeling Using Ground-Based Observations of Sodium Emission and Spacecraft Data of the Plasma

    Science.gov (United States)

    Sarantos, Menelaos; Killen, Rosemary M.; Sharma, A. Surjalal; Slavin, James A.

    2009-01-01

    Observations of the equatorial lunar sodium emission are examined to quantify the effect of precipitating ions on source rates for the Moon's exospheric volatile species. Using a model of exospheric sodium transport under lunar gravity forces, the measured emission intensity is normalized to a constant lunar phase angle to minimize the effect of different viewing geometries. Daily averages of the solar Lyman alpha flux and ion flux are used as the input variables for photon-stimulated desorption (PSD) and ion sputtering, respectively, while impact vaporization due to the micrometeoritic influx is assumed constant. Additionally, a proxy term proportional to both the Lyman alpha and to the ion flux is introduced to assess the importance of ion-enhanced diffusion and/or chemical sputtering. The combination of particle transport and constrained regression models demonstrates that, assuming sputtering yields that are typical of protons incident on lunar soils, the primary effect of ion impact on the surface of the Moon is not direct sputtering but rather an enhancement of the PSD efficiency. It is inferred that the ion-induced effects must double the PSD efficiency for flux typical of the solar wind at 1 AU. The enhancement in relative efficiency of PSD due to the bombardment of the lunar surface by the plasma sheet ions during passages through the Earth's magnetotail is shown to be approximately two times higher than when it is due to solar wind ions. This leads to the conclusion that the priming of the surface is more efficiently carried out by the energetic plasma sheet ions.
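
    The constrained regression idea, decomposing the normalized emission into PSD, sputtering, ion-enhanced-PSD, and constant impact-vaporization terms with non-negative coefficients, can be illustrated with the following sketch on synthetic drivers; the functional form and all numbers are assumptions, not the study's actual fit.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
n_days = 120
lya  = 1.0 + 0.1 * rng.standard_normal(n_days)   # normalized Lyman-alpha flux (synthetic)
ions = 1.0 + 0.5 * rng.random(n_days)            # normalized precipitating ion flux (synthetic)
emission = 0.7 * lya + 0.05 * ions + 0.25 * lya * ions + 0.02 * rng.standard_normal(n_days)

# Design matrix: PSD term, sputtering term, ion-enhanced-PSD proxy (Lya*ions),
# and a constant column for the assumed-constant impact-vaporization source.
A = np.column_stack([lya, ions, lya * ions, np.ones(n_days)])
coeffs, resid = nnls(A, emission)                # non-negativity constraint on source terms
print(dict(zip(["PSD", "sputtering", "ion-enhanced PSD", "impact vapor"], coeffs)))
```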

  10. Modeling, Monitoring and Fault Diagnosis of Spacecraft Air Contaminants

    Science.gov (United States)

    Ramirez, W. Fred; Skliar, Mikhail; Narayan, Anand; Morgenthaler, George W.; Smith, Gerald J.

    1998-01-01

    Control of air contaminants is a crucial factor in the safety considerations of crewed space flight. Indoor air quality needs to be closely monitored during long range missions such as a Mars mission, and also on large complex space structures such as the International Space Station. This work mainly pertains to the detection and simulation of air contaminants in the space station, though much of the work is easily extended to buildings, and issues of ventilation systems. Here we propose a method with which to track the presence of contaminants using an accurate physical model, and also develop a robust procedure that would raise alarms when certain tolerance levels are exceeded. A part of this research concerns the modeling of air flow inside a spacecraft, and the consequent dispersal pattern of contaminants. Our objective is to also monitor the contaminants on-line, so we develop a state estimation procedure that makes use of the measurements from a sensor system and determines an optimal estimate of the contamination in the system as a function of time and space. The real-time optimal estimates in turn are used to detect faults in the system and also offer diagnoses as to their sources. This work is concerned with the monitoring of air contaminants aboard future generation spacecraft and seeks to satisfy NASA's requirements as outlined in their Strategic Plan document (Technology Development Requirements, 1996).
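
    A minimal sketch of the on-line estimation idea, with a crude linear zone-exchange model standing in for the paper's physical air-flow model and a standard Kalman filter fusing a single sensor; all rates, noise levels, and the sensor location are illustrative assumptions.

```python
import numpy as np

n = 10                                   # number of well-mixed zones (assumed)
dt, k_exch = 10.0, 0.02                  # time step [s], one-way exchange rate [1/s]

F = np.eye(n)
for i in range(n - 1):                   # first-order transport from zone i to i+1
    F[i, i] -= k_exch * dt
    F[i + 1, i] += k_exch * dt
H = np.zeros((1, n)); H[0, 5] = 1.0      # a single contaminant sensor, in zone 5
Q = 1e-6 * np.eye(n)                     # process noise covariance
R = np.array([[1e-4]])                   # sensor noise covariance

x = np.zeros(n); x[0] = 1.0              # true state: release in zone 0
xh = np.zeros(n); P = np.eye(n)          # estimate and its covariance

rng = np.random.default_rng(0)
for _ in range(200):
    x = F @ x                                        # truth propagation
    z = H @ x + rng.normal(0.0, 1e-2, size=1)        # noisy sensor reading
    xh, P = F @ xh, F @ P @ F.T + Q                  # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)                   # Kalman gain
    xh = xh + K @ (z - H @ xh)                       # measurement update
    P = (np.eye(n) - K @ H) @ P

print("estimated zone concentrations:", np.round(xh, 3))
```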

  11. Overview of SDCM - The Spacecraft Design and Cost Model

    Science.gov (United States)

    Ferebee, Melvin J.; Farmer, Jeffery T.; Andersen, Gregory C.; Flamm, Jeffery D.; Badi, Deborah M.

    1988-01-01

    The Spacecraft Design and Cost Model (SDCM) is a computer-aided design and analysis tool for synthesizing spacecraft configurations, integrating their subsystems, and generating information concerning on-orbit servicing and costs. SDCM uses a bottom-up method in which the cost and performance parameters for subsystem components are first calculated; the model then sums the contributions from individual components in order to obtain an estimate of sizes and costs for each candidate configuration within a selected spacecraft system. An optimum spacecraft configuration can then be selected.

  12. A nuclear source term analysis for spacecraft power systems

    International Nuclear Information System (INIS)

    McCulloch, W.H.

    1998-01-01

    All US space missions involving onboard nuclear material must be approved by the Office of the President. To be approved, the mission and the hardware systems must undergo evaluations of the associated nuclear health and safety risk. One part of these evaluations is the characterization of the source terms, i.e., the estimate of the amount, physical form, and location of nuclear material which might be released into the environment in the event of credible accidents. This paper presents a brief overview of the source term analysis by the Interagency Nuclear Safety Review Panel for the NASA Cassini Space Mission launched in October 1997. Included is a description of the Energy Interaction Model, an innovative approach to the analysis of potential releases from high-velocity impacts resulting from launch aborts and reentries.

  13. The first collection of spacecraft-associated microorganisms: a public source for extremotolerant microorganisms from spacecraft assembly clean rooms.

    Science.gov (United States)

    Moissl-Eichinger, Christine; Rettberg, Petra; Pukall, Rüdiger

    2012-11-01

    For several reasons, spacecraft are constructed in so-called clean rooms. Particles could affect the function of spacecraft instruments, and for missions under planetary protection limitations, the biological contamination has to be restricted as much as possible. The proper maintenance of clean rooms includes, for instance, constant control of humidity and temperature, air filtering, and cleaning (disinfection) of the surfaces. The combination of these conditions creates an artificial, extreme biotope for microbial survival specialists: spore formers, autotrophs, and multi-resistant, facultative, or even strictly anaerobic microorganisms have been detected in clean room habitats. Based on a diversity study of European and South American spacecraft assembly clean rooms, the European Space Agency (ESA) has initiated and funded the creation of a public library of microbial isolates. Isolates from three different European clean rooms, as well as from the final assembly and launch facility in Kourou (French Guiana), have been phylogenetically analyzed and were lyophilized for long-term storage at the German Culture Collection facilities in Brunswick, Germany (Leibniz-Institut DSMZ-Deutsche Sammlung von Mikroorganismen und Zellkulturen). The isolates were obtained by either following the standard protocol for the determination of bioburden on, and around, spacecraft or using alternative cultivation strategies. Currently, the database contains 298 bacterial strains. Fifty-nine strains are Gram-negative microorganisms, belonging to the α-, β- and γ-Proteobacteria. Representatives of the Gram-positive phyla Actinobacteria, Bacteroidetes/Chlorobi, and Firmicutes were also added to the collection. Ninety-four isolates (21 different species) of the genus Bacillus were included in the ESA collection. This public collection of extremotolerant microbes, which are adapted to a complicated artificial biotope, provides a wonderful source for industry and research focused on

  14. Modeling and Simulation of Satellite Subsystems for End-to-End Spacecraft Modeling

    National Research Council Canada - National Science Library

    Schum, William K.; Doolittle, Christina M.; Boyarko, George A.

    2006-01-01

    During the past ten years, the Air Force Research Laboratory (AFRL) has been simultaneously developing high-fidelity spacecraft payload models as well as a robust distributed simulation environment for modeling spacecraft subsystems...

  15. 42: An Open-Source Simulation Tool for Study and Design of Spacecraft Attitude Control Systems

    Science.gov (United States)

    Stoneking, Eric

    2018-01-01

    Simulation is an important tool in the analysis and design of spacecraft attitude control systems. The speaker will discuss the simulation tool, called simply 42, that he has developed over the years to support his own work as an engineer in the Attitude Control Systems Engineering Branch at NASA Goddard Space Flight Center. 42 was intended from the outset to be high-fidelity and powerful, but also fast and easy to use. 42 has been publicly available as open source since 2014. The speaker will describe some of 42's models and features, and discuss its applicability to studies ranging from early concept studies through the design cycle, integration, and operations. He will outline 42's architecture and share some thoughts on simulation development as a long-term project.

  16. Modeling and Analysis of Realistic Fire Scenarios in Spacecraft

    Science.gov (United States)

    Brooker, J. E.; Dietrich, D. L.; Gokoglu, S. A.; Urban, D. L.; Ruff, G. A.

    2015-01-01

    An accidental fire inside a spacecraft is an unlikely, but very real, emergency situation that can easily have dire consequences. While much has been learned over the past 25+ years of dedicated research on flame behavior in microgravity, a quantitative understanding of the initiation, spread, detection, and extinguishment of a realistic fire aboard a spacecraft is lacking. Virtually all combustion experiments in microgravity have been small-scale, by necessity (hardware limitations in ground-based facilities and safety concerns in space-based facilities). Large-scale, realistic fire experiments are unlikely for the foreseeable future (unlike in terrestrial situations). Therefore, NASA will have to rely on scale modeling, extrapolation of small-scale experiments, and detailed numerical modeling to provide the data necessary for vehicle and safety system design. This paper presents the results of parallel efforts to better model the initiation, spread, detection, and extinguishment of fires aboard spacecraft. The first is a detailed numerical model using the freely available Fire Dynamics Simulator (FDS). FDS is a CFD code that numerically solves a large eddy simulation form of the Navier-Stokes equations. FDS provides a detailed treatment of the smoke and energy transport from a fire. The simulations provide a wealth of information, but are computationally intensive and not suitable for parametric studies where the detailed treatment of the mass and energy transport is unnecessary. The second path extends a model previously documented at ICES meetings that attempted to predict maximum survivable fires aboard spacecraft. This one-dimensional model simplifies the heat and mass transfer as well as the toxic species production from a fire. These simplifications result in a code that is faster and more suitable for parametric studies (having already been used to help in the hatch design of the Multi-Purpose Crew Vehicle, MPCV).

  17. Model predictive control for spacecraft rendezvous in elliptical orbit

    Science.gov (United States)

    Li, Peng; Zhu, Zheng H.

    2018-05-01

    This paper studies the control of spacecraft rendezvous with attitude stable or spinning targets in an elliptical orbit. The linearized Tschauner-Hempel equation is used to describe the motion of spacecraft and the problem is formulated by model predictive control. The control objective is to maximize control accuracy and smoothness simultaneously to avoid unexpected change or overshoot of trajectory for safe rendezvous. It is achieved by minimizing the weighted summations of control errors and increments. The effects of two sets of horizons (control and predictive horizons) in the model predictive control are examined in terms of fuel consumption, rendezvous time and computational effort. The numerical results show the proposed control strategy is effective.
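
    A minimal sketch of the receding-horizon idea, here using the circular-orbit (Clohessy-Wiltshire) limit of the Tschauner-Hempel equations and an unconstrained batch least-squares solution penalizing state error and control effort (the paper additionally weights control increments); the mean motion, weights, horizon, and initial state are assumptions.

```python
import numpy as np
from scipy.linalg import expm

n_orb = 0.0011                      # target mean motion [rad/s] (assumed, ~LEO)
A = np.array([[0, 0, 1, 0],
              [0, 0, 0, 1],
              [3 * n_orb**2, 0, 0, 2 * n_orb],
              [0, 0, -2 * n_orb, 0]], dtype=float)
B = np.array([[0, 0], [0, 0], [1, 0], [0, 1]], dtype=float)
dt, N = 10.0, 30                    # step [s], prediction horizon

M = expm(np.block([[A, B], [np.zeros((2, 6))]]) * dt)   # exact ZOH discretization
Ad, Bd = M[:4, :4], M[:4, 4:]

# Prediction matrices: stacked states X = Phi x0 + Gamma U over the horizon.
Phi = np.vstack([np.linalg.matrix_power(Ad, k + 1) for k in range(N)])
Gamma = np.zeros((4 * N, 2 * N))
for i in range(N):
    for j in range(i + 1):
        Gamma[4*i:4*i+4, 2*j:2*j+2] = np.linalg.matrix_power(Ad, i - j) @ Bd

W = np.diag(np.tile([1.0, 1.0, 0.1, 0.1], N))  # state-error weights (assumed)
rw = 1e4                                        # control-effort weight (assumed)
x0 = np.array([-500.0, 200.0, 0.0, 0.0])        # initial relative state [m, m/s]
# Unconstrained minimizer of (Phi x0 + Gamma U)' W (Phi x0 + Gamma U) + rw U'U
AtA = Gamma.T @ W @ Gamma + rw * np.eye(2 * N)
U = np.linalg.solve(AtA, -Gamma.T @ W @ (Phi @ x0))
u_apply = U[:2]                    # receding horizon: apply only the first input
print("first control acceleration [m/s^2]:", u_apply)
```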

  18. Mechanical Slosh Models for Rocket-Propelled Spacecraft

    Science.gov (United States)

    Jang, Jiann-Woei; Alaniz, Abram; Yang, Lee; Powers, Joseph; Hall, Charles

    2013-01-01

    Several analytical mechanical slosh models for a cylindrical tank with a flat bottom are reviewed. Even though spacecraft use cylinder-shaped tanks, most of those tanks have elliptical domes. To extend the application of the analytical models to cylindrical tanks with elliptical domes, modified slosh parameter models are proposed in this report by mapping an elliptical-dome cylindrical tank to a flat top/bottom cylindrical tank while maintaining the equivalent liquid volume. For the low Bond number case, low-g slosh models were also studied. Those low-g models can be used for Bond number > 10. The current low-g slosh models were also modified to extend their application to the case in which the liquid height is smaller than the tank radius. All modified slosh models are implemented in MATLAB m-functions and are collected in the developed MST (Mechanical Slosh Toolbox).
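
    The volume-preserving mapping from an elliptical-dome tank to an equivalent flat-top/flat-bottom cylinder can be sketched as below, treating each dome as a half-spheroid; the parameter names and numbers are illustrative assumptions, and the report's MATLAB toolbox remains the authoritative implementation.

```python
import math

def equivalent_flat_tank_height(radius, cyl_height, dome_height):
    """Map a cylindrical tank with two elliptical domes to a flat-top/flat-bottom
    cylinder of the same radius and the same total volume. Each dome is treated
    as a half-spheroid of semi-axis dome_height (volume 2/3 * pi * R^2 * a), so
    the equivalent cylinder height is h_eq = h_cyl + 4/3 * a.
    """
    vol = math.pi * radius**2 * cyl_height + (4.0 / 3.0) * math.pi * radius**2 * dome_height
    return vol / (math.pi * radius**2)

# Example (hypothetical dimensions): a 0.5 m radius tank with a 1.2 m barrel and
# 0.3 m domes maps to a 1.6 m flat-bottom cylinder of equal volume.
print(equivalent_flat_tank_height(radius=0.5, cyl_height=1.2, dome_height=0.3))
```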

  19. Development and evaluation of thermal model reduction algorithms for spacecraft

    Science.gov (United States)

    Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus

    2015-05-01

    This paper is concerned with the reduction of thermal models of spacecraft. The work presented here has been conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming and manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model for calculation of external heat flows and radiative couplings and the reduction of the underlying mathematical model. For the simplification, a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability to reduction of the mathematical model. Thereby, compatibility with the thermal analysis tool ESATAN-TMS is of major concern, which restricts the useful application of these methods. Additional model reduction methods have been developed which account for these constraints. The Matrix Reduction method allows the differential equation to be approximated to reference values exactly, except for numerical errors. The summation method enables a useful, applicable reduction of thermal models that can be used in industry. In this work, a framework for model reduction of thermal models has been created, which can be used together with a newly developed graphical user interface for the reduction of thermal models in industry.

  20. Correlation of spacecraft thermal mathematical models to reference data

    Science.gov (United States)

    Torralbo, Ignacio; Perez-Grande, Isabel; Sanz-Andres, Angel; Piqueras, Javier

    2018-03-01

    Model-to-test correlation is a frequent problem in spacecraft thermal control design. The idea is to determine the values of the parameters of the thermal mathematical model (TMM) that allow a good fit between the TMM results and test data to be reached, in order to reduce the uncertainty of the mathematical model. Quite often, this task is performed manually, mainly because good engineering knowledge and experience are needed to reach a successful compromise, but the use of a mathematical tool could facilitate this work. The correlation process can be considered as the minimization of the error of the model results with regard to the reference data. In this paper, a simple method is presented that is suitable for solving the TMM-to-test correlation problem, using a Jacobian matrix formulation and the Moore-Penrose pseudo-inverse, generalized to include several load cases. In addition, in simple cases this method also allows analytical solutions to be obtained, which helps to analyze some problems that appear when the Jacobian matrix is singular. To show the implementation of the method, two problems have been considered, one more academic, and the other the TMM of an electronic box of the PHI instrument of the ESA Solar Orbiter mission, to be flown in 2019. The use of singular value decomposition of the Jacobian matrix to analyze and reduce these models is also shown. The error in parameter space is used to assess the quality of the correlation results in both models.
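
    A minimal single-load-case sketch of the correlation step, a finite-difference Jacobian followed by a Moore-Penrose pseudo-inverse update of the TMM parameters; the toy two-node model, step sizes, and iteration count are assumptions, not the paper's multi-load-case formulation.

```python
import numpy as np

def correlate_tmm(p0, run_model, t_ref, n_iter=5, rcond=1e-6, dp=1e-3):
    """Iteratively adjust model parameters so predicted temperatures approach
    the reference (test) temperatures, using a finite-difference Jacobian and
    a Moore-Penrose pseudo-inverse least-squares step.

    p0        : initial parameters (e.g. conductive couplings, optical properties)
    run_model : function p -> vector of node temperatures predicted by the TMM
    t_ref     : reference temperatures at the same nodes
    """
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        t = run_model(p)
        r = t_ref - t                                    # residual to be reduced
        J = np.empty((t.size, p.size))
        for j in range(p.size):                          # finite-difference Jacobian
            pj = p.copy(); pj[j] += dp * max(1.0, abs(p[j]))
            J[:, j] = (run_model(pj) - t) / (pj[j] - p[j])
        p = p + np.linalg.pinv(J, rcond=rcond) @ r       # Moore-Penrose update
    return p

# Usage with a toy two-node "model" (illustrative only):
t_test = np.array([310.0, 295.0])
model = lambda p: np.array([300.0 + 4.0 * p[0], 290.0 + 2.0 * p[1]])
print(correlate_tmm([1.0, 1.0], model, t_test))          # -> roughly [2.5, 2.5]
```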

  1. X-Ray Detection and Processing Models for Spacecraft Navigation and Timing

    Science.gov (United States)

    Sheikh, Suneel; Hanson, John

    2013-01-01

    The current primary method of deep-space navigation is the NASA Deep Space Network (DSN). High-performance navigation is achieved using Delta Differential One-Way Range techniques that utilize simultaneous observations from multiple DSN sites and incorporate observations of quasars near the line of sight to a spacecraft in order to improve the range and angle measurement accuracies. Over the past four decades, x-ray astronomers have identified a number of x-ray pulsars with pulsed emissions having stabilities comparable to atomic clocks. The x-ray pulsar-based navigation and time determination (XNAV) system uses phase measurements from these sources to establish autonomously the position of the detector, and thus the spacecraft, relative to a known reference frame, much as the Global Positioning System (GPS) uses phase measurements from radio signals from several satellites to establish the position of the user relative to an Earth-centered fixed frame of reference. While a GPS receiver uses an antenna to detect the radio signals, XNAV uses a detector array to capture the individual x-ray photons from the x-ray pulsars. The navigation solution relies on detailed x-ray source models, signal processing, navigation and timing algorithms, and analytical tools that form the basis of an autonomous XNAV system. Through previous XNAV development efforts, some techniques have been established to utilize a pulsar pulse time-of-arrival (TOA) measurement to correct a position estimate. One well-studied approach, based upon Kalman filter methods, optimally adjusts a dynamic orbit propagation solution based upon the offset in measured and predicted pulse TOA. In this delta position estimator scheme, previously estimated values of spacecraft position and velocity are utilized from an onboard orbit propagator. Using these estimated values, the detected arrival times at the spacecraft of pulses from a pulsar are compared to the predicted arrival times defined by the pulsar's pulse
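
    The core geometric idea, that each pulse time-of-arrival residual constrains the position error along that pulsar's line of sight, can be sketched as a simple least-squares correction; the directions and residuals below are invented, and a flight XNAV filter would instead blend such measurements with an onboard orbit propagator as described above.

```python
import numpy as np

C = 299_792_458.0   # speed of light [m/s]

def xnav_position_correction(pulsar_dirs, toa_residuals):
    """Batch least-squares sketch of the delta-position idea (not the flight
    algorithm): each residual gives delta_r . n = c * dt along one pulsar's
    line of sight; three or more well-separated pulsars resolve the full 3-D
    position correction.

    pulsar_dirs   : (k, 3) unit vectors toward k observed pulsars
    toa_residuals : (k,) measured-minus-predicted pulse arrival times [s]
    """
    A = np.asarray(pulsar_dirs, dtype=float)
    b = C * np.asarray(toa_residuals, dtype=float)
    delta_r, *_ = np.linalg.lstsq(A, b, rcond=None)
    return delta_r                                   # position correction [m]

dirs = np.array([[1.0, 0.0, 0.0],
                 [0.0, 1.0, 0.0],
                 [0.0, 0.0, 1.0]])                   # illustrative geometry
print(xnav_position_correction(dirs, [1.0e-6, -2.0e-7, 3.3e-7]))   # metres
```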

  2. Spacecraft Charging Modeling -- Nascap-2k 2014 Annual Report

    Science.gov (United States)

    2014-09-19


  3. A geometric model of a V-slit Sun sensor correcting for spacecraft wobble

    Science.gov (United States)

    Mcmartin, W. P.; Gambhir, S. S.

    1994-01-01

    A V-Slit sun sensor is body-mounted on a spin-stabilized spacecraft. During injection from a parking or transfer orbit to some final orbit, the spacecraft may not be dynamically balanced. This may result in wobble about the spacecraft spin axis as the spin axis may not be aligned with the spacecraft's axis of symmetry. While the widely used models in Spacecraft Attitude Determination and Control, edited by Wertz, correct for separation, elevation, and azimuthal mounting biases, spacecraft wobble is not taken into consideration. A geometric approach is used to develop a method for measurement of the sun angle which corrects for the magnitude and phase of spacecraft wobble. The algorithm was implemented using a set of standard mathematical routines for spherical geometry on a unit sphere.

  4. Remote sensing of a NTC radio source from a Cluster tilted spacecraft pair

    Directory of Open Access Journals (Sweden)

    P. M. E. Décréau

    2013-11-01

    The Cluster mission operated a "tilt campaign" during the month of May 2008. Two of the four identical Cluster spacecraft were placed at a close distance (~50 km) from each other, and the spin axis of one of the spacecraft pair was tilted by an angle of ~46°. This gave the opportunity, for the first time in space, to measure global characteristics of the AC electric field, at the sensitivity available with long-boom (88 m) antennas, simultaneously from the specific configuration of the tilted pair of satellites and from the available base of three satellites placed at a large characteristic separation (~1 RE). This paper describes how global characteristics of radio waves, in this case the configuration of the electric field polarization ellipse in 3-D space, are identified from in situ measurements of spin modulation features by the tilted pair, validating a novel experimental concept. In the event selected for analysis, non-thermal continuum (NTC) waves in the 15–25 kHz frequency range are observed from the Cluster constellation placed above the polar cap. The observed intensity variations with spin angle are those of plane waves, with an electric field polarization close to circular, at an ellipticity ratio e = 0.87. We derive the source position in 3-D by two different methods. The first one uses the ray path orientation (measured by the tilted pair) combined with the spectral signature of the magnetic field magnitude at the source. The second one is obtained via triangulation from the three-spacecraft baseline, using estimation of directivity angles under the assumption of circular polarization. The two results are not compatible, placing sources widely apart. We present a general study of the level of systematic errors due to the assumption of circular polarization, linked to the second approach, and show how this approach can lead to poor triangulation and wrong source positioning. The estimation derived from the first method places the NTC source region in the
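
    The triangulation element of the second method can be illustrated with a minimal two-line closest-approach sketch (direction finding from two observation points); the geometry below is invented, and the actual analysis uses directivity angles from three spacecraft under the circular-polarization assumption.

```python
import numpy as np

def triangulate(r1, d1, r2, d2):
    """Locate a radio source from two observer positions and the wave-arrival
    directions each measures. The two lines rarely intersect exactly, so the
    midpoint of their closest-approach segment is returned.
    """
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w0 = r1 - r2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                    # ~0 if the two lines are parallel
    s = (b * e - c * d) / denom
    t = (a * e - b * d) / denom
    p1 = r1 + s * d1                         # closest point on line 1
    p2 = r2 + t * d2                         # closest point on line 2
    return 0.5 * (p1 + p2)

# Illustrative geometry (units of Earth radii), not the event analyzed above:
src = triangulate(np.array([4.0, 0.0, 6.0]), np.array([-0.55, 0.10, -0.83]),
                  np.array([3.0, 1.0, 6.0]), np.array([-0.45, -0.05, -0.89]))
print(src)
```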

  5. Modeling Vacuum Arcs On Spacecraft Solar Panel Arrays, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Spacecraft charging and subsequent vacuum arcing poses a significant threat to satellites in LEO and GEO plasma conditions. Localized arc discharges can cause a...

  6. Economic analysis of open space box model utilization in spacecraft

    Science.gov (United States)

    Mohammad, Atif F.; Straub, Jeremy

    2015-05-01

    It is a known fact that the amount of data about space that is stored is getting larger on an everyday basis. However, the utilization of Big Data and related tools to perform ETL (Extract, Transform and Load) applications will soon be pervasive in the space sciences. We have entered a crucial time where using Big Data can be the difference (for terrestrial applications) between organizations underperforming and outperforming their peers. The same is true for NASA and other space agencies, as well as for individual missions and the highly competitive process of mission data analysis and publication. In most industries, established competitors and new entrants alike will use data-driven approaches to innovate and capture the value of Big Data archives. The Open Space Box Model is poised to take the proverbial "giant leap", as it provides autonomic data processing and communications for spacecraft. Economic value generated from such use of data processing can be found in earthly organizations in every sector, such as healthcare and retail. Retailers, for example, perform research on Big Data by utilizing sensor-driven embedded data in products within their stores and warehouses to determine how these products are actually used in the real world.

  7. Modeling and simulation of satellite subsystems for end-to-end spacecraft modeling

    Science.gov (United States)

    Schum, William K.; Doolittle, Christina M.; Boyarko, George A.

    2006-05-01

    During the past ten years, the Air Force Research Laboratory (AFRL) has been simultaneously developing high-fidelity spacecraft payload models as well as a robust distributed simulation environment for modeling spacecraft subsystems. Much of this research has occurred in the Distributed Architecture Simulation Laboratory (DASL). AFRL developers working in the DASL have effectively combined satellite power, attitude pointing, and communication link analysis subsystem models with robust satellite sensor models to create a first-order end-to-end satellite simulation capability. The merging of these two simulation areas has advanced the field of spacecraft simulation, design, and analysis, and enabled more in-depth mission and satellite utility analyses. A core capability of the DASL is the support of a variety of modeling and analysis efforts, ranging from physics and engineering-level modeling to mission and campaign-level analysis. The flexibility and agility of this simulation architecture will be used to support space mission analysis, military utility analysis, and various integrated exercises with other military and space organizations via direct integration, or through DOD standards such as Distributed Interactive Simulation. This paper discusses the results and lessons learned in modeling satellite communication link analysis, power, and attitude control subsystems for an end-to-end satellite simulation. It also discusses how these spacecraft subsystem simulations feed into and support military utility and space mission analyses.

  8. Application of partial differential equation modeling of the control/structural dynamics of flexible spacecraft

    Science.gov (United States)

    Taylor, Lawrence W., Jr.; Rajiyah, H.

    1991-01-01

    Partial differential equations for modeling the structural dynamics and control systems of flexible spacecraft are applied here in order to facilitate systems analysis and optimization of these spacecraft. Example applications are given, including the structural dynamics of SCOLE, the Solar Array Flight Experiment, the Mini-MAST truss, and the LACE satellite. The development of related software is briefly addressed.

  9. Open source molecular modeling.

    Science.gov (United States)

    Pirhadi, Somayeh; Sunseri, Jocelyn; Koes, David Ryan

    2016-09-01

    The success of molecular modeling and computational chemistry efforts is, by definition, dependent on quality software applications. Open source software development provides many advantages to users of modeling applications, not the least of which is that the software is free and completely extendable. In this review we categorize, enumerate, and describe available open source software packages for molecular modeling and computational chemistry. An updated online version of this catalog can be found at https://opensourcemolecularmodeling.github.io. Copyright © 2016 The Author(s). Published by Elsevier Inc. All rights reserved.

  10. Adaptive relative pose control of spacecraft with model couplings and uncertainties

    Science.gov (United States)

    Sun, Liang; Zheng, Zewei

    2018-02-01

    The spacecraft pose tracking control problem for an uncertain pursuer approaching a space target is studied in this paper. After modeling the nonlinearly coupled dynamics for relative translational and rotational motions between two spacecraft, position tracking and attitude synchronization controllers are developed independently by using a robust adaptive control approach. The unknown kinematic couplings, parametric uncertainties, and bounded external disturbances are handled with adaptive updating laws. It is proved via the Lyapunov method that the pose tracking errors converge to zero asymptotically. Spacecraft close-range rendezvous and proximity operations are introduced as an example to validate the effectiveness of the proposed control approach.

  11. Nonlinear model and attitude dynamics of flexible spacecraft with large amplitude slosh

    Science.gov (United States)

    Deng, Mingle; Yue, Baozeng

    2017-04-01

    This paper is focused on the nonlinear modelling and attitude dynamics of a flexible spacecraft coupled with large-amplitude liquid sloshing dynamics and flexible appendage vibration. The large-amplitude fuel slosh dynamics is included by using an improved moving pulsating ball model. The moving pulsating ball model is an equivalent mechanical model that is capable of imitating the whole liquid reorientation process. A modification is introduced in the capillary force computation in order to more precisely estimate the settling location of liquid in a microgravity or zero-g environment. The flexible appendage is modelled as a three-dimensional Bernoulli-Euler beam, and the assumed modal method is employed to derive the nonlinear mechanical model for the overall coupled system of the liquid-filled spacecraft with appendage. The attitude maneuver is implemented by the momentum transfer technique, and a feedback controller is designed. The simulation results show that the liquid sloshing can always result in nutation behavior, but the effect of flexible deformation of the appendage depends on the amplitude and direction of the attitude maneuver performed by the spacecraft. Moreover, it is found that the liquid sloshing and the vibration of the flexible appendage are coupled with each other, and the coupling becomes more significant with more rapid motion of the spacecraft. This study reveals that the appendage's flexibility has an influence on the liquid's location and settling time in microgravity. The presented nonlinear system model can provide an important reference for the overall design of a modern spacecraft composed of a rigid platform, liquid-filled tank, and flexible appendage.

  12. Modeling the angular motion dynamics of spacecraft with a magnetic attitude control system based on experimental studies and dynamic similarity

    Science.gov (United States)

    Kulkov, V. M.; Medvedskii, A. L.; Terentyev, V. V.; Firsyuk, S. O.; Shemyakov, A. O.

    2017-12-01

    The problem of spacecraft attitude control using electromagnetic systems interacting with the Earth's magnetic field is considered. A set of dimensionless parameters has been formed to investigate the spacecraft orientation regimes based on dynamically similar models. The results of experimental studies of small spacecraft with a magnetic attitude control system can be extrapolated to the in-orbit spacecraft motion control regimes by using the methods of the dimensional and similarity theory.

  13. Methodology for Developing a Probabilistic Risk Assessment Model of Spacecraft Rendezvous and Dockings

    Science.gov (United States)

    Farnham, Steven J., II; Garza, Joel, Jr.; Castillo, Theresa M.; Lutomski, Michael

    2011-01-01

    In 2007 NASA was preparing to send two new visiting vehicles carrying logistics and propellant to the International Space Station (ISS). These new vehicles were the European Space Agency's (ESA) Automated Transfer Vehicle (ATV), the Jules Verne, and the Japan Aerospace Exploration Agency's (JAXA) H-II Transfer Vehicle (HTV). The ISS Program wanted to quantify the increased risk to the ISS from these visiting vehicles. At the time, only the Shuttle, the Soyuz, and the Progress vehicles rendezvoused and docked to the ISS. The increased risk to the ISS was from an increase in vehicle traffic, thereby increasing the potential for a catastrophic collision during the rendezvous and the docking or berthing of a spacecraft to the ISS. A universal method of evaluating the risk of rendezvous and docking or berthing was created by the ISS Risk Team to accommodate the increasing number of rendezvous and docking or berthing operations due to the increasing number of different spacecraft, as well as the future arrival of commercial spacecraft. Before the first docking attempt of ESA's ATV and JAXA's HTV to the ISS, a probabilistic risk model was developed to quantitatively calculate the risk of collision of each spacecraft with the ISS. The five rendezvous and docking risk models (Soyuz, Progress, Shuttle, ATV, and HTV) have been used to build and refine the modeling methodology for rendezvous and docking of spacecraft. This risk modeling methodology will be NASA's basis for evaluating the hazards of future ISS visiting spacecraft, including SpaceX's Dragon, Orbital Sciences' Cygnus, and NASA's own Orion spacecraft. This paper will describe the methodology used for developing a visiting vehicle risk model.

  14. MODEL CORRELATION STUDY OF A RETRACTABLE BOOM FOR A SOLAR SAIL SPACECRAFT

    Science.gov (United States)

    Adetona, O.; Keel, L. H.; Oakley, J. D.; Kappus, K.; Whorton, M. S.; Kim, Y. K.; Rakoczy, J. M.

    2005-01-01

    To realize design concepts, predict dynamic behavior, and develop appropriate control strategies for high-performance operation of a solar-sail spacecraft, we developed a simple analytical model that represents the dynamic behavior of spacecraft of various sizes. Since the motion of the vehicle is dominated by the retractable booms that support the structure, our study concentrates on developing and validating a dynamic model of a long retractable boom. Extensive tests with various configurations were conducted for the 30-meter, lightweight, retractable lattice boom at NASA MSFC that is structurally and dynamically similar to those of a solar-sail spacecraft currently under construction. Experimental data were then compared with the corresponding response of the analytical model. Though mixed results were obtained, the analytical model emulates several key characteristics of the boom. The paper concludes with a detailed discussion of issues observed during the study.

  15. First principles nickel-cadmium and nickel hydrogen spacecraft battery models

    Energy Technology Data Exchange (ETDEWEB)

    Timmerman, P.; Ratnakumar, B.V.; Distefano, S.

    1996-02-01

    The principles of Nickel-Cadmium and Nickel-Hydrogen spacecraft battery models are discussed. The Ni-Cd battery model includes a two-phase positive electrode, and its predictions are very close to actual data. But the Ni-H2 battery model predictions (without the two-phase positive electrode) are unacceptable, even though the model is operational. Both models run on UNIX and Macintosh computers.

  16. Spacecraft Interactions Modeling and Post-Mission Data Analysis

    National Research Council Canada - National Science Library

    Bonito, N

    1996-01-01

    .... In support of these analyses, models for satellite ephemeris, attitude determination, magnetic fields, atmospheric composition, and particle precipitation were designed and developed, using PL...

  17. Lifetime of a spacecraft around a synchronous system of asteroids using a dipole model

    Science.gov (United States)

    dos Santos, Leonardo Barbosa Torres; de Almeida Prado, Antonio F. Bertachini; Sanchez, Diogo Merguizo

    2017-11-01

    Space missions allow us to expand our knowledge about the origin of the solar system. It is believed that asteroids and comets preserve the physical characteristics from the time that the solar system was created. For this reason, there has been an increase in missions to asteroids in the past few years. To send spacecraft to asteroids or comets is challenging, since these objects have their own characteristics in several aspects, such as size, shape, physical properties, etc., which are often only discovered after the approach and even after the landing of the spacecraft. These missions must be developed with sufficient flexibility to adjust to these parameters, which are better determined only when the spacecraft reaches the system. Therefore, conducting a dynamic investigation of a spacecraft around a multiple asteroid system offers an extremely rich environment. Extracting accurate information through analytical approaches is quite challenging and requires a significant number of restrictive assumptions. For this reason, a numerical approach to the dynamics of a spacecraft in the vicinity of a binary asteroid system is offered in this paper. In the present work, the equations of the Restricted Synchronous Four-Body Problem (RSFBP) are used to model a binary asteroid system. The main objective of this work is to construct grids of initial conditions, relating semi-major axis and eccentricity, in order to quantify the lifetime of a spacecraft when released close to the less massive body of the binary system (modeled as a rotating mass dipole). We performed an analysis of the lifetime of the spacecraft considering several mass ratios of the binary system and investigated the behavior of a spacecraft in the vicinity of this system. We analyze both direct and retrograde orbits. This study investigated orbits that survive for at least 500 orbital periods of the system (approximately one year), without colliding with or escaping from the system during this

  18. Gravity and Macro-Model Tuning for the Geosat Follow-on Spacecraft

    Science.gov (United States)

    Lemoine, Frank G.; Rowlands, David D.; Marr, Gregory C.; Zelensky, Nikita P.; Luthcke, Scott B.; Cox, Christopher M.

    1999-01-01

    The US Navy's GEOSAT Follow-On (GFO) spacecraft was launched on February 10, 1998, and the primary objective of the mission was to map the oceans using a radar altimeter. The spacecraft tracking complement consisted of GPS receivers, a laser retroreflector, and Doppler beacons. Since the GPS receivers have not yet returned reliable data, the only means of providing high-quality precise orbits has been through satellite laser ranging (SLR). The spacecraft has been tracked by the international satellite laser ranging network since April 22, 1998, and an average of 7.4 passes per day have been obtained from US and participating foreign stations. Since the predicted radial orbit error due to the gravity field is two to three cm, the largest contributor to the high SLR residuals (7-10 cm RMS for five day arcs) is the mismodelling of the non-conservative forces, notwithstanding the development of a three-dimensional eight-panel model and an analytical attitude model for the GFO spacecraft. The SLR residuals show a clear correlation with beta-prime (solar elevation) angle, peaking in mid-August 1998 when the beta-prime angle reached -80 to -90 degrees. In this paper we discuss the tuning of the non-conservative force model for GFO, and report the subsequent addition of the GFO tracking data to the Earth gravity model solutions.

  19. High-Fidelity Dynamic Modeling of Spacecraft in the Continuum--Rarefied Transition Regime

    Science.gov (United States)

    Turansky, Craig P.

    The state of the art of spacecraft rarefied aerodynamics seldom accounts for detailed rigid-body dynamics. In part because of computational constraints, simpler models based upon the ballistic and drag coefficients are employed. Of particular interest is the continuum-rarefied transition regime of Earth's thermosphere, where gas dynamic simulation is difficult yet wherein many spacecraft operate. The feasibility of increasing the fidelity of modeling spacecraft dynamics is explored by coupling rarefied aerodynamics with rigid-body dynamics modeling similar to that traditionally used for aircraft in atmospheric flight. Presented is a framework of analysis and guiding principles which capitalize on the increasing availability of computational methods and resources. Aerodynamic force inputs for modeling spacecraft in two dimensions in a rarefied flow are provided by analytical equations in the free-molecular regime, and by the direct simulation Monte Carlo method in the transition regime. The application of the direct simulation Monte Carlo method to this class of problems is examined in detail with a new code specifically designed for engineering-level rarefied aerodynamic analysis. Time-accurate simulations of two distinct geometries in low thermospheric flight and atmospheric entry are performed, demonstrating non-linear dynamics that cannot be predicted using simpler approaches. The results of this straightforward approach to the aero-orbital coupled-field problem highlight the possibilities for future improvements in drag prediction, control system design, and atmospheric science. Furthermore, a number of challenges for future work are identified in the hope of stimulating the development of a new subfield of spacecraft dynamics.

  20. Planar rigid-flexible coupling spacecraft modeling and control considering solar array deployment and joint clearance

    Science.gov (United States)

    Li, Yuanyuan; Wang, Zilu; Wang, Cong; Huang, Wenhu

    2018-01-01

    Based on the Nodal Coordinate Formulation (NCF) and the Absolute Nodal Coordinate Formulation (ANCF), this paper establishes a rigid-flexible coupling dynamic model of a spacecraft with large deployable solar arrays and multiple clearance joints to analyze and control the satellite attitude under deployment disturbance. Considering torque springs, the closed cable loop (CCL) configuration, and latch mechanisms, a typical spacecraft composed of a rigid main body described by NCF and two flexible panels described by ANCF is used as a demonstration case. A nonlinear contact force model and a modified Coulomb friction model are selected to establish the normal contact force and tangential friction models, respectively. Generalized elastic forces are derived, and all generalized forces are defined in the NCF-ANCF frame. The Newmark-β method is used to solve the system equations of motion. The availability and superiority of the proposed model are verified through comparison with numerical co-simulations in Patran and ADAMS software. The numerical results reveal the effects of panel flexibility, joint clearance, and their coupling on satellite attitude. The effects of clearance number, clearance size, and clearance stiffness on satellite attitude are investigated. Furthermore, a proportional-differential (PD) attitude controller for the spacecraft is designed to discuss the effect of attitude control on the dynamic responses of the whole system.
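
    For reference, one Newmark-β step for a linear structural system (the integrator named above) can be sketched as follows; the demonstration matrices are arbitrary, and the paper's actual system is nonlinear, with contact and friction forces re-evaluated within each step.

```python
import numpy as np

def newmark_step(M, C, K, x, v, a, f_next, dt, beta=0.25, gamma=0.5):
    """One Newmark-beta step for M x'' + C x' + K x = f, with the
    average-acceleration parameters (beta=1/4, gamma=1/2) by default.
    """
    K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    f_eff = (f_next
             + M @ (x / (beta * dt**2) + v / (beta * dt) + (0.5 / beta - 1.0) * a)
             + C @ (gamma / (beta * dt) * x + (gamma / beta - 1.0) * v
                    + dt * (0.5 * gamma / beta - 1.0) * a))
    x_new = np.linalg.solve(K_eff, f_eff)
    a_new = (x_new - x) / (beta * dt**2) - v / (beta * dt) - (0.5 / beta - 1.0) * a
    v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
    return x_new, v_new, a_new

# Two-DOF demo (illustrative stiffness/damping, not the solar-array model):
M = np.diag([1.0, 1.0]); K = np.array([[40.0, -20.0], [-20.0, 20.0]]); C = 0.02 * K
x = np.zeros(2); v = np.zeros(2); a = np.linalg.solve(M, np.array([1.0, 0.0]))
for _ in range(100):
    x, v, a = newmark_step(M, C, K, x, v, a, np.array([1.0, 0.0]), dt=0.01)
print(x)
```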

  1. Fractionated Spacecraft Architectures Seeding Study

    National Research Council Canada - National Science Library

    Mathieu, Charlotte; Weigel, Annalisa

    2006-01-01

    .... Models were developed from a customer-centric perspective to assess different fractionated spacecraft architectures relative to traditional spacecraft architectures using multi-attribute analysis...

  2. Photovoltaic sources modeling

    CERN Document Server

    Petrone, Giovanni; Spagnuolo, Giovanni

    2016-01-01

    This comprehensive guide surveys all available models for simulating a photovoltaic (PV) generator at different levels of granularity, from cell to system level, in uniform as well as in mismatched conditions. Providing a thorough comparison among the models, engineers have all the elements needed to choose the right PV array model for specific applications or environmental conditions matched with the model of the electronic circuit used to maximize the PV power production.

  3. Estimation Model of Spacecraft Parameters and Cost Based on a Statistical Analysis of COMPASS Designs

    Science.gov (United States)

    Gerberich, Matthew W.; Oleson, Steven R.

    2013-01-01

    The Collaborative Modeling for Parametric Assessment of Space Systems (COMPASS) team at Glenn Research Center has performed integrated system analysis of conceptual spacecraft mission designs since 2006 using a multidisciplinary concurrent engineering process. The set of completed designs was archived in a database to allow for the study of relationships between design parameters. Although COMPASS uses a parametric spacecraft costing model, this research investigated the possibility of using a top-down approach to rapidly estimate overall vehicle costs. This paper presents the relationships between significant design variables, including breakdowns of dry mass, wet mass, and cost. It also develops a model for a broad estimate of these parameters from basic mission characteristics, including the target location distance, the payload mass, the duration, the delta-v requirement, and the type of mission, propulsion, and electrical power. Finally, this paper examines the accuracy of this model against past COMPASS designs, with an assessment of outlying spacecraft, and compares the results to historical data from completed NASA missions.

  4. Adaptation and Re-Use of Spacecraft Power System Models for the Constellation Program

    Science.gov (United States)

    Hojnicki, Jeffrey S.; Kerslake, Thomas W.; Ayres, Mark; Han, Augustina H.; Adamson, Adrian M.

    2008-01-01

    NASA's Constellation Program is embarking on a new era of space exploration, returning to the Moon and beyond. The Constellation architecture will consist of a number of new spacecraft elements, including the Orion crew exploration vehicle, the Altair lunar lander, and the Ares family of launch vehicles. Each of these new spacecraft elements will need an electric power system, and those power systems will need to be designed to fulfill unique mission objectives and to survive the unique environments encountered on a lunar exploration mission. As with any new spacecraft power system development, preliminary design work will rely heavily on analysis to select the proper power technologies, size the power system components, and predict the system performance throughout the required mission profile. Constellation projects have the advantage of leveraging power system modeling developments from other recent programs such as the International Space Station (ISS) and the Mars Exploration Program. These programs have developed mature power system modeling tools, which can be quickly modified to meet the unique needs of Constellation, and thus provide a rapid capability for detailed power system modeling that otherwise would not exist.

  5. Suprathermal ions in the solar wind from the Voyager spacecraft: Instrument modeling and background analysis

    International Nuclear Information System (INIS)

    Randol, B M; Christian, E R

    2015-01-01

    Using publicly available data from the Voyager Low Energy Charged Particle (LECP) instruments, we investigate the form of the solar wind ion suprathermal tail in the outer heliosphere inside the termination shock. This tail has a commonly observed form in the inner heliosphere, that is, a power law with a particular spectral index. The Voyager spacecraft have taken data beyond 100 AU, farther than any other spacecraft. However, during extended periods of time, the data appears to be mostly background. We have developed a technique to self-consistently estimate the background seen by LECP due to cosmic rays using data from the Voyager cosmic ray instruments and a simple, semi-analytical model of the LECP instruments

  6. Instrument for observing transient cosmic gamma-ray sources for the ISEE-C Heliocentric spacecraft

    International Nuclear Information System (INIS)

    Evans, W.D.; Aiello, W.P.; Klebesadel, R.W.

    1977-12-01

    Satellite instrumentation that would serve as one element of a three-satellite network to provide precise directional information for the recently discovered cosmic gamma-ray bursts is described. The proposed network would be capable of determining source locations with uncertainties of less than one arc minute, sufficient for a meaningful optical and radio search. The association of the gamma bursts with a known type of astrophysical object provides the most direct method for establishing source distances and thus defining the overall energetics of the emission process

  7. Light Curve Simulation Using Spacecraft CAD Models and Empirical Material Spectral BRDFS

    Science.gov (United States)

    Willison, A.; Bedard, D.

    This paper presents a Matlab-based light curve simulation software package that uses computer-aided design (CAD) models of spacecraft and the spectral bidirectional reflectance distribution function (sBRDF) of their homogenous surface materials. It represents the overall optical reflectance of objects as a sBRDF, a spectrometric quantity, obtainable during an optical ground truth experiment. The broadband bidirectional reflectance distribution function (BRDF), the basis of a broadband light curve, is produced by integrating the sBRDF over the optical wavelength range. Colour-filtered BRDFs, the basis of colour-filtered light curves, are produced by first multiplying the sBRDF by colour filters, and integrating the products. The software package's validity is established through comparison of simulated reflectance spectra and broadband light curves with those measured of the CanX-1 Engineering Model (EM) nanosatellite, collected during an optical ground truth experiment. It is currently being extended to simulate light curves of spacecraft in Earth orbit, using spacecraft Two-Line-Element (TLE) sets, yaw/pitch/roll angles, and observer coordinates. Measured light curves of the NEOSSat spacecraft will be used to validate simulated quantities. The sBRDF was chosen to represent material reflectance as it is spectrometric and a function of illumination and observation geometry. Homogeneous material sBRDFs were obtained using a goniospectrometer for a range of illumination and observation geometries, collected in a controlled environment. The materials analyzed include aluminum alloy, two types of triple-junction photovoltaic (TJPV) cell, white paint, and multi-layer insulation (MLI). Interpolation and extrapolation methods were used to determine the sBRDF for all possible illumination and observation geometries not measured in the laboratory, resulting in empirical look-up tables. These look-up tables are referenced when calculating the overall sBRDF of objects, where
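
    A toy sketch of the wavelength integration described in this record: a spectral BRDF sampled on a wavelength grid is reduced to broadband and colour-filtered values. The spectra and the filter curve are synthetic placeholders, not measured material sBRDFs.

```python
# Toy sketch of reducing a spectral BRDF to broadband and colour-filtered values.
# The spectra and filter are synthetic placeholders, not measured material data.
import numpy as np

wavelengths = np.linspace(400e-9, 1000e-9, 121)                     # optical band [m]
sbrdf = 0.2 + 0.1 * np.exp(-((wavelengths - 550e-9) / 80e-9) ** 2)  # fake sBRDF [1/sr]

# Broadband BRDF: integrate the sBRDF over the optical band.
brdf_broadband = np.trapz(sbrdf, wavelengths) / (wavelengths[-1] - wavelengths[0])

# Colour-filtered BRDF: multiply by a filter transmission curve first.
g_filter = np.exp(-((wavelengths - 520e-9) / 40e-9) ** 2)           # fake "green" filter
brdf_green = np.trapz(sbrdf * g_filter, wavelengths) / np.trapz(g_filter, wavelengths)

print(f"broadband BRDF ~ {brdf_broadband:.3f} 1/sr, green-filtered ~ {brdf_green:.3f} 1/sr")
```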

  8. Radiation properties modeling for plasma-sprayed-alumina-coated rough surfaces for spacecrafts

    International Nuclear Information System (INIS)

    Li, R.M.; Joshi, Sunil C.; Ng, H.W.

    2006-01-01

    Spacecraft thermal control materials (TCMs) play a vital role in the entire service life of a spacecraft. Most of the conventional TCMs degrade in the harmful space environment. In a previous study, plasma-sprayed alumina (PSA) coating was established as a new and better TCM for spacecraft, in view of its stability and reliability compared to traditional TCMs. During that investigation, the surface roughness of PSA was found to be important, because roughness affects the radiative heat exchange between the surface and its surroundings. Parameters such as root-mean-square roughness cannot properly evaluate surface roughness effects on the radiative properties of opaque surfaces. Some models have been developed earlier to predict these effects, such as Davies' model and Tang and Buckius's statistical geometric optics model; however, they are valid only in their own specific situations. In this paper, an energy absorption geometry model was developed and applied to investigate the roughness effects with the help of 2D surface profiles of a PSA-coated substrate scanned at the micron level. This model predicts the effective normal solar absorptance (α_ne) and the effective hemispherical infrared emittance (ε_he) of a rough PSA surface. These values, if used in the heat transfer analysis of an equivalent, smooth, and optically flat surface, lead to the prediction of the same rate of heat exchange and temperature as for the rough PSA surface. The model was validated through comparison between smooth and rough PSA-coated surfaces. Even though not tested for other types of materials, the model formulation is generic and can be used to incorporate rough-surface effects for other types of thermal coatings, provided the baseline values of normal solar absorptance (α_n) and hemispherical infrared emittance (ε_h) are available for a generic surface of the same material

  9. A Quantitative Human Spacecraft Design Evaluation Model for Assessing Crew Accommodation and Utilization

    Science.gov (United States)

    Fanchiang, Christine

    Crew performance, including both accommodation and utilization factors, is an integral part of every human spaceflight mission from commercial space tourism, to the demanding journey to Mars and beyond. Spacecraft were historically built by engineers and technologists trying to adapt the vehicle into cutting edge rocketry with the assumption that the astronauts could be trained and will adapt to the design. By and large, that is still the current state of the art. It is recognized, however, that poor human-machine design integration can lead to catastrophic and deadly mishaps. The premise of this work relies on the idea that if an accurate predictive model exists to forecast crew performance issues as a result of spacecraft design and operations, it can help designers and managers make better decisions throughout the design process, and ensure that the crewmembers are well-integrated with the system from the very start. The result should be a high-quality, user-friendly spacecraft that optimizes the utilization of the crew while keeping them alive, healthy, and happy during the course of the mission. Therefore, the goal of this work was to develop an integrative framework to quantitatively evaluate a spacecraft design from the crew performance perspective. The approach presented here is done at a very fundamental level starting with identifying and defining basic terminology, and then builds up important axioms of human spaceflight that lay the foundation for how such a framework can be developed. With the framework established, a methodology for characterizing the outcome using a mathematical model was developed by pulling from existing metrics and data collected on human performance in space. Representative test scenarios were run to show what information could be garnered and how it could be applied as a useful, understandable metric for future spacecraft design. While the model is the primary tangible product from this research, the more interesting outcome of

  10. MODELING THE SOLAR WIND AT THE ULYSSES, VOYAGER, AND NEW HORIZONS SPACECRAFT

    Energy Technology Data Exchange (ETDEWEB)

    Kim, T. K.; Pogorelov, N. V.; Zank, G. P. [Center for Space Plasma and Aeronomic Research, University of Alabama in Huntsville, Huntsville, AL 35805 (United States); Elliott, H. A.; McComas, D. J. [Southwest Research Institute, San Antonio, TX 78238 (United States)

    2016-11-20

    The outer heliosphere is a dynamic region shaped largely by the interaction between the solar wind and the interstellar medium. While interplanetary magnetic field and plasma observations by the Voyager spacecraft have significantly improved our understanding of this vast region, modeling the outer heliosphere still remains a challenge. We simulate the three-dimensional, time-dependent solar wind flow from 1 to 80 astronomical units (au), where the solar wind is assumed to be supersonic, using a two-fluid model in which protons and interstellar neutral hydrogen atoms are treated as separate fluids. We use 1 day averages of the solar wind parameters from the OMNI data set as inner boundary conditions to reproduce time-dependent effects in a simplified manner which involves interpolation in both space and time. Our model generally agrees with Ulysses data in the inner heliosphere and Voyager data in the outer heliosphere. Ultimately, we present the model solar wind parameters extracted along the trajectory of the New Horizons spacecraft. We compare our results with in situ plasma data taken between 11 and 33 au and at the closest approach to Pluto on 2015 July 14.

  11. Modeling the Solar Wind at the Ulysses, Voyager, and New Horizons Spacecraft

    Science.gov (United States)

    Kim, T. K.; Pogorelov, N. V.; Zank, G. P.; Elliott, H. A.; McComas, D. J.

    2016-11-01

    The outer heliosphere is a dynamic region shaped largely by the interaction between the solar wind and the interstellar medium. While interplanetary magnetic field and plasma observations by the Voyager spacecraft have significantly improved our understanding of this vast region, modeling the outer heliosphere still remains a challenge. We simulate the three-dimensional, time-dependent solar wind flow from 1 to 80 astronomical units (au), where the solar wind is assumed to be supersonic, using a two-fluid model in which protons and interstellar neutral hydrogen atoms are treated as separate fluids. We use 1 day averages of the solar wind parameters from the OMNI data set as inner boundary conditions to reproduce time-dependent effects in a simplified manner which involves interpolation in both space and time. Our model generally agrees with Ulysses data in the inner heliosphere and Voyager data in the outer heliosphere. Ultimately, we present the model solar wind parameters extracted along the trajectory of the New Horizons spacecraft. We compare our results with in situ plasma data taken between 11 and 33 au and at the closest approach to Pluto on 2015 July 14.

  12. MODELING THE SOLAR WIND AT THE ULYSSES, VOYAGER, AND NEW HORIZONS SPACECRAFT

    International Nuclear Information System (INIS)

    Kim, T. K.; Pogorelov, N. V.; Zank, G. P.; Elliott, H. A.; McComas, D. J.

    2016-01-01

    The outer heliosphere is a dynamic region shaped largely by the interaction between the solar wind and the interstellar medium. While interplanetary magnetic field and plasma observations by the Voyager spacecraft have significantly improved our understanding of this vast region, modeling the outer heliosphere still remains a challenge. We simulate the three-dimensional, time-dependent solar wind flow from 1 to 80 astronomical units (au), where the solar wind is assumed to be supersonic, using a two-fluid model in which protons and interstellar neutral hydrogen atoms are treated as separate fluids. We use 1 day averages of the solar wind parameters from the OMNI data set as inner boundary conditions to reproduce time-dependent effects in a simplified manner which involves interpolation in both space and time. Our model generally agrees with Ulysses data in the inner heliosphere and Voyager data in the outer heliosphere. Ultimately, we present the model solar wind parameters extracted along the trajectory of the New Horizons spacecraft. We compare our results with in situ plasma data taken between 11 and 33 au and at the closest approach to Pluto on 2015 July 14.

  13. Gravity and Nonconservative Force Model Tuning for the GEOSAT Follow-On Spacecraft

    Science.gov (United States)

    Lemoine, Frank G.; Zelensky, Nikita P.; Rowlands, David D.; Luthcke, Scott B.; Chinn, Douglas S.; Marr, Gregory C.; Smith, David E. (Technical Monitor)

    2000-01-01

    The US Navy's GEOSAT Follow-On spacecraft was launched on February 10, 1998, and the primary objective of the mission was to map the oceans using a radar altimeter. Three radar altimeter calibration campaigns were conducted in 1999 and 2000. The spacecraft is tracked by satellite laser ranging (SLR) and Doppler beacons, and a limited amount of data have been obtained from the Global Positioning System (GPS) receiver on board the satellite. Even with EGM96, the predicted radial orbit error due to gravity field mismodelling (to 70x70) remains high at 2.61 cm (compared to 0.88 cm for TOPEX). We report on the preliminary gravity model tuning for GFO using SLR and altimeter crossover data. Preliminary solutions using SLR and GFO/GFO crossover data from CalVal campaigns I and II in June-August 1999 and January-February 2000 have reduced the predicted radial orbit error to 1.9 cm, and further reduction will be possible when additional data are added to the solutions. The gravity model tuning has improved principally the low-order m-daily terms and has significantly reduced the geographically correlated error present in this satellite orbit. In addition to gravity field mismodelling, the largest contributor to the orbit error is non-conservative force mismodelling. We report on further nonconservative force model tuning results using available data from over one cycle in beta prime.

  14. Modelling Choice of Information Sources

    Directory of Open Access Journals (Sweden)

    Agha Faisal Habib Pathan

    2013-04-01

    This paper addresses the significance of traveller information sources, including mono-modal and multimodal websites, for travel decisions. The research follows a decision paradigm developed earlier, involving an information acquisition process for travel choices, and identifies the abstract characteristics of new information sources that deserve further investigation (e.g. by incorporating these in models and studying their significance in model estimation). A Stated Preference experiment is developed and the utility functions are formulated by expanding the travellers' choice set to include different combinations of sources of information. In order to study the underlying choice mechanisms, the resulting variables are examined in models based on different behavioural strategies, including utility maximisation and minimising the regret associated with the foregone alternatives. This research confirmed that RRM (Random Regret Minimisation) theory can fruitfully be used and can provide important insights for behavioural studies. The study also analyses the properties of travel planning websites and establishes a link between travel choices and the content, provenance, design, presence of advertisements, and presentation of information. The results indicate that travellers give particular credence to government-owned sources and put more importance on their own previous experiences than on any other single source of information. Information from multimodal websites is more influential than that on train-only websites, which in turn is more influential than information from friends, while information from coach-only websites is the least influential. A website with less search time, specific information on users' own criteria, and real-time information is regarded as most attractive

  15. Three-dimensional modeling, estimation, and fault diagnosis of spacecraft air contaminants.

    Science.gov (United States)

    Narayan, A P; Ramirez, W F

    1998-01-01

    A description is given of the design and implementation of a method to track the presence of air contaminants aboard a spacecraft using an accurate physical model and of a procedure that would raise alarms when certain tolerance levels are exceeded. Because our objective is to monitor the contaminants in real time, we make use of a state estimation procedure that filters measurements from a sensor system and arrives at an optimal estimate of the state of the system. The model essentially consists of a convection-diffusion equation in three dimensions, solved implicitly using the principle of operator splitting, and uses a flowfield obtained by the solution of the Navier-Stokes equations for the cabin geometry, assuming steady-state conditions. A novel implicit Kalman filter has been used for fault detection, a procedure that is an efficient way to track the state of the system and that uses the sparse nature of the state transition matrices.
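
    A hedged one-dimensional analogue of the transport model described above: a single operator-split step that advects the contaminant field explicitly (upwind) and diffuses it implicitly (backward Euler). Grid size, velocity, and diffusivity are illustrative values, not cabin parameters.

```python
# Hedged 1D analogue of an operator-split convection-diffusion step for a
# contaminant concentration field. All parameters are illustrative assumptions.
import numpy as np

n, dx, dt = 100, 0.05, 0.1
u, D = 0.02, 1e-3                       # advection speed [m/s], diffusivity [m^2/s]
c = np.zeros(n); c[10] = 1.0            # initial contaminant spike

def split_step(c):
    # Sub-step 1: explicit upwind advection (u > 0), periodic for simplicity.
    adv = c - u * dt / dx * (c - np.roll(c, 1))
    # Sub-step 2: implicit (backward Euler) diffusion, solved as a linear system.
    r = D * dt / dx**2
    A = (1 + 2 * r) * np.eye(n) - r * (np.eye(n, k=1) + np.eye(n, k=-1))
    return np.linalg.solve(A, adv)

for _ in range(200):
    c = split_step(c)
print("peak concentration after 20 s:", c.max())
```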

  16. Implementing model-based system engineering for the whole lifecycle of a spacecraft

    Science.gov (United States)

    Fischer, P. M.; Lüdtke, D.; Lange, C.; Roshani, F.-C.; Dannemann, F.; Gerndt, A.

    2017-09-01

    Design information of a spacecraft is collected over all phases in the lifecycle of a project. A lot of this information is exchanged between different engineering tasks and business processes. In some lifecycle phases, model-based system engineering (MBSE) has introduced system models and databases that help to organize such information and to keep it consistent for everyone. Nevertheless, none of the existing databases has yet covered the whole lifecycle. Virtual Satellite is the MBSE database developed at DLR. It has been used for quite some time in Phase A studies and is currently being extended to cover the whole lifecycle of spacecraft projects. Since it is unforeseeable which future use cases such a database needs to support in all these different projects, the underlying data model has to provide tailoring and extension mechanisms to its conceptual data model (CDM). This paper explains the mechanisms as they are implemented in Virtual Satellite, which enable extending the CDM over the course of the project without corrupting already stored information. As an upcoming major use case, Virtual Satellite will be implemented as the MBSE tool in the S2TEP project. This project provides a new satellite bus for internal research and several different payload missions in the future. This paper explains how Virtual Satellite will be used to manage configuration control problems associated with such a multi-mission platform. It discusses how the S2TEP project starts using the software for collecting the first design information from concurrent engineering studies, then making use of the extension mechanisms of the CDM to introduce further information artefacts such as the functional electrical architecture, thus linking more and more processes into an integrated MBSE approach.

  17. Modeling Temporal Processes in Early Spacecraft Design: Application of Discrete-Event Simulations for Darpa's F6 Program

    Science.gov (United States)

    Dubos, Gregory F.; Cornford, Steven

    2012-01-01

    While the ability to model the state of a space system over time is essential during spacecraft operations, the use of time-based simulations remains rare in preliminary design. The absence of the time dimension in most traditional early design tools can, however, become a hurdle when designing complex systems whose development and operations can be disrupted by various events, such as delays or failures. As the value delivered by a space system is highly affected by such events, exploring the trade space for designs that yield the maximum value calls for the explicit modeling of time. This paper discusses the use of discrete-event models to simulate spacecraft development schedules as well as operational scenarios and on-orbit resources in the presence of uncertainty. It illustrates how such simulations can be utilized to support trade studies, through the example of a tool developed for DARPA's F6 program to assist the design of "fractionated spacecraft".
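
    A minimal discrete-event sketch in the spirit of this record: development, launch, failure, and end-of-mission events are scheduled on a simulation clock with a priority queue, and a crude value metric is collected over Monte Carlo runs. Durations and probabilities are invented for illustration and do not reflect the F6 tool.

```python
# Toy discrete-event simulation of a spacecraft lifecycle; all numbers are assumptions.
import heapq, random

def simulate(seed=0):
    """One Monte Carlo run: development delay, launch, then operate until failure or EOM."""
    random.seed(seed)
    dev_time = max(random.gauss(24.0, 4.0), 0.0)       # months of development (assumed)
    events = [(dev_time, "launch")]
    launch_t, value = None, 0.0
    while events:
        t, kind = heapq.heappop(events)
        if kind == "launch":
            launch_t = t
            heapq.heappush(events, (t + 36.0, "end_of_mission"))           # 36-month design life
            heapq.heappush(events, (t + random.expovariate(1 / 60.0), "failure"))
        else:                                          # "failure" or "end_of_mission"
            value = t - launch_t                       # months of delivered service
            break
    return value

runs = [simulate(s) for s in range(1000)]
print("mean months of service:", sum(runs) / len(runs))
```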

  18. Software for Automated Generation of Reduced Thermal Models for Spacecraft Thermal Control, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Thermal analysis is increasingly used in the thermal engineering of spacecraft at every stage, including design, test, and ground-operation simulation. Current...

  19. CFD Fuel Slosh Modeling of Fluid-Structure Interaction in Spacecraft Propellant Tanks with Diaphragms

    Science.gov (United States)

    Sances, Dillon J.; Gangadharan, Sathya N.; Sudermann, James E.; Marsell, Brandon

    2010-01-01

    Liquid sloshing within spacecraft propellant tanks causes rapid energy dissipation at resonant modes, which can result in attitude destabilization of the vehicle. Identifying resonant slosh modes currently requires experimental testing and mechanical pendulum analogs to characterize the slosh dynamics. Computational Fluid Dynamics (CFD) techniques have recently been validated as an effective tool for simulating fuel slosh within free-surface propellant tanks. Propellant tanks often incorporate an internal flexible diaphragm to separate ullage and propellant which increases modeling complexity. A coupled fluid-structure CFD model is required to capture the damping effects of a flexible diaphragm on the propellant. ANSYS multidisciplinary engineering software employs a coupled solver for analyzing two-way Fluid Structure Interaction (FSI) cases such as the diaphragm propellant tank system. Slosh models generated by ANSYS software are validated by experimental lateral slosh test results. Accurate data correlation would produce an innovative technique for modeling fuel slosh within diaphragm tanks and provide an accurate and efficient tool for identifying resonant modes and the slosh dynamic response.

  20. Spacecraft Doppler tracking with possible violations of LLI and LPI: a theoretical modeling

    International Nuclear Information System (INIS)

    Deng Xue-Mei; Xie Yi

    2014-01-01

    Currently two-way and three-way spacecraft Doppler tracking techniques are widely used and play important roles in control and navigation of deep space missions. Starting from a one-way Doppler model, we extend the theory to two-way and three-way Doppler models by making them include possible violations of the local Lorentz invariance (LLI) and the local position invariance (LPI) in order to test the Einstein equivalence principle, which is the cornerstone of general relativity and all other metric theories of gravity. After taking the finite speed of light into account, which is the so-called light time solution (LTS), we make these models depend on the time of reception of the signal only for practical convenience. We find that possible violations of LLI and LPI cannot affect two-way Doppler tracking under a linear approximation of LTS, although this approximation is sufficiently good for most cases in the solar system. We also show that, in three-way Doppler tracking, possible violations of LLI and LPI are only associated with two stations, which suggests that it is better to set the stations at places with significant differences in velocities and gravitational potentials to obtain a high level of sensitivity for the tests

  1. Reduced Order Electrostatic Force Field Modeling of 3D Spacecraft Shapes

    Data.gov (United States)

    National Aeronautics and Space Administration — The Autonomous Vehicles Systems (AVS) Lab at CU Boulder has been pursuing research in Coulomb charge control of spacecraft for several years. The electrostatic...

  2. Long-term orbit prediction for Tiangong-1 spacecraft using the mean atmosphere model

    Science.gov (United States)

    Tang, Jingshi; Liu, Lin; Cheng, Haowen; Hu, Songjie; Duan, Jianfeng

    2015-03-01

    China is planning to complete its first space station by 2020. For long-term management and maintenance, the orbit of the space station needs to be predicted for a long period of time. Since the space station is expected to work in a low-Earth orbit, the error in the a priori atmosphere model contributes significantly to the rapid increase of the predicted orbit error. When the orbit is predicted for 20 days, the error in the a priori atmosphere model, if not properly corrected, could induce a semi-major axis error of up to a few kilometers and an overall position error of several thousand kilometers, respectively. In this work, we use a mean atmosphere model averaged from NRLMSISE00. The a priori reference mean density can be corrected during the orbit determination. For long-term orbit prediction, we use a sufficiently long period of observations and obtain a series of diurnal mean densities. This series contains the recent variation of the atmosphere density and can be analyzed for various periodic components. After being properly fitted, the mean density can be predicted and then applied in the orbit prediction. Here we carry out the test with China's Tiangong-1 spacecraft at an altitude of about 340 km and we show that this method is simple and flexible. The densities predicted with this approach can serve in the long-term orbit prediction. In several 20-day prediction tests, most predicted orbits show semi-major axis errors better than 700 m and overall position errors better than 400 km.
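
    A hedged sketch of the density-prediction idea above: fit a mean level, a trend, and a periodic component to a series of diurnal mean densities by least squares, then extrapolate over a 20-day window. The synthetic series and the assumed 27-day period merely stand in for densities recovered during orbit determination.

```python
# Sketch: fit and extrapolate a series of diurnal mean densities (synthetic data).
import numpy as np

days = np.arange(120.0)
rho = (3.0 + 0.4 * np.sin(2 * np.pi * days / 27.0)        # synthetic solar-rotation signal
       + 0.05 * days / 120.0
       + 0.05 * np.random.default_rng(1).normal(size=days.size))

# Least-squares fit: constant + linear trend + 27-day sinusoid (assumed dominant period).
A = np.column_stack([np.ones_like(days), days,
                     np.sin(2 * np.pi * days / 27.0), np.cos(2 * np.pi * days / 27.0)])
coef, *_ = np.linalg.lstsq(A, rho, rcond=None)

future = np.arange(120.0, 140.0)                            # 20-day prediction window
A_f = np.column_stack([np.ones_like(future), future,
                       np.sin(2 * np.pi * future / 27.0), np.cos(2 * np.pi * future / 27.0)])
rho_pred = A_f @ coef
print("predicted mean densities (arbitrary units):", rho_pred[:5])
```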

  3. Long-term orbit prediction for China's Tiangong-1 spacecraft based on mean atmosphere model

    Science.gov (United States)

    Tang, Jingshi; Liu, Lin; Miao, Manqian

    Tiangong-1 is China's test module for its future space station. It went through three successful rendezvous and dockings with Shenzhou spacecraft from 2011 to 2013. For long-term management and maintenance, the orbit sometimes needs to be predicted for a long period of time. As Tiangong-1 works in a low-Earth orbit with an altitude of about 300-400 km, the error in the a priori atmosphere model contributes significantly to the rapid increase of the predicted orbit error. When the orbit is predicted for 10-20 days, the error in the a priori atmosphere model, if not properly corrected, could induce semi-major axis errors of up to a few kilometers and overall position errors of several thousand kilometers. In this work, we use a mean atmosphere model averaged from NRLMSIS00. The a priori reference mean density can be corrected during precise orbit determination (POD). For applications in long-term orbit prediction, the observations are first accumulated. With a sufficiently long period of observations, we are able to obtain a series of diurnal mean densities. This series captures the recent variation of the atmosphere density and can be analyzed for various periodic components. After being properly fitted, the mean density can be predicted and then applied in the orbit prediction. We show that the densities predicted with this approach can serve to increase the accuracy of the predicted orbit. In several 20-day prediction tests, most predicted orbits show semi-major axis errors better than 700 m and overall position errors better than 600 km.

  4. Model-based software engineering for an optical navigation system for spacecraft

    Science.gov (United States)

    Franz, T.; Lüdtke, D.; Maibaum, O.; Gerndt, A.

    2018-06-01

    The project Autonomous Terrain-based Optical Navigation (ATON) at the German Aerospace Center (DLR) is developing an optical navigation system for future landing missions on celestial bodies such as the moon or asteroids. Image data obtained by optical sensors can be used for autonomous determination of the spacecraft's position and attitude. Camera-in-the-loop experiments in the Testbed for Robotic Optical Navigation (TRON) laboratory and flight campaigns with unmanned aerial vehicle (UAV) are performed to gather flight data for further development and to test the system in a closed-loop scenario. The software modules are executed in the C++ Tasking Framework that provides the means to concurrently run the modules in separated tasks, send messages between tasks, and schedule task execution based on events. Since the project is developed in collaboration with several institutes in different domains at DLR, clearly defined and well-documented interfaces are necessary. Preventing misconceptions caused by differences between various development philosophies and standards turned out to be challenging. After the first development cycles with manual Interface Control Documents (ICD) and manual implementation of the complex interactions between modules, we switched to a model-based approach. The ATON model covers a graphical description of the modules, their parameters and communication patterns. Type and consistency checks on this formal level help to reduce errors in the system. The model enables the generation of interfaces and unified data types as well as their documentation. Furthermore, the C++ code for the exchange of data between the modules and the scheduling of the software tasks is created automatically. With this approach, changing the data flow in the system or adding additional components (e.g., a second camera) have become trivial.

  5. Model-based software engineering for an optical navigation system for spacecraft

    Science.gov (United States)

    Franz, T.; Lüdtke, D.; Maibaum, O.; Gerndt, A.

    2017-09-01

    The project Autonomous Terrain-based Optical Navigation (ATON) at the German Aerospace Center (DLR) is developing an optical navigation system for future landing missions on celestial bodies such as the moon or asteroids. Image data obtained by optical sensors can be used for autonomous determination of the spacecraft's position and attitude. Camera-in-the-loop experiments in the Testbed for Robotic Optical Navigation (TRON) laboratory and flight campaigns with unmanned aerial vehicle (UAV) are performed to gather flight data for further development and to test the system in a closed-loop scenario. The software modules are executed in the C++ Tasking Framework that provides the means to concurrently run the modules in separated tasks, send messages between tasks, and schedule task execution based on events. Since the project is developed in collaboration with several institutes in different domains at DLR, clearly defined and well-documented interfaces are necessary. Preventing misconceptions caused by differences between various development philosophies and standards turned out to be challenging. After the first development cycles with manual Interface Control Documents (ICD) and manual implementation of the complex interactions between modules, we switched to a model-based approach. The ATON model covers a graphical description of the modules, their parameters and communication patterns. Type and consistency checks on this formal level help to reduce errors in the system. The model enables the generation of interfaces and unified data types as well as their documentation. Furthermore, the C++ code for the exchange of data between the modules and the scheduling of the software tasks is created automatically. With this approach, changing the data flow in the system or adding additional components (e.g., a second camera) have become trivial.

  6. Global modeling of flux transfer events: generation mechanism and spacecraft signatures

    Science.gov (United States)

    Raeder, J.

    2003-04-01

    Magnetic reconnection is a fundamental mode of energy and momentum transfer from the solar wind to the magnetosphere. It is known to occur in different forms depending on solar wind and magnetospheric conditions. In particular, steady reconnection can be distinguished from pulse-like reconnection events, which are also known as Flux Transfer Events (FTEs). The formation mechanism of FTEs and their controlling factors remain controversial. We use global MHD simulations of Earth's magnetosphere to show that for southward IMF conditions: a) steady reconnection preferentially occurs without FTEs when the stagnation flow line nearly coincides with the X-line location, which requires small dipole tilt and nearly due southward IMF; b) FTEs occur when the flow/field symmetry is broken, which requires either a large dipole tilt and/or a substantial east-west component of the IMF; c) the predicted spacecraft signature and the repetition frequency of FTEs in the simulations agree very well with typical observations, lending credibility to the model; d) the fundamental process that leads to FTE formation is multiple X-line formation caused by the flow and field patterns in the magnetosheath and requires no intrinsic plasma property variations such as variable resistivity; e) if the dipole tilt breaks the symmetry, FTEs occur only in the winter hemisphere, whereas the reconnection signatures in the summer hemisphere are steady with no bipolar FTE-like signatures; f) if the IMF east-west field component breaks the symmetry, FTEs occur in both hemispheres; and g) FTE formation depends on sufficient resolution and low diffusion in the model -- coarse resolution and/or high diffusivity lead to flow-through reconnection signatures that appear unphysical given the frequent observation of FTEs.

  7. Revamping Spacecraft Operational Intelligence

    Science.gov (United States)

    Hwang, Victor

    2012-01-01

    The EPOXI flight mission has been testing a new commercial system, Splunk, which employs data mining techniques to organize and present spacecraft telemetry data in a high-level manner. By abstracting away data-source specific details, Splunk unifies arbitrary data formats into one uniform system. This not only reduces the time and effort for retrieving relevant data, but it also increases operational visibility by allowing a spacecraft team to correlate data across many different sources. Splunk's scalable architecture coupled with its graphing modules also provide a solid toolset for generating data visualizations and building real-time applications such as browser-based telemetry displays.

  8. Assessing Model Characterization of Single Source ...

    Science.gov (United States)

    Aircraft measurements made downwind from specific coal fired power plants during the 2013 Southeast Nexus field campaign provide a unique opportunity to evaluate single source photochemical model predictions of both O3 and secondary PM2.5 species. The model did well at predicting downwind plume placement. The model shows similar patterns of an increasing fraction of PM2.5 sulfate ion to the sum of SO2 and PM2.5 sulfate ion by distance from the source compared with ambient based estimates. The model was less consistent in capturing downwind ambient based trends in conversion of NOX to NOY from these sources. Source sensitivity approaches capture near-source O3 titration by fresh NO emissions, in particular subgrid plume treatment. However, capturing this near-source chemical feature did not translate into better downwind peak estimates of single source O3 impacts. The model estimated O3 production from these sources but often was lower than ambient based source production. The downwind transect ambient measurements, in particular secondary PM2.5 and O3, have some level of contribution from other sources which makes direct comparison with model source contribution challenging. Model source attribution results suggest contribution to secondary pollutants from multiple sources even where primary pollutants indicate the presence of a single source. The National Exposure Research Laboratory (NERL) Computational Exposure Division (CED) develops and evaluates data, deci

  9. Modelling the reflective thermal contribution to the acceleration of the Pioneer spacecraft

    Energy Technology Data Exchange (ETDEWEB)

    Francisco, F., E-mail: frederico.francisco@ist.utl.pt [Instituto de Plasmas e Fusao Nuclear, Instituto Superior Tecnico, Universidade Tecnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Bertolami, O., E-mail: orfeu.bertolami@fc.up.pt [Departamento de Fisica e Astronomia, Faculdade de Ciencias, Universidade do Porto, Rua do Campo Alegre 687, 4169-007 Porto (Portugal); Instituto de Plasmas e Fusao Nuclear, Instituto Superior Tecnico, Universidade Tecnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Gil, P.J.S., E-mail: p.gil@dem.ist.utl.pt [Departamento de Engenharia Mecanica and IDMEC - Instituto de Engenharia Mecanica, Instituto Superior Tecnico, Universidade Tecnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal); Paramos, J., E-mail: paramos@ist.edu [Instituto de Plasmas e Fusao Nuclear, Instituto Superior Tecnico, Universidade Tecnica de Lisboa, Av. Rovisco Pais 1, 1049-001 Lisboa (Portugal)

    2012-05-23

    We present an improved method to compute the radiative momentum transfer in the Pioneer 10 and 11 spacecraft that takes into account both diffusive and specular reflection. The method allows for more reliable results regarding the thermal acceleration of the deep-space probes, confirming previous findings. A parametric analysis is performed in order to set upper and lower bounds for the thermal acceleration and its evolution with time.

  10. Modelling the reflective thermal contribution to the acceleration of the Pioneer spacecraft

    International Nuclear Information System (INIS)

    Francisco, F.; Bertolami, O.; Gil, P.J.S.; Páramos, J.

    2012-01-01

    We present an improved method to compute the radiative momentum transfer in the Pioneer 10 and 11 spacecraft that takes into account both diffusive and specular reflection. The method allows for more reliable results regarding the thermal acceleration of the deep-space probes, confirming previous findings. A parametric analysis is performed in order to set upper and lower bounds for the thermal acceleration and its evolution with time.

  11. Modeling Attitude Dynamics in Simulink: A Study of the Rotational and Translational Motion of a Spacecraft Given Torques and Impulses Generated by RMS Hand Controllers

    Science.gov (United States)

    Mauldin, Rebecca H.

    2010-01-01

    In order to study and control the attitude of a spacecraft, it is necessary to understand the natural motion of a body in orbit. Assuming a spacecraft to be a rigid body, dynamics describes the complete motion of the vehicle through the translational and rotational motion of the body. The Simulink Attitude Analysis Model applies the equations of rigid body motion to the study of a spacecraft's attitude in orbit. Using a TCP/IP connection, Matlab reads the values of the Remote Manipulator System (RMS) hand controllers and passes them to Simulink as specified torque and impulse profiles. Simulink then uses the governing kinematic and dynamic equations of a rigid body in low Earth orbit (LEO) to plot the attitude response of a spacecraft for five seconds, given known applied torques and impulses and constant principal moments of inertia.
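
    A minimal sketch of the rigid-body attitude propagation this record describes: Euler's rotational equations for the body rates plus quaternion kinematics, integrated for five seconds under a constant applied torque. The inertia matrix, torque, and simple Euler integration are illustrative assumptions, not the Simulink model itself.

```python
# Minimal rigid-body attitude propagation: Euler's equations + quaternion kinematics.
# Inertias and torque are illustrative assumptions.
import numpy as np

I = np.diag([10.0, 12.0, 8.0])                   # principal moments of inertia [kg m^2]
I_inv = np.linalg.inv(I)
w = np.array([0.0, 0.0, 0.0])                    # body angular rate [rad/s]
q = np.array([1.0, 0.0, 0.0, 0.0])               # attitude quaternion (scalar first)
torque = np.array([0.1, 0.0, 0.05])              # constant applied torque [N m]

dt = 0.01
for _ in range(500):                             # 5 seconds
    w_dot = I_inv @ (torque - np.cross(w, I @ w))        # Euler's rotational equations
    w = w + w_dot * dt
    # Quaternion kinematics: q_dot = 0.5 * Omega(w) * q
    wx, wy, wz = w
    Omega = np.array([[0.0, -wx, -wy, -wz],
                      [wx,  0.0,  wz, -wy],
                      [wy, -wz,  0.0,  wx],
                      [wz,  wy, -wx,  0.0]])
    q = q + 0.5 * Omega @ q * dt
    q = q / np.linalg.norm(q)                    # renormalize to unit length
print("final rates [rad/s]:", w, "final quaternion:", q)
```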

  12. Multibody dynamical modeling for spacecraft docking process with spring-damper buffering device: A new validation approach

    Science.gov (United States)

    Daneshjou, Kamran; Alibakhshi, Reza

    2018-01-01

    In the current manuscript, the process of spacecraft docking, as one of the main risky operations in an on-orbit servicing mission, is modeled based on unconstrained multibody dynamics. The spring-damper buffering device is utilized here in the docking probe-cone system for micro-satellites. Because impact occurs inevitably during the docking process, and the motion characteristics of multibody systems are strongly affected by this phenomenon, a continuous contact force model needs to be considered. The spring-damper buffering device, which keeps the spacecraft stable in orbit when impact occurs, connects a base (cylinder) inserted in the chaser satellite and the end of the docking probe. Furthermore, by considering a revolute joint equipped with a torsional shock absorber between the base and the chaser satellite, the docking probe can experience both translational and rotational motions simultaneously. Although the spacecraft docking process with buffering mechanisms may be modeled by constrained multibody dynamics, this paper deals with a simple and efficient formulation to eliminate the surplus generalized coordinates and solve the impact docking problem based on unconstrained Lagrangian mechanics. In an example problem, model verification is first accomplished by comparing the computed results with those recently reported in the literature. Second, according to a new alternative validation approach, which is based on the constrained multibody problem, the accuracy of the presented model can also be evaluated. This proposed verification approach can be applied to indirectly solve constrained multibody problems with minimum effort. The time history of impact force, the influence of system flexibility, and the physical interaction between the shock absorber and the penetration depth caused by impact are the issues followed in this paper. Third, the MATLAB/SIMULINK multibody dynamic analysis software will be applied to build the impact docking model to validate computed results and

  13. A Preliminary Model for Spacecraft Propulsion Performance Analysis Based on Nuclear Gain and Subsystem Mass-Power Balances

    Science.gov (United States)

    Chakrabarti, Suman; Schmidt, George R.; Thio, Y. C.; Hurst, Chantelle M.

    1999-01-01

    A preliminary model for spacecraft propulsion performance analysis based on nuclear gain and subsystem mass-power balances is presented in viewgraph form. For very fast missions with straight-line trajectories, it has been shown that mission trip time is proportional to the cube root of alpha (the vehicle mass-power ratio). Analysis of spacecraft power systems via a power balance and examination of gain vs. mass-power ratio have shown: 1) a minimum gain is needed to have enough power for thruster and driver operation; and 2) increases in gain result in decreases in the overall mass-power ratio, which in turn leads to greater achievable accelerations. However, subsystem mass-power ratios and efficiencies are crucial: less efficient values for these can partially offset the effect of nuclear gain. Therefore, it is of interest to monitor the progress of gain-limited subsystem technologies, and it is also possible that power-limited systems with sufficiently low alpha may be competitive for such ambitious missions. Topics include: space flight requirements; spacecraft energy gain; control theory for performance; mission assumptions; round trips (time and distance); trip times; vehicle acceleration; and minimizing trip times.
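
    A small worked illustration of the scaling quoted above (trip time proportional to the cube root of alpha); the baseline alpha and trip time are arbitrary assumptions chosen only to show the effect.

```python
# Worked illustration of trip time scaling with the cube root of alpha.
# Baseline values are arbitrary assumptions, not results from the paper.
baseline_alpha = 10.0      # kg/kW
baseline_trip = 120.0      # days for a reference straight-line mission (assumed)

for alpha in (10.0, 5.0, 1.25):
    trip = baseline_trip * (alpha / baseline_alpha) ** (1.0 / 3.0)
    print(f"alpha = {alpha:5.2f} kg/kW  ->  trip time ~ {trip:6.1f} days")
# Note: an 8x reduction in alpha halves the trip time under this scaling.
```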

  14. Learning models for multi-source integration

    Energy Technology Data Exchange (ETDEWEB)

    Tejada, S.; Knoblock, C.A.; Minton, S. [Univ. of Southern California/ISI, Marina del Rey, CA (United States)

    1996-12-31

    Because of the growing number of information sources available through the internet there are many cases in which information needed to solve a problem or answer a question is spread across several information sources. For example, when given two sources, one about comic books and the other about super heroes, you might want to ask the question "Is Spiderman a Marvel Super Hero?" This query accesses both sources; therefore, it is necessary to have information about the relationships of the data within each source and between sources to properly access and integrate the data retrieved. The SIMS information broker captures this type of information in the form of a model. All the information sources map into the model providing the user a single interface to multiple sources.

  15. MULTI-SPACECRAFT OBSERVATIONS AND TRANSPORT MODELING OF ENERGETIC ELECTRONS FOR A SERIES OF SOLAR PARTICLE EVENTS IN AUGUST 2010

    Energy Technology Data Exchange (ETDEWEB)

    Dröge, W.; Kartavykh, Y. Y. [Institut für Theoretische Physik und Astrophysik, Universität Würzburg, D-97074 Würzburg (Germany); Dresing, N.; Klassen, A. [Institut für Experimentelle und Angewandte Physik, Universität Kiel, D-24118 Kiel (Germany)

    2016-08-01

    During 2010 August a series of solar particle events was observed by the two STEREO spacecraft as well as near-Earth spacecraft. The events, occurring on August 7, 14, and 18, originated from active regions 11093 and 11099. We combine in situ and remote-sensing observations with predictions from our model of three-dimensional anisotropic particle propagation in order to investigate the physical processes that caused the large angular spreads of energetic electrons during these events. In particular, we address the effects of the lateral transport of the electrons in the solar corona that is due to diffusion perpendicular to the average magnetic field in the interplanetary medium. We also study the influence of two coronal mass ejections and associated shock waves on the electron propagation, and a possible time variation of the transport conditions during the above period. For the August 18 event we also utilize electron observations from the MESSENGER spacecraft at a distance of 0.31 au from the Sun in an attempt to separate radial and longitudinal dependences in the transport process. Our modeling shows that the parallel and perpendicular diffusion mean free paths of electrons can vary significantly not only as a function of the radial distance, but also of the heliospheric longitude. Normalized to a distance of 1 au, we derive values of λ_∥ in the range of 0.15–0.6 au, and values of λ_⊥ in the range of 0.005–0.01 au. We discuss how our results relate to various theoretical models for perpendicular diffusion, and whether there might be a functional relationship between the perpendicular and the parallel mean free path.

  16. Photovoltaic sources modeling and emulation

    CERN Document Server

    Piazza, Maria Carmela Di

    2012-01-01

    This book offers an extensive introduction to the modeling of photovoltaic generators and their emulation by means of power electronic converters, which will aid in understanding and improving the design and setup of new PV plants.

  17. Spacecraft operations

    CERN Document Server

    Sellmaier, Florian; Schmidhuber, Michael

    2015-01-01

    The book describes the basic concepts of spaceflight operations for both human and unmanned missions. The basic subsystems of a space vehicle are explained in dedicated chapters, and the relationship between spacecraft design and the unique space environment is laid out. Flight dynamics is taught, as well as ground segment requirements. Mission operations are divided into preparation (including management aspects), execution, and planning. Deep space missions and space robotic operations are included as special cases. The book is based on a course held at the German Space Operations Center (GSOC).

  18. Gravity-gradient dynamics experiments performed in orbit utilizing the Radio Astronomy Explorer (RAE-1) spacecraft

    Science.gov (United States)

    Walden, H.

    1973-01-01

    Six dynamic experiments were performed in earth orbit utilizing the RAE spacecraft in order to test the accuracy of the mathematical model of RAE dynamics. The spacecraft consisted of four flexible antenna booms, mounted on a rigid cylindrical spacecraft hub at center, for measuring radio emissions from extraterrestrial sources. Attitude control of the gravity stabilized spacecraft was tested by using damper clamping, single lower leading boom operations, and double lower boom operations. Results and conclusions of the in-orbit dynamic experiments proved the accuracy of the analytic techniques used to model RAE dynamical behavior.

  19. The Commercial Open Source Business Model

    Science.gov (United States)

    Riehle, Dirk

    Commercial open source software projects are open source software projects that are owned by a single firm that derives a direct and significant revenue stream from the software. Commercial open source at first glance represents an economic paradox: how can a firm earn money if it is making its product available for free as open source? This paper presents the core properties of commercial open source business models and discusses how they work. Using a commercial open source approach, firms can get to market faster with a superior product at lower cost than is possible for traditional competitors. The paper shows how these benefits accrue from an engaged and self-supporting user community. Lacking any prior comprehensive reference, this paper is based on an analysis of public statements by practitioners of commercial open source. It forges the various anecdotes into a coherent description of revenue generation strategies and relevant business functions.

  20. An analytic uranium sources model

    International Nuclear Information System (INIS)

    Singer, C.E.

    2001-01-01

    This document presents a method for estimating uranium resources as a continuous function of extraction costs and describing the uncertainty in the resulting fit. The estimated functions provide convenient extrapolations of currently available data on uranium extraction cost and can be used to predict the effect of resource depletion on future uranium supply costs. As such, they are a useful input for economic models of the nuclear energy sector. The method described here pays careful attention to minimizing built-in biases in the fitting procedure and defines ways to describe the uncertainty in the resulting fits in order to render the procedure and its results useful to the widest possible variety of potential users. (author)

  1. Spacecraft Attitude Determination

    DEFF Research Database (Denmark)

    Bak, Thomas

    This thesis describes the development of an attitude determination system for spacecraft based only on magnetic field measurements. The need for such a system is motivated by the increased demands for inexpensive, lightweight solutions for small spacecraft. These spacecraft demand full attitude...... determination based on simple, reliable sensors. Meeting these objectives with a single vector magnetometer is difficult and requires temporal fusion of data in order to avoid local observability problems. In order to guarantee globally nonsingular solutions, quaternions are generally the preferred attitude...... is a detailed study of the influence of approximations in the modeling of the system. The quantitative effects of errors in the process and noise statistics are discussed in detail. The third contribution is the introduction of these methods to the attitude determination on-board the Ørsted satellite...

  2. Characterization and modeling of the heat source

    Energy Technology Data Exchange (ETDEWEB)

    Glickstein, S.S.; Friedman, E.

    1993-10-01

    A description of the input energy source is basic to any numerical modeling formulation designed to predict the outcome of the welding process. The source is fundamental and unique to each joining process. The resultant output of any numerical model will be affected by the initial description of both the magnitude and distribution of the input energy of the heat source. Thus, calculated weld shape, residual stresses, weld distortion, cooling rates, metallurgical structure, material changes due to excessive temperatures and potential weld defects are all influenced by the initial characterization of the heat source. An understanding of both the physics and the mathematical formulation of these sources is essential for describing the input energy distribution. This section provides a brief review of the physical phenomena that influence the input energy distributions and discusses several different models of heat sources that have been used in simulating arc welding, high energy density welding and resistance welding processes. Both simplified and detailed models of the heat source are discussed.
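
    As a hedged example of a simple heat-source idealization of the kind surveyed here, the sketch below evaluates a circularly symmetric Gaussian surface flux that integrates to a prescribed total power Q. The power and width values are arbitrary, and detailed welding sources (e.g. double-ellipsoid volumetric distributions) involve additional parameters.

```python
# Simple Gaussian surface heat-source idealization; values are arbitrary assumptions.
import numpy as np

Q = 2000.0        # total absorbed power [W] (assumed)
sigma = 2e-3      # distribution radius parameter [m] (assumed)

def surface_flux(x, y):
    """Gaussian surface heat flux [W/m^2] that integrates to Q over the plane."""
    return Q / (2 * np.pi * sigma**2) * np.exp(-(x**2 + y**2) / (2 * sigma**2))

# Sanity check: numerically integrate the flux over a patch much larger than sigma.
xs = np.linspace(-0.02, 0.02, 401)
X, Y = np.meshgrid(xs, xs)
total = np.trapz(np.trapz(surface_flux(X, Y), xs, axis=1), xs)
print(f"recovered total power: {total:.1f} W (target {Q} W)")
```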

  3. Balmorel open source energy system model

    DEFF Research Database (Denmark)

    Wiese, Frauke; Bramstoft, Rasmus; Koduvere, Hardi

    2018-01-01

    As the world progresses towards a cleaner energy future with more variable renewable energy sources, energy system models are required to deal with new challenges. This article describes the design, development and applications of the open source energy system model Balmorel, which is a result...... of a long and fruitful cooperation between public and private institutions within energy system research and analysis. The purpose of the article is to explain the modelling approach, to highlight strengths and challenges of the chosen approach, to create awareness about the possible applications...... of Balmorel as well as to inspire new model developments and encourage new users to join the community. Some of the key strengths of the model are the flexible handling of the time and space dimensions and the combination of operation and investment optimisation. Its open source character enables diverse...

  4. Faster universal modeling for two source classes

    NARCIS (Netherlands)

    Nowbakht, A.; Willems, F.M.J.; Macq, B.; Quisquater, J.-J.

    2002-01-01

    The Universal Modeling algorithms proposed in [2] for two general classes of finite-context sources are reviewed. The above methods were constructed by viewing a model structure as a partition of the context space and realizing that a partition can be reached through successive splits. Here we start

  5. System level modelling with open source tools

    DEFF Research Database (Denmark)

    Jakobsen, Mikkel Koefoed; Madsen, Jan; Niaki, Seyed Hosein Attarzadeh

    , called ForSyDe. ForSyDe is available under the open Source approach, which allows small and medium enterprises (SME) to get easy access to advanced modeling capabilities and tools. We give an introduction to the design methodology through the system level modeling of a simple industrial use case, and we...

  6. Probabilistic forward model for electroencephalography source analysis

    International Nuclear Information System (INIS)

    Plis, Sergey M; George, John S; Jun, Sung C; Ranken, Doug M; Volegov, Petr L; Schmidt, David M

    2007-01-01

    Source localization by electroencephalography (EEG) requires an accurate model of head geometry and tissue conductivity. The estimation of source time courses from EEG or from EEG in conjunction with magnetoencephalography (MEG) requires a forward model consistent with true activity for the best outcome. Although MRI provides an excellent description of soft tissue anatomy, a high resolution model of the skull (the dominant resistive component of the head) requires CT, which is not justified for routine physiological studies. Although a number of techniques have been employed to estimate tissue conductivity, no present techniques provide the noninvasive 3D tomographic mapping of conductivity that would be desirable. We introduce a formalism for probabilistic forward modeling that allows the propagation of uncertainties in model parameters into possible errors in source localization. We consider uncertainties in the conductivity profile of the skull, but the approach is general and can be extended to other kinds of uncertainties in the forward model. We and others have previously suggested the possibility of extracting conductivity of the skull from measured electroencephalography data by simultaneously optimizing over dipole parameters and the conductivity values required by the forward model. Using Cramer-Rao bounds, we demonstrate that this approach does not improve localization results nor does it produce reliable conductivity estimates. We conclude that the conductivity of the skull has to be either accurately measured by an independent technique, or that the uncertainties in the conductivity values should be reflected in uncertainty in the source location estimates

  7. A model for superluminal radio sources

    International Nuclear Information System (INIS)

    Milgrom, M.; Bahcall, J.N.

    1977-01-01

    A geometrical model for superluminal radio sources is described. Six predictions that can be tested by observations are summarized. The results are in agreement with all the available observations. In this model, the Hubble constant is the only numerical parameter that is important in interpreting the observed rates of change of angular separations for small redshifts. The available observations imply that H₀ is less than 55 km/s/Mpc if the model is correct. (author)
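
    For context (these relations are standard kinematics, not taken from the record itself), geometric models of superluminal sources usually invoke the apparent transverse speed of an emitting blob moving at speed βc at angle θ to the line of sight, and the small-redshift conversion of a measured proper motion μ into a linear speed, which is where the Hubble constant enters:

        \[
          \beta_{\mathrm{app}} \;=\; \frac{\beta \sin\theta}{1 - \beta\cos\theta},
          \qquad
          v_{\mathrm{app}} \;\simeq\; \mu\, D \;\simeq\; \mu\, \frac{c z}{H_0} \quad (z \ll 1).
        \]

    For a fixed observed μ and z, a smaller H₀ implies a larger distance and hence a larger apparent speed, so a cap on the admissible speeds in a given geometrical model translates into a bound on H₀.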

  8. Air quality dispersion models from energy sources

    International Nuclear Information System (INIS)

    Lazarevska, Ana

    1996-01-01

    Along with the continuing development of new air quality models that cover more complex problems, in the Clean Air Act, legislated by the US Congress, a consistency and standardization of air quality model applications were encouraged. As a result, the Guidelines on Air Quality Models were published, which are regularly reviewed by the Office of Air Quality Planning and Standards, EPA. These guidelines provide a basis for estimating the air quality concentrations used in assessing control strategies as well as defining emission limits. This paper presents a review and analysis of the recent versions of the models: Simple Terrain Stationary Source Model; Complex Terrain Dispersion Model; Ozone, Carbon Monoxide and Nitrogen Dioxide Models; Long Range Transport Model; Other Phenomena Models: Fugitive Dust/Fugitive Emissions, Particulate Matter, Lead, Air Pathway Analyses - Air Toxics as well as Hazardous Waste. 8 refs., 4 tabs., 2 ills

  9. Mathematical Modeling of the Thermal State of an Isothermal Element with Account of the Radiant Heat Transfer Between Parts of a Spacecraft

    Science.gov (United States)

    Alifanov, O. M.; Paleshkin, A. V.; Terent'ev, V. V.; Firsyuk, S. O.

    2016-01-01

    A methodological approach to determination of the thermal state at a point on the surface of an isothermal element of a small spacecraft has been developed. A mathematical model of heat transfer between surfaces of intricate geometric configuration has been described. In this model, account was taken of the external field of radiant fluxes and of the differentiated mutual influence of the surfaces. An algorithm for calculation of the distribution of the density of the radiation absorbed by surface elements of the object under study has been proposed. The temperature field on the lateral surface of the spacecraft exposed to sunlight and on its shady side has been calculated. By determining the thermal state of magnetic controls of the orientation system as an example, the authors have assessed the contribution of the radiation coming from the solar-cell panels and from the spacecraft surface.
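
    A minimal sketch of the kind of radiative bookkeeping the record describes: grey-body exchange between two isothermal surfaces through a prescribed view factor, plus an absorbed external (solar) flux. The areas, view factor, optical properties and temperatures below are placeholder assumptions, not parameters of the cited model.

        import numpy as np

        SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
        SOLAR_FLUX = 1361.0     # solar constant near Earth, W/m^2 (assumed)

        def net_radiative_exchange(T1, T2, area1, view_12, eps1=0.85, eps2=0.85):
            """Net radiative heat flow from surface 1 to surface 2 (W), simple grey-body form."""
            return eps1 * eps2 * SIGMA * area1 * view_12 * (T1**4 - T2**4)

        def absorbed_solar(area, alpha=0.3, cos_incidence=1.0):
            """Solar power absorbed by a flat element (W)."""
            return alpha * SOLAR_FLUX * area * max(cos_incidence, 0.0)

        # Example: an element on the sunlit side exchanging with a solar-array panel.
        q_exchange = net_radiative_exchange(T1=320.0, T2=350.0, area1=0.02, view_12=0.15)
        q_sun = absorbed_solar(area=0.02, alpha=0.3, cos_incidence=0.8)
        print(f"net exchange {q_exchange:.2f} W, absorbed solar {q_sun:.2f} W")

    A full model of the type described would sum such terms over all mutually visible surface elements with computed view factors.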

  10. Developing a Successful Open Source Training Model

    Directory of Open Access Journals (Sweden)

    Belinda Lopez

    2010-01-01

    Training programs for open source software provide a tangible, and sellable, product. A successful training program not only builds revenue, it also adds to the overall body of knowledge available for the open source project. By gathering best practices and taking advantage of the collective expertise within a community, it may be possible for a business to partner with an open source project to build a curriculum that promotes the project and supports the needs of the company's training customers. This article describes the initial approach used by Canonical, the commercial sponsor of the Ubuntu Linux operating system, to engage the community in the creation of its training offerings. We then discuss alternate curriculum creation models and some of the conditions that are necessary for successful collaboration between creators of existing documentation and commercial training providers.

  11. Determination of Realistic Fire Scenarios in Spacecraft

    Science.gov (United States)

    Dietrich, Daniel L.; Ruff, Gary A.; Urban, David

    2013-01-01

    This paper expands on previous work that examined how large a fire a crew member could successfully survive and extinguish in the confines of a spacecraft. The hazards to the crew and equipment during an accidental fire include excessive pressure rise resulting in a catastrophic rupture of the vehicle skin, excessive temperatures that burn or incapacitate the crew (due to hyperthermia), and carbon dioxide build-up or accumulation of other combustion products (e.g. carbon monoxide). The previous work introduced a simplified model that treated the fire primarily as a source of heat and combustion products and a sink for oxygen, prescribed (as input to the model) based on terrestrial standards. The model further treated the spacecraft as a closed system with no capability to vent to the vacuum of space. The model in the present work extends this analysis to treat the pressure relief system(s) of the spacecraft more realistically, to include more combustion products (e.g. HF) in the analysis, and to attempt to predict the fire spread and limiting fire size (based on knowledge of terrestrial fires and the known characteristics of microgravity fires) rather than prescribe them in the analysis. Including the characteristics of vehicle pressure relief systems has a dramatic mitigating effect by eliminating vehicle overpressure for all but very large fires and reducing average gas-phase temperatures.
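
    To illustrate the sealed-cabin part of such an analysis (an illustrative sketch only, not the model of the cited paper), the snippet below integrates the pressure rise produced by a constant heat-release rate in a rigid, adiabatic cabin of ideal gas and caps it with a crude relief-valve setpoint; the cabin volume, fire size and setpoint are arbitrary assumptions.

        # Sealed-cabin pressure rise from a constant heat-release rate (ideal gas),
        # with a crude pressure-relief cap. Illustrative numbers only.
        GAMMA = 1.4          # ratio of specific heats for air
        VOLUME = 10.0        # cabin free volume, m^3 (assumed)
        Q_FIRE = 2.0e3       # heat-release rate, W (assumed)
        P0 = 101.3e3         # initial pressure, Pa
        P_RELIEF = 110.0e3   # relief-valve setpoint, Pa (assumed)

        def pressure_history(t_end=600.0, dt=1.0):
            p, t, hist = P0, 0.0, []
            while t <= t_end:
                hist.append((t, p))
                # dP/dt = (gamma - 1) * Qdot / V for a rigid adiabatic volume
                p = min(p + (GAMMA - 1.0) * Q_FIRE / VOLUME * dt, P_RELIEF)
                t += dt
            return hist

        print("pressure after 10 min: %.1f kPa" % (pressure_history()[-1][1] / 1e3))

    With the relief cap active the pressure saturates quickly, mirroring the paper's point that venting removes overpressure as a hazard for all but very large fires.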

  12. Spacecraft radiator systems

    Science.gov (United States)

    Anderson, Grant A. (Inventor)

    2012-01-01

    A spacecraft radiator system designed to provide structural support to the spacecraft. Structural support is provided by the geometric "crescent" form of the panels of the spacecraft radiator. This integration of radiator and structural support provides spacecraft with a semi-monocoque design.

  13. Open source integrated modeling environment Delta Shell

    Science.gov (United States)

    Donchyts, G.; Baart, F.; Jagers, B.; van Putten, H.

    2012-04-01

    In the last decade, integrated modelling has become a very popular topic in environmental modelling, since it helps solve problems that are difficult to address with a single model. However, managing the complexity of integrated models and minimizing the time required for their setup remain challenging tasks. The integrated modelling environment Delta Shell simplifies these tasks. The software components of Delta Shell are easy to reuse separately from each other as well as as part of an integrated environment that can run in command-line or graphical user interface mode. Most components of Delta Shell are developed in the C# programming language and include libraries used to define, save and visualize various scientific data structures as well as coupled model configurations. Here we present two examples showing how Delta Shell simplifies the process of setting up integrated models, from the end-user and developer perspectives. The first example shows the coupling of a rainfall-runoff model, a river flow model and a run-time control model. The second example shows how a coastal morphological database integrates with the coastal morphological model XBeach and a custom nourishment designer. Delta Shell is available as open-source software released under the LGPL license and accessible via http://oss.deltares.nl.

  14. Cosmogenic photons strongly constrain UHECR source models

    Directory of Open Access Journals (Sweden)

    van Vliet Arjen

    2017-01-01

    With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT's IGRB, as long as their number density is not strongly peaked at recent times.

  15. Disturbance observer based model predictive control for accurate atmospheric entry of spacecraft

    Science.gov (United States)

    Wu, Chao; Yang, Jun; Li, Shihua; Li, Qi; Guo, Lei

    2018-05-01

    Facing the complex aerodynamic environment of the Mars atmosphere, a composite atmospheric entry trajectory tracking strategy is investigated in this paper. External disturbances, initial state uncertainties and aerodynamic parameter uncertainties are the main problems. The composite strategy is designed to address these problems and improve the accuracy of Mars atmospheric entry. This strategy includes a model predictive control for optimized trajectory tracking performance, as well as a disturbance observer based feedforward compensation for attenuation of external disturbances and uncertainties. 500-run Monte Carlo simulations show that the proposed composite control scheme achieves more precise Mars atmospheric entry (3.8 km parachute deployment point distribution error) than the baseline control scheme (8.4 km) and the integral control scheme (5.8 km).
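
    The following toy sketch illustrates the general disturbance-observer-plus-feedforward idea behind such composite schemes on a scalar discrete-time system: the lumped disturbance estimate is updated from the mismatch between predicted and measured states and then fed forward to cancel it. It is not the controller of the cited paper; the dynamics, gains and disturbance signal are invented for illustration, and a plain state-feedback law stands in for the MPC.

        import numpy as np

        # Scalar system x[k+1] = a*x[k] + b*(u[k] + d[k]) with unknown disturbance d.
        a, b = 0.95, 0.10
        L = 0.6          # observer gain (assumed)
        k_fb = 4.0       # simple feedback gain standing in for the MPC law

        x, d_hat = 1.0, 0.0
        for k in range(200):
            d_true = 0.5 + 0.2 * np.sin(0.05 * k)       # invented disturbance
            u = -k_fb * x - d_hat                       # feedback + feedforward cancellation
            x_pred = a * x + b * (u + d_hat)            # model prediction
            x = a * x + b * (u + d_true)                # plant update
            d_hat += L * (x - x_pred) / b               # disturbance-observer update

        print("final state %.4f, disturbance estimate %.3f" % (x, d_hat))

    In the composite scheme described in the record, the feedforward term augments the optimized MPC command in the same way, so the predictive controller only has to handle the residual, un-modelled part of the disturbance.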

  16. Risk of spacecraft on-orbit obsolescence: Novel framework, stochastic modeling, and implications

    Science.gov (United States)

    Dubos, Gregory F.; Saleh, Joseph H.

    2010-07-01

    The Government Accountability Office (GAO) has repeatedly noted the difficulties encountered by the Department of Defense (DOD) in keeping its acquisition of space systems on schedule and within budget. Among the recommendations provided by GAO, a minimum Technology Readiness Level (TRL) for technologies to be included in the development of a space system is advised. The DOD considers this recommendation impractical arguing that if space systems were designed with only mature technologies (high TRL), they would likely become obsolete on-orbit fairly quickly. The risk of on-orbit obsolescence is a key argument in the DOD's position for dipping into low technology maturity for space acquisition programs, but this policy unfortunately often results in the cost growth and schedule slippage criticized by the GAO. The concept of risk of on-orbit obsolescence has remained qualitative to date. In this paper, we formulate a theory of risk of on-orbit obsolescence by building on the traditional notion of obsolescence and adapting it to the specificities of space systems. We develop a stochastic model for quantifying and analyzing the risk of on-orbit obsolescence, and we assess, in its light, the appropriateness of DOD's rationale for maintaining low TRL technologies in its acquisition of space assets as a strategy for mitigating on-orbit obsolescence. Our model and results contribute one step towards the resolution of the conceptual stalemate on this matter between the DOD and the GAO, and we hope will inspire academics to further investigate the risk of on-orbit obsolescence.

  17. Spacecraft Jitter Attenuation Using Embedded Piezoelectric Actuators

    Science.gov (United States)

    Belvin, W. Keith

    1995-01-01

    Remote sensing from spacecraft requires precise pointing of measurement devices in order to achieve adequate spatial resolution. Unfortunately, various spacecraft disturbances induce vibrational jitter in the remote sensing instruments. The NASA Langley Research Center has performed analysis, simulations, and ground tests to identify the more promising technologies for minimizing spacecraft pointing jitter. These studies have shown that the use of smart materials to reduce spacecraft jitter is an excellent match between a maturing technology and an operational need. This paper describes the use of embedded piezoelectric actuators for vibration control and payload isolation. In addition, recent advances in modeling, simulation, and testing of spacecraft pointing jitter are discussed.

  18. Markov source model for printed music decoding

    Science.gov (United States)

    Kopec, Gary E.; Chou, Philip A.; Maltz, David A.

    1995-03-01

    This paper describes a Markov source model for a simple subset of printed music notation. The model is based on the Adobe Sonata music symbol set and a message language of our own design. Chord imaging is the most complex part of the model. Much of the complexity follows from a rule of music typography that requires the noteheads for adjacent pitches to be placed on opposite sides of the chord stem. This rule leads to a proliferation of cases for other typographic details such as dot placement. We describe the language of message strings accepted by the model and discuss some of the imaging issues associated with various aspects of the message language. We also point out some aspects of music notation that appear problematic for a finite-state representation. Development of the model was greatly facilitated by the duality between image synthesis and image decoding. Although our ultimate objective was a music image model for use in decoding, most of the development proceeded by using the evolving model for image synthesis, since it is computationally far less costly to image a message than to decode an image.

  19. Modeling of renewable hybrid energy sources

    Directory of Open Access Journals (Sweden)

    Dumitru Cristian Dragos

    2009-12-01

    Recent developments and trends in electric power consumption indicate an increasing use of renewable energy. Renewable energy technologies offer the promise of clean, abundant energy gathered from self-renewing resources such as the sun, wind, earth and plants. Virtually all regions of the world have renewable resources of one type or another. From this point of view, studies of renewable energy attract more and more attention. The present paper presents different mathematical models for different types of renewable energy sources, such as solar energy and wind energy. The validation and adaptation of such models to hybrid systems working in the geographical and meteorological conditions specific to the central part of the Transylvania region are also presented. The conclusions based on the validation of these models are also shown.
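
    As a generic illustration of the kind of component models such hybrid studies combine (not the specific models of this paper), the sketch below gives the usual first-order expressions for photovoltaic and wind-turbine output; the panel area, efficiencies and turbine parameters are assumed placeholders.

        import numpy as np

        def pv_power(irradiance_w_m2, area_m2=10.0, efficiency=0.17):
            """DC output of a PV array, P = eta * A * G (temperature derating ignored)."""
            return efficiency * area_m2 * irradiance_w_m2

        def wind_power(v_m_s, rotor_diameter_m=10.0, cp=0.40, rho=1.225,
                       v_cut_in=3.0, v_rated=12.0, v_cut_out=25.0, p_rated_w=25e3):
            """Simplified wind-turbine power curve with cut-in, rated and cut-out speeds."""
            if v_m_s < v_cut_in or v_m_s > v_cut_out:
                return 0.0
            if v_m_s >= v_rated:
                return p_rated_w
            area = np.pi * (rotor_diameter_m / 2.0) ** 2
            return 0.5 * rho * area * cp * v_m_s ** 3

        print("PV at 800 W/m^2:", pv_power(800.0), "W")
        print("Wind at 8 m/s: %.0f W" % wind_power(8.0))

    Driving such models with measured irradiance and wind-speed time series for a specific site is what allows their validation against local conditions, as done in the paper for central Transylvania.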

  20. Computer simulation of spacecraft/environment interaction

    International Nuclear Information System (INIS)

    Krupnikov, K.K.; Makletsov, A.A.; Mileev, V.N.; Novikov, L.S.; Sinolits, V.V.

    1999-01-01

    This report presents some examples of a computer simulation of spacecraft interaction with the space environment. We analysed a set of data on electron and ion fluxes measured in 1991-1994 on the geostationary satellite GORIZONT-35. The influence of spacecraft eclipse and device eclipse by the solar-cell panel on spacecraft charging was investigated. A simple method was developed for the estimation of spacecraft potentials in LEO. Effects of various particle flux impacts and spacecraft orientation are discussed. A computer engineering model for the calculation of space radiation is presented. This model is used as a client/server model with a WWW interface, including spacecraft model description and results representation based on the virtual reality markup language
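
    A minimal sketch of the current-balance idea behind simple LEO floating-potential estimates of the kind mentioned here: the surface potential is the value at which the attracted ion current and the Boltzmann-retarded electron current cancel. The plasma density, temperatures and collection area are placeholder assumptions, and the planar-probe expressions below are the simplest textbook forms, not the method of the cited report.

        import numpy as np

        E = 1.602e-19; ME = 9.109e-31; MI = 2.657e-26  # elem. charge, electron and O-atom masses
        KB = 1.381e-23

        def net_current(phi_v, n_m3=1e11, te_k=2000.0, ti_k=1000.0, area_m2=1.0):
            """Net current to a negatively charged surface (A); zero at the floating potential."""
            je0 = E * n_m3 * np.sqrt(KB * te_k / (2.0 * np.pi * ME))   # thermal electron flux
            ji0 = E * n_m3 * np.sqrt(KB * ti_k / (2.0 * np.pi * MI))   # thermal ion flux
            i_e = -area_m2 * je0 * np.exp(E * phi_v / (KB * te_k))     # retarded electrons (phi < 0)
            i_i = area_m2 * ji0                                        # collected ions (saturation)
            return i_e + i_i

        # Bisection for the floating potential (expected to be a few times -kTe/e).
        lo, hi = -5.0, 0.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if net_current(mid) > 0 else (lo, mid)
        print("estimated floating potential: %.3f V" % (0.5 * (lo + hi)))

    Real engineering models add photoemission, secondary emission, wake and eclipse effects, which is precisely what the simulations described in this record address.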

  1. Computer simulation of spacecraft/environment interaction

    CERN Document Server

    Krupnikov, K K; Mileev, V N; Novikov, L S; Sinolits, V V

    1999-01-01

    This report presents some examples of a computer simulation of spacecraft interaction with the space environment. We analysed a set of data on electron and ion fluxes measured in 1991-1994 on the geostationary satellite GORIZONT-35. The influence of spacecraft eclipse and device eclipse by the solar-cell panel on spacecraft charging was investigated. A simple method was developed for the estimation of spacecraft potentials in LEO. Effects of various particle flux impacts and spacecraft orientation are discussed. A computer engineering model for the calculation of space radiation is presented. This model is used as a client/server model with a WWW interface, including spacecraft model description and results representation based on the virtual reality markup language.

  2. Modeling a neutron rich nuclei source

    Energy Technology Data Exchange (ETDEWEB)

    Mirea, M.; Bajeat, O.; Clapier, F.; Ibrahim, F.; Mueller, A.C.; Pauwels, N.; Proust, J. [Institut de Physique Nucleaire, IN2P3/CNRS, 91 - Orsay (France); Mirea, M. [Institute of Physics and Nuclear Engineering, Tandem Lab., Bucharest (Romania)

    2000-07-01

    The deuteron break-up process in a suitable converter gives rise to intense neutron beams. A source of neutron rich nuclei based on the neutron induced fission can be realised using these beams. A theoretical optimization of such a facility as a function of the incident deuteron energy is reported. The model used to determine the fission products takes into account the excitation energy of the target nucleus and the evaporation of prompt neutrons. Results are presented in connection with a converter-target specific geometry. (author)

  3. Modeling a neutron rich nuclei source

    International Nuclear Information System (INIS)

    Mirea, M.; Bajeat, O.; Clapier, F.; Ibrahim, F.; Mueller, A.C.; Pauwels, N.; Proust, J.; Mirea, M.

    2000-01-01

    The deuteron break-up process in a suitable converter gives rise to intense neutron beams. A source of neutron rich nuclei based on the neutron induced fission can be realised using these beams. A theoretical optimization of such a facility as a function of the incident deuteron energy is reported. The model used to determine the fission products takes into account the excitation energy of the target nucleus and the evaporation of prompt neutrons. Results are presented in connection with a converter-target specific geometry. (authors)

  4. Ulysses spacecraft control and monitoring system

    Science.gov (United States)

    Hamer, P. A.; Snowden, P. J.

    1991-01-01

    The baseline Ulysses spacecraft control and monitoring system (SCMS) concepts and the converted SCMS, residing on DEC/VAX 8350 hardware, are considered. The main functions of the system include monitoring and displaying spacecraft telemetry, preparing spacecraft commands, producing hard copies of experimental data, and archiving spacecraft telemetry. The SCMS system comprises over 20 subsystems ranging from low-level utility routines to the major monitoring and control software. These in total consist of approximately 55,000 lines of FORTRAN source code and 100 VMS command files. The SCMS major software facilities are described, including database files, telemetry processing, telecommanding, archiving of data, and display of telemetry.

  5. Toward autonomous spacecraft

    Science.gov (United States)

    Fogel, L. J.; Calabrese, P. G.; Walsh, M. J.; Owens, A. J.

    1982-01-01

    Ways in which autonomous behavior of spacecraft can be extended to treat situations wherein closed-loop control by a human may not be appropriate or even possible are explored. Predictive models that minimize mean least squared error and arbitrary cost functions are discussed. A methodology for extracting cyclic components for an arbitrary environment with respect to usual and arbitrary criteria is developed. An approach to prediction and control based on evolutionary programming is outlined. A computer program capable of predicting time series is presented. A design of a control system for a robotic device with partially unknown physical properties is presented.

  6. Data analysis and source modelling for LISA

    International Nuclear Information System (INIS)

    Shang, Yu

    2014-01-01

    Gravitational waves (GWs) are one of the most important predictions of general relativity. Aiming at a direct proof of their existence, several ground-based detectors (such as LIGO and GEO) are already in operation, and a future space mission (LISA) is planned to detect GWs directly. GWs carry a large amount of information about their sources; extracting this information can reveal the physical properties of the sources and even open a new window for understanding the Universe. Hence, data analysis is a challenging task in the search for GWs. In this thesis, I present two works on data analysis for LISA. In the first work, we introduce an extended multimodal genetic algorithm which utilizes the properties of the signal and the detector response function to analyze the data from the third round of the Mock LISA Data Challenge. We have found all five sources present in the data and recovered the coalescence time, chirp mass, mass ratio and sky location with reasonable accuracy. As for the orbital angular momentum and the two spins of the black holes, we have found a large number of widely separated modes in the parameter space with similar maximum likelihood values. The performance of this method is comparable to, if not better than, already existing algorithms. In the second work, we introduce a new phenomenological waveform model for the extreme mass ratio inspiral (EMRI) system. This waveform consists of a set of harmonics with constant amplitude and slowly evolving phase, which we decompose in a Taylor series. We use these phenomenological templates to detect the signal in the simulated data and then, assuming a particular EMRI model, estimate the physical parameters of the binary with high precision. The results show that our phenomenological waveform is well suited to the data analysis of EMRI signals.
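
    A rough illustration (not the author's actual waveform family) of the idea of a phenomenological signal built from harmonics with constant amplitudes and slowly evolving, Taylor-expanded phases; the amplitudes, frequencies and frequency derivatives below are arbitrary.

        import numpy as np

        def phenom_waveform(t, amps, f0s, fdots, phi0s):
            """Sum of harmonics with constant amplitude and Taylor-expanded phase:
            h(t) = sum_n A_n * cos(phi0_n + 2*pi*(f_n*t + 0.5*fdot_n*t**2))."""
            h = np.zeros_like(t)
            for a, f0, fdot, phi0 in zip(amps, f0s, fdots, phi0s):
                h += a * np.cos(phi0 + 2.0 * np.pi * (f0 * t + 0.5 * fdot * t**2))
            return h

        t = np.linspace(0.0, 1.0e5, 100_000)                 # 10^5 s of data, arbitrary sampling
        h = phenom_waveform(t,
                            amps=[1.0, 0.5, 0.3],            # arbitrary harmonic amplitudes
                            f0s=[2e-3, 4e-3, 6e-3],          # mHz-band frequencies (assumed)
                            fdots=[1e-11, 2e-11, 3e-11],     # slow frequency drifts (assumed)
                            phi0s=[0.0, 0.7, 1.4])
        print("waveform samples:", h[:3])

    Templates of this form can be matched against the data without committing to a specific astrophysical EMRI model, which is the strategy the thesis describes.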

  7. Spacecraft performance Analysis For Extreme events by integration and Combination Of Sensing, Modelling and OptimiSation

    DEFF Research Database (Denmark)

    Cowlard, A.; Jomaas, Grunde; Torero, J. L.

    2011-01-01

    To supersede the current state-of-the-art of fire safety in spacecrafts, ESA has commissioned a topical team to define an unprecedented series of demonstration and validation experiments. This initiative aims to move fire safety away from test standards and into a truly scientific selection of bo...

  8. Source term modelling parameters for Project-90

    International Nuclear Information System (INIS)

    Shaw, W.; Smith, G.; Worgan, K.; Hodgkinson, D.; Andersson, K.

    1992-04-01

    This document summarises the input parameters for the source term modelling within Project-90. In the first place, the parameters relate to the CALIBRE near-field code which was developed for the Swedish Nuclear Power Inspectorate's (SKI) Project-90 reference repository safety assessment exercise. An attempt has been made to give best estimate values and, where appropriate, a range which is related to variations around base cases. It should be noted that the data sets contain amendments to those considered by KBS-3. In particular, a completely new set of inventory data has been incorporated. The information given here does not constitute a complete set of parameter values for all parts of the CALIBRE code. Rather, it gives the key parameter values which are used in the constituent models within CALIBRE and the associated studies. For example, the inventory data acts as an input to the calculation of the oxidant production rates, which influence the generation of a redox front. The same data is also an initial value data set for the radionuclide migration component of CALIBRE. Similarly, the geometrical parameters of the near-field are common to both sub-models. The principal common parameters are gathered here for ease of reference and avoidance of unnecessary duplication and transcription errors. (au)

  9. Integrated source-risk model for radon: A definition study

    International Nuclear Information System (INIS)

    Laheij, G.M.H.; Aldenkamp, F.J.; Stoop, P.

    1993-10-01

    The purpose of a source-risk model is to support policy making on radon mitigation by comparing effects of various policy options and to enable optimization of counter measures applied to different parts of the source-risk chain. There are several advantages developing and using a source-risk model: risk calculations are standardized; the effects of measures applied to different parts of the source-risk chain can be better compared because interactions are included; and sensitivity analyses can be used to determine the most important parameters within the total source-risk chain. After an inventory of processes and sources to be included in the source-risk chain, the models presently available in the Netherlands are investigated. The models were screened for completeness, validation and operational status. The investigation made clear that, by choosing for each part of the source-risk chain the most convenient model, a source-risk chain model for radon may be realized. However, the calculation of dose out of the radon concentrations and the status of the validation of most models should be improved. Calculations with the proposed source-risk model will give estimations with a large uncertainty at the moment. For further development of the source-risk model an interaction between the source-risk model and experimental research is recommended. Organisational forms of the source-risk model are discussed. A source-risk model in which only simple models are included is also recommended. The other models are operated and administrated by the model owners. The model owners execute their models for a combination of input parameters. The output of the models is stored in a database which will be used for calculations with the source-risk model. 5 figs., 15 tabs., 7 appendices, 14 refs

  10. An open source business model for malaria.

    Science.gov (United States)

    Årdal, Christine; Røttingen, John-Arne

    2015-01-01

    Greater investment is required in developing new drugs and vaccines against malaria in order to eradicate malaria. These precious funds must be carefully managed to achieve the greatest impact. We evaluate existing efforts to discover and develop new drugs and vaccines for malaria to determine how best malaria R&D can benefit from an enhanced open source approach and how such a business model may operate. We assess research articles, patents, clinical trials and conducted a smaller survey among malaria researchers. Our results demonstrate that the public and philanthropic sectors are financing and performing the majority of malaria drug/vaccine discovery and development, but are then restricting access through patents, 'closed' publications and hidden away physical specimens. This makes little sense since it is also the public and philanthropic sector that purchases the drugs and vaccines. We recommend that a more "open source" approach is taken by making the entire value chain more efficient through greater transparency which may lead to more extensive collaborations. This can, for example, be achieved by empowering an existing organization like the Medicines for Malaria Venture (MMV) to act as a clearing house for malaria-related data. The malaria researchers that we surveyed indicated that they would utilize such registry data to increase collaboration. Finally, we question the utility of publicly or philanthropically funded patents for malaria medicines, where little to no profits are available. Malaria R&D benefits from a publicly and philanthropically funded architecture, which starts with academic research institutions, product development partnerships, commercialization assistance through UNITAID and finally procurement through mechanisms like The Global Fund to Fight AIDS, Tuberculosis and Malaria and the U.S.' President's Malaria Initiative. We believe that a fresh look should be taken at the cost/benefit of patents particularly related to new malaria

  11. An open source business model for malaria.

    Directory of Open Access Journals (Sweden)

    Christine Årdal

    Greater investment is required in developing new drugs and vaccines against malaria in order to eradicate malaria. These precious funds must be carefully managed to achieve the greatest impact. We evaluate existing efforts to discover and develop new drugs and vaccines for malaria to determine how best malaria R&D can benefit from an enhanced open source approach and how such a business model may operate. We assess research articles, patents, clinical trials and conducted a smaller survey among malaria researchers. Our results demonstrate that the public and philanthropic sectors are financing and performing the majority of malaria drug/vaccine discovery and development, but are then restricting access through patents, 'closed' publications and hidden away physical specimens. This makes little sense since it is also the public and philanthropic sector that purchases the drugs and vaccines. We recommend that a more "open source" approach is taken by making the entire value chain more efficient through greater transparency which may lead to more extensive collaborations. This can, for example, be achieved by empowering an existing organization like the Medicines for Malaria Venture (MMV) to act as a clearing house for malaria-related data. The malaria researchers that we surveyed indicated that they would utilize such registry data to increase collaboration. Finally, we question the utility of publicly or philanthropically funded patents for malaria medicines, where little to no profits are available. Malaria R&D benefits from a publicly and philanthropically funded architecture, which starts with academic research institutions, product development partnerships, commercialization assistance through UNITAID and finally procurement through mechanisms like The Global Fund to Fight AIDS, Tuberculosis and Malaria and the U.S.' President's Malaria Initiative. We believe that a fresh look should be taken at the cost/benefit of patents particularly related

  12. About Block Dynamic Model of Earthquake Source.

    Science.gov (United States)

    Gusev, G. A.; Gufeld, I. L.

    Little progress can be claimed in the earthquake prediction literature. Short-term prediction (on a diurnal time scale, with the location also predicted) is what has practical meaning. The failure so far is due to the absence of adequate notions about the geological medium, particularly its block structure, especially in the faults. Geological and geophysical monitoring provides the basis for regarding the geological medium as an open, dissipative block system with limiting energy saturation. Variations of the volume stress state close to critical states are associated with the interaction of an inhomogeneous ascending stream of light gases (helium and hydrogen) with the solid phase, which is more pronounced in the faults. In the background state, small blocks of the fault medium allow the sliding of large blocks in the faults. For considerable variations of the ascending gas streams, however, the formation of bound chains of small blocks is possible, so that a bound state of large blocks may result (an earthquake source). Using these notions, we recently proposed a dynamical earthquake source model based on a generalized chain of non-linearly coupled oscillators of Fermi-Pasta-Ulam (FPU) type. The generalization concerns the chain's inhomogeneity and the different external actions imitating physical processes in the real source. Earlier, a weakly inhomogeneous approximation without dissipation was considered, which permitted study of the FPU recurrence (return to the initial state). Probabilistic properties of the quasi-periodic motion were found. The problem of chain decay due to non-linearity and external perturbations was posed; the thresholds and the dependence of the lifetime of the chain were studied, and large fluctuations of the lifetimes were discovered. In the present paper a rigorous treatment of the inhomogeneous chain, including dissipation, is given. For the strong-dissipation case, when oscillatory motion is suppressed, specific effects are discovered. For noise action and constantly arising

  13. sources

    Directory of Open Access Journals (Sweden)

    Shu-Yin Chiang

    2002-01-01

    In this paper, we study simplified models of an ATM (Asynchronous Transfer Mode) multiplexer network with Bernoulli random traffic sources. Based on the model, the performance measures are analyzed for different output service schemes.
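
    A toy simulation in the spirit of such simplified models: N Bernoulli sources feeding a single-server multiplexer with a finite buffer, one cell served per slot. The number of sources, per-slot arrival probability and buffer size are arbitrary assumptions rather than values from the paper.

        import random

        def simulate_mux(n_sources=20, p_arrival=0.04, buffer_cells=30,
                         n_slots=200_000, seed=1):
            """Slotted multiplexer: each source emits a cell with probability p per slot,
            one cell is served per slot, arrivals beyond the buffer are lost."""
            random.seed(seed)
            queue, lost, offered = 0, 0, 0
            for _ in range(n_slots):
                arrivals = sum(random.random() < p_arrival for _ in range(n_sources))
                offered += arrivals
                queue += arrivals
                if queue > buffer_cells:
                    lost += queue - buffer_cells
                    queue = buffer_cells
                queue = max(queue - 1, 0)          # serve one cell per slot
            return lost / max(offered, 1)

        print("cell-loss ratio ~", simulate_mux())

    Different output service schemes, as studied in the paper, amount to replacing the single-server rule above with other service disciplines.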

  14. Heat source model for welding process

    International Nuclear Information System (INIS)

    Doan, D.D.

    2006-10-01

    One of the major industrial stakes of welding simulation relates to the control of the mechanical effects of the process (residual stress, distortion, fatigue strength...). These effects are directly dependent on the temperature evolution imposed during the welding process. To model this thermal loading, an original method is proposed instead of the usual methods such as the equivalent heat source approach or the multi-physical approach. This method is based on the estimation of the weld pool shape, together with the heat flux crossing the liquid/solid interface, from experimental data measured in the solid part. Its originality consists in solving an inverse Stefan problem specific to the welding process, and it is shown how to estimate the parameters of the weld pool shape. To solve the heat transfer problem, the liquid/solid interface is modeled by a Bezier curve (2-D) or a Bezier surface (3-D). This approach is well adapted to the wide diversity of weld pool shapes met in the majority of current welding processes (TIG, MIG-MAG, laser, FE, hybrid). The number of parameters to be estimated is small, from 2 to 5 in 2-D and from 7 to 16 in 3-D, depending on the case considered. A sensitivity study specifies the location of the sensors, their number and the set of measurements required for a good estimate. The application of the method to TIG welding tests on thin stainless steel sheets, in fully penetrating and partially penetrating configurations, shows that only one measurement point is enough to estimate the various weld pool shapes in 2-D, and two points in 3-D, whether or not the penetration is full. In the last part of the work, a methodology is developed for the transient analysis. It is based on Duvaut's transformation, which removes the discontinuity at the liquid metal interface and therefore gives a continuous variable over the whole spatial domain. Moreover, it allows working on a fixed mesh, and the new inverse problem is equivalent to identifying a source

  15. Training for spacecraft technical analysts

    Science.gov (United States)

    Ayres, Thomas J.; Bryant, Larry

    1989-01-01

    Deep space missions such as Voyager rely upon a large team of expert analysts who monitor activity in the various engineering subsystems of the spacecraft and plan operations. Senior team members generally come from the spacecraft designers, and new analysts receive on-the-job training. Neither of these methods will suffice for the creation of a new team in the middle of a mission, which may be the situation during the Magellan mission. New approaches are recommended, including electronic documentation, explicit cognitive modeling, and coached practice with archived data.

  16. Modelling of H.264 MPEG2 TS Traffic Source

    Directory of Open Access Journals (Sweden)

    Stanislav Klucik

    2013-01-01

    This paper deals with IPTV traffic source modelling. Traffic sources are used for simulation, emulation and real network testing. The proposed model is derived from known recorded traffic sources that are analysed and statistically processed. As the results show, when used in a simulated network the model produces network traffic parameters very similar to those of the known traffic source.

  17. Spacecraft Spin Test Facility

    Data.gov (United States)

    Federal Laboratory Consortium — FUNCTION: Provides the capability to correct unbalances of spacecraft by using dynamic measurement techniques and static/coupled measurements to provide products of...

  18. Short Wavelength Electromagnetic Perturbations Excited Near the Solar Probe Plus Spacecraft in the Inner Heliosphere: 2.5D Hybrid Modeling

    Science.gov (United States)

    Lipatov, Alexander S.; Sittler, Edward C.; Hartle, Richard E.; Cooper, John F.

    2011-01-01

    A 2.5D numerical plasma model of the interaction of the solar wind (SW) with the Solar Probe Plus spacecraft (SPPSC) is presented. These results should be interpreted as a basic plasma model derived from the SW interaction with the spacecraft (SC), which could have consequences for both plasma wave and electron plasma measurements on board the SC in the inner heliosphere. Compression waves and electric field jumps with amplitudes of about 1.5 V/m and (12-18) V/m were also observed. A strong polarization electric field was also observed in the wing of the plasma wake. However, the 2.5D hybrid modeling did not show excitation of whistler/Alfven waves in the upstream region connected with the bidirectional current closure that was observed in short-time 3D modeling of the SPPSC and near a tether in the ionosphere. The observed strong electromagnetic perturbations may be a crucial point in the electromagnetic measurements planned for the future Solar Probe Plus (SPP) mission. The results of modeling electromagnetic field perturbations in the SW due to shot noise in the absence of the SPPSC are also discussed.

  19. Airborne particulate matter in spacecraft

    Science.gov (United States)

    1988-01-01

    Acceptability limits and sampling and monitoring strategies for airborne particles in spacecraft were considered. Based on instances of eye and respiratory tract irritation reported by Shuttle flight crews, the following acceptability limits for airborne particles were recommended: for flights of 1 week or less duration, 1 mg/cu m for particles less than 10 microns in aerodynamic diameter (AD) plus 1 mg/cu m for particles 10 to 100 microns in AD; and for flights greater than 1 week and up to 6 months in duration, 0.2 mg/cu m for particles less than 10 microns in AD plus 0.2 mg/cu m for particles 10 to 100 microns in AD. These numerical limits were recommended to aid in spacecraft atmosphere design, which should aim at particulate levels that are as low as reasonably achievable. Sampling of spacecraft atmospheres for particles should include size-fractionated samples of 0 to 10, 10 to 100, and greater than 100 micron particles for mass concentration measurement and elementary chemical analysis by nondestructive analysis techniques. Morphological and chemical analyses of single particles should also be made to aid in identifying airborne particulate sources. Air cleaning systems based on inertial collection principles and fine particle collection devices based on electrostatic precipitation and filtration should be considered for incorporation into spacecraft air circulation systems. It was also recommended that research be carried out in space in the areas of health effects and particle characterization.

  20. Computerized dosimetry of I-125 sources model 6711

    International Nuclear Information System (INIS)

    Isturiz, J.

    2001-01-01

    It covers: the physical presentation of the sources; radiation protection; the mathematical model of the I-125 source model 6711; the data considered for the calculation program; experimental verification of the dose distribution; exposure rate and apparent activity; techniques for the use of the I-125 sources; and the calculation planning systems.

  1. Assessment of the Forward Contamination Risk of Mars by Clean Room Isolates from Space-Craft Assembly Facilities through Aeolian Transport - a Model Study

    Science.gov (United States)

    van Heereveld, Luc; Merrison, Jonathan; Nørnberg, Per; Finster, Kai

    2017-06-01

    The increasing number of missions to Mars also increases the risk of forward contamination. Consequently, there is a need for effective protocols to ensure efficient protection of the Martian environment against terrestrial microbiota. Despite the construction of sophisticated clean rooms for spacecraft assembly, 100 % avoidance of contamination appears to be impossible. Recent surveys of these facilities have identified a significant number of microbes, belonging to a variety of taxonomic groups, that survive the harsh conditions of clean rooms. These microbes may have a strong contamination potential, which needs to be investigated in order to apply efficient decontamination treatments. In this study we propose a series of tests to evaluate the potential of clean room contaminants to survive the different steps involved in forward contamination. We used Staphylococcus xylosus as a model organism to illustrate the different types of stress that potential contaminants will be subjected to on their way from the spacecraft onto the surface of Mars. Staphylococcus xylosus is associated with human skin and commonly found in clean rooms, and could therefore contaminate the spacecraft as a result of human activity during the assembly process. The path a cell takes from the surface of the spacecraft onto the surface of Mars was split into steps representing different stresses, including desiccation, freezing, aeolian transport in a Martian-like atmosphere at Martian atmospheric pressure, and the UV radiation climate. We assessed the surviving fraction of the cellular population after each step by determining the integrated metabolic activity of the survivor population, measured as its oxygen consumption rate. The largest fraction of the starting culture (around 70 %) was killed during desiccation, while freezing, Martian vacuum and short-term UV radiation had only a minor additional effect on the survivability of Staphylococcus xylosus. The study also included a simulation

  2. Source Term Model for Fine Particle Resuspension from Indoor Surfaces

    National Research Council Canada - National Science Library

    Kim, Yoojeong; Gidwani, Ashok; Sippola, Mark; Sohn, Chang W

    2008-01-01

    This Phase I effort developed a source term model for particle resuspension from indoor surfaces to be used as a source term boundary condition for CFD simulation of particle transport and dispersion in a building...

  3. Spacecraft Charge Monitor

    Science.gov (United States)

    Goembel, L.

    2003-12-01

    We are currently developing a flight prototype Spacecraft Charge Monitor (SCM) with support from NASA's Small Business Innovation Research (SBIR) program. The device will use a recently proposed high energy-resolution electron spectroscopic technique to determine spacecraft floating potential. The inspiration for the technique came from data collected by the Atmosphere Explorer (AE) satellites in the 1970s. The data available from the AE satellites indicate that the SCM may be able to determine spacecraft floating potential to within 0.1 V under certain conditions. Such accurate measurement of spacecraft charge could be used to correct biases in space plasma measurements. The device may also be able to measure spacecraft floating potential in the solar wind and in orbit around other planets.

  4. Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters

    KAUST Repository

    Song, S. G.

    2013-12-24

    Ground motion prediction is an essential element in seismic hazard and risk analysis. Empirical ground motion prediction approaches have been widely used in the community, but efficient simulation-based ground motion prediction methods are needed to complement empirical approaches, especially in the regions with limited data constraints. Recently, dynamic rupture modelling has been successfully adopted in physics-based source and ground motion modelling, but it is still computationally demanding and many input parameters are not well constrained by observational data. Pseudo-dynamic source modelling keeps the form of kinematic modelling with its computational efficiency, but also tries to emulate the physics of source process. In this paper, we develop a statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point and 2-point statistics from dynamically derived source models and simulating a number of rupture scenarios, given target 1-point and 2-point statistics. We propose a new rupture model generator for stochastic source modelling with the covariance matrix constructed from target 2-point statistics, that is, auto- and cross-correlations. Our sensitivity analysis of near-source ground motions to 1-point and 2-point statistics of source parameters provides insights into relations between statistical rupture properties and ground motions. We observe that larger standard deviation and stronger correlation produce stronger peak ground motions in general. The proposed new source modelling approach will contribute to understanding the effect of earthquake source on near-source ground motion characteristics in a more quantitative and systematic way.
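
    The core numerical ingredient described here, drawing spatially correlated source-parameter fields with target statistics, can be sketched for a 1-D fault discretization with an exponential auto-correlation model; the correlation length, grid and statistics below are placeholder assumptions rather than the paper's calibrated values.

        import numpy as np

        def correlated_field(n=200, dx_km=0.5, corr_len_km=5.0,
                             mean=1.0, std=0.3, seed=0):
            """Draw one realization of a Gaussian random field on a 1-D fault with
            an exponential auto-correlation, via Cholesky factorization of the
            covariance matrix (the 2-point statistics)."""
            rng = np.random.default_rng(seed)
            x = np.arange(n) * dx_km
            dist = np.abs(x[:, None] - x[None, :])
            cov = std**2 * np.exp(-dist / corr_len_km)      # target 2-point statistics
            chol = np.linalg.cholesky(cov + 1e-10 * np.eye(n))
            return mean + chol @ rng.standard_normal(n)     # 1-point stats set by mean/std

        slip = correlated_field()                            # e.g. slip in metres (illustrative)
        print("mean %.2f, std %.2f" % (slip.mean(), slip.std()))

    Generating several source parameters jointly, with prescribed cross-correlations between them, amounts to building one larger covariance matrix that couples the fields, which is the essence of the rupture model generator described in the record.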

  5. Computational model of Amersham I-125 source model 6711 and Prosper Pd-103 source model MED3633 using MCNP

    Energy Technology Data Exchange (ETDEWEB)

    Menezes, Artur F.; Reis Junior, Juraci P.; Silva, Ademir X., E-mail: ademir@con.ufrj.b [Coordenacao dos Programas de Pos-Graduacao de Engenharia (PEN/COPPE/UFRJ), RJ (Brazil). Programa de Engenharia Nuclear; Rosa, Luiz A.R. da, E-mail: lrosa@ird.gov.b [Instituto de Radioprotecao e Dosimetria (IRD/CNEN-RJ), Rio de Janeiro, RJ (Brazil); Facure, Alessandro [Comissao Nacional de Energia Nuclear (CNEN), Rio de Janeiro, RJ (Brazil); Cardoso, Simone C., E-mail: Simone@if.ufrj.b [Universidade Federal do Rio de Janeiro (IF/UFRJ), RJ (Brazil). Inst. de Fisica. Dept. de Fisica Nuclear

    2011-07-01

    Brachytherapy is used in cancer treatment at short distances through the use of small encapsulated sources of ionizing radiation. In such treatment, a radiation source is positioned directly into or near the target volume to be treated. In this study the Monte Carlo based MCNP code was used to model and simulate the I-125 Amersham Health source model 6711 and the Pd-103 Prospera source model MED3633 in order to obtain the dosimetric parameter known as the dose rate constant (Λ). The source geometries were modeled and implemented in the MCNPX code. The dose rate constant is an important parameter in treatment planning for prostate LDR brachytherapy. This study was based on the American Association of Physicists in Medicine (AAPM) recommendations produced by its Task Group 43. The results obtained were 0.941 and 0.65 for the dose rate constants of the I-125 and Pd-103 sources, respectively. They present good agreement with literature values based on different Monte Carlo codes. (author)
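
    For reference, the TG-43 definition of the dose rate constant that such values refer to (presumably expressed in the conventional units of cGy h⁻¹ U⁻¹) is, in the usual notation,

        \[
          \Lambda \;=\; \frac{\dot{D}(r_0,\theta_0)}{S_K},
          \qquad r_0 = 1\ \mathrm{cm},\;\; \theta_0 = 90^\circ,
        \]

    i.e. the dose rate to water at 1 cm on the transverse axis of the source per unit air-kerma strength.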

  6. Space Weather Magnetometer Set with Automated AC Spacecraft Field Correction for GEO-KOMPSAT-2A

    Science.gov (United States)

    Auster, U.; Magnes, W.; Delva, M.; Valavanoglou, A.; Leitner, S.; Hillenmaier, O.; Strauch, C.; Brown, P.; Whiteside, B.; Bendyk, M.; Hilgers, A.; Kraft, S.; Luntama, J. P.; Seon, J.

    2016-05-01

    Monitoring the solar wind conditions, in particular its magnetic field (the interplanetary magnetic field) ahead of the Earth, is essential for accurate and reliable space weather forecasting. The magnetic condition of the spacecraft itself is a key parameter for the successful performance of the magnetometer onboard. In practice a condition with negligible magnetic field of the spacecraft cannot always be fulfilled, and magnetic sources on the spacecraft interfere with the natural magnetic field measured by the space magnetometer. The presented "ready-to-use" Service Oriented Spacecraft Magnetometer (SOSMAG) is developed for use on any satellite implemented without a magnetic cleanliness programme. It enables detection of the spacecraft field AC variations on a time scale suitable to distinguish the magnetic field variations relevant to space weather phenomena, such as a sudden increase in the interplanetary field or a southward turning. This is achieved through the use of dual fluxgate magnetometers on a short boom (1 m) and two additional AMR sensors on the spacecraft body, which monitor potential AC disturbers. The measurements of the latter sensors enable an automated correction of the AC signal contributions from the spacecraft in the final magnetic vector. After successful development and test of the EQM prototype, a flight model (FM) is being built for the Korean satellite Geo-Kompsat 2A, with launch foreseen in 2018.

  7. Multi-spacecraft observations of solar hard X-ray bursts

    International Nuclear Information System (INIS)

    Kane, S.R.

    1981-01-01

    The role of multi-spacecraft observations in solar flare research is examined from the point of view of solar hard X-ray bursts and their implications with respect to models of the impulsive phase. Multi-spacecraft measurements provide a stereoscopic view of the flare region, and hence represent the only direct method of measuring directivity of X-rays. In the absence of hard X-ray imaging instruments with high spatial and temporal resolution, multi-spacecraft measurements provide the only means of determining the radial (vertical) structure of the hard X-ray source. This potential of the multi-spacecraft observations is illustrated with an analysis of the presently available observations of solar hard X-ray bursts made simultaneously by two or more of the following spacecraft: International Sun Earth Explorer-3 (ISEE-3), Pioneer Venus Orbiter (PVO), Helios-B and High Energy Astrophysical Observatory-A (HEAO-A). In particular, some conclusions have been drawn about the spatial structure and directivity of 50-100 keV X-rays from impulsive flares. Desirable features of future multi-spacecraft missions are briefly discussed followed by a short description of the hard X-ray experiment on the International Solar Polar Mission which has been planned specifically for multi-spacecraft observations of the Sun. (orig.)

  8. Studies and modeling of cold neutron sources

    International Nuclear Information System (INIS)

    Campioni, G.

    2004-11-01

    With the purpose of updating knowledge in the field of cold neutron sources, the work of this thesis was organized along the following three axes. First, the gathering of the specific information forming the material of this work. This body of knowledge covers the following fields: cold neutrons, cross-sections for the different cold moderators, flux slowing down, different measurements of the cold flux and, finally, issues in the thermal analysis of the problem. Secondly, the study and development of suitable computation tools. After an analysis of the problem, several tools were planned, implemented and tested in the 3-dimensional radiation transport code Tripoli-4. In particular, an uncoupling module, integrated in the official version of Tripoli-4, can perform Monte Carlo parametric studies with CPU-time savings of up to a factor of 50. A coupling module, simulating neutron guides, has also been developed and implemented in the Monte Carlo code McStas. Thirdly, a complete study for the validation of the installed calculation chain. These studies focus on 3 cold sources currently in operation: SP1 of the Orphee reactor and 2 other sources (SFH and SFV) of the HFR at the Laue Langevin Institute. These studies give examples of problems and methods for the design of future cold sources

  9. An Open-Source Simulation Environment for Model-Based Engineering, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed work is a new spacecraft simulation environment for model-based engineering of flight algorithms and software. The goal is to provide a much faster way...

  10. Discussion of Source Reconstruction Models Using 3D MCG Data

    Science.gov (United States)

    Melis, Massimo De; Uchikawa, Yoshinori

    In this study we performed source reconstruction of magnetocardiographic signals generated by human heart activity in order to localize the site of origin of the heart activation. The localizations were performed in a four-compartment model of the human volume conductor. The analyses were conducted on normal subjects and on a subject affected by the Wolff-Parkinson-White syndrome. Different models of the source activation were used to evaluate whether a general model of the current source can be applied in the study of the cardiac inverse problem. The data analyses were repeated using normal and vector component data of the MCG. The results show that a distributed source model achieves the best accuracy in performing the source reconstructions, and that 3D MCG data allow smaller differences between the different source models to be resolved.

  11. Pressure-Fed LOX/LCH4 Reaction Control System for Spacecraft: Transient Modeling and Thermal Vacuum Hotfire Test Results

    Science.gov (United States)

    Atwell, Matthew J.; Hurlbert, Eric A.; Melcher, J. C.; Morehead, Robert L.

    2017-01-01

    An integrated cryogenic liquid oxygen, liquid methane (LOX/LCH4) reaction control system (RCS) was tested at NASA Glenn Research Center's Plum Brook Station in the Spacecraft Propulsion Research Facility (B-2) under vacuum and thermal vacuum conditions. The RCS is a subsystem of the Integrated Cryogenic Propulsion Test Article (ICPTA), a pressure-fed LOX/LCH4 propulsion system composed of a single 2,800 lbf main engine, two 28 lbf RCS engines, and two 7 lbf RCS engines. Propellants are stored in four 48 inch diameter 5083 aluminum tanks that feed both the main engine and RCS engines in parallel. Helium stored cryogenically in a composite overwrapped pressure vessel (COPV) flows through a heat exchanger on the main engine before being used to pressurize the propellant tanks to a design operating pressure of 325 psi. The ICPTA is capable of simultaneous main engine and RCS operation. The RCS engines utilize a coil-on-plug (COP) ignition system designed for operation in a vacuum environment, eliminating corona discharge issues associated with a high voltage lead. There are two RCS pods on the ICPTA, with two engines on each pod. One of these two engines is a heritage flight engine from Project Morpheus. Its sea level nozzle was removed and replaced by an 85:1 nozzle machined using Inconel 718, resulting in a maximum thrust of 28 lbf under altitude conditions. The other engine is a scaled down version of the 28 lbf engine, designed to match the core and overall mixture ratios as well as other injector characteristics. This engine can produce a maximum thrust of 7 lbf with an 85:1 nozzle that was additively manufactured using Inconel 718. Both engines are film-cooled and capable of limited duration gas-gas and gas-liquid operation, as well as steady-state liquid-liquid operation. Each pod contains one of each version, such that two engines of the same thrust level can be fired as a couple on opposite pods. The RCS feed system is composed of symmetrical 3/8 inch lines

  12. Model predictive control for Z-source power converter

    DEFF Research Database (Denmark)

    Mo, W.; Loh, P.C.; Blaabjerg, Frede

    2011-01-01

    This paper presents Model Predictive Control (MPC) of an impedance-source (commonly known as Z-source) power converter. Output voltage control and current control for the Z-source inverter are analyzed and simulated. With MPC's ability to regulate multiple system variables, load current and voltage...

  13. COMBINING SOURCES IN STABLE ISOTOPE MIXING MODELS: ALTERNATIVE METHODS

    Science.gov (United States)

    Stable isotope mixing models are often used to quantify source contributions to a mixture. Examples include pollution source identification; trophic web studies; analysis of water sources for soils, plants, or water bodies; and many others. A common problem is having too many s...
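
    For orientation, the standard linear mixing formulation behind such models (quoted here as background, not reproduced from the truncated record above) expresses the isotopic signature of the mixture as a weighted average of the source signatures:

        \[
        \delta_{\mathrm{mix}} \;=\; \sum_{i=1}^{n} f_i\,\delta_i, \qquad \sum_{i=1}^{n} f_i = 1, \qquad f_i \ge 0 .
        \]

    With m isotope tracers the system provides m + 1 equations, so more than m + 1 sources leaves the source fractions f_i underdetermined, which is the "too many sources" problem the record refers to.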

  14. Outer heliospheric radio emissions. II - Foreshock source models

    Science.gov (United States)

    Cairns, Iver H.; Kurth, William S.; Gurnett, Donald A.

    1992-01-01

    Observations of LF radio emissions in the range 2-3 kHz by the Voyager spacecraft during the intervals 1983-1987 and 1989 to the present while at heliocentric distances greater than 11 AU are reported. New analyses of the wave data are presented, and the characteristics of the radiation are reviewed and discussed. Two classes of events are distinguished: transient events with varying starting frequencies that drift upward in frequency and a relatively continuous component that remains near 2 kHz. Evidence for multiple transient sources and for extension of the 2-kHz component above the 2.4-kHz interference signal is presented. The transient emissions are interpreted in terms of radiation generated at multiples of the plasma frequency when solar wind density enhancements enter one or more regions of a foreshock sunward of the inner heliospheric shock. Solar wind density enhancements by factors of 4-10 are observed. Propagation effects, the number of radiation sources, and the time variability, frequency drift, and varying starting frequencies of the transient events are discussed in terms of foreshock sources.
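
    The emission mechanism invoked here ties the observed frequencies to the local electron density through the standard plasma-frequency relation (quoted for orientation; the density estimates below are simple arithmetic from that relation, not values from the paper):

        \[
        f_{pe} \;\simeq\; 8.98\ \mathrm{kHz}\,\sqrt{n_e\ [\mathrm{cm^{-3}}]} .
        \]

    On this basis, 2-3 kHz radiation corresponds to source densities of roughly 0.05-0.11 cm⁻³ if emitted at the fundamental, or about a quarter of that if emitted at the second harmonic.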

  15. Earthquake source model using strong motion displacement

    Indian Academy of Sciences (India)

    The strong motion displacement records available during an earthquake can be treated as the response of the earth, as a structural system, to unknown forces acting at unknown locations. Thus, if the part of the earth participating in ground motion is modelled as a known finite elastic medium, one can attempt to model the ...

  16. Repairing business process models as retrieved from source code

    NARCIS (Netherlands)

    Fernández-Ropero, M.; Reijers, H.A.; Pérez-Castillo, R.; Piattini, M.; Nurcan, S.; Proper, H.A.; Soffer, P.; Krogstie, J.; Schmidt, R.; Halpin, T.; Bider, I.

    2013-01-01

    The static analysis of source code has become a feasible solution to obtain underlying business process models from existing information systems. Due to the fact that not all information can be automatically derived from source code (e.g., consider manual activities), such business process models

  17. Data Sources Available for Modeling Environmental Exposures in Older Adults

    Science.gov (United States)

    This report, “Data Sources Available for Modeling Environmental Exposures in Older Adults,” focuses on information sources and data available for modeling environmental exposures in the older U.S. population, defined here to be people 60 years and older, with an emphasis on those...

  18. Spacecraft momentum control systems

    CERN Document Server

    Leve, Frederick A; Peck, Mason A

    2015-01-01

    The goal of this book is to serve both as a practical technical reference and a resource for gaining a fuller understanding of the state of the art of spacecraft momentum control systems, specifically looking at control moment gyroscopes (CMGs). As a result, the subject matter includes theory, technology, and systems engineering. The authors combine material on system-level architecture of spacecraft that feature momentum-control systems with material about the momentum-control hardware and software. This also encompasses material on the theoretical and algorithmic approaches to the control of space vehicles with CMGs. In essence, CMGs are the attitude-control actuators that make contemporary highly agile spacecraft possible. The rise of commercial Earth imaging, the advances in privately built spacecraft (including small satellites), and the growing popularity of the subject matter in academic circles over the past decade argue that now is the time for an in-depth treatment of the topic. CMGs are augmented ...

  19. Spacecraft Material Outgassing Data

    Data.gov (United States)

    National Aeronautics and Space Administration — This compilation of outgassing data of materials intended for spacecraft use were obtained at the Goddard Space Flight Center (GSFC), utilizing equipment developed...

  20. Spacecraft Fire Safety Demonstration

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of the Spacecraft Fire Safety Demonstration project is to develop and conduct large-scale fire safety experiments on an International Space Station...

  1. Quick spacecraft charging primer

    International Nuclear Information System (INIS)

    Larsen, Brian Arthur

    2014-01-01

    This is a presentation in PDF format which is a quick spacecraft charging primer, meant to be used for program training. It goes into detail about charging physics, RBSP examples, and how to identify charging.

  2. Quasistatic modelling of the coaxial slow source

    International Nuclear Information System (INIS)

    Hahn, K.D.; Pietrzyk, Z.A.; Vlases, G.C.

    1986-01-01

    A new 1-D Lagrangian MHD numerical code in flux coordinates has been developed for the Coaxial Slow Source (CSS) geometry. It utilizes the quasistatic approximation so that the plasma evolves as a succession of equilibria. The P = P(ψ) equilibrium constraint, along with the assumption of infinitely fast axial temperature relaxation on closed field lines, is incorporated. An axially elongated, rectangular plasma is assumed. The axial length is adjusted by the global average condition, or assumed to be fixed. In this paper, predictions obtained with the code and a limited amount of comparison with experimental data are presented.

  3. Deployable Brake for Spacecraft

    Science.gov (United States)

    Rausch, J. R.; Maloney, J. W.

    1987-01-01

    Aerodynamic shield that could be opened and closed proposed. Report presents concepts for deployable aerodynamic brake. Brake used by spacecraft returning from high orbit to low orbit around Earth. Spacecraft makes grazing passes through atmosphere to slow down by drag of brake. Brake flexible shield made of woven metal or ceramic withstanding high temperatures created by air friction. Stored until needed, then deployed by set of struts.

  4. Development of test models to quantify encapsulated bioburden in spacecraft polymer materials by cultivation-dependent and molecular methods

    Science.gov (United States)

    Bauermeister, Anja; Moissl-Eichinger, Christine; Mahnert, Alexander; Probst, Alexander; Flier, Niwin; Auerbach, Anna; Weber, Christina; Haberer, Klaus; Boeker, Alexander

    Bioburden encapsulated in spacecraft polymers (such as adhesives and coatings) poses a potential risk to scientific exploration of other celestial bodies, but it is not easily detectable. In this study, we developed novel testing strategies to estimate the quantity of intrinsic encapsulated bioburden in polymers used frequently on spaceflight hardware. In particular Scotch-Weld (TM) 2216 B/A (Epoxy adhesive); MAP SG121FD (Silicone coating), Solithane (®) 113 (Urethane resin); ESP 495 (Silicone adhesive); and Dow Corning (®) 93-500 (Silicone encapsulant) were investigated. As extraction of bioburden from polymerized (solid) materials did not prove feasible, a method was devised to extract contaminants from uncured polymer precursors by dilution in organic solvents. Cultivation-dependent analyses showed less than 0.1-2.5 colony forming units (cfu) per cm³ polymer, whereas quantitative PCR with extracted DNA indicated considerably higher values, despite low DNA extraction efficiency. Results obtained by this method reflected the most conservative proxy for encapsulated bioburden. To observe the effect of physical and chemical stress occurring during polymerization on the viability of encapsulated contaminants, Bacillus safensis spores were embedded close to the surface in cured polymer, which facilitated access for different analytical techniques. Staining by AlexaFluor succinimidyl ester 488 (AF488), propidium monoazide (PMA), CTC (5-cyano-2,3-diotolyl tetrazolium chloride) and subsequent confocal laser scanning microscopy (CLSM) demonstrated that embedded spores retained integrity, germination and cultivation ability even after polymerization of the adhesive Scotch-Weld™ 2216 B/A.

  5. Worldwide Spacecraft Crew Hatch History

    Science.gov (United States)

    Johnson, Gary

    2009-01-01

    The JSC Flight Safety Office has developed this compilation of historical information on spacecraft crew hatches to assist the Safety Tech Authority in the evaluation and analysis of worldwide spacecraft crew hatch design and performance. The document is prepared by SAIC's Gary Johnson, former NASA JSC S&MA Associate Director for Technical. Mr. Johnson's previous experience brings expert knowledge to assess the relevancy of data presented. He has experience with six (6) of the NASA spacecraft programs that are covered in this document: Apollo, Skylab, Apollo Soyuz Test Project (ASTP), Space Shuttle, ISS, and the Shuttle/Mir Program. Mr. Johnson is also intimately familiar with the JSC Design and Procedures Standard, JPR 8080.5, having been one of its original developers. The observations and findings are presented first by country and organized within each country section by program in chronological order of emergence. A host of reference sources used to augment the personal observations and comments of the author are named within the text and/or listed in the reference section of this document. Careful attention to the selection and inclusion of photos, drawings and diagrams is used to give visual association and clarity to the topic areas examined.

  6. Equivalent charge source model based iterative maximum neighbor weight for sparse EEG source localization.

    Science.gov (United States)

    Xu, Peng; Tian, Yin; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2008-12-01

    How to localize the neural electric activities within the brain effectively and precisely from scalp electroencephalogram (EEG) recordings is a critical issue for current studies in clinical neurology and cognitive neuroscience. In this paper, based on the charge source model and the iterative re-weighted strategy, a new maximum-neighbor-weight-based iterative sparse source imaging method is proposed, termed CMOSS (Charge source model based Maximum neighbOr weight Sparse Solution). Different from the weight used in the focal underdetermined system solver (FOCUSS), where the weight for each point in the discrete solution space is updated independently across iterations, the newly designed weight for each point in each iteration is determined by the source solution of the previous iteration at both the point and its neighbors. Using such a weight, the next iteration has a better chance of rectifying the local source-location bias present in the previous iteration's solution. Simulation studies comparing CMOSS with FOCUSS and LORETA for various source configurations were conducted on a realistic 3-shell head model, and the results confirmed the validity of CMOSS for sparse EEG source localization. Finally, CMOSS was applied to localize sources elicited in a visual stimulus experiment, and the result was consistent with the source areas involved in visual processing reported in previous studies.
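
    The neighbor-weight idea lends itself to a compact sketch. The following Python fragment is an illustrative re-weighted minimum-norm loop in the spirit described above, not the authors' CMOSS implementation; the lead-field A, the neighbor lists, the regularization constant, and the specific weight rule (maximum absolute amplitude over the point and its neighbors) are all assumptions made for illustration.

        import numpy as np

        def neighbor_weighted_imaging(A, b, neighbors, lam=1e-2, n_iter=10):
            """Illustrative re-weighted minimum-norm loop with neighbor-based weights.

            A        : (n_sensors, n_sources) lead-field matrix (assumed known)
            b        : (n_sensors,) measurement vector
            neighbors: list of index arrays; neighbors[j] = sources adjacent to source j
            """
            n_src = A.shape[1]
            w = np.ones(n_src)                     # start from uniform weights
            s = np.zeros(n_src)
            for _ in range(n_iter):
                W = np.diag(w)
                # weighted minimum-norm estimate: s = W A^T (A W A^T + lam I)^(-1) b
                G = A @ W @ A.T + lam * np.eye(A.shape[0])
                s = W @ A.T @ np.linalg.solve(G, b)
                # update each weight from the previous solution at the point and at its
                # neighbors (one possible reading of "maximum neighbor weight")
                w = np.array([max(abs(s[j]), np.max(np.abs(s[neighbors[j]]), initial=0.0))
                              for j in range(n_src)])
                w = w / (w.max() + 1e-12)          # normalize to keep the iteration stable
            return s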

  7. Pseudo-dynamic source modelling with 1-point and 2-point statistics of earthquake source parameters

    KAUST Repository

    Song, S. G.; Dalguer, L. A.; Mai, Paul Martin

    2013-01-01

    statistical framework that governs the finite-fault rupture process with 1-point and 2-point statistics of source parameters in order to quantify the variability of finite source models for future scenario events. We test this method by extracting 1-point

  8. Nuisance Source Population Modeling for Radiation Detection System Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sokkappa, P; Lange, D; Nelson, K; Wheeler, R

    2009-10-05

    A major challenge facing the prospective deployment of radiation detection systems for homeland security applications is the discrimination of radiological or nuclear 'threat sources' from radioactive, but benign, 'nuisance sources'. Common examples of such nuisance sources include naturally occurring radioactive material (NORM), medical patients who have received radioactive drugs for either diagnostics or treatment, and industrial sources. A sensitive detector that cannot distinguish between 'threat' and 'benign' classes will generate false positives which, if sufficiently frequent, will preclude it from being operationally deployed. In this report, we describe a first-principles physics-based modeling approach that is used to approximate the physical properties and corresponding gamma ray spectral signatures of real nuisance sources. Specific models are proposed for the three nuisance source classes - NORM, medical and industrial. The models can be validated against measured data - that is, energy spectra generated with the model can be compared to actual nuisance source data. We show by example how this is done for NORM and medical sources, using data sets obtained from spectroscopic detector deployments for cargo container screening and urban area traffic screening, respectively. In addition to capturing the range of radioactive signatures of individual nuisance sources, a nuisance source population model must generate sources with a frequency of occurrence consistent with that found in actual movement of goods and people. Measured radiation detection data can indicate these frequencies, but, at present, such data are available only for a very limited set of locations and time periods. In this report, we make more general estimates of frequencies for NORM and medical sources using a range of data sources such as shipping manifests and medical treatment statistics. We also identify potential data sources for industrial

  9. Modeling Group Interactions via Open Data Sources

    Science.gov (United States)

    2011-08-30

    data. State-of-the-art search engines are designed for general query-specific search and are not suitable for finding disconnected online groups. The ... groups, (2) developing innovative mathematical and statistical models and efficient algorithms that leverage existing search engines and employ

  10. Nitrogen component in nonpoint source pollution models

    Science.gov (United States)

    Pollutants entering a water body can be very destructive to the health of that system. Best Management Practices (BMPs) and/or conservation practices are used to reduce these pollutants, but understanding the most effective practices is very difficult. Watershed models are an effective tool to aid...

  11. Application of source-receptor models to determine source areas of biological components (pollen and butterflies)

    OpenAIRE

    M. Alarcón; M. Àvila; J. Belmonte; C. Stefanescu; R. Izquierdo

    2010-01-01

    The source-receptor models allow the establishment of relationships between a receptor point (sampling point) and the probable source areas (regions of emission) through the association of concentration values at the receptor point with the corresponding atmospheric back-trajectories, and, together with other techniques, to interpret transport phenomena on a synoptic scale. These models are generally used in air pollution studies to determine the areas of origin of chemical compounds measured...

  12. Internet Technology on Spacecraft

    Science.gov (United States)

    Rash, James; Parise, Ron; Hogie, Keith; Criscuolo, Ed; Langston, Jim; Powers, Edward I. (Technical Monitor)

    2000-01-01

    The Operating Missions as Nodes on the Internet (OMNI) project has shown that Internet technology works in space missions through a demonstration using the UoSAT-12 spacecraft. An Internet Protocol (IP) stack was installed on the orbiting UoSAT-12 spacecraft and tests were run to demonstrate Internet connectivity and measure performance. This also forms the basis for demonstrating subsequent scenarios. This approach provides capabilities heretofore either too expensive or simply not feasible, such as reconfiguration on orbit. The OMNI project recognized the need to reduce the risk perceived by mission managers and did this with a multi-phase strategy. In the initial phase, the concepts were implemented in a prototype system that includes space-similar components communicating over the TDRS (space network) and the terrestrial Internet. The demonstration system includes a simulated spacecraft with sample instruments. Over 25 demonstrations have been given to mission and project managers, National Aeronautics and Space Administration (NASA), Department of Defense (DoD), contractor technologists, and other decision makers. This initial phase reached a high point with an OMNI demonstration given from a booth at the Johnson Space Center (JSC) Inspection Day 99 exhibition. The proof to mission managers is provided during this second phase with year 2000 accomplishments: testing the use of Internet technologies onboard an actual spacecraft. This was done with a series of tests performed using the UoSAT-12 spacecraft. This spacecraft was reconfigured on orbit at very low cost. The total period between concept and the first tests was only 6 months! Onboard software was modified to add an IP stack to support basic IP communications. Also added was support for ping, traceroute and Network Time Protocol (NTP) tests. These tests show that basic Internet functionality can be used onboard spacecraft. The performance of data was measured to show no degradation from current

  13. Mechanical Design of Spacecraft

    Science.gov (United States)

    1962-01-01

    In the spring of 1962, engineers from the Engineering Mechanics Division of the Jet Propulsion Laboratory gave a series of lectures on spacecraft design at the Engineering Design seminars conducted at the California Institute of Technology. Several of these lectures were subsequently given at Stanford University as part of the Space Technology seminar series sponsored by the Department of Aeronautics and Astronautics. Presented here are notes taken from these lectures. The lectures were conceived with the intent of providing the audience with a glimpse of the activities of a few mechanical engineers who are involved in designing, building, and testing spacecraft. Engineering courses generally consist of heavily idealized problems in order to allow the more efficient teaching of mathematical technique. Students, therefore, receive a somewhat limited exposure to actual engineering problems, which are typified by more unknowns than equations. For this reason it was considered valuable to demonstrate some of the problems faced by spacecraft designers, the processes used to arrive at solutions, and the interactions between the engineer and the remainder of the organization in which he is constrained to operate. These lecture notes are not so much a compilation of sophisticated techniques of analysis as they are a collection of examples of spacecraft hardware and associated problems. They will be of interest not so much to the experienced spacecraft designer as to those who wonder what part the mechanical engineer plays in an effort such as the exploration of space.

  14. Cross-field gradients: general concept, importance of multi-spacecraft measurements and study at 1 AU of the source intensity gradient for E > 30 keV solar event electrons

    Directory of Open Access Journals (Sweden)

    P. A. Chaizy

    Three main physical processes (and associated properties) are currently used to describe the flux and anisotropy time profiles of solar energetic particle events, called SEP profiles. They are (1) the particle scattering (due to magnetic waves), (2) the particle focusing (due to the decrease of the amplitude of the interplanetary magnetic field (IMF) with the radial distance to the Sun), and (3) the finite injection profile at the source. If their features change from one field line to another, i.e. if there is a cross-IMF gradient (CFG), then the shape of the SEP profiles will depend, at onset time, on the position of the spacecraft relative to the IMF and might vary significantly on small distance scales (e.g. 10⁶ km). One type of CFG is studied here. It is called the intensity CFG and considers variations, at the solar surface, only of the intensity of the event. It is shown here that drops of about two orders of magnitude over distances of ~10⁴ km at the Sun (1° of angular distance) can dramatically influence the SEP profiles at 1 AU. This CFG can lead to either an under- or overestimation of both the parallel mean free path and the injection parameters by factors of up to, at least, ~2-3 and 18, respectively. Multi-spacecraft analysis can be used to identify a CFG. Three basic requirements are proposed to identify, from the observations, the type of CFG being measured.

    Key words: Solar physics, astrophysics, and astronomy (energetic particles; flares and mass ejections) - Space plasma physics (transport processes)

  15. Bayesian mixture models for source separation in MEG

    International Nuclear Information System (INIS)

    Calvetti, Daniela; Homa, Laura; Somersalo, Erkki

    2011-01-01

    This paper discusses the problem of imaging electromagnetic brain activity from measurements of the induced magnetic field outside the head. This imaging modality, magnetoencephalography (MEG), is known to be severely ill posed, and in order to obtain useful estimates for the activity map, complementary information needs to be used to regularize the problem. In this paper, a particular emphasis is on finding non-superficial focal sources that induce a magnetic field that may be confused with noise due to external sources and with distributed brain noise. The data are assumed to come from a mixture of a focal source and a spatially distributed possibly virtual source; hence, to differentiate between those two components, the problem is solved within a Bayesian framework, with a mixture model prior encoding the information that different sources may be concurrently active. The mixture model prior combines one density that favors strongly focal sources and another that favors spatially distributed sources, interpreted as clutter in the source estimation. Furthermore, to address the challenge of localizing deep focal sources, a novel depth sounding algorithm is suggested, and it is shown with simulated data that the method is able to distinguish between a signal arising from a deep focal source and a clutter signal. (paper)
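
    Schematically (and only as a sketch of the kind of prior described, not the authors' exact densities), the mixture prior on the source configuration s can be written as

        \[
        \pi(s) \;=\; \alpha\,\pi_{\mathrm{focal}}(s) \;+\; (1-\alpha)\,\pi_{\mathrm{clutter}}(s), \qquad 0 \le \alpha \le 1,
        \]

    where \(\pi_{\mathrm{focal}}\) concentrates its mass on configurations with a few high-amplitude dipoles, \(\pi_{\mathrm{clutter}}\) is a broad density describing spatially distributed activity, and the posterior \(\pi(s \mid b) \propto \pi(b \mid s)\,\pi(s)\) is explored to decide which component best explains the measured field b.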

  16. Constraints on equivalent elastic source models from near-source data

    International Nuclear Information System (INIS)

    Stump, B.

    1993-01-01

    A phenomenological based seismic source model is important in quantifying the important physical processes that affect the observed seismic radiation in the linear-elastic regime. Representations such as these were used to assess yield effects on seismic waves under a Threshold Test Ban Treaty and to help transport seismic coupling experience at one test site to another. These same characterizations in a non-proliferation environment find applications in understanding the generation of the different types of body and surface waves from nuclear explosions, single chemical explosions, arrays of chemical explosions used in mining, rock bursts and earthquakes. Seismologists typically begin with an equivalent elastic representation of the source which when convolved with the propagation path effects produces a seismogram. The Representation Theorem replaces the true source with an equivalent set of body forces, boundary conditions or initial conditions. An extension of this representation shows the equivalence of the body forces, boundary conditions and initial conditions and replaces the source with a set of force moments, the first degree moment tensor for a point source representation. The difficulty with this formulation, which can completely describe the observed waveforms when the propagation path effects are known, is in the physical interpretation of the actual physical processes acting in the source volume. Observational data from within the source region, where processes are often nonlinear, linked to numerical models of the important physical processes in this region are critical to a unique physical understanding of the equivalent elastic source function
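
    For reference, the point-source form of the representation theorem alluded to above writes the displacement at a receiver as the temporal convolution of the moment tensor with the gradient of the Green's function (standard notation, e.g. Aki and Richards):

        \[
        u_n(\mathbf{x},t) \;=\; M_{pq}(t) \,*\, \frac{\partial G_{np}(\mathbf{x},t;\boldsymbol{\xi},0)}{\partial \xi_q},
        \]

    which makes explicit why, once the propagation terms \(G_{np}\) are known, the equivalent elastic source is fully specified by the moment-tensor time histories \(M_{pq}(t)\), while the physical processes acting in the source volume that produce those moments remain non-unique.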

  17. Groundwater Pollution Source Identification using Linked ANN-Optimization Model

    Science.gov (United States)

    Ayaz, Md; Srivastava, Rajesh; Jain, Ashu

    2014-05-01

    Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities including industrial and agricultural activities are generally responsible for this contamination. Identification of groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of pollution source in terms of its source characteristics is essential to adopt an effective remediation strategy. Groundwater pollution source is said to be identified completely when the source characteristics - location, strength and release period - are known. Identification of unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult for real field conditions, when the lag time between the first reading at observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts- an optimization model and an ANN model. Decision variables of linked ANN-Optimization model contain source location and release period of pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. In the formulation of the objective function, we require the lag time which is not known. An ANN model with one hidden layer is trained using Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs and lag time is obtained as the output. Performance of the proposed model is evaluated for two and three dimensional case with error-free and erroneous data. Erroneous data was generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration
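
    The linked structure can be illustrated with a toy example. The sketch below is an assumption-laden illustration, not the authors' model: it replaces the groundwater simulator with a 1-D analytical plume and recovers the source location, strength and release time by minimizing a least-squares objective; the ANN component that estimates the unknown lag time is omitted for brevity.

        import numpy as np
        from scipy.optimize import differential_evolution

        def forward_conc(x_src, strength, t_release, x_obs, t_obs, v=1.0, D=0.5):
            """Toy 1-D advection-dispersion plume from an instantaneous release."""
            tau = np.maximum(t_obs - t_release, 1e-9)
            return (strength / np.sqrt(4 * np.pi * D * tau)
                    * np.exp(-((x_obs - x_src) - v * tau) ** 2 / (4 * D * tau)))

        def objective(params, x_obs, t_obs, c_obs):
            """Misfit between observed and simulated concentration histories."""
            x_src, strength, t_release = params
            c_sim = forward_conc(x_src, strength, t_release, x_obs, t_obs)
            return np.sum((c_obs - c_sim) ** 2)

        # synthetic "observations" from a known source, then recover its parameters
        x_obs, t_obs = 10.0, np.linspace(5.0, 40.0, 30)
        c_obs = forward_conc(3.0, 2.0, 1.0, x_obs, t_obs)
        result = differential_evolution(objective, bounds=[(0, 8), (0.1, 5), (0, 5)],
                                        args=(x_obs, t_obs, c_obs), seed=0)
        print(result.x)   # should be close to (3.0, 2.0, 1.0)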

  18. Data Sources for NetZero Ft Carson Model

    Data.gov (United States)

    U.S. Environmental Protection Agency — Table of values used to parameterize and evaluate the Ft Carson NetZero integrated Model with published reference sources for each value. This dataset is associated...

  19. Near-Source Modeling Updates: Building Downwash & Near-Road

    Science.gov (United States)

    The presentation describes recent research efforts in near-source model development focusing on building downwash and near-road barriers. The building downwash section summarizes a recent wind tunnel study, ongoing computational fluid dynamics simulations and efforts to improve ...

  20. Dips spacecraft integration issues

    International Nuclear Information System (INIS)

    Determan, W.R.; Harty, R.B.

    1988-01-01

    The Department of Energy, in cooperation with the Department of Defense, has recently initiated the dynamic isotope power system (DIPS) demonstration program. DIPS is designed to provide 1 to 10 kW of electrical power for future military spacecraft. One of the near-term missions considered as a potential application for DIPS was the boost surveillance and tracking system (BSTS). A brief review and summary of the reasons behind a selection of DIPS for BSTS-type missions is presented. Many of these are directly related to spacecraft integration issues; these issues will be reviewed in the areas of system safety, operations, survivability, reliability, and autonomy

  1. Rate-control algorithms testing by using video source model

    DEFF Research Database (Denmark)

    Belyaev, Evgeny; Turlikov, Andrey; Ukhanova, Anna

    2008-01-01

    In this paper, a method for testing rate-control algorithms by means of a video source model is suggested. The proposed method significantly improves algorithm testing over a large test set.

  2. Contemporary state of spacecraft/environment interaction research

    CERN Document Server

    Novikov, L S

    1999-01-01

    Various space environment effects on spacecraft materials and equipment, and the reverse effects of spacecraft and rockets on the space environment, are considered. The need to permanently update and refine our knowledge of spacecraft/environment interaction processes is noted. Requirements imposed on models of the space environment in theoretical and experimental research on various aspects of the spacecraft/environment interaction problem are formulated. In this field, the main problems which need to be solved today and in the near future are specified. The conclusion is made that the joint analysis of both aspects of the spacecraft/environment interaction problem promotes the most effective solution of the problem.

  3. Earthquake Source Spectral Study beyond the Omega-Square Model

    Science.gov (United States)

    Uchide, T.; Imanishi, K.

    2017-12-01

    Earthquake source spectra have been used for characterizing earthquake source processes quantitatively and, at the same time, simply, so that we can analyze the source spectra for many earthquakes, especially for small earthquakes, at once and compare them with one another. A standard model for the source spectra is the omega-square model, which has a flat spectrum at low frequencies and a falloff inversely proportional to the square of frequency at high frequencies, the two regimes being separated by a corner frequency. The corner frequency has often been converted to the stress drop under the assumption of circular crack models. However, recent studies claimed the existence of another corner frequency [Denolle and Shearer, 2016; Uchide and Imanishi, 2016] thanks to the recent development of seismic networks. We have found that many earthquakes in areas other than the area studied by Uchide and Imanishi [2016] also have source spectra deviating from the omega-square model. Another part of the earthquake spectra we now focus on is the falloff rate at high frequencies, which will affect the seismic energy estimation [e.g., Hirano and Yagi, 2017]. In June 2016, we deployed seven velocity seismometers in the northern Ibaraki prefecture, where shallow crustal seismicity, mainly normal-faulting events, was activated by the 2011 Tohoku-oki earthquake. We have recorded seismograms at 1000 samples per second and at a short distance from the source, so that we can investigate the high-frequency components of the earthquake source spectra. Although we are still in the stage of discovery and confirmation of the deviation from the standard omega-square model, the update of the earthquake source spectrum model will help us systematically extract more information on the earthquake source process.
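
    For context, the omega-square description referred to above models the moment-rate spectrum as

        \[
        \dot{M}(f) \;=\; \frac{M_0}{1 + (f/f_c)^2},
        \]

    flat at the seismic moment \(M_0\) below the corner frequency \(f_c\) and decaying as \(f^{-2}\) above it. Under the usual circular-crack assumption (Brune-type scaling, with the commonly used constant of about 0.37) the corner frequency is converted to a stress drop via

        \[
        \Delta\sigma \;=\; \frac{7}{16}\,M_0\left(\frac{f_c}{0.37\,\beta}\right)^{3},
        \]

    where \(\beta\) is the shear-wave speed. A second corner frequency, or a high-frequency falloff steeper or shallower than \(f^{-2}\), are exactly the deviations from this standard model that the study targets.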

  4. Monte Carlo modelling of large scale NORM sources using MCNP.

    Science.gov (United States)

    Wallace, J D

    2013-12-01

    The representative Monte Carlo modelling of large scale planar sources (for comparison to external environmental radiation fields) is undertaken using substantial diameter and thin profile planar cylindrical sources. The relative impact of source extent, soil thickness and sky-shine is investigated to guide decisions relating to representative geometries. In addition, the impact of source-to-detector distance on the nature of the detector response, for a range of source sizes, has been investigated. These investigations, using an MCNP-based model, indicate that a soil cylinder of greater than 20 m diameter and no less than 50 cm depth/height, combined with a 20 m deep sky section above the soil cylinder, is needed to representatively model the semi-infinite plane of uniformly distributed NORM sources. Initial investigation of the effect of detector placement indicates that smaller source sizes may be used to achieve a representative response at shorter source-to-detector distances. Crown Copyright © 2013. Published by Elsevier Ltd. All rights reserved.

  5. Optimization of Excitation in FDTD Method and Corresponding Source Modeling

    Directory of Open Access Journals (Sweden)

    B. Dimitrijevic

    2015-04-01

    Source and excitation modeling in the FDTD formulation has a significant impact on the method's performance and the required simulation time. Since abrupt source introduction yields intense numerical variations in the whole computational domain, a generally accepted solution is to introduce the source slowly, using appropriate shaping functions in time. The main goal of the optimization presented in this paper is to find a balance between two opposing demands: minimal required computation time and acceptable degradation of simulation performance. Reducing the time necessary for source activation and deactivation is an important issue, especially in the design of microwave structures, where the simulation is repeated intensively in the process of device parameter optimization. The optimized source models proposed here are realized and tested within a custom-developed FDTD simulation environment.
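
    As a concrete illustration of the "slow introduction" idea (a minimal sketch under assumed parameters, not the source models optimized in the paper), a sinusoidal excitation can be multiplied by a raised-cosine envelope so that it switches on and off smoothly:

        import numpy as np

        def ramp(n, n_ramp):
            """Raised-cosine turn-on over n_ramp steps, flat (= 1) afterwards."""
            return 0.5 * (1.0 - np.cos(np.pi * min(n, n_ramp) / n_ramp))

        n_steps, n_ramp = 2000, 300          # example values (assumed)
        f0, dt = 1.0e9, 1.0e-12              # 1 GHz source, 1 ps time step (assumed)
        source = np.empty(n_steps)
        for n in range(n_steps):
            envelope = ramp(n, n_ramp) * ramp(n_steps - 1 - n, n_ramp)  # on/off ramps
            source[n] = envelope * np.sin(2 * np.pi * f0 * n * dt)
        # source[n] would be added to one field component at each FDTD update;
        # shortening n_ramp reduces run time at the cost of stronger switching
        # transients, which is the trade-off the paper's optimization addresses.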

  6. Industry perspectives on Plug-&-Play Spacecraft Avionics

    Science.gov (United States)

    Franck, R.; Graven, P.; Liptak, L.

    This paper describes the methodologies and findings from an industry survey of awareness and utility of Spacecraft Plug-&-Play Avionics (SPA). The survey was conducted via interviews, in-person and teleconference, with spacecraft prime contractors and suppliers. It focuses primarily on AFRL's SPA technology development activities but also explores the broader applicability and utility of Plug-&-Play (PnP) architectures for spacecraft. Interviews include large and small suppliers as well as large and small spacecraft prime contractors. Through these “product marketing” interviews, awareness and attitudes can be assessed, key technical and market barriers can be identified, and opportunities for improvement can be uncovered. Although this effort focuses on a high-level assessment, similar processes can be used to develop business cases and economic models which may be necessary to support investment decisions.

  7. Spacecraft Multiple Array Communication System Performance Analysis

    Science.gov (United States)

    Hwu, Shian U.; Desilva, Kanishka; Sham, Catherine C.

    2010-01-01

    The Communication Systems Simulation Laboratory (CSSL) at the NASA Johnson Space Center is tasked to perform spacecraft and ground network communication system simulations, design validation, and performance verification. The CSSL has developed simulation tools that model spacecraft communication systems and the space and ground environment in which the tools operate. In this paper, a spacecraft communication system with multiple arrays is simulated. A multiple-array combining technique is used to increase the radio frequency coverage and data rate performance. The technique achieves phase coherence among the phased arrays so that the signals combine constructively at the target receiver. There are many technical challenges in integrating a high-transmit-power communication system on a spacecraft. The array combining technique can improve the communication system's data rate and coverage performance without increasing the system transmit power requirements. Example simulation results indicate that significant performance improvement can be achieved with a phase coherence implementation.
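
    The benefit of phase coherence can be illustrated with a short numerical sketch (assumed parameters, not the CSSL simulation tools): N coherent signals add in voltage while independent receiver noise adds in power, so perfect phasing yields roughly a 10·log10(N) dB SNR gain, which degrades as residual phase errors grow.

        import numpy as np

        rng = np.random.default_rng(0)
        N, n_samp = 4, 100000                                # four arrays (assumed)
        t = np.arange(n_samp)
        tone = lambda p: np.cos(2 * np.pi * 0.01 * t + p)    # unit-amplitude carrier

        def combining_gain_db(phase_err_rms):
            """SNR gain of the combined signal over a single array."""
            phases = rng.normal(0.0, phase_err_rms, N)       # residual phasing errors
            sig_power = np.var(sum(tone(p) for p in phases)) # coherent voltage sum
            noise_power = N * 1.0                            # independent unit-variance noise
            single_snr = 0.5 / 1.0                           # one array, same noise
            return 10 * np.log10((sig_power / noise_power) / single_snr)

        for err in (0.0, 0.5, 1.5):                          # rms phase error in radians
            print(f"rms phase error {err:3.1f} rad -> gain {combining_gain_db(err):+.1f} dB")
        # perfect coherence gives ~10*log10(4) = 6 dB; the gain collapses as coherence is lost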

  8. PHARAO laser source flight model: Design and performances

    Energy Technology Data Exchange (ETDEWEB)

    Lévèque, T., E-mail: thomas.leveque@cnes.fr; Faure, B.; Esnault, F. X.; Delaroche, C.; Massonnet, D.; Grosjean, O.; Buffe, F.; Torresi, P. [Centre National d’Etudes Spatiales, 18 avenue Edouard Belin, 31400 Toulouse (France); Bomer, T.; Pichon, A.; Béraud, P.; Lelay, J. P.; Thomin, S. [Sodern, 20 Avenue Descartes, 94451 Limeil-Brévannes (France); Laurent, Ph. [LNE-SYRTE, CNRS, UPMC, Observatoire de Paris, 61 avenue de l’Observatoire, 75014 Paris (France)

    2015-03-15

    In this paper, we describe the design and the main performances of the PHARAO laser source flight model. PHARAO is a laser cooled cesium clock specially designed for operation in space and the laser source is one of the main sub-systems. The flight model presented in this work is the first remote-controlled laser system designed for spaceborne cold atom manipulation. The main challenges arise from mechanical compatibility with space constraints, which impose a high level of compactness, a low electric power consumption, a wide range of operating temperature, and a vacuum environment. We describe the main functions of the laser source and give an overview of the main technologies developed for this instrument. We present some results of the qualification process. The characteristics of the laser source flight model, and their impact on the clock performances, have been verified in operational conditions.

  9. Submarines, spacecraft and exhaled breath.

    Science.gov (United States)

    Pleil, Joachim D; Hansel, Armin

    2012-03-01

    extend the underwater endurance to 2-3 weeks. These propulsion engineering changes also reduce periodic ventilation of the submarine's interior and thus put a greater burden on the various maintenance systems. We note that the spaceflight community has similar issues; their energy production mechanisms are essentially air independent in that they rely almost entirely on photovoltaic arrays for electricity generation, with only emergency back-up power from alcohol fuel cells. In response to prolonged underwater submarine AIP operations, months-long spaceflight operations onboard the ISS and planning for future years-long missions to Mars, there has been an increasing awareness that bio-monitoring is an important factor for assessing the health and awareness states of the crewmembers. SAMAP researchers have been proposing various air and bio-monitoring instruments and methods in response to these needs. One of the most promising new methodologies is the non-invasive monitoring of exhaled breath. So, what do the IABR and SAMAP communities have in common? Inhalation toxicology. We are both concerned with contamination from the environment, either as a direct health threat or as a confounder for diagnostic assessments. For example, the exhaled breath from subjects in a contaminated and enclosed artificial environment (submarine or spacecraft) can serve as a model system and a source of contamination for their peers in a cleaner environment. In a similar way, exhaled anaesthetics can serve as a source of contamination in hospital/clinical settings, or exhalation of occupational exposures to tetrachloroethylene can impact family members at home. Instrumentation development. Both communities have similar needs for better, more specific and more sensitive instruments. Certainly, the analytical instruments to be used onboard submarines and spacecraft have severe restrictions on energy use, physical size and ease of operation. The medical and clinical communities have similar long

  10. Conceptual Design of Simulation Models in an Early Development Phase of Lunar Spacecraft Simulator Using SMP2 Standard

    Science.gov (United States)

    Lee, Hoon Hee; Koo, Cheol Hea; Moon, Sung Tae; Han, Sang Hyuck; Ju, Gwang Hyeok

    2013-08-01

    A conceptual study for a Korean lunar orbiter/lander prototype has been performed at the Korea Aerospace Research Institute (KARI). Across diverse space programs in European countries, a variety of simulation applications have been developed using the SMP2 (Simulation Modelling Platform) standard, which addresses portability and reuse of simulation models by various model users. KARI has not only first-hand experience in developing an SMP-compatible simulation environment but also an ongoing study to apply the SMP2 simulation model development process to a simulator development project for lunar missions. KARI has tried to extend the coverage of the development domain based on the SMP2 standard across the whole simulation model life-cycle, from software design to its validation, through a lunar exploration project. Figure 1 shows a snapshot from a visualization tool for the simulation of lunar lander motion. In reality, a demonstrator prototype on the right-hand side of the image was made and tested in 2012. In an early phase of simulator development, prior to a kick-off in the near future, the targeted hardware to be modelled was investigated and identified at the end of 2012. The architectural breakdown of the lunar simulator at system level was performed, and an architecture with a hierarchical tree of models, from the system down to parts at lower levels, has been established. Finally, SMP documents such as the Catalogue, Assembly, Schedule and so on were converted using an XML (eXtensible Markup Language) converter. To obtain the benefits of the approaches and design mechanisms suggested in the SMP2 standard as far as possible, object-oriented and component-based design concepts were strictly followed throughout the whole model development process.

  11. The Unfolding of Value Sources During Online Business Model Transformation

    Directory of Open Access Journals (Sweden)

    Nadja Hoßbach

    2016-12-01

    Purpose: In the magazine publishing industry, viable online business models are still rare to absent. To prepare for the ‘digital future’ and safeguard their long-term survival, many publishers are currently in the process of transforming their online business model. Against this backdrop, this study aims to develop a deeper understanding of (1) how the different building blocks of an online business model are transformed over time and (2) how sources of value creation unfold during this transformation process. Methodology: To answer our research question, we conducted a longitudinal case study with a leading German business magazine publisher (called BIZ). Data was triangulated from multiple sources including interviews, internal documents, and direct observations. Findings: Based on our case study, we find that BIZ used the transformation process to differentiate its online business model from its traditional print business model along several dimensions, and that BIZ’s online business model changed from an efficiency- to a complementarity- to a novelty-based model during this process. Research implications: Our findings suggest that different business model transformation phases relate to different value sources, questioning the appropriateness of value source-based approaches for classifying business models. Practical implications: The results of our case study highlight the need for online-offline business model differentiation and point to the important distinction between service and product differentiation. Originality: Our study contributes to the business model literature by applying a dynamic and holistic perspective on the link between online business model changes and unfolding value sources.

  12. Modeling water demand when households have multiple sources of water

    Science.gov (United States)

    Coulibaly, Lassina; Jakus, Paul M.; Keith, John E.

    2014-07-01

    A significant portion of the world's population lives in areas where public water delivery systems are unreliable and/or deliver poor quality water. In response, people have developed important alternatives to publicly supplied water. To date, most water demand research has been based on single-equation models for a single source of water, with very few studies that have examined water demand from two sources of water (where all nonpublic system water sources have been aggregated into a single demand). This modeling approach leads to two outcomes. First, the demand models do not capture the full range of alternatives, so the true economic relationship among the alternatives is obscured. Second, and more seriously, economic theory predicts that demand for a good becomes more price-elastic as the number of close substitutes increases. If researchers artificially limit the number of alternatives studied to something less than the true number, the price elasticity estimate may be biased downward. This paper examines water demand in a region with near universal access to piped water, but where system reliability and quality is such that many alternative sources of water exist. In extending the demand analysis to four sources of water, we are able to (i) demonstrate why households choose the water sources they do, (ii) provide a richer description of the demand relationships among sources, and (iii) calculate own-price elasticity estimates that are more elastic than those generally found in the literature.

  13. Spacecraft Dynamics Should be Considered in Kalman Filter Attitude Estimation

    Science.gov (United States)

    Yang, Yaguang; Zhou, Zhiqiang

    2016-01-01

    Kalman filter based spacecraft attitude estimation has been used in some high-profile missions and has been widely discussed in the literature. While some models in spacecraft attitude estimation include spacecraft dynamics, most do not. To the best of our knowledge, there is no comparison of which model is the better choice. In this paper, we discuss the reasons why spacecraft dynamics should be considered in the Kalman filter based spacecraft attitude estimation problem. We also propose a reduced quaternion spacecraft dynamics model which admits additive noise. The geometry of the reduced quaternion model and the additive noise are discussed. This treatment is mathematically more elegant and computationally easier. We use simulation examples to verify our claims.
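
    For readers unfamiliar with the modeling choice being discussed, the attitude kinematics underlying such filters are, in standard form (the paper's reduced model and its noise treatment may differ in detail),

        \[
        \dot{q} \;=\; \tfrac{1}{2}\,\Omega(\boldsymbol{\omega})\,q,
        \qquad
        \Omega(\boldsymbol{\omega}) \;=\;
        \begin{bmatrix}
        0 & -\boldsymbol{\omega}^{\mathsf T}\\
        \boldsymbol{\omega} & -[\boldsymbol{\omega}\times]
        \end{bmatrix},
        \]

    where q is the unit attitude quaternion and \(\boldsymbol{\omega}\) the body angular rate. A "reduced quaternion" formulation propagates only the three-element vector part \(q_v\) and recovers the scalar part from the unit-norm constraint, \(q_0 = \sqrt{1 - \|q_v\|^2}\). Including the spacecraft dynamics means additionally propagating \(\boldsymbol{\omega}\) through Euler's rigid-body equation \(J\dot{\boldsymbol{\omega}} = -\boldsymbol{\omega}\times(J\boldsymbol{\omega}) + \boldsymbol{\tau}\) rather than treating the measured rate as a filter input.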

  14. Optical linear algebra processors - Noise and error-source modeling

    Science.gov (United States)

    Casasent, D.; Ghosh, A.

    1985-01-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  15. Optical linear algebra processors: noise and error-source modeling.

    Science.gov (United States)

    Casasent, D; Ghosh, A

    1985-06-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAP's) are considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  16. MCNP model for the many KE-Basin radiation sources

    International Nuclear Information System (INIS)

    Rittmann, P.D.

    1997-01-01

    This document presents a model for the location and strength of radiation sources in the accessible areas of KE-Basin which agrees well with data taken on a regular grid in September of 1996. This modelling work was requested to support dose rate reduction efforts in KE-Basin. Anticipated fuel removal activities require lower dose rates to minimize annual dose to workers. With this model, the effects of component cleanup or removal can be estimated in advance to evaluate their effectiveness. In addition, the sources contributing most to the radiation fields in a given location can be identified and dealt with

  17. Open source data assimilation framework for hydrological modeling

    Science.gov (United States)

    Ridler, Marc; Hummel, Stef; van Velzen, Nils; Katrine Falk, Anne; Madsen, Henrik

    2013-04-01

    An open-source data assimilation framework is proposed for hydrological modeling. Data assimilation (DA) in hydrodynamic and hydrological forecasting systems has great potential to improve predictions and model results. The basic principle is to incorporate measurement information into a model with the aim of improving model results through error minimization. Great strides have been made to assimilate traditional in-situ measurements such as discharge, soil moisture, hydraulic head and snowpack into hydrologic models. More recently, remotely sensed data retrievals of soil moisture, snow water equivalent or snow cover area, surface water elevation, terrestrial water storage and land surface temperature have been successfully assimilated in hydrological models. The assimilation algorithms have become increasingly sophisticated to manage measurement and model bias, non-linear systems, data sparsity (time & space) and undetermined system uncertainty. It is therefore useful to use a pre-existing DA toolbox such as OpenDA. OpenDA is an open interface standard for (and free implementation of) a set of tools to quickly implement DA and calibration for arbitrary numerical models. The basic design philosophy of OpenDA is to break down DA into a set of building blocks programmed in object-oriented languages. To implement DA, a model must interact with OpenDA to create model instances, propagate the model, get/set variables (or parameters) and free the model once DA is completed. An open-source interface for hydrological models capable of all these tasks exists: OpenMI. OpenMI is an open source standard interface already adopted by key hydrological model providers. It defines a universal approach to interact with hydrological models during simulation to exchange data during runtime, thus facilitating the interactions between models and data sources. The interface is flexible enough so that models can interact even if the model is coded in a different language, represent
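
    The interaction pattern described above (create an instance, propagate it, get/set variables, free it) can be sketched as a minimal interface. The names and signatures below are purely illustrative and are not the actual OpenDA or OpenMI API:

        from abc import ABC, abstractmethod

        class DAModelInstance(ABC):
            """Hypothetical model-side interface used by a data assimilation driver."""

            @abstractmethod
            def propagate(self, t_end: float) -> None:
                """Advance the model state to time t_end."""

            @abstractmethod
            def get_values(self, name: str) -> list:
                """Return the named state variable or parameter."""

            @abstractmethod
            def set_values(self, name: str, values: list) -> None:
                """Overwrite the named state variable or parameter."""

            @abstractmethod
            def finalize(self) -> None:
                """Free the model instance once assimilation is complete."""

        def assimilation_cycle(ensemble, t_obs, observation, analysis):
            """One generic ensemble cycle: propagate each member, then correct it."""
            for member in ensemble:                            # DAModelInstance objects
                member.propagate(t_obs)
                state = member.get_values("soil_moisture")     # example variable name (assumed)
                member.set_values("soil_moisture", analysis(state, observation))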

  18. Effects of Source RDP Models and Near-source Propagation: Implication for Seismic Yield Estimation

    Science.gov (United States)

    Saikia, C. K.; Helmberger, D. V.; Stead, R. J.; Woods, B. B.

    It has proven difficult to uniquely untangle the source and propagation effects on the observed seismic data from underground nuclear explosions, even when large quantities of near-source, broadband data are available for analysis. This leads to uncertainties in our ability to quantify the nuclear seismic source function and, consequently, the accuracy of seismic yield estimates for underground explosions. Extensive deterministic modeling analyses of the seismic data recorded from underground explosions at a variety of test sites have been conducted over the years, and the results of these studies suggest that variations in the seismic source characteristics between test sites may be contributing to the observed differences in the magnitude/yield relations applicable at those sites. This contributes to our uncertainty in the determination of seismic yield estimates for explosions at previously uncalibrated test sites. In this paper we review issues involving the relationship of Nevada Test Site (NTS) source scaling laws to those at other sites. The Joint Verification Experiment (JVE) indicates that a magnitude (mb) bias (δmb) exists between the Semipalatinsk test site (STS) in the former Soviet Union (FSU) and the Nevada test site (NTS) in the United States. Generally this δmb is attributed to differential attenuation in the upper mantle beneath the two test sites. This assumption results in rather large estimates of yield for large mb tunnel shots at Novaya Zemlya. A re-examination of the US testing experiments suggests that this δmb bias can partly be explained by anomalous NTS (Pahute) source characteristics. This interpretation is based on the modeling of US events at a number of test sites. Using a modified Haskell source description, we investigated the influence of the source Reduced Displacement Potential (RDP) parameters ψ∞, K, and B by fitting short- and long-period data simultaneously, including the near-field body and surface waves. In general
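
    For orientation, one widely used modified Haskell parameterization of the reduced displacement potential with these three parameters (a common form in the explosion-source literature; the exact form used in the paper is not reproduced here) is

        \[
        \psi(t) \;=\; \psi_{\infty}\Bigl[\,1 - e^{-Kt}\bigl(1 + Kt + \tfrac{1}{2}(Kt)^2 - B\,(Kt)^3\bigr)\Bigr],
        \]

    where \(\psi_{\infty}\) sets the long-period (steady-state) level and hence the moment, K controls the corner frequency, and B controls the overshoot that shapes the high-frequency spectrum.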

  19. An Empirical Temperature Variance Source Model in Heated Jets

    Science.gov (United States)

    Khavaran, Abbas; Bridges, James

    2012-01-01

    An acoustic analogy approach is implemented that models the sources of jet noise in heated jets. The equivalent sources of turbulent mixing noise are recognized as the differences between the fluctuating and Favre-averaged Reynolds stresses and enthalpy fluxes. While in a conventional acoustic analogy only Reynolds stress components are scrutinized for their noise generation properties, it is now accepted that a comprehensive source model should include the additional entropy source term. Following Goldstein's generalized acoustic analogy, the set of Euler equations is divided into two sets of equations that govern a non-radiating base flow plus its residual components. When the base flow is considered as a locally parallel mean flow, the residual equations may be rearranged to form an inhomogeneous third-order wave equation. A general solution is written subsequently using a Green's function method while all non-linear terms are treated as the equivalent sources of aerodynamic sound and are modeled accordingly. In a previous study, a specialized Reynolds-averaged Navier-Stokes (RANS) solver was implemented to compute the variance of thermal fluctuations that determine the enthalpy flux source strength. The main objective here is to present an empirical model capable of providing a reasonable estimate of the stagnation temperature variance in a jet. Such a model is parameterized as a function of the mean stagnation temperature gradient in the jet, and is evaluated using commonly available RANS solvers. The ensuing thermal source distribution is compared with measurements as well as computational results from a dedicated RANS solver that employs an enthalpy variance and dissipation rate model. Turbulent mixing noise predictions are presented for a wide range of jet temperature ratios from 1.0 to 3.20.

  20. Standardizing the information architecture for spacecraft operations

    Science.gov (United States)

    Easton, C. R.

    1994-01-01

    This paper presents an information architecture developed for the Space Station Freedom as a model from which to derive an information architecture standard for advanced spacecraft. The information architecture provides a way of making information available across a program, and among programs, assuming that the information will be in a variety of local formats, structures and representations. It provides a format that can be expanded to define all of the physical and logical elements that make up a program, add definitions as required, and import definitions from prior programs to a new program. It allows a spacecraft and its control center to work in different representations and formats, with the potential for supporting existing spacecraft from new control centers. It supports a common view of data and control of all spacecraft, regardless of their own internal view of their data and control characteristics, and of their communications standards, protocols and formats. This information architecture is central to standardizing spacecraft operations, in that it provides a basis for information transfer and translation, such that diverse spacecraft can be monitored and controlled in a common way.

  1. Robust Parametric Control of Spacecraft Rendezvous

    Directory of Open Access Journals (Sweden)

    Dake Gu

    2014-01-01

    This paper proposes a method to design a robust parametric control law for autonomous rendezvous of spacecraft with uncertain inertial information. We consider model uncertainty in the traditional Clohessy-Wiltshire (C-W) equations to formulate the dynamic model of the relative motion. Based on eigenstructure assignment and model reference theory, a concise control law for spacecraft rendezvous is proposed whose free parameters are determined by solving an optimization problem. The cost function accounts for the stabilization of the system and other performance measures. Simulation results illustrate the robustness and effectiveness of the proposed control.
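
    For readers unfamiliar with the nominal dynamics, a minimal sketch of the C-W relative-motion model under a simple state feedback follows. The mean motion, gains, initial state and integration scheme are illustrative assumptions, not the paper's parametric design.

```python
# Minimal sketch of the Clohessy-Wiltshire (C-W) relative-motion model with a
# hypothetical proportional-derivative feedback; all numbers are illustrative.
import numpy as np

def cw_matrices(n):
    """Continuous-time C-W matrices for state [x, y, z, vx, vy, vz]
    (radial, along-track, cross-track); n = mean motion of the target orbit [rad/s]."""
    A = np.array([
        [0,       0,  0,      1,   0,  0],
        [0,       0,  0,      0,   1,  0],
        [0,       0,  0,      0,   0,  1],
        [3*n**2,  0,  0,      0, 2*n,  0],
        [0,       0,  0,   -2*n,   0,  0],
        [0,       0, -n**2,   0,   0,  0],
    ], dtype=float)
    B = np.vstack([np.zeros((3, 3)), np.eye(3)])   # thrust acceleration input
    return A, B

n = 0.0011                                            # roughly a low Earth orbit
A, B = cw_matrices(n)
K = np.hstack([0.002 * np.eye(3), 0.08 * np.eye(3)])  # illustrative gains only
x = np.array([500.0, -200.0, 50.0, 0.0, 0.0, 0.0])    # initial relative state [m, m/s]
dt = 1.0
for _ in range(6000):                                 # simple forward-Euler propagation
    u = -K @ x
    x = x + dt * (A @ x + B @ u)
print("final relative position [m]:", np.round(x[:3], 2))
```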

  2. Open Sourcing Social Change: Inside the Constellation Model

    OpenAIRE

    Tonya Surman; Mark Surman

    2008-01-01

    The constellation model was developed by and for the Canadian Partnership for Children's Health and the Environment. The model offers an innovative approach to organizing collaborative efforts in the social mission sector and shares various elements of the open source model. It emphasizes self-organizing and concrete action within a network of partner organizations working on a common issue. Constellations are self-organizing action teams that operate within the broader strategic vision of a ...

  3. White Dwarf Model Atmospheres: Synthetic Spectra for Super Soft Sources

    OpenAIRE

    Rauch, Thomas

    2011-01-01

    The Tübingen NLTE Model-Atmosphere Package (TMAP) calculates fully metal-line blanketed white dwarf model atmospheres and spectral energy distributions (SEDs) at a high level of sophistication. Such SEDs are easily accessible via the German Astrophysical Virtual Observatory (GAVO) service TheoSSA. We discuss applications of TMAP models to (pre) white dwarfs during the hottest stages of their stellar evolution, e.g. in the parameter range of novae and super soft sources.

  4. White Dwarf Model Atmospheres: Synthetic Spectra for Supersoft Sources

    Science.gov (United States)

    Rauch, Thomas

    2013-01-01

    The Tübingen NLTE Model-Atmosphere Package (TMAP) calculates fully metal-line blanketed white dwarf model atmospheres and spectral energy distributions (SEDs) at a high level of sophistication. Such SEDs are easily accessible via the German Astrophysical Virtual Observatory (GAVO) service TheoSSA. We discuss applications of TMAP models to (pre) white dwarfs during the hottest stages of their stellar evolution, e.g. in the parameter range of novae and supersoft sources.

  5. Intelligent spacecraft module

    Science.gov (United States)

    Oungrinis, Konstantinos-Alketas; Liapi, Marianthi; Kelesidi, Anna; Gargalis, Leonidas; Telo, Marinela; Ntzoufras, Sotiris; Paschidi, Mariana

    2014-12-01

    The paper presents the development of an on-going research project that focuses on a human-centered design approach to habitable spacecraft modules. It focuses on the technical requirements and proposes approaches on how to achieve a spatial arrangement of the interior that addresses sufficiently the functional, physiological and psychosocial needs of the people living and working in such confined spaces that entail long-term environmental threats to human health and performance. Since the research perspective examines the issue from a qualitative point of view, it is based on establishing specific relationships between the built environment and its users, targeting people's bodily and psychological comfort as a measure toward a successful mission. This research has two basic branches, one examining the context of the system's operation and behavior and the other in the direction of identifying, experimenting and formulating the environment that successfully performs according to the desired context. The latter aspect is researched upon the construction of a scaled-model on which we run series of tests to identify the materiality, the geometry and the electronic infrastructure required. Guided by the principles of sensponsive architecture, the ISM research project explores the application of the necessary spatial arrangement and behavior for a user-centered, functional interior where the appropriate intelligent systems are based upon the existing mechanical and chemical support ones featured on space today, and especially on the ISS. The problem is set according to the characteristics presented at the Mars500 project, regarding the living quarters of six crew-members, along with their hygiene, leisure and eating areas. Transformable design techniques introduce spatial economy, adjustable zoning and increased efficiency within the interior, securing at the same time precise spatial orientation and character at any given time. The sensponsive configuration is

  6. Extended nonnegative tensor factorisation models for musical sound source separation.

    Science.gov (United States)

    FitzGerald, Derry; Cranitch, Matt; Coyle, Eugene

    2008-01-01

    Recently, shift-invariant tensor factorisation algorithms have been proposed for the purposes of sound source separation of pitched musical instruments. However, in practice, existing algorithms require the use of log-frequency spectrograms to allow shift invariance in frequency which causes problems when attempting to resynthesise the separated sources. Further, it is difficult to impose harmonicity constraints on the recovered basis functions. This paper proposes a new additive synthesis-based approach which allows the use of linear-frequency spectrograms as well as imposing strict harmonic constraints, resulting in an improved model. Further, these additional constraints allow the addition of a source filter model to the factorisation framework, and an extended model which is capable of separating mixtures of pitched and percussive instruments simultaneously.

  7. Extended Nonnegative Tensor Factorisation Models for Musical Sound Source Separation

    Directory of Open Access Journals (Sweden)

    Derry FitzGerald

    2008-01-01

    Recently, shift-invariant tensor factorisation algorithms have been proposed for the purposes of sound source separation of pitched musical instruments. However, in practice, existing algorithms require the use of log-frequency spectrograms to allow shift invariance in frequency which causes problems when attempting to resynthesise the separated sources. Further, it is difficult to impose harmonicity constraints on the recovered basis functions. This paper proposes a new additive synthesis-based approach which allows the use of linear-frequency spectrograms as well as imposing strict harmonic constraints, resulting in an improved model. Further, these additional constraints allow the addition of a source filter model to the factorisation framework, and an extended model which is capable of separating mixtures of pitched and percussive instruments simultaneously.

  8. Spacecraft Thermal Management

    Science.gov (United States)

    Hurlbert, Kathryn Miller

    2009-01-01

    In the 21st century, the National Aeronautics and Space Administration (NASA), the Russian Federal Space Agency, the National Space Agency of Ukraine, the China National Space Administration, and many other organizations representing spacefaring nations shall continue or newly implement robust space programs. Additionally, business corporations are pursuing commercialization of space for enabling space tourism and capital business ventures. Future space missions are likely to include orbiting satellites, orbiting platforms, space stations, interplanetary vehicles, planetary surface missions, and planetary research probes. Many of these missions will include humans to conduct research for scientific and terrestrial benefits and for space tourism, and this century will therefore establish a permanent human presence beyond Earth's confines. Other missions will not include humans, but will be autonomous (e.g., satellites, robotic exploration), and will also serve to support the goals of exploring space and providing benefits to Earth's populace. This section focuses on thermal management systems for human space exploration, although the guiding principles can be applied to unmanned space vehicles as well. All spacecraft require a thermal management system to maintain a tolerable thermal environment for the spacecraft crew and/or equipment. The requirements for human rating and the specified controlled temperature range (approximately 275 K - 310 K) for crewed spacecraft are unique, and key design criteria stem from overall vehicle and operational/programmatic considerations. These criteria include high reliability, low mass, minimal power requirements, low development and operational costs, and high confidence for mission success and safety. This section describes the four major subsystems for crewed spacecraft thermal management systems, and design considerations for each. Additionally, some examples of specialized or advanced thermal system technologies are presented.

  9. Formation of the high-energy ion population in the earth's magnetotail: spacecraft observations and theoretical models

    Directory of Open Access Journals (Sweden)

    A. V. Artemyev

    2014-10-01

    We investigate the formation of the high-energy (E ∈ [20,600] keV) ion population in the earth's magnetotail. We collect statistics of 4 years of Interball/Tail observations (1995–1998) in the vicinity of the neutral plane in the magnetotail region (X RE, |Y| ≤ 20 RE) in the geocentric solar magnetospheric (GSM) system. We study the dependence of high-energy ion spectra on the thermal-plasma parameters (the temperature Ti and the amplitude of the bulk velocity vi) and on the magnetic-field component Bz. The ion population in the energy range E ∈ [20,600] keV can be separated into a thermal core and a power-law tail with slope (index) ~ −4.5. Fluxes of the high-energy ion population increase with the growth of Bz, vi and especially Ti, but the spectrum index seems to be independent of these parameters. We suggest that the high-energy ion population is generated by small-scale transient processes, rather than by the global reconfiguration of the magnetotail. We propose a relatively simple and general model of ion acceleration by transient bursts of the electric field. This model describes the power-law energy spectra and predicts typical energies of accelerated ions.

  10. Monitoring alert and drowsy states by modeling EEG source nonstationarity

    Science.gov (United States)

    Hsu, Sheng-Hsiou; Jung, Tzyy-Ping

    2017-10-01

    Objective. As a human brain performs various cognitive functions within ever-changing environments, states of the brain characterized by recorded brain activities such as electroencephalogram (EEG) are inevitably nonstationary. The challenges of analyzing the nonstationary EEG signals include finding neurocognitive sources that underlie different brain states and using EEG data to quantitatively assess the state changes. Approach. This study hypothesizes that brain activities under different states, e.g. levels of alertness, can be modeled as distinct compositions of statistically independent sources using independent component analysis (ICA). This study presents a framework to quantitatively assess the EEG source nonstationarity and estimate levels of alertness. The framework was tested against EEG data collected from 10 subjects performing a sustained-attention task in a driving simulator. Main results. Empirical results illustrate that EEG signals under alert versus drowsy states, indexed by reaction speeds to driving challenges, can be characterized by distinct ICA models. By quantifying the goodness-of-fit of each ICA model to the EEG data using the model deviation index (MDI), we found that MDIs were significantly correlated with the reaction speeds (r  =  -0.390 with alertness models and r  =  0.449 with drowsiness models) and the opposite correlations indicated that the two models accounted for sources in the alert and drowsy states, respectively. Based on the observed source nonstationarity, this study also proposes an online framework using a subject-specific ICA model trained with an initial (alert) state to track the level of alertness. For classification of alert against drowsy states, the proposed online framework achieved an averaged area-under-curve of 0.745 and compared favorably with a classic power-based approach. Significance. This ICA-based framework provides a new way to study changes of brain states and can be applied to
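
    The record above quantifies state changes by scoring how well a fixed ICA model fits new EEG data. A hedged sketch of that general idea is given below: segments are scored by their negative log-likelihood under an assumed super-Gaussian source prior. This is only a generic goodness-of-fit proxy, not the paper's model deviation index (MDI).

```python
# Hedged illustration: score how well an ICA model trained on an "alert" baseline
# fits later EEG segments, using a log-likelihood with a logistic (super-Gaussian)
# source prior. This is a generic proxy, not the paper's MDI definition.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
baseline = rng.laplace(size=(2000, 8))      # stand-in for alert-state EEG (samples x channels)
segment = rng.normal(size=(500, 8))         # stand-in for a later, different segment

ica = FastICA(n_components=8, whiten="unit-variance", random_state=0)
ica.fit(baseline)
W = ica.components_                         # unmixing matrix (components x channels)

def neg_loglik_per_sample(X, W):
    """Negative log-likelihood per sample under s = W x with p(s) ~ 1/cosh(s)."""
    S = X @ W.T
    log_prior = -np.sum(np.log(np.cosh(S)), axis=1)
    _, logdet = np.linalg.slogdet(W)
    return float(-(log_prior + logdet).mean())

print("baseline fit   :", neg_loglik_per_sample(baseline, W))
print("new segment fit:", neg_loglik_per_sample(segment, W))
```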

  11. Time-dependent source model of the Lusi mud volcano

    Science.gov (United States)

    Shirzaei, M.; Rudolph, M. L.; Manga, M.

    2014-12-01

    The Lusi mud eruption, near Sidoarjo, East Java, Indonesia, began erupting in May 2006 and continues to erupt today. Previous analyses of surface deformation data suggested an exponential decay of the pressure in the mud source, but did not constrain the geometry and evolution of the source(s) from which the erupting mud and fluids ascend. To understand the spatiotemporal evolution of the mud and fluid sources, we apply a time-dependent inversion scheme to a densely populated InSAR time series of the surface deformation at Lusi. The SAR data set includes 50 images acquired on 3 overlapping tracks of the ALOS L-band satellite between May 2006 and April 2011. Following multitemporal analysis of this data set, the obtained surface deformation time series is inverted in a time-dependent framework to solve for the volume changes of distributed point sources in the subsurface. The volume change distribution resulting from this modeling scheme shows two zones of high volume change underneath Lusi at 0.5-1.5 km and 4-5.5km depth as well as another shallow zone, 7 km to the west of Lusi and underneath the Wunut gas field. The cumulative volume change within the shallow source beneath Lusi is ~2-4 times larger than that of the deep source, whilst the ratio of the Lusi shallow source volume change to that of Wunut gas field is ~1. This observation and model suggest that the Lusi shallow source played a key role in eruption process and mud supply, but that additional fluids do ascend from depths >4 km on eruptive timescales.
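
    As an illustration of the kind of forward model such a point-source inversion can rest on, the sketch below inverts synthetic uplift for the volume changes of two fixed Mogi-type sources. The Mogi simplification, source depths, observation grid and noise level are assumptions for illustration, not the study's time-dependent InSAR scheme.

```python
# Hedged sketch: linear least-squares inversion of surface uplift for the volume
# changes of two fixed point (Mogi-type) sources; all numbers are hypothetical.
import numpy as np

def mogi_uz(xs, ys, d, x, y, nu=0.25):
    """Vertical surface displacement per unit volume change of a point source
    at (xs, ys, depth d), evaluated at surface points (x, y)."""
    r2 = (x - xs) ** 2 + (y - ys) ** 2
    return (1.0 - nu) / np.pi * d / (r2 + d ** 2) ** 1.5

# Surface observation grid [m]
x, y = np.meshgrid(np.linspace(-5e3, 5e3, 25), np.linspace(-5e3, 5e3, 25))
x, y = x.ravel(), y.ravel()

# Two hypothetical sources: shallow (1 km) and deep (4.5 km) beneath the vent
sources = [(0.0, 0.0, 1.0e3), (0.0, 0.0, 4.5e3)]
G = np.column_stack([mogi_uz(xs, ys, d, x, y) for xs, ys, d in sources])

true_dV = np.array([2.0e6, 0.8e6])                          # m^3, invented for the demo
obs = G @ true_dV + np.random.default_rng(1).normal(0, 2e-3, x.size)

dV_hat, *_ = np.linalg.lstsq(G, obs, rcond=None)
print("recovered volume changes [m^3]:", dV_hat)
```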

  12. Modeling the contribution of point sources and non-point sources to Thachin River water pollution.

    Science.gov (United States)

    Schaffner, Monika; Bader, Hans-Peter; Scheidegger, Ruth

    2009-08-15

    Major rivers in developing and emerging countries suffer increasingly of severe degradation of water quality. The current study uses a mathematical Material Flow Analysis (MMFA) as a complementary approach to address the degradation of river water quality due to nutrient pollution in the Thachin River Basin in Central Thailand. This paper gives an overview of the origins and flow paths of the various point- and non-point pollution sources in the Thachin River Basin (in terms of nitrogen and phosphorus) and quantifies their relative importance within the system. The key parameters influencing the main nutrient flows are determined and possible mitigation measures discussed. The results show that aquaculture (as a point source) and rice farming (as a non-point source) are the key nutrient sources in the Thachin River Basin. Other point sources such as pig farms, households and industries, which were previously cited as the most relevant pollution sources in terms of organic pollution, play less significant roles in comparison. This order of importance shifts when considering the model results for the provincial level. Crosschecks with secondary data and field studies confirm the plausibility of our simulations. Specific nutrient loads for the pollution sources are derived; these can be used for a first broad quantification of nutrient pollution in comparable river basins. Based on an identification of the sensitive model parameters, possible mitigation scenarios are determined and their potential to reduce the nutrient load evaluated. A comparison of simulated nutrient loads with measured nutrient concentrations shows that nutrient retention in the river system may be significant. Sedimentation in the slow flowing surface water network as well as nitrogen emission to the air from the warm oxygen deficient waters are certainly partly responsible, but also wetlands along the river banks could play an important role as nutrient sinks.

  13. Retrieving global aerosol sources from satellites using inverse modeling

    Directory of Open Access Journals (Sweden)

    O. Dubovik

    2008-01-01

    Understanding aerosol effects on global climate requires knowing the global distribution of tropospheric aerosols. By accounting for aerosol sources, transports, and removal processes, chemical transport models simulate the global aerosol distribution using archived meteorological fields. We develop an algorithm for retrieving global aerosol sources from satellite observations of aerosol distribution by inverting the GOCART aerosol transport model.

    The inversion is based on a generalized, multi-term least-squares-type fitting, allowing flexible selection and refinement of a priori algorithm constraints. For example, limitations can be placed on retrieved quantity partial derivatives, to constrain global aerosol emission space and time variability in the results. Similarities and differences between commonly used inverse modeling and remote sensing techniques are analyzed. To retain the high space and time resolution of long-period, global observational records, the algorithm is expressed using adjoint operators.

    Successful global aerosol emission retrievals at 2°×2.5 resolution were obtained by inverting GOCART aerosol transport model output, assuming constant emissions over the diurnal cycle, and neglecting aerosol compositional differences. In addition, fine and coarse mode aerosol emission sources were inverted separately from MODIS fine and coarse mode aerosol optical thickness data, respectively. These assumptions are justified, based on observational coverage and accuracy limitations, producing valuable aerosol source locations and emission strengths. From two weeks of daily MODIS observations during August 2000, the global placement of fine mode aerosol sources agreed with available independent knowledge, even though the inverse method did not use any a priori information about aerosol sources, and was initialized with a "zero aerosol emission" assumption. Retrieving coarse mode aerosol emissions was less successful
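
    A toy version of the multi-term least-squares idea is sketched below: observations are fit through a forward operator while penalty terms pull the solution toward an a priori emission state and toward smooth variability. The operators, weights and dimensions are stand-ins, not the GOCART/MODIS system used in the study.

```python
# Hedged sketch of multi-term least-squares inversion: data misfit plus a priori
# and smoothness penalties stacked into one linear least-squares problem.
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_src = 120, 40
K = rng.random((n_obs, n_src))                 # toy Jacobian of observations w.r.t. emissions
x_true = np.abs(rng.normal(1.0, 0.5, n_src))
y = K @ x_true + rng.normal(0, 0.05, n_obs)

x_prior = np.zeros(n_src)                      # "zero aerosol emission" first guess, as in the paper
D = np.eye(n_src) - np.eye(n_src, k=1)         # simple first-difference (smoothness) operator
g_obs, g_prior, g_smooth = 1.0, 0.01, 0.1      # term weights (assumed)

A = np.vstack([np.sqrt(g_obs) * K,
               np.sqrt(g_prior) * np.eye(n_src),
               np.sqrt(g_smooth) * D])
b = np.concatenate([np.sqrt(g_obs) * y, np.sqrt(g_prior) * x_prior, np.zeros(n_src)])
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
print("mean relative error:", np.mean(np.abs(x_hat - x_true) / x_true))
```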

  14. Large-Scale Spacecraft Fire Safety Tests

    Science.gov (United States)

    Urban, David; Ruff, Gary A.; Ferkul, Paul V.; Olson, Sandra; Fernandez-Pello, A. Carlos; T'ien, James S.; Torero, Jose L.; Cowlard, Adam J.; Rouvreau, Sebastien; Minster, Olivier; hide

    2014-01-01

    . The first flight (Saffire-1) is scheduled for July 2015 with the other two following at six-month intervals. A computer modeling effort will complement the experimental effort. Although the experiment will need to meet rigorous safety requirements to ensure the carrier vehicle does not sustain damage, the absence of a crew removes the need for strict containment of combustion products. This will facilitate the first examination of fire behavior on a scale that is relevant to spacecraft fire safety and will provide unique data for fire model validation.

  15. Low-level radioactive waste performance assessments: Source term modeling

    International Nuclear Information System (INIS)

    Icenhour, A.S.; Godbee, H.W.; Miller, L.F.

    1995-01-01

    Low-level radioactive wastes (LLW) generated by government and commercial operations need to be isolated from the environment for at least 300 to 500 yr. Most existing sites for the storage or disposal of LLW employ the shallow-land burial approach. However, the U.S. Department of Energy currently emphasizes the use of engineered systems (e.g., packaging, concrete and metal barriers, and water collection systems). Future commercial LLW disposal sites may include such systems to mitigate radionuclide transport through the biosphere. Performance assessments must be conducted for LLW disposal facilities. These studies include comprehensive evaluations of radionuclide migration from the waste package, through the vadose zone, and within the water table. Atmospheric transport mechanisms are also studied. Figure 1 illustrates the performance assessment process. Estimates of the release of radionuclides from the waste packages (i.e., source terms) are used for subsequent hydrogeologic calculations required by a performance assessment. Computer models are typically used to describe the complex interactions of water with LLW and to determine the transport of radionuclides. Several commonly used computer programs for evaluating source terms include GWSCREEN, BLT (Breach-Leach-Transport), DUST (Disposal Unit Source Term), BARRIER (Ref. 5), as well as SOURCE1 and SOURCE2 (which are used in this study). The SOURCE1 and SOURCE2 codes were prepared by Rogers and Associates Engineering Corporation for the Oak Ridge National Laboratory (ORNL). SOURCE1 is designed for tumulus-type facilities, and SOURCE2 is tailored for silo, well-in-silo, and trench-type disposal facilities. This paper focuses on the source term for ORNL disposal facilities, and it describes improved computational methods for determining radionuclide transport from waste packages.
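
    A minimal sketch of a waste-package source term, reduced to first-order leaching plus radioactive decay, is shown below. The rate constants are invented; real codes such as SOURCE1/SOURCE2 or DUST treat container breaching, advection and diffusion explicitly.

```python
# Hedged sketch of a minimal source term: first-order leaching combined with decay.
import numpy as np

def release_rate(t_years, A0_bq, half_life_y, leach_rate_per_y):
    """Activity release rate [Bq/yr] from inventory A0 at time t."""
    lam = np.log(2.0) / half_life_y            # decay constant [1/yr]
    k = leach_rate_per_y                       # fractional leach rate [1/yr]
    inventory = A0_bq * np.exp(-(lam + k) * t_years)   # decays and leaches simultaneously
    return k * inventory

t = np.linspace(0.0, 500.0, 6)                 # performance period of interest [yr]
print(release_rate(t, A0_bq=1.0e12, half_life_y=30.0, leach_rate_per_y=1.0e-3))
```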

  16. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK); Bingham, Philip R [ORNL

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a Magnified Coded Source Imaging (CSI) system. A coded-mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded-mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps out at around 50 µm. To overcome this challenge, the coded-mask and object are magnified by making the distance from the coded-mask to the object much smaller than the distance from object to detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of the modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.
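
    The sketch below illustrates model-based least-squares reconstruction on a 1D toy coded-aperture problem: a known mask defines the forward operator and the object is recovered by a regularized linear solve. The mask, sizes and noise are invented; the real system additionally models magnification, the source flux distribution and detector blur.

```python
# Hedged 1D toy of model-based least-squares reconstruction for coded-aperture imaging.
import numpy as np

rng = np.random.default_rng(3)
n = 64
mask = rng.integers(0, 2, n)                                  # pseudo-random open/closed mask
A = np.stack([np.roll(mask, s) for s in range(n)], axis=0).astype(float)  # shift-based forward model

x_true = np.zeros(n)
x_true[20:24] = 1.0                                           # simple object
y = A @ x_true + rng.normal(0, 0.05, n)                       # noisy detector data

# Least-squares reconstruction, lightly regularized for stability
x_rec = np.linalg.solve(A.T @ A + 1e-3 * np.eye(n), A.T @ y)
print("peak recovered near index:", int(np.argmax(x_rec)))
```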

  17. REQUIREMENTS FOR IMAGE QUALITY OF EMERGENCY SPACECRAFTS

    Directory of Open Access Journals (Sweden)

    A. I. Altukhov

    2015-05-01

    The paper deals with a method for forming image-quality requirements for images of emergency spacecraft. The images are obtained by remote sensing of near-Earth orbital assets in the visible range of electromagnetic radiation. The method jointly takes into account the space-survey conditions, the characteristics of the surveillance equipment, the main design features of the observed spacecraft and the orbital inspection tasks. Method. The quality score is the predicted linear resolution of the image, which makes it possible to form a complete view of the pictorial properties of the space image obtained by the electro-optical system of the observing satellite. Requirements for the numerical value of this indicator are formulated from the properties of the remote sensing system forming images under outer-space conditions and from the properties of the observed emergency spacecraft: dimensions, platform construction and on-board equipment placement. To implement the method, the authors developed a predictive model of linear-resolution requirements for images of emergency spacecraft, making it possible to select imaging intervals and obtain the satellite images required for reliable interpretation. Main results. To verify the proposed model, we calculated the numerical values of the image linear resolution that ensure success in determining gross structural damage to the spacecraft and identifying changes in their spatial orientation. As input data, dimensions and geometric primitives corresponding to the shapes of the inspected spacecraft "Resurs-P", "Canopus-B" and "Electro-L" were used. Numerical values of the image linear resolution have been obtained that ensure successful determination of gross structural damage to the spacecraft.
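
    A back-of-the-envelope relation between imaging geometry and linear resolution is sketched below using the elementary ground-sample-distance formula. The paper's predictive model is more detailed, and every number and criterion here is hypothetical.

```python
# Hedged sketch: linear resolution on the target from range, pixel pitch and focal
# length (diffraction and smear ignored); the few-pixel criterion is an assumption.
def linear_resolution_m(range_m, pixel_pitch_m, focal_length_m):
    """Size on the target subtended by one detector pixel [m]."""
    return range_m * pixel_pitch_m / focal_length_m

# Example: inspecting a spacecraft from 50 km with a 10 um pixel and 2 m focal length
res = linear_resolution_m(range_m=50e3, pixel_pitch_m=10e-6, focal_length_m=2.0)
print(f"predicted linear resolution: {res:.3f} m per pixel")

# Requirement check: resolving gross structural damage of ~0.5 m extent, assuming
# a Johnson-type criterion of roughly three pixels across the feature
required = 0.5 / 3
print("meets requirement:", res <= required)
```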

  18. Topographic filtering simulation model for sediment source apportionment

    Science.gov (United States)

    Cho, Se Jong; Wilcock, Peter; Hobbs, Benjamin

    2018-05-01

    We propose a Topographic Filtering simulation model (Topofilter) that can be used to identify those locations that are likely to contribute most of the sediment load delivered from a watershed. The reduced complexity model links spatially distributed estimates of annual soil erosion, high-resolution topography, and observed sediment loading to determine the distribution of sediment delivery ratio across a watershed. The model uses two simple two-parameter topographic transfer functions based on the distance and change in elevation from upland sources to the nearest stream channel and then down the stream network. The approach does not attempt to find a single best-calibrated solution of sediment delivery, but uses a model conditioning approach to develop a large number of possible solutions. For each model run, locations that contribute to 90% of the sediment loading are identified and those locations that appear in this set in most of the 10,000 model runs are identified as the sources that are most likely to contribute to most of the sediment delivered to the watershed outlet. Because the underlying model is quite simple and strongly anchored by reliable information on soil erosion, topography, and sediment load, we believe that the ensemble of simulation outputs provides a useful basis for identifying the dominant sediment sources in the watershed.
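
    The conditioning idea can be sketched compactly: draw transfer-function parameters at random, keep draws that reproduce the observed load, and flag cells that repeatedly fall in the top-90% contribution set. The exponential, distance-only transfer functions and all numbers below are simplifying assumptions, not the calibrated Topofilter formulation.

```python
# Hedged sketch of ensemble conditioning for sediment source apportionment.
import numpy as np

rng = np.random.default_rng(42)
n_cells, n_runs = 500, 2000
erosion = rng.lognormal(0.0, 1.0, n_cells)        # annual soil erosion per cell [t/yr]
dist_hill = rng.uniform(10, 500, n_cells)         # distance to nearest channel [m]
dist_chan = rng.uniform(100, 20000, n_cells)      # downstream channel distance [m]

# Synthetic "observed" outlet load generated with hidden true parameters
k1_true, k2_true = 2e-3, 2e-4
observed_load = np.sum(erosion * np.exp(-k1_true * dist_hill) * np.exp(-k2_true * dist_chan))

counts = np.zeros(n_cells)
kept = 0
for _ in range(n_runs):
    k1, k2 = 10 ** rng.uniform(-4, -2, 2)         # log-uniform parameter draws
    load = erosion * np.exp(-k1 * dist_hill) * np.exp(-k2 * dist_chan)
    if abs(load.sum() - observed_load) / observed_load > 0.15:
        continue                                  # keep only runs matching the observed load
    kept += 1
    order = np.argsort(load)[::-1]
    top = order[np.cumsum(load[order]) <= 0.9 * load.sum()]
    counts[top] += 1

likely = np.where(counts > 0.8 * max(kept, 1))[0]
print(f"{kept} conditioned runs; {likely.size} cells flagged as dominant sources")
```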

  19. Selection of models to calculate the LLW source term

    International Nuclear Information System (INIS)

    Sullivan, T.M.

    1991-10-01

    Performance assessment of a LLW disposal facility begins with an estimation of the rate at which radionuclides migrate out of the facility (i.e., the source term). The focus of this work is to develop a methodology for calculating the source term. In general, the source term is influenced by the radionuclide inventory, the wasteforms and containers used to dispose of the inventory, and the physical processes that lead to release from the facility (fluid flow, container degradation, wasteform leaching, and radionuclide transport). In turn, many of these physical processes are influenced by the design of the disposal facility (e.g., infiltration of water). The complexity of the problem and the absence of appropriate data prevent development of an entirely mechanistic representation of radionuclide release from a disposal facility. Typically, a number of assumptions, based on knowledge of the disposal system, are used to simplify the problem. This document provides a brief overview of disposal practices and reviews existing source term models as background for selecting appropriate models for estimating the source term. The selection rationale and the mathematical details of the models are presented. Finally, guidance is presented for combining the inventory data with appropriate mechanisms describing release from the disposal facility. 44 refs., 6 figs., 1 tab

  20. Open source Modeling and optimization tools for Planning

    Energy Technology Data Exchange (ETDEWEB)

    Peles, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-02-10

    The existing tools and software used for planning and analysis in California are either expensive, difficult to use, or not generally accessible to a large number of participants. These limitations restrict the availability of participants for larger scale energy and grid studies in the state. The proposed initiative would build upon federal and state investments in open source software, and create and improve open source tools for use in the state planning and analysis activities. Computational analysis and simulation frameworks in development at national labs and universities can be brought forward to complement existing tools. An open source platform would provide a path for novel techniques and strategies to be brought into the larger community and reviewed by a broad set of stakeholders.

  1. The Growth of open source: A look at how companies are utilizing open source software in their business models

    OpenAIRE

    Feare, David

    2009-01-01

    This paper examines how open source software is being incorporated into the business models of companies in the software industry. The goal is to answer the question of whether the open source model can help sustain economic growth. While some companies are able to maintain a "pure" open source approach with their business model, the reality is that most companies are relying on proprietary add-on value in order to generate revenue because open source itself is simply not big business. Ultima...

  2. Mitigating Spreadsheet Model Risk with Python Open Source Infrastructure

    OpenAIRE

    Beavers, Oliver

    2018-01-01

    Across an aggregation of EuSpRIG presentation papers, two maxims hold true: spreadsheets models are akin to software, yet spreadsheet developers are not software engineers. As such, the lack of traditional software engineering tools and protocols invites a higher rate of error in the end result. This paper lays ground work for spreadsheet modelling professionals to develop reproducible audit tools using freely available, open source packages built with the Python programming language, enablin...

  3. OSeMOSYS: The Open Source Energy Modeling System

    International Nuclear Information System (INIS)

    Howells, Mark; Rogner, Holger; Strachan, Neil; Heaps, Charles; Huntington, Hillard; Kypreos, Socrates; Hughes, Alison; Silveira, Semida; DeCarolis, Joe; Bazillian, Morgan; Roehrl, Alexander

    2011-01-01

    This paper discusses the design and development of the Open Source Energy Modeling System (OSeMOSYS). It describes the model's formulation in terms of a 'plain English' description, algebraic formulation, implementation (in terms of its full source code), as well as a detailed description of the model inputs, parameters, and outputs. A key feature of the OSeMOSYS implementation is that it is contained in less than five pages of documented, easily accessible code. Other existing energy system models that do not have this emphasis on compactness and openness make the barrier to entry for new users much higher, as well as making the addition of innovative new functionality very difficult. The paper begins by describing the rationale for the development of OSeMOSYS and its structure. The current preliminary implementation of the model is then demonstrated for a discrete example. Next, we explain how new development efforts will build on the existing OSeMOSYS codebase. The paper closes with thoughts regarding the organization of the OSeMOSYS community, associated capacity development efforts, and linkages to other open source efforts including adding functionality to the LEAP model. - Highlights: → OSeMOSYS is a new free and open source energy systems model. → This model is written in a simple, open, flexible and transparent manner to support teaching. → OSeMOSYS is based on free software and optimizes using a free solver. → This model replicates the results of many popular tools, such as MARKAL. → A link between OSeMOSYS and LEAP has been developed.
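
    The flavour of the underlying formulation can be conveyed with a very small linear program: choose capacities and generation of two technologies to meet demand at least cost. This single-period toy, with invented costs and capacity expressed in energy-equivalent units, only illustrates the modelling style and is not OSeMOSYS itself.

```python
# Hedged sketch of a one-period capacity-expansion LP; all coefficients are invented.
from scipy.optimize import linprog

demand = 100.0                       # energy demand
# decision variables: x = [cap_A, cap_B, gen_A, gen_B]
capex = [50.0, 120.0]                # annualized capital cost per unit capacity
opex = [30.0, 5.0]                   # operating cost per unit generation
avail = [0.9, 0.3]                   # availability / capacity factors

c = capex + opex                     # objective: total cost
A_ub = [
    [-avail[0], 0.0, 1.0, 0.0],      # gen_A <= avail_A * cap_A
    [0.0, -avail[1], 0.0, 1.0],      # gen_B <= avail_B * cap_B
    [0.0, 0.0, -1.0, -1.0],          # gen_A + gen_B >= demand
]
b_ub = [0.0, 0.0, -demand]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print("capacities:", res.x[:2], "generation:", res.x[2:], "cost:", res.fun)
```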

  4. MODEL OF A PERSON WALKING AS A STRUCTURE BORNE SOUND SOURCE

    DEFF Research Database (Denmark)

    Lievens, Matthias; Brunskog, Jonas

    2007-01-01

    has to be considered and the contact history must be integrated in the model. This is complicated by the fact that nonlinearities occur at different stages in the system, either on the source or the receiver side. Not only lightweight structures but also soft floor coverings would benefit from an accurate...

  5. Modeling Noise Sources and Propagation in External Gear Pumps

    Directory of Open Access Journals (Sweden)

    Sangbeom Woo

    2017-07-01

    As a key component in power transfer, positive displacement machines often represent the major source of noise in hydraulic systems. Thus, investigation into the sources of noise and discovering strategies to reduce noise is a key part of improving the performance of current hydraulic systems, as well as applying fluid power systems to a wider range of applications. The present work aims at developing modeling techniques on the topic of noise generation caused by external gear pumps for high pressure applications, which can be useful and effective in investigating the interaction between noise sources and radiated noise and establishing the design guide for a quiet pump. In particular, this study classifies the internal noise sources into four types of effective load functions and, in the proposed model, these load functions are applied to the corresponding areas of the pump case in a realistic way. Vibration and sound radiation can then be predicted using a combined finite element and boundary element vibro-acoustic model. The radiated sound power and sound pressure for the different operating conditions are presented as the main outcomes of the acoustic model. The noise prediction was validated through comparison with the experimentally measured sound power levels.

  6. Modeling of an autonomous microgrid for renewable energy sources integration

    DEFF Research Database (Denmark)

    Serban, I.; Teodorescu, Remus; Guerrero, Josep M.

    2009-01-01

    The frequency stability analysis in an autonomous microgrid (MG) with renewable energy sources (RES) is a continuously studied issue. This paper presents an original method for modeling an autonomous MG with a battery energy storage system (BESS) and a wind power plant (WPP), with the purpose...

  7. Robust Spacecraft Component Detection in Point Clouds

    Directory of Open Access Journals (Sweden)

    Quanmao Wei

    2018-03-01

    Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives like plane, cuboid and cylinder. Based on this prior, we propose a robust automatic detection scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected in the iteration of the energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by Hough transform and further described as bounded patches with their minimum bounding rectangles. Finally, the cuboids are detected with pair-wise geometry relations from the detected patches. After successive detection of cylinders, planar patches and cuboids, a mid-level geometry representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized by computer-aided design (CAD) models and those recovered by image-based reconstruction, respectively. Experimental results illustrate that the proposed scheme can detect the basic geometric components effectively and has fine robustness against noise and point distribution density.
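
    One ingredient of such a pipeline, fitting a plane primitive to a point cloud, can be sketched with a simple RANSAC loop. The paper itself uses energy-based model fitting for cylinders and a Hough transform for planes, so the code below is only a simplified stand-in with synthetic data.

```python
# Hedged sketch: RANSAC-style plane detection in a synthetic point cloud.
import numpy as np

def ransac_plane(points, n_iter=500, tol=0.02, seed=0):
    """Return (normal, d, inlier_mask) of the best plane n.x + d = 0 found."""
    rng = np.random.default_rng(seed)
    best_inliers, best_model = None, None
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        if np.linalg.norm(n) < 1e-9:
            continue                              # degenerate (collinear) sample
        n = n / np.linalg.norm(n)
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model[0], best_model[1], best_inliers

# Synthetic "solar panel": a noisy planar patch plus random clutter
rng = np.random.default_rng(1)
panel = np.column_stack([rng.uniform(-1, 1, 800), rng.uniform(-2, 2, 800),
                         rng.normal(0, 0.005, 800)])
clutter = rng.uniform(-2, 2, (200, 3))
normal, d, mask = ransac_plane(np.vstack([panel, clutter]))
print("plane normal:", np.round(normal, 3), "inliers:", int(mask.sum()))
```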

  8. Robust Spacecraft Component Detection in Point Clouds.

    Science.gov (United States)

    Wei, Quanmao; Jiang, Zhiguo; Zhang, Haopeng

    2018-03-21

    Automatic component detection of spacecraft can assist in on-orbit operation and space situational awareness. Spacecraft are generally composed of solar panels and cuboidal or cylindrical modules. These components can be simply represented by geometric primitives like plane, cuboid and cylinder. Based on this prior, we propose a robust automatic detection scheme to automatically detect such basic components of spacecraft in three-dimensional (3D) point clouds. In the proposed scheme, cylinders are first detected in the iteration of the energy-based geometric model fitting and cylinder parameter estimation. Then, planes are detected by Hough transform and further described as bounded patches with their minimum bounding rectangles. Finally, the cuboids are detected with pair-wise geometry relations from the detected patches. After successive detection of cylinders, planar patches and cuboids, a mid-level geometry representation of the spacecraft can be delivered. We tested the proposed component detection scheme on spacecraft 3D point clouds synthesized by computer-aided design (CAD) models and those recovered by image-based reconstruction, respectively. Experimental results illustrate that the proposed scheme can detect the basic geometric components effectively and has fine robustness against noise and point distribution density.

  9. Modeling Secondary Organic Aerosol Formation From Emissions of Combustion Sources

    Science.gov (United States)

    Jathar, Shantanu Hemant

    Atmospheric aerosols exert a large influence on the Earth's climate and cause adverse public health effects, reduced visibility and material degradation. Secondary organic aerosol (SOA), defined as the aerosol mass arising from the oxidation products of gas-phase organic species, accounts for a significant fraction of the submicron atmospheric aerosol mass. Yet, there are large uncertainties surrounding the sources, atmospheric evolution and properties of SOA. This thesis combines laboratory experiments, extensive data analysis and global modeling to investigate the contribution of semi-volatile and intermediate volatility organic compounds (SVOC and IVOC) from combustion sources to SOA formation. The goals are to quantify the contribution of these emissions to ambient PM and to evaluate and improve models to simulate its formation. To create a database for model development and evaluation, a series of smog chamber experiments were conducted on evaporated fuel, which served as surrogates for real-world combustion emissions. Diesel formed the most SOA followed by conventional jet fuel / jet fuel derived from natural gas, gasoline and jet fuel derived from coal. The variability in SOA formation from actual combustion emissions can be partially explained by the composition of the fuel. Several models were developed and tested along with existing models using SOA data from smog chamber experiments conducted using evaporated fuel (this work: gasoline, Fischer-Tropsch fuels, jet fuel, diesels) and published data on dilute combustion emissions (aircraft, on- and off-road gasoline, on- and off-road diesel, wood burning, biomass burning). For all of the SOA data, existing models under-predicted SOA formation if SVOC/IVOC were not included. For the evaporated fuel experiments, when SVOC/IVOC were included, predictions using the existing SOA model were brought to within a factor of two of measurements with minor adjustments to model parameterizations. Further, a volatility

  10. About the Big Graphs Arising when Forming the Diagnostic Models in a Reconfigurable Computing Field of Functional Monitoring and Diagnostics System of the Spacecraft Onboard Control Complex

    Directory of Open Access Journals (Sweden)

    L. V. Savkin

    2015-01-01

    One of the problems in the implementation of multipurpose complete systems based on reconfigurable computing fields (RCF) is the optimum redistribution of logical-arithmetic resources across a growing scope of functional tasks. Irrespective of complexity, all of these tasks are transformed into a directed graph (orgraph) whose functional and topological structure is imposed on the RCF, which is based, as a rule, on a field programmable gate array (FPGA). Due to the limited hardware configurations and functions realized by means of the switched logical blocks (SLB), the above-mentioned problem becomes even more critical when, within a strictly allocated RCF fragment, a task must be realized that is more complex than the one solved during the previous computing step. In such cases it is possible to speak about graphs of big dimensions with respect to the allocated RCF fragment. The article considers this problem through the development of diagnostic algorithms that implement diagnostics and control of a spacecraft onboard control complex using the RCF. It gives examples of big graphs arising with respect to the allocated RCF fragment when forming the hardware levels of a diagnostic model, which, in this case, is any hardware-based diagnostic algorithm in the RCF. The article reviews examples of big graphs arising when complicated diagnostic models are formed due to the drastic difference in the formation of hardware levels on closely located RCF fragments. It also pays attention to big graphs emerging when multichannel diagnostic models are formed. Three main ways to solve the problem of big graphs with respect to the allocated RCF fragment are given: splitting the graph into fragments, using pop-up windows that relocate and memorize intermediate values of functions of the higher hardware levels of the diagnostic model, and deep adaptive updating of the diagnostic model. It is shown that the last of the three ways is the most efficient.

  11. Source modelling in seismic risk analysis for nuclear power plants

    International Nuclear Information System (INIS)

    Yucemen, M.S.

    1978-12-01

    The proposed probabilistic procedure provides a consistent method for the modelling, analysis and updating of uncertainties that are involved in the seismic risk analysis for nuclear power plants. The potential earthquake activity zones are idealized as point, line or area sources. For these seismic source types, expressions to evaluate their contribution to seismic risk are derived, considering all the possible site-source configurations. The seismic risk at a site is found to depend not only on the inherent randomness of the earthquake occurrences with respect to magnitude, time and space, but also on the uncertainties associated with the predicted values of the seismic and geometric parameters, as well as the uncertainty in the attenuation model. The uncertainty due to the attenuation equation is incorporated into the analysis through the use of random correction factors. The influence of the uncertainty resulting from the insufficient information on the seismic parameters and source geometry is introduced into the analysis by computing a mean risk curve averaged over the various alternative assumptions on the parameters and source geometry. Seismic risk analysis is carried out for the city of Denizli, which is located in the seismically most active zone of Turkey. The second analysis is for Akkuyu.
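
    The basic hazard integral that such source models feed is sketched below for a single point source: a Gutenberg-Richter magnitude distribution is combined with a lognormal attenuation relation to give an annual rate of exceedance. The coefficients and the toy attenuation form are illustrative assumptions, not those of the study.

```python
# Hedged sketch of a single-source seismic hazard calculation.
import numpy as np
from scipy.stats import norm

def rate_of_exceedance(pga_g, dist_km, nu=0.2, b=1.0, m_min=4.0, m_max=7.5, sigma=0.6):
    """Annual rate that peak ground acceleration exceeds pga_g at distance dist_km."""
    m = np.linspace(m_min, m_max, 200)
    dm = m[1] - m[0]
    beta = b * np.log(10.0)
    # truncated exponential (Gutenberg-Richter) magnitude density
    pdf = beta * np.exp(-beta * (m - m_min)) / (1.0 - np.exp(-beta * (m_max - m_min)))
    # toy attenuation: ln PGA = -3.5 + 0.9 M - 1.1 ln(R + 10), lognormal scatter sigma
    ln_med = -3.5 + 0.9 * m - 1.1 * np.log(dist_km + 10.0)
    p_exceed = 1.0 - norm.cdf((np.log(pga_g) - ln_med) / sigma)
    return nu * np.sum(p_exceed * pdf) * dm       # nu = annual rate of events with M >= m_min

print("annual exceedance rate of 0.2 g at 30 km:", rate_of_exceedance(0.2, 30.0))
```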

  12. Developing seismogenic source models based on geologic fault data

    Science.gov (United States)

    Haller, Kathleen M.; Basili, Roberto

    2011-01-01

    Calculating seismic hazard usually requires input that includes seismicity associated with known faults, historical earthquake catalogs, geodesy, and models of ground shaking. This paper will address the input generally derived from geologic studies that augment the short historical catalog to predict ground shaking at time scales of tens, hundreds, or thousands of years (e.g., SSHAC 1997). A seismogenic source model, terminology we adopt here for a fault source model, includes explicit three-dimensional faults deemed capable of generating ground motions of engineering significance within a specified time frame of interest. In tectonically active regions of the world, such as near plate boundaries, multiple seismic cycles span a few hundred to a few thousand years. In contrast, in less active regions hundreds of kilometers from the nearest plate boundary, seismic cycles generally are thousands to tens of thousands of years long. Therefore, one should include sources having both longer recurrence intervals and possibly older times of most recent rupture in less active regions of the world rather than restricting the model to include only Holocene faults (i.e., those with evidence of large-magnitude earthquakes in the past 11,500 years) as is the practice in tectonically active regions with high deformation rates. During the past 15 years, our institutions independently developed databases to characterize seismogenic sources based on geologic data at a national scale. Our goal here is to compare the content of these two publicly available seismogenic source models compiled for the primary purpose of supporting seismic hazard calculations by the Istituto Nazionale di Geofisica e Vulcanologia (INGV) and the U.S. Geological Survey (USGS); hereinafter we refer to the two seismogenic source models as INGV and USGS, respectively. This comparison is timely because new initiatives are emerging to characterize seismogenic sources at the continental scale (e.g., SHARE in the

  13. Spacecraft exploration of asteroids

    International Nuclear Information System (INIS)

    Veverka, J.; Langevin, Y.; Farquhar, R.; Fulchignoni, M.

    1989-01-01

    After two decades of spacecraft exploration, we still await the first direct investigation of an asteroid. This paper describes how a growing international interest in the solar system's more primitive bodies should remedy this. Plans are under way in Europe for a dedicated asteroid mission (Vesta) which will include multiple flybys with in situ penetrator studies. Possible targets include 4 Vesta, 8 Flora and 46 Hestia; launch is scheduled for 1994 or 1996. In the United States, NASA plans include flybys of asteroids en route to outer solar system targets.

  14. Spacecraft rendezvous and docking

    DEFF Research Database (Denmark)

    Jørgensen, John Leif

    1999-01-01

    The phenomena and problems encountered when a rendezvous manoeuvre, and possible docking, of two spacecraft has to be performed have been the topic of numerous studies, and details of a variety of scenarios have been analysed. So far, all solutions that have been brought into realization have been based entirely on direct human supervision and control. This paper describes a vision-based system and methodology that autonomously generates accurate guidance information that may assist a human operator in performing the tasks associated with both the rendezvous and docking navigation...

  15. Race of source effects in the elaboration likelihood model.

    Science.gov (United States)

    White, P H; Harkins, S G

    1994-11-01

    In a series of experiments, we investigated the effect of race of source on persuasive communications in the Elaboration Likelihood Model (R.E. Petty & J.T. Cacioppo, 1981, 1986). In Experiment 1, we found no evidence that White participants responded to a Black source as a simple negative cue. Experiment 2 suggested the possibility that exposure to a Black source led to low-involvement message processing. In Experiments 3 and 4, a distraction paradigm was used to test this possibility, and it was found that participants under low involvement were highly motivated to process a message presented by a Black source. In Experiment 5, we found that attitudes toward the source's ethnic group, rather than violations of expectancies, accounted for this processing effect. Taken together, the results of these experiments are consistent with S.L. Gaertner and J.F. Dovidio's (1986) theory of aversive racism, which suggests that Whites, because of a combination of egalitarian values and underlying negative racial attitudes, are very concerned about not appearing unfavorable toward Blacks, leading them to be highly motivated to process messages presented by a source from this group.

  16. How Many Separable Sources? Model Selection In Independent Components Analysis

    Science.gov (United States)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    Unlike mixtures consisting solely of non-Gaussian sources, mixtures including two or more Gaussian components cannot be separated using standard independent components analysis methods that are based on higher order statistics and independent observations. The mixed Independent Components Analysis/Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though computationally intensive alternative for model selection. Application of the algorithm is illustrated using Fisher's iris data set and Howells' craniometric data set. Mixed ICA/PCA is of potential interest in any field of scientific investigation where the authenticity of blindly separated non-Gaussian sources might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian. PMID:25811988

  17. Absorptivity Measurements and Heat Source Modeling to Simulate Laser Cladding

    Science.gov (United States)

    Wirth, Florian; Eisenbarth, Daniel; Wegener, Konrad

    The laser cladding process is gaining importance, as it allows not only the application of surface coatings but also the additive manufacturing of three-dimensional parts. In both cases, process simulation can contribute to process optimization. Heat source modeling is one of the main issues for an accurate model and simulation of the laser cladding process. While the laser beam intensity distribution is readily known, the other two main effects on the process heat input are non-trivial, namely the absorptivity of the applied materials and the powder attenuation. Therefore, calorimetry measurements were carried out. The measurement method and the measurement results for laser cladding of Stellite 6 on structural steel S 235 and for the processing of Inconel 625 are presented, both using a CO2 laser and a high power diode laser (HPDL). Additionally, a heat source model is deduced.
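
    A minimal sketch of how absorptivity follows from a calorimetric measurement is given below: the absorbed fraction is the specimen's heat uptake divided by the incident laser energy. All values are invented, and the actual procedure in the paper may correct for heat losses and powder attenuation.

```python
# Hedged sketch: absorptivity estimated from a simple calorimetric energy balance.
def absorptivity(mass_kg, cp_J_per_kgK, delta_T_K, laser_power_W, exposure_s):
    """Absorbed energy / incident energy (losses neglected)."""
    absorbed = mass_kg * cp_J_per_kgK * delta_T_K      # calorimetric heat uptake [J]
    incident = laser_power_W * exposure_s               # delivered laser energy [J]
    return absorbed / incident

# Example: a 0.2 kg steel specimen heated by 14 K under 2 kW for 5 s (invented values)
print(absorptivity(mass_kg=0.2, cp_J_per_kgK=470.0, delta_T_K=14.0,
                   laser_power_W=2000.0, exposure_s=5.0))
```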

  18. Diffusion theory model for optimization calculations of cold neutron sources

    International Nuclear Information System (INIS)

    Azmy, Y.Y.

    1987-01-01

    Cold neutron sources are becoming increasingly important and common experimental facilities made available at many research reactors around the world due to the high utility of cold neutrons in scattering experiments. The authors describe a simple two-group diffusion model of an infinite slab LD2 cold source. The simplicity of the model permits an analytical solution to be obtained from which one can deduce the reason for the optimum thickness based solely on diffusion-type phenomena. Also, a second more sophisticated model is described and the results compared to a deterministic transport calculation. The good (particularly qualitative) agreement between the results suggests that diffusion theory methods can be used in parametric and optimization studies to avoid the generally more expensive transport calculations.
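
    A toy one-dimensional two-group calculation is sketched below: the cold-group flux in a slab is driven by down-scattering from a prescribed thermal flux and is obtained from a finite-difference diffusion solve. The cross sections, zero-flux boundary conditions and fixed thermal flux are assumptions for illustration; the paper's analytical treatment and optimization argument are not reproduced here.

```python
# Hedged sketch: cold-group diffusion flux in a 1D slab with a fixed slowing-down source.
import numpy as np

def cold_flux(thickness_cm, n=200, D2=1.2, sig_a2=0.01, sig_12=0.02, phi1=1.0):
    """Solve -D2 * phi2'' + sig_a2 * phi2 = sig_12 * phi1 with zero-flux boundaries."""
    h = thickness_cm / (n + 1)
    main = 2 * D2 / h**2 + sig_a2
    off = -D2 / h**2
    A = (np.diag(np.full(n, main))
         + np.diag(np.full(n - 1, off), 1)
         + np.diag(np.full(n - 1, off), -1))
    return np.linalg.solve(A, np.full(n, sig_12 * phi1))

# Scan a few slab thicknesses and report the peak cold-group flux
for t in (5.0, 10.0, 20.0, 40.0):
    print(f"thickness {t:5.1f} cm -> peak cold flux {cold_flux(t).max():.3f}")
```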

  19. Residential radon in Finland: sources, variation, modelling and dose comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Arvela, H

    1995-09-01

    The study deals with sources of indoor radon in Finland, seasonal variations in radon concentration, the effect of house construction and ventilation and also with the radiation dose from indoor radon and terrestrial gamma radiation. The results are based on radon measurements in approximately 4000 dwellings and on air exchange measurements in 250 dwellings as well as on model calculations. The results confirm that convective soil air flow is by far the most important source of indoor radon in Finnish low-rise residential housing. (97 refs., 61 figs., 30 tabs.).

  20. Residential radon in Finland: sources, variation, modelling and dose comparisons

    International Nuclear Information System (INIS)

    Arvela, H.

    1995-09-01

    The study deals with sources of indoor radon in Finland, seasonal variations in radon concentration, the effect of house construction and ventilation and also with the radiation dose from indoor radon and terrestrial gamma radiation. The results are based on radon measurements in approximately 4000 dwellings and on air exchange measurements in 250 dwellings as well as on model calculations. The results confirm that convective soil air flow is by far the most important source of indoor radon in Finnish low-rise residential housing. (97 refs., 61 figs., 30 tabs.)

  1. Dynamic modeling of the advanced neutron source reactor

    International Nuclear Information System (INIS)

    March-Leuba, J.; Ibn-Khayat, M.

    1990-01-01

    The purpose of this paper is to provide a summary description and some applications of a computer model that has been developed to simulate the dynamic behavior of the advanced neutron source (ANS) reactor. The ANS dynamic model is coded in the advanced continuous simulation language (ACSL), and it represents the reactor core, vessel, primary cooling system, and secondary cooling systems. The use of a simple dynamic model in the early stages of the reactor design has proven very valuable not only in the development of the control and plant protection system but also in the design of components such as pumps and heat exchangers that are usually sized based on steady-state calculations.

  2. Application of Space Environmental Observations to Spacecraft Pre-Launch Engineering and Spacecraft Operations

    Science.gov (United States)

    Barth, Janet L.; Xapsos, Michael

    2008-01-01

    This presentation focuses on the effects of the space environment on spacecraft systems and applying this knowledge to spacecraft pre-launch engineering and operations. Particle radiation, neutral gas particles, ultraviolet and x-rays, as well as micrometeoroids and orbital debris in the space environment have various effects on spacecraft systems, including degradation of microelectronic and optical components, physical damage, orbital decay, biasing of instrument readings, and system shutdowns. Space climate and weather must be considered during the mission life cycle (mission concept, mission planning, systems design, and launch and operations) to minimize and manage risk to both the spacecraft and its systems. A space environment model for use in the mission life cycle is presented.

  3. Automating Trend Analysis for Spacecraft Constellations

    Science.gov (United States)

    Davis, George; Cooter, Miranda; Updike, Clark; Carey, Everett; Mackey, Jennifer; Rykowski, Timothy; Powers, Edward I. (Technical Monitor)

    2001-01-01

    missions such as DRACO with the intent that mission operations costs be significantly reduced. The goal of the Constellation Spacecraft Trend Analysis Toolkit (CSTAT) project is to serve as the pathfinder for a fully automated trending system to support spacecraft constellations. The development approach to be taken is evolutionary. In the first year of the project, the intent is to significantly advance the state of the art in current trending systems through improved functionality and increased automation. In the second year, the intent is to add an expert system shell, likely through the adaptation of an existing commercial-off-the-shelf (COTS) or government-off-the-shelf (GOTS) tool to implement some level of the trending intelligence that humans currently provide in manual operations. In the third year, the intent is to infuse the resulting technology into a near-term constellation or formation-flying mission to test it and gain experience in automated trending. The lessons learned from the real missions operations experience will then be used to improve the system, and to ultimately incorporate it into a fully autonomous, closed-loop mission operations system that is truly capable of supporting large constellations. In this paper, the process of automating trend analysis for spacecraft constellations will be addressed. First, the results of a survey on automation in spacecraft mission operations in general, and in trending systems in particular will be presented to provide an overview of the current state of the art. Next, a rule-based model for implementing intelligent spacecraft subsystem trending will be then presented, followed by a survey of existing COTS/GOTS tools that could be adapted for implementing such a model. The baseline design and architecture of the CSTAT system will be presented. Finally, some results obtained from initial software tests and demonstrations will be presented.

  4. Numerical model of electron cyclotron resonance ion source

    Directory of Open Access Journals (Sweden)

    V. Mironov

    2015-12-01

    Full Text Available Important features of the electron cyclotron resonance ion source (ECRIS) operation are accurately reproduced with a numerical code. The code uses the particle-in-cell technique to model the dynamics of ions in ECRIS plasma. It is shown that a gas dynamical ion confinement mechanism is sufficient to provide the ion production rates in ECRIS close to the experimentally observed values. Extracted ion currents are calculated and compared to the experiment for a few sources. Changes in the simulated extracted ion currents are obtained with varying the gas flow into the source chamber and the microwave power. Empirical scaling laws for ECRIS design are studied and the underlying physical effects are discussed.

  5. Mathematical modelling of electricity market with renewable energy sources

    International Nuclear Information System (INIS)

    Marchenko, O.V.

    2007-01-01

    The paper addresses the electricity market with conventional energy sources on fossil fuel and non-conventional renewable energy sources (RESs) with stochastic operating conditions. A mathematical model of long-run (accounting for development of generation capacities) equilibrium in the market is constructed. The problem of determining optimal parameters providing the maximum social criterion of efficiency is also formulated. The calculations performed have shown that an adequate choice of price cap, environmental tax, subsidies to RESs and consumption tax makes it possible to take into account external effects (environmental damage) and to create incentives for investors to construct conventional and renewable energy sources in an optimal (from the society's viewpoint) mix. (author)

  6. A FRAMEWORK FOR AN OPEN SOURCE GEOSPATIAL CERTIFICATION MODEL

    Directory of Open Access Journals (Sweden)

    T. U. R. Khan

    2016-06-01

    Full Text Available The geospatial industry is forecast to see enormous growth in the forthcoming years and an extended need for a well-educated workforce. Hence ongoing education and training play an important role in professional life. In parallel, in the geospatial and IT arena as well as in political discussion and legislation, Open Source solutions, open data proliferation, and the use of open standards have an increasing significance. Based on the Memorandum of Understanding between the International Cartographic Association, the OSGeo Foundation, and ISPRS, this development led to the implementation of the ICA-OSGeo-Lab initiative with its mission “Making geospatial education and opportunities accessible to all”. Discussions in this initiative and the growth and maturity of geospatial Open Source software initiated the idea to develop a framework for a worldwide applicable Open Source certification approach. Generic and geospatial certification approaches are already offered by numerous organisations, e.g., the GIS Certification Institute, GeoAcademy, ASPRS, and software vendors such as Esri, Oracle, and RedHat. They focus on different fields of expertise and have different levels and ways of examination, which are offered for a wide range of fees. The development of the certification framework presented here is based on the analysis of diverse bodies of knowledge concepts, e.g., the NCGIA Core Curriculum, the URISA Body Of Knowledge, the USGIF Essential Body Of Knowledge, the “Geographic Information: Need to Know” (currently under development), and the Geospatial Technology Competency Model (GTCM). The latter provides a US-American-oriented list of the knowledge, skills, and abilities required of workers in the geospatial technology industry and essentially influenced the framework of certification. In addition to the theoretical analysis of existing resources, the geospatial community was integrated twofold. An online survey about the relevance of Open Source was performed and

  7. a Framework for AN Open Source Geospatial Certification Model

    Science.gov (United States)

    Khan, T. U. R.; Davis, P.; Behr, F.-J.

    2016-06-01

    The geospatial industry is forecast to see enormous growth in the forthcoming years and an extended need for a well-educated workforce. Hence ongoing education and training play an important role in professional life. In parallel, in the geospatial and IT arena as well as in political discussion and legislation, Open Source solutions, open data proliferation, and the use of open standards have an increasing significance. Based on the Memorandum of Understanding between the International Cartographic Association, the OSGeo Foundation, and ISPRS, this development led to the implementation of the ICA-OSGeo-Lab initiative with its mission "Making geospatial education and opportunities accessible to all". Discussions in this initiative and the growth and maturity of geospatial Open Source software initiated the idea to develop a framework for a worldwide applicable Open Source certification approach. Generic and geospatial certification approaches are already offered by numerous organisations, e.g., the GIS Certification Institute, GeoAcademy, ASPRS, and software vendors such as Esri, Oracle, and RedHat. They focus on different fields of expertise and have different levels and ways of examination, which are offered for a wide range of fees. The development of the certification framework presented here is based on the analysis of diverse bodies of knowledge concepts, e.g., the NCGIA Core Curriculum, the URISA Body Of Knowledge, the USGIF Essential Body Of Knowledge, the "Geographic Information: Need to Know" (currently under development), and the Geospatial Technology Competency Model (GTCM). The latter provides a US-American-oriented list of the knowledge, skills, and abilities required of workers in the geospatial technology industry and essentially influenced the framework of certification. In addition to the theoretical analysis of existing resources, the geospatial community was integrated twofold. An online survey about the relevance of Open Source was performed and evaluated with 105

  8. Modeling a Hypothetical 170Tm Source for Brachytherapy Applications

    International Nuclear Information System (INIS)

    Enger, Shirin A.; D'Amours, Michel; Beaulieu, Luc

    2011-01-01

    Purpose: To perform absorbed dose calculations based on Monte Carlo simulations for a hypothetical 170 Tm source and to investigate the influence of encapsulating material on the energy spectrum of the emitted electrons and photons. Methods: GEANT4 Monte Carlo code version 9.2 patch 2 was used to simulate the decay process of 170 Tm and to calculate the absorbed dose distribution using the GEANT4 Penelope physics models. A hypothetical 170 Tm source based on the Flexisource brachytherapy design, with the active core set as a pure thulium cylinder (length 3.5 mm and diameter 0.6 mm) and different cylindrical source encapsulations (length 5 mm and thickness 0.125 mm) constructed of titanium, stainless steel, gold, or platinum, was simulated. The radial dose function for the line source approximation was calculated following the TG-43U1 formalism for the stainless-steel encapsulation. Results: For the titanium and stainless-steel encapsulations, 94% of the total bremsstrahlung is produced inside the core, 4.8 and 5.5% in the titanium and stainless-steel capsules, respectively, and less than 1% in water. For the gold capsule, 85% is produced inside the core, 14.2% inside the gold capsule, and a negligible amount in water. Conclusions: The 170 Tm source is primarily a bremsstrahlung source, with the majority of bremsstrahlung photons being generated in the source core and experiencing little attenuation in the source encapsulation. Electrons are efficiently absorbed by the gold and platinum encapsulations. However, for the stainless-steel capsule (or other lower Z encapsulations) electrons will escape. The dose from these electrons is dominant over the photon dose in the first few millimeters but is not taken into account by current standard treatment planning systems. The total energy spectrum of photons emerging from the source depends on the encapsulation composition and results in mean photon energies well above 100 keV. This is higher than the main gamma-ray energy peak at 84 keV. Based on our
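
    For orientation, the TG-43U1 quantities named above fit together in the standard AAPM dose-rate equation for a line source; the expressions below are the generic formalism, not formulas reproduced from this paper.

        \dot{D}(r,\theta) = S_K\,\Lambda\,\frac{G_L(r,\theta)}{G_L(r_0,\theta_0)}\,g_L(r)\,F(r,\theta),
        \qquad
        G_L(r,\theta) =
          \begin{cases}
            \dfrac{\beta}{L\, r \sin\theta}, & \theta \neq 0 \\[1ex]
            \left(r^{2} - L^{2}/4\right)^{-1}, & \theta = 0
          \end{cases}

    Here S_K is the air-kerma strength, Λ the dose-rate constant, g_L(r) the radial dose function, F(r,θ) the 2D anisotropy function, β the angle subtended by the active length L at the point (r,θ), and (r_0,θ_0) = (1 cm, 90°).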

  9. Guidance and control of swarms of spacecraft

    Science.gov (United States)

    Morgan, Daniel James

    There has been considerable interest in formation flying spacecraft due to their potential to perform certain tasks more cheaply than monolithic spacecraft. Formation flying enables the use of smaller, cheaper spacecraft that distribute the risk of the mission. Recently, the ideas of formation flying have been extended to spacecraft swarms made up of hundreds to thousands of 100-gram-class spacecraft known as femtosatellites. The large number of spacecraft and limited capabilities of each individual spacecraft present a significant challenge in guidance, navigation, and control. This dissertation deals with the guidance and control algorithms required to enable the flight of spacecraft swarms. The algorithms developed in this dissertation are focused on achieving two main goals: swarm keeping and swarm reconfiguration. The objectives of swarm keeping are to maintain bounded relative distances between spacecraft, prevent collisions between spacecraft, and minimize the propellant used by each spacecraft. Swarm reconfiguration requires the transfer of the swarm to a specific shape. As with swarm keeping, minimizing the propellant used and preventing collisions are the main objectives. Additionally, the algorithms required for swarm keeping and swarm reconfiguration should be decentralized with respect to communication and computation so that they can be implemented on femtosats, which have limited hardware capabilities. The algorithms developed in this dissertation are concerned with swarms located in low Earth orbit. In these orbits, Earth oblateness and atmospheric drag have a significant effect on the relative motion of the swarm. The complicated dynamic environment of low Earth orbits further complicates the swarm-keeping and swarm-reconfiguration problems. To better develop and test these algorithms, a nonlinear, relative dynamic model with J2 and drag perturbations is developed. This model is used throughout this dissertation to validate the algorithms
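
    For context, the unperturbed baseline for such relative-motion models is the set of Clohessy-Wiltshire (Hill) equations for a circular reference orbit; the J2 and drag terms developed in the dissertation are corrections to this linearized form and are not reproduced here.

        \ddot{x} - 3n^{2}x - 2n\dot{y} = u_x, \qquad
        \ddot{y} + 2n\dot{x} = u_y, \qquad
        \ddot{z} + n^{2}z = u_z

    where n is the mean motion of the reference orbit, (x, y, z) are the radial, along-track, and cross-track relative coordinates, and (u_x, u_y, u_z) are control accelerations.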

  10. Spacecraft Attitude Control in Hamiltonian Framework

    DEFF Research Database (Denmark)

    Wisniewski, Rafal

    2000-01-01

    The objective of this paper is to give a design scheme for attitude control algorithms of a generic spacecraft. Along with the system model formulated in Hamilton's canonical form, the algorithm uses information about a required potential energy and a dissipative term. The control action...

  11. Open Sourcing Social Change: Inside the Constellation Model

    Directory of Open Access Journals (Sweden)

    Tonya Surman

    2008-09-01

    Full Text Available The constellation model was developed by and for the Canadian Partnership for Children's Health and the Environment. The model offers an innovative approach to organizing collaborative efforts in the social mission sector and shares various elements of the open source model. It emphasizes self-organizing and concrete action within a network of partner organizations working on a common issue. Constellations are self-organizing action teams that operate within the broader strategic vision of a partnership. These constellations are outwardly focused, placing their attention on creating value for those in the external environment rather than on the partnership itself. While serious effort is invested into core partnership governance and management, most of the energy is devoted to the decision making, resources and collaborative effort required to create social value. The constellations drive and define the partnership. The constellation model emerged from a deep understanding of the power of networks and peer production. Leadership rotates fluidly amongst partners, with each partner having the freedom to head up a constellation and to participate in constellations that carry out activities that are of more peripheral interest. The Internet provided the platform, the partner network enabled the expertise to align itself, and the goal of reducing chemical exposure in children kept the energy flowing. Building on seven years of experience, this article provides an overview of the constellation model, discusses the results from the CPCHE, and identifies similarities and differences between the constellation and open source models.

  12. Model of the Sgr B2 radio source

    International Nuclear Information System (INIS)

    Gosachinskij, I.V.; Khersonskij, V.K.

    1981-01-01

    A dynamical model of the gas cloud around the radio source Sagittarius B2 is suggested. This model describes the kinematic features of the gas in this source: contraction of the core and rotation of the envelope. The stability of the cloud at the initial stage is supported by the turbulent motion of the gas, whose energy dissipates due to magnetic viscosity. This process occurs more rapidly in the dense core, so the core begins to collapse while the envelope remains stable. The parameters of the primary cloud and some parameters (mass, density and size) of the collapse are calculated. The conditions in the core at the moment of its fragmentation into masses of stellar order are established.

  13. Additive Manufacturing: Ensuring Quality for Spacecraft Applications

    Science.gov (United States)

    Swanson, Theodore; Stephenson, Timothy

    2014-01-01

    Reliable manufacturing requires that material properties and fabrication processes be well defined in order to ensure that the manufactured parts meet specified requirements. While this issue is now relatively straightforward for traditional processes such as subtractive manufacturing and injection molding, this capability is still evolving for additively manufactured (AM) products. Hence, one of the principal challenges within AM is in qualifying and verifying source material properties and process control. This issue is particularly critical for demanding applications in harsh environments, such as spacecraft.

  14. Nitrate source apportionment in a subtropical watershed using Bayesian model

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Liping; Han, Jiangpei; Xue, Jianlong; Zeng, Lingzao [College of Environmental and Natural Resource Sciences, Zhejiang Provincial Key Laboratory of Subtropical Soil and Plant Nutrition, Zhejiang University, Hangzhou, 310058 (China); Shi, Jiachun, E-mail: jcshi@zju.edu.cn [College of Environmental and Natural Resource Sciences, Zhejiang Provincial Key Laboratory of Subtropical Soil and Plant Nutrition, Zhejiang University, Hangzhou, 310058 (China); Wu, Laosheng, E-mail: laowu@zju.edu.cn [College of Environmental and Natural Resource Sciences, Zhejiang Provincial Key Laboratory of Subtropical Soil and Plant Nutrition, Zhejiang University, Hangzhou, 310058 (China); Jiang, Yonghai [State Key Laboratory of Environmental Criteria and Risk Assessment, Chinese Research Academy of Environmental Sciences, Beijing, 100012 (China)

    2013-10-01

    Nitrate (NO3−) pollution in aquatic system is a worldwide problem. The temporal distribution pattern and sources of nitrate are of great concern for water quality. The nitrogen (N) cycling processes in a subtropical watershed located in Changxing County, Zhejiang Province, China were greatly influenced by the temporal variations of precipitation and temperature during the study period (September 2011 to July 2012). The highest NO3− concentration in water was in May (wet season, mean ± SD = 17.45 ± 9.50 mg L−1) and the lowest concentration occurred in December (dry season, mean ± SD = 10.54 ± 6.28 mg L−1). Nevertheless, no water sample in the study area exceeds the WHO drinking water limit of 50 mg L−1 NO3−. Four sources of NO3− (atmospheric deposition, AD; soil N, SN; synthetic fertilizer, SF; manure and sewage, M and S) were identified using both hydrochemical characteristics [Cl−, NO3−, HCO3−, SO42−, Ca2+, K+, Mg2+, Na+, dissolved oxygen (DO)] and dual isotope approach (δ15N–NO3− and δ18O–NO3−). Both chemical and isotopic characteristics indicated that denitrification was not the main N cycling process in the study area. Using a Bayesian model (stable isotope analysis in R, SIAR), the contribution of each source was apportioned. Source apportionment results showed that source contributions differed significantly between the dry and wet season, AD and M and S contributed more in December than in May. In contrast, SN and SF contributed more NO3− to water in May than that in December. M and S and SF were the major contributors in December and May, respectively. Moreover, the shortcomings and uncertainties of SIAR were discussed to provide implications for future works. With the assessment of temporal variation and sources of NO3−, better

  15. Nitrate source apportionment in a subtropical watershed using Bayesian model

    International Nuclear Information System (INIS)

    Yang, Liping; Han, Jiangpei; Xue, Jianlong; Zeng, Lingzao; Shi, Jiachun; Wu, Laosheng; Jiang, Yonghai

    2013-01-01

    Nitrate (NO3−) pollution in aquatic system is a worldwide problem. The temporal distribution pattern and sources of nitrate are of great concern for water quality. The nitrogen (N) cycling processes in a subtropical watershed located in Changxing County, Zhejiang Province, China were greatly influenced by the temporal variations of precipitation and temperature during the study period (September 2011 to July 2012). The highest NO3− concentration in water was in May (wet season, mean ± SD = 17.45 ± 9.50 mg L−1) and the lowest concentration occurred in December (dry season, mean ± SD = 10.54 ± 6.28 mg L−1). Nevertheless, no water sample in the study area exceeds the WHO drinking water limit of 50 mg L−1 NO3−. Four sources of NO3− (atmospheric deposition, AD; soil N, SN; synthetic fertilizer, SF; manure and sewage, M and S) were identified using both hydrochemical characteristics [Cl−, NO3−, HCO3−, SO42−, Ca2+, K+, Mg2+, Na+, dissolved oxygen (DO)] and dual isotope approach (δ15N–NO3− and δ18O–NO3−). Both chemical and isotopic characteristics indicated that denitrification was not the main N cycling process in the study area. Using a Bayesian model (stable isotope analysis in R, SIAR), the contribution of each source was apportioned. Source apportionment results showed that source contributions differed significantly between the dry and wet season, AD and M and S contributed more in December than in May. In contrast, SN and SF contributed more NO3− to water in May than that in December. M and S and SF were the major contributors in December and May, respectively. Moreover, the shortcomings and uncertainties of SIAR were discussed to provide implications for future works. With the assessment of temporal variation and sources of NO3−, better agricultural management practices and sewage disposal programs can be implemented to sustain water quality in subtropical watersheds

  16. An architectural model for software reliability quantification: sources of data

    International Nuclear Information System (INIS)

    Smidts, C.; Sova, D.

    1999-01-01

    Software reliability assessment models in use today treat software as a monolithic block. An aversion towards 'atomic' models seems to exist. These models appear to add complexity to the modeling and the data collection, and seem intrinsically difficult to generalize. In 1997, we introduced an architecturally based software reliability model called FASRE. The model is based on an architecture derived from the requirements, which captures both functional and nonfunctional requirements, and on a generic classification of functions, attributes and failure modes. The model focuses on evaluation of failure mode probabilities and uses a Bayesian quantification framework. Failure mode probabilities of functions and attributes are propagated to the system level using fault trees. It can incorporate any type of prior information, such as results of developers' testing or historical information on a specific functionality and its attributes, and is ideally suited for reusable software. By building an architecture and deriving its potential failure modes, the model forces early appraisal and understanding of the weaknesses of the software, allows reliability analysis of the structure of the system, and provides assessments at a functional level as well as at a system level. In order to quantify the probability of failure (or the probability of success) of a specific element of our architecture, data are needed. The term element of the architecture is used here in its broadest sense to mean a single failure mode or a higher level of abstraction such as a function. The paper surveys the potential sources of software reliability data available during software development. Next, the mechanisms for incorporating these sources of relevant data into the FASRE model are identified.

  17. Receptor models for source apportionment of remote aerosols in Brazil

    International Nuclear Information System (INIS)

    Artaxo Netto, P.E.

    1985-11-01

    The PIXE (particle induced X-ray emission) and PESA (proton elastic scattering analysis) methods were used in conjunction with receptor models for source apportionment of remote aerosols in Brazil. PIXE, used to determine concentrations of elements with Z ≥ 11, has a detection limit of about 1 ng/m3. The concentrations of carbon, nitrogen and oxygen in the fine fraction of Amazon Basin aerosols were measured by PESA. We sampled in Jureia (SP), Fernando de Noronha, Arembepe (BA), Firminopolis (GO), Itaberai (GO) and the Amazon Basin. For collecting the airborne particles we used cascade impactors, stacked filter units, and streaker samplers. Three receptor models were used: chemical mass balance, stepwise multiple regression analysis and principal factor analysis. The elemental and gravimetric concentrations were explained by the models within the experimental errors. Three sources of aerosol were quantitatively distinguished: marine aerosol, soil dust and aerosols related to forests. The emission of aerosols by vegetation is very clear for all the sampling sites. In the Amazon Basin and Jureia it is the major source, responsible for 60 to 80% of airborne concentrations. (Author)

  18. Spacecraft attitude determination using the earth's magnetic field

    Science.gov (United States)

    Simpson, David G.

    1989-01-01

    A method is presented by which the attitude of a low-Earth orbiting spacecraft may be determined using a vector magnetometer, a digital Sun sensor, and a mathematical model of the Earth's magnetic field. The method is currently being implemented for the Solar Maximum Mission spacecraft (as a backup for the failing star trackers) as a way to determine roll gyro drift.
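
    The abstract does not state which vector-matching algorithm is used; a common choice for fusing a magnetometer vector and a Sun-sensor vector with their model-predicted counterparts is the TRIAD method, sketched below as an illustrative (hypothetical) implementation rather than the SMM flight algorithm.

        import numpy as np

        def triad(s_body, b_body, s_ref, b_ref):
            """Body-to-reference rotation matrix from two vector observations.

            s_body, b_body: Sun and magnetic-field vectors measured in the body frame.
            s_ref,  b_ref : the same directions predicted in the reference frame from a
                            solar ephemeris and a geomagnetic field model.
            """
            def unit(v):
                return v / np.linalg.norm(v)

            def frame(v1, v2):
                # Orthonormal triad anchored on v1 (usually the more accurate observation).
                t1 = unit(v1)
                t2 = unit(np.cross(v1, v2))
                return np.column_stack((t1, t2, np.cross(t1, t2)))

            # The returned matrix maps body-frame vectors into the reference frame.
            return frame(s_ref, b_ref) @ frame(s_body, b_body).T

        # Example with made-up measurements:
        A = triad(np.array([1.0, 0.1, 0.0]), np.array([0.2, 0.9, 0.4]),
                  np.array([0.9, 0.4, 0.1]), np.array([0.0, 1.0, 0.3]))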

  19. Probing interferometric parallax with interplanetary spacecraft

    Science.gov (United States)

    Rodeghiero, G.; Gini, F.; Marchili, N.; Jain, P.; Ralston, J. P.; Dallacasa, D.; Naletto, G.; Possenti, A.; Barbieri, C.; Franceschini, A.; Zampieri, L.

    2017-07-01

    We describe an experimental scenario for testing a novel method to measure distance and proper motion of astronomical sources. The method is based on multi-epoch observations of amplitude or intensity correlations between separate receiving systems. This technique is called Interferometric Parallax, and efficiently exploits phase information that has traditionally been overlooked. The test case we discuss combines amplitude correlations of signals from deep space interplanetary spacecraft with those from distant galactic and extragalactic radio sources with the goal of estimating the interplanetary spacecraft distance. Interferometric parallax relies on the detection of wavefront curvature effects in signals collected by pairs of separate receiving systems. The method shows promising potentialities over current techniques when the target is unresolved from the background reference sources. Developments in this field might lead to the construction of an independent, geometrical cosmic distance ladder using a dedicated project and future generation instruments. We present a conceptual overview supported by numerical estimates of its performances applied to a spacecraft orbiting the Solar System. Simulations support the feasibility of measurements with a simple and time-saving observational scheme using current facilities.
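
    The curvature signal the method exploits can be illustrated with the usual near-field expansion of the geometric delay; this is a generic expression, not one taken from the paper. For a source at distance D and two receivers separated by a baseline B, with θ the angle between the baseline and the source direction,

        \Delta\ell \;\simeq\; B\cos\theta \;+\; \frac{B^{2}\sin^{2}\theta}{2D},

    so the departure from the plane-wave (far-field) delay scales as B²/(2D); multi-epoch correlation measurements aim to isolate this second term, from which the distance D follows.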

  20. A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.

    Science.gov (United States)

    Nwankwo, Obioma; Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens

    2017-01-01

    To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream of the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm2 in the phase space plane. The average direction cosines, the Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF file for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm.
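
    As an illustration of the inverse-transform sampling step mentioned above, the sketch below draws particle energies from a tabulated PDF; the spectrum shape and bin edges are invented for the example and are not taken from the paper.

        import numpy as np

        def sample_from_pdf(bin_edges, pdf_values, n_samples, rng=None):
            """Draw samples from a tabulated (histogram) PDF by inverse transform sampling."""
            rng = rng or np.random.default_rng()
            widths = np.diff(bin_edges)
            pmf = pdf_values * widths                 # probability mass per bin
            pmf = pmf / pmf.sum()
            cdf = np.concatenate(([0.0], np.cumsum(pmf)))
            u = rng.random(n_samples)
            idx = np.clip(np.searchsorted(cdf, u, side="right") - 1, 0, len(widths) - 1)
            frac = (u - cdf[idx]) / np.maximum(cdf[idx + 1] - cdf[idx], 1e-12)
            return bin_edges[idx] + frac * widths[idx]   # linear interpolation within the bin

        # Hypothetical 6 MV spectrum (arbitrary bremsstrahlung-like shape, energies in MeV):
        edges = np.linspace(0.0, 6.0, 61)
        centres = 0.5 * (edges[:-1] + edges[1:])
        energies = sample_from_pdf(edges, centres * np.exp(-centres / 1.5), 100_000)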

  1. A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.

    Directory of Open Access Journals (Sweden)

    Obioma Nwankwo

    Full Text Available To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream of the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm2 in the phase space plane. The average direction cosines, the Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF file for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm.

  2. How Many Separable Sources? Model Selection In Independent Components Analysis

    DEFF Research Database (Denmark)

    Woods, Roger P.; Hansen, Lars Kai; Strother, Stephen

    2015-01-01

    among potential model categories with differing numbers of Gaussian components. Based on simulation studies, the assumptions and approximations underlying the Akaike Information Criterion do not hold in this setting, even with a very large number of observations. Cross-validation is a suitable, though....../Principal Components Analysis (mixed ICA/PCA) model described here accommodates one or more Gaussian components in the independent components analysis model and uses principal components analysis to characterize contributions from this inseparable Gaussian subspace. Information theory can then be used to select from...... might otherwise be questionable. Failure of the Akaike Information Criterion in model selection also has relevance in traditional independent components analysis where all sources are assumed non-Gaussian....

  3. Receptor Model Source Apportionment of Nonmethane Hydrocarbons in Mexico City

    Directory of Open Access Journals (Sweden)

    V. Mugica

    2002-01-01

    Full Text Available With the purpose of estimating the source contributions of nonmethane hydrocarbons (NMHC) to the atmosphere at three different sites in the Mexico City Metropolitan Area, 92 ambient air samples were measured from February 23 to March 22 of 1997. Light- and heavy-duty vehicular profiles were determined to differentiate the NMHC contribution of diesel and gasoline to the atmosphere. Food cooking source profiles were also determined for chemical mass balance receptor model application. Initial source contribution estimates were carried out to determine the adequate combination of source profiles and fitting species. Ambient samples of NMHC were apportioned to motor vehicle exhaust, gasoline vapor, handling and distribution of liquefied petroleum gas (LP gas), asphalt operations, painting operations, landfills, and food cooking. Both gasoline and diesel motor vehicle exhaust were the major NMHC contributors for all sites and times, with a percentage of up to 75%. The average motor vehicle exhaust contributions increased during the day. In contrast, LP gas contribution was higher during the morning than in the afternoon. Apportionment for the most abundant individual NMHC showed that the vehicular source is the major contributor to acetylene, ethylene, pentanes, n-hexane, toluene, and xylenes, while handling and distribution of LP gas was the major source contributor to propane and butanes. Comparison between CMB estimates of NMHC and the emission inventory showed a good agreement for vehicles, handling and distribution of LP gas, and painting operations; nevertheless, emissions from diesel exhaust and asphalt operations showed differences, and the results suggest that these emissions could be underestimated.
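
    The chemical mass balance step amounts to a weighted least-squares fit of fixed source profiles to each ambient sample; the toy sketch below illustrates the idea with invented numbers rather than the Mexico City profiles.

        import numpy as np
        from scipy.optimize import nnls

        # Rows = fitting species, columns = sources (vehicle exhaust, LP gas, painting).
        # Entries are mass fractions of each species in each source's emissions (invented).
        profiles = np.array([
            [0.10, 0.02, 0.01],   # acetylene
            [0.15, 0.01, 0.00],   # ethylene
            [0.05, 0.60, 0.00],   # propane
            [0.08, 0.30, 0.02],   # butanes
            [0.12, 0.01, 0.40],   # toluene
        ])
        ambient = np.array([8.0, 11.0, 25.0, 14.0, 12.0])   # measured NMHC, ppbC (invented)
        sigma = np.array([1.0, 1.2, 2.0, 1.5, 1.3])         # measurement uncertainties

        # Weighted, non-negative least squares: minimize ||(ambient - profiles @ s) / sigma||^2.
        contributions, _ = nnls(profiles / sigma[:, None], ambient / sigma)
        print(dict(zip(["vehicles", "LP gas", "painting"], contributions.round(1))))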

  4. Hierarchical Bayesian Model for Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE)

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2009-01-01

    In this paper we propose an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model is motivated by the many uncertain contributions that form the forward propagation model, including the tissue conductivity distribution, the cortical surface, and ele...

  5. Atmospheric mercury dispersion modelling from two nearest hypothetical point sources

    Energy Technology Data Exchange (ETDEWEB)

    Al Razi, Khandakar Md Habib; Hiroshi, Moritomi; Shinji, Kambara [Environmental and Renewable Energy System (ERES), Graduate School of Engineering, Gifu University, Yanagido, Gifu City, 501-1193 (Japan)

    2012-07-01

    Japan's coastal areas are still environmentally friendly, though there are multiple air emission sources originating from several developmental activities such as automobile industries, operation of thermal power plants, and mobile-source pollution. Mercury is known to be a potential air pollutant in the region, apart from SOx, NOx, CO and ozone. Mercury contamination in water bodies and other ecosystems due to deposition of atmospheric mercury is considered a serious environmental concern. Identification of sources contributing to the high atmospheric mercury levels will be useful for formulating pollution control and mitigation strategies in the region. In Japan, mercury and its compounds were categorized as hazardous air pollutants in 1996 and are on the list of 'Substances Requiring Priority Action' published by the Central Environmental Council of Japan. The Air Quality Management Division of the Environmental Bureau, Ministry of the Environment, Japan, selected the current annual mean environmental air quality standard for mercury and its compounds of 0.04 µg/m3. Long-term exposure to mercury and its compounds can have a carcinogenic effect, inducing, e.g., Minamata disease. This study evaluates the impact of mercury emissions on air quality in the coastal area of Japan. Average yearly emission of mercury from an elevated point source in this area, with background concentration and one-year meteorological data, was used to predict the ground level concentration of mercury. To estimate the concentration of mercury and its compounds in the air of the local area, two different simulation models have been used. The first is the National Institute of Advanced Industrial Science and Technology Atmospheric Dispersion Model for Exposure and Risk Assessment (AIST-ADMER) that estimates regional atmospheric concentration and distribution. The second is the Hybrid Single Particle Lagrangian Integrated Trajectory Model (HYSPLIT) that estimates the atmospheric

  6. Greenhouse Gas Source Attribution: Measurements Modeling and Uncertainty Quantification

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Zhen [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); van Bloemen Waanders, Bart Gustaaf [Sandia National Lab. (SNL-CA), Livermore, CA (United States); LaFranchi, Brian W. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ivey, Mark D. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Schrader, Paul E. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Michelsen, Hope A. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Bambha, Ray P. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2014-09-01

    In this project we have developed atmospheric measurement capabilities and a suite of atmospheric modeling and analysis tools that are well suited for verifying emissions of greenhouse gases (GHGs) on an urban-through-regional scale. We have for the first time applied the Community Multiscale Air Quality (CMAQ) model to simulate atmospheric CO2. This will allow for the examination of regional-scale transport and distribution of CO2 along with air pollutants traditionally studied using CMAQ at relatively high spatial and temporal resolution, with the goal of leveraging emissions verification efforts for both air quality and climate. We have developed a bias-enhanced Bayesian inference approach that can remedy the well-known problem of transport model errors in atmospheric CO2 inversions. We have tested the approach using data and model outputs from the TransCom3 global CO2 inversion comparison project. We have also performed two prototyping studies on inversion approaches in the generalized convection-diffusion context. One of these studies employed Polynomial Chaos Expansion to accelerate the evaluation of a regional transport model and enable efficient Markov Chain Monte Carlo sampling of the posterior for Bayesian inference. The other approach uses deterministic inversion of a convection-diffusion-reaction system in the presence of uncertainty. These approaches should, in principle, be applicable to realistic atmospheric problems with moderate adaptation. We outline a regional greenhouse gas source inference system that integrates (1) two approaches of atmospheric dispersion simulation and (2) a class of Bayesian inference and uncertainty quantification algorithms. We use two different and complementary approaches to simulate atmospheric dispersion. Specifically, we use a Eulerian chemical transport model, CMAQ, and a Lagrangian Particle Dispersion Model, FLEXPART-WRF. These two models share the same WRF

  7. Modeling of low pressure plasma sources for microelectronics fabrication

    International Nuclear Information System (INIS)

    Agarwal, Ankur; Bera, Kallol; Kenney, Jason; Rauf, Shahid; Likhanskii, Alexandre

    2017-01-01

    Chemically reactive plasmas operating in the 1 mTorr–10 Torr pressure range are widely used for thin film processing in the semiconductor industry. Plasma modeling has come to play an important role in the design of these plasma processing systems. A number of 3-dimensional (3D) fluid and hybrid plasma modeling examples are used to illustrate the role of computational investigations in design of plasma processing hardware for applications such as ion implantation, deposition, and etching. A model for a rectangular inductively coupled plasma (ICP) source is described, which is employed as an ion source for ion implantation. It is shown that gas pressure strongly influences ion flux uniformity, which is determined by the balance between the location of plasma production and diffusion. The effect of chamber dimensions on plasma uniformity in a rectangular capacitively coupled plasma (CCP) is examined using an electromagnetic plasma model. Due to high pressure and small gap in this system, plasma uniformity is found to be primarily determined by the electric field profile in the sheath/pre-sheath region. A 3D model is utilized to investigate the confinement properties of a mesh in a cylindrical CCP. Results highlight the role of hole topology and size on the formation of localized hot-spots. A 3D electromagnetic plasma model for a cylindrical ICP is used to study inductive versus capacitive power coupling and how placement of ground return wires influences it. Finally, a 3D hybrid plasma model for an electron beam generated magnetized plasma is used to understand the role of reactor geometry on plasma uniformity in the presence of E  ×  B drift. (paper)

  8. Modeling of low pressure plasma sources for microelectronics fabrication

    Science.gov (United States)

    Agarwal, Ankur; Bera, Kallol; Kenney, Jason; Likhanskii, Alexandre; Rauf, Shahid

    2017-10-01

    Chemically reactive plasmas operating in the 1 mTorr-10 Torr pressure range are widely used for thin film processing in the semiconductor industry. Plasma modeling has come to play an important role in the design of these plasma processing systems. A number of 3-dimensional (3D) fluid and hybrid plasma modeling examples are used to illustrate the role of computational investigations in design of plasma processing hardware for applications such as ion implantation, deposition, and etching. A model for a rectangular inductively coupled plasma (ICP) source is described, which is employed as an ion source for ion implantation. It is shown that gas pressure strongly influences ion flux uniformity, which is determined by the balance between the location of plasma production and diffusion. The effect of chamber dimensions on plasma uniformity in a rectangular capacitively coupled plasma (CCP) is examined using an electromagnetic plasma model. Due to high pressure and small gap in this system, plasma uniformity is found to be primarily determined by the electric field profile in the sheath/pre-sheath region. A 3D model is utilized to investigate the confinement properties of a mesh in a cylindrical CCP. Results highlight the role of hole topology and size on the formation of localized hot-spots. A 3D electromagnetic plasma model for a cylindrical ICP is used to study inductive versus capacitive power coupling and how placement of ground return wires influences it. Finally, a 3D hybrid plasma model for an electron beam generated magnetized plasma is used to understand the role of reactor geometry on plasma uniformity in the presence of E  ×  B drift.

  9. Sensitivity of numerical dispersion modeling to explosive source parameters

    International Nuclear Information System (INIS)

    Baskett, R.L.; Cederwall, R.T.

    1991-01-01

    The calculation of downwind concentrations from non-traditional sources, such as explosions, provides unique challenges to dispersion models. The US Department of Energy has assigned the Atmospheric Release Advisory Capability (ARAC) at the Lawrence Livermore National Laboratory (LLNL) the task of estimating the impact of accidental radiological releases to the atmosphere anywhere in the world. Our experience includes responses to over 25 incidents in the past 16 years, and about 150 exercises a year. Examples of responses to explosive accidents include the 1980 Titan 2 missile fuel explosion near Damascus, Arkansas and the hydrogen gas explosion in the 1986 Chernobyl nuclear power plant accident. Based on judgment and experience, we frequently estimate the source geometry and the amount of toxic material aerosolized as well as its particle size distribution. To expedite our real-time response, we developed some automated algorithms and default assumptions about several potential sources. It is useful to know how well these algorithms perform against real-world measurements and how sensitive our dispersion model is to the potential range of input values. In this paper we present the algorithms we use to simulate explosive events, compare these methods with limited field data measurements, and analyze their sensitivity to input parameters. 14 refs., 7 figs., 2 tabs
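
    A minimal example of the kind of downwind estimate such a response produces is the steady-state Gaussian plume formula with ground reflection; the release rate, wind speed, and dispersion coefficients below are placeholders, not ARAC model values.

        import numpy as np

        def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
            """Concentration (g/m^3) at crosswind offset y and height z for a continuous release.

            q: emission rate (g/s); u: wind speed (m/s); h: effective release height (m);
            sigma_y, sigma_z: dispersion coefficients (m) at the downwind distance of interest.
            """
            lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
            vertical = (np.exp(-(z - h)**2 / (2.0 * sigma_z**2))
                        + np.exp(-(z + h)**2 / (2.0 * sigma_z**2)))   # image term = ground reflection
            return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

        # Illustrative numbers only: 1 kg/s release, 5 m/s wind, ground-level receptor on the
        # plume axis, sigma values roughly representative of ~1 km downwind in neutral conditions.
        c = gaussian_plume(q=1000.0, u=5.0, y=0.0, z=0.0, h=50.0, sigma_y=80.0, sigma_z=40.0)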

  10. Particle model of a cylindrical inductively coupled ion source

    Science.gov (United States)

    Ippolito, N. D.; Taccogna, F.; Minelli, P.; Cavenago, M.; Veltri, P.

    2017-08-01

    In spite of the wide use of RF sources, a complete understanding of the mechanisms regulating the RF-coupling of the plasma is still lacking so self-consistent simulations of the involved physics are highly desirable. For this reason we are developing a 2.5D fully kinetic Particle-In-Cell Monte-Carlo-Collision (PIC-MCC) model of a cylindrical ICP-RF source, keeping the time step of the simulation small enough to resolve the plasma frequency scale. The grid cell dimension is now about seven times larger than the average Debye length, because of the large computational demand of the code. It will be scaled down in the next phase of the development of the code. The filling gas is Xenon, in order to minimize the time lost by the MCC collision module in the first stage of development of the code. The results presented here are preliminary, with the code already showing a good robustness. The final goal will be the modeling of the NIO1 (Negative Ion Optimization phase 1) source, operating in Padua at Consorzio RFX.
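
    The resolution constraints mentioned above (time step against the plasma frequency, cell size against the Debye length) can be checked with a few lines; the density and electron temperature used here are placeholders, not NIO1 parameters.

        import numpy as np

        EPS0, QE, ME = 8.854e-12, 1.602e-19, 9.109e-31   # SI constants

        def pic_resolution(n_e, T_e_eV, dx, dt):
            """Debye length, electron plasma frequency, and the ratios a PIC grid should respect."""
            lambda_d = np.sqrt(EPS0 * T_e_eV / (n_e * QE))      # Debye length (m), T_e in eV
            omega_pe = np.sqrt(n_e * QE**2 / (EPS0 * ME))       # electron plasma frequency (rad/s)
            return {"lambda_D [m]": lambda_d,
                    "omega_pe [rad/s]": omega_pe,
                    "dx / lambda_D": dx / lambda_d,              # ideally of order one or less
                    "omega_pe * dt": omega_pe * dt}              # ideally well below ~0.2

        # Illustrative numbers only:
        print(pic_resolution(n_e=1e17, T_e_eV=5.0, dx=5e-5, dt=1e-11))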

  11. A theoretical model of a liquid metal ion source

    International Nuclear Information System (INIS)

    Kingham, D.R.; Swanson, L.W.

    1984-01-01

    A model of liquid metal ion source (LMIS) operation has been developed which gives a consistent picture of three different aspects of LMI sources: (i) the shape and size of the ion emitting region; (ii) the mechanism of ion formation; (iii) properties of the ion beam such as angular intensity and energy spread. It was found that the emitting region takes the shape of a jet-like protrusion on the end of a Taylor cone, with ion emission from an area only a few tens of Å across, in agreement with recent TEM pictures by Sudraud. This is consistent with ion formation predominantly by field evaporation. Calculated angular intensities and current-voltage characteristics based on our fluid-dynamic jet-like protrusion model agree well with experiment. The formation of doubly charged ions is attributed to post-ionization of field-evaporated singly charged ions, and an apex field strength of about 2.0 V Å−1 was calculated for a Ga source. The ion energy spread is mainly due to space charge effects; it is known to be reduced for doubly charged ions, in agreement with this post-ionization mechanism. (author)

  12. Extended gamma sources modelling using multipole expansion: Application to the Tunisian gamma source load planning

    International Nuclear Information System (INIS)

    Loussaief, Abdelkader

    2007-01-01

    In this work we extend the use of multipole moment expansions to the case of inner radiation fields. A series expansion of the photon flux was established. The main advantage of this approach is that it offers the opportunity to treat both inner and external radiation field cases. We determined the expression of the inner multipole moments both in spherical harmonics and in cartesian coordinates. As an application, we applied the analytical model to a radiation facility used for small-target irradiation. Theoretical, experimental and simulation studies were performed, in air and in a product, and good agreement was reached. Conventional dose distribution studies for gamma irradiation facilities involve the use of isodose maps. The establishment of these maps requires the measurement of the absorbed dose at many points, which makes the task expensive experimentally and very long by simulation. However, a lack of measurement points can distort the dose distribution cartography. To overcome these problems, we present in this paper a mathematical method to describe the dose distribution in air. This method is based on the multipole expansion in spherical harmonics of the photon flux emitted by the gamma source. The determination of the multipole coefficients of this development allows the modeling of the radiation field around the gamma source. (Author)
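
    The abstract does not reproduce the expansion itself; the generic spherical-harmonic form on which such a model rests is, for field points outside (exterior moments) and inside (interior moments) the source region respectively,

        \Phi(r,\theta,\varphi) = \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell} \frac{a_{\ell m}}{r^{\ell+1}}\, Y_{\ell m}(\theta,\varphi),
        \qquad
        \Phi(r,\theta,\varphi) = \sum_{\ell=0}^{\infty}\sum_{m=-\ell}^{\ell} b_{\ell m}\, r^{\ell}\, Y_{\ell m}(\theta,\varphi),

    where the coefficients a_{ℓm} and b_{ℓm} are fitted to a limited set of measured or simulated points and the series is truncated at low order in practice.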

  13. A Two-Temperature Open-Source CFD Model for Hypersonic Reacting Flows, Part One: Zero-Dimensional Analysis

    Directory of Open Access Journals (Sweden)

    Vincent Casseau

    2016-10-01

    Full Text Available A two-temperature CFD (computational fluid dynamics) solver is a prerequisite to any spacecraft re-entry numerical study that aims at producing results with a satisfactory level of accuracy within realistic timescales. In this respect, a new two-temperature CFD solver, hy2Foam, has been developed within the framework of the open-source CFD platform OpenFOAM for the prediction of hypersonic reacting flows. This solver makes the distinct juncture between the trans-rotational and multiple vibrational-electronic temperatures. hy2Foam has the capability to model vibrational-translational and vibrational-vibrational energy exchanges in an eleven-species air mixture. It makes use of either the Park TTv model or the coupled vibration-dissociation-vibration (CVDV) model to handle chemistry-vibration coupling and it can simulate flows with or without electronic energy. Verification of the code for various zero-dimensional adiabatic heat baths of progressive complexity has been carried out. hy2Foam has been shown to produce results in good agreement with those given by the CFD code LeMANS (The Michigan Aerothermodynamic Navier-Stokes solver) and previously published data. A comparison is also performed with the open-source DSMC (direct simulation Monte Carlo) code dsmcFoam. It has been demonstrated that the use of the CVDV model and rates derived from Quantum-Kinetic theory promote a satisfactory consistency between the CFD and DSMC chemistry modules.
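
    A central ingredient of any two-temperature solver of this kind is the vibrational-translational relaxation source term, usually written in Landau-Teller form; the expression below is the textbook form, given for orientation rather than quoted from the hy2Foam paper.

        \frac{\mathrm{d} e_{v}}{\mathrm{d} t} \;=\; \frac{e_{v}^{*}(T_{tr}) - e_{v}}{\tau_{VT}},

    where e_v is the vibrational energy of a species, e_v*(T_tr) its equilibrium value at the trans-rotational temperature, and τ_VT the relaxation time, typically evaluated from the Millikan-White correlation with Park's high-temperature correction.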

  14. SOURCE 2.0 model development: UO2 thermal properties

    International Nuclear Information System (INIS)

    Reid, P.J.; Richards, M.J.; Iglesias, F.C.; Brito, A.C.

    1997-01-01

    During analysis of CANDU postulated accidents, the reactor fuel is estimated to experience large temperature variations and to be exposed to a variety of environments, from highly oxidizing to mildly reducing. The exposure of CANDU fuel to these environments and temperatures may affect fission product releases from the fuel and cause degradation of the fuel thermal properties. SOURCE 2.0 is a safety analysis code that will model the mechanisms required to calculate fission product release for a variety of accident scenarios, including large break loss of coolant accidents (LOCAs) with or without emergency core cooling. The goal of the model development is to generate models which are consistent with each other and phenomenologically based, insofar as that is possible given the state of theoretical understanding

  15. RF Plasma modeling of the Linac4 H− ion source

    CERN Document Server

    Mattei, S; Hatayama, A; Lettry, J; Kawamura, Y; Yasumoto, M; Schmitzer, C

    2013-01-01

    This study focuses on the modelling of the ICP RF-plasma in the Linac4 H− ion source currently being constructed at CERN. A self-consistent model of the plasma dynamics with the RF electromagnetic field has been developed by a PIC-MCC method. In this paper, the model is applied to the analysis of a low density plasma discharge initiation, with particular interest on the effect of the external magnetic field on the plasma properties, such as wall loss, electron density and electron energy. The use of a multi-cusp magnetic field effectively limits the wall losses, particularly in the radial direction. Preliminary results however indicate that a reduced heating efficiency results in such a configuration. The effect is possibly due to trapping of electrons in the multi-cusp magnetic field, preventing their continuous acceleration in the azimuthal direction.

  16. How to Model Super-Soft X-ray Sources?

    Science.gov (United States)

    Rauch, Thomas

    2012-07-01

    During outbursts, the surface temperatures of white dwarfs in cataclysmic variables exceed by far half a million Kelvin. In this phase, they may become the brightest super-soft sources (SSS) in the sky. Time-series of high-resolution, high S/N X-ray spectra taken during rise, maximum, and decline of their X-ray luminosity provide insights into the processes following such outbursts as well as in the surface composition of the white dwarf. Their analysis requires adequate NLTE model atmospheres. The Tuebingen Non-LTE Model-Atmosphere Package (TMAP) is a powerful tool for their calculation. We present the application of TMAP models to SSS spectra and discuss their validity.

  17. Sensitivity of fluvial sediment source apportionment to mixing model assumptions: A Bayesian model comparison.

    Science.gov (United States)

    Cooper, Richard J; Krueger, Tobias; Hiscock, Kevin M; Rawlins, Barry G

    2014-11-01

    Mixing models have become increasingly common tools for apportioning fluvial sediment load to various sediment sources across catchments using a wide variety of Bayesian and frequentist modeling approaches. In this study, we demonstrate how different model setups can impact upon resulting source apportionment estimates in a Bayesian framework via a one-factor-at-a-time (OFAT) sensitivity analysis. We formulate 13 versions of a mixing model, each with different error assumptions and model structural choices, and apply them to sediment geochemistry data from the River Blackwater, Norfolk, UK, to apportion suspended particulate matter (SPM) contributions from three sources (arable topsoils, road verges, and subsurface material) under base flow conditions between August 2012 and August 2013. Whilst all 13 models estimate subsurface sources to be the largest contributor of SPM (median ∼76%), comparison of apportionment estimates reveals varying degrees of sensitivity to changing priors, inclusion of covariance terms, incorporation of time-variant distributions, and methods of proportion characterization. We also demonstrate differences in apportionment results between a full and an empirical Bayesian setup, and between a Bayesian and a frequentist optimization approach. This OFAT sensitivity analysis reveals that mixing model structural choices and error assumptions can significantly impact upon sediment source apportionment results, with estimated median contributions in this study varying by up to 21% between model versions. Users of mixing models are therefore strongly advised to carefully consider and justify their choice of model structure prior to conducting sediment source apportionment investigations. Key points: an OFAT sensitivity analysis of sediment fingerprinting mixing models is conducted; Bayesian models display high sensitivity to error assumptions and structural choices; source apportionment results differ between Bayesian and frequentist approaches.
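
    The structural choices being compared are variants of a linear mixing model; a deliberately minimal version, with a flat Dirichlet prior on the source proportions and a Gaussian likelihood, can be sampled by brute force as sketched below. All numbers are invented for illustration and are unrelated to the River Blackwater data.

        import numpy as np

        rng = np.random.default_rng(0)

        # Invented tracer fingerprints: rows = sources (topsoil, road verge, subsurface).
        source_means = np.array([[12.0, 3.0, 40.0],
                                 [ 8.0, 9.0, 25.0],
                                 [ 2.0, 1.0, 60.0]])
        mixture = np.array([5.0, 2.5, 52.0])   # measured SPM tracer signature (invented)
        sigma = np.array([1.0, 0.8, 4.0])      # combined measurement/source uncertainty

        # Importance-sampling stand-in for MCMC: draw proportions from the prior and
        # weight each draw by the Gaussian likelihood of the observed mixture.
        props = rng.dirichlet(np.ones(3), size=200_000)
        pred = props @ source_means
        log_lik = -0.5 * np.sum(((mixture - pred) / sigma) ** 2, axis=1)
        w = np.exp(log_lik - log_lik.max()); w /= w.sum()
        posterior_mean = w @ props
        print(dict(zip(["topsoil", "road verge", "subsurface"], posterior_mean.round(2))))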

  18. Innovative Approach for Developing Spacecraft Interior Acoustic Requirement Allocation

    Science.gov (United States)

    Chu, S. Reynold; Dandaroy, Indranil; Allen, Christopher S.

    2016-01-01

    The Orion Multi-Purpose Crew Vehicle (MPCV) is an American spacecraft for carrying four astronauts during deep space missions. This paper describes an innovative application of Power Injection Method (PIM) for allocating Orion cabin continuous noise Sound Pressure Level (SPL) limits to the sound power level (PWL) limits of major noise sources in the Environmental Control and Life Support System (ECLSS) during all mission phases. PIM is simulated using both Statistical Energy Analysis (SEA) and Hybrid Statistical Energy Analysis-Finite Element (SEA-FE) models of the Orion MPCV to obtain the transfer matrix from the PWL of the noise sources to the acoustic energies of the receivers, i.e., the cavities associated with the cabin habitable volume. The goal of the allocation strategy is to control the total energy of cabin habitable volume for maintaining the required SPL limits. Simulations are used to demonstrate that applying the allocated PWLs to the noise sources in the models indeed reproduces the SPL limits in the habitable volume. The effects of Noise Control Treatment (NCT) on allocated noise source PWLs are investigated. The measurement of source PWLs of involved fan and pump development units are also discussed as it is related to some case-specific details of the allocation strategy discussed here.
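
    The allocation step described above reduces to inverting a transfer matrix between source sound powers and receiver acoustic energies obtained from PIM runs on the SEA model; the toy version below uses invented numbers and is not the Orion model.

        import numpy as np
        from scipy.optimize import nnls

        # T[i, j]: acoustic energy in receiver cavity i per unit sound power injected at
        # source j (values invented for illustration).
        T = np.array([[2.0e-4, 1.2e-4, 0.6e-4],
                      [0.9e-4, 1.8e-4, 0.7e-4]])

        # Target cavity energies corresponding to the habitable-volume SPL limits (invented, J).
        E_target = np.array([3.0e-3, 2.5e-3])

        # Allocate non-negative source sound powers W so that T @ W best matches the targets;
        # in practice a margin would then be applied and the result converted to PWL in dB.
        W, _ = nnls(T, E_target)
        print("allocated source powers [W]:", W, " achieved/target:", (T @ W) / E_target)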

  19. Small Spacecraft for Planetary Science

    Science.gov (United States)

    Baker, John; Castillo-Rogez, Julie; Bousquet, Pierre-W.; Vane, Gregg; Komarek, Tomas; Klesh, Andrew

    2016-07-01

    As planetary science continues to explore new and remote regions of the Solar system with comprehensive and more sophisticated payloads, small spacecraft offer the possibility for focused and more affordable science investigations. These small spacecraft, or micro spacecraft, still require capabilities such as attitude control and determination, capable computer and data handling, and navigation, which are being met by technologies currently under development to be flown on CubeSats within the next five years. This paper will discuss how micro spacecraft offer an attractive alternative to accomplish specific science and technology goals and what relevant technologies are needed for these types of spacecraft. Acknowledgements: Part of this work is being carried out at the Jet Propulsion Laboratory, California Institute of Technology under contract to NASA. Government sponsorship acknowledged.

  20. Modeling Degradation Product Partitioning in Chlorinated-DNAPL Source Zones

    Science.gov (United States)

    Boroumand, A.; Ramsburg, A.; Christ, J.; Abriola, L.

    2009-12-01

    Metabolic reductive dechlorination reduces aqueous phase contaminant concentrations, increasing the driving force for DNAPL dissolution. Results from laboratory and field investigations suggest that accumulation of cis-dichloroethene (cis-DCE) and vinyl chloride (VC) may occur within DNAPL source zones. The lack of (or slow) degradation of cis-DCE and VC within bioactive DNAPL source zones may result in these dechlorination products becoming distributed among the solid, aqueous, and organic phases. Partitioning of cis-DCE and VC into the organic phase may reduce aqueous phase concentrations of these contaminants and result in the enrichment of these dechlorination products within the non-aqueous phase. Enrichment of degradation products within DNAPL may reduce some of the advantages associated with the application of bioremediation in DNAPL source zones. Thus, it is important to quantify how partitioning (between the aqueous and organic phases) influences the transport of cis-DCE and VC within bioactive DNAPL source zones. In this work, abiotic two-phase (PCE-water) one-dimensional column experiments are modeled using analytical and numerical methods to examine the rate of partitioning and the capacity of PCE-DNAPL to reversibly sequester cis-DCE. These models consider aqueous-phase, nonaqueous phase, and aqueous plus nonaqueous phase mass transfer resistance using linear driving force and spherical diffusion expressions. Model parameters are examined and compared for different experimental conditions to evaluate the mechanisms controlling partitioning. The Biot number, a dimensionless index of the ratio of the aqueous phase mass transfer rate in the boundary layer to the mass transfer rate within the NAPL, is used to characterize conditions in which either or both processes are controlling. Results show that application of a single aqueous resistance is capable of capturing breakthrough curves when DNAPL is distributed in porous media as low
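
    A small sketch of the linear-driving-force picture described above (all parameter values are assumptions, not fitted to the column experiments): aqueous cis-DCE partitions into residual PCE-DNAPL at a rate set by an aqueous boundary-layer coefficient, and a Biot-like number compares that external film rate with diffusive uptake inside a NAPL blob.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Assumed illustrative parameters
        k_aq = 1.0e-5        # aqueous-side mass transfer coefficient, m/s
        a = 500.0            # specific interfacial area, 1/m
        K_nw = 30.0          # NAPL-water partition coefficient for cis-DCE (-)
        C_aq = 0.5           # aqueous cis-DCE concentration, mol/m^3 (held constant)
        D_n = 1.0e-10        # diffusivity of cis-DCE within the NAPL, m^2/s
        r_blob = 1.0e-3      # representative blob radius, m

        # Biot-like number: external film transfer vs internal NAPL diffusion
        Bi = k_aq * r_blob / D_n
        print("Biot number ~", round(Bi, 1), "(>>1 suggests NAPL-side diffusion controls)")

        # Linear driving force uptake of cis-DCE into the NAPL phase
        def dCn_dt(t, Cn):
            return [k_aq * a * (K_nw * C_aq - Cn[0])]

        sol = solve_ivp(dCn_dt, (0, 5 * 24 * 3600), [0.0], max_step=3600)
        print("NAPL-phase concentration after 5 days:", round(sol.y[0, -1], 2), "mol/m^3")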

  1. Cardiac magnetic source imaging based on current multipole model

    International Nuclear Information System (INIS)

    Tang Fa-Kuan; Wang Qian; Hua Ning; Lu Hong; Tang Xue-Zheng; Ma Ping

    2011-01-01

    It is widely accepted that the heart current source can be reduced to a current multipole. By adopting three linear inverse methods, cardiac magnetic imaging is achieved in this article based on the current multipole model expanded to first order terms. This magnetic imaging is realized in a reconstruction plane in the centre of the human heart, where a current dipole array is employed to represent the realistic cardiac current distribution. The current multipole as testing source generates magnetic fields in the measuring plane, serving as inputs to the cardiac magnetic inverse problem. In the heart-torso model constructed by the boundary element method, the current multipole magnetic field distribution is compared with that in homogeneous infinite space, and also with the single current dipole magnetic field distribution. Then the minimum-norm least-squares (MNLS) method, the optimal weighted pseudoinverse method (OWPIM), and the optimal constrained linear inverse method (OCLIM) are innovatively selected as the algorithms for inverse computation based on the current multipole model, and the imaging effects of these three inverse methods are compared. Besides, two reconstruction parameters, residual and mean residual, are also discussed, and their trends under MNLS, OWPIM and OCLIM, each as a function of SNR, are obtained and compared.
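
    A minimal numerical sketch of the minimum-norm least-squares (MNLS) step referred to above, using a random stand-in lead-field matrix rather than a real heart-torso forward model: the source vector of smallest norm that reproduces the measured magnetic field is given by the Moore-Penrose pseudoinverse.

        import numpy as np

        rng = np.random.default_rng(1)

        # Stand-in forward (lead-field) matrix: 64 magnetometer channels, 30 source
        # parameters (e.g. dipole-array moments on the reconstruction plane)
        L = rng.normal(size=(64, 30))
        q_true = np.zeros(30)
        q_true[[3, 17]] = [2.0, -1.5]                  # sparse "true" source

        b = L @ q_true + 0.01 * rng.normal(size=64)    # measured field with noise

        # Minimum-norm least-squares solution via the pseudoinverse
        q_mnls = np.linalg.pinv(L) @ b

        residual = np.linalg.norm(b - L @ q_mnls) / np.linalg.norm(b)
        print("relative residual:", round(residual, 4))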

  2. A model for managing sources of groundwater pollution

    Science.gov (United States)

    Gorelick, Steven M.

    1982-01-01

    The waste disposal capacity of a groundwater system can be maximized while maintaining water quality at specified locations by using a groundwater pollutant source management model that is based upon linear programing and numerical simulation. The decision variables of the management model are solute waste disposal rates at various facilities distributed over space. A concentration response matrix is used in the management model to describe transient solute transport and is developed using the U.S. Geological Survey solute transport simulation model. The management model was applied to a complex hypothetical groundwater system. Large-scale management models were formulated as dual linear programing problems to reduce numerical difficulties and computation time. Linear programing problems were solved using a numerically stable, available code. Optimal solutions to problems with successively longer management time horizons indicated that disposal schedules at some sites are relatively independent of the number of disposal periods. Optimal waste disposal schedules exhibited pulsing rather than constant disposal rates. Sensitivity analysis using parametric linear programing showed that a sharp reduction in total waste disposal potential occurs if disposal rates at any site are increased beyond their optimal values.
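
    The response-matrix formulation described above can be illustrated with a toy linear program (hypothetical coefficients, and scipy's solver standing in for the code used in the study): maximize total waste disposal over sites subject to concentration limits at observation wells, where the response matrix gives the concentration produced at each well per unit disposal rate.

        import numpy as np
        from scipy.optimize import linprog

        # Hypothetical response matrix R[i, j]: concentration increase (mg/L) at
        # observation well i per unit disposal rate (kg/d) at site/period j.
        R = np.array([[0.08, 0.02, 0.05, 0.01],
                      [0.01, 0.07, 0.02, 0.06],
                      [0.03, 0.03, 0.04, 0.04]])
        c_limit = np.array([5.0, 5.0, 4.0])     # water-quality limits at the wells (mg/L)
        q_max = 100.0                           # per-site upper bound on disposal (kg/d)

        n = R.shape[1]
        # linprog minimizes, so maximize total disposal by minimizing its negative
        res = linprog(c=-np.ones(n), A_ub=R, b_ub=c_limit,
                      bounds=[(0, q_max)] * n, method="highs")
        print("optimal disposal rates:", np.round(res.x, 1), " total:", round(-res.fun, 1))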

  3. Plant model of KIPT neutron source facility simulator

    International Nuclear Information System (INIS)

    Cao, Yan; Wei, Thomas Y.; Grelle, Austin L.; Gohar, Yousry

    2016-01-01

    Argonne National Laboratory (ANL) of the United States and Kharkov Institute of Physics and Technology (KIPT) of Ukraine are collaborating on constructing a neutron source facility at KIPT, Kharkov, Ukraine. The facility has a 100-kW electron beam driving a subcritical assembly (SCA). The electron beam interacts with a natural uranium target or a tungsten target to generate neutrons, and deposits its power in the target zone. The total fission power generated in the SCA is about 300 kW. Two primary cooling loops are designed to remove 100 kW and 300 kW from the target zone and the SCA, respectively. A secondary cooling system is coupled with the primary cooling system to dispose of the generated heat outside the facility buildings to the atmosphere. In addition, the electron accelerator has a low efficiency for generating the electron beam, so another secondary cooling loop is used to remove the generated heat from the accelerator primary cooling loop. One of the main functions of the KIPT neutron source facility is to train young nuclear specialists; therefore, ANL has developed the KIPT Neutron Source Facility Simulator for this function. In this simulator, a Plant Control System and a Plant Protection System were developed to perform proper control and to provide automatic protection against unsafe and improper operation of the facility during the steady-state and the transient states using a facility plant model. This report focuses on describing the physics of the plant model and provides several test cases to demonstrate its capabilities. The plant facility model uses the Python scripting language. It is consistent with the computer language of the plant control system. It is easy to integrate with the simulator without an additional interface, and it is able to simulate the transients of the cooling systems with system control variables changing in real time.
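
    The plant model itself is not reproduced in this record; a minimal sketch in the same spirit (a lumped first-order energy balance with purely illustrative parameters, not KIPT design data) shows how a cooling-loop transient can be stepped in time so that control-variable changes, such as pump speed, take effect in real time.

        # Illustrative lumped model of the 100-kW target primary cooling loop
        # (parameter values are assumptions, not KIPT design data).
        cp = 4180.0          # water specific heat, J/(kg K)
        m = 500.0            # loop water inventory, kg
        T_in = 30.0          # heat-exchanger return temperature, C

        def step(T, beam_on, mdot, dt=1.0):
            """Advance the loop temperature by dt seconds."""
            q_beam = 100e3 if beam_on else 0.0       # deposited beam power, W
            q_removed = mdot * cp * (T - T_in)       # heat removed by forced flow
            return T + dt * (q_beam - q_removed) / (m * cp)

        T = 30.0
        for t in range(0, 1800):                     # 30-minute transient
            mdot = 1.5 if t > 600 else 0.5           # pump flow raised at t = 600 s
            T = step(T, beam_on=True, mdot=mdot)
        print("loop temperature after 30 min:", round(T, 1), "C")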

  4. Plant model of KIPT neutron source facility simulator

    Energy Technology Data Exchange (ETDEWEB)

    Cao, Yan [Argonne National Lab. (ANL), Argonne, IL (United States); Wei, Thomas Y. [Argonne National Lab. (ANL), Argonne, IL (United States); Grelle, Austin L. [Argonne National Lab. (ANL), Argonne, IL (United States); Gohar, Yousry [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-02-01

    Argonne National Laboratory (ANL) of the United States and Kharkov Institute of Physics and Technology (KIPT) of Ukraine are collaborating on constructing a neutron source facility at KIPT, Kharkov, Ukraine. The facility has a 100-kW electron beam driving a subcritical assembly (SCA). The electron beam interacts with a natural uranium target or a tungsten target to generate neutrons, and deposits its power in the target zone. The total fission power generated in the SCA is about 300 kW. Two primary cooling loops are designed to remove 100 kW and 300 kW from the target zone and the SCA, respectively. A secondary cooling system is coupled with the primary cooling system to dispose of the generated heat outside the facility buildings to the atmosphere. In addition, the electron accelerator has a low efficiency for generating the electron beam, so another secondary cooling loop is used to remove the generated heat from the accelerator primary cooling loop. One of the main functions of the KIPT neutron source facility is to train young nuclear specialists; therefore, ANL has developed the KIPT Neutron Source Facility Simulator for this function. In this simulator, a Plant Control System and a Plant Protection System were developed to perform proper control and to provide automatic protection against unsafe and improper operation of the facility during the steady-state and the transient states using a facility plant model. This report focuses on describing the physics of the plant model and provides several test cases to demonstrate its capabilities. The plant facility model uses the Python scripting language. It is consistent with the computer language of the plant control system. It is easy to integrate with the simulator without an additional interface, and it is able to simulate the transients of the cooling systems with system control variables changing in real time.

  5. Sources

    International Nuclear Information System (INIS)

    Duffy, L.P.

    1991-01-01

    This paper discusses the sources of radiation in the narrow perspective of radioactivity, and the even narrower perspective of those sources that concern environmental management and restoration activities at DOE facilities, as well as a few related sources: sources of irritation, sources of inflammatory jingoism, and sources of information. First, the sources of irritation fall into three categories: no reliable scientific ombudsman speaks without bias and prejudice for the public good; technical jargon with unclear definitions exists within the radioactive nomenclature; and the scientific community keeps a low profile with regard to public information. The next area of personal concern is the sources of inflammation. These include such things as: plutonium being described as the most dangerous substance known to man; the amount of plutonium required to make a bomb; talk of transuranic waste containing plutonium and its health effects; TMI-2 and Chernobyl being described as Siamese twins; inadequate information on low-level disposal sites and current regulatory requirements under 10 CFR 61; and enhanced engineered waste disposal not being presented to the public accurately. Finally, there are numerous sources of disinformation regarding low-level and high-level radiation, the elusive nature of the scientific community, the Federal and State health agencies' resources to address comparative risk, and regulatory agencies speaking out without the support of the scientific community.

  6. Bayesian model selection of template forward models for EEG source reconstruction.

    Science.gov (United States)

    Strobbe, Gregor; van Mierlo, Pieter; De Vos, Maarten; Mijović, Bogdan; Hallez, Hans; Van Huffel, Sabine; López, José David; Vandenberghe, Stefaan

    2014-06-01

    Several EEG source reconstruction techniques have been proposed to identify the generating neuronal sources of electrical activity measured on the scalp. The solution of these techniques depends directly on the accuracy of the forward model that is inverted. Recently, a parametric empirical Bayesian (PEB) framework for distributed source reconstruction in EEG/MEG was introduced and implemented in the Statistical Parametric Mapping (SPM) software. The framework allows us to compare different forward modeling approaches, using real data, instead of using more traditional simulated data from an assumed true forward model. In the absence of a subject specific MR image, a 3-layered boundary element method (BEM) template head model is currently used including a scalp, skull and brain compartment. In this study, we introduced volumetric template head models based on the finite difference method (FDM). We constructed a FDM head model equivalent to the BEM model and an extended FDM model including CSF. These models were compared within the context of three different types of source priors related to the type of inversion used in the PEB framework: independent and identically distributed (IID) sources, equivalent to classical minimum norm approaches, coherence (COH) priors similar to methods such as LORETA, and multiple sparse priors (MSP). The resulting models were compared based on ERP data of 20 subjects using Bayesian model selection for group studies. The reconstructed activity was also compared with the findings of previous studies using functional magnetic resonance imaging. We found very strong evidence in favor of the extended FDM head model with CSF and assuming MSP. These results suggest that the use of realistic volumetric forward models can improve PEB EEG source reconstruction. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Vibration Antiresonance Design for a Spacecraft Multifunctional Structure

    OpenAIRE

    Li, Dong-Xu; Liu, Wang; Hao, Dong

    2017-01-01

    Spacecraft must withstand rigorous mechanical environments, such as acceleration, noise, vibration, and shock, during launch, satellite-vehicle separation, and so on. In this paper, a new spacecraft multifunctional structure concept designed by us is introduced. The multifunctional structure has the functions of not only load bearing, but also vibration reduction, energy source, thermal control, and so on, and we adopt a series of viscoelastic parts as connections b...

  8. Open-source Software for Exoplanet Atmospheric Modeling

    Science.gov (United States)

    Cubillos, Patricio; Blecic, Jasmina; Harrington, Joseph

    2018-01-01

    I will present a suite of self-standing open-source tools to model and retrieve exoplanet spectra implemented for Python. These include: (1) a Bayesian-statistical package to run Levenberg-Marquardt optimization and Markov-chain Monte Carlo posterior sampling, (2) a package to compress line-transition data from HITRAN or Exomol without loss of information, (3) a package to compute partition functions for HITRAN molecules, (4) a package to compute collision-induced absorption, and (5) a package to produce radiative-transfer spectra of transit and eclipse exoplanet observations and atmospheric retrievals.

  9. Printed Spacecraft Separation System

    Energy Technology Data Exchange (ETDEWEB)

    Dehoff, Ryan R [ORNL; Holmans, Walter [Planetary Systems Corporation

    2016-10-01

    In this project Planetary Systems Corporation proposed utilizing additive manufacturing (3D printing) to manufacture a titanium spacecraft separation system for commercial and US government customers to realize a 90% reduction in the cost and energy. These savings were demonstrated via “printing-in” many of the parts and sub-assemblies into one part, thus greatly reducing the labor associated with design, procurement, assembly and calibration of mechanisms. Planetary Systems Corporation redesigned several of the components of the separation system based on additive manufacturing principles including geometric flexibility and the ability to fabricate complex designs, ability to combine multiple parts of an assembly into a single component, and the ability to optimize design for specific mechanical property targets. Shock absorption was specifically targeted and requirements were established to attenuate damage to the Lightband system from shock of initiation. Planetary Systems Corporation redesigned components based on these requirements and sent the designs to Oak Ridge National Laboratory to be printed. ORNL printed the parts using the Arcam electron beam melting technology based on the desire for the parts to be fabricated from Ti-6Al-4V based on the weight and mechanical performance of the material. A second set of components was fabricated from stainless steel material on the Renishaw laser powder bed technology due to the improved geometric accuracy, surface finish, and wear resistance of the material. Planetary Systems Corporation evaluated these components and determined that 3D printing is potentially a viable method for achieving significant cost and savings metrics.

  10. Spectra and spacecraft

    Science.gov (United States)

    Moroz, V. I.

    2001-02-01

    In June 1999, Dr. Regis Courtin, Associate Editor of PSS, suggested that I write an article for the new section of this journal: "Planetary Pioneers". I hesitated, but decided to try. One of the reasons for my doubts was my primitive English, so I owe the reader an apology for this in advance. Writing took me much more time than I initially supposed; I stopped and returned to the manuscript many times. My professional life may be divided into three main phases: pioneering work in ground-based IR astronomy with an emphasis on planetary spectroscopy (1955-1970), studies of the planets with spacecraft (1970-1989), and attempts to proceed with this work in difficult times. I moved ahead using the well-known method of trial and error, as most of us do. In fact, only a small percentage of my efforts led to important results, a sort of dry residue. I will try to describe below how it has been in my case: what may be regarded as the most important, how I came to it, what was around, etc.

  11. Research-Based Monitoring, Prediction, and Analysis Tools of the Spacecraft Charging Environment for Spacecraft Users

    Science.gov (United States)

    Zheng, Yihua; Kuznetsova, Maria M.; Pulkkinen, Antti A.; Maddox, Marlo M.; Mays, Mona Leila

    2015-01-01

    The Space Weather Research Center (http://swrc. gsfc.nasa.gov) at NASA Goddard, part of the Community Coordinated Modeling Center (http://ccmc.gsfc.nasa.gov), is committed to providing research-based forecasts and notifications to address NASA's space weather needs, in addition to its critical role in space weather education. It provides a host of services including spacecraft anomaly resolution, historical impact analysis, real-time monitoring and forecasting, tailored space weather alerts and products, and weekly summaries and reports. In this paper, we focus on how (near) real-time data (both in space and on ground), in combination with modeling capabilities and an innovative dissemination system called the integrated Space Weather Analysis system (http://iswa.gsfc.nasa.gov), enable monitoring, analyzing, and predicting the spacecraft charging environment for spacecraft users. Relevant tools and resources are discussed.

  12. Improved techniques for predicting spacecraft power

    International Nuclear Information System (INIS)

    Chmielewski, A.B.

    1987-01-01

    Radioisotope Thermoelectric Generators (RTGs) are going to supply power for the NASA Galileo and Ulysses spacecraft now scheduled to be launched in 1989 and 1990. The duration of the Galileo mission is expected to be over 8 years. This brings the total RTG lifetime to 13 years. In 13 years, the RTG power drops more than 20 percent leaving a very small power margin over what is consumed by the spacecraft. Thus it is very important to accurately predict the RTG performance and be able to assess the magnitude of errors involved. The paper lists all the error sources involved in the RTG power predictions and describes a statistical method for calculating the tolerance
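
    A hedged sketch of the kind of statistical tolerance calculation described (all values illustrative, not Galileo or Ulysses RTG data, and the simple decay-plus-degradation form is an assumption): the power prediction combines Pu-238 fuel decay with a thermoelectric degradation term, and the uncertain inputs are sampled to obtain a prediction interval.

        import numpy as np

        rng = np.random.default_rng(42)
        t = 13.0                                    # total RTG lifetime, years

        def rtg_power(P0, halflife, k_deg):
            """Fuel decay times an assumed exponential thermoelectric degradation."""
            return P0 * 0.5 ** (t / halflife) * np.exp(-k_deg * t)

        # Sample uncertain inputs (illustrative means and spreads)
        P0 = rng.normal(285.0, 3.0, 100_000)        # beginning-of-life power, W
        halflife = rng.normal(87.7, 0.2, 100_000)   # Pu-238 half-life, years
        k_deg = rng.normal(0.010, 0.002, 100_000)   # converter degradation rate, 1/year

        P = rtg_power(P0, halflife, k_deg)
        lo, hi = np.percentile(P, [2.5, 97.5])
        print(f"predicted power at 13 yr: {P.mean():.0f} W (95% interval {lo:.0f}-{hi:.0f} W)")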

  13. Dissertation Defense Computational Fluid Dynamics Uncertainty Analysis for Payload Fairing Spacecraft Environmental Control Systems

    Science.gov (United States)

    Groves, Curtis Edward

    2014-01-01

    Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available and open source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT, STARCCM+, and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in "Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations". This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics

  14. Dissertation Defense: Computational Fluid Dynamics Uncertainty Analysis for Payload Fairing Spacecraft Environmental Control Systems

    Science.gov (United States)

    Groves, Curtis Edward

    2014-01-01

    Spacecraft thermal protection systems are at risk of being damaged due to airflow produced from Environmental Control Systems. There are inherent uncertainties and errors associated with using Computational Fluid Dynamics to predict the airflow field around a spacecraft from the Environmental Control System. This paper describes an approach to quantify the uncertainty in using Computational Fluid Dynamics to predict airflow speeds around an encapsulated spacecraft without the use of test data. Quantifying the uncertainty in analytical predictions is imperative to the success of any simulation-based product. The method could provide an alternative to the traditional "validation by test only" mentality. This method could be extended to other disciplines and has potential to provide uncertainty for any numerical simulation, thus lowering the cost of performing these verifications while increasing the confidence in those predictions. Spacecraft requirements can include a maximum airflow speed to protect delicate instruments during ground processing. Computational Fluid Dynamics can be used to verify these requirements; however, the model must be validated by test data. This research includes the following three objectives and methods. Objective one is to develop, model, and perform a Computational Fluid Dynamics analysis of three (3) generic, non-proprietary, environmental control systems and spacecraft configurations. Several commercially available and open source solvers have the capability to model the turbulent, highly three-dimensional, incompressible flow regime. The proposed method uses FLUENT, STARCCM+, and OPENFOAM. Objective two is to perform an uncertainty analysis of the Computational Fluid Dynamics model using the methodology found in Comprehensive Approach to Verification and Validation of Computational Fluid Dynamics Simulations. This method requires three separate grids and solutions, which quantify the error bars around Computational Fluid Dynamics predictions.
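
    As a sketch of the three-grid uncertainty estimate referred to above (illustrative airflow speeds, not results from the dissertation, and the exact methodology cited there may differ in detail), Richardson extrapolation over fine, medium, and coarse solutions yields an observed order of accuracy and a grid convergence index (GCI) that serves as the numerical error bar.

        import numpy as np

        # Peak airflow speed from three systematically refined grids (illustrative values)
        f_fine, f_med, f_coarse = 9.62, 9.75, 10.10   # m/s
        r = 2.0                                       # grid refinement ratio
        Fs = 1.25                                     # safety factor for three-grid studies

        # Observed order of accuracy from the three solutions
        p = np.log((f_coarse - f_med) / (f_med - f_fine)) / np.log(r)

        # Richardson-extrapolated value and grid convergence index on the fine grid
        f_exact = f_fine + (f_fine - f_med) / (r**p - 1)
        gci_fine = Fs * abs((f_fine - f_med) / f_fine) / (r**p - 1)

        print(f"observed order p = {p:.2f}")
        print(f"extrapolated speed = {f_exact:.2f} m/s, GCI = {100 * gci_fine:.1f}%")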

  15. Software for Engineering Simulations of a Spacecraft

    Science.gov (United States)

    Shireman, Kirk; McSwain, Gene; McCormick, Bernell; Fardelos, Panayiotis

    2005-01-01

    Spacecraft Engineering Simulation II (SES II) is a C-language computer program for simulating diverse aspects of operation of a spacecraft characterized by either three or six degrees of freedom. A functional model in SES can include a trajectory flight plan; a submodel of a flight computer running navigational and flight-control software; and submodels of the environment, the dynamics of the spacecraft, and sensor inputs and outputs. SES II features a modular, object-oriented programming style. SES II supports event-based simulations, which, in turn, create an easily adaptable simulation environment in which many different types of trajectories can be simulated by use of the same software. The simulation output consists largely of flight data. SES II can be used to perform optimization and Monte Carlo dispersion simulations. It can also be used to perform simulations for multiple spacecraft. In addition to its generic simulation capabilities, SES offers special capabilities for space-shuttle simulations: for this purpose, it incorporates submodels of the space-shuttle dynamics and a C-language version of the guidance, navigation, and control components of the space-shuttle flight software.

  16. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    Energy Technology Data Exchange (ETDEWEB)

    Ma, Denglong [Fuli School of Food Equipment Engineering and Science, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); Zhang, Zaoxiao, E-mail: zhangzx@mail.xjtu.edu.cn [State Key Laboratory of Multiphase Flow in Power Engineering, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China); School of Chemical Engineering and Technology, Xi’an Jiaotong University, No.28 Xianning West Road, Xi’an 710049 (China)

    2016-07-05

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with the Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new model was applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models, which take many inputs based on the original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identify the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on the original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method to predict contaminant gas dispersion as well as a good forward model for the emission source parameter identification problem.
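
    A sketch of the hybrid idea (synthetic data, a simplified plume expression, and scikit-learn's SVR standing in for the paper's Gaussian-SVM formulation, which may differ): a classic Gaussian plume prediction is used as an input feature for a support vector regression that learns the mapping to measured concentrations.

        import numpy as np
        from sklearn.svm import SVR

        def gaussian_plume(x, y, q=1.0, u=2.0, h=1.0):
            """Simplified ground-level concentration from a continuous point source."""
            sy, sz = 0.08 * x, 0.06 * x                       # assumed dispersion growth
            return (q / (2 * np.pi * u * sy * sz)
                    * np.exp(-y**2 / (2 * sy**2)) * np.exp(-h**2 / (2 * sz**2)))

        rng = np.random.default_rng(3)
        x = rng.uniform(20, 200, 200)
        y = rng.uniform(-30, 30, 200)
        c_model = gaussian_plume(x, y)
        # Synthetic "measurements": plume prediction distorted by an unknown factor plus noise
        c_meas = c_model * (1.3 + 0.002 * x) + rng.normal(0, 1e-5, 200)

        # Gaussian-SVM hybrid: learn measured concentration from plume prediction and location
        X = np.column_stack([c_model, x, y])
        svr = SVR(kernel="rbf", C=10.0, epsilon=1e-6).fit(X, c_meas)
        print("hybrid prediction at (100, 0):",
              svr.predict([[gaussian_plume(100.0, 0.0), 100.0, 0.0]])[0])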

  17. Contaminant dispersion prediction and source estimation with integrated Gaussian-machine learning network model for point source emission in atmosphere

    International Nuclear Information System (INIS)

    Ma, Denglong; Zhang, Zaoxiao

    2016-01-01

    Highlights: • Intelligent network models were built to predict contaminant gas concentrations. • Improved network models coupled with the Gaussian dispersion model are presented. • The new model has high efficiency and accuracy for concentration prediction. • The new model was applied to identify the leakage source with satisfactory results. - Abstract: A gas dispersion model is important for predicting gas concentrations when a contaminant gas leakage occurs. Intelligent network models such as radial basis function (RBF), back propagation (BP) neural network and support vector machine (SVM) models can be used for gas dispersion prediction. However, the prediction results from these network models, which take many inputs based on the original monitoring parameters, are not in good agreement with the experimental data. Therefore, a new series of machine learning algorithm (MLA) models, combining the classic Gaussian model with MLA algorithms, is presented. The prediction results from the new models are greatly improved. Among these models, the Gaussian-SVM model performs best and its computation time is close to that of the classic Gaussian dispersion model. Finally, the Gaussian-MLA models were applied to identify the emission source parameters with the particle swarm optimization (PSO) method. The estimation performance of PSO with Gaussian-MLA is better than that with the Gaussian model, the Lagrangian stochastic (LS) dispersion model, and network models based on the original monitoring parameters. Hence, the new prediction model based on Gaussian-MLA is potentially a good method to predict contaminant gas dispersion as well as a good forward model for the emission source parameter identification problem.

  18. A source-controlled data center network model.

    Science.gov (United States)

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center networks using SDN technology has become a hot research topic. The SDN architecture innovatively separates the control plane from the data plane, which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective resource scheduling and centralized control strategies to meet the demands of cloud computing data centers. However, the explosion of network information poses severe challenges for the SDN controller. The flow storage and lookup mechanisms based on TCAM devices lead to restricted scalability, high cost and high energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address named the vector address (VA) as the packet-switching label. The VA completely defines the communication path, and the data forwarding process can be completed relying solely on the VA. There are four advantages to the SCDCN architecture. 1) The model adopts hierarchical multi-controllers and abstracts the large-scale data center network into small network domains, which removes the processing-capacity restriction of a single controller and reduces the computational complexity. 2) Vector switches (VS) deployed in the core network no longer use TCAM for table storage and lookup, which significantly cuts down the cost and complexity of the switches; meanwhile, the scalability problem can be solved effectively. 3) The SCDCN model simplifies the establishment process for new flows, and there is no need to download flow tables to the VS, so the amount of control signaling consumed when establishing new flows can be significantly decreased. 4) We design the VS on the NetFPGA platform. The statistical results show that the hardware resource consumption in a VS is about 27% of that in an OFS.
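
    A toy sketch of the source-routing idea (purely illustrative; the paper's actual VA encoding and NetFPGA implementation are not reproduced here): the controller computes the full path once and writes it into the packet as a sequence of output ports, so each vector switch forwards by consuming the next entry instead of consulting a flow table.

        from dataclasses import dataclass

        @dataclass
        class Packet:
            payload: str
            vector_address: list  # ordered output ports, one per hop (set by the controller)
            hop: int = 0

        class VectorSwitch:
            """Forwards purely on the vector address; no TCAM flow table is consulted."""
            def __init__(self, name, ports):
                self.name = name
                self.ports = ports        # port number -> next VectorSwitch or host name

            def forward(self, pkt):
                out_port = pkt.vector_address[pkt.hop]
                pkt.hop += 1
                nxt = self.ports[out_port]
                label = nxt if isinstance(nxt, str) else nxt.name
                print(f"{self.name}: out port {out_port} -> {label}")
                if isinstance(nxt, str):
                    return f"delivered to {nxt}"
                return nxt.forward(pkt)

        # Tiny two-level topology: edge -> core -> edge -> host
        edge2 = VectorSwitch("edge2", {1: "host-B"})
        core1 = VectorSwitch("core1", {3: edge2})
        edge1 = VectorSwitch("edge1", {2: core1})

        # Controller-computed vector address for the path edge1 -> core1 -> edge2 -> host-B
        print(edge1.forward(Packet("hello", vector_address=[2, 3, 1])))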

  19. Source characterization and dynamic fault modeling of induced seismicity

    Science.gov (United States)

    Lui, S. K. Y.; Young, R. P.

    2017-12-01

    In recent years there are increasing concerns worldwide that industrial activities in the sub-surface can cause or trigger damaging earthquakes. In order to effectively mitigate the damaging effects of induced seismicity, the key is to better understand the source physics of induced earthquakes, which still remain elusive at present. Furthermore, an improved understanding of induced earthquake physics is pivotal to assess large-magnitude earthquake triggering. A better quantification of the possible causes of induced earthquakes can be achieved through numerical simulations. The fault model used in this study is governed by the empirically-derived rate-and-state friction laws, featuring a velocity-weakening (VW) patch embedded into a large velocity-strengthening (VS) region. Outside of that, the fault is slipping at the background loading rate. The model is fully dynamic, with all wave effects resolved, and is able to resolve spontaneous long-term slip history on a fault segment at all stages of seismic cycles. An earlier study using this model has established that aseismic slip plays a major role in the triggering of small repeating earthquakes. This study presents a series of cases with earthquakes occurring on faults with different fault frictional properties and fluid-induced stress perturbations. The effects to both the overall seismicity rate and fault slip behavior are investigated, and the causal relationship between the pre-slip pattern prior to the event and the induced source characteristics is discussed. Based on simulation results, the subsequent step is to select specific cases for laboratory experiments which allow well controlled variables and fault parameters. Ultimately, the aim is to provide better constraints on important parameters for induced earthquakes based on numerical modeling and laboratory data, and hence to contribute to a physics-based induced earthquake hazard assessment.
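
    For reference, a minimal sketch of the rate-and-state friction law that underlies such fault models (generic laboratory-scale parameters, not the values used in this study): friction depends on slip velocity and a state variable that evolves with the aging law, and a velocity-weakening (VW) patch simply has a - b < 0.

        import numpy as np

        # Generic rate-and-state (Dieterich aging-law) parameters, illustrative values
        mu0, V0 = 0.6, 1e-6        # reference friction and slip rate (m/s)
        a, b = 0.010, 0.015        # a - b < 0 -> velocity weakening (seismogenic patch)
        Dc = 1e-4                  # characteristic slip distance, m

        def friction(V, theta):
            return mu0 + a * np.log(V / V0) + b * np.log(V0 * theta / Dc)

        def dtheta_dt(V, theta):
            return 1.0 - V * theta / Dc            # aging law

        # Response to a step in slip rate from V0 to 10*V0
        V, theta, dt = 10 * V0, Dc / V0, 1.0
        for _ in range(2000):
            theta += dt * dtheta_dt(V, theta)
        print("steady-state friction at 10*V0:", round(friction(V, theta), 4))
        print("expected mu0 + (a-b)*ln(10):   ", round(mu0 + (a - b) * np.log(10), 4))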

  20. A source-controlled data center network model

    Science.gov (United States)

    Yu, Yang; Liang, Mangui; Wang, Zhe

    2017-01-01

    The construction of data center network by applying SDN technology has become a hot research topic. The SDN architecture has innovatively separated the control plane from the data plane which makes the network more software-oriented and agile. Moreover, it provides virtual multi-tenancy, effective scheduling resources and centralized control strategies to meet the demand for cloud computing data center. However, the explosion of network information is facing severe challenges for SDN controller. The flow storage and lookup mechanisms based on TCAM device have led to the restriction of scalability, high cost and energy consumption. In view of this, a source-controlled data center network (SCDCN) model is proposed herein. The SCDCN model applies a new type of source routing address named the vector address (VA) as the packet-switching label. The VA completely defines the communication path and the data forwarding process can be finished solely relying on VA. There are four advantages in the SCDCN architecture. 1) The model adopts hierarchical multi-controllers and abstracts large-scale data center network into some small network domains that has solved the restriction for the processing ability of single controller and reduced the computational complexity. 2) Vector switches (VS) developed in the core network no longer apply TCAM for table storage and lookup that has significantly cut down the cost and complexity for switches. Meanwhile, the problem of scalability can be solved effectively. 3) The SCDCN model simplifies the establishment process for new flows and there is no need to download flow tables to VS. The amount of control signaling consumed when establishing new flows can be significantly decreased. 4) We design the VS on the NetFPGA platform. The statistical results show that the hardware resource consumption in a VS is about 27% of that in an OFS. PMID:28328925

  1. A Model for the Sources of the Slow Solar Wind

    Science.gov (United States)

    Antiochos, S. K.; Mikic, Z.; Titov, V. S.; Lionello, R.; Linker, J. A.

    2011-01-01

    Models for the origin of the slow solar wind must account for two seemingly contradictory observations: the slow wind has the composition of the closed-field corona, implying that it originates from the continuous opening and closing of flux at the boundary between open and closed field. On the other hand, the slow wind also has large angular width, up to approximately 60°, suggesting that its source extends far from the open-closed boundary. We propose a model that can explain both observations. The key idea is that the source of the slow wind at the Sun is a network of narrow (possibly singular) open-field corridors that map to a web of separatrices and quasi-separatrix layers in the heliosphere. We compute analytically the topology of an open-field corridor and show that it produces a quasi-separatrix layer in the heliosphere that extends to angles far from the heliospheric current sheet. We then use an MHD code and MDI/SOHO observations of the photospheric magnetic field to calculate numerically, with high spatial resolution, the quasi-steady solar wind, and magnetic field for a time period preceding the 2008 August 1 total solar eclipse. Our numerical results imply that, at least for this time period, a web of separatrices (which we term an S-web) forms with sufficient density and extent in the heliosphere to account for the observed properties of the slow wind. We discuss the implications of our S-web model for the structure and dynamics of the corona and heliosphere and propose further tests of the model. Key words: solar wind - Sun: corona - Sun: magnetic topology

  2. A Model for the Sources of the Slow Solar Wind

    Science.gov (United States)

    Antiochos, S. K.; Mikić, Z.; Titov, V. S.; Lionello, R.; Linker, J. A.

    2011-04-01

    Models for the origin of the slow solar wind must account for two seemingly contradictory observations: the slow wind has the composition of the closed-field corona, implying that it originates from the continuous opening and closing of flux at the boundary between open and closed field. On the other hand, the slow wind also has large angular width, up to ~60°, suggesting that its source extends far from the open-closed boundary. We propose a model that can explain both observations. The key idea is that the source of the slow wind at the Sun is a network of narrow (possibly singular) open-field corridors that map to a web of separatrices and quasi-separatrix layers in the heliosphere. We compute analytically the topology of an open-field corridor and show that it produces a quasi-separatrix layer in the heliosphere that extends to angles far from the heliospheric current sheet. We then use an MHD code and MDI/SOHO observations of the photospheric magnetic field to calculate numerically, with high spatial resolution, the quasi-steady solar wind, and magnetic field for a time period preceding the 2008 August 1 total solar eclipse. Our numerical results imply that, at least for this time period, a web of separatrices (which we term an S-web) forms with sufficient density and extent in the heliosphere to account for the observed properties of the slow wind. We discuss the implications of our S-web model for the structure and dynamics of the corona and heliosphere and propose further tests of the model.

  3. Source modelling at the dawn of gravitational-wave astronomy

    Science.gov (United States)

    Gerosa, Davide

    2016-09-01

    The age of gravitational-wave astronomy has begun. Gravitational waves are propagating spacetime perturbations ("ripples in the fabric of space-time") predicted by Einstein's theory of General Relativity. These signals propagate at the speed of light and are generated by powerful astrophysical events, such as the merger of two black holes and supernova explosions. The first detection of gravitational waves was performed in 2015 with the LIGO interferometers. This constitutes a tremendous breakthrough in fundamental physics and astronomy: it is not only the first direct detection of such elusive signals, but also the first irrefutable observation of a black-hole binary system. The future of gravitational-wave astronomy is bright and loud: the LIGO experiments will soon be joined by a network of ground-based interferometers; the space mission eLISA has now been fully approved by the European Space Agency with a proof-of-concept mission called LISA Pathfinder launched in 2015. Gravitational-wave observations will provide unprecedented tests of gravity as well as a qualitatively new window on the Universe. Careful theoretical modelling of the astrophysical sources of gravitational-waves is crucial to maximize the scientific outcome of the detectors. In this Thesis, we present several advances on gravitational-wave source modelling, studying in particular: (i) the precessional dynamics of spinning black-hole binaries; (ii) the astrophysical consequences of black-hole recoils; and (iii) the formation of compact objects in the framework of scalar-tensor theories of gravity. All these phenomena are deeply characterized by a continuous interplay between General Relativity and astrophysics: despite being a truly relativistic messenger, gravitational waves encode details of the astrophysical formation and evolution processes of their sources. We work out signatures and predictions to extract such information from current and future observations. At the dawn of a revolutionary

  4. Self-consistent modeling of electron cyclotron resonance ion sources

    International Nuclear Information System (INIS)

    Girard, A.; Hitz, D.; Melin, G.; Serebrennikov, K.; Lecot, C.

    2004-01-01

    In order to predict the performance of electron cyclotron resonance ion sources (ECRIS), it is necessary to perfectly model the different parts of these sources: (i) the magnetic configuration; (ii) the plasma characteristics; (iii) the extraction system. The magnetic configuration is easily calculated via commercial codes; different codes also simulate the ion extraction, either in two dimensions or even in three dimensions (to take into account the shape of the plasma at the extraction, influenced by the hexapole). However, the characteristics of the plasma are not always mastered. This article describes the self-consistent modeling of ECRIS: we have developed a code which takes into account the most important construction parameters: the size of the plasma (length, diameter), the mirror ratio and axial magnetic profile, and whether a biased probe is installed or not. These input parameters are used to feed a self-consistent code, which calculates the characteristics of the plasma: electron density and energy, charge state distribution, and plasma potential. The code is briefly described, and some of its most interesting results are presented. Comparisons are made between the calculations and the results obtained experimentally

  5. Self-consistent modeling of electron cyclotron resonance ion sources

    Science.gov (United States)

    Girard, A.; Hitz, D.; Melin, G.; Serebrennikov, K.; Lécot, C.

    2004-05-01

    In order to predict the performance of electron cyclotron resonance ion sources (ECRIS), it is necessary to perfectly model the different parts of these sources: (i) the magnetic configuration; (ii) the plasma characteristics; (iii) the extraction system. The magnetic configuration is easily calculated via commercial codes; different codes also simulate the ion extraction, either in two dimensions or even in three dimensions (to take into account the shape of the plasma at the extraction, influenced by the hexapole). However, the characteristics of the plasma are not always mastered. This article describes the self-consistent modeling of ECRIS: we have developed a code which takes into account the most important construction parameters: the size of the plasma (length, diameter), the mirror ratio and axial magnetic profile, and whether a biased probe is installed or not. These input parameters are used to feed a self-consistent code, which calculates the characteristics of the plasma: electron density and energy, charge state distribution, and plasma potential. The code is briefly described, and some of its most interesting results are presented. Comparisons are made between the calculations and the results obtained experimentally.

  6. Modeling and simulation of RF photoinjectors for coherent light sources

    Science.gov (United States)

    Chen, Y.; Krasilnikov, M.; Stephan, F.; Gjonaj, E.; Weiland, T.; Dohlus, M.

    2018-05-01

    We propose a three-dimensional fully electromagnetic numerical approach for the simulation of RF photoinjectors for coherent light sources. The basic idea consists in incorporating a self-consistent photoemission model within a particle tracking code. The generation of electron beams in the injector is determined by the quantum efficiency (QE) of the cathode, the intensity profile of the driving laser as well as by the accelerating field and magnetic focusing conditions in the gun. The total charge emitted during an emission cycle can be limited by the space charge field at the cathode. Furthermore, the time and space dependent electromagnetic field at the cathode may induce a transient modulation of the QE due to surface barrier reduction of the emitting layer. In our modeling approach, all these effects are taken into account. The beam particles are generated dynamically according to the local QE of the cathode and the time dependent laser intensity profile. For the beam dynamics, a tracking code based on the Lienard-Wiechert retarded field formalism is employed. This code provides the single particle trajectories as well as the transient space charge field distribution at the cathode. As an application, the PITZ injector is considered. Extensive electron bunch emission simulations are carried out for different operation conditions of the injector, in the source limited as well as in the space charge limited emission regime. In both cases, fairly good agreement between measurements and simulations is obtained.
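
    A small sketch of one element of such an emission model (illustrative Cs2Te-like cathode numbers and field values, not the PITZ parameters, and a simple quadratic QE scaling assumed): the RF field at the cathode lowers the effective emission barrier via the Schottky effect, so the QE is modulated over the emission phase window.

        import numpy as np

        e = 1.602e-19        # elementary charge, C
        eps0 = 8.854e-12     # vacuum permittivity, F/m

        # Illustrative cathode and gun-field numbers (assumptions)
        phi_eff = 3.5        # effective emission barrier, eV
        h_nu = 4.8           # photon energy of the UV drive laser, eV
        E0 = 40e6            # peak accelerating field at the cathode, V/m

        def qe(E_cathode, qe0=5e-3):
            """QE with Schottky barrier lowering; a (h*nu - phi)^2 scaling is assumed."""
            # Schottky lowering in eV: sqrt(e * E / (4 * pi * eps0))
            dphi = np.sqrt(e * np.maximum(E_cathode, 0.0) / (4 * np.pi * eps0))
            return qe0 * ((h_nu - phi_eff + dphi) / (h_nu - phi_eff)) ** 2

        # QE modulation across an RF emission phase window
        phase = np.linspace(20, 60, 5)                    # degrees
        E_at_emission = E0 * np.sin(np.radians(phase))
        for ph, E_c, q in zip(phase, E_at_emission, qe(E_at_emission)):
            print(f"phase {ph:4.1f} deg  E = {E_c/1e6:5.1f} MV/m  QE = {q:.2e}")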

  7. Towards a Unified Source-Propagation Model of Cosmic Rays

    Science.gov (United States)

    Taylor, M.; Molla, M.

    2010-07-01

    It is well known that the cosmic ray energy spectrum is multifractal with the analysis of cosmic ray fluxes as a function of energy revealing a first “knee” slightly below 1016 eV, a second knee slightly below 1018 eV and an “ankle” close to 1019 eV. The behaviour of the highest energy cosmic rays around and above the ankle is still a mystery and precludes the development of a unified source-propagation model of cosmic rays from their source origin to Earth. A variety of acceleration and propagation mechanisms have been proposed to explain different parts of the spectrum the most famous of course being Fermi acceleration in magnetised turbulent plasmas (Fermi 1949). Many others have been proposd for energies at and below the first knee (Peters & Cimento (1961); Lagage & Cesarsky (1983); Drury et al. (1984); Wdowczyk & Wolfendale (1984); Ptuskin et al. (1993); Dova et al. (0000); Horandel et al. (2002); Axford (1991)) as well as at higher energies between the first knee and the ankle (Nagano & Watson (2000); Bhattacharjee & Sigl (2000); Malkov & Drury (2001)). The recent fit of most of the cosmic ray spectrum up to the ankle using non-extensive statistical mechanics (NESM) (Tsallis et al. (2003)) provides what may be the strongest evidence for a source-propagation system deviating significantly from Boltmann statistics. As Tsallis has shown (Tsallis et al. (2003)), the knees appear as crossovers between two fractal-like thermal regimes. In this work, we have developed a generalisation of the second order NESM model (Tsallis et al. (2003)) to higher orders and we have fit the complete spectrum including the ankle with third order NESM. We find that, towards the GDZ limit, a new mechanism comes into play. Surprisingly it also presents as a modulation akin to that in our own local neighbourhood of cosmic rays emitted by the sun. We propose that this is due to modulation at the source and is possibly due to processes in the shell of the originating supernova. We
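
    As an illustration of the q-exponential (NESM) flux form referred to above (parameters are made up for the sketch; fitting the real spectrum, including the knees and ankle as crossovers, requires the measured flux data and the higher-order model developed in the paper):

        import numpy as np

        def q_exp(x, q):
            """Tsallis q-exponential: [1 + (1 - q) * x]^(1 / (1 - q))."""
            return np.power(1.0 + (1.0 - q) * x, 1.0 / (1.0 - q))

        def nesm_flux(E, A, T, q):
            """NESM-type differential flux; asymptotic power-law index is -1/(q - 1)."""
            return A * q_exp(-E / T, q)

        # Illustrative parameters only (not a fit): q = 1.37 gives an asymptotic
        # slope of about -2.7, typical of the spectrum below the first knee.
        E = np.logspace(11, 15, 5)           # eV
        for Ei in E:
            print(f"E = {Ei:.1e} eV   flux ~ {nesm_flux(Ei, A=1.0, T=1e9, q=1.37):.3e}")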

  8. Modelling RF sources using 2-D PIC codes

    Energy Technology Data Exchange (ETDEWEB)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWT'S, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  9. Modelling RF sources using 2-D PIC codes

    Energy Technology Data Exchange (ETDEWEB)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWT'S, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation.

  10. Modelling RF sources using 2-D PIC codes

    International Nuclear Information System (INIS)

    Eppley, K.R.

    1993-03-01

    In recent years, many types of RF sources have been successfully modelled using 2-D PIC codes. Both cross field devices (magnetrons, cross field amplifiers, etc.) and pencil beam devices (klystrons, gyrotrons, TWT'S, lasertrons, etc.) have been simulated. All these devices involve the interaction of an electron beam with an RF circuit. For many applications, the RF structure may be approximated by an equivalent circuit, which appears in the simulation as a boundary condition on the electric field ("port approximation"). The drive term for the circuit is calculated from the energy transfer between beam and field in the drift space. For some applications it may be necessary to model the actual geometry of the structure, although this is more expensive. One problem not entirely solved is how to accurately model in 2-D the coupling to an external waveguide. Frequently this is approximated by a radial transmission line, but this sometimes yields incorrect results. We also discuss issues in modelling the cathode and injecting the beam into the PIC simulation

  11. Fast temperature optimization of multi-source hyperthermia applicators with reduced-order modeling of 'virtual sources'

    International Nuclear Information System (INIS)

    Cheng, K-S; Stakhursky, Vadim; Craciunescu, Oana I; Stauffer, Paul; Dewhirst, Mark; Das, Shiva K

    2008-01-01

    The goal of this work is to build the foundation for facilitating real-time magnetic resonance image guided patient treatment for heating systems with a large number of physical sources (e.g. antennas). Achieving this goal requires knowledge of how the temperature distribution will be affected by changing each source individually, which requires time expenditure on the order of the square of the number of sources. To reduce computation time, we propose a model reduction approach that combines a smaller number of predefined source configurations (fewer than the number of actual sources) that are most likely to heat tumor. The source configurations consist of magnitude and phase source excitation values for each actual source and may be computed from a CT scan based plan or a simplified generic model of the corresponding patient anatomy. Each pre-calculated source configuration is considered a 'virtual source'. We assume that the actual best source settings can be represented effectively as weighted combinations of the virtual sources. In the context of optimization, each source configuration is treated equivalently to one physical source. This model reduction approach is tested on a patient upper-leg tumor model (with and without temperature-dependent perfusion), heated using a 140 MHz ten-antenna cylindrical mini-annular phased array. Numerical simulations demonstrate that using only a few pre-defined source configurations can achieve temperature distributions that are comparable to those from full optimizations using all physical sources. The method yields close to optimal temperature distributions when using source configurations determined from a simplified model of the tumor, even when tumor position is erroneously assumed to be ∼2.0 cm away from the actual position as often happens in practical clinical application of pre-treatment planning. The method also appears to be robust under conditions of changing, nonlinear, temperature-dependent perfusion. The
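
    A sketch of the reduced-order step described above (random stand-in heating patterns, not patient data, and the virtual-source configurations are treated here as if each were an independent physical source driven with a nonnegative power weight, which is a simplification of the coherent phased-array problem): the heating is expressed as a weighted combination of a few precomputed virtual-source patterns, and only those few weights are optimized against tumor and normal-tissue targets.

        import numpy as np
        from scipy.optimize import linprog

        rng = np.random.default_rng(7)

        # Precomputed heating patterns of 3 virtual sources over 500 tissue voxels
        # (random stand-ins for the maps a treatment-planning model would provide).
        n_vox, n_virtual = 500, 3
        patterns = rng.uniform(0.0, 1.0, (n_virtual, n_vox))
        tumor = np.zeros(n_vox, dtype=bool)
        tumor[:60] = True
        patterns[:, tumor] *= 3.0          # virtual sources were chosen to favour the tumor

        # Maximize mean tumor heating subject to a cap on every normal-tissue voxel,
        # treating each virtual source like one physical source with weight w_k >= 0.
        c = -patterns[:, tumor].mean(axis=1)             # negative: linprog minimizes
        A_ub = patterns[:, ~tumor].T                     # normal-tissue heating per unit weight
        b_ub = np.full((~tumor).sum(), 1.5)              # normal-tissue limit (arbitrary units)
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * n_virtual, method="highs")

        tumor_heating = patterns[:, tumor].T @ res.x
        print("virtual-source weights:", np.round(res.x, 3))
        print("mean tumor heating:", round(tumor_heating.mean(), 2))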

  12. Application of the NASCAP Spacecraft Simulation Tool to Investigate Electrodynamic Tether Current Collection in LEO

    Science.gov (United States)

    Adams, Mitzi; HabashKrause, Linda

    2012-01-01

    Recent interest in using electrodynamic tethers (EDTs) for orbital maneuvering in Low Earth Orbit (LEO) has prompted the development of the Marshall ElectroDynamic Tether Orbit Propagator (MEDTOP) model. The model is comprised of several modules which address various aspects of EDT propulsion, including calculation of state vectors using a standard orbit propagator (e.g., J2), an atmospheric drag model, realistic ionospheric and magnetic field models, space weather effects, and tether librations. The natural electromotive force (EMF) attained during a radially-aligned conductive tether results in electrons flowing down the tether and accumulating on the lower-altitude spacecraft. The energy that drives this EMF is sourced from the orbital energy of the system; thus, EDTs are often proposed as de-orbiting systems. However, when the current is reversed using satellite charged particle sources, then propulsion is possible. One of the most difficult challenges of the modeling effort is to ascertain the equivalent circuit between the spacecraft and the ionospheric plasma. The present study investigates the use of the NASA Charging Analyzer Program (NASCAP) to calculate currents to and from the tethered satellites and the ionospheric plasma. NASCAP is a sophisticated set of computational tools to model the surface charging of three-dimensional (3D) spacecraft surfaces in a time-varying space environment. The model's surface is tessellated into a collection of facets, and NASCAP calculates currents and potentials for each one. Additionally, NASCAP provides for the construction of one or more nested grids to calculate space potential and time-varying electric fields. This provides for the capability to track individual particles orbits, to model charged particle wakes, and to incorporate external charged particle sources. With this study, we have developed a model of calculating currents incident onto an electrodynamic tethered satellite system, and first results are shown

  13. Spacecraft Charging and the Microwave Anisotropy Probe Spacecraft

    Science.gov (United States)

    Timothy, VanSant J.; Neergaard, Linda F.

    1998-01-01

    The Microwave Anisotropy Probe (MAP), a MIDEX mission built in partnership between Princeton University and the NASA Goddard Space Flight Center (GSFC), will study the cosmic microwave background. It will be inserted into a highly elliptical earth orbit for several weeks and then use a lunar gravity assist to orbit around the second Lagrangian point (L2), 1.5 million kilometers, anti-sunward from the earth. The charging environment for the phasing loops and at L2 was evaluated. There is a limited set of data for L2; the GEOTAIL spacecraft measured relatively low spacecraft potentials (approx. 50 V maximum) near L2. The main area of concern for charging on the MAP spacecraft is the well-established threat posed by the "geosynchronous region" between 6-10 Re. The launch in the autumn of 2000 will coincide with the falling of the solar maximum, a period when the likelihood of a substorm is higher than usual. The likelihood of a substorm at that time has been roughly estimated to be on the order of 20% for a typical MAP mission profile. Because of the possibility of spacecraft charging, a requirement for conductive spacecraft surfaces was established early in the program. Subsequent NASCAP/GEO analyses for the MAP spacecraft demonstrated that a significant portion of the sunlit surface (solar cell cover glass and sunshade) could have nonconductive surfaces without significantly raising differential charging. The need for conductive materials on surfaces continually in eclipse has also been reinforced by NASCAP analyses.

  14. Internal Acoustics of the ISS and Other Spacecraft

    Science.gov (United States)

    Allen, Christopher S.

    2017-01-01

    It is important to control the acoustic environment inside spacecraft and space habitats to protect astronaut communications, alarm audibility, and habitability, and to reduce astronauts' risk of sleep disturbance and hearing loss. But this is not an easy task, given the various design trade-offs, and it has been difficult, historically, to achieve. Over time it has been found that successful control of spacecraft acoustic levels is achieved by levying firm requirements at the system level, using a systems engineering approach for design and development, and then validating these requirements with acoustic testing. In the systems engineering method, the system-level requirements must be flowed down to sub-systems and component noise sources, using acoustic analysis and acoustic modelling to develop allocated requirements for the sub-systems and components. Noise controls must also be developed, tested, and implemented so the sub-systems and components can achieve their allocated limits. It is also important to have management support for acoustics efforts to maintain their priority against the various trade-offs, including mass, volume, power, cost, and schedule. In this extended abstract and companion presentation, the requirements, approach, and results for controlling acoustic levels in most US spacecraft since Apollo will be briefly discussed. The approach for controlling acoustic levels in the future US space vehicle, Orion Multipurpose Crew Vehicle (MPCV), will also be briefly discussed. These discussions will be limited to the control of continuous noise inside the space vehicles. Other types of noise, such as launch, landing, and abort noise, intermittent noise, Extra-Vehicular Activity (EVA) noise, emergency operations/off-nominal noise, noise exposure, and impulse noise are important, but will not be discussed because of time limitations.

  15. Internal Mass Motion for Spacecraft Dynamics and Control

    National Research Council Canada - National Science Library

    Hall, Christopher D

    2008-01-01

    We present a detailed description of the application of a noncanonical Hamiltonian formulation to the modeling, analysis, and simulation of the dynamics of gyrostat spacecraft with internal mass motion...

  16. Spacecraft early design validation using formal methods

    International Nuclear Information System (INIS)

    Bozzano, Marco; Cimatti, Alessandro; Katoen, Joost-Pieter; Katsaros, Panagiotis; Mokos, Konstantinos; Nguyen, Viet Yen; Noll, Thomas; Postma, Bart; Roveri, Marco

    2014-01-01

    The size and complexity of software in spacecraft is increasing exponentially, and this trend complicates its validation within the context of the overall spacecraft system. Current validation methods are labor-intensive as they rely on manual analysis, review and inspection. For future space missions, we developed – with challenging requirements from the European space industry – a novel modeling language and toolset for a (semi-)automated validation approach. Our modeling language is a dialect of AADL and enables engineers to express the system, the software, and their reliability aspects. The COMPASS toolset utilizes state-of-the-art model checking techniques, both qualitative and probabilistic, for the analysis of requirements related to functional correctness, safety, dependability and performance. Several pilot projects have been performed by industry, with two of them having focused on the system-level of a satellite platform in development. Our efforts resulted in a significant advancement of validating spacecraft designs from several perspectives, using a single integrated system model. The associated technology readiness level increased from level 1 (basic concepts and ideas) to early level 4 (laboratory-tested)

  17. A Quantized State Approach to On-line Simulation for Spacecraft Autonomy

    DEFF Research Database (Denmark)

    Alminde, Lars; Stoustrup, Jakob; Bendtsen, Jan Dimon

    2006-01-01

    Future space applications will require an increased level of operational autonomy. This calls for declarative methods for spacecraft state estimation and control, so that the spacecraft engineer can focus on modeling the spacecraft rather than implementing all details of the on-line system. Celeb...

  18. HYDROLOGY AND SEDIMENT MODELING USING THE BASINS NON-POINT SOURCE MODEL

    Science.gov (United States)

    The Non-Point Source Model (Hydrologic Simulation Program-Fortran, or HSPF) within the EPA Office of Water's BASINS watershed modeling system was used to simulate streamflow and total suspended solids within Contentnea Creek, North Carolina, which is a tributary of the Neuse Rive...

  19. Spacecraft Environmental Interactions Technology, 1983

    Science.gov (United States)

    1985-01-01

    State of the art of environment interactions dealing with low-Earth-orbit plasmas; high-voltage systems; spacecraft charging; materials effects; and direction of future programs are contained in over 50 papers.

  20. Gravity Probe B spacecraft description

    International Nuclear Information System (INIS)

    Bennett, Norman R; Burns, Kevin; Katz, Russell; Kirschenbaum, Jon; Mason, Gary; Shehata, Shawky

    2015-01-01

    The Gravity Probe B spacecraft, developed, integrated, and tested by Lockheed Missiles and Space Company and later Lockheed Martin Corporation, consisted of structures, mechanisms, command and data handling, attitude and translation control, electrical power, thermal control, flight software, and communications. When integrated with the payload elements, the integrated system became the space vehicle. Key requirements shaping the design of the spacecraft were: (1) the tight mission timeline (17 months, 9 days of on-orbit operation), (2) precise attitude and translational control, (3) thermal protection of science hardware, (4) minimizing aerodynamic, magnetic, and eddy current effects, and (5) the need to provide a robust, low risk spacecraft. The spacecraft met all mission requirements, as demonstrated by dewar lifetime meeting specification, positive power and thermal margins, precision attitude control and drag-free performance, reliable communications, and the collection of more than 97% of the available science data. (paper)

  1. Modelling and optimisation of fs laser-produced Kα sources

    International Nuclear Information System (INIS)

    Gibbon, P.; Masek, M.; Teubner, U.; Lu, W.; Nicoul, M.; Shymanovich, U.; Tarasevitch, A.; Zhou, P.; Sokolowski-Tinten, K.; Linde, D. von der

    2009-01-01

    Recent theoretical and numerical studies of laser-driven femtosecond Kα sources are presented, aimed at understanding a recent experimental campaign to optimize emission from thin coating targets. Particular attention is given to control over the laser-plasma interaction conditions defined by the interplay between a controlled prepulse and the angle of incidence. It is found that the x-ray efficiency for poor-contrast laser systems in which a large preplasma is suspected can be enhanced by using a near-normal incidence geometry even at high laser intensities. With high laser contrast, similar efficiencies can be achieved by going to larger incidence angles, but only at the expense of larger x-ray spot size. New developments in three-dimensional modelling are also reported with the goal of handling interactions with geometrically complex targets and finite resistivity. (orig.)

  2. Modeling in control of the Advanced Light Source

    International Nuclear Information System (INIS)

    Bengtsson, J.; Forest, E.; Nishimura, H.; Schachinger, L.

    1991-05-01

    A software system for control of accelerator physics parameters of the Advanced Light Source (ALS) is being designed and implemented at LBL. Some of the parameters we wish to control are tunes, chromaticities, and closed orbit distortions as well as linear lattice distortions and, possibly, amplitude- and momentum-dependent tune shifts. In all our applications, the goal is to allow the user to adjust physics parameters of the machine, instead of turning knobs that control magnets directly. This control will take place via a highly graphical user interface, with both a model appropriate to the application and any correction algorithm running alongside as separate processes. Many of these applications will run on a Unix workstation, separate from the controls system, but communicating with the hardware database via Remote Procedure Calls (RPCs)

  3. Crowd Sourcing for Challenging Technical Problems and Business Model

    Science.gov (United States)

    Davis, Jeffrey R.; Richard, Elizabeth

    2011-01-01

    Crowd sourcing may be defined as the act of outsourcing tasks that are traditionally performed by an employee or contractor to an undefined, generally large group of people or community (a crowd) in the form of an open call. The open call may be issued by an organization wishing to find a solution to a particular problem or complete a task, or by an open innovation service provider on behalf of that organization. In 2008, the Space Life Sciences Directorate (SLSD), with the support of Wyle Integrated Science and Engineering, established and implemented pilot projects in open innovation (crowd sourcing) to determine if these new internet-based platforms could indeed find solutions to difficult technical challenges. These unsolved technical problems were converted to problem statements, also called "Challenges" or "Technical Needs" by the various open innovation service providers, and were then posted externally to seek solutions. In addition, an open call was issued internally to NASA employees Agency-wide (10 Field Centers and NASA HQ) using an open innovation service provider crowd sourcing platform to post NASA challenges from each Center for the others to propose solutions. From 2008 to 2010, the SLSD issued 34 challenges, 14 externally and 20 internally. The 14 external problems or challenges were posted through three different vendors: InnoCentive, Yet2.com and TopCoder. The 20 internal challenges were conducted using the InnoCentive crowd sourcing platform designed for internal use by an organization. This platform was customized for NASA use and promoted as NASA@Work. The results were significant. Of the seven InnoCentive external challenges, two full and five partial awards were made in complex technical areas such as predicting solar flares and long-duration food packaging. Similarly, the TopCoder challenge yielded an optimization algorithm for designing a lunar medical kit. The Yet2.com challenges yielded many new industry and academic contacts in bone

  4. Development of an emissions inventory model for mobile sources

    Energy Technology Data Exchange (ETDEWEB)

    Reynolds, A W; Broderick, B M [Trinity College, Dublin (Ireland). Dept. of Civil, Structural and Environmental Engineering

    2000-07-01

    Traffic represents one of the largest sources of primary air pollutants in urban areas. As a consequence, numerous abatement strategies are being pursued to decrease the ambient concentrations of a wide range of pollutants. A mutual characteristic of most of these strategies is a requirement for accurate data on both the quantity and spatial distribution of emissions to air in the form of an atmospheric emissions inventory database. In the case of traffic pollution, such an inventory must be compiled using activity statistics and emission factors for a wide range of vehicle types. The majority of inventories are compiled using 'passive' data from either surveys or transportation models and by their very nature tend to be out-of-date by the time they are compiled. Current trends are towards integrating urban traffic control systems and assessments of the environmental effects of motor vehicles. In this paper, a methodology for estimating emissions from mobile sources using real-time data is described. This methodology is used to calculate emissions of sulphur dioxide (SO2), oxides of nitrogen (NOx), carbon monoxide (CO), volatile organic compounds (VOC), particulate matter less than 10 μm aerodynamic diameter (PM10), 1,3-butadiene (C4H6) and benzene (C6H6) at a test junction in Dublin. Traffic data, which are required on a street-by-street basis, are obtained from induction loops and closed circuit televisions (CCTV) as well as statistical data. The observed traffic data are compared to simulated data from a travel demand model. As a test case, an emissions inventory is compiled for a heavily trafficked signalized junction in an urban environment using the measured data. In order that the model may be validated, the predicted emissions are employed in a dispersion model along with local meteorological conditions and site geometry. The resultant pollutant concentrations are compared to average ambient kerbside conditions.
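
    The core of such an inventory is a simple product of activity and emission factor summed over vehicle classes and street links. The sketch below illustrates that calculation; the vehicle classes and factor values are invented placeholders, not the factors used in the Dublin study.

        # Illustrative sketch of a link-based mobile-source emission estimate:
        # E_link = sum over vehicle classes of (flow [veh/h] * link length [km] * EF [g/veh-km]).
        # Emission factors below are placeholders, not values from the paper.
        EMISSION_FACTORS_G_PER_VKM = {        # pollutant -> {vehicle class -> g/veh-km}
            "NOx": {"car": 0.4, "bus": 6.0, "hgv": 5.0},
            "CO":  {"car": 1.5, "bus": 3.5, "hgv": 2.5},
        }

        def link_emissions(flows_veh_per_h, link_length_km, pollutant):
            """Hourly emissions (g/h) on one street link for a given pollutant."""
            factors = EMISSION_FACTORS_G_PER_VKM[pollutant]
            return sum(flows_veh_per_h[vc] * link_length_km * ef
                       for vc, ef in factors.items())

        # Example: hourly counts from induction loops on one signalised junction approach
        flows = {"car": 1200, "bus": 40, "hgv": 80}   # vehicles per hour
        print(f"NOx: {link_emissions(flows, 0.25, 'NOx'):.0f} g/h")
        print(f"CO : {link_emissions(flows, 0.25, 'CO'):.0f} g/h")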

  5. The interaction of a flowing plasma with a dipole magnetic field: measurements and modelling of a diamagnetic cavity relevant to spacecraft protection

    International Nuclear Information System (INIS)

    Bamford, R; Bradford, J; Bingham, R; Gargate, L; Hapgood, M; Stamper, R; Gibson, K J; Thornton, A J; Silva, L O; Fonseca, R A; Norberg, C; Todd, T

    2008-01-01

    Here we describe a new experiment to test the shielding concept of a dipole-like magnetic field and plasma, surrounding a spacecraft forming a 'mini magnetosphere'. Initial laboratory experiments have been conducted to determine the effectiveness of a magnetized plasma barrier to be able to expel an impacting, low beta, supersonic flowing energetic plasma representing the solar wind. Optical and Langmuir probe data of the plasma density, the plasma flow velocity and the intensity of the dipole field clearly show the creation of a narrow transport barrier region and diamagnetic cavity virtually devoid of energetic plasma particles. This demonstrates the potential viability of being able to create a small 'hole' in a solar wind plasma, of the order of the ion Larmor orbit width, in which an inhabited spacecraft could reside in relative safety. The experimental results have been quantitatively compared with a 3D particle-in-cell 'hybrid' code simulation that uses kinetic ions and fluid electrons, showing good qualitative agreement and excellent quantitative agreement. Together the results demonstrate the pivotal role of particle kinetics in determining generic plasma transport barriers.

  6. Source term identification in atmospheric modelling via sparse optimization

    Science.gov (United States)

    Adam, Lukas; Branda, Martin; Hamburger, Thomas

    2015-04-01

    Inverse modelling plays an important role in identifying the amount of harmful substances released into the atmosphere during major incidents such as power plant accidents or volcano eruptions. Another possible application of inverse modelling lies in monitoring CO2 emission limits, where only observations at certain places are available and the task is to estimate the total releases at given locations. This gives rise to minimizing the discrepancy between the observations and the model predictions. There are two standard ways of solving such problems. In the first one, this discrepancy is regularized by adding additional terms. Such terms may include Tikhonov regularization, distance from a priori information or a smoothing term. The resulting, usually quadratic, problem is then solved via standard optimization solvers. The second approach assumes that the error term has a (normal) distribution and makes use of Bayesian modelling to identify the source term. Instead of following the above-mentioned approaches, we utilize techniques from the field of compressive sensing. Such techniques look for the sparsest solution (the solution with the smallest number of nonzeros) of a linear system, where a maximal allowed error term may be added to this system. Even though this field is a developed one with many possible solution techniques, most of them do not consider even the simplest constraints which are naturally present in atmospheric modelling. One such example is the nonnegativity of release amounts. We believe that the concept of a sparse solution is natural in both the problem of identifying the source location and that of the time process of the source release. In the first case, it is usually assumed that there are only a few release points and the task is to find them. In the second case, the time window is usually much longer than the duration of the actual release. In both cases, the optimal solution should contain a large amount of zeros, giving rise to the
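
    A minimal version of this idea can be written as a nonnegative, sparsity-promoting least-squares problem, min ||Ax - b||^2 + lam*sum(x) subject to x >= 0, solved below by a projected (ISTA-style) gradient method on synthetic data. The sensitivity matrix, noise level, and regularization weight are assumptions for illustration, not the authors' setup.

        import numpy as np

        # Illustrative sketch (synthetic data, not the authors' algorithm): recover a
        # sparse, nonnegative release-rate vector x from receptor observations
        # b = A x + noise by minimising 0.5*||A x - b||^2 + lam*sum(x) with x >= 0.
        rng = np.random.default_rng(0)
        n_obs, n_src = 40, 100
        A = rng.random((n_obs, n_src))              # source-receptor sensitivities
        x_true = np.zeros(n_src)
        x_true[[7, 42, 85]] = [3.0, 1.5, 2.0]       # only a few active release points
        b = A @ x_true + 0.01 * rng.standard_normal(n_obs)

        lam = 0.1
        step = 1.0 / np.linalg.norm(A, 2) ** 2      # 1 / Lipschitz constant of the gradient
        x = np.zeros(n_src)
        for _ in range(5000):
            grad = A.T @ (A @ x - b)                        # gradient of the data-misfit term
            x = np.maximum(x - step * (grad + lam), 0.0)    # soft-threshold + nonnegativity

        print("largest recovered sources:", np.argsort(x)[-3:][::-1])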

  7. The eSourcing Capability Model for Service Providers: Knowledge Management across the Sourcing Life-cycle

    OpenAIRE

    Laaksonen, Pekka

    2011-01-01

    Laaksonen, Pekka. The eSourcing Capability Model for Service Providers: Knowledge Management across the Sourcing Life-cycle. Jyväskylä: University of Jyväskylä, 2011, 42 p. Information Systems Science, bachelor's thesis. Supervisor(s): Käkölä, Timo. This bachelor's thesis examined how the practices of the eSourcing Capability Model for Service Providers relate to the four processes of knowledge management: knowledge creation, storage/retrieval, sharing...

  8. (abstract) ARGOS: a System to Monitor Ulysses Nutation and Thruster Firings from Variations of the Spacecraft Radio Signal

    Science.gov (United States)

    McElrath, T. P.; Cangahuala, L. A.; Miller, K. J.; Stravert, L. R.; Garcia-Perez, Raul

    1995-01-01

    Ulysses is a spin-stabilized spacecraft that experienced significant nutation after its launch in October 1990. This was due to the Sun-spacecraft-Earth geometry, and a study of the phenomenon predicted that the nutation would again be a problem during 1994-95. The difficulty of obtaining nutation estimates in real time from the spacecraft telemetry forced the ESA/NASA Ulysses Team to explore alternative information sources. The work performed by the ESA Operations Team provided a model for a system that uses the radio signal strength measurements to monitor the spacecraft dynamics. These measurements (referred to as AGC) are provided once per second by the tracking stations of the DSN. The system was named ARGOS (Attitude Reckoning from Ground Observable Signals) after the ever-vigilant, hundred-eyed giant of Greek Mythology. The ARGOS design also included Doppler processing, because Doppler shifts indicate thruster firings commanded by the active nutation control carried out onboard the spacecraft. While there is some visibility into thruster activity from telemetry, careful processing of the high-sample-rate Doppler data provides an accurate means of detecting the presence and time of thruster firings. DSN Doppler measurements are available at a ten-per-second rate in the same tracking data block as the AGC data.

  9. Model of electron contamination sources for photon-beam radiotherapy

    International Nuclear Information System (INIS)

    Gonzalez Infantes, W.; Lallena Rojo, A. M.; Anguiano Millan, M.

    2013-01-01

    A model of virtual electron sources is proposed that allows the contamination sources to be reproduced from the input parameters of the patient representation. Depth-dose values and profiles calculated from the full simulation of the treatment heads were compared with the values calculated using the source model, and the model was found to be capable of reproducing depth-dose distributions and profiles. (Author)

  10. Large Scale Experiments on Spacecraft Fire Safety

    DEFF Research Database (Denmark)

    Urban, David L.; Ruff, Gary A.; Minster, Olivier

    2012-01-01

    Full scale fire testing complemented by computer modelling has provided significant knowhow about the risk, prevention and suppression of fire in terrestrial systems (cars, ships, planes, buildings, mines, and tunnels). In comparison, no such testing has been carried out for manned spacecraft due to the complexity, cost and risk associated with operating a long duration fire safety experiment of a relevant size in microgravity. Therefore, there is currently a gap in knowledge of fire behaviour in spacecraft. The entire body of low-gravity fire research has either been conducted in short duration ground-based microgravity facilities or has been limited to very small fuel samples. Still, the work conducted to date has shown that fire behaviour in low-gravity is very different from that in normal-gravity, with differences observed for flammability limits, ignition delay, flame spread behaviour, flame colour and flame...

  11. Vibration and Acoustic Testing for Mars Micromission Spacecraft

    Science.gov (United States)

    Kern, Dennis L.; Scharton, Terry D.

    1999-01-01

    spacecraft and the test fixture, alleviates the severe overtest at spacecraft resonances inherent in rigid fixture vibration tests. It has the distinct advantage over response limiting that the method is not dependent on the accuracy of a detailed dynamic model of the spacecraft. Combined loads, vibration, and modal testing were recently performed on the QuikSCAT spacecraft. The combined tests were performed in a single test setup per axis on a vibration shaker, reducing test time by a factor of two or three. Force gages were employed to measure the true c.g. acceleration of the spacecraft for structural loads verification using a sine burst test, to automatically notch random vibration test input accelerations at spacecraft resonances based on predetermined force limits, and to directly measure modal masses in a base drive modal test. In addition to these combined tests on the shaker, the QuikSCAT spacecraft was subjected to a direct field acoustic test by surrounding the spacecraft, still on the vibration shaker, with rock concert type acoustic speakers. Since the spacecraft contractor does not have a reverberant field acoustic test facility, performing a direct field acoustic test saved the program nearly two weeks of schedule time that would have been required for packing/unpacking and shipping of the spacecraft. This paper discusses the rationale behind and advantages of the above test approaches and provides examples of their actual implementation and comparisons to flight data. The applicability of the test approaches to Mars Micromission spacecraft qualification is discussed.

  12. Experiments study on attitude coupling control method for flexible spacecraft

    Science.gov (United States)

    Wang, Jie; Li, Dongxu

    2018-06-01

    High pointing accuracy and stabilization are essential for spacecraft carrying out Earth observation, laser communication and space exploration missions. However, when a spacecraft undergoes a large-angle maneuver, the excited elastic oscillation of flexible appendages, for instance solar wings and onboard antennas, degrades the performance of the spacecraft platform. This paper proposes a coupling control method, which synthesizes an adaptive sliding mode controller and a positive position feedback (PPF) controller, to control the attitude and suppress the elastic vibration simultaneously. Because of its strong performance in attitude tracking and stabilization, the proposed method is capable of slewing the flexible spacecraft through a large angle. The method is also robust to parametric uncertainties of the spacecraft model. Numerical simulations are carried out with a hub-plate system which undergoes a single-axis attitude maneuver. An attitude control testbed for the flexible spacecraft is established and experiments are conducted to validate the coupling control method. Both numerical and experimental results demonstrate that the method can effectively decrease the stabilization time and improve the attitude accuracy of the flexible spacecraft.
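
    The vibration-suppression half of such a scheme, positive position feedback, can be illustrated on a single flexible mode: the measured modal displacement drives a second-order low-pass compensator whose output is fed back as a force on the structure. The sketch below uses invented mode and filter parameters, not those of the paper's hub-plate testbed.

        import numpy as np

        # Minimal positive position feedback (PPF) sketch for one flexible mode
        # (illustrative parameters, not the experimental testbed's):
        #   structure : xdd + 2*z*w*xd + w^2 * x   = g * w^2 * eta
        #   filter    : etadd + 2*zf*wf*etad + wf^2 * eta = wf^2 * x
        w, z = 2 * np.pi * 1.0, 0.005     # lightly damped 1 Hz flexible mode
        wf, zf, g = 1.1 * w, 0.3, 0.4     # PPF filter tuned near the mode, gain g < 1

        def deriv(s):
            x, xd, eta, etad = s
            xdd = -2 * z * w * xd - w**2 * x + g * w**2 * eta
            etadd = -2 * zf * wf * etad - wf**2 * eta + wf**2 * x
            return np.array([xd, xdd, etad, etadd])

        s = np.array([1.0, 0.0, 0.0, 0.0])    # initial modal displacement
        dt = 1e-3
        for _ in range(int(10.0 / dt)):       # integrate 10 s with 4th-order Runge-Kutta
            k1 = deriv(s); k2 = deriv(s + dt/2*k1); k3 = deriv(s + dt/2*k2); k4 = deriv(s + dt*k3)
            s = s + dt/6 * (k1 + 2*k2 + 2*k3 + k4)

        # Open-loop amplitude after 10 s would be about exp(-z*w*10) ~ 0.73
        print(f"modal amplitude after 10 s with PPF: {abs(s[0]):.2e}")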

  13. Dynamics and control of Lorentz-augmented spacecraft relative motion

    CERN Document Server

    Yan, Ye; Yang, Yueneng

    2017-01-01

    This book develops a dynamical model of the orbital motion of Lorentz spacecraft in both unperturbed and J2-perturbed environments. It explicitly discusses three kinds of typical space missions involving relative orbital control: spacecraft hovering, rendezvous, and formation flying. Subsequently, it puts forward designs for both open-loop and closed-loop control schemes propelled or augmented by the geomagnetic Lorentz force. These control schemes are entirely novel and represent a significant departure from previous approaches.

  14. Reliability model of SNS linac (spallation neutron source-ORNL)

    International Nuclear Information System (INIS)

    Pitigoi, A.; Fernandez, P.

    2015-01-01

    A reliability model of the SNS LINAC (Spallation Neutron Source at Oak Ridge National Laboratory) has been developed using risk spectrum reliability analysis software, and an analysis of the accelerator system's reliability has been performed. The analysis results have been evaluated by comparing them with the SNS operational data. This paper presents the main results and conclusions, focusing on the identification of design weaknesses, and provides recommendations to improve the reliability of the MYRRHA linear accelerator. The reliability results show that the most affected SNS LINAC parts/systems are: 1) SCL (superconducting linac), front-end systems: IS, LEBT (low-energy beam transport line), MEBT (medium-energy beam transport line), diagnostics and controls; 2) RF systems (especially the SCL RF system); 3) power supplies and PS controllers. These results are in line with the records in the SNS logbook. The reliability issue that needs to be enforced in the linac design is the redundancy of the systems, subsystems and components most affected by failures. For compensation purposes, there is a need for intelligent fail-over redundancy implementation in controllers. Sufficient diagnostics have to be implemented to allow reliable functioning of the redundant solutions and to ensure the compensation function

  15. Modeling the explosion-source region: An overview

    International Nuclear Information System (INIS)

    Glenn, L.A.

    1993-01-01

    The explosion-source region is defined as the region surrounding an underground explosion that cannot be described by elastic or anelastic theory. This region typically extends to ranges of up to 1 km/(kt)^(1/3), but for some purposes, such as yield estimation via hydrodynamic means (CORRTEX and HYDRO PLUS), the maximum range of interest is smaller by an order of magnitude. For the simulation or analysis of seismic signals, however, what is required is the time-resolved motion and stress state at the inelastic boundary. Various analytic approximations have been made for these boundary conditions, but since they rely on near-field empirical data they cannot be expected to extrapolate reliably to different explosion sites. More important, without some knowledge of the initial energy density and the characteristics of the medium immediately surrounding the explosion, these simplified models are unable to distinguish chemical from nuclear explosions, identify cavity decoupling, or account for such phenomena as anomalous dissipation via pore collapse.

  16. Artist concept of Galileo spacecraft

    Science.gov (United States)

    1988-01-01

    The Galileo spacecraft is illustrated in an artist concept. Galileo, named for the Italian astronomer, physicist and mathematician who is credited with construction of the first complete, practical telescope in 1620, will make detailed studies of Jupiter. A cooperative program with the Federal Republic of Germany, the Galileo mission will amplify information acquired by two Voyager spacecraft in their brief flybys. Galileo is a two-element system that includes a Jupiter-orbiting observatory and an entry probe. Jet Propulsion Laboratory (JPL) is Galileo project manager and builder of the main spacecraft. Ames Research Center (ARC) has responsibility for the entry probe, which was built by Hughes Aircraft Company and General Electric. Galileo will be deployed from the payload bay (PLB) of Atlantis, Orbiter Vehicle (OV) 104, during mission STS-34.

  17. Large Scale Experiments on Spacecraft Fire Safety

    Science.gov (United States)

    Urban, David; Ruff, Gary A.; Minster, Olivier; Fernandez-Pello, A. Carlos; Tien, James S.; Torero, Jose L.; Legros, Guillaume; Eigenbrod, Christian; Smirnov, Nickolay; Fujita, Osamu; et al.

    2012-01-01

    Full scale fire testing complemented by computer modelling has provided significant knowhow about the risk, prevention and suppression of fire in terrestrial systems (cars, ships, planes, buildings, mines, and tunnels). In comparison, no such testing has been carried out for manned spacecraft due to the complexity, cost and risk associated with operating a long duration fire safety experiment of a relevant size in microgravity. Therefore, there is currently a gap in knowledge of fire behaviour in spacecraft. The entire body of low-gravity fire research has either been conducted in short duration ground-based microgravity facilities or has been limited to very small fuel samples. Still, the work conducted to date has shown that fire behaviour in low-gravity is very different from that in normal gravity, with differences observed for flammability limits, ignition delay, flame spread behaviour, flame colour and flame structure. As a result, the prediction of the behaviour of fires in reduced gravity is at present not validated. To address this gap in knowledge, a collaborative international project, Spacecraft Fire Safety, has been established with its cornerstone being the development of an experiment (Fire Safety 1) to be conducted on an ISS resupply vehicle, such as the Automated Transfer Vehicle (ATV) or Orbital Cygnus after it leaves the ISS and before it enters the atmosphere. A computer modelling effort will complement the experimental effort. Although the experiment will need to meet rigorous safety requirements to ensure the carrier vehicle does not sustain damage, the absence of a crew removes the need for strict containment of combustion products. This will facilitate the possibility of examining fire behaviour on a scale that is relevant to spacecraft fire safety and will provide unique data for fire model validation. This unprecedented opportunity will expand the understanding of the fundamentals of fire behaviour in spacecraft. The experiment is being

  18. An Orbit Propagation Software for Mars Orbiting Spacecraft

    Directory of Open Access Journals (Sweden)

    Young-Joo Song

    2004-12-01

    An orbit propagation software for the Mars orbiting spacecraft has been developed and verified in preparation for future Korean Mars missions. A dynamic model for Mars orbiting spacecraft has been studied, and Mars centered coordinate systems are utilized to express spacecraft state vectors. Coordinate corrections to the Mars centered coordinate system have been made to adjust for the effects caused by Mars precession and nutation. After the spacecraft enters the sphere of influence (SOI) of Mars, it experiences various perturbation effects as it approaches Mars. Every possible perturbation effect is considered during integration of the spacecraft state vectors. The Mars50c gravity field model and the Mars-GRAM 2001 model are used to compute perturbation effects due to the Mars gravity field and Mars atmospheric drag, respectively. To compute exact locations of other planets, JPL's DE405 ephemerides are used. The ephemerides of Phobos and Deimos are computed using an analytical method because their data are not released with DE405. Mars Global Surveyor's mapping orbital data are used to verify the performance of the developed propagator. After propagation for one Martian day (12 orbital periods), the results show maximum errors of about ±5 meters in every position component (radial, cross-track and along-track) when compared with those from the Astrogator propagation in the Satellite Tool Kit. This result shows the high reliability of the developed software, which can be used to design near-Mars missions for Korea in the future.
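
    The core loop of such a propagator is the numerical integration of the equations of motion, with perturbing accelerations added to the two-body term. The sketch below integrates two-body motion plus only the J2 zonal term with a fixed-step Runge-Kutta scheme; the Mars constants are standard textbook values, and the drag, higher-order gravity, and third-body terms used by the actual software are omitted.

        import numpy as np

        # Bare-bones sketch of a perturbed propagator core (two-body + J2 only);
        # constants are generic Mars values, not the Mars50c field or Mars-GRAM drag.
        MU = 4.2828e13        # Mars GM, m^3/s^2
        RE = 3396.2e3         # Mars equatorial radius, m
        J2 = 1.96045e-3       # Mars J2 zonal coefficient

        def accel(r):
            x, y, z = r
            rn = np.linalg.norm(r)
            a = -MU * r / rn**3                              # two-body term
            k = -1.5 * J2 * MU * RE**2 / rn**5               # J2 zonal perturbation
            a += k * np.array([x * (1 - 5 * z**2 / rn**2),
                               y * (1 - 5 * z**2 / rn**2),
                               z * (3 - 5 * z**2 / rn**2)])
            return a

        def rk4_step(r, v, dt):
            def f(s):
                return np.hstack((s[3:], accel(s[:3])))
            s = np.hstack((r, v))
            k1 = f(s); k2 = f(s + dt/2*k1); k3 = f(s + dt/2*k2); k4 = f(s + dt*k3)
            s = s + dt/6 * (k1 + 2*k2 + 2*k3 + k4)
            return s[:3], s[3:]

        # Example: near-circular 400 km polar orbit propagated for one Keplerian period
        r = np.array([RE + 400e3, 0.0, 0.0])
        v = np.array([0.0, 0.0, np.sqrt(MU / np.linalg.norm(r))])
        period = 2 * np.pi * np.sqrt(np.linalg.norm(r)**3 / MU)
        for _ in range(int(period)):
            r, v = rk4_step(r, v, 1.0)                       # 1 s steps
        print("radius change over one orbit:", np.linalg.norm(r) - (RE + 400e3), "m")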

  19. Quasi-homogeneous partial coherent source modeling of multimode optical fiber output using the elementary source method

    Science.gov (United States)

    Fathy, Alaa; Sabry, Yasser M.; Khalil, Diaa A.

    2017-10-01

    Multimode fibers (MMF) have many applications in illumination, spectroscopy, sensing and even in optical communication systems. In this work, we present a model for the MMF output field assuming the fiber end to be a quasi-homogeneous source. The fiber end is modeled by a group of partially coherent elementary sources, spatially shifted and uncorrelated with each other. The elementary source distribution is derived from the far-field intensity measurement, while the weighting function of the sources is derived from the fiber-end intensity measurement. The model is compared with practical measurements for fibers with different core/cladding diameters at different propagation distances and for different input excitations: laser, white light and LED. The obtained results show a normalized root mean square error of less than 8% in the intensity profile in most cases, even when the fiber end surface is not perfectly cleaved. Also, the comparison with the Gaussian-Schell model results shows a better agreement with the measurement. In addition, the complex degree of coherence derived from the model results is compared with the theoretical predictions of the modified van Cittert-Zernike equation, showing very good agreement, which strongly supports the assumption that the large-core MMF can be considered a quasi-homogeneous source.
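
    The construction itself is simple to illustrate: the output intensity is an incoherent sum of identical, spatially shifted elementary beam intensities weighted by the near-field profile. The one-dimensional toy below assumes Gaussian shapes and arbitrary widths purely for illustration; the real method derives both distributions from the far-field and fiber-end measurements.

        import numpy as np

        # Toy 1-D version of the elementary-source superposition (all shapes and widths
        # are assumptions for illustration, not the measured fiber distributions).
        x = np.linspace(-100e-6, 100e-6, 2001)           # observation coordinate, m
        w_elem = 5e-6                                    # width of one elementary source
        w_end = 50e-6                                    # width of the fiber-end intensity profile

        shifts = np.linspace(-75e-6, 75e-6, 151)         # positions of the elementary sources
        weights = np.exp(-2 * (shifts / w_end) ** 2)     # weighting from the near-field intensity

        intensity = np.zeros_like(x)
        for x0, wgt in zip(shifts, weights):             # incoherent sum of shifted beams
            intensity += wgt * np.exp(-2 * ((x - x0) / w_elem) ** 2)
        intensity /= intensity.max()

        above = x[intensity > np.exp(-2)]
        print(f"1/e^2 full width ~ {(above[-1] - above[0]) * 1e6:.0f} um")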

  20. Definition of the topological structure of the automatic control system of spacecrafts

    International Nuclear Information System (INIS)

    Zelenkov, P V; Karaseva, M V; Tsareva, E A; Tsarev, R Y (Siberian State Aerospace University named after Academician M.F. Reshetnev, 31 Krasnoyarskiy Rabochiy prospect, Krasnoyarsk, 660014, Russian Federation)

    2015-01-01

    The paper considers the problem of selecting the topological structure of the automated control system of spacecraft. An integer linear programming model designed to define the optimal topological structure for spacecraft control is proposed. To solve the problem of determining the topological structure of the spacecraft control system, a procedure for the directed search of structure variants according to the branch-and-bound scheme was developed. An example of the development of an automated control system, combining ground control stations that manage spacecraft of three classes in geosynchronous orbits with constant orbital periods, is presented
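
    Problems of this kind reduce to small integer linear programs. The sketch below poses a toy set-covering version (choose a minimum-cost subset of candidate ground stations so that every spacecraft class is covered) and solves it with an off-the-shelf MILP solver; the costs and coverage matrix are invented, and the actual model in the paper is richer than this.

        import numpy as np
        from scipy.optimize import milp, LinearConstraint, Bounds

        # Toy set-covering ILP in the spirit of the topology-selection problem
        # (station costs and coverage are invented for illustration).
        cost = np.array([4.0, 3.0, 5.0])              # cost of building each candidate station
        cover = np.array([[1, 0, 1],                  # rows: spacecraft classes
                          [1, 1, 0],                  # cols: candidate ground stations
                          [0, 1, 1],                  # 1 = station can control that class
                          [0, 0, 1]])

        # Every class must be covered by at least one selected station: cover @ y >= 1
        coverage = LinearConstraint(cover, lb=np.ones(cover.shape[0]), ub=np.inf)
        res = milp(c=cost, constraints=coverage,
                   integrality=np.ones_like(cost),    # y_j restricted to integers (0/1 via bounds)
                   bounds=Bounds(0, 1))
        print("selected stations:", np.flatnonzero(res.x > 0.5), " total cost:", res.fun)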

  1. Results from active spacecraft potential control on the Geotail spacecraft

    International Nuclear Information System (INIS)

    Schmidt, R.; Arends, H.; Pedersen, A.

    1995-01-01

    A low and actively controlled electrostatic potential on the outer surfaces of a scientific spacecraft is very important for accurate measurements of cold plasma electrons and ions and the DC to low-frequency electric field. The Japanese/NASA Geotail spacecraft carries as part of its scientific payload a novel ion emitter for active control of the electrostatic potential on the surface of the spacecraft. The aim of the ion emitter is to reduce the positive surface potential which is normally encountered in the outer magnetosphere when the spacecraft is sunlit. Ion emission clamps the surface potential to near the ambient plasma potential. Without emission control, Geotail has encountered plasma conditions in the lobes of the magnetotail which resulted in surface potentials of up to about +70 V. The ion emitter proves to be able to discharge the outer surfaces of the spacecraft and is capable of keeping the surface potential stable at about +2 V. This potential is measured with respect to one of the electric field probes which are current biased and thus kept at a potential slightly above the ambient plasma potential. The instrument uses the liquid metal field ion emission principle to emit indium ions. The ion beam energy is about 6 keV and the typical total emission current amounts to about 15 μA. Neither variations in the ambient plasma conditions nor operation of two electron emitters on Geotail produce significant variations of the controlled surface potential as long as the resulting electron emission currents remain much smaller than the ion emission current. Typical results of the active potential control are shown, demonstrating the surface potential reduction and its stability over time. 25 refs., 5 figs

  2. Versatile Markovian models for networks with asymmetric TCP sources

    NARCIS (Netherlands)

    van Foreest, N.D.; Haverkort, Boudewijn R.H.M.; Mandjes, M.R.H.; Scheinhardt, Willem R.W.

    2004-01-01

    In this paper we use Stochastic Petri Nets (SPNs) to study the interaction of multiple TCP sources that share one or two buffers, thereby considerably extending earlier work. We first consider two sources sharing a buffer and investigate the consequences of two popular assumptions for the loss

  3. A discriminative syntactic model for source permutation via tree transduction

    NARCIS (Netherlands)

    Khalilov, M.; Sima'an, K.; Wu, D.

    2010-01-01

    A major challenge in statistical machine translation is mitigating the word order differences between source and target strings. While reordering and lexical translation choices are often conducted in tandem, source string permutation prior to translation is attractive for studying reordering using

  4. Seismic source characterization for the 2014 update of the U.S. National Seismic Hazard Model

    Science.gov (United States)

    Moschetti, Morgan P.; Powers, Peter; Petersen, Mark D.; Boyd, Oliver; Chen, Rui; Field, Edward H.; Frankel, Arthur; Haller, Kathleen; Harmsen, Stephen; Mueller, Charles S.; Wheeler, Russell; Zeng, Yuehua

    2015-01-01

    We present the updated seismic source characterization (SSC) for the 2014 update of the National Seismic Hazard Model (NSHM) for the conterminous United States. Construction of the seismic source models employs the methodology that was developed for the 1996 NSHM but includes new and updated data, data types, source models, and source parameters that reflect the current state of knowledge of earthquake occurrence and state of practice for seismic hazard analyses. We review the SSC parameterization and describe the methods used to estimate earthquake rates, magnitudes, locations, and geometries for all seismic source models, with an emphasis on new source model components. We highlight the effects that two new model components—incorporation of slip rates from combined geodetic-geologic inversions and the incorporation of adaptively smoothed seismicity models—have on probabilistic ground motions, because these sources span multiple regions of the conterminous United States and provide important additional epistemic uncertainty for the 2014 NSHM.

  5. Multicriteria decision-making model for choosing between open source and non-open source software

    Directory of Open Access Journals (Sweden)

    Edmilson Alves de Moraes

    2008-09-01

    This article proposes the use of a multicriteria method for supporting a decision problem in which the intent is to choose between open source and non-open source software. The study shows how a decision-making method can be used to structure the problem and simplify the decision maker's job. The Analytic Hierarchy Process (AHP) is described step by step and its benefits and flaws are discussed. Following the theoretical discussion, a multiple case study is presented in which two companies use the decision-making method. The analysis was supported by Expert Choice, a software package based on the AHP framework.
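
    For readers unfamiliar with AHP, the sketch below shows its core step: a pairwise-comparison matrix over criteria, the priority vector taken from the principal eigenvector, and Saaty's consistency-ratio check. The three criteria and the judgment values are hypothetical, not those elicited from the two case companies.

        import numpy as np

        # Minimal AHP step with invented judgments: pairwise comparison of three
        # hypothetical criteria (cost, support, flexibility), priorities from the
        # principal eigenvector, and Saaty's consistency ratio.
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 3.0],
                      [1/5, 1/3, 1.0]])

        eigvals, eigvecs = np.linalg.eig(A)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                                   # priority vector (sums to 1)

        n = A.shape[0]
        ci = (eigvals.real[k] - n) / (n - 1)           # consistency index
        ri = 0.58                                      # Saaty's random index for n = 3
        print("priorities:", np.round(w, 3), " consistency ratio:", round(ci / ri, 3))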

  6. Laboratory Plasma Source as an MHD Model for Astrophysical Jets

    Science.gov (United States)

    Mayo, Robert M.

    1997-01-01

    The significance of the work described herein lies in the demonstration of Magnetized Coaxial Plasma Gun (MCG) devices like CPS-1 to produce energetic laboratory magneto-flows with embedded magnetic fields that can be used as a simulation tool to study the flow interaction dynamics of jet flows, to demonstrate the magnetic acceleration and collimation of flows with primarily toroidal fields, and to study cross-field transport in turbulent accreting flows. Since the plasma produced in MCG devices has magnetic topology and MHD flow regime similarity to stellar and extragalactic jets, we expect that careful investigation of these flows in the laboratory will reveal fundamental physical mechanisms influencing astrophysical flows. Discussion in the next section (sec. 2) focuses on recent results describing collimation, leading flow surface interaction layers, and turbulent accretion. The primary objectives for a new three-year effort would involve the development and deployment of novel electrostatic, magnetic, and visible plasma diagnostic techniques to measure plasma and flow parameters of the CPS-1 device in the flow chamber downstream of the plasma source to study (1) mass ejection, morphology, and collimation and stability of energetic outflows, (2) the effects of external magnetization on collimation and stability, (3) the interaction of such flows with background neutral gas, the generation of visible emission in such interactions, and the effect of neutral clouds on jet flow dynamics, and (4) the cross-magnetic-field transport of turbulent accreting flows. The applicability of existing laboratory plasma facilities to the study of stellar and extragalactic plasma should be exploited to elucidate underlying physical mechanisms that cannot be ascertained through astrophysical observation, and to provide a baseline for a wide variety of proposed models, MHD and otherwise. The work proposed herein represents a continued effort on a novel approach in relating laboratory experiments to

  7. Near Source 2007 Peru Tsunami Runup Observations and Modeling

    Science.gov (United States)

    Borrero, J. C.; Fritz, H. M.; Kalligeris, N.; Broncano, P.; Ortega, E.

    2008-12-01

    On 15 August 2007 an earthquake with moment magnitude (Mw) of 8.0 centered off the coast of central Peru generated a tsunami with locally focused runup heights of up to 10 m. A reconnaissance team was deployed two weeks after the event and investigated the tsunami effects at 51 sites. Three tsunami fatalities were reported south of the Paracas Peninsula in a sparsely populated desert area where the largest tsunami runup heights and massive inundation distances up to 2 km were measured. Numerical modeling of the earthquake source and tsunami suggest that a region of high slip near the coastline was primarily responsible for the extreme runup heights. The town of Pisco was spared by the Paracas Peninsula, which blocked tsunami waves from propagating northward from the high slip region. As with all near-field tsunamis, the waves struck within minutes of the massive ground shaking. Spontaneous evacuations coordinated by the Peruvian Coast Guard minimized the fatalities and illustrate the importance of community-based education and awareness programs. The residents of the fishing village Lagunilla were unaware of the tsunami hazard after an earthquake and did not evacuate, which resulted in 3 fatalities. Despite the relatively benign tsunami effects at Pisco from this event, the tsunami hazard for this city (and its liquefied natural gas terminal) should not be underestimated. Between 1687 and 1868, the city of Pisco was destroyed 4 times by tsunami waves. Since then, two events (1974 and 2007) have resulted in partial inundation and moderate damage. The fact that potentially devastating tsunami runup heights were observed immediately south of the peninsula only serves to underscore this point.

  8. A Systems Thinking Model for Open Source Software Development in Social Media

    OpenAIRE

    Mustaquim, Moyen

    2010-01-01

    In this paper a social media model, based on systems thinking methodology, is proposed to understand the behavior of the open source software development community working in social media. The proposed model is focused on relational influences of two different systems: social media and the open source community. This model can be useful for taking decisions which are complicated and where solutions are not apparent. Based on the proposed model, an efficient way of working in open source developm...

  9. Comparison of analytic source models for head scatter factor calculation and planar dose calculation for IMRT

    International Nuclear Information System (INIS)

    Yan Guanghua; Liu, Chihray; Lu Bo; Palta, Jatinder R; Li, Jonathan G

    2008-01-01

    The purpose of this study was to choose an appropriate head scatter source model for the fast and accurate independent planar dose calculation for intensity-modulated radiation therapy (IMRT) with MLC. The performance of three different head scatter source models regarding their ability to model head scatter and facilitate planar dose calculation was evaluated. A three-source model, a two-source model and a single-source model were compared in this study. In the planar dose calculation algorithm, in-air fluence distribution was derived from each of the head scatter source models while considering the combination of Jaw and MLC opening. Fluence perturbations due to tongue-and-groove effect, rounded leaf end and leaf transmission were taken into account explicitly. The dose distribution was calculated by convolving the in-air fluence distribution with an experimentally determined pencil-beam kernel. The results were compared with measurements using a diode array and passing rates with 2%/2 mm and 3%/3 mm criteria were reported. It was found that the two-source model achieved the best agreement on head scatter factor calculation. The three-source model and single-source model underestimated head scatter factors for certain symmetric rectangular fields and asymmetric fields, but similar good agreement could be achieved when monitor back scatter effect was incorporated explicitly. All the three source models resulted in comparable average passing rates (>97%) when the 3%/3 mm criterion was selected. The calculation with the single-source model and two-source model was slightly faster than the three-source model due to their simplicity
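
    The last step described here, obtaining the planar dose from the reconstructed in-air fluence, is a straightforward convolution with the measured pencil-beam kernel. The sketch below performs that convolution by FFT on a synthetic 10 x 10 cm open field with a Gaussian stand-in kernel; the field shape, kernel width, and grid are assumptions for illustration only, not the experimentally determined kernel of the study.

        import numpy as np

        # Sketch of the planar-dose step: dose = in-air fluence map convolved with a
        # pencil-beam kernel. The square field and Gaussian kernel are synthetic stand-ins.
        n, dx = 256, 0.1                                   # grid points, cm per pixel
        x = (np.arange(n) - n / 2) * dx
        X, Y = np.meshgrid(x, x)

        fluence = ((np.abs(X) < 5) & (np.abs(Y) < 5)).astype(float)   # 10x10 cm open field
        sigma = 0.3                                                    # cm, toy kernel width
        kernel = np.exp(-(X**2 + Y**2) / (2 * sigma**2))
        kernel /= kernel.sum()

        # FFT-based convolution (kernel centred on the grid, hence the ifftshift)
        dose = np.real(np.fft.ifft2(np.fft.fft2(fluence) * np.fft.fft2(np.fft.ifftshift(kernel))))
        dose /= dose.max()

        profile = dose[n // 2, :]                          # crossline profile through the axis
        penumbra = (x[profile > 0.2][-1] - x[profile > 0.8][-1]) * 10
        print(f"80%-20% penumbra width ~ {penumbra:.1f} mm")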

  10. Comparison of analytic source models for head scatter factor calculation and planar dose calculation for IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Yan Guanghua [Department of Nuclear and Radiological Engineering, University of Florida, Gainesville, FL 32611 (United States); Liu, Chihray; Lu Bo; Palta, Jatinder R; Li, Jonathan G [Department of Radiation Oncology, University of Florida, Gainesville, FL 32610-0385 (United States)

    2008-04-21

    The purpose of this study was to choose an appropriate head scatter source model for the fast and accurate independent planar dose calculation for intensity-modulated radiation therapy (IMRT) with MLC. The performance of three different head scatter source models regarding their ability to model head scatter and facilitate planar dose calculation was evaluated. A three-source model, a two-source model and a single-source model were compared in this study. In the planar dose calculation algorithm, in-air fluence distribution was derived from each of the head scatter source models while considering the combination of Jaw and MLC opening. Fluence perturbations due to tongue-and-groove effect, rounded leaf end and leaf transmission were taken into account explicitly. The dose distribution was calculated by convolving the in-air fluence distribution with an experimentally determined pencil-beam kernel. The results were compared with measurements using a diode array and passing rates with 2%/2 mm and 3%/3 mm criteria were reported. It was found that the two-source model achieved the best agreement on head scatter factor calculation. The three-source model and single-source model underestimated head scatter factors for certain symmetric rectangular fields and asymmetric fields, but similar good agreement could be achieved when monitor back scatter effect was incorporated explicitly. All the three source models resulted in comparable average passing rates (>97%) when the 3%/3 mm criterion was selected. The calculation with the single-source model and two-source model was slightly faster than the three-source model due to their simplicity.

  11. Charging in the environment of large spacecraft

    International Nuclear Information System (INIS)

    Lai, S.T.

    1993-01-01

    This paper discusses some potential problems of spacecraft charging as a result of interactions between a large spacecraft, such as the Space Station, and its environment. The induced electric field, due to the V×B effect, may be important for large spacecraft at low earth orbits. Differential charging, due to different properties of surface materials, may be significant when the spacecraft is partly in sunshine and partly in shadow. A triple-root potential jump condition may occur because of differential charging. Sudden onset of severe differential charging may occur when an electron or ion beam is emitted from the spacecraft. The beam may partially return to the "hot spots" on the spacecraft. Wake effects, due to blocking of ambient ion trajectories, may result in an undesirable negative potential region in the vicinity of a large spacecraft. Outgassing and exhaust may form a significant spacecraft-induced environment; ionization may occur. Spacecraft charging and discharging may affect the electronic components on board

  12. Source apportionment of airborne particulates through receptor modeling: Indian scenario

    Science.gov (United States)

    Banerjee, Tirthankar; Murari, Vishnu; Kumar, Manish; Raju, M. P.

    2015-10-01

    Airborne particulate chemistry is mostly governed by the associated sources, and apportionment of specific sources is essential to delineate explicit control strategies. The present submission initially deals with publications (1980s-2010s) of Indian origin which report regional heterogeneities of particulate concentrations with reference to associated species. Such meta-analyses clearly indicate the presence of reservoirs of both primary and secondary aerosols in different geographical regions. Further, the identification of specific signatory molecules for individual source categories was also evaluated in terms of scientific merit and repeatability. Source signatures mostly resemble international profiles, while in selected cases they lack appropriateness. In India, source apportionment (SA) of airborne particulates was initiated back in 1985 through factor analysis; however, principal component analysis (PCA) accounts for the major share of applications (34%), followed by enrichment factor (EF, 27%), chemical mass balance (CMB, 15%) and positive matrix factorization (PMF, 9%). Mainstream SA analyses identify earth crust and road dust resuspension (traced by Al, Ca, Fe, Na and Mg) as the principal source (6-73%), followed by vehicular emissions (traced by Fe, Cu, Pb, Cr, Ni, Mn, Ba and Zn; 5-65%), industrial emissions (traced by Co, Cr, Zn, V, Ni, Mn, Cd; 0-60%), fuel combustion (traced by K, NH4+, SO4-, As, Te, S, Mn; 4-42%), marine aerosols (traced by Na, Mg, K; 0-15%) and biomass/refuse burning (traced by Cd, V, K, Cr, As, TC, Na, K, NH4+, NO3-, OC; 1-42%). In most cases, temporal variations of individual source contributions for a specific geographic region exhibit radical heterogeneity, possibly due to the unscientific assignment of individual tracers to specific sources, exaggerated by methodological weaknesses, inappropriate sample sizes, the implications of secondary aerosols and inadequate emission inventories. Conclusively, a number of challenging
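
    Among the receptor models listed above, the chemical mass balance approach is the most direct to illustrate: measured species concentrations are expressed as a nonnegative linear combination of source profiles, and the contributions are obtained by constrained least squares. The profiles and ambient sample below are synthetic toy numbers, not Indian monitoring data.

        import numpy as np
        from scipy.optimize import nnls

        # Toy chemical mass balance (CMB) sketch: ambient species concentrations are
        # a nonnegative mix of source profiles. All numbers are invented for illustration.
        species = ["Al", "Ca", "Fe", "Zn", "Pb", "K"]
        profiles = np.array([                     # columns: crustal dust, traffic, biomass burning
            [0.30, 0.01, 0.00],                   # Al  (mass fraction of each source's PM)
            [0.20, 0.02, 0.01],                   # Ca
            [0.25, 0.10, 0.01],                   # Fe
            [0.01, 0.15, 0.02],                   # Zn
            [0.00, 0.10, 0.01],                   # Pb
            [0.02, 0.02, 0.30],                   # K
        ])
        ambient = np.array([3.2, 2.3, 3.4, 1.1, 0.6, 1.7])   # measured species, ug/m3

        contrib, residual = nnls(profiles, ambient)          # nonnegative least squares
        for name, c in zip(["crustal dust", "traffic", "biomass burning"], contrib):
            print(f"{name:16s} {c:6.1f} ug/m3 of PM")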

  13. Studies and modeling of cold neutron sources; Etude et modelisation des sources froides de neutron

    Energy Technology Data Exchange (ETDEWEB)

    Campioni, G

    2004-11-15

    With the purpose of updating knowledge in the field of cold neutron sources, the work of this thesis has been organized along the following three axes. First, the gathering of the specific information forming the material of this work. This body of knowledge covers the following fields: cold neutrons, cross-sections for the different cold moderators, flux slowing down, different measurements of the cold flux and, finally, issues in the thermal analysis of the problem. Secondly, the study and development of suitable computation tools. After an analysis of the problem, several tools have been planned, implemented and tested in the 3-dimensional radiation transport code Tripoli-4. In particular, a module of uncoupling, integrated in the official version of Tripoli-4, can perform Monte-Carlo parametric studies with a saving factor in CPU time reaching 50. A module of coupling, simulating neutron guides, has also been developed and implemented in the Monte-Carlo code McStas. Thirdly, achieving a complete study for the validation of the installed calculation chain. These studies focus on 3 cold sources currently in operation: SP1 of the Orphee reactor and 2 other sources (SFH and SFV) of the HFR at the Laue-Langevin Institute. These studies give examples of problems and methods for the design of future cold sources.

  14. RANS modeling of scalar dispersion from localized sources within a simplified urban-area model

    Science.gov (United States)

    Rossi, Riccardo; Capra, Stefano; Iaccarino, Gianluca

    2011-11-01

    The dispersion of a passive scalar downstream of a localized source within a simplified urban-like geometry is examined by means of RANS scalar flux models. The computations are conducted under conditions of neutral stability and for three different incoming wind directions (0°, 45°, 90°) at a roughness Reynolds number of Ret = 391. A Reynolds stress transport model is used to close the flow governing equations, whereas both the standard eddy-diffusivity closure and algebraic flux models are employed to close the transport equation for the passive scalar. The comparison with a DNS database shows that the algebraic scalar flux models are more reliable in predicting both the mean concentration and the plume structure. Since algebraic flux models do not substantially increase the computational effort, the results indicate that the use of a tensorial diffusivity can be a promising tool for dispersion simulations for the urban environment.
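
    For reference, the two scalar-flux closure families being compared take the following standard textbook forms (the exact formulation and model constants used in the study may differ):

        \overline{u_i' c'} \;=\; -\,\frac{\nu_t}{Sc_t}\,\frac{\partial \overline{C}}{\partial x_i}
        \qquad \text{(standard eddy-diffusivity closure)}

        \overline{u_i' c'} \;=\; -\,C_\theta\,\frac{k}{\varepsilon}\,\overline{u_i' u_j'}\,\frac{\partial \overline{C}}{\partial x_j}
        \qquad \text{(algebraic, tensorial-diffusivity form)}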

  15. Architecture for spacecraft operations planning

    Science.gov (United States)

    Davis, William S.

    1991-01-01

    A system which generates plans for the dynamic environment of space operations is discussed. This system synthesizes plans by combining known operations under a set of physical, functional, and temporal constraints from various plan entities, which are modeled independently but combine in a flexible manner to suit dynamic planning needs. This independence allows the generation of a single plan source which can be compiled and applied to a variety of agents. The architecture blends elements of temporal logic, nonlinear planning, and object-oriented constraint modeling to achieve its flexibility. This system was applied to the domain of Intravehicular Activity (IVA) maintenance and repair aboard the Space Station Freedom testbed.

  16. Modelling of novel light sources based on asymmetric heterostructures

    International Nuclear Information System (INIS)

    Afonenko, A.A.; Kononenko, V.K.; Manak, I.S.

    1995-01-01

    For asymmetric quantum-well heterojunction laser sources, processes of carrier injection into quantum wells are considered. In contrast to ordinary quantum-well light sources, active layers in the novel nanocrystalline systems have different thickness and/or compositions. In addition, wide-band gap barrier layers separating the quantum wells may have a linear or parabolic energy potential profile. For various kinds of the structures, mathematical simulation of dynamic response has been carried out. (author). 8 refs, 5 figs

  17. Source apportionment of fine particulate matter in China in 2013 using a source-oriented chemical transport model.

    Science.gov (United States)

    Shi, Zhihao; Li, Jingyi; Huang, Lin; Wang, Peng; Wu, Li; Ying, Qi; Zhang, Hongliang; Lu, Li; Liu, Xuejun; Liao, Hong; Hu, Jianlin

    2017-12-01

    China has been suffering high levels of fine particulate matter (PM2.5). Designing effective PM2.5 control strategies requires information about the contributions of different sources. In this study, a source-oriented Community Multiscale Air Quality (CMAQ) model was applied to quantitatively estimate the contributions of different source sectors to PM2.5 in China. Emissions of primary PM2.5 and the gas pollutants SO2, NOx, and NH3, which are precursors of particulate sulfate, nitrate, and ammonium (SNA, major PM2.5 components in China), from eight source categories (power plants, residential sources, industries, transportation, open burning, sea salt, windblown dust and agriculture) were separately tracked to determine their contributions to PM2.5 in 2013. The industrial sector is the largest source of SNA in Beijing, Xi'an and Chongqing, followed by agriculture and power plants. Residential emissions are also important sources of SNA, especially in winter when severe pollution events often occur. Nationally, the contributions of different source sectors to annual total PM2.5 from high to low are industries, residential sources, agriculture, power plants, transportation, windblown dust, open burning and sea salt. Provincially, residential sources and industries are the major anthropogenic sources of primary PM2.5, while industries, agriculture, power plants and transportation are important for SNA in most provinces. For total PM2.5, residential and industrial emissions are the top two sources, with a combined contribution of 40-50% in most provinces. The contributions of power plants and agriculture to total PM2.5 are about 10% each. Secondary organic aerosol accounts for about 10% of annual PM2.5 in most provinces, with higher contributions in southern provinces such as Yunnan (26%), Hainan (25%) and Taiwan (21%). Windblown dust is an important source in western provinces such as Xizang (55% of total PM2.5), Qinghai (74%), Xinjiang (59

  18. Source apportionment of PM2.5 in North India using source-oriented air quality models

    International Nuclear Information System (INIS)

    Guo, Hao; Kota, Sri Harsha; Sahu, Shovan Kumar; Hu, Jianlin; Ying, Qi; Gao, Aifang; Zhang, Hongliang

    2017-01-01

    In recent years, severe pollution events were observed frequently in India, especially in its capital, New Delhi. However, limited studies have been conducted to understand the sources of high pollutant concentrations for designing effective control strategies. In this work, source-oriented versions of the Community Multi-scale Air Quality (CMAQ) model with the Emissions Database for Global Atmospheric Research (EDGAR) were applied to quantify the contributions of eight source types (energy, industry, residential, on-road, off-road, agriculture, open burning and dust) to fine particulate matter (PM2.5) and its components, including primary PM (PPM) and secondary inorganic aerosol (SIA), i.e. sulfate, nitrate and ammonium ions, in Delhi and three surrounding cities, Chandigarh, Lucknow and Jaipur, in 2015. PPM mass is dominated by industry and residential activities (>60%). The energy (∼39%) and industry (∼45%) sectors contribute significantly to PPM south of Delhi, which reaches a maximum of 200 μg/m3 during winter. Unlike PPM, SIA concentrations from different sources are more heterogeneous. High SIA concentrations (∼25 μg/m3) in south Delhi and central Uttar Pradesh were mainly attributed to the energy, industry and residential sectors. Agriculture is more important for SIA than for PPM, and the contributions of on-road and open burning to SIA are also higher than to PPM. The residential sector contributes the most to total PM2.5 (∼80 μg/m3), followed by industry (∼70 μg/m3) in North India. Energy and agriculture contribute ∼25 μg/m3 and ∼16 μg/m3 to total PM2.5, while SOA contributes <5 μg/m3. In Delhi, industry and residential activities contribute 80% of total PM2.5. - Highlights: • Sources of PM2.5 in North India were quantified by source-oriented CMAQ. • Industrial/residential activities are the dominating sources (60–70%) for PPM. • Energy/agriculture are the most important sources (30–40%) for SIA. • Strong seasonal

  19. Water Quality Assessment of River Soan (Pakistan) and Source Apportionment of Pollution Sources Through Receptor Modeling.

    Science.gov (United States)

    Nazeer, Summya; Ali, Zeshan; Malik, Riffat Naseem

    2016-07-01

    The present study was designed to determine the spatiotemporal patterns in the water quality of the River Soan using multivariate statistics. A total of 26 sites were surveyed along the River Soan and its associated tributaries during the pre- and post-monsoon seasons in 2008. Hierarchical agglomerative cluster analysis (HACA) classified the sampling sites into three groups according to their degree of pollution, ranging from least to high degradation of water quality. Discriminant function analysis (DFA) revealed that alkalinity, orthophosphates, nitrates, ammonia, salinity, and Cd were the variables that significantly discriminate among the three groups identified by HACA. Temporal trends identified through DFA revealed that COD, DO, pH, Cu, Cd, and Cr account for the major seasonal variations in water quality. PCA/FA identified six factors as potential sources of pollution of the River Soan. Absolute principal component scores combined with multiple linear regression (APCS-MLR) further explained the percent contribution from each source. Heavy metals were largely added through industrial activities (28%) and sewage waste (28%), nutrients through agricultural runoff (35%) and sewage waste (28%), organic pollution through sewage waste (27%) and urban runoff (17%), and macroelements through urban runoff (39%), mineralization and sewage waste (30%). The present study showed that anthropogenic activities are the major source of variations in the River Soan. In order to address the water quality issues, implementation of effective waste management measures is needed.
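
    For readers unfamiliar with the APCS-MLR step mentioned above, the following Python sketch illustrates the general idea (my own illustration under assumptions, not code from the study): factor scores are shifted by the scores of an artificial zero-concentration sample to obtain absolute scores, which then serve as regressors for each measured variable, and the fitted coefficients times the mean absolute scores give mean per-source contributions. The number of factors and all names are placeholders.

      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression

      def apcs_mlr(X, n_factors):
          """Illustrative APCS-MLR; X is a (samples x variables) concentration array."""
          scaler = StandardScaler().fit(X)
          Z = scaler.transform(X)                        # standardized concentrations
          pca = PCA(n_components=n_factors).fit(Z)
          scores = pca.transform(Z)                      # factor scores of real samples
          z0 = (np.zeros(X.shape[1]) - scaler.mean_) / scaler.scale_
          scores0 = pca.transform(z0.reshape(1, -1))     # artificial zero-concentration sample
          apcs = scores - scores0                        # absolute principal component scores
          contributions = {}
          for j in range(X.shape[1]):                    # regress each variable on the APCS
              reg = LinearRegression().fit(apcs, X[:, j])
              contributions[j] = reg.coef_ * apcs.mean(axis=0)   # mean source contributions
          return contributions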

  20. eTOXlab, an open source modeling framework for implementing predictive models in production environments.

    Science.gov (United States)

    Carrió, Pau; López, Oriol; Sanz, Ferran; Pastor, Manuel

    2015-01-01

    Computational models based on Quantitative Structure-Activity Relationship (QSAR) methodologies are widely used tools for predicting the biological properties of new compounds. In many instances, such models are used routinely in industry (e.g. the food, cosmetic or pharmaceutical industries) for the early assessment of the biological properties of new compounds. However, most of the tools currently available for developing QSAR models are not well suited to supporting the whole QSAR model life cycle in production environments. We have developed eTOXlab, an open source modeling framework designed to be used at the core of a self-contained virtual machine that can be easily deployed in production environments, providing predictions as web services. eTOXlab consists of a collection of object-oriented Python modules with methods mapping common tasks of standard modeling workflows. This framework allows building and validating QSAR models as well as predicting the properties of new compounds using either a command line interface or a graphical user interface (GUI). Simple models can be easily generated by setting a few parameters, while more complex models can be implemented by overriding pieces of the original source code. eTOXlab benefits from the object-oriented capabilities of Python to provide high flexibility: any model implemented using eTOXlab inherits the features implemented in the parent model, like common tools and services or the automatic exposure of the models as prediction web services. The particular eTOXlab architecture as a self-contained, portable prediction engine allows building models with confidential information within corporate facilities, which can be safely exported and used for prediction without disclosing the structures of the training series. The software presented here provides full support to the specific needs of users that want to develop, use and maintain predictive models in corporate environments. The technologies used by e

  1. Application of square-root filtering for spacecraft attitude control

    Science.gov (United States)

    Sorensen, J. A.; Schmidt, S. F.; Goka, T.

    1978-01-01

    Suitable digital algorithms are developed and tested for providing on-board precision attitude estimation and pointing control for potential use in the Landsat-D spacecraft. These algorithms provide pointing accuracy of better than 0.01 deg. To obtain the necessary precision with efficient software, a six-state-variable square-root Kalman filter combines two star tracker measurements to update attitude estimates obtained from processing three gyro outputs. The validity of the estimation and control algorithms is established, and the sensitivity of their performance to various error sources and software parameters is investigated by detailed digital simulation. Spacecraft computer memory, cycle time, and accuracy requirements are estimated.
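
    As background for the filtering approach described above (and not code from the original report), a square-root Kalman filter propagates a square-root factor of the covariance instead of the covariance itself, which preserves positive definiteness on limited-precision flight computers. A minimal measurement-update sketch in Python, using the QR-based array form and with all matrix names assumed for illustration:

      import numpy as np

      def srkf_measurement_update(x, S, H, R_sqrt, z):
          """Square-root Kalman filter measurement update (array/QR form).
          x: state (n,); S: covariance square root with P = S @ S.T (n x n);
          H: measurement matrix (m x n); R_sqrt: square root of measurement
          noise covariance (m x m); z: measurement (m,)."""
          n, m = S.shape[0], H.shape[0]
          # Assemble the pre-array and triangularize it with a QR factorization.
          pre = np.block([[R_sqrt,           H @ S],
                          [np.zeros((n, m)), S    ]])
          post = np.linalg.qr(pre.T)[1].T          # lower-triangular post-array
          X = post[:m, :m]                         # square root of innovation covariance
          Y = post[m:, :m]
          S_new = post[m:, m:]                     # updated covariance square root
          K = Y @ np.linalg.inv(X)                 # Kalman gain
          x_new = x + K @ (z - H @ x)
          return x_new, S_new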

  2. Modelling [CAS - CERN Accelerator School, Ion Sources, Senec (Slovakia), 29 May - 8 June 2012

    International Nuclear Information System (INIS)

    Spädtke, P

    2013-01-01

    Modeling of technical machines became a standard technique once computers became powerful enough to handle the amount of data relevant to the specific system. Simulation of an existing physical device requires knowledge of all relevant quantities. Electric fields given by the surrounding boundary as well as magnetic fields caused by coils or permanent magnets have to be known. Internal sources for both fields are sometimes taken into account, such as space-charge forces or the internal magnetic field of a moving bunch of charged particles. The solver routines used are briefly described, and some benchmarking is shown to estimate the necessary computing times for different problems. Different types of charged particle sources are presented together with suitable models describing their physics. Electron guns are covered as well as different ion sources (volume ion sources, laser ion sources, Penning ion sources, electron resonance ion sources, and H- sources) together with some remarks on beam transport. (author)

  3. A Research on the Electrical Test Fault Diagnostic and Data Mining of a Manned Spacecraft

    Directory of Open Access Journals (Sweden)

    Yang Feng

    2017-01-01

    Full Text Available The paper introduces a modeling method and modeling tool for the fault diagnosis of manned spacecraft; a multi-signal flow graph model of a piece of manned space equipment was established using this method. A framework for the fault detection and diagnosis system of a manned spacecraft is proposed, in which the functions of the ground system and of the spacecraft are clearly defined, and the structure of each functional module is given separately. Finally, the tool is used to build the fault detection and diagnosis system, providing a reference for the application of fault diagnosis methods to manned spacecraft.

  4. Monte Carlo model for a thick target T(D,n)4 He neutron source

    International Nuclear Information System (INIS)

    Webster, W.M.

    1976-01-01

    A brief description is given of a calculational model developed to simulate a T(D,n) 4 He neutron source which is anisotropic in energy and intensity. The model also provides a means for including the time dependency of the neutron source. Although the model has been applied specifically to the Lawrence Livermore Laboratory ICT accelerator, the technique is general and can be applied to any similar neutron source

  5. Quick Spacecraft Thermal Analysis Tool, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — For spacecraft design and development teams concerned with cost and schedule, the Quick Spacecraft Thermal Analysis Tool (QuickSTAT) is an innovative software suite...

  6. A 1D ion species model for an RF driven negative ion source

    Science.gov (United States)

    Turner, I.; Holmes, A. J. T.

    2017-08-01

    A one-dimensional model for an RF driven negative ion source has been developed based on an inductive discharge. The RF source differs from traditional filament and arc ion sources because there are no primary electrons present, and is simply composed of an antenna region (driver) and a main plasma discharge region. However, the model does still make use of the classical plasma transport equations for particle energy and flow, which have previously worked well for modelling DC driven sources. The model has been developed primarily to model the Small Negative Ion Facility (SNIF) ion source at CCFE, but may be easily adapted to model other RF sources. Currently, the model considers hydrogen ion species, and provides a detailed description of the plasma parameters along the source axis, i.e. plasma temperature, density and potential, as well as current densities and species fluxes. The inputs to the model are currently the RF power, the magnetic filter field and the source gas pressure. Results from the model are presented and, where possible, compared to existing experimental data from SNIF, with varying RF power and source pressure.

  7. Characteristics and Source Apportionment of Marine Aerosols over East China Sea Using a Source-oriented Chemical Transport Model

    Science.gov (United States)

    Kang, M.; Zhang, H.; Fu, P.

    2017-12-01

    Marine aerosols exert a strong influence on global climate change and biogeochemical cycling, as oceans cover more than 70% of the Earth's surface. However, investigations of marine aerosols are relatively limited at present due to the difficulty and inconvenience of sampling them as well as their diverse sources. The East China Sea (ECS), lying over the broad shelf of the western North Pacific, is adjacent to the Asian mainland, where continental-scale air pollution could impose a heavy load on the marine atmosphere through long-range atmospheric transport. Thus, the contributions of major sources to marine aerosols need to be identified for policy makers to develop cost-effective control strategies. In this work, a source-oriented version of the Community Multiscale Air Quality (CMAQ) model, which can directly track the contributions from multiple emission sources to marine aerosols, is used to investigate the contributions from power, industry, transportation, residential, biogenic and biomass burning sources to marine aerosols over the ECS in May and June 2014. The model simulations indicate significant spatial and temporal variations of concentrations as well as of the source contributions. This study demonstrates that the Asian continent can greatly affect the marine atmosphere through long-range transport.

  8. Time-dependent polar distribution of outgassing from a spacecraft

    Science.gov (United States)

    Scialdone, J. J.

    1974-01-01

    A technique has been developed to obtain a characterization of the self-generated environment of a spacecraft and its variation with time, angular position, and distance. The density, pressure, outgassing flux, total weight loss, and other important parameters were obtained from data provided by two mass-measuring crystal microbalances, mounted back to back, at a distance of 1 m from the spacecraft equivalent surface. A major outgassing source existed at an angular position of 300 deg to 340 deg, near the rocket motor, while the weakest source was at the antennas. The strongest source appeared to be caused by a material diffusion process which produced a directional density at 1 m distance of about 1.6 x 10^11 molecules/cm^3 after 1 hr in vacuum, decaying to 1.6 x 10^9 molecules/cm^3 after 200 hr. The total average outgassing flux at the same distance and during the same time span changed from 1.2 x 10^-7 to 1.4 x 10^-10 g/cm^2/s. These values are three times as large at the spacecraft surface. Total weight loss was 537 g after 10 hr and about 833 g after 200 hr. Self-contamination of the spacecraft was equivalent to that in orbit at about 300-km altitude.

  9. Multiple spacecraft Michelson stellar interferometer

    Science.gov (United States)

    Stachnik, R. V.; Arnold, D.; Melroy, P.; Mccormack, E. F.; Gezari, D. Y.

    1984-01-01

    Results of an orbital analysis and performance assessment of SAMSI (Spacecraft Array for Michelson Spatial Interferometry) are presented. The device considered includes two one-meter telescopes in orbits which are identical except for slightly different inclinations; the telescopes achieve separations as large as 10 km and relay starlight to a central station which has a one-meter optical delay line in one interferometer arm. It is shown that a 1000-km altitude, zero mean inclination orbit affords natural scanning of the 10-km baseline with departures from optical pathlength equality which are well within the corrective capacity of the optical delay line. Electric propulsion is completely adequate to provide the required spacecraft motions, principally those needed for repointing. Resolution of 0.00001 arcsec and magnitude limits of 15 to 20 are achievable.

  10. Spacecraft Tests of General Relativity

    Science.gov (United States)

    Anderson, John D.

    1997-01-01

    Current spacecraft tests of general relativity depend on coherent radio tracking referred to atomic frequency standards at the ground stations. This paper addresses the possibility of improved tests using essentially the current system, but with the added possibility of a space-borne atomic clock. Outside of the obvious measurement of the gravitational frequency shift of the spacecraft clock, a successor to the suborbital flight of a Scout D rocket in 1976 (GP-A Project), other metric tests would benefit most directly by a possible improved sensitivity for the reduced coherent data. For purposes of illustration, two possible missions are discussed. The first is a highly eccentric Earth orbiter, and the second a solar-conjunction experiment to measure the Shapiro time delay using coherent Doppler data instead of the conventional ranging modulation.

  11. Attitude Fusion Techniques for Spacecraft

    DEFF Research Database (Denmark)

    Bjarnø, Jonas Bækby

    Spacecraft platform instability constitutes one of the most significant limiting factors in hyperacuity pointing and tracking applications, yet the demand for accurate, timely and reliable attitude information is ever increasing. The PhD research project described within this dissertation has...... served to investigate the solution space for augmenting the DTU μASC stellar reference sensor with a miniature Inertial Reference Unit (IRU), thereby obtaining improved bandwidth, accuracy and overall operational robustness of the fused instrument. Present day attitude determination requirements are met...... of the instrument, and affecting operations during agile and complex spacecraft attitude maneuvers. As such, there exists a theoretical foundation for augmenting the high frequency performance of the μASC instrument, by harnessing the complementary nature of optical stellar reference and inertial sensor technology...

  12. Logistic Regression Modeling of Diminishing Manufacturing Sources for Integrated Circuits

    National Research Council Canada - National Science Library

    Gravier, Michael

    1999-01-01

    .... The research identified logistic regression as a powerful tool for analysis of DMSMS and further developed twenty models attempting to identify the "best" way to model and predict DMSMS using logistic regression...

  13. Autonomous spacecraft rendezvous and docking

    Science.gov (United States)

    Tietz, J. C.; Almand, B. J.

    A storyboard display is presented which summarizes work done recently in design and simulation of autonomous video rendezvous and docking systems for spacecraft. This display includes: photographs of the simulation hardware, plots of chase vehicle trajectories from simulations, pictures of the docking aid including image processing interpretations, and drawings of the control system strategy. Viewgraph-style sheets on the display bulletin board summarize the simulation objectives, benefits, special considerations, approach, and results.

  14. Nonlinearity-induced spacecraft tumbling

    International Nuclear Information System (INIS)

    Amos, A.K.

    1994-01-01

    An existing tumbling criterion for the dumbbell satellite in planar librations is reexamined and modified to reflect a recently identified tumbling mode associated with the horizontal attitude orientation. It is shown that for any initial attitude there exists a critical angular rate below which the motion is oscillatory and harmonic and beyond which a continuous tumbling will ensue. If the angular rate is at the critical value the spacecraft drifts towards the horizontal attitude from which a spontaneous periodic tumbling occurs

  15. Integrating standard operating procedures with spacecraft automation, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Spacecraft automation has the potential to assist crew members and spacecraft operators in managing spacecraft systems during extended space missions. Automation can...

  16. Modeling a point-source release of 1,1,1-trichloroethane using EPA's SCREEN model

    International Nuclear Information System (INIS)

    Henriques, W.D.; Dixon, K.R.

    1994-01-01

    Using data from the Environmental Protection Agency's Toxic Release Inventory 1988 (EPA TRI88), pollutant concentration estimates were modeled for a point-source air release of 1,1,1-trichloroethane at the Savannah River Plant located in Aiken, South Carolina. Estimates were calculated using the EPA's SCREEN model under typical meteorological conditions to determine the maximum impact of the plume under different mixing conditions for locations within 100 meters of the stack. Input data for the SCREEN model were then manipulated to simulate the impact of the release under urban conditions (for the purpose of assessing future land-use considerations) and under flare release options to determine whether these parameters lessen or increase the probability of human or wildlife exposure to significant concentrations. The results were then compared to EPA reference concentrations (RfC) in order to assess the size of the buffer zone around the stack within which levels may exceed this safety threshold
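
    As context for the type of screening calculation described above (an illustrative sketch only, not the SCREEN model itself), a basic Gaussian plume estimate of ground-level concentration from an elevated point source, with total reflection at the ground, can be written as below; the emission rate, stack height and dispersion coefficients are placeholder values:

      import numpy as np

      def gaussian_plume_groundlevel(Q, u, H, sigma_y, sigma_z, y=0.0):
          """Ground-level concentration (g/m^3) from an elevated point source,
          with total reflection at the ground folded in.
          Q: emission rate (g/s), u: wind speed (m/s), H: effective stack height (m),
          sigma_y, sigma_z: lateral/vertical dispersion coefficients (m) at the
          downwind distance of interest, y: crosswind offset (m)."""
          return (Q / (np.pi * u * sigma_y * sigma_z)
                  * np.exp(-0.5 * (y / sigma_y) ** 2)
                  * np.exp(-0.5 * (H / sigma_z) ** 2))

      # Placeholder example: 1 g/s release, 3 m/s wind, 30 m effective stack height,
      # dispersion coefficients chosen arbitrarily for illustration.
      print(gaussian_plume_groundlevel(Q=1.0, u=3.0, H=30.0, sigma_y=20.0, sigma_z=10.0))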

  17. Open Source Software Success Model for Iran: End-User Satisfaction Viewpoint

    Directory of Open Access Journals (Sweden)

    Ali Niknafs

    2012-03-01

    Full Text Available Open source software development is a notable option for software companies. In recent years, the many advantages of this type of software have driven a move toward it in Iran. National security concerns, international restrictions, software and service costs, and other problems have intensified the importance of using this software. Users and their viewpoints are the critical success factor in software plans, but there is no appropriate model for the open source software case in Iran. This research attempts to develop a model for measuring open source software success in Iran. The model was tested using data gathered from open source users through an online survey. The results showed that the components with a positive effect on open source success were user satisfaction, the quality of open source community services, open source quality, copyright and security.

  18. Power-law thermal model for blackbody sources

    International Nuclear Information System (INIS)

    Del Grande, N.K.

    1979-01-01

    The spectral radiant emittance W_E from a blackbody at a temperature kT, for photons at energies E above the spectral peak (2.82144 kT), varies as (kT)^(E/kT). This power-law temperature dependence, an approximation of Planck's radiation law, may have applications for measuring the emissivity of sources emitting in the soft x-ray region
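
    A brief check of the quoted scaling (my own sketch, not part of the record): in the Wien regime E >> kT, Planck's law gives a logarithmic derivative with respect to kT equal to E/kT, i.e. a local power law in temperature:

      W_E \propto \frac{E^3}{e^{E/kT} - 1} \approx E^3\, e^{-E/kT}
      \quad\Rightarrow\quad
      \frac{\partial \ln W_E}{\partial \ln(kT)} = \frac{E}{kT}
      \quad\Rightarrow\quad
      W_E \propto (kT)^{E/kT}\ \text{(locally in } kT\text{)}.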

  19. From sub-source to source: Interpreting results of biological trace investigations using probabilistic models

    NARCIS (Netherlands)

    Oosterman, W.T.; Kokshoorn, B.; Maaskant-van Wijk, P.A.; de Zoete, J.

    2015-01-01

    The current method of reporting a putative cell type is based on a non-probabilistic assessment of test results by the forensic practitioner. Additionally, the association between donor and cell type in mixed DNA profiles can be exceedingly complex. We present a probabilistic model for

  20. Reliability of Coulomb stress changes inferred from correlated uncertainties of finite-fault source models

    KAUST Repository

    Woessner, J.; Jonsson, Sigurjon; Sudhaus, H.; Baumann, C.

    2012-01-01

    Static stress transfer is one physical mechanism to explain triggered seismicity. Coseismic stress-change calculations strongly depend on the parameterization of the causative finite-fault source model. These models are uncertain due

  1. Spacecraft computer technology at Southwest Research Institute

    Science.gov (United States)

    Shirley, D. J.

    1993-01-01

    Southwest Research Institute (SwRI) has developed and delivered spacecraft computers for a number of different near-Earth-orbit spacecraft including shuttle experiments and SDIO free-flyer experiments. We describe the evolution of the basic SwRI spacecraft computer design from those weighing in at 20 to 25 lb and using 20 to 30 W to newer models weighing less than 5 lb and using only about 5 W, yet delivering twice the processing throughput. Because of their reduced size, weight, and power, these newer designs are especially applicable to planetary instrument requirements. The basis of our design evolution has been the availability of more powerful processor chip sets and the development of higher density packaging technology, coupled with more aggressive design strategies in incorporating high-density FPGA technology and use of high-density memory chips. In addition to reductions in size, weight, and power, the newer designs also address the necessity of survival in the harsh radiation environment of space. Spurred by participation in such programs as MSTI, LACE, RME, Delta 181, Delta Star, and RADARSAT, our designs have evolved in response to program demands to be small, low-powered units, radiation tolerant enough to be suitable for both Earth-orbit microsats and for planetary instruments. Present designs already include MIL-STD-1750 and Multi-Chip Module (MCM) technology with near-term plans to include RISC processors and higher-density MCM's. Long term plans include development of whole-core processors on one or two MCM's.

  2. Cassini Spacecraft Uncertainty Analysis Data and Methodology Review and Update/Volume 1: Updated Parameter Uncertainty Models for the Consequence Analysis

    Energy Technology Data Exchange (ETDEWEB)

    WHEELER, TIMOTHY A.; WYSS, GREGORY D.; HARPER, FREDERICK T.

    2000-11-01

    Uncertainty distributions for specific parameters of the Cassini General Purpose Heat Source Radioisotope Thermoelectric Generator (GPHS-RTG) Final Safety Analysis Report consequence risk analysis were revised and updated. The revisions and updates were done for all consequence parameters for which relevant information exists from the joint project on Probabilistic Accident Consequence Uncertainty Analysis by the United States Nuclear Regulatory Commission and the Commission of European Communities.

  3. The Design of a Fire Source in Scale-Model Experiments with Smoke Ventilation

    DEFF Research Database (Denmark)

    Nielsen, Peter Vilhelm; Brohus, Henrik; la Cour-Harbo, H.

    2004-01-01

    The paper describes the design of a fire and a smoke source for scale-model experiments with smoke ventilation. It is only possible to work with scale-model experiments where the Reynolds number is reduced compared to full scale, and it is demonstrated that special attention to the fire source...... (heat and smoke source) may improve the possibility of obtaining Reynolds number independent solutions with a fully developed flow. The paper shows scale-model experiments for the Ofenegg tunnel case. Design of a fire source for experiments with smoke ventilation in a large room and smoke movement...

  4. [Source apportionment of soil heavy metals in Jiapigou goldmine based on the UNMIX model].

    Science.gov (United States)

    Ai, Jian-chao; Wang, Ning; Yang, Jing

    2014-09-01

    The paper determines the concentrations of 16 metal elements in soil samples collected in the Jiapigou goldmine on the upper Songhua River. The UNMIX model, recommended by the US EPA for obtaining source apportionment results, was applied in this study, and Cd, Hg, Pb and Ag concentration contour maps were generated using the Kriging interpolation method to verify the results. The main conclusions of this study are: (1) the concentrations of Cd, Hg, Pb and Ag exceed the Jilin Province soil background values and are clearly enriched in the soil samples; (2) the UNMIX model resolved four pollution sources: source 1 represents human activities of transportation, ore mining and garbage, with a contribution of 39.1%; source 2 represents the contribution of rock weathering and biological effects, with a contribution of 13.87%; source 3 is a combined source of soil parent material and chemical fertilizer, with a contribution of 23.93%; source 4 represents iron ore mining and transportation sources, with a contribution of 22.89%; (3) the UNMIX model results are in accordance with the survey of local land-use types, human activities, and the Cd, Hg and Pb content distributions.

  5. Endangered Butterflies as a Model System for Managing Source Sink Dynamics on Department of Defense Lands

    Science.gov (United States)

    used three species of endangered butterflies as a model system to rigorously investigate the source-sink dynamics of species being managed on military...lands. Butterflies have numerous advantages as models for source-sink dynamics, including rapid generation times and relatively limited dispersal, but...they are subject to the same processes that determine source-sink dynamics of longer-lived, more vagile taxa. 1.2 Technical Approach: For two of our

  6. Challenges for Knowledge Management in the Context of IT Global Sourcing Models Implementation

    OpenAIRE

    Perechuda , Kazimierz; Sobińska , Małgorzata

    2014-01-01

    Part 2: Models and Functioning of Knowledge Management; International audience; The article gives a literature overview of the current challenges connected with the implementation of the newest IT sourcing models. In the dynamic environment, organizations are required to build their competitive advantage not only on their own resources, but also on resources commissioned from external providers, accessed through various forms of sourcing, including the sourcing of IT services. This paper pres...

  7. Simultaneous EEG Source and Forward Model Reconstruction (SOFOMORE) using a Hierarchical Bayesian Approach

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2011-01-01

    We present an approach to handle forward model uncertainty for EEG source reconstruction. A stochastic forward model representation is motivated by the many random contributions to the path from sources to measurements including the tissue conductivity distribution, the geometry of the cortical s...

  8. Fine-Grained Energy Modeling for the Source Code of a Mobile Application

    DEFF Research Database (Denmark)

    Li, Xueliang; Gallagher, John Patrick

    2016-01-01

    The goal of an energy model for source code is to lay a foundation for the application of energy-aware programming techniques. State of the art solutions are based on source-line energy information. In this paper, we present an approach to constructing a fine-grained energy model which is able...

  9. Magnetospheric magnetic field modelling for the 2011 and 2012 HST Saturn aurora campaigns – implications for auroral source regions

    Directory of Open Access Journals (Sweden)

    E. S. Belenkaya

    2014-06-01

    Full Text Available A unique set of images of Saturn's northern polar UV aurora was obtained by the Hubble Space Telescope in 2011 and 2012 at times when the Cassini spacecraft was located in the solar wind just upstream of Saturn's bow shock. This rare situation provides an opportunity to use the Kronian paraboloid magnetic field model to examine source locations of the bright auroral features by mapping them along field lines into the magnetosphere, taking account of the interplanetary magnetic field (IMF) measured near-simultaneously by Cassini. It is found that the persistent dawn arc maps to closed field lines in the dawn to noon sector, with an equatorward edge generally located in the inner part of the ring current, typically at ~ 7 Saturn radii (RS) near dawn, and a poleward edge that maps variously between the centre of the ring current and beyond its outer edge at ~ 15 RS, depending on the latitudinal width of the arc. This location, together with a lack of response in properties to the concurrent IMF, suggests a principal connection with ring-current and nightside processes. The higher-latitude patchy auroras observed intermittently near to noon and at later local times extending towards dusk are instead found to straddle the model open–closed field boundary, thus mapping along field lines to the dayside outer magnetosphere and magnetopause. These emissions, which occur preferentially for northward IMF directions, are thus likely associated with reconnection and open-flux production at the magnetopause. One image for southward IMF also exhibits a prominent patch of very high latitude emissions extending poleward of patchy dawn arc emissions in the pre-noon sector. This is found to lie centrally within the region of open model field lines, suggesting an origin in the current system associated with lobe reconnection, similar to that observed in the terrestrial magnetosphere for northward IMF.

  10. Modeling of magnetically enhanced capacitively coupled plasma sources: Ar discharges

    International Nuclear Information System (INIS)

    Kushner, Mark J.

    2003-01-01

    Magnetically enhanced capacitively coupled plasma sources use transverse static magnetic fields to modify the performance of low pressure radio frequency discharges. Magnetically enhanced reactive ion etching (MERIE) sources typically use magnetic fields of tens to hundreds of Gauss parallel to the substrate to increase the plasma density at a given pressure or to lower the operating pressure. In this article results from a two-dimensional hybrid-fluid computational investigation of MERIE reactors with plasmas sustained in argon are discussed for an industrially relevant geometry. The reduction in electron cross field mobility as the magnetic field increases produces a systematic decrease in the dc bias (becoming more positive). This decrease is accompanied by a decrease in the energy and increase in angular spread of the ion flux to the substrate. Similar trends are observed when decreasing pressure for a constant magnetic field. Although for constant power the magnitudes of ion fluxes to the substrate increase with moderate magnetic fields, the fluxes decreased at larger magnetic fields. These trends are due, in part, to a reduction in the contributions of more efficient multistep ionization

  11. Mathematical models of thermohydraulic disturbance sources in the NPP circuits

    International Nuclear Information System (INIS)

    Proskuryakov, K.N.

    1999-01-01

    Methods and means for the diagnostics of equipment and processes at NPPs, which allow the safety and economic efficiency of nuclear power plant operation to be substantially increased, are considered. The development of mathematical models describing the occurrence and propagation of disturbances is presented

  12. Logistic Regression Modeling of Diminishing Manufacturing Sources for Integrated Circuits

    National Research Council Canada - National Science Library

    Gravier, Michael

    1999-01-01

    .... This thesis draws on available data from the electronics integrated circuit industry to attempt to assess whether statistical modeling offers a viable method for predicting the presence of DMSMS...

  13. Computer modelling of radioactive source terms at a tokamak reactor

    International Nuclear Information System (INIS)

    Meide, A.

    1984-12-01

    The Monte Carlo code MCNP has been used to create a simple three-dimensional mathematical model representing 1/12 of a tokamak fusion reactor for studies of the exposure rate level from neutrons as well as gamma rays from the activated materials, and for later estimates of the consequences to the environment, public, and operating personnel. The model is based on the recommendations from the NET/INTOR workshops. (author)

  14. Considering a point-source in a regional air pollution model

    Energy Technology Data Exchange (ETDEWEB)

    Lipphardt, M.

    1997-06-19

    This thesis deals with the development and validation of a point-source plume model, with the aim of refining the representation of intensive point-source emissions in regional-scale air quality models. The plume is modelled at four levels of increasing complexity, from a modified Gaussian plume model to the Freiberg and Lusis ring model. Plume elevation is determined by Netterville's plume rise model, using turbulence and atmospheric stability parameters. A model for the effect of fine-scale turbulence on the mean concentrations in the plume is developed and integrated into the ring model. A comparison between results with and without micro-mixing shows the importance of this effect in a chemically reactive plume. The plume model is integrated into the Eulerian transport/chemistry model AIRQUAL, using an interface between AIRQUAL and the sub-model, and interactions between the two scales are described. A simulation of an air pollution episode over Paris is carried out, showing that the use of such a sub-scale model improves the accuracy of the air quality model

  15. Standardization and Economics of Nuclear Spacecraft, Final Report, Phase I, Sense Study

    Energy Technology Data Exchange (ETDEWEB)

    1973-03-01

    Feasibility and cost benefits of nuclear-powered standardized spacecraft are investigated. The study indicates that two shuttle-launched nuclear-powered spacecraft should be able to serve the majority of unmanned NASA missions anticipated for the 1980's. The standard spacecraft include structure, thermal control, power, attitude control, some propulsion capability and tracking, telemetry, and command subsystems. One spacecraft design, powered by the radioisotope thermoelectric generator, can serve missions requiring up to 450 watts. The other spacecraft design, powered by similar nuclear heat sources in a Brayton-cycle generator, can serve missions requiring up to 21000 watts. Design concepts and trade-offs are discussed. The conceptual designs selected are presented and successfully tested against a variety of missions. The thermal design is such that both spacecraft are capable of operating in any earth orbit and any orientation without modification. Three-axis stabilization is included. Several spacecraft can be stacked in the shuttle payload compartment for multi-mission launches. A reactor-powered thermoelectric generator system, operating at an electric power level of 5000 watts, is briefly studied for applicability to two test missions of diverse requirements. A cost analysis indicates that use of the two standardized spacecraft offers sizable savings in comparison with specially designed solar-powered spacecraft. There is a duplicate copy.

  16. Modeling the NPE with finite sources and empirical Green's functions

    Energy Technology Data Exchange (ETDEWEB)

    Hutchings, L.; Kasameyer, P.; Goldstein, P. [Lawrence Livermore National Lab., CA (United States)]; and others

    1994-12-31

    In order to better understand the source characteristics of both nuclear and chemical explosions for purposes of discrimination, we have modeled the NPE chemical explosion as a finite source and with empirical Green's functions. Seismograms are synthesized at four sites to test the validity of source models. We use a smaller chemical explosion detonated in the vicinity of the working point to obtain empirical Green's functions. Empirical Green's functions contain all the linear information of the geology along the propagation path and recording site, which are identical for chemical or nuclear explosions, and therefore reduce the variability in modeling the source of the larger event. We further constrain the solution to have the overall source duration obtained from point-source deconvolution results. In modeling the source, we consider both an elastic source on a spherical surface and an inelastic expanding spherical volume source. We found that the spherical volume solution provides better fits to observed seismograms. The potential to identify secondary sources was examined, but the resolution is too poor to be definitive.
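
    To illustrate the empirical Green's function idea used above (an illustrative sketch only, not the authors' code): a synthetic seismogram for the larger event can be approximated by convolving the small-event record, treated as the Green's function, with an assumed source time function for the larger source. Function and variable names below are placeholders.

      import numpy as np

      def synthesize_with_egf(egf_trace, dt, duration, moment_ratio):
          """Convolve an empirical Green's function (small-event record) with a
          boxcar source time function of the given duration (s), scaled by the
          moment ratio between the large and small events."""
          n = max(1, int(round(duration / dt)))
          stf = np.ones(n) / n                      # normalized boxcar source time function
          return moment_ratio * np.convolve(egf_trace, stf, mode="full")[:len(egf_trace)]

      # Placeholder usage: 100 Hz record, 0.5 s source duration, moment ratio 50.
      egf = np.random.randn(2000)                   # stand-in for a recorded small event
      synthetic = synthesize_with_egf(egf, dt=0.01, duration=0.5, moment_ratio=50.0)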

  17. Information contraction and extraction by multivariate autoregressive (MAR) modelling. Pt. 2. Dominant noise sources in BWRS

    International Nuclear Information System (INIS)

    Morishima, N.

    1996-01-01

    The multivariate autoregressive (MAR) modeling of a vector noise process is discussed in terms of the estimation of dominant noise sources in BWRs. The discussion is based on a physical approach: a transfer function model of BWR core dynamics is utilized in developing a noise model, and a set of input-output relations between three system variables and twelve different noise sources is obtained. By least-squares fitting of a theoretical neutron noise PSD to an experimental one, four kinds of dominant noise sources are selected. It is shown that some of the dominant noise sources consist of two or more different noise sources and have the spectral properties of being coloured and correlated with each other. By diagonalizing the PSD matrix for the dominant noise sources, we may obtain an MAR expression for a vector noise process as a response to the diagonal elements (i.e., residual noises) being white and mutually independent. (Author)
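
    For readers unfamiliar with MAR modelling, the sketch below fits a vector autoregressive model of order p to multichannel noise data by ordinary least squares and returns the residual (driving) noise covariance; this is a generic illustration, not the authors' procedure, and the order, channel count and names are placeholders.

      import numpy as np

      def fit_mar(x, p):
          """Fit a MAR(p) model x[t] = sum_k A_k x[t-k] + e[t] by least squares.
          x: (T, m) array of m-channel noise data. Returns (A, cov_e), where A is a
          (p, m, m) stack of coefficient matrices and cov_e the residual covariance."""
          T, m = x.shape
          Y = x[p:]                                                   # targets
          X = np.hstack([x[p - k: T - k] for k in range(1, p + 1)])   # lagged regressors
          coeffs, *_ = np.linalg.lstsq(X, Y, rcond=None)              # shape (p*m, m)
          A = coeffs.reshape(p, m, m).transpose(0, 2, 1)              # A_k maps x[t-k] -> x[t]
          resid = Y - X @ coeffs
          cov_e = np.cov(resid, rowvar=False)                         # residual noise covariance
          return A, cov_e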

  18. Source term model evaluations for the low-level waste facility performance assessment

    Energy Technology Data Exchange (ETDEWEB)

    Yim, M.S.; Su, S.I. [North Carolina State Univ., Raleigh, NC (United States)

    1995-12-31

    The estimation of release of radionuclides from various waste forms to the bottom boundary of the waste disposal facility (source term) is one of the most important aspects of LLW facility performance assessment. In this work, several currently used source term models are comparatively evaluated for the release of carbon-14 based on a test case problem. The models compared include PRESTO-EPA-CPG, IMPACTS, DUST and NEFTRAN-II. Major differences in assumptions and approaches between the models are described and key parameters are identified through sensitivity analysis. The source term results from different models are compared and other concerns or suggestions are discussed.

  19. Asteroid models from photometry and complementary data sources

    Energy Technology Data Exchange (ETDEWEB)

    Kaasalainen, Mikko [Department of Mathematics, Tampere University of Technology (Finland)

    2016-05-10

    I discuss inversion methods for asteroid shape and spin reconstruction with photometry (lightcurves) and complementary data sources such as adaptive optics or other images, occultation timings, interferometry, and range-Doppler radar data. These are essentially different sampling modes (generalized projections) of plane-of-sky images. An important concept in this approach is the optimal weighting of the various data modes. The maximum compatibility estimate, a multi-modal generalization of the maximum likelihood estimate, can be used for this purpose. I discuss the fundamental properties of lightcurve inversion by examining the two-dimensional case that, though not usable in our three-dimensional world, is simple to analyze, and it shares essentially the same uniqueness and stability properties as the 3-D case. After this, I review the main aspects of 3-D shape representations, lightcurve inversion, and the inclusion of complementary data.

  20. Asteroid models from photometry and complementary data sources

    International Nuclear Information System (INIS)

    Kaasalainen, Mikko

    2016-01-01

    I discuss inversion methods for asteroid shape and spin reconstruction with photometry (lightcurves) and complementary data sources such as adaptive optics or other images, occultation timings, interferometry, and range-Doppler radar data. These are essentially different sampling modes (generalized projections) of plane-of-sky images. An important concept in this approach is the optimal weighting of the various data modes. The maximum compatibility estimate, a multi-modal generalization of the maximum likelihood estimate, can be used for this purpose. I discuss the fundamental properties of lightcurve inversion by examining the two-dimensional case that, though not usable in our three-dimensional world, is simple to analyze, and it shares essentially the same uniqueness and stability properties as the 3-D case. After this, I review the main aspects of 3-D shape representations, lightcurve inversion, and the inclusion of complementary data.

  1. Modelling RF-plasma interaction in ECR ion sources

    Directory of Open Access Journals (Sweden)

    Mascali David

    2017-01-01

    Full Text Available This paper describes three-dimensional self-consistent numerical simulations of wave propagation in magnetoplasmas of electron cyclotron resonance ion sources (ECRIS). Numerical results can give useful information on the distribution of the absorbed RF power and/or the efficiency of RF heating, especially in the case of alternative schemes such as mode-conversion based heating scenarios. The ray-tracing approximation is valid only for wavelengths small compared to the system scale lengths: as a consequence, full-wave solutions of the Maxwell-Vlasov equations must be used in compact and strongly inhomogeneous ECRIS plasmas. This contribution presents a multi-scale temporal-domain approach for simultaneously including RF dynamics and plasma kinetics in a “cold plasma”, and some perspectives for a “hot plasma” implementation. The presented results relate to the attempt to establish a mode-conversion scenario of OXB type in double-frequency heating inside an ECRIS test bench.

  2. Spacecraft Design Thermal Control Subsystem

    Science.gov (United States)

    Miyake, Robert N.

    2008-01-01

    The Thermal Control Subsystem engineer's task is to maintain the temperature of all spacecraft components, subsystems, and the total flight system within specified limits for all flight modes from launch to end of mission. In some cases, specific stability and gradient temperature limits will be imposed on flight system elements. For "normal" flight systems, the mass and power requirements of the Thermal Control Subsystem, including its control and sensing elements, are below 10% of the total flight system resources. In general, the thermal control subsystem engineer is involved in all other flight subsystem designs.

  3. Planning Inmarsat's second generation of spacecraft

    Science.gov (United States)

    Williams, W. P.

    1982-09-01

    Studies for the next generation of the Inmarsat service are outlined, covering traffic forecasting, communications capacity estimates, space segment design, cost estimates, and financial analysis. Traffic forecasting will require future demand estimates, and a computer model has been developed which estimates demand over the Atlantic, Pacific, and Indian ocean regions. Communications estimates are based on traffic estimates, as a model converts traffic demand into a required capacity figure for a given area. The Erlang formula is used, requiring additional data such as peak hour ratios and distribution estimates. Basic space segment technical requirements are outlined (communications payload, transponder arrangements, etc), and further design studies involve such areas as space segment configuration, launcher and spacecraft studies, transmission planning, and earth segment configurations. Cost estimates of proposed design parameters will be performed, but options must be reduced to make construction feasible. Finally, a financial analysis will be carried out in order to calculate financial returns.

  4. Radioisotope AMTEC power system designs for spacecraft applications

    International Nuclear Information System (INIS)

    Ivanenok, J.F. III; Sievers, R.K.; Hunt, T.K.; Johnson, G.A.

    1993-01-01

    The Alkali Metal Thermal to Electric Converter (AMTEC) system is an exceptional candidate for high performance spacecraft power systems including small systems powered by General Purpose Heat Sources (GPHS). The AMTEC converter is best described as a thermally regenerative electrochemical concentration cell. AMTEC is a static energy conversion device and can operate at efficiencies between 15% and 30%. The single tube, remote condensed, wick return minicell design has been incorporated into a radioisotope powered system model. Reported cell efficiencies used for these system design studies ranged from 15% to 25%. This efficiency is significantly higher than other static conversion systems operating at the same temperatures. Savings in mass and cost, relative to other more conventional static conversion systems, have also been shown. The minicell used for this system study has many advanced features not combined in previous designs, including wick return, remote condensing, and hot zone feedthroughs. All of these features significantly enhance the performance of the AMTEC cell. Additionally, the cell end provides enough area for adequate heat transfer from the GPHS module, eliminating the need for a ''hot shoe'', and reducing the complexity and weight of the system. This paper describes and compares small (two module) and larger (16 module) AMTEC radioisotope powered systems and describes the computer model developed to predict their performance

  5. Current-voltage model of LED light sources

    DEFF Research Database (Denmark)

    Beczkowski, Szymon; Munk-Nielsen, Stig

    2012-01-01

    Amplitude modulation is rarely used for dimming light-emitting diodes in polychromatic luminaires due to large color shifts caused by the varying magnitude of the LED driving current and the nonlinear relationship between diode intensity and driving current. A current-voltage empirical model of light...

  6. Spacecraft Fire Safety Research at NASA Glenn Research Center

    Science.gov (United States)

    Meyer, Marit

    2016-01-01

    Appropriate design of fire detection systems requires knowledge of both the expected fire signature and the background aerosol levels. Terrestrial fire detection systems have been developed based on extensive study of terrestrial fires. Unfortunately, there is no corresponding data set for spacecraft fires, and consequently the fire detectors in current spacecraft were developed based upon terrestrial designs. In low gravity, buoyant flow is negligible, which causes particles to concentrate at the smoke source, increasing their residence time, and increasing the transport time to smoke detectors. Microgravity fires have significantly different structure than those in 1-g, which can change the formation history of the smoke particles. Finally, the materials used in spacecraft are different from typical terrestrial environments where smoke properties have been evaluated. It is critically important to detect a fire in its early phase before a flame is established, given the fixed volume of air on any spacecraft. Consequently, the primary target for spacecraft fire detection is pyrolysis products rather than soot. Experimental investigations have been performed at three different NASA facilities which characterize smoke aerosols from overheating common spacecraft materials. The earliest effort consists of aerosol measurements in low gravity, called the Smoke Aerosol Measurement Experiment (SAME), and subsequent ground-based testing of SAME smoke in 55-gallon drums with an aerosol reference instrument. Another set of experiments was performed at NASA's Johnson Space Center White Sands Test Facility (WSTF), with additional fuels and an alternate smoke production method. Measurements of these smoke products include mass and number concentration, and a thermal precipitator was designed for this investigation to capture particles for microscopic analysis. The final experiments presented are from NASA's Gases and Aerosols from Smoldering Polymers (GASP) Laboratory, with selected

  7. On the sources of technological change: What do the models assume?

    International Nuclear Information System (INIS)

    Clarke, Leon; Weyant, John; Edmonds, Jae

    2008-01-01

    It is widely acknowledged that technological change can substantially reduce the costs of stabilizing atmospheric concentrations of greenhouse gases. This paper discusses the sources of technological change and the representations of these sources in formal models of energy and the environment. The paper distinguishes between three major sources of technological change (R and D, learning-by-doing and spillovers) and introduces a conceptual framework for linking modeling approaches to assumptions about these real-world sources. A selective review of modeling approaches, including those employing exogenous technological change, suggests that most formal models have meaningful real-world interpretations that focus on a subset of possible sources of technological change while downplaying the roles of others

  8. Model Predictive Control of Z-source Neutral Point Clamped Inverter

    DEFF Research Database (Denmark)

    Mo, Wei; Loh, Poh Chiang; Blaabjerg, Frede

    2011-01-01

    This paper presents Model Predictive Control (MPC) of a Z-source Neutral Point Clamped (NPC) inverter. For illustration, current control of a Z-source NPC grid-connected inverter is analyzed and simulated. With MPC's advantage of easily including system constraints, load current, impedance network...... response are obtained at the same time with a formulated Z-source NPC inverter network model. Steady-state and transient-state simulation results of MPC are presented, which show the good reference tracking ability of this method. It provides a new control method for the Z-source NPC inverter...
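
    As a generic illustration of how finite-control-set MPC is typically applied to inverter current control (not the formulation used in the record), the controller evaluates each candidate switching state against a one-step prediction of the load current and picks the state minimizing a tracking cost; the discrete-time RL load model, candidate voltage vectors, and parameters below are placeholders.

      import numpy as np

      def fcs_mpc_step(i_meas, i_ref, v_candidates, R, L, Ts):
          """Pick the inverter voltage vector minimizing the predicted current error.
          i_meas, i_ref: measured and reference current phasors (complex),
          v_candidates: iterable of candidate output voltage vectors (complex),
          R, L: load resistance and inductance, Ts: sampling period."""
          best_cost, best_v = np.inf, None
          for v in v_candidates:
              # One-step Euler prediction of the load current for this voltage vector.
              i_pred = i_meas + Ts / L * (v - R * i_meas)
              cost = abs(i_ref - i_pred) ** 2
              if cost < best_cost:
                  best_cost, best_v = cost, v
          return best_v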

  9. Environmentally-induced discharge transient coupling to spacecraft

    Science.gov (United States)

    Viswanathan, R.; Barbay, G.; Stevens, N. J.

    1985-01-01

    The Hughes SCREENS (Space Craft Response to Environments of Space) technique was applied to generic spin-stabilized and 3-axis stabilized spacecraft models. It involved NASCAP modeling of surface charging and lumped-element modeling of transients coupling into a spacecraft. A differential voltage between antenna and spun shelf of approx. 400 V and a current of 12 A resulted from a discharge at the antenna for the spinner, and approx. 3 kV and 0.3 A from a discharge at the solar panels for the 3-axis stabilized spacecraft. A typical interface circuit response was analyzed to show that the transients would couple into the spacecraft system through ground points, which are the most vulnerable. A compilation and review was performed of 15 years of available data on electron and ion current collection phenomena. Empirical models were developed to match the data and compared with flight data from the Pix-1 and Pix-2 missions. It was found that large space power systems would float negative and discharge if operated at or above 300 V. Several recommendations are given to improve the models and to apply them to large space systems.

  10. Facilitating open global data use in earthquake source modelling to improve geodetic and seismological approaches

    Science.gov (United States)

    Sudhaus, Henriette; Heimann, Sebastian; Steinberg, Andreas; Isken, Marius; Vasyura-Bathke, Hannes

    2017-04-01

    In the last few years impressive achievements have been made in improving inferences about earthquake sources by using InSAR (Interferometric Synthetic Aperture Radar) data. Several factors aided these developments. The open data basis of earthquake observations has expanded vastly with the two powerful Sentinel-1 SAR sensors up in space. Increasing computer power allows processing of large data sets for more detailed source models. Moreover, data inversion approaches for earthquake source inferences are becoming more advanced. By now data error propagation is widely implemented and the estimation of model uncertainties is a regular feature of reported optimum earthquake source models. Also, more regularly InSAR-derived surface displacements and seismological waveforms are combined, which requires finite rupture models instead of point-source approximations and layered medium models instead of homogeneous half-spaces. In other words the disciplinary differences in geodetic and seismological earthquake source modelling shrink towards common source-medium descriptions and a source near-field/far-field data point of view. We explore and facilitate the combination of InSAR-derived near-field static surface displacement maps and dynamic far-field seismological waveform data for global earthquake source inferences. We join in the community efforts with the particular goal to improve crustal earthquake source inferences in generally not well instrumented areas, where often only the global backbone observations of earthquakes are available provided by seismological broadband sensor networks and, since recently, by Sentinel-1 SAR acquisitions. We present our work on modelling standards for the combination of static and dynamic surface displacements in the source's near-field and far-field, e.g. on data and prediction error estimations as well as model uncertainty estimation. Rectangular dislocations and moment-tensor point sources are exchanged by simple planar finite

  11. Benefits of Spacecraft Level Vibration Testing

    Science.gov (United States)

    Gordon, Scott; Kern, Dennis L.

    2015-01-01

    NASA-HDBK-7008 Spacecraft Level Dynamic Environments Testing discusses the approaches, benefits, dangers, and recommended practices for spacecraft level dynamic environments testing, including vibration testing. This paper discusses in additional detail the benefits and actual experiences of vibration testing spacecraft for NASA Goddard Space Flight Center (GSFC) and Jet Propulsion Laboratory (JPL) flight projects. JPL and GSFC have both similarities and differences in their spacecraft level vibration test approach: JPL uses a random vibration input and a frequency range usually starting at 5 Hz and extending to as high as 250 Hz. GSFC uses a sine sweep vibration input and a frequency range usually starting at 5 Hz and extending only to the limits of the coupled loads analysis (typically 50 to 60 Hz). However, both JPL and GSFC use force limiting to realistically notch spacecraft resonances and response (acceleration) limiting as necessary to protect spacecraft structure and hardware from exceeding design strength capabilities. Despite GSFC and JPL differences in spacecraft level vibration test approaches, both have uncovered a significant number of spacecraft design and workmanship anomalies in vibration tests. This paper will give an overview of JPL and GSFC spacecraft vibration testing approaches and provide a detailed description of spacecraft anomalies revealed.

  12. Injection of an electron beam into a plasma and spacecraft charging

    International Nuclear Information System (INIS)

    Okuda, H.; Kan, J.R.

    1987-01-01

    Injection of a nonrelativistic electron beam into a fully ionized plasma from a spacecraft including the effect of charging has been studied using a one-dimensional particle simulation model. It is found that the spacecraft charging remains negligible and the beam can propagate into a plasma, if the beam density is much smaller than the ambient density. When the injection current is increased by increasing the beam density, significant spacecraft charging takes place and the reflection of beam electrons back to the spacecraft reduces the beam current significantly. On the other hand, if the injection current is increased by increasing the beam energy, spacecraft charging remains negligible and a beam current much larger than the thermal return current can be injected. It is shown that the electric field caused by the beam--plasma instability accelerates the ambient electrons toward the spacecraft thereby enhancing the return current

  13. Spacecraft Dynamic Characterization by Strain Energies Method

    Science.gov (United States)

    Bretagne, J.-M.; Fragnito, M.; Massier, S.

    2002-01-01

    In recent years, the significant increase in satellite broadcasting demand, driven by the advent of wide-band communication, has given a strong impulse to the telecommunication satellite market. This demand has translated into an increase in telecom satellite orders from operators (such as SES/Astra, Eutelsat, Intelsat, Inmarsat, EuroSkyWay, etc.) to manufacturers worldwide. The largest part of these telecom satellite orders consists of geostationary platforms, which grow more and more in mass (over 5 tons) due to ever longer required lifetimes (up to 20 years) and become more complex due to the need to implement an ever larger number of repeaters, antenna reflectors, feeds, etc. In this frame, the mechanical design and verification of these large spacecraft become difficult and ambitious at the same time, driven by the dry-mass limitation objective. With the Finite Element Method (FEM), and on the basis of the telecom satellite heritage of a world-leading manufacturer such as Alcatel Space Industries, it is nowadays possible to model these spacecraft in a realistic and confident way in order to identify the main global dynamic aspects such as mode shapes, mass participation and/or dynamic responses. On the other hand, one of the main aims is to identify early in a program the most critical aspects of the system behavior in the launch dynamic environment, such as possible dynamic coupling between the different subsystems and secondary structures of the spacecraft (large deployable reflectors, thrusters, etc.). To this aim, a numerical method has been developed in the frame of the Alcatel SPACEBUS family program using MSC/Nastran capabilities, and it is presented in this paper. The method is based on spacecraft sub-structuring and strain energy calculation. It mainly consists of two steps: 1) subsystem modal strain energy ratio (with respect to the global strain energy); 2) subsystem strain energy calculation for each mode according to the base driven
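
    The strain-energy bookkeeping described above reduces, for each mode, to the ratio of the subsystem's modal strain energy (½ φᵀ K_sub φ) to the global modal strain energy (½ φᵀ K φ). A minimal numpy sketch of that ratio is shown below; the stiffness matrices and mode shapes are random stand-ins, whereas a real run would take them from the MSC/Nastran model.

```python
import numpy as np

def subsystem_strain_energy_ratios(K_sub, phi, K_global):
    """Fraction of each mode's strain energy carried by one subsystem.

    K_sub    : (n, n) subsystem stiffness expanded to global DOFs
    phi      : (n, m) mode shapes (one column per mode)
    K_global : (n, n) global stiffness matrix
    """
    e_sub = 0.5 * np.einsum('ik,ij,jk->k', phi, K_sub, phi)      # 1/2 phi^T K_sub phi per mode
    e_tot = 0.5 * np.einsum('ik,ij,jk->k', phi, K_global, phi)   # 1/2 phi^T K phi per mode
    return e_sub / e_tot

# Illustrative stand-in data (a real run would use FE matrices and eigenvectors).
rng = np.random.default_rng(0)
n = 8
A = rng.normal(size=(n, n))
K_global = A @ A.T + n * np.eye(n)        # symmetric positive definite "stiffness"
mask = np.zeros((n, n)); mask[:4, :4] = 1
K_sub = K_global * mask                   # crude subsystem partition of the stiffness
w2, phi = np.linalg.eigh(K_global)        # modes of K (unit mass assumed)
print(subsystem_strain_energy_ratios(K_sub, phi, K_global))
```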

  14. Hybrid spacecraft attitude control system

    Directory of Open Access Journals (Sweden)

    Renuganth Varatharajoo

    2016-02-01

    Full Text Available The hybrid subsystem design could be an attractive approach for future spacecraft to cope with their demands. The idea of combining the conventional Attitude Control System and the Electrical Power System is presented in this article. The Combined Energy and Attitude Control System (CEACS), consisting of a double counter-rotating flywheel assembly, is investigated for small satellites in this article. Another hybrid system, incorporating the conventional Attitude Control System into the Thermal Control System to form the Combined Attitude and Thermal Control System (CATCS), consisting of a "fluid wheel" and permanent magnets, is also investigated for small satellites herein. The governing equations describing both these novel hybrid subsystems are presented and their onboard architectures are numerically tested. Both the investigated novel hybrid spacecraft subsystems comply with the reference mission requirements.

  15. Spatial and frequency domain ring source models for the single muscle fiber action potential

    DEFF Research Database (Denmark)

    Henneberg, Kaj-åge; R., Plonsey

    1994-01-01

    In the paper, single-fibre models for the extracellular action potential are developed that allow the potential to be evaluated at an arbitrary field point in the extracellular space. Fourier-domain models are restricted in that they evaluate potentials at equidistant points along a line...... parallel to the fibre axis. Consequently, they cannot easily evaluate the potential at the boundary nodes of a boundary-element electrode model. The Fourier-domain models employ axial-symmetric ring source models, and thereby provide higher accuracy than the line source model, where the source is lumped...... including anisotropy show that the spatial models require extreme care in the integration procedure owing to the singularity in the weighting functions. With adequate sampling, the spatial models can evaluate extracellular potentials with high accuracy....

  16. Diamond carbon sources: a comparison of carbon isotope models

    International Nuclear Information System (INIS)

    Kirkley, M.B.; Otter, M.L.; Gurney, J.J.; Hill, S.J.

    1990-01-01

    The carbon isotope compositions of approximately 500 inclusion-bearing diamonds have been determined in the past decade. 98 percent of these diamonds readily fall into two broad categories on the basis of their inclusion mineralogies and compositions: peridotitic diamonds and eclogitic diamonds. Most peridotitic diamonds have δ13C values between -10 and -1 permil, whereas eclogitic diamonds have δ13C values between -28 and +2 permil. Peridotitic diamonds may represent primordial carbon; however, it is proposed that initially inhomogeneous δ13C values were subsequently homogenized, e.g. during the melting and convection that is postulated to have occurred during the first billion years of the earth's existence. If this is the case, then the wider range of δ13C values exhibited by eclogitic diamonds requires a different explanation. Both the fractionation model and the subduction model can account for the range of observed δ13C values in eclogitic diamonds. 16 refs., 2 figs

  17. Comprehension of Spacecraft Telemetry Using Hierarchical Specifications of Behavior

    Science.gov (United States)

    Havelund, Klaus; Joshi, Rajeev

    2014-01-01

    A key challenge in operating remote spacecraft is that ground operators must rely on the limited visibility available through spacecraft telemetry in order to assess spacecraft health and operational status. We describe a tool for processing spacecraft telemetry that allows ground operators to impose structure on received telemetry in order to achieve a better comprehension of system state. A key element of our approach is the design of a domain-specific language that allows operators to express models of expected system behavior using partial specifications. The language allows behavior specifications with data fields, similar to other recent runtime verification systems. What is notable about our approach is the ability to develop hierarchical specifications of behavior. The language is implemented as an internal DSL in the Scala programming language that synthesizes rules from patterns of specification behavior. The rules are automatically applied to received telemetry and the inferred behaviors are available to ground operators using a visualization interface that makes it easier to understand and track spacecraft state. We describe initial results from applying our tool to telemetry received from the Curiosity rover currently roving the surface of Mars, where the visualizations are being used to trend subsystem behaviors, in order to identify potential problems before they happen. However, the technology is completely general and can be applied to any system that generates telemetry such as event logs.
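
    The tool itself is a Scala-internal DSL; the hypothetical Python fragment below only sketches the underlying idea of imposing structure on a telemetry stream, namely a single rule stating that every command-dispatch event must eventually be matched by a completion event carrying the same command id. The event names and fields are invented for illustration and are not taken from the actual system.

```python
# Hypothetical sketch of rule-based telemetry checking: a "dispatch" event must be
# matched by a "complete" event carrying the same command id.
def check_dispatch_complete(telemetry):
    open_cmds = {}                       # command id -> record index of dispatch
    violations = []
    for idx, event in enumerate(telemetry):
        kind, cmd_id = event["name"], event.get("cmd_id")
        if kind == "CMD_DISPATCH":
            open_cmds[cmd_id] = idx
        elif kind == "CMD_COMPLETE":
            if cmd_id in open_cmds:
                del open_cmds[cmd_id]
            else:
                violations.append((idx, f"completion without dispatch: {cmd_id}"))
    violations += [(i, f"dispatch never completed: {c}") for c, i in open_cmds.items()]
    return violations

telemetry = [
    {"name": "CMD_DISPATCH", "cmd_id": 41},
    {"name": "CMD_COMPLETE", "cmd_id": 41},
    {"name": "CMD_DISPATCH", "cmd_id": 42},
]
print(check_dispatch_complete(telemetry))   # -> [(2, 'dispatch never completed: 42')]
```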

  18. Conceptual model for deriving the repository source term

    International Nuclear Information System (INIS)

    Alexander, D.H.; Apted, M.J.; Liebetrau, A.M.; Van Luik, A.E.; Williford, R.E.; Doctor, P.G.; Pacific Northwest Lab., Richland, WA; Roy F. Weston, Inc./Rogers and Assoc. Engineering Corp., Rockville, MD)

    1984-01-01

    Part of a strategy for evaluating the compliance of geologic repositories with Federal regulations is a modeling approach that would provide realistic release estimates for a particular configuration of the engineered-barrier system. The objective is to avoid worst-case bounding assumptions that are physically impossible or excessively conservative and to obtain probabilistic estimates of (1) the penetration time for metal barriers and (2) radionuclide-release rates for individually simulated waste packages after penetration has occurred. The conceptual model described in this paper will assume that release rates are explicitly related to such time-dependent processes as mass transfer, dissolution and precipitation, radionuclide decay, and variations in the geochemical environment. The conceptual model will take into account the reduction in the rates of waste-form dissolution and metal corrosion due to a buildup of chemical reaction products. The sorptive properties of the metal-barrier corrosion products in proximity to the waste form surface will also be included. Cumulative releases from the engineered-barrier system will be calculated by summing the releases from a probabilistically generated population of individual waste packages. 14 refs., 7 figs

  19. Conceptual model for deriving the repository source term

    International Nuclear Information System (INIS)

    Alexander, D.H.; Apted, M.J.; Liebetrau, A.M.; Doctor, P.G.; Williford, R.E.; Van Luik, A.E.

    1984-11-01

    Part of a strategy for evaluating the compliance of geologic repositories with federal regulations is a modeling approach that would provide realistic release estimates for a particular configuration of the engineered-barrier system. The objective is to avoid worst-case bounding assumptions that are physically impossible or excessively conservative and to obtain probabilistic estimates of (1) the penetration time for metal barriers and (2) radionuclide-release rates for individually simulated waste packages after penetration has occurred. The conceptual model described in this paper will assume that release rates are explicitly related to such time-dependent processes as mass transfer, dissolution and precipitation, radionuclide decay, and variations in the geochemical environment. The conceptual model will take into account the reduction in the rates of waste-form dissolution and metal corrosion due to a buildup of chemical reaction products. The sorptive properties of the metal-barrier corrosion products in proximity to the waste form surface will also be included. Cumulative releases from the engineered-barrier system will be calculated by summing the releases from a probabilistically generated population of individual waste packages. 14 refs., 7 figs

  20. Triple-root jump in spacecraft potential due to electron beam emission or impact

    International Nuclear Information System (INIS)

    Lai, S.T.

    1992-01-01

    Triple-root jump in spacecraft potential is well understood in the double Maxwellian model of the natural space environment. In this paper, however, the author points out that triple-root jumps in spacecraft potential may also occur during photoemission or electron beam emission from a spacecraft. Impact of an incoming electron beam on a spacecraft may also cause triple-root jumps provided that the beam, ambient plasma, and surface parameters satisfy certain inequality conditions. The parametric conditions under which such beam induced triple-root jumps may occur are presented

  1. A GIS-based time-dependent seismic source modeling of Northern Iran

    Science.gov (United States)

    Hashemi, Mahdi; Alesheikh, Ali Asghar; Zolfaghari, Mohammad Reza

    2017-01-01

    The first step in any seismic hazard study is the definition of seismogenic sources and the estimation of magnitude-frequency relationships for each source. There is as yet no standard methodology for source modeling, and many researchers have worked on this topic. This study is an effort to define linear and area seismic sources for Northern Iran. The linear or fault sources are developed based on tectonic features and characteristic earthquakes, while the area sources are developed based on the spatial distribution of small to moderate earthquakes. Time-dependent recurrence relationships are developed for fault sources using a renewal approach, while time-independent frequency-magnitude relationships are proposed for area sources based on a Poisson process. GIS functionalities are used in this study to introduce and incorporate spatial-temporal and geostatistical indices in delineating area seismic sources. The proposed methodology is used to model seismic sources for an area of about 500 km by 400 km around Tehran. Previous research and reports were studied to compile an earthquake/fault catalog that is as complete as possible. All events are transformed to a uniform magnitude scale; duplicate events and dependent shocks are removed. Completeness and time distribution of the compiled catalog are taken into account. The proposed area and linear seismic sources, in conjunction with the defined recurrence relationships, can be used to develop a time-dependent probabilistic seismic hazard analysis of Northern Iran.
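
    For the time-independent (area-source) part, the magnitude-frequency relationship is typically a truncated Gutenberg-Richter law combined with a Poisson occurrence model. The sketch below shows that standard combination; the a and b values and magnitude bounds are placeholders, not values derived in the paper.

```python
import numpy as np

def annual_exceedance_rate(m, a, b, m_min=4.0, m_max=7.5):
    """Annual rate of events with magnitude >= m from a truncated
    Gutenberg-Richter relation log10 N(>=m) = a - b*m, bounded in [m_min, m_max]."""
    rate_min = 10 ** (a - b * m_min)                 # rate of all events above m_min
    beta = b * np.log(10)
    num = np.exp(-beta * (m - m_min)) - np.exp(-beta * (m_max - m_min))
    den = 1.0 - np.exp(-beta * (m_max - m_min))
    return np.where(m <= m_max, rate_min * num / den, 0.0)

def poisson_exceedance_prob(rate, years):
    """Probability of at least one exceedance in `years` under a Poisson process."""
    return 1.0 - np.exp(-rate * years)

# Placeholder a/b values for an area source (not taken from the paper).
rate_m6 = annual_exceedance_rate(6.0, a=4.2, b=1.0)
print(poisson_exceedance_prob(rate_m6, years=50))
```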

  2. Calculation of media temperatures for nuclear sources in geologic depositories by a finite-length line source superposition model (FLLSSM)

    Energy Technology Data Exchange (ETDEWEB)

    Kays, W M; Hossaini-Hashemi, F [Stanford Univ., Palo Alto, CA (USA). Dept. of Mechanical Engineering; Busch, J S [Kaiser Engineers, Oakland, CA (USA)

    1982-02-01

    A linearized transient thermal conduction model was developed to economically determine media temperatures in geologic repositories for nuclear wastes. Individual canisters containing either high-level waste or spent fuel assemblies are represented as finite-length line sources in a continuous medium. The combined effects of multiple canisters in a representative storage pattern can be established in the medium at selected point of interest by superposition of the temperature rises calculated for each canister. A mathematical solution of the calculation for each separate source is given in this article, permitting a slow hand calculation. The full report, ONWI-94, contains the details of the computer code FLLSSM and its use, yielding the total solution in one computer output.
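
    The superposition idea can be sketched with the classical constant-strength finite-length line source solution for an infinite conducting medium, ΔT = q'/(4πk) ∫ erfc(R/(2√(αt)))/R dz', summed over canisters. The Python fragment below does exactly that numerically; note that the report's model additionally handles the time-decaying heat output of nuclear waste, which is not reproduced here, and the thermal properties and canister layout are placeholders.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import erfc

# Medium properties (placeholders, not from the report)
k_th = 2.5          # thermal conductivity, W/(m K)
alpha = 1.1e-6      # thermal diffusivity, m^2/s

def line_source_dT(r, z, t, q_lin, half_len):
    """Temperature rise at (r, z) and time t due to one finite-length line source
    of constant strength q_lin (W/m) centered at the origin, axis along z."""
    def integrand(zp):
        R = np.hypot(r, z - zp)
        return erfc(R / (2.0 * np.sqrt(alpha * t))) / R
    val, _ = quad(integrand, -half_len, half_len)
    return q_lin / (4.0 * np.pi * k_th) * val

def repository_dT(point, canisters, t):
    """Superpose the temperature rises of all canisters at a field point."""
    x, y, z = point
    total = 0.0
    for (cx, cy, cz, q_lin, half_len) in canisters:
        r = max(np.hypot(x - cx, y - cy), 1e-3)   # avoid the r = 0 singularity
        total += line_source_dT(r, z - cz, t, q_lin, half_len)
    return total

# Illustrative 3 x 3 canister grid, 10 m spacing, 3 m long canisters at 200 W/m.
canisters = [(10 * i, 10 * j, 0.0, 200.0, 1.5) for i in range(3) for j in range(3)]
print(repository_dT((5.0, 5.0, 0.0), canisters, t=10 * 365.25 * 86400))
```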

  3. Multi-kilowatt modularized spacecraft power processing system development

    International Nuclear Information System (INIS)

    Andrews, R.E.; Hayden, J.H.; Hedges, R.T.; Rehmann, D.W.

    1975-07-01

    A review of existing information pertaining to spacecraft power processing systems and equipment was accomplished with a view towards applicability to the modularization of multi-kilowatt power processors. Power requirements for future spacecraft were determined from the NASA mission model-shuttle systems payload data study which provided the limits for modular power equipment capabilities. Three power processing systems were compared to evaluation criteria to select the system best suited for modularity. The shunt regulated direct energy transfer system was selected by this analysis for a conceptual design effort which produced equipment specifications, schematics, envelope drawings, and power module configurations

  4. Stabilization of rotational motion with application to spacecraft attitude control

    DEFF Research Database (Denmark)

    Wisniewski, Rafal

    2000-01-01

    for global stabilization of a rotary motion. Along with a model of the system formulated in Hamilton's canonical form, the algorithm uses information about a required potential energy and a dissipation term. The control action is the sum of the gradient of the potential energy and the dissipation force......The objective of this paper is to develop a control scheme for stabilization of a Hamiltonian system. The method generalizes the results available in the literature on motion control in the Euclidean space to an arbitrary differential manifold equipped with a metric. This modification is essential...... on a Riemannian manifold. The Lyapunov stability theory is adapted and reformulated to fit the new framework of Riemannian manifolds. To illustrate the results, a spacecraft attitude control problem is considered. Firstly, a global canonical representation for the spacecraft motion is found, then three spacecraft
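
    A minimal sketch of the control idea, specialized to a quaternion-parameterized rigid body, is given below: the commanded torque is the sum of a potential-energy-gradient term (here a simple attitude-error potential) and a dissipation term proportional to the angular rate. The inertia matrix, gains, and initial conditions are placeholders, and the formulation is a simplification of the paper's Riemannian-manifold treatment.

```python
import numpy as np

J = np.diag([10.0, 12.0, 8.0])     # spacecraft inertia, kg m^2 (placeholder)
KP, KD = 2.0, 5.0                   # potential-shaping and dissipation gains (placeholders)

def quat_mul(q, p):
    """Hamilton product, scalar-first quaternions."""
    qw, qv = q[0], q[1:]
    pw, pv = p[0], p[1:]
    return np.concatenate(([qw * pw - qv @ pv], qw * pv + pw * qv + np.cross(qv, pv)))

def control_torque(q, omega, q_des):
    """Energy-shaping plus damping-injection attitude control (sketch).

    The potential term is -KP times the vector part of the error quaternion (the gradient
    of a simple attitude-error potential); the dissipation term is -KD * omega."""
    q_err = quat_mul(np.array([q_des[0], *(-q_des[1:])]), q)   # q_des^{-1} * q
    if q_err[0] < 0:                                           # take the shorter rotation
        q_err = -q_err
    return -KP * q_err[1:] - KD * omega

# Small closed-loop check: integrate rigid-body dynamics with forward Euler.
q = np.array([0.9, 0.3, 0.3, 0.1]); q /= np.linalg.norm(q)
omega = np.array([0.05, -0.02, 0.03])
q_des = np.array([1.0, 0.0, 0.0, 0.0])
dt = 0.1
for _ in range(2000):
    u = control_torque(q, omega, q_des)
    omega += dt * np.linalg.solve(J, u - np.cross(omega, J @ omega))
    q += dt * 0.5 * quat_mul(q, np.concatenate(([0.0], omega)))
    q /= np.linalg.norm(q)
print(q, omega)     # should approach [1, 0, 0, 0] and zero rate
```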

  5. Stabilization of rotational motion with application to spacecraft attitude control

    DEFF Research Database (Denmark)

    Wisniewski, Rafal

    2001-01-01

    for global stabilization of a rotary motion. Along with a model of the system formulated in Hamilton's canonical form, the algorithm uses information about a required potential energy and a dissipation term. The control action is the sum of the gradient of the potential energy and the dissipation force......The objective of this paper is to develop a control scheme for stabilization of a Hamiltonian system. The method generalizes the results available in the literature on motion control in the Euclidean space to an arbitrary differential manifold equipped with a metric. This modification is essential...... on a Riemannian manifold. The Lyapunov stability theory is adapted and reformulated to fit the new framework of Riemannian manifolds. To illustrate the results, a spacecraft attitude control problem is considered. Firstly, a global canonical representation for the spacecraft motion is found, then three spacecraft

  6. Modeling Geometric Arrangements of TiO2-Based Catalyst Substrates and Isotropic Light Sources to Enhance the Efficiency of a Photocatalytic Oxidation (PCO) Reactor

    Science.gov (United States)

    Richards, Jeffrey T.; Levine, Lanfang H.; Husk, Geoffrey K.

    2011-01-01

    The closed, confined environments of the ISS, as well as of future spacecraft for exploration beyond LEO, provide many challenges to crew health. One such challenge is the availability of a robust, energy-efficient, and regenerable air revitalization system that controls trace volatile organic contaminants (VOCs) to levels below a specified spacecraft maximum allowable concentration (SMAC). Photocatalytic oxidation (PCO), which is capable of mineralizing VOCs at room temperature and of accommodating a high volumetric flow, is being evaluated as an alternative trace contaminant control technology. In an architecture of a combined air and water management system, placing a PCO unit before a condensing heat exchanger for humidity control will greatly reduce the organic load into the humidity condensate loop of the water processing assembly (WPA), thereby enhancing the life cycle economics of the WPA. This targeted application dictates a single-pass efficiency of greater than 90% for polar VOCs. Although this target was met in laboratory bench-scale reactors, no commercial or SBIR-developed prototype PCO units examined to date have achieved this goal. Furthermore, the formation of partial oxidation products (e.g., acetaldehyde) was not eliminated. It is known that single-pass efficiency and partial oxidation are strongly dependent upon the contact time and catalyst illumination, hence the requirement for an efficient reactor design. The objective of this study is to maximize the apparent contact time and illuminated catalyst surface area at a given reactor volume and volumetric flow. In this study, a TiO2-based photocatalyst is assumed to be immobilized on porous substrate panels and illumination derived from linear isotropic light sources. Mathematical modeling using computational fluid dynamics (CFD) analyses was performed to investigate the effect of: 1) the geometry and configuration of catalyst-coated substrate panels, 2) porosity of the supporting substrate, and 3

  7. Evaluation of the influence of uncertain forward models on the EEG source reconstruction problem

    DEFF Research Database (Denmark)

    Stahlhut, Carsten; Mørup, Morten; Winther, Ole

    2009-01-01

    in the different areas of the brain when noise is present. Results Due to mismatch between the true and experimental forward model, the reconstruction of the sources is determined by the angles between the i'th forward field associated with the true source and the j'th forward field in the experimental forward...... representation of the signal. Conclusions This analysis demonstrated that caution is needed when evaluating the source estimates in different brain regions. Moreover, we demonstrated the importance of reliable forward models, which may be used as a motivation for including the forward model uncertainty...

  8. Identifying the Source of Misfit in Item Response Theory Models.

    Science.gov (United States)

    Liu, Yang; Maydeu-Olivares, Alberto

    2014-01-01

    When an item response theory model fails to fit adequately, the items for which the model provides a good fit and those for which it does not must be determined. To this end, we compare the performance of several fit statistics for item pairs with known asymptotic distributions under maximum likelihood estimation of the item parameters: (a) a mean and variance adjustment to bivariate Pearson's X², (b) a bivariate subtable analog to Reiser's (1996) overall goodness-of-fit test, (c) a z statistic for the bivariate residual cross product, and (d) Maydeu-Olivares and Joe's (2006) M2 statistic applied to bivariate subtables. The unadjusted Pearson's X² with heuristically determined degrees of freedom is also included in the comparison. For binary and ordinal data, our simulation results suggest that the z statistic has the best Type I error and power behavior among all the statistics under investigation when the observed information matrix is used in its computation. However, if one has to use the cross-product information, the mean and variance adjusted X² is recommended. We illustrate the use of pairwise fit statistics in 2 real-data examples and discuss possible extensions of the current research in various directions.

  9. Investigations of incorporating source directivity into room acoustics computer models to improve auralizations

    Science.gov (United States)

    Vigeant, Michelle C.

    Room acoustics computer modeling and auralizations are useful tools when designing or modifying acoustically sensitive spaces. In this dissertation, the input parameter of source directivity has been studied in great detail to determine first its effect in room acoustics computer models and secondly how to better incorporate the directional source characteristics into these models to improve auralizations. To increase the accuracy of room acoustics computer models, the source directivity of real sources, such as musical instruments, must be included in the models. The traditional method for incorporating source directivity into room acoustics computer models involves inputting the measured static directivity data taken every 10° in a sphere-shaped pattern around the source. This data can be entered into the room acoustics software to create a directivity balloon, which is used in the ray tracing algorithm to simulate the room impulse response. The first study in this dissertation shows that using directional sources over an omni-directional source in room acoustics computer models produces significant differences both in terms of calculated room acoustics parameters and auralizations. The room acoustics computer model was also validated in terms of accurately incorporating the input source directivity. A recently proposed technique for creating auralizations using a multi-channel source representation has been investigated with numerous subjective studies, applied to both solo instruments and an orchestra. The method of multi-channel auralizations involves obtaining multi-channel anechoic recordings of short melodies from various instruments and creating individual channel auralizations. These auralizations are then combined to create a total multi-channel auralization. Through many subjective studies, this process was shown to be effective in terms of improving the realism and source width of the auralizations in a number of cases, and also modeling different

  10. Comparison of receptor models for source apportionment of volatile organic compounds in Beijing, China

    International Nuclear Information System (INIS)

    Song Yu; Dai Wei; Shao Min; Liu Ying; Lu Sihua; Kuster, William; Goldan, Paul

    2008-01-01

    Identifying the sources of volatile organic compounds (VOCs) is key to reducing ground-level ozone and secondary organic aerosols (SOAs). Several receptor models have been developed to apportion sources, but an intercomparison of these models had not been performed for VOCs in China. In the present study, we compared VOC sources based on chemical mass balance (CMB), UNMIX, and positive matrix factorization (PMF) models. Gasoline-related sources, petrochemical production, and liquefied petroleum gas (LPG) were identified by all three models as the major contributors, with UNMIX and PMF producing quite similar results. The contributions of gasoline-related sources and LPG estimated by the CMB model were higher, and petrochemical emissions were lower than in the UNMIX and PMF results, possibly because the VOC profiles used in the CMB model were for fresh emissions and the profiles extracted from ambient measurements by the two factor analysis models were 'aged'. - VOC sources were similar for the three models, with CMB showing a higher estimate for vehicles
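
    Of the three receptor models compared, the chemical mass balance model is the simplest to sketch: measured ambient concentrations are expressed as a linear combination of fixed source profiles, and the non-negative source contributions are recovered by least squares. The profiles and ambient vector in the fragment below are made-up numbers used only to show the mechanics.

```python
import numpy as np
from scipy.optimize import nnls

# Columns = source profiles (mass fraction of each VOC species per source);
# the numbers are illustrative, not measured profiles.
species = ["ethane", "propane", "i-pentane", "toluene", "ethylene"]
profiles = np.array([
    #  gasoline  LPG    petrochem
    [0.05,      0.20,  0.10],
    [0.08,      0.45,  0.05],
    [0.30,      0.05,  0.05],
    [0.35,      0.05,  0.40],
    [0.22,      0.25,  0.40],
])
ambient = np.array([2.1, 4.0, 3.5, 6.0, 4.4])     # measured concentrations, ppbC (made up)

contributions, residual_norm = nnls(profiles, ambient)   # non-negative least squares
for name, c in zip(["gasoline-related", "LPG", "petrochemical"], contributions):
    print(f"{name:>16s}: {c:6.2f} ppbC")
print("residual norm:", residual_norm)
```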

  11. Comparison of receptor models for source apportionment of volatile organic compounds in Beijing, China

    Energy Technology Data Exchange (ETDEWEB)

    Song Yu; Dai Wei [Department of Environmental Sciences, Peking University, Beijing 100871 (China); Shao Min [State Joint Key Laboratory of Environmental Simulation and Pollution Control, Peking University, Beijing 100871 (China)], E-mail: mshao@pku.edu.cn; Liu Ying; Lu Sihua [State Joint Key Laboratory of Environmental Simulation and Pollution Control, Peking University, Beijing 100871 (China); Kuster, William; Goldan, Paul [Chemical Sciences Division, NOAA Earth System Research Laboratory, Boulder, CO 80305 (United States)

    2008-11-15

    Identifying the sources of volatile organic compounds (VOCs) is key to reducing ground-level ozone and secondary organic aerosols (SOAs). Several receptor models have been developed to apportion sources, but an intercomparison of these models had not been performed for VOCs in China. In the present study, we compared VOC sources based on chemical mass balance (CMB), UNMIX, and positive matrix factorization (PMF) models. Gasoline-related sources, petrochemical production, and liquefied petroleum gas (LPG) were identified by all three models as the major contributors, with UNMIX and PMF producing quite similar results. The contributions of gasoline-related sources and LPG estimated by the CMB model were higher, and petrochemical emissions were lower than in the UNMIX and PMF results, possibly because the VOC profiles used in the CMB model were for fresh emissions and the profiles extracted from ambient measurements by the two factor analysis models were 'aged'. - VOC sources were similar for the three models, with CMB showing a higher estimate for vehicles.

  12. Estimating Torque Imparted on Spacecraft Using Telemetry

    Science.gov (United States)

    Lee, Allan Y.; Wang, Eric K.; Macala, Glenn A.

    2013-01-01

    There have been a number of missions with spacecraft flying by planetary moons with atmospheres; there will be future missions with similar flybys. When a spacecraft such as Cassini flies by a moon with an atmosphere, the spacecraft will experience an atmospheric torque. This torque could be used to determine the density of the atmosphere. This is because the relation between the atmospheric torque vector and the atmosphere density could be established analytically using the mass properties of the spacecraft, known drag coefficient of objects in free-molecular flow, and the spacecraft velocity relative to the moon. The density estimated in this way could be used to check results measured by science instruments. Since the proposed methodology could estimate disturbance torque as small as 0.02 N-m, it could also be used to estimate disturbance torque imparted on the spacecraft during high-altitude flybys.
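
    The analytic relation mentioned above can be sketched for the simplest case of a single effective drag surface in free-molecular flow: the aerodynamic torque is r_cp × F with F = ½ ρ v² C_d A v̂, so a measured torque magnitude can be inverted for the density ρ. The geometry, drag coefficient, and numbers below are placeholders, not Cassini values.

```python
import numpy as np

def density_from_torque(torque_meas, r_cp, v_rel, c_d, area):
    """Invert |tau| = 0.5 * rho * |v|^2 * c_d * A * |r_cp x v_hat| for rho.

    torque_meas : measured aerodynamic torque vector, N m
    r_cp        : center-of-pressure offset from the center of mass, m
    v_rel       : spacecraft velocity relative to the atmosphere, m/s
    """
    v = np.linalg.norm(v_rel)
    lever = np.linalg.norm(np.cross(r_cp, v_rel / v))
    return 2.0 * np.linalg.norm(torque_meas) / (v**2 * c_d * area * lever)

# Placeholder numbers (not Cassini values): 0.02 N m of torque during a 6 km/s flyby.
rho = density_from_torque(
    torque_meas=np.array([0.0, 0.02, 0.0]),
    r_cp=np.array([1.5, 0.0, 0.2]),
    v_rel=np.array([6000.0, 0.0, 0.0]),
    c_d=2.2,
    area=20.0,
)
print(f"estimated density: {rho:.3e} kg/m^3")
```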

  13. Hanford tank residual waste - Contaminant source terms and release models

    International Nuclear Information System (INIS)

    Deutsch, William J.; Cantrell, Kirk J.; Krupka, Kenneth M.; Lindberg, Michael L.; Jeffery Serne, R.

    2011-01-01

    Highlights: → Residual waste from five Hanford spent fuel process storage tanks was evaluated. → Gibbsite is a common mineral in tanks with high Al concentrations. → Non-crystalline U-Na-C-O-P ± H phases are common in the U-rich residual. → Iron oxides/hydroxides have been identified in all residual waste samples. → Uranium release is highly dependent on waste and leachant compositions. - Abstract: Residual waste is expected to be left in 177 underground storage tanks after closure at the US Department of Energy's Hanford Site in Washington State, USA. In the long term, the residual wastes may represent a potential source of contamination to the subsurface environment. Residual materials that cannot be completely removed during the tank closure process are being studied to identify and characterize the solid phases and estimate the release of contaminants from these solids to water that might enter the closed tanks in the future. As of the end of 2009, residual waste from five tanks has been evaluated. Residual wastes from adjacent tanks C-202 and C-203 have high U concentrations of 24 and 59 wt.%, respectively, while residual wastes from nearby tanks C-103 and C-106 have low U concentrations of 0.4 and 0.03 wt.%, respectively. Aluminum concentrations are high (8.2-29.1 wt.%) in some tanks (C-103, C-106, and S-112) and relatively low ( 2 -saturated solution, or a CaCO3-saturated water. Uranium release concentrations are highly dependent on waste and leachant compositions with dissolved U concentrations one or two orders of magnitude higher in the tests with high U residual wastes, and also higher when leached with the CaCO3-saturated solution than with the Ca(OH)2-saturated solution. Technetium leachability is not as strongly dependent on the concentration of Tc in the waste, and it appears to be slightly more leachable by the Ca(OH)2-saturated solution than by the CaCO3-saturated solution. In general, Tc is much less leachable (<10 wt.% of the

  14. Analytic sensing for multi-layer spherical models with application to EEG source imaging

    OpenAIRE

    Kandaswamy, Djano; Blu, Thierry; Van De Ville, Dimitri

    2013-01-01

    Source imaging maps back boundary measurements to underlying generators within the domain; e. g., retrieving the parameters of the generating dipoles from electrical potential measurements on the scalp such as in electroencephalography (EEG). Fitting such a parametric source model is non-linear in the positions of the sources and renewed interest in mathematical imaging has led to several promising approaches. One important step in these methods is the application of a sensing principle that ...

  15. Parallel Beam Dynamics Simulation Tools for Future Light Source Linac Modeling

    International Nuclear Information System (INIS)

    Qiang, Ji; Pogorelov, Ilya v.; Ryne, Robert D.

    2007-01-01

    Large-scale modeling on parallel computers is playing an increasingly important role in the design of future light sources. Such modeling provides a means to accurately and efficiently explore issues such as limits to beam brightness, emittance preservation, the growth of instabilities, etc. Recently the IMPACT code suite was enhanced to be applicable to future light source design. Simulations with IMPACT-Z were performed using up to one billion simulation particles for the main linac of a future light source to study the microbunching instability. Combined with the time domain code IMPACT-T, it is now possible to perform large-scale start-to-end linac simulations for future light sources, including the injector, main linac, chicanes, and transfer lines. In this paper we provide an overview of the IMPACT code suite, its key capabilities, and recent enhancements pertinent to accelerator modeling for future linac-based light sources

  16. Sources of motivation, interpersonal conflict management styles, and leadership effectiveness: a structural model.

    Science.gov (United States)

    Barbuto, John E; Xu, Ye

    2006-02-01

    126 leaders and 624 employees were sampled to test the relationship between sources of motivation and conflict management styles of leaders and how these variables influence leadership effectiveness. Five sources of motivation measured by the Motivation Sources Inventory were tested: intrinsic process, instrumental, self-concept external, self-concept internal, and goal internalization. These sources of work motivation were associated with Rahim's modes of interpersonal conflict management (dominating, avoiding, obliging, compromising, and integrating) and with perceived leadership effectiveness. A structural equation model tested leaders' conflict management styles and leadership effectiveness based upon different sources of work motivation. The model explained variance for obliging (65%), dominating (79%), avoiding (76%), and compromising (68%), but explained little variance for integrating (7%). The model explained only 28% of the variance in leader effectiveness.

  17. A Bayesian Framework for Analysis of Pseudo-Spatial Models of Comparable Engineered Systems with Application to Spacecraft Anomaly Prediction Based on Precedent Data

    Science.gov (United States)

    Ndu, Obibobi Kamtochukwu

    To ensure that estimates of risk and reliability inform design and resource allocation decisions in the development of complex engineering systems, early engagement in the design life cycle is necessary. An unfortunate constraint on the accuracy of such estimates at this stage of concept development is the limited amount of high fidelity design and failure information available on the actual system under development. Applying the human ability to learn from experience and augment our state of knowledge to evolve better solutions mitigates this limitation. However, the challenge lies in formalizing a methodology that takes this highly abstract, but fundamentally human cognitive, ability and extending it to the field of risk analysis while maintaining the tenets of generalization, Bayesian inference, and probabilistic risk analysis. We introduce an integrated framework for inferring the reliability, or other probabilistic measures of interest, of a new system or a conceptual variant of an existing system. Abstractly, our framework is based on learning from the performance of precedent designs and then applying the acquired knowledge, appropriately adjusted based on degree of relevance, to the inference process. This dissertation presents a method for inferring properties of the conceptual variant using a pseudo-spatial model that describes the spatial configuration of the family of systems to which the concept belongs. Through non-metric multidimensional scaling, we formulate the pseudo-spatial model based on rank-ordered subjective expert perception of design similarity between systems that elucidate the psychological space of the family. By a novel extension of Kriging methods for analysis of geospatial data to our "pseudo-space of comparable engineered systems", we develop a Bayesian inference model that allows prediction of the probabilistic measure of interest.
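
    The two mechanical steps named in the abstract can be sketched as follows: embed the expert-assessed dissimilarities of the precedent systems with non-metric multidimensional scaling, then regress a reliability-related quantity over the resulting pseudo-space with a Gaussian process (used here as a simple stand-in for the Kriging step, without the Bayesian machinery of the dissertation). All data in the fragment are synthetic.

```python
import numpy as np
from sklearn.manifold import MDS
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Synthetic dissimilarity matrix for 6 precedent systems plus 1 new concept.
rng = np.random.default_rng(1)
pts = rng.normal(size=(7, 2))
diss = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)

# Non-metric MDS embeds the family into a 2-D "pseudo-space".
embed = MDS(n_components=2, metric=False, dissimilarity="precomputed",
            random_state=0).fit_transform(diss)

# Observed failure rates for the 6 precedent systems (synthetic); the 7th is the concept.
failure_rate = rng.uniform(1e-4, 1e-3, size=6)

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(embed[:6], np.log(failure_rate))
mean, std = gp.predict(embed[6:7], return_std=True)
print("inferred log failure rate for the concept:", mean[0], "+/-", std[0])
```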

  18. Total Variability Modeling using Source-specific Priors

    DEFF Research Database (Denmark)

    Shepstone, Sven Ewan; Lee, Kong Aik; Li, Haizhou

    2016-01-01

    sequence of an utterance. In both cases the prior for the latent variable is assumed to be non-informative, since for homogeneous datasets there is no gain in generality in using an informative prior. This work shows, in the heterogeneous case, that using informative priors for computing the posterior......, can lead to favorable results. We focus on modeling the priors using a minimum divergence criterion or factor analysis techniques. Tests on the NIST 2008 and 2010 Speaker Recognition Evaluation (SRE) datasets show that our proposed method beats four baselines: For i-vector extraction using an already...... trained matrix, for the short2-short3 task in SRE’08, five out of eight female and four out of eight male common conditions were improved. For the core-extended task in SRE’10, four out of nine female and six out of nine male common conditions were improved. When incorporating prior information...

  19. Optimal Autonomous Spacecraft Resiliency Maneuvers Using Metaheuristics

    Science.gov (United States)

    2014-09-15

    This work was accepted for publication by the American Institute of Aeronautics and Astronautics (AIAA) Journal of Spacecraft and Rockets in July 2014...publication in the AIAA Journal of Spacecraft and Rockets. Chapter 5 introduces an impulsive maneuvering strategy to deliver a spacecraft to its final...upon arrival, r2 and v2, respectively. The variable T2 determines the time of flight needed to make the maneuver, and the variable θ2 determines the

  20. Receptor modeling for source apportionment of polycyclic aromatic hydrocarbons in urban atmosphere.

    Science.gov (United States)

    Singh, Kunwar P; Malik, Amrita; Kumar, Ranjan; Saxena, Puneet; Sinha, Sarita

    2008-01-01

    This study reports source apportionment of polycyclic aromatic hydrocarbons (PAHs) in particulate depositions on vegetation foliage near a highway in the urban environment of Lucknow city (India) using the principal components analysis/absolute principal components scores (PCA/APCS) receptor modeling approach. The multivariate method enables identification of major PAH sources along with their quantitative contributions with respect to individual PAHs. The PCA identified three major sources of PAHs, viz. combustion, vehicular emissions, and diesel-based activities. The PCA/APCS receptor modeling approach revealed that the combustion sources (natural gas, wood, coal/coke, biomass) contributed 19-97% of various PAHs, vehicular emissions 0-70%, diesel-based sources 0-81%, and other miscellaneous sources 0-20% of different PAHs. The contributions of major pyrolytic and petrogenic sources to the total PAHs were 56 and 42%, respectively. Further, the combustion-related sources contribute the major fraction of the carcinogenic PAHs in the study area. The high correlation coefficient (R² > 0.75 for most PAHs) between the measured and predicted concentrations of PAHs suggests the applicability of the PCA/APCS receptor modeling approach for estimation of source contributions to the PAHs in particulates.
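
    A bare-bones rendering of the PCA/APCS mechanics is sketched below: standardize the species-concentration matrix, extract principal components, convert component scores to absolute scores by subtracting the score of a fictitious zero-concentration sample, and regress the total concentration on those absolute scores to obtain source contributions. The data are synthetic and the number of retained components is assumed.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

# X: samples x PAH-species concentration matrix (synthetic stand-in data).
rng = np.random.default_rng(2)
true_contrib = rng.gamma(2.0, 1.0, size=(60, 3))            # 3 hidden sources
true_profiles = rng.dirichlet(np.ones(8), size=3)            # 8 PAH species
X = true_contrib @ true_profiles + rng.normal(0, 0.02, size=(60, 8))

mean, std = X.mean(0), X.std(0)
Z = (X - mean) / std

pca = PCA(n_components=3).fit(Z)
scores = pca.transform(Z)

# Absolute principal component scores: subtract the score of a fictitious
# zero-concentration sample, as in the APCS method.
z0 = (np.zeros_like(mean) - mean) / std
apcs = scores - pca.transform(z0[None, :])

# Regress total PAH on APCS; the coefficients give each source's contribution.
total = X.sum(axis=1)
reg = LinearRegression().fit(apcs, total)
print("average source contributions to total PAH:",
      reg.coef_ * apcs.mean(axis=0), "plus intercept", reg.intercept_)
```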

  1. Source modelling of train noise - Literature review and some initial measurements

    Energy Technology Data Exchange (ETDEWEB)

    Zhang Xuetao; Jonasson, Hans; Holmberg, Kjell

    2000-07-01

    A literature review of source modelling of railway noise is reported. Measurements on a special test rig at Surahammar and on the new railway line between Arlanda and Stockholm City are reported and analyzed. In the analysis, the train is modelled as a number of point sources, with or without directivity, and each source is combined with analytical sound propagation theory to predict the sound propagation pattern best fitting the measured data. Wheel/rail rolling noise is considered to be the most important noise source. The rolling noise can be modelled as an array of moving point sources, which have a dipole-like horizontal directivity and some kind of vertical directivity. In general it is necessary to distribute the point sources at several heights. Based on our model analysis, the source heights for the rolling noise should be below the wheel axles, and the most important height is about a quarter of the wheel diameter above the railheads. When train speeds are greater than 250 km/h, aerodynamic noise will become important and even dominant. It may be important for low-frequency components only if the train speed is less than 220 km/h. Little data are available for these cases. It is believed that aerodynamic noise has a dipole-like directivity. Its spectrum depends on many factors: speed, railway system, type of train, bogies, wheels, pantograph, presence of barriers, and even weather conditions. Other sources such as fans, engine, transmission and carriage bodies are at most second-order noise sources, but for trains with a diesel locomotive engine the engine noise will be dominant if train speeds are less than about 100 km/h. The Nord 2000 comprehensive model for sound propagation outdoors, together with the source model based on the understandings above, can suitably handle the problems of railway noise propagation in one-third octave bands, although there are still problems left to be solved.
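
    The point-source representation described above can be sketched as an energetic sum of free-field contributions, Lp = Lw + 10 log10(Q/(4πr²)), with a dipole-like directivity factor Q(θ) = 3 cos²θ for each rolling-noise source. Source heights, spacing, and sound power levels in the fragment below are illustrative, not the report's fitted values.

```python
import numpy as np

def receiver_spl(sources, receiver):
    """Energetic sum of free-field contributions from incoherent point sources.

    Each source is (position [x, y, z], sound power level Lw in dB, unit vector of a
    dipole-like directivity axis). The directivity factor used here is Q(theta) = 3 cos^2(theta),
    a normalized dipole pattern."""
    p_sq = 0.0
    for pos, lw, axis in sources:
        d = np.asarray(receiver, float) - np.asarray(pos, float)
        r = np.linalg.norm(d)
        cos_t = np.dot(d / r, np.asarray(axis, float))
        q = 3.0 * cos_t ** 2
        p_sq += 10 ** (lw / 10) * q / (4 * np.pi * r ** 2)
    return 10 * np.log10(p_sq)

# Illustrative train: rolling-noise sources every 5 m at roughly a quarter wheel
# diameter above the railhead (~0.23 m), radiating as dipoles across the track.
sources = [((x, 0.0, 0.23), 110.0, (0.0, 1.0, 0.0)) for x in np.arange(0.0, 100.0, 5.0)]
print(f"SPL at 25 m: {receiver_spl(sources, (50.0, 25.0, 1.5)):.1f} dB")
```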

  2. Pollutant source identification model for water pollution incidents in small straight rivers based on genetic algorithm

    Science.gov (United States)

    Zhang, Shou-ping; Xin, Xiao-kang

    2017-07-01

    Identification of pollutant sources in river pollution incidents is an important and difficult task in emergency response, and an intelligent optimization method can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as an optimization search tool and applying an analytic solution formula of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: the model can accurately determine the pollutant amounts and positions, whether for a single pollution source or multiple sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with the analytic results for single-source amount and position identification; the relative errors are no more than 5 %. For cases of multi-point sources and multiple variables, there are some errors in the computed results because there exist many possible combinations of the pollution sources. But, with the help of previous experience to narrow the search scope, the relative errors of the identification results are less than 5 %, which proves that the established source identification model can be used to direct emergency responses.
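
    A compact sketch of the approach is given below: the analytic solution of the one-dimensional advection-dispersion equation for an instantaneous release serves as the forward model, and a small real-coded genetic algorithm (the paper uses a basic binary-coded GA; the operators here are illustrative) searches for the release mass and location that best reproduce synthetic downstream observations. River parameters and GA settings are placeholders.

```python
import numpy as np

U, D, A = 0.4, 5.0, 20.0     # flow velocity (m/s), dispersion (m^2/s), cross-section (m^2)

def concentration(x, t, mass, x0):
    """Analytic 1-D solution for an instantaneous release of `mass` at x0, t = 0."""
    return mass / (A * np.sqrt(4 * np.pi * D * t)) * np.exp(-(x - x0 - U * t) ** 2 / (4 * D * t))

# Synthetic "observations" at two stations for a true release (mass = 50 kg at x0 = 120 m).
x_obs = np.array([500.0, 800.0])
t_obs = np.array([600.0, 1500.0])
c_obs = concentration(x_obs, t_obs, 50.0, 120.0)

def fitness(pop):
    """Negative sum of squared errors between predicted and observed concentrations."""
    err = np.array([np.sum((concentration(x_obs, t_obs, m, x0) - c_obs) ** 2) for m, x0 in pop])
    return -err

def genetic_search(bounds, pop_size=40, generations=200, pc=0.8, pm=0.1):
    rng = np.random.default_rng(0)
    lo, hi = np.array(bounds).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(bounds)))
    for _ in range(generations):
        fit = fitness(pop)
        # tournament selection
        idx = rng.integers(pop_size, size=(pop_size, 2))
        parents = pop[np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])]
        # arithmetic crossover with a random partner
        partners = parents[rng.permutation(pop_size)]
        w = rng.random((pop_size, 1))
        children = np.where(rng.random((pop_size, 1)) < pc, w * parents + (1 - w) * partners, parents)
        # Gaussian mutation, clipped to the search bounds
        mut = rng.normal(0, 0.05 * (hi - lo), children.shape)
        children = np.clip(children + np.where(rng.random(children.shape) < pm, mut, 0.0), lo, hi)
        children[0] = pop[np.argmax(fit)]           # elitism
        pop = children
    fit = fitness(pop)
    return pop[np.argmax(fit)]

best_mass, best_x0 = genetic_search([(1.0, 200.0), (0.0, 400.0)])
print(f"recovered release: mass = {best_mass:.1f} kg at x0 = {best_x0:.1f} m")
```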

  3. Modeling and analysis of a transcritical rankine power cycle with a low grade heat source

    DEFF Research Database (Denmark)

    Nguyen, Chan; Veje, Christian

    efficiency, exergetic efficiency and specific net power output. A generic cycle configuration has been used for analysis of a geothermal energy heat source. This model has been validated against similar calculations using industrial waste heat as the energy source. Calculations are done with fixed...

  4. Free Open Source Software: Social Phenomenon, New Management, New Business Models

    Directory of Open Access Journals (Sweden)

    Žilvinas Jančoras

    2011-08-01

    Full Text Available In the paper, assumptions underlying the existence, development, financing, and competition models of free open source software are presented. Free software is considered as a social phenomenon and open source software as an environment for technological and managerial innovation. The social and business interaction processes are analyzed. Article in Lithuanian

  5. Parsing pyrogenic polycyclic aromatic hydrocarbons: forensic chemistry, receptor models, and source control policy.

    Science.gov (United States)

    O'Reilly, Kirk T; Pietari, Jaana; Boehm, Paul D

    2014-04-01

    A realistic understanding of contaminant sources is required to set appropriate control policy. Forensic chemical methods can be powerful tools in source characterization and identification, but they require a multiple-lines-of-evidence approach. Atmospheric receptor models, such as the US Environmental Protection Agency (USEPA)'s chemical mass balance (CMB), are increasingly being used to evaluate sources of pyrogenic polycyclic aromatic hydrocarbons (PAHs) in sediments. This paper describes the assumptions underlying receptor models and discusses challenges in complying with these assumptions in practice. Given the variability within, and the similarity among, pyrogenic PAH source types, model outputs are sensitive to specific inputs, and parsing among some source types may not be possible. Although still useful for identifying potential sources, the technical specialist applying these methods must describe both the results and their inherent uncertainties in a way that is understandable to nontechnical policy makers. The authors present an example case study concerning an investigation of a class of parking-lot sealers as a significant source of PAHs in urban sediment. Principal component analysis is used to evaluate published CMB model inputs and outputs. Targeted analyses of 2 areas where bans have been implemented are included. The results do not support the claim that parking-lot sealers are a significant source of PAHs in urban sediments. © 2013 SETAC.

  6. Determining Spacecraft Reaction Wheel Friction Parameters

    Science.gov (United States)

    Sarani, Siamak

    2009-01-01

    Software was developed to characterize the drag in each of the Cassini spacecraft's Reaction Wheel Assemblies (RWAs) to determine the RWA friction parameters. This tool measures the drag torque of RWAs for not only the high spin rates (greater than 250 RPM), but also the low spin rates (less than 250 RPM) where there is a lack of an elastohydrodynamic boundary layer in the bearings. RWA rate and drag torque profiles as functions of time are collected via telemetry once every 4 seconds and once every 8 seconds, respectively. Intermediate processing steps single-out the coast-down regions. A nonlinear model for the drag torque as a function of RWA spin rate is incorporated in order to characterize the low spin rate regime. The tool then uses a nonlinear parameter optimization algorithm based on the Nelder-Mead simplex method to determine the viscous coefficient, the Dahl friction, and the two parameters that account for the low spin-rate behavior.
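
    The fitting step can be sketched as follows: assume a parametric drag-torque model with a viscous term, a constant Dahl-like term, and a low-spin-rate boundary-lubrication term, infer the drag torque from coast-down telemetry as -J dω/dt, and fit the parameters with the Nelder-Mead simplex method. The functional form, wheel inertia, and noise levels below are illustrative assumptions, not the Cassini tool's actual model.

```python
import numpy as np
from scipy.optimize import minimize

J_WHEEL = 0.16      # wheel inertia, kg m^2 (placeholder, not the Cassini value)

def drag_torque(omega, c_v, tau_c, a_low, w_low):
    """Assumed drag model: viscous + constant (Dahl-like) + low-rate boundary-lubrication term."""
    return c_v * omega + np.sign(omega) * (tau_c + a_low * np.exp(-np.abs(omega) / w_low))

# Synthetic coast-down "telemetry": integrate J dw/dt = -T_drag(w), then add rate noise.
true_p = (2.0e-5, 4.0e-3, 6.0e-3, 15.0)
dt, n = 4.0, 110
omega = np.empty(n); omega[0] = 220 * 2 * np.pi / 60          # start near 220 RPM
for k in range(n - 1):
    omega[k + 1] = omega[k] - dt / J_WHEEL * drag_torque(omega[k], *true_p)
rng = np.random.default_rng(3)
rate_meas = omega + rng.normal(0.0, 0.01, n)

# Drag torque inferred from telemetry: T = -J * d(omega)/dt (central differences).
torque_meas = -J_WHEEL * np.gradient(rate_meas, dt)

def cost(p):
    """Sum of squared residuals between the parametric model and the inferred torque."""
    return np.sum((drag_torque(rate_meas, *p) - torque_meas) ** 2)

res = minimize(cost, x0=[1e-5, 1e-3, 1e-3, 10.0], method="Nelder-Mead",
               options={"maxiter": 5000, "xatol": 1e-9, "fatol": 1e-12})
print("fitted [c_v, tau_c, a_low, w_low]:", res.x)
```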

  7. Cometary dust size distributions from flyby spacecraft

    International Nuclear Information System (INIS)

    Divine, N.

    1988-01-01

    Prior to the Halley flybys in 1986, the distribution of cometary dust grains with particle size was approximated using models which provided reasonable fits to the dynamics of dust tails, anti-tails, and infrared spectra. These distributions have since been improved using fluence data (i.e., particle fluxes integrated over time along the flyby trajectory) from three spacecraft. The fluence-derived distributions are appropriate for comparison with simultaneous infrared photometry (from Earth) because they sample the particles in the same way as the IR data do (along the line of sight) and because they are directly proportional to the concentration distribution in that region of the coma which dominates the IR emission

  8. Human factors issues for interstellar spacecraft

    Science.gov (United States)

    Cohen, Marc M.; Brody, Adam R.

    1991-01-01

    Developments in research on space human factors are reviewed in the context of a self-sustaining interstellar spacecraft based on the notion of traveling space settlements. Assumptions about interstellar travel are set forth addressing costs, mission durations, and the need for multigenerational space colonies. The model of human motivation by Maslow (1970) is examined and directly related to the design of space habitat architecture. Human-factors technology issues encompass the human-machine interface, crew selection and training, and the development of spaceship infrastructure during transtellar flight. A scenario for feasible interstellar travel is based on a speed of 0.5c, a timeframe of about 100 yr, and an expandable multigenerational crew of about 100 members. Crew training is identified as a critical human-factors issue requiring the development of perceptual and cognitive aids such as expert systems and virtual reality.

  9. Two Model-Based Methods for Policy Analyses of Fine Particulate Matter Control in China: Source Apportionment and Source Sensitivity

    Science.gov (United States)

    Li, X.; Zhang, Y.; Zheng, B.; Zhang, Q.; He, K.

    2013-12-01

    Anthropogenic emissions have been controlled in recent years in China to mitigate fine particulate matter (PM2.5) pollution. Recent studies show that sulfur dioxide (SO2)-only control cannot reduce total PM2.5 levels efficiently. Other species such as nitrogen oxides, ammonia, black carbon, and organic carbon may be equally important during particular seasons. Furthermore, each species is emitted from several anthropogenic sectors (e.g., industry, power plants, transportation, residential, and agriculture). On the other hand, the contribution of one emission sector to PM2.5 represents the contributions of all species in this sector. In this work, two model-based methods are used to identify the emission sectors and areas most influential to PM2.5. The first method is source apportionment (SA) based on the Particulate Source Apportionment Technology (PSAT) available in the Comprehensive Air Quality Model with extensions (CAMx) driven by meteorological predictions of the Weather Research and Forecast (WRF) model. The second method is source sensitivity (SS) based on an adjoint integration technique (AIT) available in the GEOS-Chem model. The SA method attributes simulated PM2.5 concentrations to each emission group, while the SS method calculates their sensitivity to each emission group, accounting for the non-linear relationship between PM2.5 and its precursors. Despite their differences, the complementary nature of the two methods enables a complete analysis of source-receptor relationships to support emission control policies. Our objectives are to quantify the contributions of each emission group/area to PM2.5 in the receptor areas and to intercompare results from the two methods to gain a comprehensive understanding of the role of emission sources in PM2.5 formation. The results will be compared in terms of the magnitudes and rankings of SS or SA of emitted species and emission groups/areas. GEOS-Chem with AIT is applied over East Asia at a horizontal grid

  10. Operationally Responsive Spacecraft Subsystem, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Saber Astronautics proposes spacecraft subsystem control software which can autonomously reconfigure avionics for best performance during various mission conditions....

  11. Fecal indicator organism modeling and microbial source tracking in environmental waters: Chapter 3.4.6

    Science.gov (United States)

    Nevers, Meredith; Byappanahalli, Muruleedhara; Phanikumar, Mantha S.; Whitman, Richard L.

    2016-01-01

    Mathematical models have been widely applied to surface waters to estimate rates of settling, resuspension, flow, dispersion, and advection in order to calculate movement of particles that influence water quality. Of particular interest are the movement, survival, and persistence of microbial pathogens or their surrogates, which may contaminate recreational water, drinking water, or shellfish. Most models devoted to microbial water quality have been focused on fecal indicator organisms (FIO), which act as surrogates for pathogens and viruses. Process-based modeling and statistical modeling have been used to track contamination events to source and to predict future events. The use of these two types of models requires different levels of expertise and input; process-based models rely on theoretical physical constructs to explain present conditions and biological distribution, while data-based, statistical models use extant paired data to do the same. The selection of the appropriate model and interpretation of results is critical to proper use of these tools in microbial source tracking. Integration of the modeling approaches could provide insight for tracking and predicting contamination events in real time. A review of modeling efforts reveals that process-based modeling has great promise for microbial source tracking efforts; further, combining the understanding of physical processes influencing FIO contamination developed with process-based models and molecular characterization of the population by gene-based (i.e., biological) or chemical markers may be an effective approach for locating sources and remediating contamination in order to better protect human health.

  12. Modelling Nd-isotopes with a coarse resolution ocean circulation model: Sensitivities to model parameters and source/sink distributions

    International Nuclear Information System (INIS)

    Rempfer, Johannes; Stocker, Thomas F.; Joos, Fortunat; Dutay, Jean-Claude; Siddall, Mark

    2011-01-01

    The neodymium (Nd) isotopic composition (εNd) of seawater is a quasi-conservative tracer of water mass mixing and is assumed to hold great potential for paleo-oceanographic studies. Here we present a comprehensive approach for the simulation of the two neodymium isotopes 143Nd and 144Nd using the Bern3D model, a low resolution ocean model. The high computational efficiency of the Bern3D model in conjunction with our comprehensive approach allows us to systematically and extensively explore the sensitivity of Nd concentrations and εNd to the parametrisation of sources and sinks. Previous studies have been restricted in doing so either by the chosen approach or by computational costs. Our study thus presents the most comprehensive survey of the marine Nd cycle to date. Our model simulates both Nd concentrations and εNd in good agreement with observations. εNd co-varies with salinity, thus underlining its potential as a water mass proxy. Results confirm that the continental margins are required as a Nd source to simulate Nd concentrations and εNd consistent with observations. We estimate this source to be slightly smaller than reported in previous studies and find that, above a certain magnitude, its size affects εNd only to a small extent. On the other hand, the parametrisation of reversible scavenging considerably affects the ability of the model to simulate both Nd concentrations and εNd. Furthermore, despite their small contribution, we find dust and rivers to be important components of the Nd cycle. In additional experiments, we systematically varied the diapycnal diffusivity as well as the Atlantic-to-Pacific freshwater flux to explore the sensitivity of Nd concentrations and its isotopic signature to the strength and geometry of the overturning circulation. These experiments reveal that Nd concentrations and εNd are comparatively little affected by variations in diapycnal diffusivity and the Atlantic-to-Pacific freshwater flux.
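
    For reference, the εNd notation used in this record expresses a measured 143Nd/144Nd ratio as a part-per-10,000 deviation from the CHUR reference value (0.512638); a minimal sketch in Python, with an invented sample ratio:

        # epsilon-Nd: deviation of 143Nd/144Nd from the CHUR reference, in parts per 10,000.
        CHUR_143_144 = 0.512638

        def epsilon_nd(ratio_143_144):
            return (ratio_143_144 / CHUR_143_144 - 1.0) * 1.0e4

        # Illustrative (made-up) seawater-like ratio; yields roughly -9.5 epsilon units.
        print(round(epsilon_nd(0.51215), 2))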

  13. AutoLens: Automated Modeling of a Strong Lens's Light, Mass and Source

    Science.gov (United States)

    Nightingale, J. W.; Dye, S.; Massey, Richard J.

    2018-05-01

    This work presents AutoLens, the first entirely automated modeling suite for the analysis of galaxy-scale strong gravitational lenses. AutoLens simultaneously models the lens galaxy's light and mass whilst reconstructing the extended source galaxy on an adaptive pixel-grid. The method's approach to source-plane discretization is amorphous, adapting its clustering and regularization to the intrinsic properties of the lensed source. The lens's light is fitted using a superposition of Sersic functions, allowing AutoLens to cleanly deblend its light from the source. Single-component mass models representing the lens's total mass density profile are demonstrated, which in conjunction with light modeling can detect central images using a centrally cored profile. Decomposed mass modeling is also shown, which can fully decouple a lens's light and dark matter and determine whether the two components are geometrically aligned. The complexity of the light and mass models is automatically chosen via Bayesian model comparison. These steps form AutoLens's automated analysis pipeline, such that all results in this work are generated without any user intervention. This is rigorously tested on a large suite of simulated images, assessing its performance on a broad range of lens profiles, source morphologies, and lensing geometries. The method's performance is excellent, with accurate light, mass, and source profiles inferred for data sets representative of both existing Hubble imaging and future Euclid wide-field observations.

  14. An incentive-based source separation model for sustainable municipal solid waste management in China.

    Science.gov (United States)

    Xu, Wanying; Zhou, Chuanbin; Lan, Yajun; Jin, Jiasheng; Cao, Aixin

    2015-05-01

    Municipal solid waste (MSW) management (MSWM) is most important and challenging in large urban communities. Sound community-based waste management systems normally include waste reduction and material recycling elements, often entailing the separation of recyclable materials by the residents. To increase the efficiency of source separation and recycling, an incentive-based source separation model was designed and this model was tested in 76 households in Guiyang, a city of almost three million people in southwest China. This model embraced the concepts of rewarding households for sorting organic waste, government funds for waste reduction, and introducing small recycling enterprises for promoting source separation. Results show that after one year of operation, the waste reduction rate was 87.3%, and the comprehensive net benefit under the incentive-based source separation model increased by 18.3 CNY tonne(-1) (2.4 Euros tonne(-1)), compared to that under the normal model. The stakeholder analysis (SA) shows that the centralized MSW disposal enterprises had minimum interest and may oppose the start-up of a new recycling system, while small recycling enterprises had a primary interest in promoting the incentive-based source separation model, but they had the least ability to make any change to the current recycling system. The strategies for promoting this incentive-based source separation model are also discussed in this study. © The Author(s) 2015.

  15. Skull Defects in Finite Element Head Models for Source Reconstruction from Magnetoencephalography Signals

    Science.gov (United States)

    Lau, Stephan; Güllmar, Daniel; Flemming, Lars; Grayden, David B.; Cook, Mark J.; Wolters, Carsten H.; Haueisen, Jens

    2016-01-01

    Magnetoencephalography (MEG) signals are influenced by skull defects. However, there is a lack of evidence of this influence during source reconstruction. Our objectives are to characterize errors in source reconstruction from MEG signals due to ignoring skull defects and to assess the ability of an exact finite element head model to eliminate such errors. A detailed finite element model of the head of a rabbit used in a physical experiment was constructed from magnetic resonance and co-registered computer tomography imaging that differentiated nine tissue types. Sources of the MEG measurements above intact skull and above skull defects respectively were reconstructed using a finite element model with the intact skull and one incorporating the skull defects. The forward simulation of the MEG signals reproduced the experimentally observed characteristic magnitude and topography changes due to skull defects. Sources reconstructed from measured MEG signals above intact skull matched the known physical locations and orientations. Ignoring skull defects in the head model during reconstruction displaced sources under a skull defect away from that defect. Sources next to a defect were reoriented. When skull defects, with their physical conductivity, were incorporated in the head model, the location and orientation errors were mostly eliminated. The conductivity of the skull defect material non-uniformly modulated the influence on MEG signals. We propose concrete guidelines for taking into account conducting skull defects during MEG coil placement and modeling. Exact finite element head models can improve localization of brain function, specifically after surgery. PMID:27092044

  16. Spacecraft on-orbit deployment anomalies - What can be done?

    Science.gov (United States)

    Freeman, Michael T.

    1993-04-01

    Modern communications satellites rely heavily upon deployable appendages (e.g., solar arrays, communications antennas) to perform vital functions that enable the spacecraft to effectively conduct mission objectives. Communications and telemetry antennas provide the radiofrequency link between the spacecraft and the earth ground station, permitting data to be transmitted and received from the satellite. Solar arrays serve as the principal source of electrical energy to the satellite, and recharge internal batteries during operation. However, since satellites cannot carry backup systems, if a solar array fails to deploy, the mission is lost. This article examines the subject of on-orbit anomalies related to the deployment of spacecraft appendages, and possible causes of such failures. Topics discussed shall include mechanical launch loading, on-orbit thermal and solar concerns, reliability of spacecraft pyrotechnics, and practical limitations of ground-based deployment testing. Of particular significance, the article will feature an in-depth look at the lessons learned from the successful recovery of the Telesat Canada Anik-E2 satellite in 1991.

  17. Plasma Interactions with Spacecraft. Volume 2, NASCAP-2K Scientific Documentation for Version 4.1

    Science.gov (United States)

    2011-04-15

    This document describes the physics and numeric models used in the NASCAP-2K surface charging code for assessing the effects of spacecraft-generated plasma environments on spacecraft systems, including surface charging from orbit-limited currents. In the charging model, the sheath surface is taken as the equipotential surface at a normalized potential of ±ln 2; this choice is made because the attracted species is absorbed by the sheath.

  18. Small Area Model-Based Estimators Using Big Data Sources

    Directory of Open Access Journals (Sweden)

    Marchetti Stefano

    2015-06-01

    The timely, accurate monitoring of social indicators, such as poverty or inequality, on a fine-grained spatial and temporal scale is a crucial tool for understanding social phenomena and policymaking, but poses a great challenge to official statistics. This article argues that an interdisciplinary approach, combining the body of statistical research in small area estimation with the body of research in social data mining based on Big Data, can provide novel means to tackle this problem successfully. Big Data derived from the digital crumbs that humans leave behind in their daily activities are in fact providing ever more accurate proxies of social life. Social data mining from these data, coupled with advanced model-based techniques for fine-grained estimates, have the potential to provide a novel microscope through which to view and understand social complexity. This article suggests three ways to use Big Data together with small area estimation techniques, and shows how Big Data has the potential to mirror aspects of well-being and other socioeconomic phenomena.

  19. Coordinated polar spacecraft, geosynchronous spacecraft, and ground-based observations of magnetopause processes and their coupling to the ionosphere

    Directory of Open Access Journals (Sweden)

    G. Le

    2004-12-01

    In this paper, we present in-situ observations of processes occurring at the magnetopause and vicinity, including surface waves, oscillatory magnetospheric field lines, and flux transfer events, and coordinated observations at geosynchronous orbit by the GOES spacecraft, and on the ground by the CANOPUS and 210° Magnetic Meridian (210MM) magnetometer arrays. On 7 February 2002, during a high-speed solar wind stream, the Polar spacecraft was skimming the magnetopause in a post-noon meridian plane for ~3 h. During this interval, it made two short excursions and a few partial crossings into the magnetosheath and observed quasi-periodic cold ion bursts in the region adjacent to the magnetopause current layer. The multiple magnetopause crossings, as well as the velocity of the cold ion bursts, indicate that the magnetopause was oscillating with an ~6-min period. Simultaneous observations of Pc5 waves at geosynchronous orbit by the GOES spacecraft and on the ground by the CANOPUS magnetometer array reveal that these magnetospheric pulsations were forced oscillations of magnetic field lines directly driven by the magnetopause oscillations. The magnetospheric pulsations occurred only in a limited longitudinal region in the post-noon dayside sector, and were not a global phenomenon, as one would expect for global field line resonance. Thus, the magnetopause oscillations at the source were also limited to a localized region spanning ~4 h in local time. These observations suggest that it is unlikely that the Kelvin-Helmholtz instability and/or fluctuations in the solar wind dynamic pressure were the direct driving mechanisms for the observed boundary oscillations. Instead, the likely mechanism for the localized boundary oscillations was pulsed reconnection at the magnetopause occurring along the X-line extending over the same 4-h region. The Pc5 band pressure fluctuations commonly seen in high-speed solar wind streams may modulate the reconnection rate as an

  20. Major models and data sources for residential and commercial sector energy conservation analysis. Final report

    Energy Technology Data Exchange (ETDEWEB)

    1980-09-01

    Major models and data sources are reviewed that can be used for energy-conservation analysis in the residential and commercial sectors to provide an introduction to the information that can or is available to DOE in order to further its efforts in analyzing and quantifying their policy and program requirements. Models and data sources examined in the residential sector are: ORNL Residential Energy Model; BECOM; NEPOOL; MATH/CHRDS; NIECS; Energy Consumption Data Base: Household Sector; Patterns of Energy Use by Electrical Appliances Data Base; Annual Housing Survey; 1970 Census of Housing; AIA Research Corporation Data Base; RECS; Solar Market Development Model; and ORNL Buildings Energy Use Data Book. Models and data sources examined in the commercial sector are: ORNL Commercial Sector Model of Energy Demand; BECOM; NEPOOL; Energy Consumption Data Base: Commercial Sector; F.W. Dodge Data Base; NFIB Energy Report for Small Businesses; ADL Commercial Sector Energy Use Data Base; AIA Research Corporation Data Base; Nonresidential Buildings Surveys of Energy Consumption; General Electric Co: Commercial Sector Data Base; The BOMA Commercial Sector Data Base; The Tishman-Syska and Hennessy Data Base; The NEMA Commercial Sector Data Base; ORNL Buildings Energy Use Data Book; and Solar Market Development Model. Purpose; basis for model structure; policy variables and parameters; level of regional, sectoral, and fuels detail; outputs; input requirements; sources of data; computer accessibility and requirements; and a bibliography are provided for each model and data source.

  1. Martian methane plume models for defining Mars rover methane source search strategies

    Science.gov (United States)

    Nicol, Christopher; Ellery, Alex; Lynch, Brian; Cloutis, Ed

    2018-07-01

    The detection of atmospheric methane on Mars implies an active methane source. This introduces the possibility of a biotic source with the implied need to determine whether the methane is indeed biotic in nature or geologically generated. There is a clear need for robotic algorithms which are capable of manoeuvring a rover through a methane plume on Mars to locate its source. We explore aspects of Mars methane plume modelling to reveal complex dynamics characterized by advection and diffusion. A statistical analysis of the plume model has been performed and compared to analyses of terrestrial plume models. Finally, we consider a robotic search strategy to find a methane plume source. We find that gradient-based techniques are ineffective, but that more sophisticated model-based search strategies are unlikely to be available in near-term rover missions.
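
    A minimal sketch (not the authors' algorithm) of why naive gradient-following struggles on such a plume: with a patchy, noisy concentration field, finite-difference gradients estimated from point measurements frequently point away from the source. The field, noise model, and step sizes below are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(0)

        def concentration(x, y):
            # Smooth Gaussian "plume" centred at the origin, corrupted by multiplicative
            # noise as a crude stand-in for an intermittent, advected methane plume.
            return np.exp(-(x**2 + y**2) / 50.0) * rng.lognormal(mean=0.0, sigma=1.0)

        pos = np.array([20.0, 15.0])      # rover start; the source sits at (0, 0)
        step, delta = 1.0, 0.5

        for _ in range(200):
            # Finite-difference gradient estimate from noisy point measurements.
            gx = concentration(pos[0] + delta, pos[1]) - concentration(pos[0] - delta, pos[1])
            gy = concentration(pos[0], pos[1] + delta) - concentration(pos[0], pos[1] - delta)
            g = np.array([gx, gy])
            if np.linalg.norm(g) > 0:
                pos += step * g / np.linalg.norm(g)

        print("final distance from source:", np.linalg.norm(pos))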

  2. Development of Realistic Head Models for Electromagnetic Source Imaging of the Human Brain

    National Research Council Canada - National Science Library

    Akalin, Z

    2001-01-01

    In this work, a methodology is developed to solve the forward problem of electromagnetic source imaging using realistic head models. For this purpose, first, segmentation of the 3-dimensional MR head...

  3. Variability of dynamic source parameters inferred from kinematic models of past earthquakes

    KAUST Repository

    Causse, M.; Dalguer, L. A.; Mai, Paul Martin

    2013-01-01

    We analyse the scaling and distribution of average dynamic source properties (fracture energy, static, dynamic and apparent stress drops) using 31 kinematic inversion models from 21 crustal earthquakes. Shear-stress histories are computed by solving

  4. Effects of Host-rock Fracturing on Elastic-deformation Source Models of Volcano Deflation.

    Science.gov (United States)

    Holohan, Eoghan P; Sudhaus, Henriette; Walter, Thomas R; Schöpfer, Martin P J; Walsh, John J

    2017-09-08

    Volcanoes commonly inflate or deflate during episodes of unrest or eruption. Continuum mechanics models that assume linear elastic deformation of the Earth's crust are routinely used to invert the observed ground motions. The source(s) of deformation in such models are generally interpreted in terms of magma bodies or pathways, and thus form a basis for hazard assessment and mitigation. Using discontinuum mechanics models, we show how host-rock fracturing (i.e. non-elastic deformation) during drainage of a magma body can progressively change the shape and depth of an elastic-deformation source. We argue that this effect explains the marked spatio-temporal changes in source model attributes inferred for the March-April 2007 eruption of Piton de la Fournaise volcano, La Reunion. We find that pronounced deflation-related host-rock fracturing can: (1) yield inclined source model geometries for a horizontal magma body; (2) cause significant upward migration of an elastic-deformation source, leading to underestimation of the true magma body depth and potentially to a misinterpretation of ascending magma; and (3) at least partly explain underestimation by elastic-deformation sources of changes in sub-surface magma volume.

  5. Validation and calibration of structural models that combine information from multiple sources.

    Science.gov (United States)

    Dahabreh, Issa J; Wong, John B; Trikalinos, Thomas A

    2017-02-01

    Mathematical models that attempt to capture structural relationships between their components and combine information from multiple sources are increasingly used in medicine. Areas covered: We provide an overview of methods for model validation and calibration and survey studies comparing alternative approaches. Expert commentary: Model validation entails a confrontation of models with data, background knowledge, and other models, and can inform judgments about model credibility. Calibration involves selecting parameter values to improve the agreement of model outputs with data. When the goal of modeling is quantitative inference on the effects of interventions or forecasting, calibration can be viewed as estimation. This view clarifies issues related to parameter identifiability and facilitates formal model validation and the examination of consistency among different sources of information. In contrast, when the goal of modeling is the generation of qualitative insights about the modeled phenomenon, calibration is a rather informal process for selecting inputs that result in model behavior that roughly reproduces select aspects of the modeled phenomenon and cannot be equated to an estimation procedure. Current empirical research on validation and calibration methods consists primarily of methodological appraisals or case-studies of alternative techniques and cannot address the numerous complex and multifaceted methodological decisions that modelers must make. Further research is needed on different approaches for developing and validating complex models that combine evidence from multiple sources.

  6. Attitude tracking control of flexible spacecraft with large amplitude slosh

    Science.gov (United States)

    Deng, Mingle; Yue, Baozeng

    2017-12-01

    This paper is focused on attitude tracking control of a spacecraft that is equipped with flexible appendage and partially filled liquid propellant tank. The large amplitude liquid slosh is included by using a moving pulsating ball model that is further improved to estimate the settling location of liquid in microgravity or a zero-g environment. The flexible appendage is modelled as a three-dimensional Bernoulli-Euler beam, and the assumed modal method is employed. A hybrid controller that combines sliding mode control with an adaptive algorithm is designed for spacecraft to perform attitude tracking. The proposed controller has proved to be asymptotically stable. A nonlinear model for the overall coupled system including spacecraft attitude dynamics, liquid slosh, structural vibration and control action is established. Numerical simulation results are presented to show the dynamic behaviors of the coupled system and to verify the effectiveness of the control approach when the spacecraft undergoes the disturbance produced by large amplitude slosh and appendage vibration. Lastly, the designed adaptive algorithm is found to be effective to improve the precision of attitude tracking.

  7. Added-value joint source modelling of seismic and geodetic data

    Science.gov (United States)

    Sudhaus, Henriette; Heimann, Sebastian; Walter, Thomas R.; Krueger, Frank

    2013-04-01

    In tectonically active regions earthquake source studies strongly support the analysis of the current faulting processes as they reveal the location and geometry of active faults, the average slip released, or more. For source modelling of shallow, moderate to large earthquakes often a combination of geodetic (GPS, InSAR) and seismic data is used. A truly joint use of these data, however, usually takes place only on a higher modelling level, where some of the first-order characteristics (time, centroid location, fault orientation, moment) have been fixed already. These required basis model parameters have to be given, assumed or inferred in a previous, separate and highly non-linear modelling step using one of these data sets alone. We present a new earthquake rupture model implementation that realizes a fully combined data integration of surface displacement measurements and seismic data in a non-linear optimization of simple but extended planar ruptures. The model implementation allows for fast forward calculations of full seismograms and surface deformation and therefore enables us to use Monte Carlo global search algorithms. Furthermore, we benefit from the complementary character of seismic and geodetic data, e.g. the high definition of the source location from geodetic data and the sensitivity of the seismic data to moment release at larger depth. These increased constraints from the combined dataset make optimizations efficient, even for larger model parameter spaces and with a very limited amount of a priori assumptions on the source. A vital part of our approach is rigorous data weighting based on the empirically estimated data errors. We construct full data error variance-covariance matrices for geodetic data to account for correlated data noise and also weight the seismic data based on their signal-to-noise ratio. The estimation of the data errors and the fast forward modelling opens the door for Bayesian inferences of the source

  8. A Method of Auxiliary Sources Approach for Modelling the Impact of Ground Planes on Antenna

    DEFF Research Database (Denmark)

    Larsen, Niels Vesterdal; Breinbjerg, Olav

    2006-01-01

    The Method of Auxiliary Sources (MAS) is employed to model the impact of finite ground planes on the radiation from antennas. Two different antenna test cases are shown and the calculated results agree well with reference measurements.

  9. Energy models for commercial energy prediction and substitution of renewable energy sources

    International Nuclear Information System (INIS)

    Iniyan, S.; Suganthi, L.; Samuel, Anand A.

    2006-01-01

    In this paper, three models are presented, namely the Modified Econometric Mathematical (MEM) model, the Mathematical Programming Energy-Economy-Environment (MPEEE) model, and the Optimal Renewable Energy Mathematical (OREM) model. The actual demand for coal, oil and electricity is predicted using the MEM model based on economic, technological and environmental factors. The results were used in the MPEEE model, which determines the optimum allocation of commercial energy sources based on environmental limitations. The gap between the actual energy demand from the MEM model and the optimal energy use from the MPEEE model has to be met by renewable energy sources. The study develops an OREM model that would facilitate effective utilization of renewable energy sources in India, based on cost, efficiency, social acceptance, reliability, potential and demand. The economic variations in solar energy systems and the inclusion of an environmental constraint are also analyzed with the OREM model. The OREM model will help policy makers in the formulation and implementation of strategies concerning renewable energy sources in India for the next two decades.

  10. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans.

    Science.gov (United States)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-07

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients' CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology.

  11. A measurement-based generalized source model for Monte Carlo dose simulations of CT scans

    Science.gov (United States)

    Ming, Xin; Feng, Yuanming; Liu, Ransheng; Yang, Chengwen; Zhou, Li; Zhai, Hezheng; Deng, Jun

    2017-03-01

    The goal of this study is to develop a generalized source model for accurate Monte Carlo dose simulations of CT scans based solely on the measurement data without a priori knowledge of scanner specifications. The proposed generalized source model consists of an extended circular source located at x-ray target level with its energy spectrum, source distribution and fluence distribution derived from a set of measurement data conveniently available in the clinic. Specifically, the central axis percent depth dose (PDD) curves measured in water and the cone output factors measured in air were used to derive the energy spectrum and the source distribution respectively with a Levenberg-Marquardt algorithm. The in-air film measurement of fan-beam dose profiles at fixed gantry was back-projected to generate the fluence distribution of the source model. A benchmarked Monte Carlo user code was used to simulate the dose distributions in water with the developed source model as beam input. The feasibility and accuracy of the proposed source model was tested on a GE LightSpeed and a Philips Brilliance Big Bore multi-detector CT (MDCT) scanners available in our clinic. In general, the Monte Carlo simulations of the PDDs in water and dose profiles along lateral and longitudinal directions agreed with the measurements within 4%/1 mm for both CT scanners. The absolute dose comparison using two CTDI phantoms (16 cm and 32 cm in diameters) indicated a better than 5% agreement between the Monte Carlo-simulated and the ion chamber-measured doses at a variety of locations for the two scanners. Overall, this study demonstrated that a generalized source model can be constructed based only on a set of measurement data and used for accurate Monte Carlo dose simulations of patients’ CT scans, which would facilitate patient-specific CT organ dose estimation and cancer risk management in the diagnostic and therapeutic radiology.
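
    A schematic of the kind of Levenberg-Marquardt fit described above, i.e., solving for spectrum weights that reproduce a measured percent-depth-dose curve. The exponential kernels and attenuation coefficients below are invented placeholders, not the authors' beam model.

        import numpy as np
        from scipy.optimize import least_squares

        depth = np.linspace(0.5, 20.0, 40)               # depths in water (cm)
        mu = np.array([0.25, 0.18, 0.12, 0.08])          # placeholder attenuation coefficients (1/cm)
        kernels = np.exp(-np.outer(depth, mu))           # one mono-energetic depth-dose kernel per column

        true_w = np.array([0.1, 0.3, 0.4, 0.2])          # "unknown" spectrum weights
        pdd_meas = kernels @ true_w
        pdd_meas += 0.01 * np.random.default_rng(1).normal(size=pdd_meas.size)

        def residuals(w):
            # Difference between modelled and "measured" depth-dose values.
            return kernels @ w - pdd_meas

        fit = least_squares(residuals, x0=np.full(4, 0.25), method="lm")
        print("recovered weights:", np.round(fit.x, 3))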

  12. A Two-Temperature Open-Source CFD Model for Hypersonic Reacting Flows, Part One: Zero-Dimensional Analysis

    OpenAIRE

    Vincent Casseau; Rodrigo C. Palharini; Thomas J. Scanlon; Richard E. Brown

    2016-01-01

    A two-temperature CFD (computational fluid dynamics) solver is a prerequisite to any spacecraft re-entry numerical study that aims at producing results with a satisfactory level of accuracy within realistic timescales. In this respect, a new two-temperature CFD solver, hy2Foam, has been developed within the framework of the open-source CFD platform OpenFOAM for the prediction of hypersonic reacting flows. This solver makes the distinct juncture between the trans-rotational and multiple vibrat...

  13. Source Release Modeling for the Idaho National Engineering and Environmental Laboratory's Subsurface Disposal Area

    International Nuclear Information System (INIS)

    Becker, B.H.

    2002-01-01

    A source release model was developed to determine the release of contaminants into the shallow subsurface, as part of the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) evaluation at the Idaho National Engineering and Environmental Laboratory's (INEEL) Subsurface Disposal Area (SDA). The output of the source release model is used as input to the subsurface transport and biotic uptake models. The model allows the waste to be separated into areas that match the actual disposal units, which permits quantitative evaluation of each unit's relative contribution to the total risk and evaluation of selective remediation of individual disposal units within the SDA.

  14. Receptor modeling studies for the characterization of PM10 pollution sources in Belgrade

    Directory of Open Access Journals (Sweden)

    Mijić Zoran

    2012-01-01

    The objective of this study is to determine the major sources and potential source regions of PM10 over Belgrade, Serbia. The PM10 samples were collected from July 2003 to December 2006 in a highly urbanized area of Belgrade, and concentrations of Al, V, Cr, Mn, Fe, Ni, Cu, Zn, Cd and Pb were analyzed by atomic absorption spectrometry. The analysis of seasonal variations of PM10 mass and some element concentrations showed relatively higher concentrations in winter, which underlined the importance of local emission sources. The Unmix model was used for source apportionment purposes, and four main source profiles (fossil fuel combustion; traffic exhaust/regional transport from industrial centers; traffic-related particles/site-specific sources; and mineral/crustal matter) were identified. Among the resolved factors, fossil fuel combustion was the highest contributor (34%), followed by traffic/regional industry (26%). Conditional probability function (CPF) results identified possible directions of local sources. The potential source contribution function (PSCF) and concentration weighted trajectory (CWT) receptor models were used to identify the spatial source distribution and the contribution of regional-scale transported aerosols. [Project of the Ministry of Science of the Republic of Serbia, nos. III43007 and III41011]

  15. Modeling generalized interline power-flow controller (GIPFC using 48-pulse voltage source converters

    Directory of Open Access Journals (Sweden)

    Amir Ghorbani

    2018-05-01

    The generalized interline power-flow controller (GIPFC) is one of the voltage-source converter (VSC)-based flexible AC transmission system (FACTS) controllers that can independently regulate the power-flow over each transmission line of a multiline system. This paper presents the modeling and performance analysis of GIPFC based on 48-pulse voltage-source converters. This paper deals with a cascaded multilevel converter model, which is a 48-pulse (three-level) voltage source converter. The voltage source converter described in this paper is a harmonic neutralized, 48-pulse GTO converter. The GIPFC controller is based on d-q orthogonal coordinates. The algorithm is verified using simulations in the MATLAB/Simulink environment. Comparisons between the unified power flow controller (UPFC) and GIPFC are also included. Keywords: Generalized interline power-flow controller (GIPFC), Voltage source converter (VSC), 48-pulse GTO converter

  16. Effects of source shape on the numerical aperture factor with a geometrical-optics model.

    Science.gov (United States)

    Wan, Der-Shen; Schmit, Joanna; Novak, Erik

    2004-04-01

    We study the effects of an extended light source on the calibration of an interference microscope, also referred to as an optical profiler. Theoretical and experimental numerical aperture (NA) factors for circular and linear light sources along with collimated laser illumination demonstrate that the shape of the light source or effective aperture cone is critical for a correct NA factor calculation. In practice, more-accurate results for the NA factor are obtained when a linear approximation to the filament light source shape is used in a geometric model. We show that previously measured and derived NA factors show some discrepancies because a circular rather than linear approximation to the filament source was used in the modeling.

  17. Source Localization with Acoustic Sensor Arrays Using Generative Model Based Fitting with Sparse Constraints

    Directory of Open Access Journals (Sweden)

    Javier Macias-Guarasa

    2012-10-01

    This paper presents a novel approach for indoor acoustic source localization using sensor arrays. The proposed solution starts by defining a generative model, designed to explain the acoustic power maps obtained by Steered Response Power (SRP) strategies. An optimization approach is then proposed to fit the model to real input SRP data and estimate the position of the acoustic source. Adequately fitting the model to real SRP data, where noise and other unmodelled effects distort the ideal signal, is the core contribution of the paper. Two basic strategies in the optimization are proposed. First, sparse constraints in the parameters of the model are included, enforcing the number of simultaneous active sources to be limited. Second, subspace analysis is used to filter out portions of the input signal that cannot be explained by the model. Experimental results on a realistic speech database show statistically significant localization error reductions of up to 30% when compared with the SRP-PHAT strategies.

  18. The Analytical Repository Source-Term (AREST) model: Description and documentation

    International Nuclear Information System (INIS)

    Liebetrau, A.M.; Apted, M.J.; Engel, D.W.; Altenhofen, M.K.; Strachan, D.M.; Reid, C.R.; Windisch, C.F.; Erikson, R.L.; Johnson, K.I.

    1987-10-01

    The geologic repository system consists of several components, one of which is the engineered barrier system. The engineered barrier system interfaces with natural barriers that constitute the setting of the repository. A model that simulates the releases from the engineered barrier system into the natural barriers of the geosphere, called a source-term model, is an important component of any model for assessing the overall performance of the geologic repository system. The Analytical Repository Source-Term (AREST) model being developed is one such model. This report describes the current state of development of the AREST model and the code in which the model is implemented. The AREST model consists of three component models and five process models that describe the post-emplacement environment of a waste package. All of these components are combined within a probabilistic framework. The component models are a waste package containment (WPC) model that simulates the corrosion and degradation processes which eventually result in waste package containment failure; a waste package release (WPR) model that calculates the rates of radionuclide release from the failed waste package; and an engineered system release (ESR) model that controls the flow of information among all AREST components and process models and combines release output from the WPR model with failure times from the WPC model to produce estimates of total release. 167 refs., 40 figs., 12 tabs

  19. An Equivalent Source Method for Modelling the Global Lithospheric Magnetic Field

    DEFF Research Database (Denmark)

    Kother, Livia Kathleen; Hammer, Magnus Danel; Finlay, Chris

    2014-01-01

    We present a new technique for modelling the global lithospheric magnetic field at Earth's surface based on the estimation of equivalent potential field sources. As a demonstration we show an application to magnetic field measurements made by the CHAMP satellite during the period 2009-2010 when...... are also employed to minimize the influence of the ionospheric field. The model for the remaining lithospheric magnetic field consists of magnetic point sources (monopoles) arranged in an icosahedron grid. The corresponding source values are estimated using an iteratively reweighted least squares algorithm...... in the CHAOS-4 and MF7 models using more conventional spherical harmonic based approaches. Advantages of the equivalent source method include its local nature, allowing e.g. for regional grid refinement, and the ease of transforming to spherical harmonics when needed. Future applications will make use of Swarm...
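
    A compact sketch of the iteratively reweighted least squares step mentioned above, applied to a generic linear source-estimation problem d = G m with a few outlying data; the random design matrix simply stands in for the monopole-to-observation geometry, and the Huber-type weighting is an assumption for illustration.

        import numpy as np

        rng = np.random.default_rng(2)
        G = rng.normal(size=(200, 20))                 # stand-in for source-to-data kernel
        m_true = rng.normal(size=20)
        d = G @ m_true + rng.normal(scale=0.1, size=200)
        d[:5] += 5.0                                   # a few gross outliers

        m = np.linalg.lstsq(G, d, rcond=None)[0]       # ordinary least-squares start
        for _ in range(10):
            r = d - G @ m
            scale = 1.4826 * np.median(np.abs(r)) + 1e-12   # robust residual scale (MAD)
            k = 1.345 * scale                               # Huber tuning constant
            w = np.where(np.abs(r) <= k, 1.0, k / np.abs(r))
            m = np.linalg.solve(G.T @ (w[:, None] * G), G.T @ (w * d))

        print("max coefficient error:", float(np.max(np.abs(m - m_true))))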

  20. Autonomous spacecraft landing through human pre-attentive vision

    International Nuclear Information System (INIS)

    Schiavone, Giuseppina; Izzo, Dario; Simões, Luís F; De Croon, Guido C H E

    2012-01-01

    In this work, we exploit a computational model of human pre-attentive vision to guide the descent of a spacecraft on extraterrestrial bodies. Providing the spacecraft with high degrees of autonomy is a challenge for future space missions. Up to the present, major effort in this research field has been concentrated on hazard avoidance algorithms and landmark detection, often by reference to a priori maps ranked by scientists according to specific scientific criteria. Here, we present a bio-inspired approach based on the human ability to quickly select intrinsically salient targets in the visual scene; this ability is fundamental for fast decision-making processes in unpredictable and unknown circumstances. The proposed system integrates a simple model of the spacecraft and optimality principles which guarantee minimum fuel consumption during the landing procedure; detected salient sites are used for retargeting the spacecraft trajectory, under safety and reachability conditions. We compare the decisions taken by the proposed algorithm with those of a number of human subjects tested under the same conditions. Our results show how the developed algorithm is indistinguishable from the human subjects with respect to areas, occurrence and timing of the retargeting. (paper)

  1. A GIS-based atmospheric dispersion model for pollutants emitted by complex source areas.

    Science.gov (United States)

    Teggi, Sergio; Costanzini, Sofia; Ghermandi, Grazia; Malagoli, Carlotta; Vinceti, Marco

    2018-01-01

    Gaussian dispersion models are widely used to simulate the concentrations and deposition fluxes of pollutants emitted by source areas. Very often, the calculation time limits the number of sources and receptors, and the geometry of the sources must be simple and without holes. This paper presents CAREA, a new GIS-based Gaussian model for complex source areas. CAREA was coded in the Python language, and is largely based on a simplified formulation of the very popular and recognized AERMOD model. The model allows users to define in a GIS environment thousands of gridded or scattered receptors and thousands of complex sources with hundreds of vertices and holes. CAREA computes ground-level, or near ground-level, concentrations and dry deposition fluxes of pollutants. The input/output and the runs of the model can be completely managed in a GIS environment (e.g. inside a GIS project). The paper presents the CAREA formulation and its applications to very complex test cases. The tests show that the processing times are satisfactory and that the definition of sources and receptors and the output retrieval are quite easy in a GIS environment. CAREA and AERMOD are compared using simple and reproducible test cases. The comparison shows that CAREA satisfactorily reproduces AERMOD simulations and is considerably faster than AERMOD. Copyright © 2017 Elsevier B.V. All rights reserved.
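
    For orientation, the ground-level relation that Gaussian dispersion models of this family evaluate for each source-receptor pair can be sketched as follows; the power-law dispersion parameters are crude placeholders, not the AERMOD/CAREA parameterisation.

        import numpy as np

        def ground_level_concentration(q, u, x, y, h, a=0.08, b=0.06):
            """Ground-level concentration from a continuous point source (with ground reflection).

            q: emission rate (g/s), u: wind speed (m/s), x: downwind distance (m),
            y: crosswind offset (m), h: effective release height (m). The sigma_y and
            sigma_z power laws below are illustrative, not a real stability-class scheme.
            """
            sigma_y = a * x**0.9
            sigma_z = b * x**0.85
            return (q / (np.pi * u * sigma_y * sigma_z)
                    * np.exp(-y**2 / (2 * sigma_y**2))
                    * np.exp(-h**2 / (2 * sigma_z**2)))

        print(ground_level_concentration(q=1.0, u=3.0, x=500.0, y=20.0, h=10.0))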

  2. Modeling Source Water TOC Using Hydroclimate Variables and Local Polynomial Regression.

    Science.gov (United States)

    Samson, Carleigh C; Rajagopalan, Balaji; Summers, R Scott

    2016-04-19

    To control disinfection byproduct (DBP) formation in drinking water, an understanding of the source water total organic carbon (TOC) concentration variability can be critical. Previously, TOC concentrations in water treatment plant source waters have been modeled using streamflow data. However, the lack of streamflow data or unimpaired flow scenarios makes it difficult to model TOC. In addition, TOC variability under climate change further exacerbates the problem. Here we propose a modeling approach based on local polynomial regression that uses climate variables (e.g., temperature) and land surface variables (e.g., soil moisture) as predictors of TOC concentration, obviating the need for streamflow. The local polynomial approach has the ability to capture non-Gaussian and nonlinear features that might be present in the relationships. The utility of the methodology is demonstrated using source water quality and climate data in three case study locations with surface source waters including river and reservoir sources. The models show good predictive skill in general at these locations, with lower skill at locations with the most anthropogenic influences in their streams. Source water TOC predictive models can provide water treatment utilities important information for making treatment decisions for DBP regulation compliance under future climate scenarios.
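
    A minimal local-polynomial (locally weighted linear) regression sketch of the kind referenced above, estimating a TOC-like response from a single climate predictor with Gaussian kernel weights; the data are synthetic and the bandwidth is arbitrary.

        import numpy as np

        def local_linear_fit(x_train, y_train, x0, bandwidth):
            # Degree-1 local polynomial: weighted linear fit centred on x0.
            w = np.exp(-0.5 * ((x_train - x0) / bandwidth) ** 2)
            X = np.column_stack([np.ones_like(x_train), x_train - x0])
            beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y_train))
            return beta[0]                     # intercept = estimate at x0

        rng = np.random.default_rng(3)
        temp = np.sort(rng.uniform(0, 25, 150))      # synthetic temperature predictor
        toc = 3 + 0.15 * temp + np.sin(temp / 4) + rng.normal(scale=0.3, size=150)

        print([round(local_linear_fit(temp, toc, t, bandwidth=2.0), 2) for t in (5.0, 12.0, 20.0)])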

  3. Inter-comparison of receptor models for PM source apportionment: Case study in an industrial area

    Science.gov (United States)

    Viana, M.; Pandolfi, M.; Minguillón, M. C.; Querol, X.; Alastuey, A.; Monfort, E.; Celades, I.

    2008-05-01

    Receptor modelling techniques are used to identify and quantify the contributions from emission sources to the levels and major and trace components of ambient particulate matter (PM). A wide variety of receptor models are currently available, and consequently the comparability between models should be evaluated if source apportionment data are to be used as input in health effects studies or mitigation plans. Three of the most widespread receptor models (principal component analysis, PCA; positive matrix factorization, PMF; chemical mass balance, CMB) were applied to a single PM10 data set (n=328 samples, 2002-2005) obtained from an industrial area in NE Spain dedicated to ceramic production. Sensitivity and temporal trend analyses (using the Mann-Kendall test) were applied. Results evidenced the good overall performance of the three models (r2 > 0.83 and slope α > 0.91 between modelled and measured PM10 mass), with good agreement regarding source identification and high correlations between input (CMB) and output (PCA, PMF) source profiles. Larger differences were obtained regarding the quantification of source contributions (up to a factor of 4 in some cases). The combined application of different types of receptor models would address the limitations of each of the models by constructing a more robust solution based on their strengths. The authors suggest the combined use of factor analysis techniques (PCA, PMF) to identify and interpret emission sources, and to obtain a first quantification of their contributions to the PM mass, and the subsequent application of CMB. Further research is needed to ensure that source apportionment methods are robust enough for application to PM health effects assessments.
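
    As an illustration of the factor-analytic branch (PCA/PMF) of the receptor models compared here, a non-negative matrix factorization of a synthetic species-by-sample matrix can stand in for PMF; the profiles, contributions, and noise level below are invented.

        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(4)

        # Synthetic receptor data: 300 samples x 10 species generated from 3 "sources".
        profiles = np.abs(rng.normal(size=(3, 10)))           # source chemical profiles
        contributions = np.abs(rng.normal(size=(300, 3)))     # per-sample source strengths
        X = contributions @ profiles + 0.01 * np.abs(rng.normal(size=(300, 10)))

        model = NMF(n_components=3, init="nndsvda", max_iter=500, random_state=0)
        G = model.fit_transform(X)        # estimated contributions (analogous to PMF's G matrix)
        F = model.components_             # estimated profiles (analogous to PMF's F matrix)

        # Share of the modelled mass attributed to each factor.
        factor_mass = (G[:, :, None] * F[None, :, :]).sum(axis=(0, 2))
        print(np.round(factor_mass / X.sum(), 2))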

  4. TTEthernet for Integrated Spacecraft Networks

    Science.gov (United States)

    Loveless, Andrew

    2015-01-01

    Aerospace projects have traditionally employed federated avionics architectures, in which each computer system is designed to perform one specific function (e.g. navigation). There are obvious downsides to this approach, including excessive weight (from so much computing hardware), and inefficient processor utilization (since modern processors are capable of performing multiple tasks). There has therefore been a push for integrated modular avionics (IMA), in which common computing platforms can be leveraged for different purposes. This consolidation of multiple vehicle functions to shared computing platforms can significantly reduce spacecraft cost, weight, and design complexity. However, the application of IMA principles introduces significant challenges, as the data network must accommodate traffic of mixed criticality and performance levels - potentially all related to the same shared computer hardware. Because individual network technologies are rarely so competent, the development of truly integrated network architectures often proves unreasonable. Several different types of networks are utilized - each suited to support a specific vehicle function. Critical functions are typically driven by precise timing loops, requiring networks with strict guarantees regarding message latency (i.e. determinism) and fault-tolerance. Alternatively, non-critical systems generally employ data networks prioritizing flexibility and high performance over reliable operation. Switched Ethernet has seen widespread success filling this role in terrestrial applications. Its high speed, flexibility, and the availability of inexpensive commercial off-the-shelf (COTS) components make it desirable for inclusion in spacecraft platforms. Basic Ethernet configurations have been incorporated into several preexisting aerospace projects, including both the Space Shuttle and International Space Station (ISS). However, classical switched Ethernet cannot provide the high level of network

  5. Electrical description of a magnetic pole enhanced inductively coupled plasma source: Refinement of the transformer model by reverse electromagnetic modeling

    International Nuclear Information System (INIS)

    Meziani, T.; Colpo, P.; Rossi, F.

    2006-01-01

    The magnetic pole enhanced inductively coupled source (MaPE-ICP) is an innovative low-pressure plasma source that allows for high plasma density and high plasma uniformity, as well as large-area plasma generation. This article presents an electrical characterization of this source, and the experimental measurements are compared to the results obtained after modeling the source by the equivalent circuit of the transformer. In particular, the method applied consists in performing a reverse electromagnetic modeling of the source by providing the measured plasma parameters such as plasma density and electron temperature as an input, and computing the total impedance seen at the primary of the transformer. The impedance results given by the model are compared to the experimental results. This approach allows for a more comprehensive refinement of the electrical model in order to obtain a better fitting of the results. The electrical characteristics of the system, and in particular the total impedance, were measured at the inductive coil antenna (primary of the transformer). The source was modeled electrically by a finite element method, treating the plasma as a conductive load and taking into account the complex plasma conductivity, the value of which was calculated from the electron density and electron temperature measurements carried out previously. The electrical characterization of the inductive excitation source itself versus frequency showed that the source cannot be treated as purely inductive and that the effect of parasitic capacitances must be taken into account in the model. Finally, considerations on the effect of the magnetic core addition on the capacitive component of the coupling are made

  6. Spacecraft command and control using expert systems

    Science.gov (United States)

    Norcross, Scott; Grieser, William H.

    1994-01-01

    This paper describes a product called the Intelligent Mission Toolkit (IMT), which was created to meet the changing demands of the spacecraft command and control market. IMT is a command and control system built upon an expert system. Its primary functions are to send commands to the spacecraft and process telemetry data received from the spacecraft. It also controls the ground equipment used to support the system, such as encryption gear, and telemetry front-end equipment. Add-on modules allow IMT to control antennas and antenna interface equipment. The design philosophy for IMT is to utilize available commercial products wherever possible. IMT utilizes Gensym's G2 Real-time Expert System as the core of the system. G2 is responsible for overall system control, spacecraft commanding control, and spacecraft telemetry analysis and display. Other commercial products incorporated into IMT include the SYBASE relational database management system and Loral Test and Integration Systems' System 500 for telemetry front-end processing.

  7. Simulation of ultrasonic surface waves with multi-Gaussian and point source beam models

    International Nuclear Information System (INIS)

    Zhao, Xinyu; Schmerr, Lester W. Jr.; Li, Xiongbing; Sedov, Alexander

    2014-01-01

    In the past decade, multi-Gaussian beam models have been developed to solve many complicated bulk wave propagation problems. However, to date those models have not been extended to simulate the generation of Rayleigh waves. Here we will combine Gaussian beams with an explicit high frequency expression for the Rayleigh wave Green function to produce a three-dimensional multi-Gaussian beam model for the fields radiated from an angle beam transducer mounted on a solid wedge. Simulation results obtained with this model are compared to those of a point source model. It is shown that the multi-Gaussian surface wave beam model agrees well with the point source model while being computationally much more efficient

  8. Solving the forward problem in EEG source analysis by spherical and FDM head modeling: a comparative analysis - biomed 2009

    NARCIS (Netherlands)

    Vatta, F.; Meneghini, F.; Esposito, F.; Mininel, S.; Di Salle, F.

    2009-01-01

    Neural source localization techniques based on electroencephalography (EEG) use scalp potential data to infer the location of underlying neural activity. This procedure entails modeling the sources of EEG activity and modeling the head volume conduction process to link the modeled sources to the

  9. Introducing a new open source GIS user interface for the SWAT model

    Science.gov (United States)

    The Soil and Water Assessment Tool (SWAT) model is a robust watershed modelling tool. It typically uses the ArcSWAT interface to create its inputs. ArcSWAT is public domain software which works in the licensed ArcGIS environment. The aim of this paper was to develop an open source user interface ...

  10. Model description for calculating the source term of the Angra 1 environmental control system

    International Nuclear Information System (INIS)

    Oliveira, L.F.S. de; Amaral Neto, J.D.; Salles, M.R.

    1988-01-01

    This work presents the model used for evaluation of the source term released from the Angra 1 Nuclear Power Plant in the case of an accident. An application of the model to the case of a fuel assembly drop accident inside the Fuel Handling Building during reactor refueling is then presented. (author)

  11. Comparative study of surrogate models for groundwater contamination source identification at DNAPL-contaminated sites

    Science.gov (United States)

    Hou, Zeyu; Lu, Wenxi

    2018-05-01

    Knowledge of groundwater contamination sources is critical for effectively protecting groundwater resources, estimating risks, mitigating disaster, and designing remediation strategies. Many methods for groundwater contamination source identification (GCSI) have been developed in recent years, including the simulation-optimization technique. This study proposes utilizing a support vector regression (SVR) model and a kernel extreme learning machine (KELM) model to enrich the content of the surrogate model. The surrogate model was itself key in replacing the simulation model, reducing the huge computational burden of iterations in the simulation-optimization technique to solve GCSI problems, especially in GCSI problems of aquifers contaminated by dense nonaqueous phase liquids (DNAPLs). A comparative study between the Kriging, SVR, and KELM models is reported. Additionally, there is analysis of the influence of parameter optimization and the structure of the training sample dataset on the approximation accuracy of the surrogate model. It was found that the KELM model was the most accurate surrogate model, and its performance was significantly improved after parameter optimization. The approximation accuracy of the surrogate model to the simulation model did not always improve with increasing numbers of training samples. Using the appropriate number of training samples was critical for improving the performance of the surrogate model and avoiding unnecessary computational workload. It was concluded that the KELM model developed in this work could reasonably predict system responses in given operation conditions. Replacing the simulation model with a KELM model considerably reduced the computational burden of the simulation-optimization process and also maintained high computation accuracy.
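
    A compressed sketch of the surrogate idea discussed above: a support vector regression is trained on a handful of runs of an "expensive" simulator (here just a cheap analytic stand-in) and then queried in its place; the function, sample sizes, and hyperparameters are illustrative only.

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(5)

        def expensive_simulator(x):
            # Cheap analytic stand-in for a contaminant-transport simulation:
            # maps two source parameters to a breakthrough concentration.
            return np.sin(x[:, 0]) * np.exp(-0.3 * x[:, 1]) + 0.1 * x[:, 1]

        X_train = rng.uniform(0, 5, size=(60, 2))          # sampled source scenarios
        y_train = expensive_simulator(X_train)

        surrogate = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X_train, y_train)

        X_test = rng.uniform(0, 5, size=(5, 2))
        print(np.round(surrogate.predict(X_test), 3))      # surrogate predictions
        print(np.round(expensive_simulator(X_test), 3))    # "true" simulator values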

  12. An Improved Statistical Point-source Foreground Model for the Epoch of Reionization

    Energy Technology Data Exchange (ETDEWEB)

    Murray, S. G.; Trott, C. M.; Jordan, C. H. [ARC Centre of Excellence for All-sky Astrophysics (CAASTRO) (Australia)

    2017-08-10

    We present a sophisticated statistical point-source foreground model for low-frequency radio Epoch of Reionization (EoR) experiments using the 21 cm neutral hydrogen emission line. Motivated by our understanding of the low-frequency radio sky, we enhance the realism of two model components compared with existing models: the source count distributions as a function of flux density and spatial position (source clustering), extending current formalisms for the foreground covariance of 2D power-spectral modes in 21 cm EoR experiments. The former we generalize to an arbitrarily broken power law, and the latter to an arbitrary isotropically correlated field. This paper presents expressions for the modified covariance under these extensions, and shows that for a more realistic source spatial distribution, extra covariance arises in the EoR window that was previously unaccounted for. Failure to include this contribution can yield bias in the final power spectrum and underestimate uncertainties, potentially leading to a false detection of signal. The extent of this effect is uncertain, owing to ignorance of physical model parameters, but we show that it is dependent on the relative abundance of faint sources, to the effect that our extension will become more important for future deep surveys. Finally, we show that under some parameter choices, ignoring source clustering can lead to false detections on large scales, due to both the induced bias and an artificial reduction in the estimated measurement uncertainty.
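    As a rough illustration of the first extension (a source-count distribution dN/dS following an arbitrarily broken power law), the sketch below draws flux densities from a two-segment broken power law by inverse-transform sampling on a numerically integrated CDF. The break flux, slopes, and flux limits are invented for the example and are not the parameters of the model presented in the paper.

    # Hedged sketch: draw point-source flux densities S from a broken power-law
    # differential source count dN/dS, via a numerical inverse-CDF.
    # Slopes, break, and flux limits are illustrative assumptions only.
    import numpy as np

    def broken_power_law_counts(S, S_break=1.0, alpha1=-1.6, alpha2=-2.5):
        """Differential source counts dN/dS with a single break at S_break."""
        return np.where(S < S_break,
                        (S / S_break) ** alpha1,
                        (S / S_break) ** alpha2)

    S_min, S_max, n_grid = 1e-3, 10.0, 4000
    S_grid = np.logspace(np.log10(S_min), np.log10(S_max), n_grid)
    dnds = broken_power_law_counts(S_grid)

    # Numerically build the CDF of the (unnormalized) counts and invert it
    cdf = np.cumsum(0.5 * (dnds[1:] + dnds[:-1]) * np.diff(S_grid))
    cdf = np.concatenate([[0.0], cdf])
    cdf /= cdf[-1]

    rng = np.random.default_rng(42)
    u = rng.uniform(size=100_000)
    S_samples = np.interp(u, cdf, S_grid)   # inverse-transform sampling

    print("median flux:", np.median(S_samples))
    print("fraction of faint sources (S < S_break):", np.mean(S_samples < 1.0))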

  14. Variability of dynamic source parameters inferred from kinematic models of past earthquakes

    KAUST Repository

    Causse, M.

    2013-12-24

    We analyse the scaling and distribution of average dynamic source properties (fracture energy, static, dynamic and apparent stress drops) using 31 kinematic inversion models from 21 crustal earthquakes. Shear-stress histories are computed by solving the elastodynamic equations while imposing the slip velocity of a kinematic source model as a boundary condition on the fault plane. This is achieved using a 3-D finite difference method in which the rupture kinematics are modelled with the staggered-grid-split-node fault representation method of Dalguer & Day. Dynamic parameters are then estimated from the calculated stress-slip curves and averaged over the fault plane. Our results indicate that fracture energy, static, dynamic and apparent stress drops tend to increase with magnitude. The epistemic uncertainty due to uncertainties in kinematic inversions remains small (ϕ ∼ 0.1 in log10 units), showing that kinematic source models provide robust information to analyse the distribution of average dynamic source parameters. The proposed scaling relations may be useful to constrain friction law parameters in spontaneous dynamic rupture calculations for earthquake source studies, and physics-based near-source ground-motion prediction for seismic hazard and risk mitigation.

  15. Medical Significance of Microorganisms in Spacecraft Environment

    Science.gov (United States)

    Pierson, Duane L.; Ott, C. Mark

    2007-01-01

    Microorganisms can spoil food supplies, contaminate drinking water, release noxious volatile compounds, initiate allergic responses, contaminate the environment, and cause infectious diseases. International acceptability limits have been established for bacterial and fungal contaminants in air and on surfaces, and environmental monitoring is conducted to ensure compliance. Allowable levels of microorganisms in water and food have also been established. Environmental monitoring of the space shuttle, the Mir, and the ISS has allowed for some general conclusions. Generally, the bacteria found in air and on interior surfaces are largely of human origin, such as Staphylococcus spp. and Micrococcus spp. Common environmental genera such as Bacillus spp. are the most commonly isolated bacteria from all spacecraft. Yeast species associated with humans, such as Candida spp., are commonly found. Aspergillus spp., Penicillium spp., and Cladosporium spp. are the most commonly isolated filamentous fungi. Microbial levels in the environment differ significantly depending upon humidity levels, condensate accumulation, and availability of carbon sources. However, the human "normal flora" of bacteria and fungi can cause serious, life-threatening diseases if human immunity is compromised. Disease incidence is expected to increase as mission duration increases.

  16. Double point source W-phase inversion: Real-time implementation and automated model selection

    Science.gov (United States)

    Nealy, Jennifer; Hayes, Gavin

    2015-01-01

    Rapid and accurate characterization of an earthquake source is an extremely important and ever evolving field of research. Within this field, source inversion of the W-phase has recently been shown to be an effective technique, which can be efficiently implemented in real-time. An extension to the W-phase source inversion is presented in which two point sources are derived to better characterize complex earthquakes. A single source inversion followed by a double point source inversion with centroid locations fixed at the single source solution location can be efficiently run as part of earthquake monitoring network operational procedures. In order to determine the most appropriate solution, i.e., whether an earthquake is most appropriately described by a single source or a double source, an Akaike information criterion (AIC) test is performed. Analyses of all earthquakes of magnitude 7.5 and greater occurring since January 2000 were performed with extended analyses of the September 29, 2009 magnitude 8.1 Samoa earthquake and the April 19, 2014 magnitude 7.5 Papua New Guinea earthquake. The AIC test is shown to be able to accurately select the most appropriate model and the selected W-phase inversion is shown to yield reliable solutions that match published analyses of the same events.
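    The AIC-based choice between a one- and two-source description can be illustrated with a toy least-squares example: two nested models are fit to a synthetic "waveform", and the one with the lower AIC is retained. The Gaussian-pulse parameterization and the least-squares AIC form below are illustrative assumptions, not the actual W-phase inversion.

    # Hedged sketch: choose between single- and double-source fits with AIC.
    # A Gaussian pulse stands in for a point-source contribution; this is a
    # toy least-squares analogy, not the actual W-phase inversion.
    import numpy as np
    from scipy.optimize import curve_fit

    def one_source(t, a1, t1, w1):
        return a1 * np.exp(-((t - t1) / w1) ** 2)

    def two_sources(t, a1, t1, w1, a2, t2, w2):
        return one_source(t, a1, t1, w1) + one_source(t, a2, t2, w2)

    def aic_least_squares(residuals, n_params):
        """AIC for a least-squares fit (Gaussian errors, constant variance)."""
        n = residuals.size
        rss = np.sum(residuals ** 2)
        return n * np.log(rss / n) + 2 * n_params

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 100.0, 500)
    # Synthetic "complex earthquake": two sub-events plus noise
    data = (two_sources(t, 1.0, 35.0, 6.0, 0.7, 60.0, 8.0)
            + 0.05 * rng.standard_normal(t.size))

    p1, _ = curve_fit(one_source, t, data, p0=[1.0, 50.0, 10.0], maxfev=10000)
    p2, _ = curve_fit(two_sources, t, data,
                      p0=[1.0, 30.0, 5.0, 0.5, 65.0, 5.0], maxfev=10000)

    aic1 = aic_least_squares(data - one_source(t, *p1), len(p1))
    aic2 = aic_least_squares(data - two_sources(t, *p2), len(p2))
    print("single-source AIC:", round(aic1, 1))
    print("double-source AIC:", round(aic2, 1))
    print("preferred model:", "double" if aic2 < aic1 else "single")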

  17. Relating MBSE to Spacecraft Development: A NASA Pathfinder

    Science.gov (United States)

    Othon, Bill

    2016-01-01

    The NASA Engineering and Safety Center (NESC) has sponsored a Pathfinder Study to investigate how Model Based Systems Engineering (MBSE) and Model Based Engineering (MBE) techniques can be applied by NASA spacecraft development projects. The objectives of this Pathfinder Study included analyzing both the products of the modeling activity, as well as the process and tool chain through which the spacecraft design activities are executed. Several aspects of MBSE methodology and process were explored. Adoption and consistent use of the MBSE methodology within an existing development environment can be difficult. The Pathfinder Team evaluated the possibility that an "MBSE Template" could be developed as both a teaching tool and a baseline that future NASA projects could leverage. Elements of this template include spacecraft system component libraries, data dictionaries and ontology specifications, as well as software services that do work on the models themselves. The Pathfinder Study also evaluated the tool chain aspects of development. Two chains were considered: 1. The Development tool chain, through which SysML model development was performed and controlled, and 2. The Analysis tool chain, through which both static and dynamic system analysis is performed. Of particular interest was the ability to exchange data between SysML and other engineering tools such as CAD and Dynamic Simulation tools. For this study, the team selected a Mars Lander vehicle as the element to be designed. The paper will discuss what system models were developed, how data was captured and exchanged, and what analyses were conducted.

  18. Application of Modern Fortran to Spacecraft Trajectory Design and Optimization

    Science.gov (United States)

    Williams, Jacob; Falck, Robert D.; Beekman, Izaak B.

    2018-01-01

    In this paper, applications of the modern Fortran programming language to the field of spacecraft trajectory optimization and design are examined. Modern object-oriented Fortran has many advantages for scientific programming, although many legacy Fortran aerospace codes have not been upgraded to use the newer standards (or have been rewritten in other languages perceived to be more modern). NASA's Copernicus spacecraft trajectory optimization program, originally a combination of Fortran 77 and Fortran 95, has attempted to keep up with modern standards and makes significant use of the new language features. Various algorithms and methods are presented from trajectory tools such as Copernicus, as well as modern Fortran open source libraries and other projects.

  19. A study of Schwarz converters for nuclear powered spacecraft

    Science.gov (United States)

    Stuart, Thomas A.; Schwarze, Gene E.

    1987-01-01

    High-power space systems that use low-voltage, high-current dc sources, such as thermoelectric generators, will most likely require high-voltage conversion for transmission purposes. This study considers the Schwarz resonant converter as the basic building block to accomplish this low-to-high voltage conversion for either a dc or an ac spacecraft bus. The Schwarz converter has the important assets of both inherent fault tolerance and resonant operation; parallel operation in modular form is possible. A regulated dc spacecraft bus requires only a single-stage converter, while a constant-frequency ac bus requires a cascaded Schwarz converter configuration. If the power system requires constant output power from the dc generator, then a second converter is required to route unneeded power to a ballast load.

  20. An open source Bayesian Monte Carlo isotope mixing model with applications in Earth surface processes

    Science.gov (United States)

    Arendt, Carli A.; Aciego, Sarah M.; Hetland, Eric A.

    2015-05-01

    The implementation of isotopic tracers as constraints on source contributions has become increasingly relevant to understanding Earth surface processes. Interpretation of these isotopic tracers has become more accessible with the development of Bayesian Monte Carlo (BMC) mixing models, which allow for uncertainty in the mixing end-members and provide methodology for systems with multicomponent mixing. This study presents an open source multiple-isotope BMC mixing model that is applicable to Earth surface environments with sources exhibiting distinct end-member isotopic signatures. Our model is first applied to new δ18O and δD measurements from the Athabasca Glacier, which show the expected seasonal melt evolution trends, and the statistical relevance of the resulting fraction estimates is rigorously assessed. To highlight the broad applicability of our model to a variety of Earth surface environments and relevant isotopic systems, we expand our model to two additional case studies: deriving melt sources from δ18O, δD, and 222Rn measurements of Greenland Ice Sheet bulk water samples, and assessing nutrient sources from ɛNd and 87Sr/86Sr measurements of Hawaiian soil cores. The model produces results for the Greenland Ice Sheet and Hawaiian soil data sets that are consistent with the originally published fractional contribution estimates. The advantage of this method is that it quantifies the error induced by variability in the end-member compositions, which was not captured by the models previously applied to the above case studies. Results from all three case studies demonstrate the broad applicability of this statistical BMC isotopic mixing model for estimating source contribution fractions in a variety of Earth surface systems.
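    A minimal sketch of the Bayesian Monte Carlo mixing idea is given below: candidate mixing fractions are drawn from a uniform (Dirichlet) prior, end-member isotopic compositions are perturbed within their stated uncertainties, and candidates that reproduce the observed mixture values are retained to build posterior fraction distributions. The two-end-member δ18O/δD values, uncertainties, and the simple ABC-style acceptance rule are illustrative assumptions, not the Athabasca Glacier data or the exact likelihood of the published model.

    # Hedged sketch: Bayesian Monte Carlo mixing for two end-members and two
    # tracers (d18O, dD). All values and tolerances are illustrative only.
    import numpy as np

    rng = np.random.default_rng(7)

    # End-member means and 1-sigma uncertainties (hypothetical):
    # rows = sources, columns = tracers (d18O, dD)
    em_mean = np.array([[-22.0, -165.0],   # e.g. snow melt
                        [-18.0, -135.0]])  # e.g. ice melt
    em_sd = np.array([[0.5, 4.0],
                      [0.5, 4.0]])

    observed = np.array([-19.5, -146.0])   # hypothetical mixture sample
    tolerance = np.array([0.4, 3.0])       # acceptance window per tracer

    n_draws = 50_000
    accepted = []
    for _ in range(n_draws):
        f = rng.dirichlet(np.ones(2))      # prior on mixing fractions
        em = rng.normal(em_mean, em_sd)    # perturbed end-member compositions
        mixture = f @ em                   # predicted tracer values
        if np.all(np.abs(mixture - observed) < tolerance):
            accepted.append(f[0])

    accepted = np.array(accepted)
    print(f"accepted draws: {accepted.size}")
    print(f"fraction of source 1: {accepted.mean():.2f} +/- {accepted.std():.2f}")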

  1. The continental source of glyoxal estimated by the synergistic use of spaceborne measurements and inverse modelling

    Directory of Open Access Journals (Sweden)

    A. Richter

    2009-11-01

    Tropospheric glyoxal and formaldehyde columns retrieved from the SCIAMACHY satellite instrument in 2005 are used with the IMAGESv2 global chemistry-transport model and its adjoint in a two-compound inversion scheme designed to estimate the continental source of glyoxal. The formaldehyde observations provide an important constraint on the production of glyoxal from isoprene in the model, since the degradation of isoprene constitutes an important source of both glyoxal and formaldehyde. Current modelling studies largely underestimate the observed glyoxal satellite columns, pointing to the existence of an additional land glyoxal source of biogenic origin. We include an extra glyoxal source in the model and explore its possible distribution and magnitude through two inversion experiments. In the first case, the additional source is represented as a direct glyoxal emission, and in the second, as secondary formation through the oxidation of an unspecified glyoxal precursor. Besides this extra source, the inversion scheme optimizes the primary glyoxal and formaldehyde emissions, as well as their secondary production from other identified non-methane volatile organic precursors of anthropogenic, pyrogenic and biogenic origin.

    In the first inversion experiment, the additional direct source, estimated at 36 Tg/yr, represents 38% of the global continental source, whereas the contribution of isoprene is equally important (30%), the remainder being accounted for by anthropogenic (20%) and pyrogenic fluxes. The inversion succeeds in reducing the underestimation of the glyoxal columns by the model, but it leads to a severe overestimation of glyoxal surface concentrations in comparison with in situ measurements. In the second scenario, the inferred total global continental glyoxal source is estimated at 108 Tg/yr, almost two times higher than the global a priori source. The extra secondary source is the largest contribution to the global glyoxal

  2. Investigation of tenuous plasma environment using Active Spacecraft Potential Control (ASPOC) on Magnetospheric Multiscale (MMS) Mission

    Science.gov (United States)

    Nakamura, Rumi; Jeszenszky, Harald; Torkar, Klaus; Andriopoulou, Maria; Fremuth, Gerhard; Taijmar, Martin; Scharlemann, Carsten; Svenes, Knut; Escoubet, Philippe; Prattes, Gustav; Laky, Gunter; Giner, Franz; Hoelzl, Bernhard

    2015-04-01

    NASA's Magnetospheric Multiscale (MMS) mission is planned to be launched on March 12, 2015. The scientific objectives of the MMS mission are to explore and understand the fundamental plasma physics processes of magnetic reconnection, particle acceleration and turbulence in the Earth's magnetosphere. The region of scientific interest of MMS is a tenuous plasma environment where the positive spacecraft potential reaches an equilibrium at several tens of Volts. An Active Spacecraft Potential Control (ASPOC) instrument neutralizes the spacecraft potential by releasing positive charge produced by indium ion emitters. ASPOC thereby reduces the potential in order to improve the electric field and low-energy particle measurements. The method has been successfully applied on other spacecraft such as Cluster and Double Star. Two ASPOC units are present on each of the MMS spacecraft. Each unit contains four ion emitters, whereby one emitter per instrument is operated at a time. ASPOC for MMS includes new developments in the design of the emitters and the electronics, enabling lower spacecraft potentials, higher reliability, and a more uniform potential structure in the spacecraft's sheath compared to previous missions. Model calculations confirm the findings from previous applications that the plasma measurements will not be affected by the beam's space charge. A perfectly stable spacecraft potential precludes the utilization of the spacecraft as a plasma probe, which is a conventional technique used to estimate ambient plasma density from the spacecraft potential. The small residual variations of the potential controlled by ASPOC, however, still allow the ambient plasma density to be determined by comparing two closely separated spacecraft and thereby reconstructing the uncontrolled potential variation from the controlled potential. Regular intercalibration of controlled and uncontrolled potentials is expected to increase the reliability of this new method.

  3. Modelling surface energy fluxes over a Dehesa ecosystem using a two-source energy balance model.

    Science.gov (United States)

    Andreu, Ana; Kustas, William. P.; Anderson, Martha C.; Carrara, Arnaud; Patrocinio Gonzalez-Dugo, Maria

    2013-04-01

    The Dehesa is the most widespread agroforestry land-use system in Europe, covering more than 3 million hectares in the Iberian Peninsula and Greece (Grove and Rackham, 2001; Papanastasis, 2004). It is an agro-silvo-pastoral ecosystem consisting of widely spaced oak trees (mostly Quercus ilex L.) combined with crops, pasture and Mediterranean shrubs, and it is recognized as an example of sustainable land use and for its importance in the rural economy (Diaz et al., 1997; Plieninger and Wilbrand, 2001). The ecosystem is influenced by a Mediterranean climate, with recurrent and severe droughts. Over the last decades the Dehesa has faced multiple environmental threats, derived from intensive agricultural use and socio-economic changes, which have caused environmental degradation of the area, namely reduction in tree density and stocking rates, changes in soil properties and hydrological processes, and an increase in soil erosion (Coelho et al. 2004; Schnabel and Ferreira, 2004; Montoya 1998; Pulido and Díaz, 2005). Understanding the hydrological, atmospheric and physiological processes that affect the functioning of the ecosystem will improve the management and conservation of the Dehesa. One of the key metrics in assessing ecosystem health, particularly in this water-limited environment, is the capability of monitoring evapotranspiration (ET). Making large-area assessments requires the use of remote sensing. Thermal-based energy balance techniques that distinguish soil/substrate and vegetation contributions to the radiative temperature and radiation/turbulent fluxes have proven to be reliable in such semi-arid sparse canopy-cover landscapes. In particular, the two-source energy balance (TSEB) model of Norman et al. (1995) and Kustas and Norman (1999) has been shown to be robust for a wide range of partially vegetated landscapes. The TSEB formulation is evaluated at a flux tower site located in central Spain (Majadas del Tietar, Caceres). Its application in this environment is

  4. Evaluation of the Agricultural Non-point Source Pollution in Chongqing Based on PSR Model

    Institute of Scientific and Technical Information of China (English)

    Hanwen ZHANG; Xinli MOU; Hui XIE; Hong LU; Xingyun YAN

    2014-01-01

    Based on the pressure-state-response (PSR) framework model and on the specific agro-environmental issues present in Chongqing, we build an agricultural non-point source pollution evaluation index system suited to Chongqing. The system covers agricultural system pressure, agro-environmental state and human response in three major categories, and comprises 3 criterion-level indicators and 19 individual indicators. The analysis shows that pressure and response tend to increase and decrease roughly linearly, whereas state and the composite index show large and similar fluctuations, mainly due to the elimination of pressures and impacts, which increases the impact of agricultural non-point source pollution.

  5. Sensitivity of the coastal tsunami simulation to the complexity of the 2011 Tohoku earthquake source model

    Science.gov (United States)

    Monnier, Angélique; Loevenbruck, Anne; Gailler, Audrey; Hébert, Hélène

    2016-04-01

    The 11 March 2011 Tohoku-Oki event, both the earthquake and the tsunami, is exceptionally well documented. A wide range of onshore and offshore data has been recorded by seismic, geodetic, ocean-bottom pressure and sea level sensors. Along with these numerous observations, advances in inversion techniques and computing facilities have led to many source studies. Inversion of rupture parameters such as the slip distribution and rupture history makes it possible to estimate the complex coseismic seafloor deformation. The most relevant coseismic source models from the numerous published seismic source studies are tested. Comparing the signals predicted using both static and kinematic ruptures with the offshore and coastal measurements helps determine which source model should be used to obtain the most consistent coastal tsunami simulations. This work is funded by the TANDEM project, reference ANR-11-RSNR-0023-01 of the French Programme Investissements d'Avenir (PIA 2014-2018).

  6. Neutron activation analysis: Modelling studies to improve the neutron flux of Americium-Beryllium source

    Energy Technology Data Exchange (ETDEWEB)

    Didi, Abdessamad; Dadouch, Ahmed; Tajmouati, Jaouad; Bekkouri, Hassane [Advanced Technology and Integration System, Dept. of Physics, Faculty of Science Dhar Mehraz, University Sidi Mohamed Ben Abdellah, Fez (Morocco); Jai, Otman [Laboratory of Radiation and Nuclear Systems, Dept. of Physics, Faculty of Sciences, Tetouan (Morocco)

    2017-06-15

    Americium–beryllium (Am-Be; n, γ) is a neutron emitting source used in various research fields such as chemistry, physics, geology, archaeology, medicine, and environmental monitoring, as well as in the forensic sciences. It is a mobile source of neutron activity (20 Ci), yielding a small thermal neutron flux that is water moderated. The aim of this study is to develop a model to increase the thermal neutron flux of a source such as Am-Be. The study offers multiple advantages: primarily, it will help us perform neutron activation analysis, and it will also give us the opportunity to produce radio-elements with short half-lives. Single-source and multisource (5 sources) Am-Be experiments were performed within an irradiation facility with a paraffin moderator. The resulting models mainly increase the thermal neutron flux compared to the traditional method with a water moderator.

  7. An Equivalent Source Method for Modelling the Lithospheric Magnetic Field Using Satellite and Airborne Magnetic Data

    DEFF Research Database (Denmark)

    Kother, Livia Kathleen; Hammer, Magnus Danel; Finlay, Chris

    We present a technique for modelling the lithospheric magnetic field based on estimation of equivalent potential field sources. As a first demonstration we present an application to magnetic field measurements made by the CHAMP satellite during the period 2009-2010. Three component vector field ... for the remaining lithospheric magnetic field consists of magnetic point sources (monopoles) arranged in an icosahedron grid with an increasing grid resolution towards the airborne survey area. The corresponding source values are estimated using an iteratively reweighted least squares algorithm that includes model ... Advantages of the equivalent source method include its local nature and the ease of transforming to spherical harmonics when needed. The method can also be applied in local, high resolution investigations of the lithospheric magnetic field, for example where suitable aeromagnetic data is available.
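    The iteratively reweighted least squares step mentioned above can be sketched as follows: monopole source amplitudes m are estimated from linear observations d = G m by repeatedly solving a weighted least-squares problem whose weights damp outlying residuals (an L1-like robust norm). The random geometry matrix and the simple residual weighting below are illustrative assumptions, not the actual CHAMP data kernel or the authors' exact weighting scheme.

    # Hedged sketch: iteratively reweighted least squares (IRLS) for estimating
    # equivalent-source (monopole) amplitudes from linear observations d = G m.
    # The geometry matrix and robust weights are illustrative assumptions only.
    import numpy as np

    rng = np.random.default_rng(3)
    n_obs, n_sources = 400, 50

    G = rng.normal(size=(n_obs, n_sources))   # stand-in for the field kernel
    m_true = rng.normal(size=n_sources)
    d = G @ m_true + 0.05 * rng.standard_normal(n_obs)
    d[::40] += 2.0                            # a few outliers in the data

    def irls(G, d, n_iter=20, eps=1e-4):
        """Robust (L1-like) solution of G m = d via iteratively reweighted LS."""
        m = np.linalg.lstsq(G, d, rcond=None)[0]   # ordinary LS starting model
        for _ in range(n_iter):
            r = d - G @ m
            w = 1.0 / np.maximum(np.abs(r), eps)   # downweight large residuals
            sqrt_w = np.sqrt(w)
            m = np.linalg.lstsq(sqrt_w[:, None] * G, sqrt_w * d, rcond=None)[0]
        return m

    m_est = irls(G, d)
    print("rms model error (IRLS):", np.sqrt(np.mean((m_est - m_true) ** 2)))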

  8. Differences in directional sound source behavior and perception between assorted computer room models

    DEFF Research Database (Denmark)

    Vigeant, Michelle C.; Wang, Lily M.; Rindel, Jens Holger

    2004-01-01

    Source directivity is an important input variable when using room acoustic computer modeling programs to generate auralizations. Previous research has shown that using a multichannel anechoic recording can produce a more natural sounding auralization, particularly as the number of channels ... considering reverberation time. However, for the three other parameters evaluated (sound pressure level, clarity index and lateral fraction), the changing diffusivity of the room does not diminish the importance of the directivity. The study therefore shows the importance of considering source directivity ...

  9. Source model for the Copahue volcano magma plumbing system constrained by InSAR surface deformation observations

    Science.gov (United States)

    Lundgren, P.; Nikkhoo, M.; Samsonov, S. V.; Milillo, P.; Gil-Cruz, F., Sr.; Lazo, J.

    2017-12-01

    Copahue volcano, straddling the edge of the Agrio-Caviahue caldera along the Chile-Argentina border in the southern Andes, has been in unrest since inflation began in late 2011. We constrain Copahue's source models with satellite and airborne interferometric synthetic aperture radar (InSAR) deformation observations. InSAR time series from descending track RADARSAT-2 and COSMO-SkyMed data span the entire inflation period from 2011 to 2016, with their initially high rates of 12 and 15 cm/yr, respectively, slowing only slightly despite ongoing small eruptions through 2016. InSAR ascending and descending track time series for the 2013-2016 time period constrain a two-source compound dislocation model, with a rate of volume increase of 13 × 10⁶ m³/yr. They consist of a shallow, near-vertical, elongated source centered at 2.5 km beneath the summit and a deeper, shallowly plunging source centered at 7 km depth connecting the shallow source to the deeper caldera. The deeper source is located directly beneath the volcano-tectonic seismicity, with the lower bounds of the seismicity parallel to the plunge of the deep source. InSAR time series also show normal fault offsets on the NE flank Copahue faults. Coulomb stress change calculations for right-lateral strike slip (RLSS), thrust, and normal receiver faults show positive values in the north caldera for both RLSS and normal faults, suggesting that northward trending seismicity and Copahue fault motion within the caldera are caused by the modeled sources. Together, the InSAR-constrained source model and the seismicity suggest a deep conduit or transfer zone where magma moves from the central caldera to Copahue's upper edifice.
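    The sign convention behind the Coulomb stress statements above can be illustrated with a short, hypothetical calculation: the Coulomb failure stress change on a receiver fault is ΔCFS = Δτ + μ'Δσn, where Δτ is the shear stress change in the slip direction, Δσn the normal stress change (unclamping positive), and μ' an effective friction coefficient. The stress-change numbers and the value μ' = 0.4 below are invented for illustration and are not results from the study.

    # Hedged sketch: Coulomb failure stress change on a receiver fault,
    # dCFS = d_shear + mu_eff * d_normal (normal stress change positive for
    # unclamping). Stress-change values are invented, not model output.
    def coulomb_stress_change(d_shear_mpa, d_normal_mpa, mu_eff=0.4):
        """Positive values promote failure on the receiver fault."""
        return d_shear_mpa + mu_eff * d_normal_mpa

    # Example receiver faults with hypothetical stress changes resolved on them
    receivers = {
        "right-lateral strike slip": (0.05, 0.02),
        "thrust":                    (-0.03, 0.01),
        "normal":                    (0.04, 0.06),
    }
    for name, (d_tau, d_sigma_n) in receivers.items():
        print(f"{name}: dCFS = {coulomb_stress_change(d_tau, d_sigma_n):+.3f} MPa")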

  10. A simplified model of the source channel of the Leksell GammaKnife tested with PENELOPE

    OpenAIRE

    Al-Dweri, Feras M. O.; Lallena, Antonio M.; Vilches, Manuel

    2004-01-01

    Monte Carlo simulations using the code PENELOPE have been performed to test a simplified model of the source channel geometry of the Leksell GammaKnife®. The characteristics of the radiation passing through the treatment helmets are analysed in detail. We have found that only primary particles emitted from the source with polar angles smaller than 3° with respect to the beam axis are relevant for the dosimetry of the Gamma Knife. The photon trajectories reaching the out...

  11. Modeled Sources, Transport, and Accumulation of Dissolved Solids in Water Resources of the Southwestern United States.

    Science.gov (United States)

    Anning, David W

    2011-10-01

    Information on important source areas for dissolved solids in streams of the southwestern United States, the relative share of deliveries of dissolved solids to streams from natural and human sources, and the potential for salt accumulation in soil or groundwater was developed using a SPAtially Referenced Regressions On Watershed attributes model. Predicted area-normalized reach-catchment delivery rates of dissolved solids to streams ranged from ... Salton Sea accounting unit.

  12. Quantification of source-term profiles from near-field geochemical models

    International Nuclear Information System (INIS)

    McKinley, I.G.

    1985-01-01

    A geochemical model of the near-field is described which quantitatively treats the processes of engineered barrier degradation, buffering of aqueous chemistry by solid phases, nuclide solubilization and transport through the near-field and release to the far-field. The radionuclide source-terms derived from this model are compared with those from a simpler model used for repository safety analysis. 10 refs., 2 figs., 2 tabs

  13. Certification of model spectrometric alpha sources (MSAS) and problems of the MSAS system improvement

    International Nuclear Information System (INIS)

    Belyatskij, A.F.; Gejdel'man, A.M.; Egorov, Yu.S.; Nedovesov, V.G.; Chechev, V.P.

    1984-01-01

    Results of the certification of industrially produced standard spectrometric alpha sources (SSAS) are presented. Methods for certification by the main radiation-physics parameters - the intrinsic half-width of the α-lines, the activity of the radionuclides in the source, the energies of the α-particles emitted by the sources, and the relative intensities of the different-energy α-particle groups - are analysed. For the improvement of the SSAS system, a set of model measures for α-radiation and a collection of interconnected data units on the physical, engineering and design characteristics of SSAS, on the methods for obtaining and determining them, and on the instruments used, are considered.

  14. Assessing the contribution of binaural cues for apparent source width perception via a functional model

    DEFF Research Database (Denmark)

    Käsbach, Johannes; Hahmann, Manuel; May, Tobias

    2016-01-01

    In echoic conditions, sound sources are not perceived as point sources but appear to be expanded. The expansion in the horizontal dimension is referred to as apparent source width (ASW). To elicit this perception, the auditory system has access to fluctuations of binaural cues, the interaural time ... a statistical representation of ITDs and ILDs based on percentiles integrated over time and frequency. The model's performance was evaluated against psychoacoustic data obtained with noise, speech and music signals in loudspeaker-based experiments. A robust model prediction of ASW was achieved using a cross ...

  15. Rate equation modelling of the optically pumped spin-exchange source

    International Nuclear Information System (INIS)

    Stenger, J.; Rith, K.

    1995-01-01

    Sources of spin-polarized hydrogen or deuterium, polarized via spin exchange with a laser optically pumped alkali metal, can be modelled by rate equations. The rate equations for this type of source, operated either with hydrogen or deuterium, are given explicitly with the intention of providing a useful tool for further source optimization and understanding. Laser optical pumping of the alkali metal, spin-exchange collisions of hydrogen or deuterium atoms with each other and with alkali metal atoms, and depolarization due to flow and wall collisions are included. (orig.)
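    A minimal sketch of such a rate-equation model is given below: two coupled ODEs track the alkali and hydrogen polarizations, with terms for optical pumping, alkali-hydrogen spin-exchange collisions, and relaxation by wall collisions and flow. The two-variable form and all rate constants are illustrative assumptions, not the full equation set of the source model described in the paper.

    # Hedged sketch: coupled rate equations for alkali (P_A) and hydrogen (P_H)
    # polarization in an optically pumped spin-exchange source. The two-variable
    # form and all rate constants are illustrative assumptions only.
    from scipy.integrate import solve_ivp

    R_op = 1.0e4   # optical pumping rate of the alkali metal [1/s]
    k_AH = 5.0e3   # alkali -> hydrogen spin-exchange transfer rate [1/s]
    k_HA = 5.0e2   # hydrogen -> alkali back-transfer rate [1/s]
    G_A = 1.0e3    # alkali relaxation (walls, spin destruction) [1/s]
    G_H = 1.0e2    # hydrogen relaxation (walls, flow out of the cell) [1/s]

    def rates(t, y):
        P_A, P_H = y
        dP_A = R_op * (1.0 - P_A) - k_HA * (P_A - P_H) - G_A * P_A
        dP_H = k_AH * (P_A - P_H) - G_H * P_H
        return [dP_A, dP_H]

    # Integrate from unpolarized initial conditions to (near) steady state
    sol = solve_ivp(rates, (0.0, 0.05), [0.0, 0.0], max_step=1e-5)
    print(f"steady-state alkali polarization:   {sol.y[0, -1]:.3f}")
    print(f"steady-state hydrogen polarization: {sol.y[1, -1]:.3f}")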

  16. Application of hierarchical Bayesian unmixing models in river sediment source apportionment

    Science.gov (United States)

    Blake, Will; Smith, Hugh; Navas, Ana; Bodé, Samuel; Goddard, Rupert; Zou Kuzyk, Zou; Lennard, Amy; Lobb, David; Owens, Phil; Palazon, Leticia; Petticrew, Ellen; Gaspar, Leticia; Stock, Brian; Boeckx, Pacsal; Semmens, Brice

    2016-04-01

    Fingerprinting and unmixing concepts are used widely across environmental disciplines for forensic evaluation of pollutant sources. In aquatic and marine systems, this includes tracking the source of organic and inorganic pollutants in water and linking problem sediment to soil erosion and land use sources. It is, however, the particular complexity of ecological systems that has driven creation of the most sophisticated mixing models, primarily to (i) evaluate diet composition in complex ecological food webs, (ii) inform population structure and (iii) explore animal movement. In the context of the new hierarchical Bayesian unmixing model, MixSIAR, developed to characterise intra-population niche variation in ecological systems, we evaluate the linkage between ecological 'prey' and 'consumer' concepts and river basin sediment 'source' and sediment 'mixtures' to exemplify the value of ecological modelling tools to river basin science. Recent studies have outlined advantages presented by Bayesian unmixing approaches in handling complex source and mixture datasets while dealing appropriately with uncertainty in parameter probability distributions. MixSIAR is unique in that it allows individual fixed and random effects associated with mixture hierarchy, i.e. factors that might exert an influence on model outcome for mixture groups, to be explored within the source-receptor framework. This offers new and powerful ways of interpreting river basin apportionment data. In this contribution, key components of the model are evaluated in the context of common experimental designs for sediment fingerprinting studies, namely simple, nested and distributed catchment sampling programmes. Illustrative examples using geochemical and compound-specific stable isotope datasets are presented and used to discuss best practice with specific attention to (1) the tracer selection process, (2) incorporation of fixed effects relating to sample timeframe and sediment type in the modelling

  17. SPARROW models used to understand nutrient sources in the Mississippi/Atchafalaya River Basin

    Science.gov (United States)

    Robertson, Dale M.; Saad, David A.

    2013-01-01

    Nitrogen (N) and phosphorus (P) loading from the Mississippi/Atchafalaya River Basin (MARB) has been linked to hypoxia in the Gulf of Mexico. To describe where and from what sources those loads originate, SPAtially Referenced Regression On Watershed attributes (SPARROW) models were constructed for the MARB using geospatial datasets for 2002, including inputs from wastewater treatment plants (WWTPs), and calibration sites throughout the MARB. Previous studies found that highest N and P yields were from the north-central part of the MARB (Corn Belt). Based on the MARB SPARROW models, high