WorldWideScience

Sample records for nasa high-end computing

  1. High-End Computing Challenges in Aerospace Design and Engineering

    Science.gov (United States)

    Bailey, F. Ronald

    2004-01-01

    High-End Computing (HEC) has had a significant impact on aerospace design and engineering and is poised to have an even greater impact in the future. In this paper we describe four aerospace design and engineering challenges: Digital Flight, Launch Simulation, Rocket Fuel System and Digital Astronaut. The paper discusses the modeling capabilities needed for each challenge and presents projections of near- and far-term HEC computing requirements. NASA's HEC Project Columbia is described, and the programming strategies necessary to achieve high sustained performance are presented.

  2. High End Computing Technologies for Earth Science Applications: Trends, Challenges, and Innovations

    Science.gov (United States)

    Parks, John (Technical Monitor); Biswas, Rupak; Yan, Jerry C.; Brooks, Walter F.; Sterling, Thomas L.

    2003-01-01

    Earth science applications of the future will stress the capabilities of even the highest performance supercomputers in the areas of raw compute power, mass storage management, and software environments. These NASA mission critical problems demand usable multi-petaflops and exabyte-scale systems to fully realize their science goals. With an exciting vision of the technologies needed, NASA has established a comprehensive program of advanced research in computer architecture, software tools, and device technology to ensure that, in partnership with US industry, it can meet these demanding requirements with reliable, cost effective, and usable ultra-scale systems. NASA will exploit, explore, and influence emerging high end computing architectures and technologies to accelerate the next generation of engineering, operations, and discovery processes for NASA Enterprises. This article captures this vision and describes the concepts, accomplishments, and the potential payoff of the key thrusts that will help meet the computational challenges in Earth science applications.

  3. High-End Scientific Computing

    Science.gov (United States)

    EPA uses high-end scientific computing, geospatial services, and remote sensing/imagery analysis to support EPA's mission. The Center for Environmental Computing (CEC) assists the Agency's program offices and regions in meeting staff needs in these areas.

  4. CRITICAL ISSUES IN HIGH END COMPUTING - FINAL REPORT

    Energy Technology Data Exchange (ETDEWEB)

    Corones, James [Krell Institute]

    2013-09-23

    High-End Computing (HEC) has been a driver for advances in science and engineering for the past four decades. Increasingly, HEC has become a significant element in the national security, economic vitality, and competitiveness of the United States. Advances in HEC provide results that cut across traditional disciplinary and organizational boundaries. This program provides opportunities to share information about HEC systems and computational techniques across multiple disciplines and organizations, through conferences and exhibitions of HEC advances held in Washington, DC, so that mission agency staff, scientists, and industry can come together with White House, Congressional, and Legislative staff in an environment conducive to the sharing of technical information, accomplishments, goals, and plans. A common thread across this series of conferences is the understanding of computational science and applied mathematics techniques across a diverse set of application areas of interest to the Nation. The specific objectives of this program are: Program Objective 1. To provide opportunities to share information about advances in high-end computing systems and computational techniques between mission critical agencies, agency laboratories, academics, and industry. Program Objective 2. To gather pertinent data and address specific topics of wide interest to mission critical agencies. Program Objective 3. To promote a continuing discussion of critical issues in high-end computing. Program Objective 4. To provide a venue where a multidisciplinary scientific audience can discuss the difficulties of applying computational science techniques to specific problems and can specify future research that, if successful, will eliminate these problems.

  5. NASA Advanced Supercomputing Facility Expansion

    Science.gov (United States)

    Thigpen, William W.

    2017-01-01

    The NASA Advanced Supercomputing (NAS) Division enables advances in high-end computing technologies and in modeling and simulation methods to tackle some of the toughest science and engineering challenges facing NASA today. The name "NAS" has long been associated with leadership and innovation throughout the high-end computing (HEC) community. We play a significant role in shaping HEC standards and paradigms, and provide leadership in the areas of large-scale InfiniBand fabrics, Lustre open-source filesystems, and hyperwall technologies. We provide an integrated high-end computing environment to accelerate NASA missions and make revolutionary advances in science. Pleiades, a petaflop-scale supercomputer, is used by scientists throughout the U.S. to support NASA missions, and is ranked among the most powerful systems in the world. One of our key focus areas is in modeling and simulation to support NASA's real-world engineering applications and make fundamental advances in modeling and simulation methods.

  6. NASA's computer science research program

    Science.gov (United States)

    Larsen, R. L.

    1983-01-01

    Following a major assessment of NASA's computing technology needs, a new program of computer science research has been initiated by the Agency. The program includes work in concurrent processing, management of large scale scientific databases, software engineering, reliable computing, and artificial intelligence. The program is driven by applications requirements in computational fluid dynamics, image processing, sensor data management, real-time mission control and autonomous systems. It consists of university research, in-house NASA research, and NASA's Research Institute for Advanced Computer Science (RIACS) and Institute for Computer Applications in Science and Engineering (ICASE). The overall goal is to provide the technical foundation within NASA to exploit advancing computing technology in aerospace applications.

  7. Human and Robotic Space Mission Use Cases for High-Performance Spaceflight Computing

    Science.gov (United States)

    Some, Raphael; Doyle, Richard; Bergman, Larry; Whitaker, William; Powell, Wesley; Johnson, Michael; Goforth, Montgomery; Lowry, Michael

    2013-01-01

    Spaceflight computing is a key resource in NASA space missions and a core determining factor of spacecraft capability, with ripple effects throughout the spacecraft, end-to-end system, and mission. Onboard computing can be aptly viewed as a "technology multiplier" in that advances provide direct dramatic improvements in flight functions and capabilities across the NASA mission classes, and enable new flight capabilities and mission scenarios, increasing science and exploration return. Space-qualified computing technology, however, has not advanced significantly in well over ten years and the current state of the practice fails to meet the near- to mid-term needs of NASA missions. Recognizing this gap, the NASA Game Changing Development Program (GCDP), under the auspices of the NASA Space Technology Mission Directorate, commissioned a study on space-based computing needs, looking out 15-20 years. The study resulted in a recommendation to pursue high-performance spaceflight computing (HPSC) for next-generation missions, and a decision to partner with the Air Force Research Lab (AFRL) in this development.

  8. The NASA computer science research program plan

    Science.gov (United States)

    1983-01-01

    A taxonomy of computer science is included, and the state of the art of each of the major computer science categories is summarized. A functional breakdown of NASA programs under Aeronautics R and D, space R and T, and institutional support is also included. These areas were assessed against the computer science categories. Concurrent processing, highly reliable computing, and information management are identified.

  9. Educational NASA Computational and Scientific Studies (enCOMPASS)

    Science.gov (United States)

    Memarsadeghi, Nargess

    2013-01-01

    Educational NASA Computational and Scientific Studies (enCOMPASS) is an educational project of NASA Goddard Space Flight Center aimed at bridging the gap between the computational objectives and needs of NASA's scientific research, missions, and projects, and academia's latest advances in applied mathematics and computer science. enCOMPASS achieves this goal via bidirectional collaboration and communication between NASA and academia. Using developed NASA Computational Case Studies in university computer science/engineering and applied mathematics classes is a way of addressing NASA's goals of contributing to the Science, Technology, Engineering, and Math (STEM) National Objective. The enCOMPASS Web site at http://encompass.gsfc.nasa.gov provides additional information. There are currently nine enCOMPASS case studies developed in the areas of earth sciences, planetary sciences, and astrophysics. Some of these case studies have been published in AIP and IEEE's Computing in Science and Engineering magazines. A few university professors have used enCOMPASS case studies in their computational classes and contributed their findings to NASA scientists. In these case studies, after introducing the science area, the specific problem, and related NASA missions, students are first asked to solve a known problem using NASA data and past approaches used and often published in a scientific/research paper. Then, after learning about the NASA application and related computational tools and approaches for solving the proposed problem, students are given a harder problem as a challenge to research and develop solutions for. This project provides a model for NASA scientists and engineers on one side, and university students, faculty, and researchers in computer science and applied mathematics on the other side, to learn from each other's areas of work, computational needs and solutions, and the latest advances in research and development. This innovation takes NASA science and

  10. NASA's Heliophysics Theory Program - Accomplishments in Life Cycle Ending 2011

    Science.gov (United States)

    Grebowsky, J.

    2011-01-01

    NASA's Heliophysics Theory Program (HTP) is now into a new triennial cycle of funded research, with new research awards beginning in 2011. The theory program was established by the (former) Solar Terrestrial Division in 1980 to redress a weakness of support in the theory area. It has been a successful, evolving scientific program with long-term funding of relatively large "critical mass groups" pursuing theory and modeling on a scale larger than that available within the limits of traditional NASA Supporting Research and Technology (SR&T) awards. The results of the last 3 year funding cycle, just ended, contributed to ever more cutting edge theoretical understanding of all parts of the Sun-Earth Connection chain. Advances ranged from the core of the Sun out into the corona, through the solar wind into the Earth's magnetosphere and down to the ionosphere and lower atmosphere, also contributing to understanding the environments of other solar system bodies. The HTP contributions were not isolated findings but continued to contribute to the planning and implementation of NASA spacecraft missions and to the development of the predictive computer models that have become the workhorses for analyzing satellite and ground-based measurements.

  11. Computational Nanoelectronics and Nanotechnology at NASA ARC

    Science.gov (United States)

    Saini, Subhash

    1998-01-01

    Both physical and economic considerations indicate that the scaling era of CMOS will run out of steam around the year 2010. However, physical laws also indicate that it is possible to compute at a rate a billion times present speeds with the expenditure of only one Watt of electrical power. NASA has long-term needs where ultra-small semiconductor devices are needed for critical applications: high-performance, low-power, compact computers for intelligent autonomous vehicles and petaflop computing technology are some key examples. To advance the design, development, and production of future generation micro- and nano-devices, the IT Modeling and Simulation Group has been started at NASA Ames with a goal to develop an integrated simulation environment that addresses problems related to nanoelectronics and molecular nanotechnology. An overview of the nanoelectronics and nanotechnology research activities being carried out at Ames Research Center will be presented. We will also present the vision and the research objectives of the IT Modeling and Simulation Group, including applications of nanoelectronics-based devices relevant to NASA missions.

  12. An Application-Based Performance Evaluation of NASAs Nebula Cloud Computing Platform

    Science.gov (United States)

    Saini, Subhash; Heistand, Steve; Jin, Haoqiang; Chang, Johnny; Hood, Robert T.; Mehrotra, Piyush; Biswas, Rupak

    2012-01-01

    The high performance computing (HPC) community has shown tremendous interest in exploring cloud computing because of its high potential. In this paper, we examine the feasibility, performance, and scalability of production-quality scientific and engineering applications of interest to NASA on NASA's cloud computing platform, called Nebula, hosted at Ames Research Center. This work presents a comprehensive evaluation of Nebula using NUTTCP, HPCC, NPB, I/O, and MPI function benchmarks as well as four applications representative of the NASA HPC workload. Specifically, we compare Nebula performance on some of these benchmarks and applications to that of NASA's Pleiades supercomputer, a traditional HPC system. We also investigate the impact of virtIO and jumbo frames on interconnect performance. Overall results indicate that on Nebula (i) virtIO and jumbo frames improve network bandwidth by a factor of 5x, (ii) there is a significant virtualization layer overhead of about 10% to 25%, (iii) write performance is lower by a factor of 25x, (iv) latency for short MPI messages is very high, and (v) overall performance is 15% to 48% lower than that on Pleiades for NASA HPC applications. We also comment on the usability of the cloud platform.
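
    The short-message MPI latency noted in finding (iv) is typically measured with a ping-pong microbenchmark. The sketch below is a minimal illustration of that idea in Python with mpi4py; it is not the benchmark code used in the paper, and the message size and repetition count are arbitrary. Run it with two ranks, e.g. mpiexec -n 2 python pingpong.py.

      # Hedged sketch: minimal MPI ping-pong latency probe (illustrative only).
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank = comm.Get_rank()
      msg = np.zeros(8, dtype='b')   # an 8-byte "short" message
      reps = 1000

      comm.Barrier()
      t0 = MPI.Wtime()
      for _ in range(reps):
          if rank == 0:
              comm.Send([msg, MPI.BYTE], dest=1, tag=0)
              comm.Recv([msg, MPI.BYTE], source=1, tag=0)
          elif rank == 1:
              comm.Recv([msg, MPI.BYTE], source=0, tag=0)
              comm.Send([msg, MPI.BYTE], dest=0, tag=0)
      t1 = MPI.Wtime()

      if rank == 0:
          # one-way latency estimate in microseconds
          print("latency ~ %.2f us" % ((t1 - t0) / (2 * reps) * 1e6))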

  13. Hot Chips and Hot Interconnects for High End Computing Systems

    Science.gov (United States)

    Saini, Subhash

    2005-01-01

    I will discuss several processors: 1. the Cray proprietary processor used in the Cray X1; 2. the IBM Power 3 and Power 4 used in IBM SP 3 and SP 4 systems; 3. the Intel Itanium and Xeon, used in SGI Altix systems and clusters, respectively; 4. the IBM System-on-a-Chip used in the IBM BlueGene/L; 5. the HP Alpha EV68 processor used in the DOE ASCI Q cluster; 6. the SPARC64 V processor, which is used in the Fujitsu PRIMEPOWER HPC2500; 7. an NEC proprietary processor, which is used in the NEC SX-6/7; 8. the Power 4+ processor, which is used in the Hitachi SR11000; 9. the NEC proprietary processor used in the Earth Simulator. The IBM POWER5 and Red Storm computing systems will also be discussed. The architectures of these processors will first be presented, followed by the interconnection networks and a description of high-end computer systems based on these processors and networks. The performance of various hardware/programming model combinations will then be compared, based on the latest NAS Parallel Benchmark results (MPI, OpenMP/HPF, and hybrid MPI + OpenMP). The tutorial will conclude with a discussion of general trends in the field of high performance computing (quantum computing, DNA computing, cellular engineering, and neural networks).

  14. Development of a High Resolution Weather Forecast Model for Mesoamerica Using the NASA Nebula Cloud Computing Environment

    Science.gov (United States)

    Molthan, Andrew L.; Case, Jonathan L.; Venner, Jason; Moreno-Madrinan, Max. J.; Delgado, Francisco

    2012-01-01

    Over the past two years, scientists in the Earth Science Office at NASA's Marshall Space Flight Center (MSFC) have explored opportunities to apply cloud computing concepts to support near real-time weather forecast modeling via the Weather Research and Forecasting (WRF) model. Collaborators at NASA's Short-term Prediction Research and Transition (SPoRT) Center and the SERVIR project at Marshall Space Flight Center have established a framework that provides high resolution, daily weather forecasts over Mesoamerica through use of the NASA Nebula Cloud Computing Platform at Ames Research Center. Supported by experts at Ames, staff at SPoRT and SERVIR have established daily forecasts complete with web graphics and a user interface that allows SERVIR partners access to high resolution depictions of weather in the next 48 hours, useful for monitoring and mitigating meteorological hazards such as thunderstorms, heavy precipitation, and tropical weather that can lead to other disasters such as flooding and landslides. This presentation will describe the framework for establishing and providing WRF forecasts, example applications of output provided via the SERVIR web portal, and early results of forecast model verification against available surface- and satellite-based observations.

  15. Applied Computational Fluid Dynamics at NASA Ames Research Center

    Science.gov (United States)

    Holst, Terry L.; Kwak, Dochan (Technical Monitor)

    1994-01-01

    The field of Computational Fluid Dynamics (CFD) has advanced to the point where it can now be used for many applications in fluid mechanics research and aerospace vehicle design. A few applications being explored at NASA Ames Research Center will be presented and discussed. The examples presented will range in speed from hypersonic to low speed incompressible flow applications. Most of the results will be from numerical solutions of the Navier-Stokes or Euler equations in three space dimensions for general geometry applications. Computational results will be used to highlight the presentation as appropriate. Advances in computational facilities including those associated with NASA's CAS (Computational Aerosciences) Project of the Federal HPCC (High Performance Computing and Communications) Program will be discussed. Finally, opportunities for future research will be presented and discussed. All material will be taken from non-sensitive, previously-published and widely-disseminated work.

  16. Facilitating NASA Earth Science Data Processing Using Nebula Cloud Computing

    Science.gov (United States)

    Pham, Long; Chen, Aijun; Kempler, Steven; Lynnes, Christopher; Theobald, Michael; Asghar, Esfandiari; Campino, Jane; Vollmer, Bruce

    2011-01-01

    Cloud Computing has been implemented in several commercial arenas. The NASA Nebula Cloud Computing platform is an Infrastructure as a Service (IaaS) built at NASA Ames Research Center in 2008 and at GSFC in 2010. Nebula is an open source Cloud platform intended to: a) make NASA realize significant cost savings through efficient resource utilization, reduced energy consumption, and reduced labor costs; b) provide an easier way for NASA scientists and researchers to efficiently explore and share large and complex data sets; and c) allow customers to provision, manage, and decommission computing capabilities on an as-needed basis.

  17. Computational needs survey of NASA automation and robotics missions. Volume 2: Appendixes

    Science.gov (United States)

    Davis, Gloria J.

    1991-01-01

    NASA's operational use of advanced processor technology in space systems lags behind its commercial development by more than eight years. One of the factors contributing to this is the fact that mission computing requirements are frequently unknown, unstated, misrepresented, or simply not available in a timely manner. NASA must provide clear common requirements to make better use of available technology, to cut development lead time on deployable architectures, and to increase the utilization of new technology. Here, NASA, industry, and academic communities are provided with a preliminary set of advanced mission computational processing requirements of automation and robotics (A and R) systems. The results were obtained in an assessment of the computational needs of current projects throughout NASA. The high percentage of responses indicated a general need for enhanced computational capabilities beyond the currently available 80386 and 68020 processor technology. Because of the need for faster processors and more memory, 90 percent of the polled automation projects have reduced or will reduce the scope of their implemented capabilities. The requirements are presented with respect to their targeted environment, identifying the applications required, the system performance levels necessary to support them, and the degree to which they are met with typical programmatic constraints. Here, the appendixes are provided.

  18. NASA Center for Computational Sciences: History and Resources

    Science.gov (United States)

    2000-01-01

    The NASA Center for Computational Sciences (NCCS) has been a leading capacity computing facility, providing a production environment and support resources to address the challenges facing the Earth and space sciences research community.

  19. Federal High End Computing (HEC) Information Portal

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This portal provides information about opportunities to engage in U.S. Federal government high performance computing activities, including supercomputer use,...

  20. Computational Simulations of the NASA Langley HyMETS Arc-Jet Facility

    Science.gov (United States)

    Brune, A. J.; Bruce, W. E., III; Glass, D. E.; Splinter, S. C.

    2017-01-01

    The Hypersonic Materials Environmental Test System (HyMETS) arc-jet facility located at the NASA Langley Research Center in Hampton, Virginia, is primarily used for the research, development, and evaluation of high-temperature thermal protection systems for hypersonic vehicles and reentry systems. In order to improve testing capabilities and knowledge of the test article environment, an effort is underway to computationally simulate the flow-field using computational fluid dynamics (CFD). A detailed three-dimensional model of the arc-jet nozzle and free-jet portion of the flow-field has been developed and compared to calibration probe Pitot pressure and stagnation-point heat flux for three test conditions at low, medium, and high enthalpy. The CFD model takes into account uniform pressure and non-uniform enthalpy profiles at the nozzle inlet as well as catalytic recombination efficiency effects at the probe surface. Comparing the CFD results and test data indicates an effective catalytic recombination efficiency of about 10% at the copper surface of the heat flux probe and a 2-3 kPa pressure drop from the arc heater bore, where the pressure is measured, to the plenum section prior to the nozzle. With these assumptions, the CFD results are well within the uncertainty of the stagnation pressure and heat flux measurements. The conditions at the nozzle exit were also compared with radial and axial velocimetry. This simulation capability will be used to evaluate various three-dimensional models that are tested in the HyMETS facility. An end-to-end aerothermal and thermal simulation of HyMETS test articles will follow this work to provide a better understanding of the test environment and test results, and to aid in test planning. Additional flow-field diagnostic measurements will also be considered to improve the modeling capability.

  1. Computer graphics aid mission operations. [NASA missions

    Science.gov (United States)

    Jeletic, James F.

    1990-01-01

    The application of computer graphics techniques in NASA space missions is reviewed. Telemetric monitoring of the Space Shuttle and its components is discussed, noting the use of computer graphics for real-time visualization problems in the retrieval and repair of the Solar Maximum Mission. The use of the world map display for determining a spacecraft's location above the earth and the problem of verifying the relative position and orientation of spacecraft to celestial bodies are examined. The Flight Dynamics/STS Three-dimensional Monitoring System and the Trajectory Computations and Orbital Products System world map display are described, emphasizing Space Shuttle applications. Also, consideration is given to the development of monitoring systems such as the Shuttle Payloads Mission Monitoring System and the Attitude Heads-Up Display, and to the use of the NASA-Goddard Two-dimensional Graphics Monitoring System during Shuttle missions and to support the Hubble Space Telescope.

  2. Fluid dynamics parallel computer development at NASA Langley Research Center

    Science.gov (United States)

    Townsend, James C.; Zang, Thomas A.; Dwoyer, Douglas L.

    1987-01-01

    To accomplish more detailed simulations of highly complex flows, such as the transition to turbulence, fluid dynamics research requires computers much more powerful than any available today. Only parallel processing on multiple-processor computers offers hope for achieving the required effective speeds. Looking ahead to the use of these machines, the fluid dynamicist faces three issues: algorithm development for near-term parallel computers, architecture development for future computer power increases, and assessment of possible advantages of special purpose designs. Two projects at NASA Langley address these issues. Software development and algorithm exploration is being done on the FLEX/32 Parallel Processing Research Computer. New architecture features are being explored in the special purpose hardware design of the Navier-Stokes Computer. These projects are complementary and are producing promising results.

  3. Guidelines for development of NASA (National Aeronautics and Space Administration) computer security training programs

    Science.gov (United States)

    Tompkins, F. G.

    1983-01-01

    The report presents guidance for the NASA Computer Security Program Manager and the NASA Center Computer Security Officials as they develop training requirements and implement computer security training programs. NASA audiences are categorized based on the computer security knowledge required to accomplish identified job functions. Training requirements, in terms of training subject areas, are presented for both computer security program management personnel and computer resource providers and users. Sources of computer security training are identified.

  4. Computational needs survey of NASA automation and robotics missions. Volume 1: Survey and results

    Science.gov (United States)

    Davis, Gloria J.

    1991-01-01

    NASA's operational use of advanced processor technology in space systems lags behind its commercial development by more than eight years. One of the factors contributing to this is that mission computing requirements are frequently unknown, unstated, misrepresented, or simply not available in a timely manner. NASA must provide clear common requirements to make better use of available technology, to cut development lead time on deployable architectures, and to increase the utilization of new technology. A preliminary set of advanced mission computational processing requirements of automation and robotics (A&R) systems is provided for use by NASA, industry, and academic communities. These results were obtained in an assessment of the computational needs of current projects throughout NASA. The high percentage of responses indicated a general need for enhanced computational capabilities beyond the currently available 80386 and 68020 processor technology. Because of the need for faster processors and more memory, 90 percent of the polled automation projects have reduced or will reduce the scope of their implementation capabilities. The requirements are presented with respect to their targeted environment, identifying the applications required, the system performance levels necessary to support them, and the degree to which they are met with typical programmatic constraints. Volume one includes the survey and results. Volume two contains the appendixes.

  5. Proposed Use of the NASA Ames Nebula Cloud Computing Platform for Numerical Weather Prediction and the Distribution of High Resolution Satellite Imagery

    Science.gov (United States)

    Limaye, Ashutosh S.; Molthan, Andrew L.; Srikishen, Jayanthi

    2010-01-01

    The development of the Nebula Cloud Computing Platform at NASA Ames Research Center provides an open-source solution for the deployment of scalable computing and storage capabilities relevant to the execution of real-time weather forecasts and the distribution of high resolution satellite data to the operational weather community. Two projects at Marshall Space Flight Center may benefit from use of the Nebula system. The NASA Short-term Prediction Research and Transition (SPoRT) Center facilitates the use of unique NASA satellite data and research capabilities in the operational weather community by providing datasets relevant to numerical weather prediction, and satellite data sets useful in weather analysis. SERVIR provides satellite data products for decision support, emphasizing environmental threats such as wildfires, floods, landslides, and other hazards, with interests in numerical weather prediction in support of disaster response. The Weather Research and Forecast (WRF) model Environmental Modeling System (WRF-EMS) has been configured for Nebula cloud computing use via the creation of a disk image and deployment of repeated instances. Given the available infrastructure within Nebula and the "infrastructure as a service" concept, the system appears well-suited for the rapid deployment of additional forecast models over different domains, in response to real-time research applications or disaster response. Future investigations into Nebula capabilities will focus on the development of a web mapping server and load balancing configuration to support the distribution of high resolution satellite data sets to users within the National Weather Service and international partners of SERVIR.

  6. An introduction to NASA's advanced computing program: Integrated computing systems in advanced multichip modules

    Science.gov (United States)

    Fang, Wai-Chi; Alkalai, Leon

    1996-01-01

    Recent changes within NASA's space exploration program favor the design, implementation, and operation of low cost, lightweight, small and micro spacecraft with multiple launches per year. In order to meet the future needs of these missions with regard to the use of spacecraft microelectronics, NASA's advanced flight computing (AFC) program is currently considering industrial cooperation and advanced packaging architectures. In relation to this, the AFC program is reviewed, considering the design and implementation of NASA's AFC multichip module.

  7. NASA Computational Case Study: The Flight of Friendship 7

    Science.gov (United States)

    Simpson, David G.

    2012-01-01

    In this case study, we learn how to compute the position of an Earth-orbiting spacecraft as a function of time. As an exercise, we compute the position of John Glenn's Mercury spacecraft Friendship 7 as it orbited the Earth during the third flight of NASA's Mercury program.
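
    A minimal sketch of the kind of calculation the case study asks for, assuming a circular orbit; the altitude and epoch below are illustrative placeholders, not the Friendship 7 mission data used in the actual exercise.

      # Hedged sketch: position of a spacecraft in a circular Earth orbit vs. time.
      # Assumes a circular, planar orbit; values are illustrative, not mission data.
      import math

      MU = 3.986004418e14          # Earth's gravitational parameter, m^3/s^2
      R_EARTH = 6.371e6            # mean Earth radius, m
      altitude = 250e3             # assumed orbital altitude, m
      r = R_EARTH + altitude

      n = math.sqrt(MU / r**3)     # mean motion (rad/s) for a circular orbit
      period = 2 * math.pi / n     # orbital period, s

      def position(t, theta0=0.0):
          """In-plane Cartesian position (m) at t seconds after epoch."""
          theta = theta0 + n * t
          return r * math.cos(theta), r * math.sin(theta)

      print("orbital period: %.1f min" % (period / 60.0))
      for t in (0.0, 900.0, 1800.0):
          x, y = position(t)
          print("t=%5.0f s  x=%.1f km  y=%.1f km" % (t, x / 1e3, y / 1e3))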

  8. An Offload NIC for NASA, NLR, and Grid Computing

    Science.gov (United States)

    Awrach, James

    2013-01-01

    , and to add several more capabilities while reducing space consumption and cost. Provisions were designed for interoperability with systems used in the NASA HEC (High-End Computing) program. The new acceleration engine consists of state-of-the-art FPGA (field-programmable gate array) core IP, C, and Verilog code; a novel communication protocol; and extensions to the Globus structure. The engine provides the functions of network acceleration, encryption, compression, packet-ordering, and security, added to a Globus grid or for cloud data transfer. This system is scalable in nX10-Gbps increments through 100-Gbps full-duplex. It can be interfaced to industry-standard system-side or network-side devices or core IP in increments of 10 GigE, scaling to provide IEEE 40/100 GigE compliance.

  9. User-oriented end-to-end transport protocols for the real-time distribution of telemetry data from NASA spacecraft

    Science.gov (United States)

    Hooke, A. J.

    1979-01-01

    A set of standard telemetry protocols for downlink data flow facilitating the end-to-end transport of instrument data from the spacecraft to the user in real time is proposed. The direct switching of data by autonomous message 'packets' that are assembled by the source instrument on the spacecraft is discussed. The data system is thus formatted on a message rather than a word basis, and such packet telemetry would include standardized protocol headers. Standards are being developed within the NASA End-to-End Data System (NEEDS) program for the source packet and transport frame protocols. The source packet protocol contains identification of both the sequence number of the packet as it is generated by the source and the total length of the packet, while the transport frame protocol includes a sequence count defining the serial number of the frame as it is generated by the spacecraft data system, and a field specifying any 'options' selected in the format of the frame itself.
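
    For illustration, a source packet header of the kind described above could be packed and parsed as follows; the field names, widths, and byte order are hypothetical and are not the NEEDS or CCSDS packet format.

      # Hedged sketch: a hypothetical source packet header carrying a source ID,
      # a packet sequence count, and a total packet length (layout illustrative only).
      import struct

      HEADER_FMT = ">HHI"   # big-endian: source id (16 bit), sequence (16 bit), length (32 bit)
      HEADER_SIZE = struct.calcsize(HEADER_FMT)

      def build_packet(source_id, sequence, payload):
          header = struct.pack(HEADER_FMT, source_id, sequence, HEADER_SIZE + len(payload))
          return header + payload

      def parse_packet(packet):
          source_id, sequence, length = struct.unpack(HEADER_FMT, packet[:HEADER_SIZE])
          return source_id, sequence, packet[HEADER_SIZE:length]

      pkt = build_packet(source_id=7, sequence=42, payload=b"instrument data")
      print(parse_packet(pkt))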

  10. Exploiting NASA's Cumulus Earth Science Cloud Archive with Services and Computation

    Science.gov (United States)

    Pilone, D.; Quinn, P.; Jazayeri, A.; Schuler, I.; Plofchan, P.; Baynes, K.; Ramachandran, R.

    2017-12-01

    NASA's Earth Observing System Data and Information System (EOSDIS) houses nearly 30PBs of critical Earth Science data and with upcoming missions is expected to balloon to between 200PBs-300PBs over the next seven years. In addition to the massive increase in data collected, researchers and application developers want more and faster access - enabling complex visualizations, long time-series analysis, and cross dataset research without needing to copy and manage massive amounts of data locally. NASA has started prototyping with commercial cloud providers to make this data available in elastic cloud compute environments, allowing application developers direct access to the massive EOSDIS holdings. In this talk we'll explain the principles behind the archive architecture and share our experience of dealing with large amounts of data with serverless architectures including AWS Lambda, the Elastic Container Service (ECS) for long running jobs, and why we dropped thousands of lines of code for AWS Step Functions. We'll discuss best practices and patterns for accessing and using data available in a shared object store (S3) and leveraging events and message passing for sophisticated and highly scalable processing and analysis workflows. Finally we'll share capabilities NASA and cloud services are making available on the archives to enable massively scalable analysis and computation in a variety of formats and tools.

  11. Projected Applications of a "Climate in a Box" Computing System at the NASA Short-Term Prediction Research and Transition (SPoRT) Center

    Science.gov (United States)

    Jedlovec, Gary J.; Molthan, Andrew L.; Zavodsky, Bradley; Case, Jonathan L.; LaFontaine, Frank J.

    2010-01-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center focuses on the transition of unique observations and research capabilities to the operational weather community, with a goal of improving short-term forecasts on a regional scale. Advances in research computing have led to "Climate in a Box" systems, with hardware configurations capable of producing high resolution, near real-time weather forecasts, but with footprints, power, and cooling requirements that are comparable to desktop systems. The SPoRT Center has developed several capabilities for incorporating unique NASA research capabilities and observations with real-time weather forecasts. Planned utilization includes the development of a fully-cycled data assimilation system used to drive 36-48 hour forecasts produced by the NASA Unified version of the Weather Research and Forecasting (WRF) model (NU-WRF). The horsepower provided by the "Climate in a Box" system is expected to facilitate the assimilation of vertical profiles of temperature and moisture provided by the Atmospheric Infrared Sounder (AIRS) aboard the NASA Aqua satellite. In addition, the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments aboard NASA's Aqua and Terra satellites provide high-resolution sea surface temperatures and vegetation characteristics. The development of MODIS normalized difference vegetation index (NDVI) composites for use within the NASA Land Information System (LIS) will assist in the characterization of vegetation, and subsequently the surface albedo and processes related to soil moisture. Through application of satellite simulators, NASA satellite instruments can be used to examine forecast model errors in cloud cover and other characteristics. Through the aforementioned application of the "Climate in a Box" system and NU-WRF capabilities, an end goal is the establishment of a real-time forecast system that fully integrates modeling and analysis capabilities developed within the NASA SPo

  12. Computer science: Key to a space program renaissance. The 1981 NASA/ASEE summer study on the use of computer science and technology in NASA. Volume 2: Appendices

    Science.gov (United States)

    Freitas, R. A., Jr. (Editor); Carlson, P. A. (Editor)

    1983-01-01

    Adoption of an aggressive computer science research and technology program within NASA will: (1) enable new mission capabilities such as autonomous spacecraft, reliability and self-repair, and low-bandwidth intelligent Earth sensing; (2) lower manpower requirements, especially in the areas of Space Shuttle operations, by making fuller use of control center automation, technical support, and internal utilization of state-of-the-art computer techniques; (3) reduce project costs via improved software verification, software engineering, enhanced scientist/engineer productivity, and increased managerial effectiveness; and (4) significantly improve internal operations within NASA with electronic mail, managerial computer aids, an automated bureaucracy and uniform program operating plans.

  13. Current state and future direction of computer systems at NASA Langley Research Center

    Science.gov (United States)

    Rogers, James L. (Editor); Tucker, Jerry H. (Editor)

    1992-01-01

    Computer systems have advanced at a rate unmatched by any other area of technology. As performance has dramatically increased, there has been an equally dramatic reduction in cost. This constant cost-performance improvement has precipitated the pervasiveness of computer systems into virtually all areas of technology. This improvement is due primarily to advances in microelectronics. Most people are now convinced that the new generation of supercomputers will be built using a large number (possibly thousands) of high performance microprocessors. Although the spectacular improvements in computer systems have come about because of these hardware advances, there has also been a steady improvement in software techniques. In an effort to understand how these hardware and software advances will affect research at NASA LaRC, the Computer Systems Technical Committee drafted this white paper to examine the current state and possible future directions of computer systems at the Center. This paper discusses selected important areas of computer systems including real-time systems, embedded systems, high performance computing, distributed computing networks, data acquisition systems, artificial intelligence, and visualization.

  14. Projected Applications of a ``Climate in a Box'' Computing System at the NASA Short-term Prediction Research and Transition (SPoRT) Center

    Science.gov (United States)

    Jedlovec, G.; Molthan, A.; Zavodsky, B.; Case, J.; Lafontaine, F.

    2010-12-01

    The NASA Short-term Prediction Research and Transition (SPoRT) Center focuses on the transition of unique observations and research capabilities to the operational weather community, with a goal of improving short-term forecasts on a regional scale. Advances in research computing have led to “Climate in a Box” systems, with hardware configurations capable of producing high resolution, near real-time weather forecasts, but with footprints, power, and cooling requirements that are comparable to desktop systems. The SPoRT Center has developed several capabilities for incorporating unique NASA research capabilities and observations with real-time weather forecasts. Planned utilization includes the development of a fully-cycled data assimilation system used to drive 36-48 hour forecasts produced by the NASA Unified version of the Weather Research and Forecasting (WRF) model (NU-WRF). The horsepower provided by the “Climate in a Box” system is expected to facilitate the assimilation of vertical profiles of temperature and moisture provided by the Atmospheric Infrared Sounder (AIRS) aboard the NASA Aqua satellite. In addition, the Moderate Resolution Imaging Spectroradiometer (MODIS) instruments aboard NASA’s Aqua and Terra satellites provide high-resolution sea surface temperatures and vegetation characteristics. The development of MODIS normalized difference vegetation index (NDVI) composites for use within the NASA Land Information System (LIS) will assist in the characterization of vegetation, and subsequently the surface albedo and processes related to soil moisture. Through application of satellite simulators, NASA satellite instruments can be used to examine forecast model errors in cloud cover and other characteristics. Through the aforementioned application of the “Climate in a Box” system and NU-WRF capabilities, an end goal is the establishment of a real-time forecast system that fully integrates modeling and analysis capabilities developed

  15. A Computational Analysis Model for Open-ended Cognitions

    Science.gov (United States)

    Morita, Junya; Miwa, Kazuhisa

    In this paper, we propose a novel usage for computational cognitive models. In cognitive science, computational models have played a critical role as theories of human cognition. Many computational models have successfully simulated the results of controlled psychological experiments. However, there have been only a few attempts to apply the models to complex realistic phenomena. We call such a situation an "open-ended situation". In this study, MAC/FAC ("many are called, but few are chosen"), proposed by [Forbus 95], which models two stages of analogical reasoning, was applied to our open-ended psychological experiment. In our experiment, subjects were presented with a cue story and retrieved cases that they had learned in their everyday life. Following this, they rated the inferential soundness (goodness as analogy) of each retrieved case. For each retrieved case, we computed two kinds of similarity scores (content vectors/structural evaluation scores) using the algorithms of MAC/FAC. As a result, the computed content vectors explained the overall retrieval of cases well, whereas the structural evaluation scores had a strong relation to the rated scores. These results support MAC/FAC's theoretical assumption that different similarities are involved in the two stages of analogical reasoning. Our study is an attempt to use a computational model as an analysis device for open-ended human cognition.
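
    A minimal sketch of the MAC-stage idea (a cheap, non-structural similarity estimate computed as a dot product of content vectors), for illustration only; the predicate vocabulary and cases are invented, and this is not the authors' analysis code.

      # Hedged sketch: MAC-stage similarity as a dot product of content vectors
      # (counts of predicate symbols occurring in each case). Illustrative only.
      from collections import Counter

      def content_vector(predicates):
          """Content vector = multiset of predicate symbols in a case."""
          return Counter(predicates)

      def mac_score(probe, memory_case):
          """Dot product of the two content vectors (higher = more overlap)."""
          return sum(probe[p] * memory_case[p] for p in probe)

      cue = content_vector(["cause", "flow", "greater", "pressure", "pressure"])
      case_a = content_vector(["cause", "flow", "greater", "temperature", "temperature"])
      case_b = content_vector(["attack", "defend", "cause"])

      # Cases whose score against the cue is high would pass to the structural (FAC) stage.
      print(mac_score(cue, case_a), mac_score(cue, case_b))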

  16. High-Performance Computing Paradigm and Infrastructure

    CERN Document Server

    Yang, Laurence T

    2006-01-01

    With hyperthreading in Intel processors, hypertransport links in next generation AMD processors, multi-core silicon in today's high-end microprocessors from IBM, and emerging grid computing, parallel and distributed computers have moved into the mainstream.

  17. Facilitating NASA Earth Science Data Processing Using Nebula Cloud Computing

    Science.gov (United States)

    Chen, A.; Pham, L.; Kempler, S.; Theobald, M.; Esfandiari, A.; Campino, J.; Vollmer, B.; Lynnes, C.

    2011-12-01

    Cloud Computing technology has been used to offer high-performance and low-cost computing and storage resources for both scientific problems and business services. Several cloud computing services have been implemented in the commercial arena, e.g. Amazon's EC2 & S3, Microsoft's Azure, and Google App Engine. There are also research and application programs being launched in academia and government to utilize Cloud Computing. NASA launched the Nebula Cloud Computing platform in 2008, which is an Infrastructure as a Service (IaaS) that delivers on-demand distributed virtual computers. Nebula users can receive required computing resources as a fully outsourced service. The NASA Goddard Earth Science Data and Information Service Center (GES DISC) migrated several GES DISC applications to Nebula as a proof of concept, including: a) the Simple, Scalable, Script-based Science Processor for Measurements (S4PM) for processing scientific data; b) the Atmospheric Infrared Sounder (AIRS) data process workflow for processing AIRS raw data; and c) the GES-DISC Interactive Online Visualization ANd aNalysis Infrastructure (GIOVANNI) for online access to, analysis, and visualization of Earth science data. This work aims to evaluate the practicability and adaptability of the Nebula. The initial work focused on the AIRS data process workflow to evaluate the Nebula. The AIRS data process workflow consists of a series of algorithms used to process raw AIRS level 0 data and output AIRS level 2 geophysical retrievals. Migrating the entire workflow to the Nebula platform is challenging, but practicable. After installing several supporting libraries and the processing code itself, the workflow is able to process AIRS data in a similar fashion to its current (non-cloud) configuration. We compared the performance of processing 2 days of AIRS level 0 data through level 2 using a Nebula virtual computer and a local Linux computer. The result shows that Nebula has significantly

  18. Creating Communications, Computing, and Networking Technology Development Road Maps for Future NASA Human and Robotic Missions

    Science.gov (United States)

    Bhasin, Kul; Hayden, Jeffrey L.

    2005-01-01

    For the human and robotic exploration missions in the Vision for Exploration, roadmaps are needed for capability development and investments based on advanced technology developments. A roadmap development process was undertaken for the communications and networking capabilities and technologies needed for future human and robotic missions. The underlying processes are derived from work carried out during development of the future space communications architecture, and NASA's Space Architect Office (SAO) defined formats and structures for accumulating data. Interrelationships were established among emerging requirements, the capability analysis and technology status, and performance data. After developing an architectural communications and networking framework structured around the assumed needs for human and robotic exploration in the vicinity of Earth, the Moon, along the path to Mars, and in the vicinity of Mars, information was gathered from expert participants. This information was used to identify the capabilities expected from the new infrastructure and the technological gaps in the way of obtaining them. We define realistic, long-term space communication architectures based on emerging needs and translate the needs into the interfaces, functions, and computer processing that will be required. In developing our roadmapping process, we defined requirements for achieving end-to-end activities that will be carried out by future NASA human and robotic missions. This paper describes: 1) the architectural framework developed for analysis; 2) our approach to gathering and analyzing data from NASA, industry, and academia; 3) an outline of the technology research to be done, including milestones for technology research and demonstrations with timelines; and 4) the technology roadmaps themselves.

  19. SSR_pipeline--computer software for the identification of microsatellite sequences from paired-end Illumina high-throughput DNA sequence data

    Science.gov (United States)

    Miller, Mark P.; Knaus, Brian J.; Mullins, Thomas D.; Haig, Susan M.

    2013-01-01

    SSR_pipeline is a flexible set of programs designed to efficiently identify simple sequence repeats (SSRs; for example, microsatellites) from paired-end high-throughput Illumina DNA sequencing data. The program suite contains three analysis modules along with a fourth control module that can be used to automate analyses of large volumes of data. The modules are used to (1) identify the subset of paired-end sequences that pass quality standards, (2) align paired-end reads into a single composite DNA sequence, and (3) identify sequences that possess microsatellites conforming to user specified parameters. Each of the three separate analysis modules also can be used independently to provide greater flexibility or to work with FASTQ or FASTA files generated from other sequencing platforms (Roche 454, Ion Torrent, etc). All modules are implemented in the Python programming language and can therefore be used from nearly any computer operating system (Linux, Macintosh, Windows). The program suite relies on a compiled Python extension module to perform paired-end alignments. Instructions for compiling the extension from source code are provided in the documentation. Users who do not have Python installed on their computers or who do not have the ability to compile software also may choose to download packaged executable files. These files include all Python scripts, a copy of the compiled extension module, and a minimal installation of Python in a single binary executable. See program documentation for more information.
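
    A minimal sketch of the third step, detecting simple sequence repeats in a composite sequence with a back-referencing regular expression; the motif lengths and repeat threshold are illustrative parameters, and this is not the SSR_pipeline code itself.

      # Hedged sketch: find simple sequence repeats (microsatellites) in a DNA
      # sequence. Parameters (motif lengths 2-6 bp, >= 5 tandem copies) are
      # illustrative defaults, not SSR_pipeline's actual settings.
      import re

      def find_ssrs(seq, min_motif=2, max_motif=6, min_repeats=5):
          hits = []
          for motif_len in range(min_motif, max_motif + 1):
              pattern = re.compile(r"([ACGT]{%d})\1{%d,}" % (motif_len, min_repeats - 1))
              for m in pattern.finditer(seq):
                  hits.append((m.start(), m.group(1), len(m.group(0)) // motif_len))
          return hits  # (position, motif, number of tandem copies)

      read = "TTGACACACACACACAGGTAGATAGATAGATAGATAGATCCGT"
      for pos, motif, copies in find_ssrs(read):
          print("pos %d: (%s) x %d" % (pos, motif, copies))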

  20. The NASA Computational Fluid Dynamics (CFD) program - Building technology to solve future challenges

    Science.gov (United States)

    Richardson, Pamela F.; Dwoyer, Douglas L.; Kutler, Paul; Povinelli, Louis A.

    1993-01-01

    This paper presents the NASA Computational Fluid Dynamics program in terms of a strategic vision and goals as well as NASA's financial commitment and personnel levels. The paper also identifies the CFD program customers and the support to those customers. In addition, the paper discusses technical emphasis and direction of the program and some recent achievements. NASA's Ames, Langley, and Lewis Research Centers are the research hubs of the CFD program while the NASA Headquarters Office of Aeronautics represents and advocates the program.

  1. Comparison between low-cost marker-less and high-end marker-based motion capture systems for the computer-aided assessment of working ergonomics.

    Science.gov (United States)

    Patrizi, Alfredo; Pennestrì, Ettore; Valentini, Pier Paolo

    2016-01-01

    The paper deals with the comparison between a high-end marker-based acquisition system and a low-cost marker-less methodology for the assessment of human posture during working tasks. The low-cost methodology is based on the use of a single Microsoft Kinect V1 device. The high-end acquisition system is the BTS SMART, which requires the use of reflective markers placed on the subject's body. Three practical working activities involving object lifting and displacement have been investigated. The operational risk has been evaluated according to the lifting equation proposed by the American National Institute for Occupational Safety and Health. The results of the study show that the risk multipliers computed from the two acquisition methodologies are very close for all the analysed activities. In agreement with this outcome, the marker-less methodology based on the Microsoft Kinect V1 device seems very promising for promoting the dissemination of computer-aided assessment of ergonomics while maintaining good accuracy and affordable costs. PRACTITIONER’S SUMMARY: The study is motivated by the increasing interest in on-site working ergonomics assessment. We compared a low-cost marker-less methodology with a high-end marker-based system. We tested them on three different working tasks, assessing the working risk of lifting loads. The two methodologies showed comparable precision in all the investigations.
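
    The NIOSH assessment referenced above uses the revised lifting equation, which multiplies a load constant by geometry- and frequency-dependent multipliers to give a recommended weight limit. A minimal sketch follows; the task values are invented, and the frequency and coupling multipliers are supplied directly rather than looked up from the NIOSH tables.

      # Hedged sketch: revised NIOSH lifting equation (metric form).
      # RWL = LC * HM * VM * DM * AM * FM * CM; LI = load weight / RWL.
      # Task values below are invented; FM and CM normally come from NIOSH tables.

      def recommended_weight_limit(H, V, D, A, FM=1.0, CM=1.0):
          LC = 23.0                          # load constant, kg
          HM = 25.0 / H                      # horizontal multiplier (H in cm)
          VM = 1.0 - 0.003 * abs(V - 75.0)   # vertical multiplier (V in cm)
          DM = 0.82 + 4.5 / D                # distance multiplier (D in cm)
          AM = 1.0 - 0.0032 * A              # asymmetry multiplier (A in degrees)
          return LC * HM * VM * DM * AM * FM * CM

      def lifting_index(load_kg, rwl_kg):
          return load_kg / rwl_kg            # LI > 1 suggests elevated risk

      rwl = recommended_weight_limit(H=40, V=50, D=60, A=30, FM=0.88, CM=0.95)
      print("RWL = %.1f kg, LI = %.2f" % (rwl, lifting_index(10.0, rwl)))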

  2. Managing the Risks Associated with End-User Computing.

    Science.gov (United States)

    Alavi, Maryam; Weiss, Ira R.

    1986-01-01

    Identifies organizational risks of end-user computing (EUC) associated with different stages of the end-user applications life cycle (analysis, design, implementation). Generic controls are identified that address each of the risks enumerated in a manner that allows EUC management to select those most appropriate to their EUC environment. (5…

  3. High Performance Spaceflight Computing (HPSC)

    Data.gov (United States)

    National Aeronautics and Space Administration — Space-based computing has not kept up with the needs of current and future NASA missions. We are developing a next-generation flight computing system that addresses...

  4. Training leads to increased auditory brain-computer interface performance of end-users with motor impairments.

    Science.gov (United States)

    Halder, S; Käthner, I; Kübler, A

    2016-02-01

    Auditory brain-computer interfaces are an assistive technology that can restore communication for motor-impaired end-users. Such non-visual brain-computer interface paradigms are of particular importance for end-users who may lose or have lost gaze control. We attempted to show that motor-impaired end-users can learn to control an auditory speller on the basis of event-related potentials. Five end-users with motor impairments, two of whom had additional visual impairments, participated in five sessions. We applied a newly developed auditory brain-computer interface paradigm with natural sounds and directional cues. Three of the five end-users learned to select symbols using this method. Averaged over all five end-users, the information transfer rate increased by more than 1800% from the first session (0.17 bits/min) to the last session (3.08 bits/min). The two best end-users achieved information transfer rates of 5.78 bits/min and accuracies of 92%. Our results show that an auditory BCI with a combination of natural sounds and directional cues can be controlled by end-users with motor impairment. Training improves the performance of end-users to the level of healthy controls. To our knowledge, this is the first time end-users with motor impairments have controlled an auditory brain-computer interface speller with such high accuracy and information transfer rates. Further, our results demonstrate that operating a BCI with event-related potentials benefits from training, and specifically that end-users may require more than one session to develop their full potential. Copyright © 2015 International Federation of Clinical Neurophysiology. Published by Elsevier Ireland Ltd. All rights reserved.
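
    For context, information transfer rates like those reported above are commonly computed with the Wolpaw formula, which combines the number of selectable symbols, the selection accuracy, and the selection speed. A minimal sketch follows; the symbol count and selection rate are illustrative, since the abstract does not state the exact speller configuration.

      # Hedged sketch: Wolpaw information transfer rate (ITR) for a BCI speller.
      # N = selectable symbols, P = selection accuracy; values illustrative only.
      import math

      def bits_per_selection(N, P):
          if P >= 1.0:
              return math.log2(N)
          return (math.log2(N)
                  + P * math.log2(P)
                  + (1.0 - P) * math.log2((1.0 - P) / (N - 1)))

      def itr_bits_per_min(N, P, selections_per_min):
          return bits_per_selection(N, P) * selections_per_min

      # e.g. a 27-symbol speller at 92% accuracy and 1.5 selections per minute
      print("%.2f bits/min" % itr_bits_per_min(N=27, P=0.92, selections_per_min=1.5))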

  5. End design of the SSC 58 mm High Gradient Quadrupole

    International Nuclear Information System (INIS)

    Caspi, S.

    1992-01-01

    The "end" design of the High Gradient Quadrupole was done with consideration of the integrated field harmonics, the iron contribution, and the maximum field at the conductor. Magnetic analysis was done on the return end only; however, the physical dimensions of the lead end were determined as well. Using the cross-section of the windings and Cook's program BEND, we generated the physical end windings around the return end. Placing a single wire at the center of each turn, the integrated gradient was computed, and by iterating on the end block spacers the integrated harmonics were minimized. The final geometry was then used for more extensive calculations, such as the field at the conductor and the 3D field harmonics. For this detailed calculation we placed a single line current at the center of each strand and included the iron contribution (μ = ∞); see Appendix C. With the termination of the iron serving as a reference, the maximum lengths of the inner and outer layers are 182 mm and 215 mm respectively. The magnetic length of the end was computed from the gradient function A_2 and was found to be 142 mm. In reality we expect the physical length of the end to be somewhat larger; however, this should have little or no effect on the magnetic length. The gradient in the straight section is 212.44 T/m at 7000 A, and the integrated gradient over the end region marked by the magnetic length of the end is -3.01665E5 G. The respective integrated harmonics for the end 12-pole and 20-pole are -10.6658 G/cm^4 and 0.7279 G/cm^8, corresponding to b_6 = 0.351 and b_10 = -0.024 units. The above was computed from the values of A_2, A_6, and A_10.
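
    As a consistency check on the numbers quoted above, the magnetic length of the end follows from dividing the integrated end-region gradient by the straight-section gradient, once both are expressed on a common G/cm basis; a minimal sketch of that arithmetic is below.

      # Hedged sketch: check of the quoted end magnetic length.
      # magnetic length = |integrated end gradient| / straight-section gradient.
      G_straight = 212.44 * 100.0     # 212.44 T/m -> 21244 G/cm
      G_integrated = 3.01665e5        # magnitude of integrated end gradient, G

      L_magnetic_cm = G_integrated / G_straight
      print("end magnetic length ~ %.0f mm" % (L_magnetic_cm * 10.0))   # ~142 mm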

  6. High-Performance Java Codes for Computational Fluid Dynamics

    Science.gov (United States)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  7. High-Speed On-Board Data Processing Platform for LIDAR Projects at NASA Langley Research Center

    Science.gov (United States)

    Beyon, J.; Ng, T. K.; Davis, M. J.; Adams, J. K.; Lin, B.

    2015-12-01

    The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) was funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program from April 2012 to April 2015. HOPS is an enabler for science missions with extremely high data processing rates. In this three-year effort, the Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) and 3-D Winds missions were of particular interest. For ASCENDS, HOPS replaces time-domain data processing with frequency-domain processing while making real-time on-board data processing possible. For 3-D Winds, HOPS offers real-time high-resolution wind profiling with a 4,096-point fast Fourier transform (FFT). HOPS is adaptable with quick turn-around time. Since HOPS offers reusable user-friendly computational elements, its FPGA IP core can be modified for a shorter development period if the algorithm changes. The FPGA and memory bandwidth of HOPS is 20 GB/sec, while the typical maximum processor-to-SDRAM bandwidth of commercial radiation-tolerant high-end processors is about 130-150 MB/sec. The inter-board communication bandwidth of HOPS is 4 GB/sec, while the effective processor-to-cPCI bandwidth of commercial radiation-tolerant high-end boards is about 50-75 MB/sec. HOPS also offers VHDL cores for the easy and efficient implementation of ASCENDS, 3-D Winds, and other similar algorithms. A general overview of the three-year development of HOPS is the goal of this presentation.
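    As an illustration of the kind of frequency-domain processing described above, the sketch below (plain Python/NumPy with a synthetic signal and illustrative sample rate; the flight implementation is a VHDL FPGA core) locates the dominant spectral peak of one range gate with a 4,096-point FFT, the FFT size quoted for HOPS:

        import numpy as np

        def doppler_peak(samples, fs, nfft=4096):
            # Window the digitized return from one range gate and compute a
            # 4,096-point power spectrum, then return the peak frequency.
            n = min(len(samples), nfft)
            windowed = samples[:n] * np.hanning(n)
            spectrum = np.abs(np.fft.rfft(windowed, n=nfft)) ** 2
            freqs = np.fft.rfftfreq(nfft, d=1.0 / fs)
            return freqs[np.argmax(spectrum)]  # dominant beat/Doppler frequency

        # Example with a synthetic 5 MHz tone sampled at 100 MHz (illustrative numbers).
        fs = 100e6
        t = np.arange(4096) / fs
        print(doppler_peak(np.sin(2 * np.pi * 5e6 * t), fs))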

  8. Development of a High Resolution Weather Forecast Model for Mesoamerica Using the NASA Ames Code I Private Cloud Computing Environment

    Science.gov (United States)

    Molthan, Andrew; Case, Jonathan; Venner, Jason; Moreno-Madrinan, Max J.; Delgado, Francisco

    2012-01-01

    Two projects at NASA Marshall Space Flight Center have collaborated to develop a high-resolution weather forecast model for Mesoamerica: the NASA Short-term Prediction Research and Transition (SPoRT) Center, which integrates unique NASA satellite and weather forecast modeling capabilities into the operational weather forecasting community, and NASA's SERVIR Program, which integrates satellite observations, ground-based data, and forecast models to improve disaster response in Central America, the Caribbean, Africa, and the Himalayas.

  9. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state of the art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation, ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book allows a comparison of the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  10. MANAGING HIGH-END, HIGH-VOLUME INNOVATIVE PRODUCTS

    Directory of Open Access Journals (Sweden)

    Gembong Baskoro

    2008-01-01

    Full Text Available This paper discusses the concept of managing high-end, high-volume innovative products. High-end, high-volume consumer products are products that have considerable influence on the way of life. Characteristics of high-end, high-volume consumer products are (1) short cycle time, (2) quick obsolescence, and (3) rapid price erosion. Despite the disadvantage that they are high risk for manufacturers, if manufacturers are able to understand consumer needs precisely, they have the potential to become the market leader. High innovation implies high utilization by the user; therefore, these products can indirectly influence the way people live. The objective of managing them is to achieve sustainability of product development and innovation. This paper observes the behavior of these products in companies operating in the high-end, high-volume consumer product market.

  11. Tools for 3D scientific visualization in computational aerodynamics at NASA Ames Research Center

    International Nuclear Information System (INIS)

    Bancroft, G.; Plessel, T.; Merritt, F.; Watson, V.

    1989-01-01

    Hardware, software, and techniques used by the Fluid Dynamics Division (NASA) for performing visualization of computational aerodynamics, which can be applied to the visualization of flow fields from computer simulations of fluid dynamics about the Space Shuttle, are discussed. The three visualization techniques applied (post-processing, tracking, and steering) are described, as well as the post-processing software packages used: PLOT3D, SURF (Surface Modeller), GAS (Graphical Animation System), and FAST (Flow Analysis Software Toolkit). Using post-processing methods, a flow simulation was executed on a supercomputer and, after the simulation was complete, the results were processed for viewing. It is shown that the high-resolution, high-performance three-dimensional workstation combined with specially developed display and animation software provides a good tool for analyzing flow field solutions obtained from supercomputers. 7 refs

  12. Advanced Camera Image Cropping Approach for CNN-Based End-to-End Controls on Sustainable Computing

    Directory of Open Access Journals (Sweden)

    Yunsick Sung

    2018-03-01

    Full Text Available Recent research on deep learning has been applied to a diversity of fields. In particular, numerous studies have been conducted on self-driving vehicles using end-to-end approaches based on images captured by a single camera. End-to-end controls learn the output vectors of output devices directly from the input vectors of available input devices. In other words, an end-to-end approach learns not by analyzing the meaning of input vectors, but by extracting optimal output vectors based on input vectors. Generally, when end-to-end control is applied to self-driving vehicles, the steering wheel and pedals are controlled autonomously by learning from the images captured by a camera. However, high-resolution images captured from a car cannot be directly used as inputs to Convolutional Neural Networks (CNNs) owing to memory limitations; the image size needs to be efficiently reduced. Therefore, it is necessary to extract features from captured images automatically and to generate input images by merging the parts of the images that contain the extracted features. This paper proposes a learning method for end-to-end control that generates input images for CNNs by extracting road parts from input images, identifying the edges of the extracted road parts, and merging the parts of the images that contain the detected edges. In addition, a CNN model for end-to-end control is introduced. Experiments involving the Open Racing Car Simulator (TORCS), a sustainable computing environment for cars, confirmed the effectiveness of the proposed method for self-driving by comparing the accumulated difference in the angle of the steering wheel in the images generated by it with those of resized images containing the entire captured area and cropped images containing only a part of the captured area. The results showed that the proposed method reduced the accumulated difference by 0.839% and 0.850% compared to those yielded by the resized images and cropped images.
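    A minimal sketch of the cropping idea described above (Python with OpenCV; the Canny thresholds, output size, and fallback behaviour are illustrative assumptions, not the authors' exact pipeline):

        import cv2

        def crop_road_region(frame, out_size=(200, 66)):
            # Illustrative edge-based cropping before feeding a frame to a CNN.
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            edges = cv2.Canny(gray, 50, 150)           # detect edges of the road region
            edges[: frame.shape[0] // 3, :] = 0        # ignore the upper third (sky/background)
            pts = cv2.findNonZero(edges)
            if pts is None:                            # no edges found: fall back to a plain resize
                return cv2.resize(frame, out_size)
            x, y, w, h = cv2.boundingRect(pts)         # merge edge-containing parts into one crop
            return cv2.resize(frame[y:y + h, x:x + w], out_size)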

  13. Parallel Computing:. Some Activities in High Energy Physics

    Science.gov (United States)

    Willers, Ian

    This paper examines some activities in High Energy Physics that utilise parallel computing. The topic includes all computing from the proposed SIMD front end detectors, the farming applications, high-powered RISC processors and the large machines in the computer centers. We start by looking at the motivation behind using parallelism for general purpose computing. The developments around farming are then described from its simplest form to the more complex system in Fermilab. Finally, there is a list of some developments that are happening close to the experiments.

  14. NSI customer service representatives and user support office: NASA Science Internet

    Science.gov (United States)

    1991-01-01

    The NASA Science Internet (NSI) was established in 1987 to provide NASA's Office of Space Science and Applications (OSSA) missions with transparent wide-area data connectivity to NASA's researchers, computational resources, and databases. The NSI Office at NASA/Ames Research Center has the lead responsibility for implementing a total, open networking program to serve the OSSA community. NSI is a full-service communications provider whose services include science network planning, network engineering, applications development, network operations, and network information center/user support services. NSI's mission is to provide reliable high-speed communications to the NASA science community. To this end, the NSI Office manages and operates the NASA Science Internet, a multiprotocol network currently supporting both DECnet and TCP/IP protocols. NSI utilizes state-of-the-art network technology to meet its customers' requirements. The NASA Science Internet interconnects with other national networks including the National Science Foundation's NSFNET, the Department of Energy's ESnet, and the Department of Defense's MILNET. NSI also has international connections to Japan, Australia, New Zealand, Chile, and several European countries. NSI cooperates with other government agencies as well as academic and commercial organizations to implement networking technologies which foster interoperability, improve reliability and performance, increase security and control, and expedite migration to the OSI protocols.

  15. Modern design of a fast front-end computer

    Science.gov (United States)

    Šoštarić, Z.; Aničić, D.; Sekolec, L.; Su, J.

    1994-12-01

    Front-end computers (FEC) at Paul Scherrer Institut provide access to accelerator CAMAC-based sensors and actuators by way of a local area network. In the scope of the new generation FEC project, a front-end is regarded as a collection of services. The functionality of one such service is described in terms of Yourdon's environment, behaviour, processor and task models. The computational model (software representation of the environment) of the service is defined separately, using the information model of the Shlaer-Mellor method and the Sather OO language. In parallel with the analysis and later with the design, a suite of test programmes was developed to evaluate the feasibility of different computing platforms for the project, and a set of rapid prototypes was produced to resolve different implementation issues. The past and future aspects of the project and its driving forces are presented. Justification of the choice of methodology, platform, and requirements is given. We conclude with a description of the present state, priorities and limitations of our project.

  16. Development of superconductor electronics technology for high-end computing

    Energy Technology Data Exchange (ETDEWEB)

    Silver, A [Jet Propulsion Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109-8099 (United States); Kleinsasser, A [Jet Propulsion Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109-8099 (United States); Kerber, G [Northrop Grumman Space Technology, One Space Park, Redondo Beach, CA 90278 (United States); Herr, Q [Northrop Grumman Space Technology, One Space Park, Redondo Beach, CA 90278 (United States); Dorojevets, M [Department of Electrical and Computer Engineering, SUNY-Stony Brook, NY 11794-2350 (United States); Bunyk, P [Northrop Grumman Space Technology, One Space Park, Redondo Beach, CA 90278 (United States); Abelson, L [Northrop Grumman Space Technology, One Space Park, Redondo Beach, CA 90278 (United States)

    2003-12-01

    This paper describes our programme to develop and demonstrate ultra-high performance single flux quantum (SFQ) VLSI technology that will enable superconducting digital processors for petaFLOPS-scale computing. In the hybrid technology, multi-threaded architecture, the computational engine to power a petaFLOPS machine at affordable power will consist of 4096 SFQ multi-chip processors, with 50 to 100 GHz clock frequency and associated cryogenic RAM. We present the superconducting technology requirements, progress to date and our plan to meet these requirements. We improved SFQ Nb VLSI by two generations, to an 8 kA cm^-2, 1.25 μm junction process, incorporated new CAD tools into our methodology, and demonstrated methods for recycling the bias current and data communication at speeds up to 60 Gb s^-1, both on and between chips through passive transmission lines. FLUX-1 is the most ambitious project implemented in SFQ technology to date, a prototype general-purpose 8 bit microprocessor chip. We are testing the FLUX-1 chip (5K gates, 20 GHz clock) and designing a 32 bit floating-point SFQ multiplier with vector-register memory. We report correct operation of the complete stripline-connected gate library with large bias margins, as well as several larger functional units used in FLUX-1. The next stage will be an SFQ multi-processor machine. Important challenges include further reducing chip supply current and on-chip power dissipation, developing at least 64 kbit, sub-nanosecond cryogenic RAM chips, developing thermally and electrically efficient high data rate cryogenic-to-ambient input/output technology, and improving Nb VLSI to increase gate density.

  17. Development of superconductor electronics technology for high-end computing

    International Nuclear Information System (INIS)

    Silver, A; Kleinsasser, A; Kerber, G; Herr, Q; Dorojevets, M; Bunyk, P; Abelson, L

    2003-01-01

    This paper describes our programme to develop and demonstrate ultra-high performance single flux quantum (SFQ) VLSI technology that will enable superconducting digital processors for petaFLOPS-scale computing. In the hybrid technology, multi-threaded architecture, the computational engine to power a petaFLOPS machine at affordable power will consist of 4096 SFQ multi-chip processors, with 50 to 100 GHz clock frequency and associated cryogenic RAM. We present the superconducting technology requirements, progress to date and our plan to meet these requirements. We improved SFQ Nb VLSI by two generations, to an 8 kA cm^-2, 1.25 μm junction process, incorporated new CAD tools into our methodology, and demonstrated methods for recycling the bias current and data communication at speeds up to 60 Gb s^-1, both on and between chips through passive transmission lines. FLUX-1 is the most ambitious project implemented in SFQ technology to date, a prototype general-purpose 8 bit microprocessor chip. We are testing the FLUX-1 chip (5K gates, 20 GHz clock) and designing a 32 bit floating-point SFQ multiplier with vector-register memory. We report correct operation of the complete stripline-connected gate library with large bias margins, as well as several larger functional units used in FLUX-1. The next stage will be an SFQ multi-processor machine. Important challenges include further reducing chip supply current and on-chip power dissipation, developing at least 64 kbit, sub-nanosecond cryogenic RAM chips, developing thermally and electrically efficient high data rate cryogenic-to-ambient input/output technology, and improving Nb VLSI to increase gate density.

  18. The NASA CSTI High Capacity Power Program

    International Nuclear Information System (INIS)

    Winter, J.M.

    1991-09-01

    The SP-100 program was established in 1983 by DOD, DOE, and NASA as a joint program to develop the technology necessary for space nuclear power systems for military and civil applications. During 1986 and 1987, the NASA Advanced Technology Program was responsible for maintaining the momentum of promising technology advancement efforts started during Phase 1 of SP-100 and for strengthening, in key areas, the chances for successful development and growth capability of space nuclear reactor power systems for future space applications. In 1988, the NASA Advanced Technology Program was incorporated into NASA's new Civil Space Technology Initiative (CSTI). The CSTI program was established to provide the foundation for technology development in automation and robotics, information, propulsion, and power. The CSTI High Capacity Power Program builds on the technology efforts of the SP-100 program, incorporates the previous NASA advanced technology project, and provides a bridge to the NASA exploration technology programs. The elements of CSTI high capacity power development include conversion systems (Stirling and thermoelectric), thermal management, power management, system diagnostics, and environmental interactions. Technology advancement in all areas, including materials, is required to provide the growth capability, high reliability, and 7- to 10-year lifetime demanded for future space nuclear power systems. The overall program will develop and demonstrate the technology base required to provide a wide range of modular power systems while minimizing the impact of day/night operations as well as attitudes and distance from the Sun. Significant accomplishments in all of the program elements will be discussed, along with revised goals and project timelines recently developed.

  19. The NASA Ames Polycyclic Aromatic Hydrocarbon Infrared Spectroscopic Database : The Computed Spectra

    NARCIS (Netherlands)

    Bauschlicher, C. W.; Boersma, C.; Ricca, A.; Mattioda, A. L.; Cami, J.; Peeters, E.; de Armas, F. Sanchez; Saborido, G. Puerta; Hudgins, D. M.; Allamandola, L. J.

    The astronomical emission features, formerly known as the unidentified infrared bands, are now commonly ascribed to polycyclic aromatic hydrocarbons (PAHs). The laboratory experiments and computational modeling done at the NASA Ames Research Center to create a collection of PAH IR spectra relevant

  20. NASA Gulf of Mexico Initiative Hypoxia Research

    Science.gov (United States)

    Armstrong, Curtis D.

    2012-01-01

    The Applied Science & Technology Project Office at Stennis Space Center (SSC) manages NASA's Gulf of Mexico Initiative (GOMI). Addressing short-term crises and long-term issues, GOMI participants seek to understand the environment using remote sensing, in-situ observations, laboratory analyses, field observations and computational models. New capabilities are transferred to end-users to help them make informed decisions. Some GOMI activities of interest to the hypoxia research community are highlighted.

  1. Flow Control Research at NASA Langley in Support of High-Lift Augmentation

    Science.gov (United States)

    Sellers, William L., III; Jones, Gregory S.; Moore, Mark D.

    2002-01-01

    The paper describes the efforts at NASA Langley to apply active and passive flow control techniques for improved high-lift systems, and advanced vehicle concepts utilizing powered high-lift techniques. The development of simplified high-lift systems utilizing active flow control is shown to provide significant weight and drag reduction benefits based on system studies. Active flow control that focuses on separation, and the development of advanced circulation control wings (CCW) utilizing unsteady excitation techniques will be discussed. The advanced CCW airfoils can provide multifunctional controls throughout the flight envelope. Computational and experimental data are shown to illustrate the benefits and issues with implementation of the technology.

  2. Highly Parallel Computing Architectures by using Arrays of Quantum-dot Cellular Automata (QCA): Opportunities, Challenges, and Recent Results

    Science.gov (United States)

    Fijany, Amir; Toomarian, Benny N.

    2000-01-01

    There has been significant improvement in the performance of VLSI devices, in terms of size, power consumption, and speed, in recent years, and this trend may continue for the near future. However, it is a well-known fact that there are major obstacles, i.e., the physical limitations of feature size reduction and the ever-increasing cost of foundries, that would prevent the long-term continuation of this trend. This has motivated the exploration of some fundamentally new technologies that are not dependent on the conventional feature size approach. Such technologies are expected to enable scaling to continue to the ultimate level, i.e., molecular and atomistic size. Quantum computing, quantum-dot-based computing, DNA-based computing, biologically inspired computing, etc., are examples of such new technologies. In particular, quantum-dot-based computing using Quantum-dot Cellular Automata (QCA) has recently been intensely investigated as a promising new technology capable of offering significant improvement over conventional VLSI in terms of reduction of feature size (and hence increase in integration level), reduction of power consumption, and increase of switching speed. Quantum dot-based computing and memory in general, and QCA specifically, are intriguing to NASA due to their high packing density (10^11 - 10^12 per square cm), low power consumption (no transfer of current), and potentially higher radiation tolerance. Under the Revolutionary Computing Technology (RTC) Program at the NASA/JPL Center for Integrated Space Microelectronics (CISM), we have been investigating the potential applications of QCA for the space program. To this end, exploiting the intrinsic features of QCA, we have designed novel QCA-based circuits for co-planar (i.e., single-layer) and compact implementation of a class of data permutation matrices, a class of interconnection networks, and a bit-serial processor. Building upon these circuits, we have developed novel algorithms and QCA

  3. Parallel computation of fluid-structural interactions using high resolution upwind schemes

    Science.gov (United States)

    Hu, Zongjun

    An efficient and accurate solver is developed to simulate the non-linear fluid-structural interactions in turbomachinery flutter flows. A new low-diffusion E-CUSP scheme, the Zha CUSP scheme, is developed to improve the efficiency and accuracy of the inviscid flux computation. The 3D unsteady Navier-Stokes equations with the Baldwin-Lomax turbulence model are solved using the finite volume method with the dual-time stepping scheme. The linearized equations are solved with Gauss-Seidel line iterations. The parallel computation is implemented using the MPI protocol. The solver is validated with 2D cases for its turbulence modeling, parallel computation and unsteady calculation. The Zha CUSP scheme is validated with 2D cases, including a supersonic flat plate boundary layer, a transonic converging-diverging nozzle and a transonic inlet diffuser. The Zha CUSP2 scheme is tested with 3D cases, including a circular-to-rectangular nozzle, a subsonic compressor cascade and a transonic channel. The Zha CUSP schemes prove to be accurate, robust and efficient in these tests. The steady and unsteady separation flows in a 3D stationary cascade under high incidence and three inlet Mach numbers are calculated to study the steady-state separation flow patterns and their unsteady oscillation characteristics. Leading edge vortex shedding is the mechanism behind the unsteady characteristics of the high-incidence separated flows. The separation flow characteristics are affected by the inlet Mach number. The blade aeroelasticity of a linear cascade with forced oscillating blades is studied using parallel computation. A simplified two-passage cascade with periodic boundary conditions is first calculated under a medium frequency and a low incidence. The full-scale cascade with 9 blades and two end walls is then studied more extensively under three oscillation frequencies and two incidence angles. The end wall influence and the blade stability are studied and compared under different
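    For readers unfamiliar with the method, a generic dual-time stepping formulation of the kind referred to above (standard in the CFD literature; the record does not give the author's exact discretization) advances the solution Q in a pseudo-time τ within each physical time step Δt:

        \frac{\partial \mathbf{Q}}{\partial \tau}
        + \frac{3\mathbf{Q}^{n+1} - 4\mathbf{Q}^{n} + \mathbf{Q}^{n-1}}{2\,\Delta t}
        + \mathbf{R}(\mathbf{Q}^{n+1}) = 0

    where R is the discretized Navier-Stokes residual; the inner pseudo-time iterations (here via Gauss-Seidel line relaxation) are driven to convergence before advancing to the next physical step.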

  4. The NASA Carbon Monitoring System

    Science.gov (United States)

    Hurtt, G. C.

    2015-12-01

    Greenhouse gas emission inventories, forest carbon sequestration programs (e.g., Reducing Emissions from Deforestation and Forest Degradation (REDD and REDD+), cap-and-trade systems, self-reporting programs, and their associated monitoring, reporting and verification (MRV) frameworks depend upon data that are accurate, systematic, practical, and transparent. A sustained, observationally-driven carbon monitoring system using remote sensing data has the potential to significantly improve the relevant carbon cycle information base for the U.S. and world. Initiated in 2010, NASA's Carbon Monitoring System (CMS) project is prototyping and conducting pilot studies to evaluate technological approaches and methodologies to meet carbon monitoring and reporting requirements for multiple users and over multiple scales of interest. NASA's approach emphasizes exploitation of the satellite remote sensing resources, computational capabilities, scientific knowledge, airborne science capabilities, and end-to-end system expertise that are major strengths of the NASA Earth Science program. Through user engagement activities, the NASA CMS project is taking specific actions to be responsive to the needs of stakeholders working to improve carbon MRV frameworks. The first phase of NASA CMS projects focused on developing products for U.S. biomass/carbon stocks and global carbon fluxes, and on scoping studies to identify stakeholders and explore other potential carbon products. The second phase built upon these initial efforts, with a large expansion in prototyping activities across a diversity of systems, scales, and regions, including research focused on prototype MRV systems and utilization of COTS technologies. Priorities for the future include: 1) utilizing future satellite sensors, 2) prototyping with commercial off-the-shelf technology, 3) expanding the range of prototyping activities, 4) rigorous evaluation, uncertainty quantification, and error characterization, 5) stakeholder

  5. EBR-II Cover Gas Cleanup System upgrade distributed control and front end computer systems

    International Nuclear Information System (INIS)

    Carlson, R.B.

    1992-01-01

    The Experimental Breeder Reactor II (EBR-II) Cover Gas Cleanup System (CGCS) control system was upgraded in 1991 to improve control and provide a graphical operator interface. The upgrade consisted of a main control computer, a distributed control computer, a front end input/output computer, a main graphics interface terminal, and a remote graphics interface terminal. This paper briefly describes the Cover Gas Cleanup System and the overall control system; gives reasons behind the computer system structure; and then gives a detailed description of the distributed control computer, the front end computer, and how these computers interact with the main control computer. The descriptions cover both hardware and software

  6. CHEETAH: circuit-switched high-speed end-to-end transport architecture

    Science.gov (United States)

    Veeraraghavan, Malathi; Zheng, Xuan; Lee, Hyuk; Gardner, M.; Feng, Wuchun

    2003-10-01

    Leveraging the dominance of Ethernet in LANs and SONET/SDH in MANs and WANs, we propose a service called CHEETAH (Circuit-switched High-speed End-to-End Transport ArcHitecture). The service concept is to provide end hosts with high-speed, end-to-end circuit connectivity on a call-by-call shared basis, where a "circuit" consists of Ethernet segments at the ends that are mapped into Ethernet-over-SONET long-distance circuits. This paper focuses on the file-transfer application for such circuits. For this application, the CHEETAH service is proposed as an add-on to the primary Internet access service already in place for enterprise hosts. This allows an end host that is sending a file to first attempt setting up an end-to-end Ethernet/EoS circuit, and if rejected, fall back to the TCP/IP path. If the circuit setup is successful, the end host will enjoy a much shorter file-transfer delay than on the TCP/IP path. To determine the conditions under which an end host with access to the CHEETAH service should attempt circuit setup, we analyze mean file-transfer delays as a function of call blocking probability in the circuit-switched network, probability of packet loss in the IP network, round-trip times, link rates, and so on.
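    The trade-off analyzed above can be summarized with a simple expected-delay expression (notation assumed here for illustration; the paper's model is more detailed): with circuit blocking probability P_b, file size F, circuit rate R_c, and circuit setup time T_s,

        E[T] = (1 - P_b)\left(T_s + \frac{F}{R_c}\right) + P_b\, T_{\mathrm{TCP}}(F, p, \mathrm{RTT})

    where T_TCP is the fallback TCP/IP transfer time as a function of packet-loss probability p and round-trip time; attempting circuit setup pays off whenever E[T] is below T_TCP.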

  7. De Novo Ultrascale Atomistic Simulations On High-End Parallel Supercomputers

    Energy Technology Data Exchange (ETDEWEB)

    Nakano, A; Kalia, R K; Nomura, K; Sharma, A; Vashishta, P; Shimojo, F; van Duin, A; Goddard, III, W A; Biswas, R; Srivastava, D; Yang, L H

    2006-09-04

    We present a de novo hierarchical simulation framework for first-principles based predictive simulations of materials and their validation on high-end parallel supercomputers and geographically distributed clusters. In this framework, high-end chemically reactive and non-reactive molecular dynamics (MD) simulations explore a wide solution space to discover microscopic mechanisms that govern macroscopic material properties, into which highly accurate quantum mechanical (QM) simulations are embedded to validate the discovered mechanisms and quantify the uncertainty of the solution. The framework includes an embedded divide-and-conquer (EDC) algorithmic framework for the design of linear-scaling simulation algorithms with minimal bandwidth complexity and tight error control. The EDC framework also enables adaptive hierarchical simulation with automated model transitioning assisted by graph-based event tracking. A tunable hierarchical cellular decomposition parallelization framework then maps the O(N) EDC algorithms onto Petaflops computers, while achieving performance tunability through a hierarchy of parameterized cell data/computation structures, as well as its implementation using hybrid Grid remote procedure call + message passing + threads programming. High-end computing platforms such as IBM BlueGene/L, SGI Altix 3000 and the NSF TeraGrid provide an excellent test grounds for the framework. On these platforms, we have achieved unprecedented scales of quantum-mechanically accurate and well validated, chemically reactive atomistic simulations--1.06 billion-atom fast reactive force-field MD and 11.8 million-atom (1.04 trillion grid points) quantum-mechanical MD in the framework of the EDC density functional theory on adaptive multigrids--in addition to 134 billion-atom non-reactive space-time multiresolution MD, with the parallel efficiency as high as 0.998 on 65,536 dual-processor BlueGene/L nodes. We have also achieved an automated execution of hierarchical QM

  8. High-Power Hall Propulsion Development at NASA Glenn Research Center

    Science.gov (United States)

    Kamhawi, Hani; Manzella, David H.; Smith, Timothy D.; Schmidt, George R.

    2014-01-01

    The NASA Office of the Chief Technologist Game Changing Division is sponsoring the development and testing of enabling technologies to achieve efficient and reliable human space exploration. High-power solar electric propulsion has been proposed by NASA's Human Exploration Framework Team as an option to achieve these ambitious missions to near Earth objects. NASA Glenn Research Center (NASA Glenn) is leading the development of mission concepts for a solar electric propulsion Technical Demonstration Mission. The mission concepts are highlighted in this paper but are detailed in a companion paper. There are also multiple projects that are developing technologies to support a demonstration mission and are also extensible to NASA's goals of human space exploration. Specifically, the In-Space Propulsion technology development project at NASA Glenn has a number of tasks related to high-power Hall thrusters including performance evaluation of existing Hall thrusters; performing detailed internal discharge chamber, near-field, and far-field plasma measurements; performing detailed physics-based modeling with the NASA Jet Propulsion Laboratory's Hall2De code; performing thermal and structural modeling; and developing high-power efficient discharge modules for power processing. This paper summarizes the various technology development tasks and progress made to date

  9. Upgrading NASA/DOSE laser ranging system control computers

    Science.gov (United States)

    Ricklefs, Randall L.; Cheek, Jack; Seery, Paul J.; Emenheiser, Kenneth S.; Hanrahan, William P., III; Mcgarry, Jan F.

    1993-01-01

    Laser ranging systems now managed by the NASA Dynamics of the Solid Earth (DOSE) and operated by the Bendix Field Engineering Corporation, the University of Hawaii, and the University of Texas have produced a wealth of interdisciplinary scientific data over the last three decades. Despite upgrades to most of the ranging station subsystems, the control computers remain a mix of 1970's vintage minicomputers. These encompass a wide range of vendors, operating systems, and languages, making hardware and software support increasingly difficult. Current technology allows replacement of controller computers at a relatively low cost while maintaining excellent processing power and a friendly operating environment. The new controller systems are now being designed using IBM-PC-compatible 80486-based microcomputers; a real-time Unix operating system (LynxOS), X-windows/Motif, and serial interfaces have been chosen. This design supports minimizing short- and long-term costs by relying on proven standards for both hardware and software components. Currently, the project is in the design and prototyping stage with the first systems targeted for production in mid-1993.

  10. Evaluating Cloud Computing in the Proposed NASA DESDynI Ground Data System

    Science.gov (United States)

    Tran, John J.; Cinquini, Luca; Mattmann, Chris A.; Zimdars, Paul A.; Cuddy, David T.; Leung, Kon S.; Kwoun, Oh-Ig; Crichton, Dan; Freeborn, Dana

    2011-01-01

    The proposed NASA Deformation, Ecosystem Structure and Dynamics of Ice (DESDynI) mission would be a first-of-breed endeavor that would fundamentally change the paradigm by which Earth Science data systems at NASA are built. DESDynI is evaluating a distributed architecture where expert science nodes around the country all engage in some form of mission processing and data archiving. This is compared to the traditional NASA Earth Science missions where the science processing is typically centralized. What's more, DESDynI is poised to profoundly increase the amount of data collection and processing well into the 5 terabyte/day and tens of thousands of job range, both of which comprise a tremendous challenge to DESDynI's proposed distributed data system architecture. In this paper, we report on a set of architectural trade studies and benchmarks meant to inform the DESDynI mission and the broader community of the impacts of these unprecedented requirements. In particular, we evaluate the benefits of cloud computing and its integration with our existing NASA ground data system software called Apache Object Oriented Data Technology (OODT). The preliminary conclusions of our study suggest that the use of the cloud and OODT together synergistically form an effective, efficient and extensible combination that could meet the challenges of NASA science missions requiring DESDynI-like data collection and processing volumes at reduced costs.

  11. X-ray computed tomography comparison of individual and parallel assembled commercial lithium iron phosphate batteries at end of life after high rate cycling

    Science.gov (United States)

    Carter, Rachel; Huhman, Brett; Love, Corey T.; Zenyuk, Iryna V.

    2018-03-01

    X-ray computed tomography (X-ray CT) across multiple length scales is utilized for the first time to investigate the physical abuse of high C-rate pulsed discharge on cells wired individually and in parallel. Manufactured lithium iron phosphate cells boasting high rate capability were pulse-power tested in both wiring conditions with high discharge currents of 10C for a high number of cycles (up to 1200) until end of life. Degradation that is not apparent from conventional state-of-health (SOH) monitoring methods is diagnosed using CT by rendering the interior current collector without harm or alteration to the active materials. Correlation of CT observations to the electrochemical pulse data from the parallel-wired cells reveals the risk of parallel wiring during high C-rate pulse discharge.

  12. Data, Meet Compute: NASA's Cumulus Ingest Architecture

    Science.gov (United States)

    Quinn, Patrick

    2018-01-01

    NASA's Earth Observing System Data and Information System (EOSDIS) houses nearly 30 PB of critical Earth science data and, with upcoming missions, is expected to balloon to between 200 PB and 300 PB over the next seven years. In addition to the massive increase in data collected, researchers and application developers want more and faster access - enabling complex visualizations, long time-series analysis, and cross-dataset research without needing to copy and manage massive amounts of data locally. NASA has looked to the cloud to address these needs, building its Cumulus system to manage the ingest of diverse data in a wide variety of formats into the cloud. In this talk, we look at what Cumulus is from a high level and then take a deep dive into how it manages the complexity and versioning associated with multiple AWS Lambda and ECS microservices communicating through AWS Step Functions across several disparate installations.
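    As an illustration of the orchestration pattern described (an assumed, simplified workflow, not Cumulus's actual state machine), an AWS Step Functions definition chaining two hypothetical ingest tasks can be expressed in the Amazon States Language as a Python dict:

        import json

        # Illustrative only: a minimal Amazon States Language definition chaining
        # two hypothetical Lambda ingest tasks.
        ingest_workflow = {
            "Comment": "Sketch of a granule ingest workflow orchestrating Lambda tasks",
            "StartAt": "SyncGranule",
            "States": {
                "SyncGranule": {
                    "Type": "Task",
                    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:SyncGranule",  # placeholder ARN
                    "Next": "PublishGranule",
                },
                "PublishGranule": {
                    "Type": "Task",
                    "Resource": "arn:aws:lambda:us-east-1:123456789012:function:PublishGranule",  # placeholder ARN
                    "End": True,
                },
            },
        }

        print(json.dumps(ingest_workflow, indent=2))  # JSON from which a state machine could be created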

  13. Internet end-to-end performance monitoring for the High Energy Nuclear and Particle Physics community

    International Nuclear Information System (INIS)

    Matthews, W.

    2000-01-01

    Modern High Energy Nuclear and Particle Physics (HENP) experiments at Laboratories around the world present a significant challenge to wide area networks. Petabytes (10^15) or exabytes (10^18) of data will be generated during the lifetime of the experiment. Much of this data will be distributed via the Internet to the experiment's collaborators at Universities and Institutes throughout the world for analysis. In order to assess the feasibility of the computing goals of these and future experiments, the HENP networking community is actively monitoring performance across a large part of the Internet used by its collaborators. Since 1995, the pingER project has been collecting data on ping packet loss and round trip times. In January 2000, there are 28 monitoring sites in 15 countries gathering data on over 2,000 end-to-end pairs. HENP labs such as SLAC, Fermi Lab and CERN are using Advanced Network's Surveyor project and monitoring performance from one-way delay of UDP packets. More recently several HENP sites have become involved with NLANR's active measurement program (AMP). In addition SLAC and CERN are part of the RIPE test-traffic project and SLAC is home for a NIMI machine. The large end-to-end performance monitoring infrastructure allows the HENP networking community to chart long term trends and closely examine short term glitches across a wide range of networks and connections. The different methodologies provide opportunities to compare results based on different protocols and statistical samples. Understanding agreement and discrepancies between results provides particular insight into the nature of the network. This paper will highlight the practical side of monitoring by reviewing the special needs of High Energy Nuclear and Particle Physics experiments and provide an overview of the experience of measuring performance across a large number of interconnected networks throughout the world with various methodologies. In particular, results from each project
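    A minimal pingER-style probe can be sketched in a few lines of Python (hostnames are examples; output parsing assumes a Linux iputils ping, and the real project aggregates far more statistics across its monitored end-to-end pairs):

        import re
        import subprocess

        def probe(host, count=10):
            # One measurement: packet loss and average round-trip time to a remote host.
            out = subprocess.run(["ping", "-c", str(count), host],
                                 capture_output=True, text=True).stdout
            loss = re.search(r"([\d.]+)% packet loss", out)
            rtt = re.search(r"= ([\d.]+)/([\d.]+)/([\d.]+)/", out)  # min/avg/max/mdev
            return {"host": host,
                    "loss_pct": float(loss.group(1)) if loss else None,
                    "rtt_avg_ms": float(rtt.group(2)) if rtt else None}

        # Example monitored pairs (placeholder hostnames for collaborating sites).
        for site in ["www.slac.stanford.edu", "www.cern.ch"]:
            print(probe(site))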

  14. Internet end-to-end performance monitoring for the High Energy Nuclear and Particle Physics community

    Energy Technology Data Exchange (ETDEWEB)

    Matthews, W.

    2000-02-22

    Modern High Energy Nuclear and Particle Physics (HENP) experiments at Laboratories around the world present a significant challenge to wide area networks. Petabytes (10^15) or exabytes (10^18) of data will be generated during the lifetime of the experiment. Much of this data will be distributed via the Internet to the experiment's collaborators at Universities and Institutes throughout the world for analysis. In order to assess the feasibility of the computing goals of these and future experiments, the HENP networking community is actively monitoring performance across a large part of the Internet used by its collaborators. Since 1995, the pingER project has been collecting data on ping packet loss and round trip times. In January 2000, there are 28 monitoring sites in 15 countries gathering data on over 2,000 end-to-end pairs. HENP labs such as SLAC, Fermi Lab and CERN are using Advanced Network's Surveyor project and monitoring performance from one-way delay of UDP packets. More recently several HENP sites have become involved with NLANR's active measurement program (AMP). In addition SLAC and CERN are part of the RIPE test-traffic project and SLAC is home for a NIMI machine. The large end-to-end performance monitoring infrastructure allows the HENP networking community to chart long term trends and closely examine short term glitches across a wide range of networks and connections. The different methodologies provide opportunities to compare results based on different protocols and statistical samples. Understanding agreement and discrepancies between results provides particular insight into the nature of the network. This paper will highlight the practical side of monitoring by reviewing the special needs of High Energy Nuclear and Particle Physics experiments and provide an overview of the experience of measuring performance across a large number of interconnected networks throughout the world with various methodologies. In particular, results

  15. The NASA Integrated Information Technology Architecture

    Science.gov (United States)

    Baldridge, Tim

    1997-01-01

    This document defines an Information Technology Architecture for the National Aeronautics and Space Administration (NASA), where Information Technology (IT) refers to the hardware, software, standards, protocols and processes that enable the creation, manipulation, storage, organization and sharing of information. An architecture provides an itemization and definition of these IT structures, a view of the relationship of the structures to each other and, most importantly, an accessible view of the whole. It is a fundamental assumption of this document that a useful, interoperable and affordable IT environment is key to the execution of the core NASA scientific and project competencies and business practices. This Architecture represents the highest level system design and guideline for NASA IT related activities and has been created on the authority of the NASA Chief Information Officer (CIO) and will be maintained under the auspices of that office. It addresses all aspects of general purpose, research, administrative and scientific computing and networking throughout the NASA Agency and is applicable to all NASA administrative offices, projects, field centers and remote sites. Through the establishment of five Objectives and six Principles this Architecture provides a blueprint for all NASA IT service providers: civil service, contractor and outsourcer. The most significant of the Objectives and Principles are the commitment to customer-driven IT implementations and the commitment to a simpler, cost-efficient, standards-based, modular IT infrastructure. In order to ensure that the Architecture is presented and defined in the context of the mission, project and business goals of NASA, this Architecture consists of four layers in which each subsequent layer builds on the previous layer. They are: 1) the Business Architecture: the operational functions of the business, or Enterprise, 2) the Systems Architecture: the specific Enterprise activities within the context

  16. Automated Test for NASA CFS

    Science.gov (United States)

    McComas, David C.; Strege, Susanne L.; Carpenter, Paul B.; Hartman, Randy

    2015-01-01

    The core Flight System (cFS) is a flight software (FSW) product line developed by the Flight Software Systems Branch (FSSB) at NASA's Goddard Space Flight Center (GSFC). The cFS uses compile-time configuration parameters to implement variable requirements to enable portability across embedded computing platforms and to implement different end-user functional needs. The verification and validation of these requirements is proving to be a significant challenge. This paper describes the challenges facing the cFS and the results of a pilot effort to apply EXB Solution's testing approach to the cFS applications.

  17. EXPERIENCE WITH FPGA-BASED PROCESSOR CORE AS FRONT-END COMPUTER

    International Nuclear Information System (INIS)

    HOFF, L.T.

    2005-01-01

    The RHIC control system architecture follows the familiar ''standard model''. LINUX workstations are used as operator consoles. Front-end computers are distributed around the accelerator, close to equipment being controlled or monitored. These computers are generally based on VMEbus CPU modules running the VxWorks operating system. I/O is typically performed via the VMEbus, or via PMC daughter cards (via an internal PCI bus), or via on-board I/O interfaces (Ethernet or serial). Advances in FPGA size and sophistication now permit running virtual processor ''cores'' within the FPGA logic, including ''cores'' with advanced features such as memory management. Such systems offer certain advantages over traditional VMEbus Front-end computers. Advantages include tighter coupling with FPGA logic, and therefore higher I/O bandwidth, and flexibility in packaging, possibly resulting in a lower noise environment and/or lower cost. This paper presents the experience acquired while porting the RHIC control system to a PowerPC 405 core within a Xilinx FPGA for use in low-level RF control

  18. High-end encroachment patterns of new products

    NARCIS (Netherlands)

    Rhee, van der B.; Schmidt, G.; Orden, van J.

    2012-01-01

    Previous research describes two key ways in which a new product may encroach on an existing market. In high-end encroachment, the new product first sells to high-end customers and then diffuses down-market; in low-end encroachment, the new product enters at the low end and encroaches up-market. This

  19. Practical End-to-End Performance Testing Tool for High Speed 3G-Based Networks

    Science.gov (United States)

    Shinbo, Hiroyuki; Tagami, Atsushi; Ano, Shigehiro; Hasegawa, Toru; Suzuki, Kenji

    High-speed IP communication is a killer application for 3rd generation (3G) mobile systems. Thus, 3G network operators should perform extensive tests to check whether the expected end-to-end performance is provided to customers in various environments. An important objective of such tests is to check whether network nodes fulfill requirements on packet-processing durations, because long processing durations cause performance degradation. This requires testers (persons who do tests) to know precisely how long a packet is held by the various network nodes. Without any tool's help, this task is time-consuming and error-prone. Thus, we propose a multi-point packet header analysis tool which extracts and records packet headers with synchronized timestamps at multiple observation points. Such recorded packet headers enable testers to calculate these holding durations. The notable feature of this tool is that it is implemented on off-the-shelf hardware platforms, i.e., laptop personal computers. The key challenges of the implementation are precise clock synchronization without any special hardware and a sophisticated header extraction algorithm without any packet drops.
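    The holding-duration calculation the tool enables can be sketched as follows (Python; the CSV log format and file names are assumptions for illustration, since the actual tool records raw packet headers with synchronized timestamps):

        import csv

        def load_log(path):
            # One observation point's log of (packet_id, timestamp_in_seconds) rows.
            with open(path) as f:
                return {pid: float(ts) for pid, ts in csv.reader(f)}

        def holding_durations(upstream_log, downstream_log):
            # Per-packet holding time of the node between the two observation points.
            before, after = load_log(upstream_log), load_log(downstream_log)
            return {pid: after[pid] - before[pid] for pid in before.keys() & after.keys()}

        durations = holding_durations("upstream.csv", "downstream.csv")  # file names assumed
        print(max(durations.values()))  # worst-case packet-processing duration at this node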

  20. High performance real-time flight simulation at NASA Langley

    Science.gov (United States)

    Cleveland, Jeff I., II

    1994-01-01

    In order to meet the stringent time-critical requirements for real-time man-in-the-loop flight simulation, computer processing operations must be deterministic and be completed in as short a time as possible. This includes simulation mathematical model computation and data input/output to the simulators. In 1986, in response to increased demands for flight simulation performance, personnel at NASA's Langley Research Center (LaRC), working with the contractor, developed extensions to a standard input/output system to provide for high-bandwidth, low-latency data acquisition and distribution. The Computer Automated Measurement and Control technology (IEEE standard 595) was extended to meet the performance requirements for real-time simulation. This technology extension increased the effective bandwidth by a factor of ten and increased the performance of modules necessary for simulator communications. This technology is being used by more than 80 leading technological developers in the United States, Canada, and Europe. Included among the commercial applications of this technology are nuclear process control, power grid analysis, process monitoring, real-time simulation, and radar data acquisition. Personnel at LaRC have completed the development of the use of supercomputers for simulation mathematical model computation to support real-time flight simulation. This includes the development of a real-time operating system and the development of specialized software and hardware for the CAMAC simulator network. This work, coupled with the use of an open systems software architecture, has advanced the state of the art in real-time flight simulation. The data acquisition technology innovation and experience with recent developments in this technology are described.

  1. Brain-computer interface controlled gaming: evaluation of usability by severely motor restricted end-users.

    Science.gov (United States)

    Holz, Elisa Mira; Höhne, Johannes; Staiger-Sälzer, Pit; Tangermann, Michael; Kübler, Andrea

    2013-10-01

    Connect-Four, a new sensorimotor rhythm (SMR) based brain-computer interface (BCI) gaming application, was evaluated by four severely motor restricted end-users; two were in the locked-in state and had unreliable eye-movement. Following the user-centred approach, usability of the BCI prototype was evaluated in terms of effectiveness (accuracy), efficiency (information transfer rate (ITR) and subjective workload) and users' satisfaction. Online performance varied strongly across users and sessions (median accuracy (%) of end-users: A=.65; B=.60; C=.47; D=.77). Our results thus yielded low to medium effectiveness in three end-users and high effectiveness in one end-user. Consequently, ITR was low (0.05-1.44bits/min). Only two end-users were able to play the game in free-mode. Total workload was moderate but varied strongly across sessions. Main sources of workload were mental and temporal demand. Furthermore, frustration contributed to the subjective workload of two end-users. Nevertheless, most end-users accepted the BCI application well and rated satisfaction medium to high. Sources for dissatisfaction were (1) electrode gel and cap, (2) low effectiveness, (3) time-consuming adjustment and (4) not easy-to-use BCI equipment. All four end-users indicated ease of use as being one of the most important aspect of BCI. Effectiveness and efficiency are lower as compared to applications using the event-related potential as input channel. Nevertheless, the SMR-BCI application was satisfactorily accepted by the end-users and two of four could imagine using the BCI application in their daily life. Thus, despite moderate effectiveness and efficiency BCIs might be an option when controlling an application for entertainment. Copyright © 2013 Elsevier B.V. All rights reserved.

  2. NASA Enterprise Managed Cloud Computing (EMCC): Delivering an Initial Operating Capability (IOC) for NASA use of Commercial Infrastructure-as-a-Service (IaaS)

    Science.gov (United States)

    O'Brien, Raymond

    2017-01-01

    In 2016, Ames supported the NASA CIO in delivering an initial operating capability for Agency use of commercial cloud computing. This presentation provides an overview of the project, the services approach followed, and the major components of the capability that was delivered. The presentation is being given at the request of Amazon Web Services (AWS) to a contingent representing the Brazilian Federal Government and Defense Organization that is interested in the use of AWS. NASA is currently a customer of AWS and delivered the Initial Operating Capability using AWS as its first commercial cloud provider. The IOC, however, was designed to also support other cloud providers in the future.

  3. NASA's OCA Mirroring System: An Application of Multiagent Systems in Mission Control

    Science.gov (United States)

    Sierhuis, Maarten; Clancey, William J.; vanHoof, Ron J. J.; Seah, Chin H.; Scott, Michael S.; Nado, Robert A.; Blumenberg, Susan F.; Shafto, Michael G.; Anderson, Brian L.; Bruins, Anthony C.

    2009-01-01

    Orbital Communications Adaptor (OCA) Flight Controllers, in NASA's International Space Station Mission Control Center, use different computer systems to uplink, downlink, mirror, archive, and deliver files to and from the International Space Station (ISS) in real time. The OCA Mirroring System (OCAMS) is a multiagent software system (MAS) that is operational in NASA's Mission Control Center. This paper presents OCAMS and its workings in an operational setting where flight controllers rely on the system 24x7. We also discuss the return on investment, based on a simulation baseline, six months of 24x7 operations at NASA Johnson Space Center in Houston, Texas, and a projection of future capabilities. This paper ends with a discussion of the value of MAS and future planned functionality and capabilities.

  4. NEXUS/NASCAD- NASA ENGINEERING EXTENDIBLE UNIFIED SOFTWARE SYSTEM WITH NASA COMPUTER AIDED DESIGN

    Science.gov (United States)

    Purves, L. R.

    1994-01-01

    NEXUS, the NASA Engineering Extendible Unified Software system, is a research set of computer programs designed to support the full sequence of activities encountered in NASA engineering projects. This sequence spans preliminary design, design analysis, detailed design, manufacturing, assembly, and testing. NEXUS primarily addresses the process of prototype engineering, the task of getting a single or small number of copies of a product to work. Prototype engineering is a critical element of large scale industrial production. The time and cost needed to introduce a new product are heavily dependent on two factors: 1) how efficiently required product prototypes can be developed, and 2) how efficiently required production facilities, also a prototype engineering development, can be completed. NEXUS extendibility and unification are achieved by organizing the system as an arbitrarily large set of computer programs accessed in a common manner through a standard user interface. The NEXUS interface is a multipurpose interactive graphics interface called NASCAD (NASA Computer Aided Design). NASCAD can be used to build and display two and three-dimensional geometries, to annotate models with dimension lines, text strings, etc., and to store and retrieve design related information such as names, masses, and power requirements of components used in the design. From the user's standpoint, NASCAD allows the construction, viewing, modification, and other processing of data structures that represent the design. Four basic types of data structures are supported by NASCAD: 1) three-dimensional geometric models of the object being designed, 2) alphanumeric arrays to hold data ranging from numeric scalars to multidimensional arrays of numbers or characters, 3) tabular data sets that provide a relational data base capability, and 4) procedure definitions to combine groups of system commands or other user procedures to create more powerful functions. NASCAD has extensive abilities to

  5. NASA Earth Observation Systems and Applications for Health: Moving from Research to Operational End Users

    Science.gov (United States)

    Haynes, J.; Estes, S. M.

    2017-12-01

    Health providers and researchers need environmental data to study and understand the geographic, environmental, and meteorological differences in disease. Satellite remote sensing of the environment offers a unique vantage point that can fill in the gaps of environmental, spatial, and temporal data for tracking disease. This presentation will demonstrate the efforts of NASA's applied science programs to transition from research to operations to benefit society. Satellite Earth observations present a unique vantage point on the Earth's environment from space, which offers a wealth of health applications for the imaginative investigator. The presentation is directly related to Earth observing systems and global health surveillance and will present research results from remote sensing observations of the Earth's environment and their health applications, which can contribute to health research. As part of NASA's approach and methodology, Earth observation systems and applications-for-health models have been used to bridge the gaps in environmental, spatial, and temporal data for tracking disease. The presentation will also provide a venue for discussing the results of both research and practice in using satellite Earth observations to study weather and its role in health research, and the transition to operational end users.

  6. NASA strategic plan

    Science.gov (United States)

    1994-01-01

    The NASA Strategic Plan is a living document. It provides far-reaching goals and objectives to create stability for NASA's efforts. The Plan presents NASA's top-level strategy: it articulates what NASA does and for whom; it differentiates between ends and means; it states where NASA is going and what NASA intends to do to get there. This Plan is not a budget document, nor does it present priorities for current or future programs. Rather, it establishes a framework for shaping NASA's activities and developing a balanced set of priorities across the Agency. Such priorities will then be reflected in the NASA budget. The document includes vision, mission, and goals; external environment; conceptual framework; strategic enterprises (Mission to Planet Earth, aeronautics, human exploration and development of space, scientific research, space technology, and synergy); strategic functions (transportation to space, space communications, human resources, and physical resources); values and operating principles; implementing strategy; and senior management team concurrence.

  7. NASA's Earth science flight program status

    Science.gov (United States)

    Neeck, Steven P.; Volz, Stephen M.

    2010-10-01

    NASA's strategic goal to "advance scientific understanding of the changing Earth system to meet societal needs" continues the agency's legacy of expanding human knowledge of the Earth through space activities, as mandated by the National Aeronautics and Space Act of 1958. Over the past 50 years, NASA has been the world leader in developing space-based Earth observing systems and capabilities that have fundamentally changed our view of our planet and have defined Earth system science. The U.S. National Research Council report "Earth Observations from Space: The First 50 Years of Scientific Achievements" published in 2008 by the National Academy of Sciences articulates those key achievements and the evolution of the space observing capabilities, looking forward to growing potential to address Earth science questions and enable an abundance of practical applications. NASA's Earth science program is an end-to-end one that encompasses the development of observational techniques and the instrument technology needed to implement them. This includes laboratory testing and demonstration from surface, airborne, or space-based platforms; research to increase basic process knowledge; incorporation of results into complex computational models to more fully characterize the present state and future evolution of the Earth system; and development of partnerships with national and international organizations that can use the generated information in environmental forecasting and in policy, business, and management decisions. Currently, NASA's Earth Science Division (ESD) has 14 operating Earth science space missions with 6 in development and 18 under study or in technology risk reduction. Two Tier 2 Decadal Survey climate-focused missions, Active Sensing of CO2 Emissions over Nights, Days and Seasons (ASCENDS) and Surface Water and Ocean Topography (SWOT), have been identified in conjunction with the U.S. Global Change Research Program and initiated for launch in the 2019

  8. Computer-based communication in support of scientific and technical work. [conferences on management information systems used by scientists of NASA programs

    Science.gov (United States)

    Vallee, J.; Wilson, T.

    1976-01-01

    Results are reported of the first experiments for a computer conference management information system at the National Aeronautics and Space Administration. Between August 1975 and March 1976, two NASA projects with geographically separated participants (NASA scientists) used the PLANET computer conferencing system for portions of their work. The first project was a technology assessment of future transportation systems. The second project involved experiments with the Communication Technology Satellite. As part of this project, pre- and postlaunch operations were discussed in a computer conference. These conferences also provided the context for an analysis of the cost of computer conferencing. In particular, six cost components were identified: (1) terminal equipment, (2) communication with a network port, (3) network connection, (4) computer utilization, (5) data storage and (6) administrative overhead.
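
    As a toy illustration of how the six identified cost components combine into a total, the following sketch uses invented monthly figures (they are not values from the study):

        # Hypothetical monthly figures illustrating the six cost components
        # of computer conferencing identified in the abstract above.
        costs = {
            "terminal_equipment": 150.0,
            "communication_with_network_port": 40.0,
            "network_connection": 60.0,
            "computer_utilization": 200.0,
            "data_storage": 25.0,
            "administrative_overhead": 80.0,
        }
        total = sum(costs.values())
        print(f"Total monthly conferencing cost: ${total:.2f}")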

  9. Distributed management of scientific projects - An analysis of two computer-conferencing experiments at NASA

    Science.gov (United States)

    Vallee, J.; Gibbs, B.

    1976-01-01

    Between August 1975 and March 1976, two NASA projects with geographically separated participants used a computer-conferencing system developed by the Institute for the Future for portions of their work. Monthly usage statistics for the system were collected in order to examine the group and individual participation figures for all conferences. The conference transcripts were analysed to derive observations about the use of the medium. In addition to the results of these analyses, the attitudes of users and the major components of the costs of computer conferencing are discussed.

  10. Modeling Guru: Knowledge Base for NASA Modelers

    Science.gov (United States)

    Seablom, M. S.; Wojcik, G. S.; van Aartsen, B. H.

    2009-05-01

    Modeling Guru is an on-line knowledge-sharing resource for anyone involved with or interested in NASA's scientific models or High End Computing (HEC) systems. Developed and maintained by NASA's Software Integration and Visualization Office (SIVO) and the NASA Center for Computational Sciences (NCCS), Modeling Guru's combined forums and knowledge base for research and collaboration is becoming a repository for the accumulated expertise of NASA's scientific modeling and HEC communities. All NASA modelers and associates are encouraged to participate and provide knowledge about the models and systems so that other users may benefit from their experience. Modeling Guru is divided into a hierarchy of communities, each with its own set of forums and knowledge base documents. Current modeling communities include those for space science, land and atmospheric dynamics, atmospheric chemistry, and oceanography. In addition, there are communities focused on NCCS systems, HEC tools and libraries, and programming and scripting languages. Anyone may view most of the content on Modeling Guru (available at http://modelingguru.nasa.gov/), but you must log in to post messages and subscribe to community postings. The site offers a full range of "Web 2.0" features, including discussion forums, "wiki" document generation, document uploading, RSS feeds, search tools, blogs, email notification, and "breadcrumb" links. A discussion (a.k.a. forum "thread") is used to post comments, solicit feedback, or ask questions. If marked as a question, SIVO will monitor the thread, and normally respond within a day. Discussions can include embedded images, tables, and formatting through the use of the Rich Text Editor. Also, the user can add "Tags" to their thread to facilitate later searches. The "knowledge base" comprises documents that are used to capture and share expertise with others. The default "wiki" document lets users edit within the browser so others can easily collaborate on the

  11. A PROFICIENT MODEL FOR HIGH END SECURITY IN CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    R. Bala Chandar

    2014-01-01

    Full Text Available Cloud computing is an inspiring technology due to its abilities like ensuring scalable services, reducing the anxiety of local hardware and software management associated with computing while increasing flexibility and scalability. A key trait of the cloud services is remote processing of data. Even though this technology has offered a lot of services, there are a few concerns, such as misbehavior of server-side stored data, the data owner's loss of control over outsourced data, and the lack of access control over outsourced data as desired by the data owner. To handle these issues, we propose a new model to ensure data correctness for assurance of stored data, distributed accountability for authentication, and efficient access control of outsourced data for authorization. This model strengthens the correctness of data and helps to achieve cloud data integrity, supports the data owner in having control over their own data through tracking, and improves the access control of outsourced data.
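
    As a minimal sketch of the integrity-verification idea behind "data correctness" for outsourced storage (this is not the paper's model, which also covers distributed accountability and access control), a data owner can keep a digest locally and verify what the cloud later returns:

        # Minimal integrity check: the owner records a digest before outsourcing
        # and verifies the returned copy against it after retrieval.
        import hashlib

        def digest(data: bytes) -> str:
            return hashlib.sha256(data).hexdigest()

        original = b"design document v1"
        stored_digest = digest(original)          # kept by the data owner

        retrieved = original                      # in practice, fetched back from the cloud
        assert digest(retrieved) == stored_digest, "stored data was modified"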

  12. GWDC Expands High-End Market Share

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    It is a decision of great significance for GWDC to expand its high-end market share in order to realize the transformation of its development strategy and improve its development quality. As an important step in GWDC's exploration of the high-end market, the Oman PDO Project marks the first time that a Chinese petroleum engineering service team has cooperated with transnational petroleum corporations ranked among the top three in the world.

  13. A NASA high-power space-based laser research and applications program

    Science.gov (United States)

    Deyoung, R. J.; Walberg, G. D.; Conway, E. J.; Jones, L. W.

    1983-01-01

    Applications of high-power lasers that might fulfill the needs of NASA missions are discussed, and the technology characteristics of laser research programs are outlined. The status of the NASA programs on lasers, laser receivers, and laser propulsion is discussed, and recommendations are presented for a proposed expanded NASA program in these areas. Critical program elements are discussed in detail.

  14. A multitasking, multisinked, multiprocessor data acquisition front end

    International Nuclear Information System (INIS)

    Fox, R.; Au, R.; Molen, A.V.

    1989-01-01

    The authors have developed a generalized data acquisition front-end system which is based on MC68020 processors running a commercial real-time kernel (pSOS), and implemented primarily in a high-level language (C). This system has been attached to the back-end on-line computing system at NSCL via our high-performance Ethernet protocol. Data may be simultaneously sent to any number of back-end systems. Fixed fraction sampling along links to back-end computing is also supported. A nonprocedural program generator simplifies the development of experiment-specific code
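
    As a rough, language-level illustration of two ideas in the abstract, fanning event data out to any number of back-end sinks and sending only a fixed fraction of events down a given link, here is a short Python sketch (the original system was written in C for embedded processors; the names and numbers below are invented):

        # Fan-out to multiple sinks with per-link fixed-fraction sampling.
        import random

        class Sink:
            def __init__(self, name, fraction=1.0):
                self.name, self.fraction, self.received = name, fraction, 0

            def maybe_send(self, event):
                if random.random() < self.fraction:   # fixed-fraction sampling
                    self.received += 1                # stand-in for a network send

        sinks = [Sink("online-analysis", 1.0), Sink("monitoring", 0.1)]
        for event_number in range(10_000):
            event = event_number.to_bytes(4, "big")
            for sink in sinks:
                sink.maybe_send(event)

        for sink in sinks:
            print(sink.name, sink.received)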

  15. Evolving Storage and Cyber Infrastructure at the NASA Center for Climate Simulation

    Science.gov (United States)

    Salmon, Ellen; Duffy, Daniel; Spear, Carrie; Sinno, Scott; Vaughan, Garrison; Bowen, Michael

    2018-01-01

    This talk will describe recent developments at the NASA Center for Climate Simulation, which is funded by NASA's Science Mission Directorate, and supports the specialized data storage and computational needs of weather, ocean, and climate researchers, as well as astrophysicists, heliophysicists, and planetary scientists. To meet requirements for higher-resolution, higher-fidelity simulations, the NCCS augments its High Performance Computing (HPC) and storage/retrieval environment. As the petabytes of model and observational data grow, the NCCS is broadening its data services offerings and deploying and expanding virtualization resources for high-performance analytics.

  16. High-Fidelity Computational Aerodynamics of the Elytron 4S UAV

    Science.gov (United States)

    Ventura Diaz, Patricia; Yoon, Seokkwan; Theodore, Colin R.

    2018-01-01

    High-fidelity Computational Fluid Dynamics (CFD) simulations have been carried out for the Elytron 4S Unmanned Aerial Vehicle (UAV), also known as the converticopter "proto12". It is the scaled wind tunnel model of the Elytron 4S, an Urban Air Mobility (UAM) concept, a tilt-wing, box-wing rotorcraft capable of Vertical Take-Off and Landing (VTOL). The three-dimensional unsteady Navier-Stokes equations are solved on overset grids employing high-order accurate schemes, dual-time stepping, and a hybrid turbulence model using NASA's CFD code OVERFLOW. The Elytron 4S UAV has been simulated in airplane mode and in helicopter mode.

  17. Scientific visualization in computational aerodynamics at NASA Ames Research Center

    Science.gov (United States)

    Bancroft, Gordon V.; Plessel, Todd; Merritt, Fergus; Walatka, Pamela P.; Watson, Val

    1989-01-01

    The visualization methods used in computational fluid dynamics research at the NASA-Ames Numerical Aerodynamic Simulation facility are examined, including postprocessing, tracking, and steering methods. The visualization requirements of the facility's three-dimensional graphical workstation are outlined and the types of hardware and software used to meet these requirements are discussed. The main features of the facility's current and next-generation workstations are listed. Emphasis is given to postprocessing techniques, such as dynamic interactive viewing on the workstation and recording and playback on videodisk, tape, and 16-mm film. Postprocessing software packages are described, including a three-dimensional plotter, a surface modeler, a graphical animation system, a flow analysis software toolkit, and a real-time interactive particle-tracer.

  18. NASA's Astrophysics Data Archives

    Science.gov (United States)

    Hasan, H.; Hanisch, R.; Bredekamp, J.

    2000-09-01

    The NASA Office of Space Science has established a series of archival centers where science data acquired through its space science missions is deposited. The availability of high-quality data to the general public through these open archives enables the maximization of science return of the flight missions. The Astrophysics Data Centers Coordinating Council, an informal collaboration of archival centers, coordinates data from five archival centers distinguished primarily by the wavelength range of the data deposited there. Data are available in FITS format. An overview of NASA's data centers and services is presented in this paper. A standard front-end modifier called 'Astrobrowse' is described. Other catalog browsers and tools include WISARD and AMASE supported by the National Space Science Data Center, as well as ISAIA, a follow-on to Astrobrowse.
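
    Since the archives serve data in FITS format, a minimal example of reading such a file is shown below (it requires the astropy package; the file name is a placeholder, not an actual archive product):

        # Read a FITS file: list its header-data units and inspect metadata.
        from astropy.io import fits

        with fits.open("example_observation.fits") as hdul:
            hdul.info()                        # summary of the header-data units
            header = hdul[0].header            # primary header: keyword/value metadata
            print(header.get("TELESCOP", "unknown telescope"))
            if len(hdul) > 1:
                data = hdul[1].data            # first extension, e.g. an image or table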

  19. THE NASA AMES POLYCYCLIC AROMATIC HYDROCARBON INFRARED SPECTROSCOPIC DATABASE: THE COMPUTED SPECTRA

    International Nuclear Information System (INIS)

    Bauschlicher, C. W.; Ricca, A.; Boersma, C.; Mattioda, A. L.; Cami, J.; Peeters, E.; Allamandola, L. J.; Sanchez de Armas, F.; Puerta Saborido, G.; Hudgins, D. M.

    2010-01-01

    The astronomical emission features, formerly known as the unidentified infrared bands, are now commonly ascribed to polycyclic aromatic hydrocarbons (PAHs). The laboratory experiments and computational modeling done at the NASA Ames Research Center to create a collection of PAH IR spectra relevant to test and refine the PAH hypothesis have been assembled into a spectroscopic database. This database now contains over 800 PAH spectra spanning 2-2000 μm (5000-5 cm⁻¹). These data are now available on the World Wide Web at www.astrochem.org/pahdb. This paper presents an overview of the computational spectra in the database and the tools developed to analyze and interpret astronomical spectra using the database. A description of the online and offline user tools available on the Web site is also presented.

  20. Integrating thematic web portal capabilities into the NASA Earthdata Web Infrastructure

    Science.gov (United States)

    Wong, M. M.; McLaughlin, B. D.; Huang, T.; Baynes, K.

    2015-12-01

    The National Aeronautics and Space Administration (NASA) acquires and distributes an abundance of Earth science data on a daily basis to a diverse user community worldwide. To assist the scientific community and general public in achieving a greater understanding of the interdisciplinary nature of Earth science and of key environmental and climate change topics, the NASA Earthdata web infrastructure is integrating new methods of presenting and providing access to Earth science information, data, research and results. This poster will present the process of integrating thematic web portal capabilities into the NASA Earthdata web infrastructure, with examples from the Sea Level Change Portal. The Sea Level Change Portal will be a source of current NASA research, data and information regarding sea level change. The portal will provide sea level change information through articles, graphics, videos and animations, an interactive tool to view and access sea level change data and a dashboard showing sea level change indicators. Earthdata is a part of the Earth Observing System Data and Information System (EOSDIS) project. EOSDIS is a key core capability in NASA's Earth Science Data Systems Program. It provides end-to-end capabilities for managing NASA's Earth science data from various sources - satellites, aircraft, field measurements, and various other programs. It is comprised of twelve Distributed Active Archive Centers (DAACs), Science Computing Facilities (SCFs), data discovery and service access client (Reverb and Earthdata Search), dataset directory (Global Change Master Directory - GCMD), near real-time data (Land Atmosphere Near real-time Capability for EOS - LANCE), Worldview (an imagery visualization interface), Global Imagery Browse Services, the Earthdata Code Collaborative and a host of other discipline specific data discovery, data access, data subsetting and visualization tools.

  1. Export Controls: Implementation of the 1998 Legislative Mandate for High Performance Computers

    National Research Council Canada - National Science Library

    1999-01-01

    We found that most of the 938 proposed exports of high performance computers to civilian end users in countries of concern from February 3, 1998, when procedures implementing the 1998 authorization...

  2. Users’ Perceptions Using Low-End and High-End Mobile-Rendered HMDs: A Comparative Study

    Directory of Open Access Journals (Sweden)

    M.-Carmen Juan

    2018-02-01

    Full Text Available Currently, it is possible to combine Mobile-Rendered Head-Mounted Displays (MR HMDs) with smartphones to have Augmented Reality platforms. The differences between these types of platforms can affect the user's experiences and satisfaction. This paper presents a study that analyses the user's perception when using the same Augmented Reality app with two MR HMDs (low-end and high-end). Our study evaluates the user's experience taking into account several factors (control, sensory, distraction, ergonomics, and realism). An Augmented Reality app was developed to carry out the comparison for two MR HMDs. The application had exactly the same visual appearance and functionality for both devices. Forty adults participated in our study. From the results, there were no statistically significant differences for the users' experience for the different factors when using the two MR HMDs, except for the ergonomic factors in favour of the high-end MR HMD. Even though the scores for the high-end MR HMD were higher in nearly all of the questions, both MR HMDs provided a very satisfying viewing experience with very high scores. The results were independent of gender and age. The participants rated the high-end MR HMD as the best one. Nevertheless, when they were asked which MR HMD they would buy, the participants chose the low-end MR HMD taking into account its price.

  3. NASA's Software Safety Standard

    Science.gov (United States)

    Ramsay, Christopher M.

    2007-01-01

    NASA relies more and more on software to control, monitor, and verify its safety critical systems, facilities and operations. Since the 1960s there has hardly been a spacecraft launched that does not have a computer on board that will provide command and control services. There have been recent incidents where software has played a role in high-profile mission failures and hazardous incidents. For example, the Mars Orbiter, Mars Polar Lander, the DART (Demonstration of Autonomous Rendezvous Technology), and MER (Mars Exploration Rover) Spirit anomalies were all caused or contributed to by software. The Mission Control Centers for the Shuttle, ISS, and unmanned programs are highly dependent on software for data displays, analysis, and mission planning. Despite this growing dependence on software control and monitoring, there has been little to no consistent application of software safety practices and methodology to NASA's projects with safety critical software. Meanwhile, academia and private industry have been stepping forward with procedures and standards for safety critical systems and software, for example Dr. Nancy Leveson's book Safeware: System Safety and Computers. The NASA Software Safety Standard, originally published in 1997, was widely ignored due to its complexity and poor organization. It also focused on concepts rather than definite procedural requirements organized around a software project lifecycle. Led by NASA Headquarters Office of Safety and Mission Assurance, the NASA Software Safety Standard has recently undergone a significant update. This new standard provides the procedures and guidelines for evaluating a project for safety criticality and then lays out the minimum project lifecycle requirements to assure the software is created, operated, and maintained in the safest possible manner. This update of the standard clearly delineates the minimum set of software safety requirements for a project without detailing the implementation for those

  4. Computer systems and software engineering

    Science.gov (United States)

    Mckay, Charles W.

    1988-01-01

    The High Technologies Laboratory (HTL) was established in the fall of 1982 at the University of Houston Clear Lake. Research conducted at the High Tech Lab is focused upon computer systems and software engineering. There is a strong emphasis on the interrelationship of these areas of technology and the United States' space program. In Jan. of 1987, NASA Headquarters announced the formation of its first research center dedicated to software engineering. Operated by the High Tech Lab, the Software Engineering Research Center (SERC) was formed at the University of Houston Clear Lake. The High Tech Lab/Software Engineering Research Center promotes cooperative research among government, industry, and academia to advance the edge-of-knowledge and the state-of-the-practice in key topics of computer systems and software engineering which are critical to NASA. The center also recommends appropriate actions, guidelines, standards, and policies to NASA in matters pertinent to the center's research. Results of the research conducted at the High Tech Lab/Software Engineering Research Center have given direction to many decisions made by NASA concerning the Space Station Program.

  5. Lessons Learned while Exploring Cloud-Native Architectures for NASA EOSDIS Applications and Systems

    Science.gov (United States)

    Pilone, D.

    2016-12-01

    As new, high data rate missions begin collecting data, NASA's Earth Observing System Data and Information System (EOSDIS) archive is projected to grow roughly 20x to over 300 PB by 2025. To prepare for the dramatic increase in data and enable broad scientific inquiry into larger time series and datasets, NASA has been exploring the impact of applying cloud technologies throughout EOSDIS. In this talk we will provide an overview of NASA's prototyping and lessons learned in applying cloud architectures to: highly scalable and extensible ingest and archive of EOSDIS data; going "all-in" on cloud-based application architectures, including "serverless" data processing pipelines, and evaluating approaches to vendor lock-in; rethinking data distribution and approaches to analysis in a cloud environment; and incorporating and enforcing security controls while minimizing the barrier for research efforts to deploy to NASA-compliant, operational environments. NASA's Earth Observing System (EOS) is a coordinated series of satellites for long term global observations. NASA's Earth Observing System Data and Information System (EOSDIS) is a multi-petabyte-scale archive of environmental data that supports global climate change research by providing end-to-end services from EOS instrument data collection to science data processing to full access to EOS and other earth science data. On a daily basis, the EOSDIS ingests, processes, archives and distributes over 3 terabytes of data from NASA's Earth Science missions representing over 6000 data products ranging from various types of science disciplines. EOSDIS has continually evolved to improve the discoverability, accessibility, and usability of high-impact NASA data spanning the multi-petabyte-scale archive of Earth science data products.

  6. The use of the Climate-science Computational End Station (CCES) development and grand challenge team for the next IPCC assessment: an operational plan

    International Nuclear Information System (INIS)

    Washington, W M; Buja, L; Gent, P; Drake, J; Erickson, D; Anderson, D; Bader, D; Dickinson, R; Ghan, S; Jones, P; Jacob, R

    2008-01-01

    The grand challenge of climate change science is to predict future climates based on scenarios of anthropogenic emissions and other changes resulting from options in energy and development policies. Addressing this challenge requires a Climate Science Computational End Station consisting of a sustained climate model research, development, and application program combined with world-class DOE leadership computing resources to enable advanced computational simulation of the Earth system. This project provides the primary computer allocations for the DOE SciDAC and Climate Change Prediction Program. It builds on the successful interagency collaboration of the National Science Foundation and the U.S. Department of Energy in developing and applying the Community Climate System Model (CCSM) for climate change science. It also includes collaboration with the National Aeronautics and Space Administration in carbon data assimilation, and with university partners with expertise in high-end computational climate research.

  7. SPoRT - An End-to-End R2O Activity

    Science.gov (United States)

    Jedlovec, Gary J.

    2009-01-01

    Established in 2002 to demonstrate the weather and forecasting application of real-time EOS measurements, the Short-term Prediction Research and Transition (SPoRT) program has grown to be an end-to-end research-to-operations activity focused on the use of advanced NASA modeling and data assimilation approaches, nowcasting techniques, and unique high-resolution multispectral observational data applications from EOS satellites to improve short-term weather forecasts on a regional and local scale. SPoRT currently partners with several universities and other government agencies for access to real-time data and products, and works collaboratively with them and operational end users at 13 WFOs to develop and test the new products and capabilities in a "test-bed" mode. The test-bed simulates key aspects of the operational environment without putting constraints on the forecaster workload. Products and capabilities which show utility in the test-bed environment are then transitioned experimentally into the operational environment for further evaluation and assessment. SPoRT focuses on a suite of data and products from MODIS, AMSR-E, and AIRS on the NASA Terra and Aqua satellites, and total lightning measurements from ground-based networks. Some of the observations are assimilated into or used with various versions of the WRF model to provide supplemental forecast guidance to operational end users. SPoRT is enhancing partnerships with NOAA/NESDIS for new product development and data access to exploit the remote sensing capabilities of instruments on the NPOESS satellites to address short-term weather forecasting problems. The VIIRS and CrIS instruments on the NPP and follow-on NPOESS satellites provide similar observing capabilities to the MODIS and AIRS instruments on Terra and Aqua. SPoRT will be transitioning existing and new capabilities into the AWIPS II environment to maintain the continuity of its activities.

  8. Consolidating NASA's Arc Jets

    Science.gov (United States)

    Balboni, John A.; Gokcen, Tahir; Hui, Frank C. L.; Graube, Peter; Morrissey, Patricia; Lewis, Ronald

    2015-01-01

    The paper describes the consolidation of NASA's high powered arc-jet testing at a single location. The existing plasma arc-jet wind tunnels located at the Johnson Space Center were relocated to Ames Research Center while maintaining NASA's technical capability to ground-test thermal protection system materials under simulated atmospheric entry convective heating. The testing conditions at JSC were reproduced and successfully demonstrated at ARC through close collaboration between the two centers. New equipment was installed at Ames to provide test gases of pure nitrogen mixed with pure oxygen, and for future nitrogen-carbon dioxide mixtures. A new control system was custom designed, installed and tested. Tests demonstrated the capability of the 10 MW constricted-segmented arc heater at Ames meets the requirements of the major customer, NASA's Orion program. Solutions from an advanced computational fluid dynamics code were used to aid in characterizing the properties of the plasma stream and the surface environment on the calorimeters in the supersonic flow stream produced by the arc heater.

  9. Front-end vision and multi-scale image analysis multi-scale computer vision theory and applications, written in Mathematica

    CERN Document Server

    Romeny, Bart M Haar

    2008-01-01

    Front-End Vision and Multi-Scale Image Analysis is a tutorial in multi-scale methods for computer vision and image processing. It builds on the cross fertilization between human visual perception and multi-scale computer vision (`scale-space') theory and applications. The multi-scale strategies recognized in the first stages of the human visual system are carefully examined, and taken as inspiration for the many geometric methods discussed. All chapters are written in Mathematica, a spectacular high-level language for symbolic and numerical manipulations. The book presents a new and effective
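
    The book's examples are in Mathematica; as a small scale-space illustration in Python instead, the sketch below observes an image at several Gaussian scales, the front-end operation on which the multi-scale methods are built (the input image is a random placeholder):

        # Build a simple Gaussian scale-space of an image.
        import numpy as np
        from scipy.ndimage import gaussian_filter

        rng = np.random.default_rng(0)
        image = rng.random((128, 128))                 # placeholder for a real image

        scales = [1.0, 2.0, 4.0, 8.0]                  # standard deviations in pixels
        scale_space = [gaussian_filter(image, sigma=s) for s in scales]

        for s, blurred in zip(scales, scale_space):
            print(f"sigma={s}: intensity range {blurred.min():.3f}-{blurred.max():.3f}")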

  10. NASA's Scientific Visualization Studio

    Science.gov (United States)

    Mitchell, Horace G.

    2003-01-01

    Since 1988, the Scientific Visualization Studio (SVS) at NASA Goddard Space Flight Center has produced scientific visualizations of NASA's scientific research and remote sensing data for public outreach. These visualizations take the form of images, animations, and end-to-end systems and have been used in many venues: from the network news to science programs such as NOVA, from museum exhibits at the Smithsonian to White House briefings. This presentation will give an overview of the major activities and accomplishments of the SVS, and some of the most interesting projects and systems developed at the SVS will be described. Particular emphasis will be given to the practices and procedures by which the SVS creates visualizations, from the hardware and software used to the structures and collaborations by which products are designed, developed, and delivered to customers. The web-based archival and delivery system for SVS visualizations at svs.gsfc.nasa.gov will also be described.

  11. Semi-Automatic Science Workflow Synthesis for High-End Computing on the NASA Earth Exchange

    Data.gov (United States)

    National Aeronautics and Space Administration — Enhance capabilities for collaborative data analysis and modeling in Earth sciences. Develop components for automatic workflow capture, archiving and management....

  12. 14 CFR 1201.402 - NASA Industrial Applications Centers.

    Science.gov (United States)

    2010-01-01

    ... and innovative technology to nonaerospace sectors of the economy—NASA operates a network of Industrial..., Department of Computer Science, Baton Rouge, LA 70813-2065. (b) To obtain access to NASA-developed computer...

  13. Advanced Methodologies for NASA Science Missions

    Science.gov (United States)

    Hurlburt, N. E.; Feigelson, E.; Mentzel, C.

    2017-12-01

    Most of NASA's commitment to computational space science involves the organization and processing of Big Data from space-based satellites, and the calculations of advanced physical models based on these datasets. But considerable thought is also needed on what computations are needed. The science questions addressed by space data are so diverse and complex that traditional analysis procedures are often inadequate. The knowledge and skills of the statistician, applied mathematician, and algorithmic computer scientist must be incorporated into programs that currently emphasize engineering and physical science. NASA's culture and administrative mechanisms take full cognizance that major advances in space science are driven by improvements in instrumentation. But it is less well recognized that new instruments and science questions give rise to new challenges in the treatment of satellite data after it is telemetered to the ground. These issues might be divided into two stages: data reduction through software pipelines developed within NASA mission centers; and science analysis that is performed by hundreds of space scientists dispersed through NASA, U.S. universities, and abroad. Both stages benefit from the latest statistical and computational methods; in some cases, the science result is completely inaccessible using traditional procedures. This paper will review the current state of NASA and present example applications using modern methodologies.

  14. Research Institute for Advanced Computer Science

    Science.gov (United States)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2000-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center. It currently operates under a multiple year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. Ames has been designated NASA's Center of Excellence in Information Technology. In this capacity, Ames is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA Ames and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth; (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities; (3) High Performance Computing and Networking. Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to data analysis of large datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply information technology research to a

  15. High Voltage Hall Accelerator Propulsion System Development for NASA Science Missions

    Science.gov (United States)

    Kamhawi, Hani; Haag, Thomas; Huang, Wensheng; Shastry, Rohit; Pinero, Luis; Peterson, Todd; Dankanich, John; Mathers, Alex

    2013-01-01

    NASA Science Mission Directorate's In-Space Propulsion Technology Program is sponsoring the development of a 3.8 kW-class engineering development unit Hall thruster for implementation in NASA science and exploration missions. NASA Glenn Research Center and Aerojet are developing a high-fidelity, high-voltage Hall accelerator (HiVHAc) thruster that can achieve specific impulse magnitudes greater than 2,700 seconds and xenon throughput capability in excess of 300 kilograms. Performance, plume mappings, thermal characterization, and vibration tests of the HiVHAc engineering development unit thruster have been performed. In addition, the HiVHAc project is also pursuing the development of a power processing unit (PPU) and xenon feed system (XFS) for integration with the HiVHAc engineering development unit thruster. Colorado Power Electronics and NASA Glenn Research Center have tested a brassboard PPU for more than 1,500 hours in a vacuum environment, and new brassboard and engineering model PPU units are under development. VACCO Industries developed a xenon flow control module which has undergone qualification testing and will be integrated with the HiVHAc thruster for extended-duration tests. Finally, recent mission studies have shown that the HiVHAc propulsion system has sufficient performance for four Discovery- and two New Frontiers-class NASA design reference missions.

  16. Computational Process Modeling for Additive Manufacturing (OSU)

    Science.gov (United States)

    Bagg, Stacey; Zhang, Wei

    2015-01-01

    Powder-Bed Additive Manufacturing (AM) through Direct Metal Laser Sintering (DMLS) or Selective Laser Melting (SLM) is being used by NASA and the Aerospace industry to "print" parts that traditionally are very complex, high cost, or long schedule lead items. The process spreads a thin layer of metal powder over a build platform, then melts the powder in a series of welds in a desired shape. The next layer of powder is applied, and the process is repeated until layer-by-layer, a very complex part can be built. This reduces cost and schedule by eliminating very complex tooling and processes traditionally used in aerospace component manufacturing. To use the process to print end-use items, NASA seeks to understand SLM material well enough to develop a method of qualifying parts for space flight operation. Traditionally, a new material process takes many years and high investment to generate statistical databases and experiential knowledge, but computational modeling can truncate the schedule and cost -many experiments can be run quickly in a model, which would take years and a high material cost to run empirically. This project seeks to optimize material build parameters with reduced time and cost through modeling.

  17. [Activities of Research Institute for Advanced Computer Science

    Science.gov (United States)

    Gross, Anthony R. (Technical Monitor); Leiner, Barry M.

    2001-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center, Moffett Field, California. RIACS research focuses on the three cornerstones of IT research necessary to meet the future challenges of NASA missions: 1. Automated Reasoning for Autonomous Systems: Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth. 2. Human-Centered Computing: Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities. 3. High Performance Computing and Networking: Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to analysis of large scientific datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply IT research to a variety of NASA application domains. RIACS also engages in other activities, such as workshops, seminars, visiting scientist programs and student summer programs, designed to encourage and facilitate collaboration between the university and NASA IT research communities.

  18. Eclipse Across America: Through the Eyes of NASA

    Science.gov (United States)

    Young, C. Alex; Heliophysics Education Consortium

    2018-01-01

    Monday, August 21, 2017, marked the first total solar eclipse to cross the continental United States coast-to-coast in almost a century. NASA scientists and educators, working alongside many partners, were spread across the entire country, both inside and outside the path of totality. Like many other organizations, NASA prepared for this eclipse for several years. The August 21 eclipse was NASA's biggest media event in recent history, and was made possible by the work of thousands of volunteers, collaborators and NASA employees. The agency supported science, outreach, and media communications activities along the path of totality and across the country. This culminated in a 3 ½-hour broadcast from Charleston, SC, showcasing the sights and sounds of the eclipse – starting with the view from a plane off the coast of Oregon and ending with images from the International Space Station as the Moon's inner shadow left the US East Coast. Along the way, NASA shared experiments and research from different groups of scientists, including 11 NASA-supported studies, 50+ high-altitude balloon launches, and 12 NASA and partner space-based assets. This talk shares the timeline of this momentous event from NASA's perspective, describing outreach successes and providing a glimpse at some of the science results available and yet to come.

  19. A self-analysis of the NASA-TLX workload measure.

    Science.gov (United States)

    Noyes, Jan M; Bruneau, Daniel P J

    2007-04-01

    Computer use and, more specifically, the administration of tests and materials online continue to proliferate. A number of subjective, self-report workload measures exist, but the National Aeronautics and Space Administration-Task Load Index (NASA-TLX) is probably the most well known and used. The aim of this paper is to consider the workload costs associated with the computer-based and paper versions of the NASA-TLX measure. It was found that there is a significant difference between the workload scores for the two media, with the computer version of the NASA-TLX incurring more workload. This has implications for the practical use of the NASA-TLX as well as for other computer-based workload measures.
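
    For readers unfamiliar with the measure, the standard NASA-TLX score is a weighted average of six subscale ratings, with weights taken from 15 pairwise comparisons; the sketch below shows that arithmetic with made-up numbers (they are not data from the study):

        # Weighted NASA-TLX score: ratings 0-100, weights sum to 15.
        ratings = {
            "mental": 70, "physical": 20, "temporal": 55,
            "performance": 40, "effort": 65, "frustration": 35,
        }
        weights = {
            "mental": 5, "physical": 1, "temporal": 3,
            "performance": 2, "effort": 3, "frustration": 1,
        }
        assert sum(weights.values()) == 15
        overall = sum(ratings[k] * weights[k] for k in ratings) / 15
        print(f"Overall weighted workload: {overall:.1f}")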

  20. Antecedents and Outcomes of End User Computing Competence

    National Research Council Canada - National Science Library

    Case, David

    2003-01-01

    .... The end user has had to evolve and will continue evolving as well; from someone with low level technical skills to someone with a high level of technical knowledge and information managerial skills...

  1. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing.

    Science.gov (United States)

    Brown, David K; Penkler, David L; Musyoka, Thommas M; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.
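
    JMS builds its workflows on top of a cluster resource manager; as a generic illustration of the underlying idea, chaining batch stages so each starts only after the previous one succeeds, the sketch below submits a three-stage pipeline through Slurm's sbatch (this is not the JMS API, and the script names are placeholders):

        # Chain batch jobs with Slurm dependencies so stages run in order.
        import subprocess

        def submit(script, after_job=None):
            cmd = ["sbatch", "--parsable"]            # --parsable prints only the job id
            if after_job:
                cmd.append(f"--dependency=afterok:{after_job}")
            cmd.append(script)
            result = subprocess.run(cmd, check=True, capture_output=True, text=True)
            return result.stdout.strip()

        job1 = submit("stage1_prepare.sh")
        job2 = submit("stage2_compute.sh", after_job=job1)
        job3 = submit("stage3_summarize.sh", after_job=job2)
        print("submitted pipeline:", job1, job2, job3)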

  2. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing.

    Directory of Open Access Journals (Sweden)

    David K Brown

    Full Text Available Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS.

  3. JMS: An Open Source Workflow Management System and Web-Based Cluster Front-End for High Performance Computing

    Science.gov (United States)

    Brown, David K.; Penkler, David L.; Musyoka, Thommas M.; Bishop, Özlem Tastan

    2015-01-01

    Complex computational pipelines are becoming a staple of modern scientific research. Often these pipelines are resource intensive and require days of computing time. In such cases, it makes sense to run them over high performance computing (HPC) clusters where they can take advantage of the aggregated resources of many powerful computers. In addition to this, researchers often want to integrate their workflows into their own web servers. In these cases, software is needed to manage the submission of jobs from the web interface to the cluster and then return the results once the job has finished executing. We have developed the Job Management System (JMS), a workflow management system and web interface for high performance computing (HPC). JMS provides users with a user-friendly web interface for creating complex workflows with multiple stages. It integrates this workflow functionality with the resource manager, a tool that is used to control and manage batch jobs on HPC clusters. As such, JMS combines workflow management functionality with cluster administration functionality. In addition, JMS provides developer tools including a code editor and the ability to version tools and scripts. JMS can be used by researchers from any field to build and run complex computational pipelines and provides functionality to include these pipelines in external interfaces. JMS is currently being used to house a number of bioinformatics pipelines at the Research Unit in Bioinformatics (RUBi) at Rhodes University. JMS is an open-source project and is freely available at https://github.com/RUBi-ZA/JMS. PMID:26280450

  4. Image Analysis via Soft Computing: Prototype Applications at NASA KSC and Product Commercialization

    Science.gov (United States)

    Dominguez, Jesus A.; Klinko, Steve

    2011-01-01

    This slide presentation reviews "soft computing," which differs from "hard computing" in that it is more tolerant of imprecision, partial truth, uncertainty, and approximation, and its use in image analysis. Soft computing provides flexible information processing to handle real-life ambiguous situations and achieve tractability, robustness, low solution cost, and a closer resemblance to human decision making. Several systems are or have been developed: Fuzzy Reasoning Edge Detection (FRED), Fuzzy Reasoning Adaptive Thresholding (FRAT), image enhancement techniques, and visual/pattern recognition. These systems are compared with examples that show the effectiveness of each. NASA applications that are reviewed are: Real-Time (RT) Anomaly Detection, Real-Time (RT) Moving Debris Detection, and the Columbia Investigation. The RT anomaly detection reviewed the case of a damaged cable for the emergency egress system. The use of these techniques is further illustrated in the Columbia investigation with the location and detection of foam debris. There are several applications in commercial usage: image enhancement, human screening and privacy protection, visual inspection, 3D heart visualization, tumor detection, and X-ray image enhancement.
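
    As a generic illustration of one of the techniques named above, adaptive thresholding, the sketch below marks pixels brighter than their local neighborhood mean (this is not NASA's FRAT code, just a plain numpy/scipy analogue):

        # Local-mean adaptive thresholding of an image.
        import numpy as np
        from scipy.ndimage import uniform_filter

        def adaptive_threshold(image, window=15, offset=0.02):
            """Flag pixels brighter than their neighborhood mean plus an offset."""
            local_mean = uniform_filter(image.astype(float), size=window)
            return image > (local_mean + offset)

        rng = np.random.default_rng(1)
        frame = rng.random((64, 64))          # placeholder for a camera frame
        mask = adaptive_threshold(frame)
        print("pixels flagged:", int(mask.sum()))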

  5. NASA-IGES Translator and Viewer

    Science.gov (United States)

    Chou, Jin J.; Logan, Michael A.

    1995-01-01

    NASA-IGES Translator (NIGEStranslator) is a batch program that translates a general IGES (Initial Graphics Exchange Specification) file to a NASA-IGES-Nurbs-Only (NINO) file. IGES is the most popular geometry exchange standard among Computer Aided Geometric Design (CAD) systems. The NINO format is a subset of IGES, implementing the simple yet most popular NURBS (Non-Uniform Rational B-Splines) representation. NIGEStranslator converts a complex IGES file to the simpler NINO file to simplify the tasks of CFD grid generation for models in CAD format. The NASA-IGES Viewer (NIGESview) is an Open-Inventor-based, highly interactive viewer/editor for NINO files. Geometry in the IGES files can be viewed, copied, transformed, deleted, and inquired. Users can use NIGEStranslator to translate IGES files from CAD systems to NINO files. The geometry then can be examined with NIGESview. Extraneous geometries can be interactively removed, and the cleaned model can be written to an IGES file, ready to be used in grid generation.
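
    To make concrete what a NURBS-only representation amounts to, the sketch below evaluates a rational B-spline curve as a weighted B-spline divided by its weight spline (the control points, weights, and knots are arbitrary examples, not NINO data):

        # Evaluate a NURBS curve: weighted B-spline numerator over weight denominator.
        import numpy as np
        from scipy.interpolate import BSpline

        degree = 2
        control_points = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])
        weights = np.array([1.0, 2.0, 2.0, 1.0])
        knots = np.array([0, 0, 0, 0.5, 1, 1, 1], dtype=float)   # clamped knot vector

        numerator = BSpline(knots, control_points * weights[:, None], degree)
        denominator = BSpline(knots, weights, degree)

        u = np.linspace(0.0, 1.0, 5)
        curve = numerator(u) / denominator(u)[:, None]
        print(curve)                                              # points on the curve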

  6. Telepresence master glove controller for dexterous robotic end-effectors

    Science.gov (United States)

    Fisher, Scott S.

    1987-01-01

    This paper describes recent research in the Aerospace Human Factors Research Division at NASA's Ames Research Center to develop a glove-like, control and data-recording device (DataGlove) that records and transmits to a host computer in real time, and at appropriate resolution, a numeric data-record of a user's hand/finger shape and dynamics. System configuration and performance specifications are detailed, and current research is discussed investigating its applications in operator control of dexterous robotic end-effectors and for use as a human factors research tool in evaluation of operator hand function requirements and performance in other specialized task environments.

  7. Journal of Clinical Monitoring and Computing 2015 end of year summary : tissue oxygenation and microcirculation

    NARCIS (Netherlands)

    Scheeren, T W L

    Last year we started this series of end of year summaries of papers published in the 2014 issues of the Journal Of Clinical Monitoring And Computing with a review on near infrared spectroscopy (Scheeren et al. in J Clin Monit Comput 29(2):217-220, 2015). This year we will broaden the scope and

  8. 12 CFR Appendix J to Part 226 - Annual Percentage Rate Computations for Closed-End Credit Transactions

    Science.gov (United States)

    2010-01-01

    Appendix J to Part 226—Annual Percentage Rate Computations for Closed-End Credit Transactions. Banks and Banking; Board of Governors of the Federal Reserve System; Truth in Lending (Regulation Z); Pt. 226, App. J. (a...

  9. High-Performance Compute Infrastructure in Astronomy: 2020 Is Only Months Away

    Science.gov (United States)

    Berriman, B.; Deelman, E.; Juve, G.; Rynge, M.; Vöckler, J. S.

    2012-09-01

    By 2020, astronomy will be awash with as much as 60 PB of public data. Full scientific exploitation of such massive volumes of data will require high-performance computing on server farms co-located with the data. Development of this computing model will be a community-wide enterprise that has profound cultural and technical implications. Astronomers must be prepared to develop environment-agnostic applications that support parallel processing. The community must investigate the applicability and cost-benefit of emerging technologies such as cloud computing to astronomy, and must engage the Computer Science community to develop science-driven cyberinfrastructure such as workflow schedulers and optimizers. We report here the results of collaborations between a science center, IPAC, and a Computer Science research institute, ISI. These collaborations may be considered pathfinders in developing a high-performance compute infrastructure in astronomy. These collaborations investigated two exemplar large-scale science-driver workflow applications: 1) Calculation of an infrared atlas of the Galactic Plane at 18 different wavelengths by placing data from multiple surveys on a common plate scale and co-registering all the pixels; 2) Calculation of an atlas of periodicities present in the public Kepler data sets, which currently contain 380,000 light curves. These products have been generated with two workflow applications, written in C for performance and designed to support parallel processing on multiple environments and platforms, but with different compute resource needs: the Montage image mosaic engine is I/O-bound, and the NASA Star and Exoplanet Database periodogram code is CPU-bound. Our presentation will report cost and performance metrics and lessons-learned for continuing development. Applicability of Cloud Computing: Commercial Cloud providers generally charge for all operations, including processing, transfer of input and output data, and for storage of data
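
    As an illustration of the kind of per-light-curve computation behind the CPU-bound periodogram workload described above (this is not the NASA Star and Exoplanet Database/Kepler pipeline itself), the sketch below runs a Lomb-Scargle periodogram on synthetic data using astropy:

        # Lomb-Scargle periodogram of an unevenly sampled synthetic light curve.
        import numpy as np
        from astropy.timeseries import LombScargle

        rng = np.random.default_rng(42)
        t = np.sort(rng.uniform(0.0, 90.0, 2000))          # days, unevenly sampled
        y = 1.0 + 0.01 * np.sin(2 * np.pi * t / 3.5) + 0.005 * rng.normal(size=t.size)

        frequency, power = LombScargle(t, y).autopower()
        best_period = 1.0 / frequency[np.argmax(power)]
        print(f"strongest periodicity near {best_period:.2f} days")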

  10. Status of NASA's Stirling Space Power Converter Program

    International Nuclear Information System (INIS)

    Dudenhoefer, J.E.; Winter, J.M.

    1994-01-01

    An overview is presented of the NASA Lewis Research Center Free-Piston Stirling Space Power Converter Technology Program. This work is being conducted under NASA's Civil Space Technology Initiative. The goal of the CSTI High Capacity Power Element is to develop the technology base needed to meet the long duration, high capacity power requirements for future NASA space initiatives. Efforts are focused upon increasing system power output and system thermal and electric energy conversion efficiency at least fivefold over current SP-100 technology, and on achieving systems that are compatible with space nuclear reactors. This paper will discuss Stirling experience in Space Power Converters. Fabrication is nearly completed for the 1050 K Component Test Power Converter (CTPC); results of motoring tests of the cold end (525 K), are presented. The success of these and future designs is dependent upon supporting research and technology efforts including heat pipes, bearings, superalloy joining technologies, high efficiency alternators, life and reliability testing and predictive methodologies. This paper provides an update of progress in some of these technologies leading off with a discussion of free-piston Stirling experience in space

  11. Reliability and Failure in NASA Missions: Blunders, Normal Accidents, High Reliability, Bad Luck

    Science.gov (United States)

    Jones, Harry W.

    2015-01-01

    NASA emphasizes crew safety and system reliability but several unfortunate failures have occurred. The Apollo 1 fire was mistakenly unanticipated. After that tragedy, the Apollo program gave much more attention to safety. The Challenger accident revealed that NASA had neglected safety and that management underestimated the high risk of shuttle. Probabilistic Risk Assessment was adopted to provide more accurate failure probabilities for shuttle and other missions. NASA's "faster, better, cheaper" initiative and government procurement reform led to deliberately dismantling traditional reliability engineering. The Columbia tragedy and Mars mission failures followed. Failures can be attributed to blunders, normal accidents, or bad luck. Achieving high reliability is difficult but possible.

  12. Storage system software solutions for high-end user needs

    Science.gov (United States)

    Hogan, Carole B.

    1992-01-01

    Today's high-end storage user is one that requires rapid access to a reliable terabyte-capacity storage system running in a distributed environment. This paper discusses conventional storage system software and concludes that this software, designed for other purposes, cannot meet high-end storage requirements. The paper also reviews the philosophy and design of evolving storage system software. It concludes that this new software, designed with high-end requirements in mind, provides the potential for solving not only the storage needs of today but those of the foreseeable future as well.

  13. Xenon Acquisition Strategies for High-Power Electric Propulsion NASA Missions

    Science.gov (United States)

    Herman, Daniel A.; Unfried, Kenneth G.

    2015-01-01

    Solar electric propulsion (SEP) has been used for station-keeping of geostationary communications satellites since the 1980s. Solar electric propulsion has also benefitted from success on NASA Science Missions such as Deep Space One and Dawn. The xenon propellant loads for these applications have been in the 100s of kilograms range. Recent studies performed for NASA's Human Exploration and Operations Mission Directorate (HEOMD) have demonstrated that SEP is critically enabling for both near-term and future exploration architectures. The high payoff for both human and science exploration missions and technology investment from NASA's Space Technology Mission Directorate (STMD) are providing the necessary convergence and impetus for a 30-kilowatt-class SEP mission. Multiple 30-50-kilowatt Solar Electric Propulsion Technology Demonstration Mission (SEP TDM) concepts have been developed based on the maturing electric propulsion and solar array technologies by STMD with recent efforts focusing on an Asteroid Redirect Robotic Mission (ARRM). Xenon is the optimal propellant for the existing state-of-the-art electric propulsion systems considering efficiency, storability, and contamination potential. NASA mission concepts developed and those proposed by contracted efforts for the 30-kilowatt-class demonstration have a range of xenon propellant loads from 100s of kilograms up to 10,000 kilograms. This paper examines the status of the xenon industry worldwide, including historical xenon supply and pricing. The paper will provide updated information on the xenon market relative to previous papers that discussed xenon production relative to NASA mission needs. The paper will discuss the various approaches for acquiring on the order of 10 metric tons of xenon propellant to support potential near-term NASA missions. Finally, the paper will discuss acquisition strategies for larger NASA missions requiring 100s of metric tons of xenon.

  14. The Nasa-Isro SAR Mission Science Data Products and Processing Workflows

    Science.gov (United States)

    Rosen, P. A.; Agram, P. S.; Lavalle, M.; Cohen, J.; Buckley, S.; Kumar, R.; Misra-Ray, A.; Ramanujam, V.; Agarwal, K. M.

    2017-12-01

    The NASA-ISRO SAR (NISAR) Mission is currently in the development phase and in the process of specifying its suite of data products and algorithmic workflows, responding to inputs from the NISAR Science and Applications Team. NISAR will provide raw data (Level 0), full-resolution complex imagery (Level 1), and interferometric and polarimetric image products (Level 2) for the entire data set, in both natural radar and geocoded coordinates. NASA and ISRO are coordinating the formats, meta-data layers, and algorithms for these products, for both the NASA-provided L-band radar and the ISRO-provided S-band radar. Higher-level products will also be generated for the purpose of calibration and validation, over large areas of Earth, including tectonic plate boundaries, ice sheets and sea-ice, and areas of ecosystem disturbance and change. This level of comprehensive product generation has been unprecedented for SAR missions in the past, and leads to storage and processing challenges for the production system and the archive center. Further, recognizing the potential to support applications that require low latency product generation and delivery, the NISAR team is optimizing the entire end-to-end ground data system for such response, including exploring the advantages of cloud-based processing, algorithmic acceleration using GPUs, and on-demand processing schemes that minimize computational and transport costs, but allow rapid delivery to science and applications users. This paper will review the current products and workflows and discuss the scientific and operational trade-space of mission capabilities.

  15. Flux-Level Transit Injection Experiments with NASA Pleiades Supercomputer

    Science.gov (United States)

    Li, Jie; Burke, Christopher J.; Catanzarite, Joseph; Seader, Shawn; Haas, Michael R.; Batalha, Natalie; Henze, Christopher; Christiansen, Jessie; Kepler Project, NASA Advanced Supercomputing Division

    2016-06-01

    Flux-Level Transit Injection (FLTI) experiments are executed with NASA's Pleiades supercomputer for the Kepler Mission. The latest release (9.3, January 2016) of the Kepler Science Operations Center Pipeline is used in the FLTI experiments. Their purpose is to validate the Analytic Completeness Model (ACM), which can be computed for all Kepler target stars, thereby enabling exoplanet occurrence rate studies. Pleiades, a facility of NASA's Advanced Supercomputing Division, is one of the world's most powerful supercomputers and represents NASA's state-of-the-art technology. We discuss the details of implementing the FLTI experiments on the Pleiades supercomputer. For example, taking into account that ~16 injections are generated by one core of the Pleiades processors in an hour, the “shallow” FLTI experiment, in which ~2000 injections are required per target star, can be done for 16% of all Kepler target stars in about 200 hours. Stripping down the transit search to bare bones, i.e. only searching adjacent high/low periods at high/low pulse durations, makes the computationally intensive FLTI experiments affordable. The design of the FLTI experiments and the analysis of the resulting data are presented in “Validating an Analytic Completeness Model for Kepler Target Stars Based on Flux-level Transit Injection Experiments” by Catanzarite et al. (#2494058). Kepler was selected as the 10th mission of the Discovery Program. Funding for the Kepler Mission has been provided by the NASA Science Mission Directorate.
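    The throughput figures quoted above can be checked with a short back-of-the-envelope calculation, sketched below. The Kepler target-star count (~200,000) and the implied core count are assumptions added here for illustration; the other numbers come from the abstract.

```python
# Back-of-the-envelope check of the throughput figures quoted in the abstract.
# The Kepler target-star count is an assumption added for illustration.

injections_per_core_hour = 16      # quoted: ~16 injections per core per hour
injections_per_star = 2000         # quoted: ~2000 injections per star ("shallow" FLTI)
fraction_of_targets = 0.16         # quoted: 16% of all Kepler target stars
kepler_target_stars = 200_000      # assumption: approximate Kepler target-star count

core_hours_per_star = injections_per_star / injections_per_core_hour   # 125 core-hours
stars_processed = fraction_of_targets * kepler_target_stars            # 32,000 stars
total_core_hours = core_hours_per_star * stars_processed               # ~4 million core-hours

wall_clock_hours = 200                                                  # quoted: ~200 hours
cores_required = total_core_hours / wall_clock_hours                    # ~20,000 cores

print(f"{core_hours_per_star:.0f} core-hours per star")
print(f"{total_core_hours:,.0f} core-hours total")
print(f"~{cores_required:,.0f} cores needed to finish in {wall_clock_hours} hours")
```

    Under these assumptions the shallow experiment costs roughly 4 million core-hours, i.e. on the order of 20,000 Pleiades cores running for the quoted ~200 hours.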

  16. Functional requirements document for the Earth Observing System Data and Information System (EOSDIS) Scientific Computing Facilities (SCF) of the NASA/MSFC Earth Science and Applications Division, 1992

    Science.gov (United States)

    Botts, Michael E.; Phillips, Ron J.; Parker, John V.; Wright, Patrick D.

    1992-01-01

    Five scientists at MSFC/ESAD have EOS SCF investigator status. Each SCF has unique tasks which require the establishment of a computing facility dedicated to accomplishing those tasks. A SCF Working Group was established at ESAD with the charter of defining the computing requirements of the individual SCFs and recommending options for meeting these requirements. The primary goal of the working group was to determine which computing needs can be satisfied using either shared resources or separate but compatible resources, and which needs require unique individual resources. The requirements investigated included CPU-intensive vector and scalar processing, visualization, data storage, connectivity, and I/O peripherals. A review of computer industry directions and a market survey of computing hardware provided information regarding important industry standards and candidate computing platforms. It was determined that the total SCF computing requirements might be most effectively met using a hierarchy consisting of shared and individual resources. This hierarchy is composed of five major system types: (1) a supercomputer class vector processor; (2) a high-end scalar multiprocessor workstation; (3) a file server; (4) a few medium- to high-end visualization workstations; and (5) several low- to medium-range personal graphics workstations. Specific recommendations for meeting the needs of each of these types are presented.

  17. Evaluating the Efficacy of the Cloud for Cluster Computation

    Science.gov (United States)

    Knight, David; Shams, Khawaja; Chang, George; Soderstrom, Tom

    2012-01-01

    Computing requirements vary by industry, and it follows that NASA and other research organizations have computing demands that fall outside the mainstream. While cloud computing made rapid inroads for tasks such as powering web applications, performance issues on highly distributed tasks hindered early adoption for scientific computation. One venture to address this problem is Nebula, NASA's homegrown cloud project tasked with delivering science-quality cloud computing resources. However, another industry development is Amazon's high-performance computing (HPC) instances on Elastic Compute Cloud (EC2) that promise improved performance for cluster computation. This paper presents results from a series of benchmarks run on Amazon EC2 and discusses the efficacy of current commercial cloud technology for running scientific applications across a cluster. In particular, a 240-core cluster of cloud instances achieved 2 TFLOPS on High-Performance Linpack (HPL) at 70% of theoretical computational performance. The cluster's local network also demonstrated sub-100 μs inter-process latency with sustained inter-node throughput in excess of 8 Gbps. Beyond HPL, a real-world Hadoop image processing task from NASA's Lunar Mapping and Modeling Project (LMMP) was run on a 29-instance cluster to process lunar and Martian surface images with sizes on the order of tens of gigapixels. These results demonstrate that while not a rival of dedicated supercomputing clusters, commercial cloud technology is now a feasible option for moderately demanding scientific workloads.
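    For reference, the quoted HPL result implies the peak figures worked out in the short sketch below; only numbers stated in the abstract are used.

```python
# Back-of-the-envelope check of the quoted HPL result: 2 TFLOPS sustained at 70%
# of theoretical peak on a 240-core cluster.

sustained_tflops = 2.0      # quoted HPL result
efficiency = 0.70           # quoted fraction of theoretical peak
cores = 240                 # quoted cluster size

peak_tflops = sustained_tflops / efficiency        # ~2.86 TFLOPS theoretical peak
gflops_per_core = peak_tflops * 1000 / cores       # ~11.9 GFLOPS peak per core

print(f"Theoretical peak : {peak_tflops:.2f} TFLOPS")
print(f"Peak per core    : {gflops_per_core:.1f} GFLOPS")
```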

  18. Computing in high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Watase, Yoshiyuki

    1991-09-15

    The increasingly important role played by computing and computers in high energy physics is displayed in the 'Computing in High Energy Physics' series of conferences, bringing together experts in different aspects of computing - physicists, computer scientists, and vendors.

  19. Real-Time On-Board Airborne Demonstration of High-Speed On-Board Data Processing for Science Instruments (HOPS)

    Science.gov (United States)

    Beyon, Jeffrey Y.; Ng, Tak-Kwong; Davis, Mitchell J.; Adams, James K.; Bowen, Stephen C.; Fay, James J.; Hutchinson, Mark A.

    2015-01-01

    The project called High-Speed On-Board Data Processing for Science Instruments (HOPS) has been funded by the NASA Earth Science Technology Office (ESTO) Advanced Information Systems Technology (AIST) program since April 2012. The HOPS team recently completed two flight campaigns during the summer of 2014 on two different aircraft with two different science instruments. The first flight campaign was in July 2014, based at NASA Langley Research Center (LaRC) in Hampton, VA on NASA's HU-25 aircraft. The science instrument that flew with HOPS was Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) CarbonHawk Experiment Simulator (ACES) funded by NASA's Instrument Incubator Program (IIP). The second campaign was in August 2014, based at NASA Armstrong Flight Research Center (AFRC) in Palmdale, CA on NASA's DC-8 aircraft. HOPS flew with the Multifunctional Fiber Laser Lidar (MFLL) instrument developed by Excelis Inc. The goal of the campaigns was to perform an end-to-end demonstration of the capabilities of the HOPS prototype system (HOPS COTS) while running the most computationally intensive part of the ASCENDS algorithm real-time on-board. The comparison of the two flight campaigns and the results of the functionality tests of the HOPS COTS are presented in this paper.

  20. Computing in high energy physics

    International Nuclear Information System (INIS)

    Watase, Yoshiyuki

    1991-01-01

    The increasingly important role played by computing and computers in high energy physics is displayed in the 'Computing in High Energy Physics' series of conferences, bringing together experts in different aspects of computing - physicists, computer scientists, and vendors

  1. End-to-End Trajectory for Conjunction Class Mars Missions Using Hybrid Solar-Electric/Chemical Transportation System

    Science.gov (United States)

    Chai, Patrick R.; Merrill, Raymond G.; Qu, Min

    2016-01-01

    NASA's Human Spaceflight Architecture Team is developing a reusable hybrid transportation architecture in which both chemical and solar-electric propulsion systems are used to deliver crew and cargo to exploration destinations. By combining chemical and solar-electric propulsion into a single spacecraft and applying each where it is most effective, the hybrid architecture enables a series of Mars trajectories that are more fuel efficient than an all chemical propulsion architecture without significant increases to trip time. The architecture calls for the aggregation of exploration assets in cislunar space prior to departure for Mars and utilizes high energy lunar-distant high Earth orbits for the final staging prior to departure. This paper presents the detailed analysis of various cislunar operations for the EMC Hybrid architecture as well as the result of the higher fidelity end-to-end trajectory analysis to understand the implications of the design choices on the Mars exploration campaign.

  2. Comparison of High-Fidelity Computational Tools for Wing Design of a Distributed Electric Propulsion Aircraft

    Science.gov (United States)

    Deere, Karen A.; Viken, Sally A.; Carter, Melissa B.; Viken, Jeffrey K.; Derlaga, Joseph M.; Stoll, Alex M.

    2017-01-01

    A variety of tools, from fundamental to high order, have been used to better understand applications of distributed electric propulsion to aid the wing and propulsion system design of the Leading Edge Asynchronous Propulsion Technology (LEAPTech) project and the X-57 Maxwell airplane. Three high-fidelity, Navier-Stokes computational fluid dynamics codes used during the project with results presented here are FUN3D, STAR-CCM+, and OVERFLOW. These codes employ various turbulence models to predict fully turbulent and transitional flow. Results from these codes are compared for two distributed electric propulsion configurations: the wing tested at NASA Armstrong on the Hybrid-Electric Integrated Systems Testbed truck, and the wing designed for the X-57 Maxwell airplane. Results from these computational tools for the high-lift wing tested on the Hybrid-Electric Integrated Systems Testbed truck and the X-57 high-lift wing presented here compare reasonably well. The goal of the X-57 wing and distributed electric propulsion system design, achieving or exceeding the required C(sub L) = 3.95 for stall speed, was confirmed with all of the computational codes.

  3. AHPCRC (Army High Performance Computing Research Center) Bulletin. Volume 1, Issue 2

    Science.gov (United States)

    2011-01-01

    area and the researchers working on these projects. Also inside: news from the AHPCRC consortium partners at Morgan State University and the NASA ... Computing Research Center is provided by the supercomputing and research facilities at Stanford University and at the NASA Ames Research Center at ... atomic and molecular level, he said. He noted that "every general would like to have" a Star Trek-like holodeck, where holographic avatars could

  4. Xenon Acquisition Strategies for High-Power Electric Propulsion NASA Missions

    Science.gov (United States)

    Herman, Daniel A.; Unfried, Kenneth G.

    2015-01-01

    The benefits of high-power solar electric propulsion (SEP) for both NASA's human and science exploration missions combined with the technology investment from the Space Technology Mission Directorate have enabled the development of a 50kW-class SEP mission. NASA mission concepts developed, including the Asteroid Redirect Robotic Mission, and those proposed by contracted efforts for the 30kW-class demonstration have a range of xenon propellant loads from 100's of kg up to 10,000 kg. A xenon propellant load of 10 metric tons represents greater than 10% of the global annual production rate of xenon. A single procurement of this size with short-term delivery can disrupt the xenon market, driving up pricing, making the propellant costs for the mission prohibitive. This paper examines the status of the xenon industry worldwide, including historical xenon supply and pricing. The paper discusses approaches for acquiring on the order of 10 MT of xenon propellant considering realistic programmatic constraints to support potential near-term NASA missions. Finally, the paper will discuss acquisition strategies for mission campaigns utilizing multiple high-power solar electric propulsion vehicles requiring 100's of metric tons of xenon over an extended period of time where a longer-term acquisition approach could be implemented.

  5. NASA Information Technology Implementation Plan

    Science.gov (United States)

    2000-01-01

    NASA's Information Technology (IT) resources and IT support continue to be a growing and integral part of all NASA missions. Furthermore, the growing IT support requirements are becoming more complex and diverse. The following are a few examples of the growing complexity and diversity of NASA's IT environment. NASA is conducting basic IT research in the Intelligent Synthesis Environment (ISE) and Intelligent Systems (IS) Initiatives. IT security, infrastructure protection, and privacy of data are requiring more and more management attention and an increasing share of the NASA IT budget. Outsourcing of IT support is becoming a key element of NASA's IT strategy as exemplified by Outsourcing Desktop Initiative for NASA (ODIN) and the outsourcing of NASA Integrated Services Network (NISN) support. Finally, technology refresh is helping to provide improved support at lower cost. Recently the NASA Automated Data Processing (ADP) Consolidation Center (NACC) upgraded its bipolar technology computer systems with Complementary Metal Oxide Semiconductor (CMOS) technology systems. This NACC upgrade substantially reduced the hardware maintenance and software licensing costs, significantly increased system speed and capacity, and reduced customer processing costs by 11 percent.

  6. Center for Advanced Computational Technology

    Science.gov (United States)

    Noor, Ahmed K.

    2000-01-01

    The Center for Advanced Computational Technology (ACT) was established to serve as a focal point for diverse research activities pertaining to application of advanced computational technology to future aerospace systems. These activities include the use of numerical simulations, artificial intelligence methods, multimedia and synthetic environments, and computational intelligence, in the modeling, analysis, sensitivity studies, optimization, design and operation of future aerospace systems. The Center is located at NASA Langley and is an integral part of the School of Engineering and Applied Science of the University of Virginia. The Center has four specific objectives: 1) conduct innovative research on applications of advanced computational technology to aerospace systems; 2) act as pathfinder by demonstrating to the research community what can be done (high-potential, high-risk research); 3) help in identifying future directions of research in support of the aeronautical and space missions of the twenty-first century; and 4) help in the rapid transfer of research results to industry and in broadening awareness among researchers and engineers of the state-of-the-art in applications of advanced computational technology to the analysis, design prototyping and operations of aerospace and other high-performance engineering systems. In addition to research, Center activities include helping in the planning and coordination of the activities of a multi-center team of NASA and JPL researchers who are developing an intelligent synthesis environment for future aerospace systems; organizing workshops and national symposia; as well as writing state-of-the-art monographs and NASA special publications on timely topics.

  7. NASA's Climate in a Box: Desktop Supercomputing for Open Scientific Model Development

    Science.gov (United States)

    Wojcik, G. S.; Seablom, M. S.; Lee, T. J.; McConaughy, G. R.; Syed, R.; Oloso, A.; Kemp, E. M.; Greenseid, J.; Smith, R.

    2009-12-01

    NASA's High Performance Computing Portfolio in cooperation with its Modeling, Analysis, and Prediction program intends to make its climate and earth science models more accessible to a larger community. A key goal of this effort is to open the model development and validation process to the scientific community at large such that a natural selection process is enabled and results in a more efficient scientific process. One obstacle to others using NASA models is the complexity of the models and the difficulty in learning how to use them. This situation applies not only to scientists who regularly use these models but also to non-typical users who may want to use the models, such as scientists from different domains, policy makers, and teachers. Another obstacle to the use of these models is that access to high performance computing (HPC) accounts, on which the models are run, can be restrictive with long wait times in job queues and delays caused by an arduous process of obtaining an account, especially for foreign nationals. This project explores the utility of using desktop supercomputers in providing a complete ready-to-use toolkit of climate research products to investigators and on-demand access to an HPC system. One objective of this work is to pre-package NASA and NOAA models so that new users will not have to spend significant time porting the models. In addition, the prepackaged toolkit will include tools, such as workflow, visualization, social networking web sites, and analysis tools, to assist users in running the models and analyzing the data. The system architecture to be developed will allow for automatic code updates for each user and an effective means with which to deal with data that are generated. We plan to investigate several desktop systems, but our work to date has focused on a Cray CX1. Currently, we are investigating the potential capabilities of several non-traditional development environments. While most NASA and NOAA models are

  8. Computational Structures Technology for Airframes and Propulsion Systems

    International Nuclear Information System (INIS)

    Noor, A.K.; Housner, J.M.; Starnes, J.H. Jr.; Hopkins, D.A.; Chamis, C.C.

    1992-05-01

    This conference publication contains the presentations and discussions from the joint University of Virginia (UVA)/NASA Workshops. The presentations included NASA Headquarters perspectives on High Speed Civil Transport (HSCT), goals and objectives of the UVA Center for Computational Structures Technology (CST), NASA and Air Force CST activities, CST activities for airframes and propulsion systems in industry, and CST activities at Sandia National Laboratory

  9. An Australian Perspective On The Challenges For Computer And Network Security For Novice End-Users

    Directory of Open Access Journals (Sweden)

    Patryk Szewczyk

    2012-12-01

    It is common for end-users to have difficulty in using computer or network security appropriately, and they have thus often been ridiculed for misinterpreting instructions or procedures. This discussion paper details the outcomes of research undertaken over the past six years on why security is overly complex for end-users. The results indicate that multiple issues may render end-users vulnerable to security threats and that there is no single solution to address these problems. Studies on a small group of senior citizens have shown that educational seminars can be beneficial in ensuring that simple security aspects are understood and used appropriately.

  10. Activities of the Research Institute for Advanced Computer Science

    Science.gov (United States)

    Oliger, Joseph

    1994-01-01

    The Research Institute for Advanced Computer Science (RIACS) was established by the Universities Space Research Association (USRA) at the NASA Ames Research Center (ARC) on June 6, 1983. RIACS is privately operated by USRA, a consortium of universities with research programs in the aerospace sciences, under contract with NASA. The primary mission of RIACS is to provide research and expertise in computer science and scientific computing to support the scientific missions of NASA ARC. The research carried out at RIACS must change its emphasis from year to year in response to NASA ARC's changing needs and technological opportunities. Research at RIACS is currently being done in the following areas: (1) parallel computing; (2) advanced methods for scientific computing; (3) high performance networks; and (4) learning systems. RIACS technical reports are usually preprints of manuscripts that have been submitted to research journals or conference proceedings. A list of these reports for the period January 1, 1994 through December 31, 1994 is in the Reports and Abstracts section of this report.

  11. Technology transfer at NASA - A librarian's view

    Science.gov (United States)

    Buchan, Ronald L.

    1991-01-01

    The NASA programs, publications, and services promoting the transfer and utilization of aerospace technology developed by and for NASA are briefly surveyed. Topics addressed include the corporate sources of NASA technical information and its interest for corporate users of information services; the IAA and STAR abstract journals; NASA/RECON, NTIS, and the AIAA Aerospace Database; the RECON Space Commercialization file; the Computer Software Management and Information Center file; company information in the RECON database; and services to small businesses. Also discussed are the NASA publications Tech Briefs and Spinoff, the Industrial Applications Centers, NASA continuing bibliographies on management and patent abstracts (indexed using the NASA Thesaurus), the Index to NASA News Releases and Speeches, and the Aerospace Research Information Network (ARIN).

  12. Optical Computers and Space Technology

    Science.gov (United States)

    Abdeldayem, Hossin A.; Frazier, Donald O.; Penn, Benjamin; Paley, Mark S.; Witherow, William K.; Banks, Curtis; Hicks, Rosilen; Shields, Angela

    1995-01-01

    The rapidly increasing demand for greater speed and efficiency on the information superhighway requires significant improvements over conventional electronic logic circuits. Optical interconnections and optical integrated circuits are strong candidates to provide the way out of the extreme limitations imposed on the growth of speed and complexity of today's computations by conventional electronic logic circuits. The new optical technology has increased the demand for high quality optical materials. NASA's recent involvement in processing optical materials in space has demonstrated that a new and unique class of high quality optical materials is processible in a microgravity environment. Microgravity processing can induce improved ordering in these materials and could have a significant impact on the development of optical computers. We will discuss NASA's role in processing these materials and report on some of the associated nonlinear optical properties which are quite useful for optical computer technology.

  13. The Benefits and Complexities of Operating Geographic Information Systems (GIS) in a High Performance Computing (HPC) Environment

    Science.gov (United States)

    Shute, J.; Carriere, L.; Duffy, D.; Hoy, E.; Peters, J.; Shen, Y.; Kirschbaum, D.

    2017-12-01

    The NASA Center for Climate Simulation (NCCS) at the Goddard Space Flight Center is building and maintaining an Enterprise GIS capability for its stakeholders, to include NASA scientists, industry partners, and the public. This platform is powered by three GIS subsystems operating in a highly-available, virtualized environment: 1) the Spatial Analytics Platform is the primary NCCS GIS and provides users discoverability of the vast DigitalGlobe/NGA raster assets within the NCCS environment; 2) the Disaster Mapping Platform provides mapping and analytics services to NASA's Disaster Response Group; and 3) the internal (Advanced Data Analytics Platform/ADAPT) enterprise GIS provides users with the full suite of Esri and open source GIS software applications and services. All systems benefit from NCCS's cutting edge infrastructure, to include an InfiniBand network for high speed data transfers; a mixed/heterogeneous environment featuring seamless sharing of information between Linux and Windows subsystems; and in-depth system monitoring and warning systems. Due to its co-location with the NCCS Discover High Performance Computing (HPC) environment and the Advanced Data Analytics Platform (ADAPT), the GIS platform has direct access to several large NCCS datasets including DigitalGlobe/NGA, Landsat, MERRA, and MERRA2. Additionally, the NCCS ArcGIS Desktop Windows virtual machines utilize existing NetCDF and OPeNDAP assets for visualization, modelling, and analysis - thus eliminating the need for data duplication. With the advent of this platform, Earth scientists have full access to vast data repositories and the industry-leading tools required for successful management and analysis of these multi-petabyte, global datasets. The full system architecture and integration with scientific datasets will be presented. Additionally, key applications and scientific analyses will be explained, to include the NASA Global Landslide Catalog (GLC) Reporter crowdsourcing application, the
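    As a minimal illustration of the no-duplication access pattern described above (this is not NCCS code), the sketch below opens a remote NetCDF dataset through OPeNDAP with xarray and pulls only a small subset over the network. The URL and the variable and coordinate names are hypothetical placeholders, so the snippet will not resolve as written; the structure of the access is what it is meant to show.

```python
# Minimal sketch (not NCCS code) of reading a remote NetCDF dataset through OPeNDAP
# instead of duplicating it locally. URL, variable, and coordinate names are
# hypothetical placeholders.
import xarray as xr

OPENDAP_URL = "https://example.nasa.gov/opendap/hypothetical/merra2_subset.nc"  # placeholder

# xarray streams only the requested slices over the network; no local copy is made.
ds = xr.open_dataset(OPENDAP_URL)

# Pull a small spatial/temporal subset for analysis or mapping.
subset = ds["T2M"].sel(lat=slice(30, 50), lon=slice(-100, -70)).isel(time=0)
print(float(subset.mean()))
```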

  14. Cyberinfrastructure for End-to-End Environmental Explorations

    Science.gov (United States)

    Merwade, V.; Kumar, S.; Song, C.; Zhao, L.; Govindaraju, R.; Niyogi, D.

    2007-12-01

    The design and implementation of a cyberinfrastructure for End-to-End Environmental Exploration (C4E4) is presented. The C4E4 framework addresses the need for an integrated data/computation platform for studying broad environmental impacts by combining heterogeneous data resources with state-of-the-art modeling and visualization tools. With Purdue being a TeraGrid Resource Provider, C4E4 builds on top of the Purdue TeraGrid data management system and Grid resources, and integrates them through a service-oriented workflow system. It allows researchers to construct environmental workflows for data discovery, access, transformation, modeling, and visualization. Using the C4E4 framework, we have implemented an end-to-end SWAT simulation and analysis workflow that connects our TeraGrid data and computation resources. It enables researchers to conduct comprehensive studies on the impact of land management practices in the St. Joseph watershed using data from various sources in hydrologic, atmospheric, agricultural, and other related disciplines.

  15. Experimental and computational investigation of the NASA low-speed centrifugal compressor flow field

    Science.gov (United States)

    Hathaway, Michael D.; Chriss, Randall M.; Wood, Jerry R.; Strazisar, Anthony J.

    1993-01-01

    An experimental and computational investigation of the NASA Lewis Research Center's low-speed centrifugal compressor (LSCC) flow field was conducted using laser anemometry and Dawes' three-dimensional viscous code. The experimental configuration consisted of a backswept impeller followed by a vaneless diffuser. Measurements of the three-dimensional velocity field were acquired at several measurement planes through the compressor. The measurements describe both the throughflow and secondary velocity field along each measurement plane. In several cases the measurements provide details of the flow within the blade boundary layers. Insight into the complex flow physics within centrifugal compressors is provided by the computational fluid dynamics analysis (CFD), and assessment of the CFD predictions is provided by comparison with the measurements. Five-hole probe and hot-wire surveys at the inlet and exit to the impeller as well as surface flow visualization along the impeller blade surfaces provided independent confirmation of the laser measurement technique. The results clearly document the development of the throughflow velocity wake that is characteristic of unshrouded centrifugal compressors.

  16. Computing in high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Sarah; Devenish, Robin [Nuclear Physics Laboratory, Oxford University (United Kingdom)

    1989-07-15

    Computing in high energy physics has changed over the years from being something one did on a slide-rule, through early computers, then a necessary evil to the position today where computers permeate all aspects of the subject from control of the apparatus to theoretical lattice gauge calculations. The state of the art, as well as new trends and hopes, were reflected in this year's 'Computing In High Energy Physics' conference held in the dreamy setting of Oxford's spires. The conference aimed to give a comprehensive overview, entailing a heavy schedule of 35 plenary talks plus 48 contributed papers in two afternoons of parallel sessions. In addition to high energy physics computing, a number of papers were given by experts in computing science, in line with the conference's aim – 'to bring together high energy physicists and computer scientists'.

  17. Computer sciences

    Science.gov (United States)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  18. APPLICATION OF OBJECT ORIENTED PROGRAMMING TECHNIQUES IN FRONT END COMPUTERS

    International Nuclear Information System (INIS)

    SKELLY, J.F.

    1997-01-01

    The Front End Computer (FEC) environment imposes special demands on software, beyond real time performance and robustness. FEC software must manage a diverse inventory of devices with individualistic timing requirements and hardware interfaces. It must implement network services which export device access to the control system at large, interpreting a uniform network communications protocol into the specific control requirements of the individual devices. Object oriented languages provide programming techniques which neatly address these challenges, and also offer benefits in terms of maintainability and flexibility. Applications are discussed which exhibit the use of inheritance, multiple inheritance and inheritance trees, and polymorphism to address the needs of FEC software
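    The sketch below is an illustrative Python rendering (the FEC software itself is not reproduced here, and is not in Python) of how inheritance and polymorphism let a single, uniform network protocol handler dispatch to device-specific control code; all class and method names are hypothetical.

```python
# Illustrative sketch only: a device base class plus subclasses shows how polymorphism
# lets one uniform network request handler serve devices with individualistic
# hardware interfaces. All names are hypothetical.
from abc import ABC, abstractmethod


class Device(ABC):
    """Common interface exported to the control system for every front-end device."""

    def __init__(self, name):
        self.name = name

    @abstractmethod
    def read_setting(self):
        ...

    @abstractmethod
    def write_setting(self, value):
        ...


class PowerSupply(Device):
    def __init__(self, name):
        super().__init__(name)
        self._setpoint = 0.0

    def read_setting(self):
        return self._setpoint            # a real FEC would query the hardware here

    def write_setting(self, value):
        self._setpoint = value           # a real FEC would drive the hardware interface


class BeamMonitor(Device):
    def read_setting(self):
        return 42.0                      # stand-in for a digitizer readout

    def write_setting(self, value):
        raise PermissionError(f"{self.name} is read-only")


def handle_request(device, verb, value=None):
    """Uniform protocol handler: the same code path serves every device type."""
    if verb == "GET":
        return device.read_setting()
    if verb == "SET" and value is not None:
        device.write_setting(value)
        return value
    raise ValueError(f"unsupported request: {verb}")


devices = {"ps1": PowerSupply("ps1"), "bm3": BeamMonitor("bm3")}
handle_request(devices["ps1"], "SET", 12.5)
print(handle_request(devices["ps1"], "GET"))   # 12.5
print(handle_request(devices["bm3"], "GET"))   # 42.0
```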

  19. Interactive Data Exploration for High-Performance Fluid Flow Computations through Porous Media

    KAUST Repository

    Perovic, Nevena

    2014-09-01

    © 2014 IEEE. The advent of huge data in high-performance computing (HPC) applications such as fluid flow simulations usually hinders the interactive processing and exploration of simulation results. Such an interactive data exploration not only allows scientists to 'play' with their data but also to visualise huge (distributed) data sets in both an efficient and easy way. Therefore, we propose an HPC data exploration service based on a sliding window concept that enables researchers to access remote data (available on a supercomputer or cluster) during simulation runtime without exceeding any bandwidth limitations between the HPC back-end and the user front-end.
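    A minimal sketch of the sliding-window idea follows, with hypothetical sizes and a local stand-in for the remote read: the front end only ever requests a bounded window of the much larger remote result set, so the transfer per interaction stays within the available bandwidth.

```python
# Minimal sketch of a sliding-window access pattern: the front end requests only a
# bounded chunk of the (much larger) remote result array at a time. The fetch
# function is a stand-in for whatever remote API the service actually exposes.
import numpy as np

TOTAL_CELLS = 10_000_000   # size of the remote simulation field (illustrative)
WINDOW = 100_000           # largest chunk the front end is allowed to pull at once


def fetch_remote(start, count):
    """Stand-in for a remote read; here we just synthesize data locally."""
    return np.sin(np.arange(start, start + count) * 1e-5)


def explore(center, window=WINDOW):
    """Return the window of data around the user's current region of interest."""
    start = max(0, min(center - window // 2, TOTAL_CELLS - window))
    return fetch_remote(start, window)


# As the user scrolls, only the new window is transferred, never the whole field.
view = explore(center=5_000_000)
print(view.shape, float(view.mean()))
```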

  20. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools

  1. End-to-end plasma bubble PIC simulations on GPUs

    Science.gov (United States)

    Germaschewski, Kai; Fox, William; Matteucci, Jackson; Bhattacharjee, Amitava

    2017-10-01

    Accelerator technologies play a crucial role in eventually achieving exascale computing capabilities. The current and upcoming leadership machines at ORNL (Titan and Summit) employ Nvidia GPUs, which provide vast computational power but also need specifically adapted computational kernels to fully exploit them. In this work, we will show end-to-end particle-in-cell simulations of the formation, evolution and coalescence of laser-generated plasma bubbles. This work showcases the GPU capabilities of the PSC particle-in-cell code, which has been adapted for this problem to support particle injection, a heating operator and a collision operator on GPUs.

  2. High-level language computer architecture

    CERN Document Server

    Chu, Yaohan

    1975-01-01

    High-Level Language Computer Architecture offers a tutorial on high-level language computer architecture, including von Neumann architecture and syntax-oriented architecture as well as direct and indirect execution architecture. Design concepts of Japanese-language data processing systems are discussed, along with the architecture of stack machines and the SYMBOL computer system. The conceptual design of a direct high-level language processor is also described.Comprised of seven chapters, this book first presents a classification of high-level language computer architecture according to the pr

  3. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  4. Swedish High-End Apparel Online

    OpenAIRE

    Hansson, Christoffer; Grabe, Thomas; Thomander, Karolina

    2010-01-01

    The study aims, through a qualitative case study, to describe how six Swedish high-end apparel companies, regarded as part of “the Swedish fashion wonder”, with online distribution have been affected by six chosen factors. The six factors presented are extracted from previous studies and consist of customer relationships, intermediary relationships, pricing, costs and revenue, competitors and impact on the brand. The results show that customer relationships are an important factor that most comp...

  5. High-school Student Teams in a National NASA Microgravity Science Competition

    Science.gov (United States)

    DeLombard, Richard; Hodanbosi, Carol; Stocker, Dennis

    2003-01-01

    The Dropping In a Microgravity Environment or DIME competition for high-school-aged student teams has completed the first year for nationwide eligibility after two regional pilot years. With the expanded geographic participation and increased complexity of experiments, new lessons were learned by the DIME staff. A team participating in DIME will research the field of microgravity, develop a hypothesis, and prepare a proposal for an experiment to be conducted in a NASA microgravity drop tower. A team of NASA scientists and engineers will select the top proposals and then the selected teams will design and build their experiment apparatus. When completed, team representatives will visit NASA Glenn in Cleveland, Ohio to operate their experiment in the 2.2 Second Drop Tower and participate in workshops and center tours. NASA participates in a wide variety of educational activities including competitive events. There are competitive events sponsored by NASA (e.g. NASA Student Involvement Program) and student teams mentored by NASA centers (e.g. For Inspiration and Recognition of Science and Technology Robotics Competition). This participation by NASA in these public forums serves to bring the excitement of aerospace science to students and educators. Researchers from academic institutions, NASA, and industry utilize the 2.2 Second Drop Tower at NASA Glenn Research Center in Cleveland, Ohio for microgravity research. The researcher may be able to complete the suite of experiments in the drop tower but many experiments are precursor experiments for spaceflight experiments. The short turnaround time for an experiment's operations (45 minutes) and ready access to experiment carriers makes the facility amenable for use in a student program. The pilot year for DIME was conducted during the 2000-2001 school year with invitations sent out to Ohio-based schools and organizations. A second pilot year was conducted during the 2001-2002 school year for teams in the six-state region

  6. NASA FY 2000 Accountability Report

    Science.gov (United States)

    2000-01-01

    This Accountability Report consolidates reports required by various statutes and summarizes NASA's program accomplishments and its stewardship over budget and financial resources. It is a culmination of NASA's management process, which begins with mission definition and program planning, continues with the formulation and justification of budgets for the President and Congress, and ends with scientific and engineering program accomplishments. The report covers activities from October 1, 1999, through September 30, 2000. Achievements are highlighted in the Statement of the Administrator and summarized in the Report.

  7. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware–in the form of Field Programmable Gate Arrays (FPGAs)–in High Performance Computing (HPC) applications. It presents the latest developments in this field from applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g. computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation. Seven architecture chapters which...

  8. Computing in high energy physics

    International Nuclear Information System (INIS)

    Smith, Sarah; Devenish, Robin

    1989-01-01

    Computing in high energy physics has changed over the years from being something one did on a slide-rule, through early computers, then a necessary evil to the position today where computers permeate all aspects of the subject from control of the apparatus to theoretical lattice gauge calculations. The state of the art, as well as new trends and hopes, were reflected in this year's 'Computing In High Energy Physics' conference held in the dreamy setting of Oxford's spires. The conference aimed to give a comprehensive overview, entailing a heavy schedule of 35 plenary talks plus 48 contributed papers in two afternoons of parallel sessions. In addition to high energy physics computing, a number of papers were given by experts in computing science, in line with the conference's aim – 'to bring together high energy physicists and computer scientists'

  9. A Bioinformatics Facility for NASA

    Science.gov (United States)

    Schweighofer, Karl; Pohorille, Andrew

    2006-01-01

    Building on an existing prototype, we have fielded a facility with bioinformatics technologies that will help NASA meet its unique requirements for biological research. This facility consists of a cluster of computers capable of performing computationally intensive tasks, software tools, databases and knowledge management systems. Novel computational technologies for analyzing and integrating new biological data and already existing knowledge have been developed. With continued development and support, the facility will fulfill NASA's strategic bioinformatics needs in astrobiology and space exploration. As a demonstration of these capabilities, we will present a detailed analysis of how spaceflight factors impact gene expression in the liver and kidney for mice flown aboard shuttle flight STS-108. We have found that many genes involved in signal transduction, cell cycle, and development respond to changes in microgravity, but that most metabolic pathways appear unchanged.

  10. NASA/BAE SYSTEMS SpaceWire Effort

    Science.gov (United States)

    Rakow, Glenn Parker; Schnurr, Richard G.; Kapcio, Paul

    2003-01-01

    This paper discusses the state of the NASA and BAE SYSTEMS developments of SpaceWire. NASA has developed intellectual property that implements SpaceWire in Register Transfer Level (RTL) VHDL for a SpaceWire link and router. This design has been extensively verified using directed tests from the SpaceWire Standard and design specification, as well as being randomly tested to flush out hard-to-find bugs in the code. The high-level features of the design will be discussed, including the support for multiple time code masters, which will be useful for the James Webb Space Telescope electrical architecture. This design is now ready to be targeted to FPGAs and ASICs. Target utilization and performance information will be presented for spaceflight-worthy FPGAs, and the ASIC implementations will also be addressed. In particular, the BAE SYSTEMS ASIC will be highlighted, which will be implemented on their 0.25-micron rad-hard line. The chip will implement a 4-port router with the ability to tie chips together to make larger routers without external glue logic. This part will have integrated LVDS drivers/receivers, include a PLL, and include skew control logic. It will be targeted to run at greater than 300 MHz and include the implementation for the proposed SpaceWire transport layer. The need to provide a reliable transport mechanism for SpaceWire has been identified by both NASA and ESA, who are attempting to define a transport layer standard that utilizes a low-overhead, low-latency, connection-oriented approach that works end-to-end. This layer needs to be implemented in hardware to prevent bottlenecks.

  11. Exploring Cognition Using Software Defined Radios for NASA Missions

    Science.gov (United States)

    Mortensen, Dale J.; Reinhart, Richard C.

    2016-01-01

    NASA missions typically operate using a communication infrastructure that requires significant schedule planning with limited flexibility when the needs of the mission change. Parameters such as modulation, coding scheme, frequency, and data rate are fixed for the life of the mission. This is due to antiquated hardware and software for both the space and ground assets and a very complex set of mission profiles. Automated techniques already in place at commercial telecommunication companies are being explored by NASA to determine whether they can reduce cost and increase science return. Adding cognition, the ability to learn from past decisions and adjust behavior, is also being investigated. Software Defined Radios are an ideal way to implement cognitive concepts. Cognition can be considered in many different aspects of the communication system. Radio functions, such as frequency, modulation, data rate, coding, and filters, can be adjusted based on measurements of signal degradation. Data delivery mechanisms and route changes based on past successes and failures can be made to more efficiently deliver the data to the end user. Automated antenna pointing can be added to improve gain, coverage, or adjust the target. Scheduling improvements and automation to reduce the dependence on humans provide more flexible capabilities. The Cognitive Communications project, funded by the Space Communication and Navigation Program, is exploring these concepts and using the SCaN Testbed on board the International Space Station to implement them as they evolve. The SCaN Testbed contains three Software Defined Radios and a flight computer. These four computing platforms, along with a tracking antenna system and the supporting ground infrastructure, will be used to implement various concepts in a system similar to those used by missions. Multiple universities and SBIR companies are supporting this investigation. This paper will describe the cognitive system ideas under consideration and

  12. NASA Center for Climate Simulation (NCCS) Advanced Technology AT5 Virtualized Infiniband Report

    Science.gov (United States)

    Thompson, John H.; Bledsoe, Benjamin C.; Wagner, Mark; Shakshober, John; Fromkin, Russ

    2013-01-01

    The NCCS is part of the Computational and Information Sciences and Technology Office (CISTO) of Goddard Space Flight Center's (GSFC) Sciences and Exploration Directorate. The NCCS's mission is to enable scientists to increase their understanding of the Earth, the solar system, and the universe by supplying state-of-the-art high performance computing (HPC) solutions. To accomplish this mission, the NCCS (https://www.nccs.nasa.gov) provides high performance compute engines, mass storage, and network solutions to meet the specialized needs of the Earth and space science user communities

  13. The NASA CSTI High Capacity Power Project

    International Nuclear Information System (INIS)

    Winter, J.; Dudenhoefer, J.; Juhasz, A.; Schwarze, G.; Patterson, R.; Ferguson, D.; Schmitz, P.; Vandersande, J.

    1992-01-01

    This paper describes the elements of NASA's CSTI High Capacity Power Project, which include Systems Analysis, Stirling Power Conversion, Thermoelectric Power Conversion, Thermal Management, Power Management, Systems Diagnostics, Environmental Interactions, and Material/Structural Development. Technology advancement in all elements is required to provide the growth capability, high reliability, and 7- to 10-year lifetime demanded for future space nuclear power systems. The overall project will develop and demonstrate the technology base required to provide a wide range of modular power systems compatible with the SP-100 reactor which facilitates operation during lunar and planetary day/night cycles as well as allowing spacecraft operation at any attitude or distance from the sun. Significant accomplishments in all of the project elements will be presented, along with revised goals and project timelines recently developed

  14. Development of a Dynamic, End-to-End Free Piston Stirling Convertor Model

    Science.gov (United States)

    Regan, Timothy F.; Gerber, Scott S.; Roth, Mary Ellen

    2003-01-01

    A dynamic model for a free-piston Stirling convertor is being developed at the NASA Glenn Research Center. The model is an end-to-end system model that includes the cycle thermodynamics, the dynamics, and electrical aspects of the system. The subsystems of interest are the heat source, the springs, the moving masses, the linear alternator, the controller and the end-user load. The envisioned use of the model will be in evaluating how changes in a subsystem could affect the operation of the convertor. The model under development will speed the evaluation of improvements to a subsystem and aid in determining areas in which most significant improvements may be found. One of the first uses of the end-to-end model will be in the development of controller architectures. Another related area is in evaluating changes to details in the linear alternator.
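
    A minimal lumped-parameter sketch of the kind of end-to-end coupling the abstract describes: the moving mass rides on a gas spring, the linear alternator and end-user load appear as velocity-proportional damping, and the heat input is reduced to a sinusoidal pressure forcing. All parameter values are illustrative placeholders, not Glenn convertor data.

        import math

        # Toy end-to-end free-piston Stirling model: a moving mass on a gas spring,
        # the linear alternator represented as velocity-proportional damping (the load),
        # and a sinusoidal pressure wave standing in for the thermodynamic cycle.
        # All parameter values are illustrative placeholders, not convertor data.
        m = 1.0          # moving mass, kg
        k = 4.0e4        # gas-spring stiffness, N/m
        c_alt = 25.0     # alternator damping reflected to the mechanical side, N*s/m
        F0 = 150.0       # forcing amplitude from the pressure wave, N
        omega = math.sqrt(k / m)     # drive near the mechanical resonance, rad/s

        x, v, t, dt = 0.0, 0.0, 0.0, 1.0e-5
        amplitude, energy_to_load = 0.0, 0.0
        while t < 0.5:                         # half a second of operation
            a = (F0 * math.sin(omega * t) - c_alt * v - k * x) / m   # Newton's second law
            x, v, t = x + v * dt, v + a * dt, t + dt
            amplitude = max(amplitude, abs(x))
            energy_to_load += c_alt * v * v * dt                     # power absorbed by the load

        print(f"piston amplitude ~ {amplitude:.3f} m, energy delivered to load ~ {energy_to_load:.0f} J")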

  15. 1995 NASA High-Speed Research Program Sonic Boom Workshop. Volume 2; Configuration Design, Analysis, and Testing

    Science.gov (United States)

    Baize, Daniel G. (Editor)

    1999-01-01

    The High-Speed Research Program and NASA Langley Research Center sponsored the NASA High-Speed Research Program Sonic Boom Workshop on September 12-13, 1995. The workshop was designed to bring together NASA's scientists and engineers and their counterparts in industry, other Government agencies, and academia working together in the sonic boom element of NASA's High-Speed Research Program. Specific objectives of this workshop were to: (1) report the progress and status of research in sonic boom propagation, acceptability, and design; (2) promote and disseminate this technology within the appropriate technical communities; (3) help promote synergy among the scientists working in the Program; and (4) identify technology pacing the development of viable reduced-boom High-Speed Civil Transport concepts. The Workshop was organized in four sessions: Session 1 - Sonic Boom Propagation (Theoretical); Session 2 - Sonic Boom Propagation (Experimental); Session 3 - Acceptability Studies, Human and Animal; and Session 4 - Configuration Design, Analysis, and Testing.

  16. Computational chemistry

    Science.gov (United States)

    Arnold, J. O.

    1987-01-01

    With the advent of supercomputers and modern computational chemistry algorithms and codes, a powerful tool was created to help fill NASA's continuing need for information on the properties of matter in hostile or unusual environments. Computational resources provided under the National Aerodynamics Simulator (NAS) program were a cornerstone for recent advancements in this field. Properties of gases, materials, and their interactions can be determined from solutions of the governing equations. In the case of gases, for example, radiative transition probabilities per particle, bond-dissociation energies, and rates of simple chemical reactions can be determined computationally as reliably as from experiment. The data are proving to be quite valuable in providing inputs to real-gas flow simulation codes used to compute aerothermodynamic loads on NASA's aeroassist orbital transfer vehicles and a host of problems related to the National Aerospace Plane Program. Although more approximate, similar solutions can be obtained for ensembles of atoms simulating small particles of materials with and without the presence of gases. Computational chemistry also has applications in studying catalysis and the properties of polymers, all of interest to various NASA missions, including those previously mentioned. In addition to discussing these applications of computational chemistry within NASA, the governing equations and the need for supercomputers for their solution are outlined.

  17. High energy physics and grid computing

    International Nuclear Information System (INIS)

    Yu Chuansong

    2004-01-01

    The status of the new generation computing environment of the high energy physics experiments is introduced briefly in this paper. The development of the high energy physics experiments and the new computing requirements by the experiments are presented. The blueprint of the new generation computing environment of the LHC experiments, the history of the Grid computing, the R and D status of the high energy physics grid computing technology, the network bandwidth needed by the high energy physics grid and its development are described. The grid computing research in Chinese high energy physics community is introduced at last. (authors)

  18. High power electromagnetic propulsion research at the NASA Glenn Research Center

    International Nuclear Information System (INIS)

    LaPointe, Michael R.; Sankovic, John M.

    2000-01-01

    Interest in megawatt-class electromagnetic propulsion has been rekindled to support newly proposed high power orbit transfer and deep space mission applications. Electromagnetic thrusters can effectively process megawatts of power to provide a range of specific impulse values to meet diverse in-space propulsion requirements. Potential applications include orbit raising for the proposed multi-megawatt Space Solar Power Satellite and other large commercial and military space platforms, lunar and interplanetary cargo missions in support of the NASA Human Exploration and Development of Space strategic enterprise, robotic deep space exploration missions, and near-term interstellar precursor missions. As NASA's lead center for electric propulsion, the Glenn Research Center is developing a number of high power electromagnetic propulsion technologies to support these future mission applications. Program activities include research on MW-class magnetoplasmadynamic thrusters, high power pulsed inductive thrusters, and innovative electrodeless plasma thruster concepts. Program goals are highlighted, the status of each research area is discussed, and plans are outlined for the continued development of efficient, robust high power electromagnetic thrusters

  19. Exploration of operator method digital optical computers for application to NASA

    Science.gov (United States)

    1990-01-01

    Digital optical computer design has focused primarily on parallel (single point-to-point interconnection) implementation. This architecture is compared to currently developing VHSIC systems. Using demonstrated multichannel acousto-optic devices, a figure of merit can be formulated. The focus is on a figure of merit termed the Gate Interconnect Bandwidth Product (GIBP). Conventional parallel optical digital computer architecture demonstrates only marginal competitiveness at best when compared to projected semiconductor implementations. Global, analog global, quasi-digital, and fully digital interconnects are briefly examined as alternatives to parallel digital computer architecture. Digital optical computing is becoming a very tough competitor to semiconductor technology since it can support a very high degree of three-dimensional interconnect density and high degrees of fan-in without capacitive loading effects at very low power consumption levels.

  20. NASA and the National Climate Assessment: Promoting awareness of NASA Earth science

    Science.gov (United States)

    Leidner, A. K.

    2014-12-01

    NASA Earth science observations, models, analyses, and applications made significant contributions to numerous aspects of the Third National Climate Assessment (NCA) report and are contributing to sustained climate assessment activities. The agency's goal in participating in the NCA was to ensure that NASA scientific resources were made available to understand the current state of climate change science and climate change impacts. By working with federal agency partners and stakeholder communities to develop and write the report, the agency was able to raise awareness of NASA climate science with audiences beyond the traditional NASA community. To support assessment activities within the NASA community, the agency sponsored two competitive programs that not only funded research and tools for current and future assessments, but also increased capacity within our community to conduct assessment-relevant science and to participate in writing assessments. Such activities fostered the ability of graduate students, post-docs, and senior researchers to learn about the science needs of climate assessors and end-users, which can guide future research activities. NASA also contributed to developing the Global Change Information System, which deploys information from the NCA to scientists, decision makers, and the public, and thus contributes to climate literacy. Finally, NASA satellite imagery and animations used in the Third NCA helped the public and decision makers visualize climate changes and were frequently used in social media to communicate report key findings. These resources are also key for developing educational materials that help teachers and students explore regional climate change impacts and opportunities for responses.

  1. A content validity approach to creating an end-user computer skill assessment tool

    Directory of Open Access Journals (Sweden)

    Shirley Gibbs

    Full Text Available Practical assessment instruments are commonly used in the workplace and educational environments to assess a person's level of digital literacy and end-user computer skill. However, it is often difficult to find statistical evidence of the actual validity of instruments being used. To ensure that the correct factors are being assessed for a particular purpose it is necessary to undertake some type of psychometric testing, and the first step is to study the content relevance of the measure. The purpose of this paper is to report on the rigorous judgment-quantification process using panels of experts in order to establish inter-rater reliability and agreement in the development of end-user instruments developed to measure workplace skills using spreadsheet and word-processing applications.

  2. NASA Automated Fiber Placement Capabilities: Similar Systems, Complementary Purposes

    Science.gov (United States)

    Wu, K. Chauncey; Jackson, Justin R.; Pelham, Larry I.; Stewart, Brian K.

    2015-01-01

    New automated fiber placement systems at the NASA Langley Research Center and NASA Marshall Space Flight Center provide state-of-art composites capabilities to these organizations. These systems support basic and applied research at Langley, complementing large-scale manufacturing and technology development at Marshall. These systems each consist of a multi-degree of freedom mobility platform including a commercial robot, a commercial tool changer mechanism, a bespoke automated fiber placement end effector, a linear track, and a rotational tool support structure. In addition, new end effectors with advanced capabilities may be either bought or developed with partners in industry and academia to extend the functionality of these systems. These systems will be used to build large and small composite parts in support of the ongoing NASA Composites for Exploration Upper Stage Project later this year.

  3. NASA GRC's High Pressure Burner Rig Facility and Materials Test Capabilities

    Science.gov (United States)

    Robinson, R. Craig

    1999-01-01

    The High Pressure Burner Rig (HPBR) at NASA Glenn Research Center is a high-velocity, pressurized combustion test rig used for high-temperature environmental durability studies of advanced materials and components. The facility burns jet fuel and air in controlled ratios, simulating combustion gas chemistries and temperatures representative of those in gas turbine engines. In addition, the test section is capable of simulating the pressures and gas velocities representative of today's aircraft. The HPBR provides a relatively inexpensive, yet sophisticated means for researchers to study the high-temperature oxidation of advanced materials. The facility has the unique capability of operating under both fuel-lean and fuel-rich gas mixtures, using a fume incinerator to eliminate any harmful byproduct emissions (CO, H2S) of rich-burn operation. Test samples are easily accessible for ongoing inspection and documentation of weight change, thickness, cracking, and other metrics. Temperature measurement is available in the form of both thermocouples and optical pyrometry, and the facility is equipped with quartz windows for observation and video taping. Operating conditions include: (1) 1.0 kg/sec (2.0 lbm/sec) combustion and secondary cooling airflow capability; (2) equivalence ratios of 0.5-1.0 (lean) to 1.5-2.0 (rich), with typically 10% H2O vapor pressure; (3) gas temperatures ranging 700-1650 C (1300-3000 F); (4) test pressures ranging 4-12 atmospheres; (5) gas flow velocities ranging 10-30 m/s (50-100 ft/sec); and (6) cyclic and steady-state exposure capabilities. The facility has historically been used to test coupon-size materials, including metals and ceramics. However, complex-shaped components have also been tested, including cylinders, airfoils, and film-cooled end walls. The facility has also been used to develop thin-film temperature measurement sensors.

  4. High energy physics and cloud computing

    International Nuclear Information System (INIS)

    Cheng Yaodong; Liu Baoxu; Sun Gongxing; Chen Gang

    2011-01-01

    High Energy Physics (HEP) has been a strong promoter of computing technology, for example the WWW (World Wide Web) and grid computing. In the new era of cloud computing, HEP still has strong demand, and major international high energy physics laboratories have launched a number of projects to research cloud computing technologies and applications. This paper describes the current developments in cloud computing and its applications in high energy physics. Some ongoing projects in the institutes of high energy physics, Chinese Academy of Sciences, including cloud storage, virtual computing clusters, and BESⅢ elastic cloud, are also described briefly in the paper. (authors)

  5. Data Mining and Knowledge Discover - IBM Cognitive Alternatives for NASA KSC

    Science.gov (United States)

    Velez, Victor Hugo

    2016-01-01

    Cognitive computing tools for transforming industries have been found favorable and profitable for different Directorates at NASA KSC. This study shows how cognitive computing systems can be useful for NASA when computers are trained, in the same way humans are, to gain knowledge over time. Increasing knowledge through senses, learning, and an accumulation of events is how the applications created by IBM empower the artificial intelligence in a cognitive computing system. Over the last decades NASA has explored and applied artificial intelligence, specifically cognitive computing, in a few projects that adopt models similar to those proposed by IBM Watson. However, the use of semantic technologies by IBM's dedicated business unit leads these cognitive computing applications to outperform in-house tools and to present analyses that facilitate decision making for managers and leads in a management information system.

  6. NASA Applications of Molecular Nanotechnology

    Science.gov (United States)

    Globus, Al; Bailey, David; Han, Jie; Jaffe, Richard; Levit, Creon; Merkle, Ralph; Srivastava, Deepak

    1998-01-01

    Laboratories throughout the world are rapidly gaining atomically precise control over matter. As this control extends to an ever wider variety of materials, processes and devices, opportunities for applications relevant to NASA's missions will be created. This document surveys a number of future molecular nanotechnology capabilities of aerospace interest. Computer applications, launch vehicle improvements, and active materials appear to be of particular interest. We also list a number of applications for each of NASA's enterprises. If advanced molecular nanotechnology can be developed, almost all of NASA's endeavors will be radically improved. In particular, a sufficiently advanced molecular nanotechnology can arguably bring large scale space colonization within our grasp.

  7. Twenty-first Century Space Science in The Urban High School Setting: The NASA/John Dewey High School Educational Outreach Partnership

    Science.gov (United States)

    Fried, B.; Levy, M.; Reyes, C.; Austin, S.

    2003-05-01

    A unique and innovative partnership has recently developed between NASA and John Dewey High School, infusing Space Science into the curriculum. This partnership builds on an existing relationship with MUSPIN/NASA and their regional center at the City University of New York based at Medgar Evers College. As an outgrowth of the success and popularity of our Remote Sensing Research Program, sponsored by the New York State Committee for the Advancement of Technology Education (NYSCATE), and the National Science Foundation and stimulated by MUSPIN-based faculty development workshops, our science department has branched out in a new direction - the establishment of a Space Science Academy. John Dewey High School, located in Brooklyn, New York, is an innovative inner city public school with students of a diverse multi-ethnic population and a variety of economic backgrounds. Students were recruited from this broad spectrum, which covers the range of learning styles and academic achievement. This collaboration includes students of high, average, and below average academic levels, emphasizing participation of students with learning disabilities. In this classroom without walls, students apply the strategies and methodologies of problem-based learning in solving complicated tasks. The cooperative learning approach simulates the NASA method of problem solving, as students work in teams, share research and results. Students learn to recognize the complexity of certain tasks as they apply Earth Science, Mathematics, Physics, Technology and Engineering to design solutions. Their path very much follows the NASA model as they design and build various devices. Our Space Science curriculum presently consists of a one-year sequence of elective classes taken in conjunction with Regents-level science classes. This sequence consists of Remote Sensing, Planetology, Mission to Mars (NASA sponsored research program), and Microbiology, where future projects will be astronomy related. This

  8. Nasa's Ant-Inspired Swarmie Robots

    Science.gov (United States)

    Leucht, Kurt W.

    2016-01-01

    As humans push further beyond the grasp of earth, robotic missions in advance of human missions will play an increasingly important role. These robotic systems will find and retrieve valuable resources as part of an in-situ resource utilization (ISRU) strategy. They will need to be highly autonomous while maintaining high task performance levels. NASA Kennedy Space Center has teamed up with the Biological Computation Lab at the University of New Mexico to create a swarm of small, low-cost, autonomous robots to be used as a ground-based research platform for ISRU missions. The behavior of the robot swarm mimics the central-place foraging strategy of ants to find and collect resources in a previously unmapped environment and return those resources to a central site. This talk will guide the audience through the Swarmie robot project from its conception by students in a New Mexico research lab to its robot trials in an outdoor parking lot at NASA. The software technologies and techniques used on the project will be discussed, as well as various challenges and solutions that were encountered by the development team along the way.
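
    A minimal sketch of the central-place foraging behavior described above, assuming a grid world with the collection site at the origin: the robot random-walks while searching, and once it picks up a resource it heads straight back to the central site. The grid size, step budget, and resource count are invented for illustration and do not reflect the actual Swarmie behaviors or parameters.

        import random

        # Toy central-place forager on a grid: random-walk search, pick up a resource,
        # carry it back to the central site at the origin, then resume searching.
        # Grid size, step budget, and resource count are illustrative only.
        random.seed(1)
        GRID, STEPS = 20, 5000
        resources = {(random.randint(-GRID, GRID), random.randint(-GRID, GRID)) for _ in range(60)}

        def sign(n):
            return (n > 0) - (n < 0)

        def forage(steps=STEPS):
            x = y = 0
            carrying, collected = False, 0
            for _ in range(steps):
                if carrying:                                  # deterministic return to base
                    x, y = x - sign(x), y - sign(y)
                    if (x, y) == (0, 0):
                        carrying, collected = False, collected + 1
                else:                                         # random-walk search
                    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
                    x = max(-GRID, min(GRID, x + dx))
                    y = max(-GRID, min(GRID, y + dy))
                    if (x, y) in resources:
                        resources.remove((x, y))
                        carrying = True
            return collected

        print("resources returned to the central site:", forage())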

  9. Initial Flight Test of the Production Support Flight Control Computers at NASA Dryden Flight Research Center

    Science.gov (United States)

    Carter, John; Stephenson, Mark

    1999-01-01

    The NASA Dryden Flight Research Center has completed the initial flight test of a modified set of F/A-18 flight control computers that gives the aircraft a research control law capability. The production support flight control computers (PSFCC) provide an increased capability for flight research in the control law, handling qualities, and flight systems areas. The PSFCC feature a research flight control processor that is "piggybacked" onto the baseline F/A-18 flight control system. This research processor allows for pilot selection of research control law operation in flight. To validate flight operation, a replication of a standard F/A-18 control law was programmed into the research processor and flight-tested over a limited envelope. This paper provides a brief description of the system, summarizes the initial flight test of the PSFCC, and describes future experiments for the PSFCC.

  10. Performance of particle in cell methods on highly concurrent computational architectures

    International Nuclear Information System (INIS)

    Adams, M.F.; Ethier, S.; Wichmann, N.

    2009-01-01

    Particle in cell (PIC) methods are effective in computing the Vlasov-Poisson system of equations used in simulations of magnetic fusion plasmas. PIC methods use grid-based computations, for solving Poisson's equation or more generally Maxwell's equations, as well as Monte-Carlo type methods to sample the Vlasov equation. The presence of two types of discretizations, deterministic field solves and Monte-Carlo methods for the Vlasov equation, poses challenges in understanding and optimizing performance on today's large-scale computers, which require high levels of concurrency. These challenges arise from the need to optimize two very different types of processes and the interactions between them. Modern cache-based high-end computers have very deep memory hierarchies and high degrees of concurrency which must be utilized effectively to achieve good performance. The effective use of these machines requires maximizing concurrency by eliminating serial or redundant work and minimizing global communication. A related issue is minimizing the memory traffic between levels of the memory hierarchy because performance is often limited by the bandwidths and latencies of the memory system. This paper discusses some of the performance issues, particularly in regard to parallelism, of PIC methods. The gyrokinetic toroidal code (GTC) is used for these studies and a new radial grid decomposition is presented and evaluated. Scaling of the code is demonstrated on ITER-sized plasmas with up to 16K Cray XT3/4 cores.
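
    A minimal one-dimensional sketch of the two kinds of work the abstract contrasts, assuming a periodic electrostatic model: Monte-Carlo-style particle sampling and pushing (the Vlasov part) alongside a grid-based Poisson solve for the field. This toy illustrates only the data flow between the two discretizations; it is not GTC and contains none of the parallel decomposition discussed in the paper.

        import numpy as np

        # Toy 1D periodic electrostatic PIC step: deposit charge, solve Poisson on the grid
        # with an FFT, gather the field, and push the particles. Illustrative only.
        L, Ng, Np, dt = 2 * np.pi, 64, 10000, 0.05
        dx = L / Ng
        rng = np.random.default_rng(0)
        x = rng.uniform(0, L, Np)                 # particle positions
        v = rng.normal(0.0, 1.0, Np)              # sampled velocities (Monte-Carlo part)
        v += 0.1 * np.sin(x)                      # small perturbation to drive dynamics

        def pic_step(x, v):
            # 1) charge deposition onto the grid (nearest-grid-point weighting, unit background)
            idx = (x / dx).astype(int) % Ng
            rho = np.bincount(idx, minlength=Ng) * (L / Np) / dx - 1.0
            # 2) grid-based field solve: phi'' = -rho via FFT, then E = -phi'
            k = np.fft.fftfreq(Ng, d=dx) * 2 * np.pi
            rho_k = np.fft.fft(rho)
            phi_k = np.zeros_like(rho_k)
            phi_k[1:] = rho_k[1:] / (k[1:] ** 2)
            E = -np.real(np.fft.ifft(1j * k * phi_k))
            # 3) gather the field at the particles and push (unit positive charge and mass)
            v_new = v + E[idx] * dt
            x_new = (x + v_new * dt) % L
            return x_new, v_new

        for _ in range(100):
            x, v = pic_step(x, v)
        print("mean kinetic energy per particle:", 0.5 * np.mean(v ** 2))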

  11. Performance of particle in cell methods on highly concurrent computational architectures

    International Nuclear Information System (INIS)

    Adams, M F; Ethier, S; Wichmann, N

    2007-01-01

    Particle in cell (PIC) methods are effective in computing the Vlasov-Poisson system of equations used in simulations of magnetic fusion plasmas. PIC methods use grid-based computations, for solving Poisson's equation or more generally Maxwell's equations, as well as Monte-Carlo type methods to sample the Vlasov equation. The presence of two types of discretizations, deterministic field solves and Monte-Carlo methods for the Vlasov equation, poses challenges in understanding and optimizing performance on today's large-scale computers, which require high levels of concurrency. These challenges arise from the need to optimize two very different types of processes and the interactions between them. Modern cache-based high-end computers have very deep memory hierarchies and high degrees of concurrency which must be utilized effectively to achieve good performance. The effective use of these machines requires maximizing concurrency by eliminating serial or redundant work and minimizing global communication. A related issue is minimizing the memory traffic between levels of the memory hierarchy because performance is often limited by the bandwidths and latencies of the memory system. This paper discusses some of the performance issues, particularly in regard to parallelism, of PIC methods. The gyrokinetic toroidal code (GTC) is used for these studies and a new radial grid decomposition is presented and evaluated. Scaling of the code is demonstrated on ITER-sized plasmas with up to 16K Cray XT3/4 cores.

  12. NASA Data Archive Evaluation

    Science.gov (United States)

    Holley, Daniel C.; Haight, Kyle G.; Lindstrom, Ted

    1997-01-01

    The purpose of this study was to expose a range of naive individuals to the NASA Data Archive and to obtain feedback from them, with the goal of learning how useful people with varied backgrounds would find the Archive for research and other purposes. We processed 36 subjects in four experimental categories, designated in this report as C+R+, C+R-, C-R+ and C-R-, for computer experienced researchers, computer experienced non-researchers, non-computer experienced researchers, and non-computer experienced non-researchers, respectively. This report includes an assessment of general patterns of subject responses to the various aspects of the NASA Data Archive. Some of the aspects examined were interface-oriented, addressing such issues as whether the subject was able to locate information, figure out how to perform desired information retrieval tasks, etc. Other aspects were content-related. In doing these assessments, answers given to different questions were sometimes combined. This practice reflects the tendency of the subjects to provide answers expressing their experiences across question boundaries. Patterns of response are cross-examined by subject category in order to bring out deeper understandings of why subjects reacted the way they did to the archive. After the general assessment, there will be a more extensive summary of the replies received from the test subjects.

  13. An Interactive, Web-based High Performance Modeling Environment for Computational Epidemiology.

    Science.gov (United States)

    Deodhar, Suruchi; Bisset, Keith R; Chen, Jiangzhuo; Ma, Yifei; Marathe, Madhav V

    2014-07-01

    We present an integrated interactive modeling environment to support public health epidemiology. The environment combines a high resolution individual-based model with a user-friendly web-based interface that allows analysts to access the models and the analytics back-end remotely from a desktop or a mobile device. The environment is based on a loosely-coupled service-oriented-architecture that allows analysts to explore various counterfactual scenarios. As the modeling tools for public health epidemiology are getting more sophisticated, it is becoming increasingly hard for non-computational scientists to effectively use the systems that incorporate such models. Thus an important design consideration for an integrated modeling environment is to improve ease of use such that experimental simulations can be driven by the users. This is achieved by designing intuitive and user-friendly interfaces that allow users to design and analyze a computational experiment and steer the experiment based on the state of the system. A key feature of a system that supports this design goal is the ability to start, stop, pause and roll-back the disease propagation and intervention application process interactively. An analyst can access the state of the system at any point in time and formulate dynamic interventions based on additional information obtained through state assessment. In addition, the environment provides automated services for experiment set-up and management, thus reducing the overall time for conducting end-to-end experimental studies. We illustrate the applicability of the system by describing computational experiments based on realistic pandemic planning scenarios. The experiments are designed to demonstrate the system's capability and enhanced user productivity.
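
    A minimal sketch of the start, stop, pause, and roll-back control pattern the abstract emphasizes, assuming a generic step-based simulation whose state is checkpointed at every step so that an analyst can roll back and apply a dynamic intervention. The SteerableSimulation class, its state variables, and the growth-factor intervention are hypothetical stand-ins, not the actual system's interface.

        import copy

        class SteerableSimulation:
            """Toy step-based simulation with pause, roll-back, and dynamic interventions."""

            def __init__(self, state):
                self.state = state                       # e.g. {"day": 0, "infected": 10}
                self.checkpoints = [copy.deepcopy(state)]
                self.paused = False

            def step(self):
                if self.paused:
                    return
                self.state["day"] += 1
                self.state["infected"] = int(self.state["infected"] * self.state.get("growth", 1.3))
                self.checkpoints.append(copy.deepcopy(self.state))

            def pause(self):
                self.paused = True

            def resume(self):
                self.paused = False

            def rollback(self, day):
                """Return to an earlier checkpoint and discard later history."""
                self.state = copy.deepcopy(self.checkpoints[day])
                self.checkpoints = self.checkpoints[: day + 1]

            def intervene(self, **changes):
                """Apply a dynamic intervention, e.g. reduce the growth factor."""
                self.state.update(changes)

        sim = SteerableSimulation({"day": 0, "infected": 10, "growth": 1.3})
        for _ in range(5):
            sim.step()
        print("no intervention, day 5:", sim.state["infected"])
        sim.rollback(3)                     # inspect the state at day 3, then steer
        sim.intervene(growth=1.05)          # counterfactual: intervention applied at day 3
        for _ in range(2):
            sim.step()
        print("with intervention, day 5:", sim.state["infected"])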

  14. Opera: reconstructing optimal genomic scaffolds with high-throughput paired-end sequences.

    Science.gov (United States)

    Gao, Song; Sung, Wing-Kin; Nagarajan, Niranjan

    2011-11-01

    Scaffolding, the problem of ordering and orienting contigs, typically using paired-end reads, is a crucial step in the assembly of high-quality draft genomes. Even as sequencing technologies and mate-pair protocols have improved significantly, scaffolding programs still rely on heuristics, with no guarantees on the quality of the solution. In this work, we explored the feasibility of an exact solution for scaffolding and present a first tractable solution for this problem (Opera). We also describe a graph contraction procedure that allows the solution to scale to large scaffolding problems and demonstrate this by scaffolding several large real and synthetic datasets. In comparisons with existing scaffolders, Opera simultaneously produced longer and more accurate scaffolds demonstrating the utility of an exact approach. Opera also incorporates an exact quadratic programming formulation to precisely compute gap sizes (Availability: http://sourceforge.net/projects/operasf/ ).
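
    A minimal sketch of how paired-end links constrain a gap size, assuming a known mean library insert size: each mate pair spanning two adjacent contigs yields one noisy gap estimate, and averaging them is the least-squares answer for a single gap. This is only an illustration of the underlying constraint, not Opera's quadratic-programming formulation, and the numbers are made up.

        from statistics import mean

        # Each spanning mate pair covers: (tail of contig A) + gap + (head of contig B),
        # so with a known library insert size every link gives one noisy gap estimate.
        MEAN_INSERT = 3000          # expected mate-pair insert size, bp (illustrative)

        def estimate_gap(links):
            """links: list of (distance from read A to the end of contig A,
                               distance from the start of contig B to read B), in bp."""
            estimates = [MEAN_INSERT - a_tail - b_head for a_tail, b_head in links]
            return mean(estimates)          # least-squares estimate for a single gap

        links = [(1200, 900), (1500, 700), (1100, 1050)]
        print(f"estimated gap: {estimate_gap(links):.0f} bp")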

  15. Computational Analysis of G-Quadruplex Forming Sequences across Chromosomes Reveals High Density Patterns Near the Terminal Ends.

    Directory of Open Access Journals (Sweden)

    Julia H Chariker

    Full Text Available G-quadruplex structures (G4) are found throughout the human genome and are known to play a regulatory role in a variety of molecular processes. Structurally, they have many configurations and can form from one or more DNA strands. At the gene level, they regulate gene expression and protein synthesis. In this paper, chromosomal-level patterns of distribution are analyzed on the human genome to identify high-level distribution patterns potentially related to global functional processes. Here we show unique high density banding patterns on individual chromosomes that are highly correlated, appearing in a mirror pattern, across forward and reverse DNA strands. The highest density of G4 sequences occurs within four megabases of one end of most chromosomes and contains G4 motifs that bind with zinc finger proteins. These findings suggest that G4 may play a role in global chromosomal processes such as those found in meiosis.
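
    A minimal sketch of the kind of motif scan that underlies such an analysis, assuming the widely used putative-G4 pattern of four runs of three or more guanines separated by loops of one to seven bases; the window size and demonstration sequence are arbitrary.

        import re

        # Canonical putative G-quadruplex motif: four G-runs (>= 3 Gs) with 1-7 nt loops.
        G4_PATTERN = re.compile(r"G{3,}(?:[ACGT]{1,7}G{3,}){3}", re.IGNORECASE)

        def g4_density(seq, window=1000):
            """Count putative G4 motifs per non-overlapping window along one strand."""
            counts = []
            for start in range(0, max(len(seq) - window + 1, 1), window):
                counts.append(len(G4_PATTERN.findall(seq[start:start + window])))
            return counts

        demo = ("ATCG" * 200) + "GGGTTAGGGTTAGGGTTAGGG" + ("ATCG" * 200)  # telomere-like repeat
        print(g4_density(demo, window=500))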

  16. Establishing Esri ArcGIS Enterprise Platform Capabilities to Support Response Activities of the NASA Earth Science Disasters Program

    Science.gov (United States)

    Molthan, A.; Seepersad, J.; Shute, J.; Carriere, L.; Duffy, D.; Tisdale, B.; Kirschbaum, D.; Green, D. S.; Schwizer, L.

    2017-12-01

    NASA's Earth Science Disasters Program promotes the use of Earth observations to improve the prediction of, preparation for, response to, and recovery from natural and technological disasters. NASA Earth observations and those of domestic and international partners are combined with in situ observations and models by NASA scientists and partners to develop products supporting disaster mitigation, response, and recovery activities among several end-user partners. These products are accompanied by training to ensure proper integration and use of these materials in their organizations. Many products are integrated along with other observations available from other sources in GIS-capable formats to improve situational awareness and response efforts before, during and after a disaster. Large volumes of NASA observations support the generation of disaster response products by NASA field center scientists, partners in academia, and other institutions. For example, a prediction of high streamflows and inundation from a NASA-supported model may provide spatial detail of flood extent that can be combined with GIS information on population density, infrastructure, and land value to facilitate a prediction of who will be affected, and the economic impact. To facilitate the sharing of these outputs in a common framework that can be easily ingested by downstream partners, the NASA Earth Science Disasters Program partnered with Esri and the NASA Center for Climate Simulation (NCCS) to establish a suite of Esri/ArcGIS services to support the dissemination of routine and event-specific products to end users. This capability has been demonstrated to key partners including the Federal Emergency Management Agency using a case-study example of Hurricane Matthew, and will also help to support future domestic and international disaster events. The Earth Science Disasters Program has also established a longer-term vision to leverage scientists' expertise in the development and delivery of

  17. The high speed civil transport and NASA's High Speed Research (HSR) program

    Science.gov (United States)

    Shaw, Robert J.

    1994-01-01

    Ongoing studies being conducted not only in this country but in Europe and Asia suggest that a second generation supersonic transport, or High-Speed Civil Transport (HSCT), could become an important part of the 21st century international air transportation system. However, major environmental compatibility and economic viability issues must be resolved if the HSCT is to become a reality. This talk will overview the NASA High-Speed Research (HSR) program which is aimed at providing the U.S. industry with a technology base to allow them to consider launching an HSCT program early in the next century. The talk will also discuss some of the comparable activities going on within Europe and Japan.

  18. Data handling and visualization for NASA's science programs

    Science.gov (United States)

    Bredekamp, Joseph H. (Editor)

    1995-01-01

    Advanced information systems capabilities are essential to conducting NASA's scientific research mission. Access to these capabilities is no longer a luxury for a select few within the science community, but rather an absolute necessity for carrying out scientific investigations. The dependence on high performance computing and networking, as well as ready and expedient access to science data, metadata, and analysis tools is the fundamental underpinning for the entire research endeavor. At the same time, advances in the whole range of information technologies continues on an almost explosive growth path, reaching beyond the research community to affect the population as a whole. Capitalizing on and exploiting these advances are critical to the continued success of space science investigations. NASA must remain abreast of developments in the field and strike an appropriate balance between being a smart buyer and a direct investor in the technology which serves its unique requirements. Another key theme deals with the need for the space and computer science communities to collaborate as partners to more fully realize the potential of information technology in the space science research environment.

  19. The NASA Commercial Crew Program (CCP) Mission Assurance Process

    Science.gov (United States)

    Canfield, Amy

    2016-01-01

    In 2010, NASA established the Commercial Crew Program in order to provide human access to the International Space Station and low Earth orbit via the commercial (non-governmental) sector. A particular challenge to NASA has been how to determine that the commercial provider's transportation system complies with Programmatic safety requirements. The process used in this determination is the Safety Technical Review Board, which reviews and approves provider-submitted Hazard Reports. One significant product of the review is a set of hazard control verifications. In past NASA programs, 100 percent of these safety critical verifications were typically confirmed by NASA. The traditional Safety and Mission Assurance (SMA) model does not support the nature of the Commercial Crew Program. To that end, NASA SMA is implementing a Risk Based Assurance (RBA) process to determine which hazard control verifications require NASA authentication. Additionally, a Shared Assurance Model is also being developed to efficiently use the available resources to execute the verifications. This paper will describe the evolution of the CCP Mission Assurance process from the beginning of the Program to its current incarnation. Topics to be covered include a short history of the CCP; the development of the Programmatic mission assurance requirements; the current safety review process; a description of the RBA process and its products; and a description of the Shared Assurance Model.

  20. Computer Interactives for the Mars Atmospheric and Volatile Evolution (MAVEN) Mission through NASA's "Project Spectra!"

    Science.gov (United States)

    Wood, E. L.

    2014-12-01

    "Project Spectra!" is a standards-based E-M spectrum and engineering program that includes paper and pencil activities as well as Flash-based computer games that help students solidify understanding of high-level planetary and solar physics. Using computer interactive games, students experience and manipulate information making abstract concepts accessible, solidifying understanding and enhancing retention of knowledge. Since students can choose what to watch and explore, the interactives accommodate a broad range of learning styles. Students can go back and forth through the interactives if they've missed a concept or wish to view something again. In the end, students are asked critical thinking questions and conduct web-based research. As part of the Mars Atmospheric and Volatile EvolutioN (MAVEN) mission education programming, we've developed two new interactives. The MAVEN mission will study volatiles in the upper atmosphere to help piece together Mars' climate history. In the first interactive, students explore black body radiation, albedo, and a simplified greenhouse effect to establish what factors contribute to overall planetary temperature. Students design a planet that is able to maintain liquid water on the surface. In the second interactive, students are asked to consider conditions needed for Mars to support water on the surface, keeping some variables fixed. Ideally, students will walk away with the very basic and critical elements required for climate studies, which has far-reaching implications beyond the study of Mars. These interactives were pilot tested at Arvada High School in Colorado.

  1. Computing farms

    International Nuclear Information System (INIS)

    Yeh, G.P.

    2000-01-01

    High-energy physics, nuclear physics, space sciences, and many other fields have large challenges in computing. In recent years, PCs have achieved performance comparable to the high-end UNIX workstations, at a small fraction of the price. We review the development and broad applications of commodity PCs as the solution to CPU needs, and look forward to the important and exciting future of large-scale PC computing

  2. Summary of Training Workshop on the Use of NASA tools for Coastal Resource Management in the Gulf of Mexico

    Energy Technology Data Exchange (ETDEWEB)

    Judd, Chaeli; Judd, Kathleen S.; Gulbransen, Thomas C.; Thom, Ronald M.

    2009-03-01

    A two-day training workshop was held in Xalapa, Mexico, from March 10-11, 2009, with the goal of training end users from the southern Gulf of Mexico states of Campeche and Veracruz in the use of tools to support coastal resource management decision-making. The workshop was held at the computer laboratory of the Instituto de Ecología, A.C. (INECOL). This report summarizes the results of that workshop and is a deliverable to our NASA client.

  3. NASA Administrative Data Base Management Systems, 1984

    Science.gov (United States)

    Radosevich, J. D. (Editor)

    1984-01-01

    Strategies for converting to a data base management system (DBMS) and the implementation of the software packages necessary are discussed. Experiences with DBMS at various NASA centers are related, including Langley's ADABAS/NATURAL and the NEMS subsystem of the NASA metrology information system. The value of the integrated workstation with a personal computer is explored.

  4. Technological Innovations from NASA

    Science.gov (United States)

    Pellis, Neal R.

    2006-01-01

    The challenge of human space exploration places demands on technology that push concepts and development to the leading edge. In biotechnology and biomedical equipment development, NASA science has been the seed for numerous innovations, many of which are in the commercial arena. The biotechnology effort has led to rational drug design, analytical equipment, and cell culture and tissue engineering strategies. Biomedical research and development has resulted in medical devices that enable diagnosis and treatment advances. NASA Biomedical developments are exemplified in the new laser light scattering analysis for cataracts, the axial flow left ventricular-assist device, non-contact electrocardiography, and the guidance system for LASIK surgery. Many more developments are in progress. NASA will continue to advance technologies, incorporating new approaches from basic and applied research, nanotechnology, computational modeling, and database analyses.

  5. Emerging and Future Computing Paradigms and Their Impact on the Research, Training, and Design Environments of the Aerospace Workforce

    Science.gov (United States)

    Noor, Ahmed K. (Compiler)

    2003-01-01

    The document contains the proceedings of the training workshop on Emerging and Future Computing Paradigms and their impact on the Research, Training and Design Environments of the Aerospace Workforce. The workshop was held at NASA Langley Research Center, Hampton, Virginia, March 18 and 19, 2003. The workshop was jointly sponsored by Old Dominion University and NASA. Workshop attendees came from NASA, other government agencies, industry and universities. The objectives of the workshop were to a) provide broad overviews of the diverse activities related to new computing paradigms, including grid computing, pervasive computing, high-productivity computing, and the IBM-led autonomic computing; and b) identify future directions for research that have high potential for future aerospace workforce environments. The format of the workshop included twenty-one, half-hour overview-type presentations and three exhibits by vendors.

  6. Space Images for NASA/JPL

    Science.gov (United States)

    Boggs, Karen; Gutheinz, Sandy C.; Watanabe, Susan M.; Oks, Boris; Arca, Jeremy M.; Stanboli, Alice; Peez, Martin; Whatmore, Rebecca; Kang, Minliang; Espinoza, Luis A.

    2010-01-01

    Space Images for NASA/JPL is an Apple iPhone application that allows the general public to access featured images from the Jet Propulsion Laboratory (JPL). A back-end infrastructure stores, tracks, and retrieves space images from the JPL Photojournal Web server, and catalogs the information into a streamlined rating infrastructure.

  7. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  8. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance.The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  9. High-performance computing on the Intel Xeon Phi how to fully exploit MIC architectures

    CERN Document Server

    Wang, Endong; Shen, Bo; Zhang, Guangyong; Lu, Xiaowei; Wu, Qing; Wang, Yajuan

    2014-01-01

    The aim of this book is to explain to high-performance computing (HPC) developers how to utilize the Intel® Xeon Phi™ series products efficiently. To that end, it introduces some computing grammar, programming technology and optimization methods for using many-integrated-core (MIC) platforms and also offers tips and tricks for actual use, based on the authors' first-hand optimization experience.The material is organized in three sections. The first section, "Basics of MIC", introduces the fundamentals of MIC architecture and programming, including the specific Intel MIC programming environment

  10. Design and Computational Fluid Dynamics Optimization of the Tube End Effector for Reactor Pressure Vessel Head Type VVER-1000

    International Nuclear Information System (INIS)

    Novosel, D.

    2006-01-01

    This paper presents the development and optimization of a tube end effector design consisting of four ultrasonic transducers, four eddy-current transducers, and a radiation-proof dot camera. The design was driven by the main input requirements: the inner diameter of the tested reactor pressure vessel head penetration tube, the dimensions of the transducers, and the maximum allowable vertical movement of the manipulator connection rod needed to cover the entire inner tube surface. Ultrasonic testing requires a thin layer of liquid (water in this case) to provide physical contact between the transducer surface and the inspected inner tube surface. Computational Fluid Dynamics was used to determine the geometry of the transducer housing, as the most important design factor, the hydraulic parameters of the water supply and primary drain integrated into the housing, the vertical and rotational movement of the end effector, and the equipment needed to meet all hydraulic and pneumatic requirements. Because the inner tube surface is wetted and the contact between the transducer housing and the tested tube is not perfectly sealed, leakage of highly contaminated water can occur in the downstream direction. To reduce this leakage, a second water drain was developed using a diffuser assembly driven by a Venturi pipe, commercially known as a vacuum generator. Computational Fluid Dynamics was again used to optimize the geometry of the diffuser control volume for the highest efficiency, in other words, unobstructed fluid flux. Afterwards, the end effector system was synchronized with the existing operational system for NDT methods, all invented and designed by INETEC. (author)

  11. Model Reduction of Computational Aerothermodynamics for Multi-Discipline Analysis in High Speed Flows

    Science.gov (United States)

    Crowell, Andrew Rippetoe

    This dissertation describes model reduction techniques for the computation of aerodynamic heat flux and pressure loads for multi-disciplinary analysis of hypersonic vehicles. NASA and the Department of Defense have expressed renewed interest in the development of responsive, reusable hypersonic cruise vehicles capable of sustained high-speed flight and access to space. However, an extensive set of technical challenges has obstructed the development of such vehicles. These technical challenges are partially due both to the inability to accurately test scaled vehicles in wind tunnels and to the time-intensive nature of high-fidelity computational modeling, particularly for the fluid using Computational Fluid Dynamics (CFD). The aim of this dissertation is to develop efficient and accurate models for the aerodynamic heat flux and pressure loads to replace the need for computationally expensive, high-fidelity CFD during coupled analysis. Furthermore, aerodynamic heating and pressure loads are systematically evaluated for a number of different operating conditions, including: simple two-dimensional flow over flat surfaces up to three-dimensional flows over deformed surfaces with shock-shock interaction and shock-boundary layer interaction. An additional focus of this dissertation is on the implementation and computation of results using the developed aerodynamic heating and pressure models in complex fluid-thermal-structural simulations. Model reduction is achieved using a two-pronged approach. One prong focuses on developing analytical corrections to isothermal, steady-state CFD flow solutions in order to capture flow effects associated with transient spatially-varying surface temperatures and surface pressures (e.g., surface deformation, surface vibration, shock impingements, etc.). The second prong is focused on minimizing the computational expense of computing the steady-state CFD solutions by developing an efficient surrogate CFD model. The developed two
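
    A minimal sketch of the style of correction the first prong describes, assuming the common practice of rescaling a reference (isothermal-wall) heat flux by the ratio of wall-to-adiabatic-wall temperature differences, which amounts to holding the film coefficient from the steady CFD solution fixed. The numbers are placeholders, and this is not necessarily the dissertation's specific formulation.

        def corrected_heat_flux(q_ref, t_wall, t_wall_ref, t_adiabatic_wall):
            """Rescale a reference CFD heat flux to a different (e.g. transient) wall temperature.

            Assumes the film coefficient h = q_ref / (Taw - Tw_ref) from the steady
            isothermal-wall solution stays approximately constant, so q = h * (Taw - Tw).
            """
            h = q_ref / (t_adiabatic_wall - t_wall_ref)
            return h * (t_adiabatic_wall - t_wall)

        # Placeholder values: reference solution at a 300 K wall, recovery temperature 1800 K.
        q_ref = 250.0e3          # W/m^2 from the isothermal-wall CFD solution
        print(corrected_heat_flux(q_ref, t_wall=700.0, t_wall_ref=300.0,
                                  t_adiabatic_wall=1800.0) / 1e3, "kW/m^2")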

  12. Research Institute for Advanced Computer Science: Annual Report October 1998 through September 1999

    Science.gov (United States)

    Leiner, Barry M.; Gross, Anthony R. (Technical Monitor)

    1999-01-01

    The Research Institute for Advanced Computer Science (RIACS) carries out basic research and technology development in computer science, in support of the National Aeronautics and Space Administration's missions. RIACS is located at the NASA Ames Research Center (ARC). It currently operates under a multiple year grant/cooperative agreement that began on October 1, 1997 and is up for renewal in the year 2002. ARC has been designated NASA's Center of Excellence in Information Technology. In this capacity, ARC is charged with the responsibility to build an Information Technology Research Program that is preeminent within NASA. RIACS serves as a bridge between NASA ARC and the academic community, and RIACS scientists and visitors work in close collaboration with NASA scientists. RIACS has the additional goal of broadening the base of researchers in these areas of importance to the nation's space and aeronautics enterprises. RIACS research focuses on the three cornerstones of information technology research necessary to meet the future challenges of NASA missions: (1) Automated Reasoning for Autonomous Systems. Techniques are being developed enabling spacecraft that will be self-guiding and self-correcting to the extent that they will require little or no human intervention. Such craft will be equipped to independently solve problems as they arise, and fulfill their missions with minimum direction from Earth. (2) Human-Centered Computing. Many NASA missions require synergy between humans and computers, with sophisticated computational aids amplifying human cognitive and perceptual abilities; (3) High Performance Computing and Networking Advances in the performance of computing and networking continue to have major impact on a variety of NASA endeavors, ranging from modeling and simulation to data analysis of large datasets to collaborative engineering, planning and execution. In addition, RIACS collaborates with NASA scientists to apply information technology research to

  13. Software Accelerates Computing Time for Complex Math

    Science.gov (United States)

    2014-01-01

    Ames Research Center awarded Newark, Delaware-based EM Photonics Inc. SBIR funding to utilize graphics processing unit (GPU) technology, traditionally used for computer video games, to develop high-performance computing software called CULA. The software gives users the ability to run complex algorithms on personal computers with greater speed. As a result of the NASA collaboration, the number of employees at the company has increased 10 percent.

  14. Computationally-optimized bone mechanical modeling from high-resolution structural images.

    Directory of Open Access Journals (Sweden)

    Jeremy F Magland

    Full Text Available Image-based mechanical modeling of the complex micro-structure of human bone has shown promise as a non-invasive method for characterizing bone strength and fracture risk in vivo. In particular, elastic moduli obtained from image-derived micro-finite element (μFE) simulations have been shown to correlate well with results obtained by mechanical testing of cadaveric bone. However, most existing large-scale finite-element simulation programs require significant computing resources, which hamper their use in common laboratory and clinical environments. In this work, we theoretically derive and computationally evaluate the resources needed to perform such simulations (in terms of computer memory and computation time), which are dependent on the number of finite elements in the image-derived bone model. A detailed description of our approach is provided, which is specifically optimized for μFE modeling of the complex three-dimensional architecture of trabecular bone. Our implementation includes domain decomposition for parallel computing, a novel stopping criterion, and a system for speeding up convergence by pre-iterating on coarser grids. The performance of the system is demonstrated on a machine with dual quad-core Xeon 3.16 GHz CPUs equipped with 40 GB of RAM. Models of the distal tibia derived from 3D in-vivo MR images of a patient, comprising 200,000 elements, required less than 30 seconds (and 40 MB of RAM) to converge. To illustrate the system's potential for large-scale μFE simulations, axial stiffness was estimated from high-resolution micro-CT images of a voxel array of 90 million elements comprising the human proximal femur in seven hours of CPU time. In conclusion, the system described should enable image-based finite-element bone simulations in practical computation times on high-end desktop computers, with applications to laboratory studies and clinical imaging.
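
    A minimal one-dimensional sketch of the 'pre-iterating on coarser grids' idea, with a discrete Laplacian standing in for the μFE stiffness matrix: a cheap coarse-grid solve is interpolated to the fine grid and used as the starting guess for the fine-grid conjugate-gradient iterations. Grid sizes and tolerances are arbitrary, and nothing here reflects the parallel implementation described in the paper.

        import numpy as np

        def laplacian(n):
            """Dense 1D Laplacian (fixed ends) - a stand-in for a stiffness matrix."""
            return 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)

        def conjugate_gradient(A, b, x0, tol=1e-6, max_iter=5000):
            """Plain CG with a relative-residual stopping criterion."""
            x = x0.copy()
            r = b - A @ x
            p, rs = r.copy(), r @ r
            for it in range(1, max_iter + 1):
                Ap = A @ p
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol * np.linalg.norm(b):
                    return x, it
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x, max_iter

        n, nc = 255, 63                                     # fine grid and 4x-coarser grid
        b_fine, b_coarse = np.ones(n), 16.0 * np.ones(nc)   # RHS scales with h^2 for this operator

        # Coarse pre-iteration, then linear interpolation up to the fine grid.
        x_coarse, _ = conjugate_gradient(laplacian(nc), b_coarse, np.zeros(nc), tol=1e-3)
        xp = np.concatenate(([0.0], np.arange(1, nc + 1) / (nc + 1), [1.0]))
        fp = np.concatenate(([0.0], x_coarse, [0.0]))
        guess = np.interp(np.arange(1, n + 1) / (n + 1), xp, fp)

        exact = np.linalg.solve(laplacian(n), b_fine)
        print("initial error, zero guess:       ", np.linalg.norm(exact))
        print("initial error, coarse-grid guess:", np.linalg.norm(exact - guess))

        x, iters = conjugate_gradient(laplacian(n), b_fine, guess)
        print("fine-grid CG iterations from the coarse-grid guess:", iters)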

  15. Climbing the Slope of Enlightenment during NASA's Arctic Boreal Vulnerability Experiment

    Science.gov (United States)

    Griffith, P. C.; Hoy, E.; Duffy, D.; McInerney, M.

    2015-12-01

    The Arctic Boreal Vulnerability Experiment (ABoVE) is a new field campaign sponsored by NASA's Terrestrial Ecology Program and designed to improve understanding of the vulnerability and resilience of Arctic and boreal social-ecological systems to environmental change (http://above.nasa.gov). ABoVE is integrating field-based studies, modeling, and data from airborne and satellite remote sensing. The NASA Center for Climate Simulation (NCCS) has partnered with the NASA Carbon Cycle and Ecosystems Office (CCEO) to create a high performance science cloud for this field campaign. The ABoVE Science Cloud combines high performance computing with emerging technologies and data management with tools for analyzing and processing geographic information to create an environment specifically designed for large-scale modeling, analysis of remote sensing data, copious disk storage for "big data" with integrated data management, and integration of core variables from in-situ networks. The ABoVE Science Cloud is a collaboration that is accelerating the pace of new Arctic science for researchers participating in the field campaign. Specific examples of the utilization of the ABoVE Science Cloud by several funded projects will be presented.

  16. NASA Center for Climate Simulation (NCCS) Presentation

    Science.gov (United States)

    Webster, William P.

    2012-01-01

    The NASA Center for Climate Simulation (NCCS) offers integrated supercomputing, visualization, and data interaction technologies to enhance NASA's weather and climate prediction capabilities. It serves hundreds of users at NASA Goddard Space Flight Center, as well as other NASA centers, laboratories, and universities across the US. Over the past year, NCCS has continued expanding its data-centric computing environment to meet the increasingly data-intensive challenges of climate science. We doubled our Discover supercomputer's peak performance to more than 800 teraflops by adding 7,680 Intel Xeon Sandy Bridge processor-cores and most recently 240 Intel Xeon Phi Many Integrated Core (MIC) co-processors. A supercomputing-class analysis system named Dali gives users rapid access to their data on Discover and high-performance software including the Ultra-scale Visualization Climate Data Analysis Tools (UV-CDAT), with interfaces from user desktops and a 17- by 6-foot visualization wall. NCCS also is exploring highly efficient climate data services and management with a new MapReduce/Hadoop cluster while augmenting its data distribution to the science community. Using NCCS resources, NASA completed its modeling contributions to the Intergovernmental Panel on Climate Change (IPCC) Fifth Assessment Report this summer as part of the ongoing Coupled Model Intercomparison Project Phase 5 (CMIP5). Ensembles of simulations run on Discover reached back to the year 1000 to test model accuracy and projected climate change through the year 2300 based on four different scenarios of greenhouse gases, aerosols, and land use. The data resulting from several thousand IPCC/CMIP5 simulations, as well as a variety of other simulation, reanalysis, and observation datasets, are available to scientists and decision makers through an enhanced NCCS Earth System Grid Federation Gateway. Worldwide downloads have totaled over 110 terabytes of data.

  17. An overview of flight computer technologies for future NASA

    Science.gov (United States)

    Alkalai, L.

    2001-01-01

    In this paper, we present an overview of current developments by several US Government agencies and associated programs toward high-performance single-board computers for use in space. Three separate projects will be described: two based on the PowerPC processor and one based on the Pentium processor.

  18. Progress update of NASA's free-piston Stirling space power converter technology project

    Science.gov (United States)

    Dudenhoefer, James E.; Winter, Jerry M.; Alger, Donald

    1992-01-01

    A progress update is presented of the NASA LeRC Free-Piston Stirling Space Power Converter Technology Project. This work is being conducted under NASA's Civil Space Technology Initiative (CSTI). The goal of the CSTI High Capacity Power Element is to develop the technology base needed to meet the long duration, high capacity power requirements for future NASA space initiatives. Efforts are focused upon increasing system power output and system thermal and electric energy conversion efficiency at least fivefold over current SP-100 technology, and on achieving systems that are compatible with space nuclear reactors. This paper will discuss progress toward 1050 K Stirling Space Power Converters. Fabrication is nearly completed for the 1050 K Component Test Power Converter (CTPC); results of motoring tests of the cold end (525 K) are presented. The success of these and future designs is dependent upon supporting research and technology efforts including heat pipes, bearings, superalloy joining technologies, high efficiency alternators, life and reliability testing, and predictive methodologies. This paper will compare progress in significant areas of component development from the start of the program with the Space Power Development Engine (SPDE) to the present work on CTPC.

  19. NASA/FAA North Texas Research Station Overview

    Science.gov (United States)

    Borchers, Paul F.

    2012-01-01

    NTX Research Station: NASA research assets embedded in an interesting operational air transport environment. Seven personnel (2 civil servants, 5 contractors). ARTCC, TRACON, Towers, 3 air carrier AOCs (American, Eagle and Southwest), and 2 major airports all within 12 miles. Supports NASA Airspace Systems Program with research products at all levels (fundamental to system level). NTX Laboratory: 5000 sq ft purpose-built, dedicated, air traffic management research facility. Established data links to ARTCC, TRACON, Towers, air carriers, airport and NASA facilities. Re-configurable computer labs, dedicated radio tower, state-of-the-art equipment.

  20. An automated meta-monitoring mobile application and front-end interface for the ATLAS computing model

    Energy Technology Data Exchange (ETDEWEB)

    Kawamura, Gen; Quadt, Arnulf [II. Physikalisches Institut, Georg-August-Universitaet Goettingen (Germany)

    2016-07-01

    Efficient administration of computing centres requires advanced tools for the monitoring and front-end interface of the infrastructure. Large-scale distributed systems operated as a global grid infrastructure, such as the Worldwide LHC Computing Grid (WLCG) and ATLAS computing, offer many existing web pages and information sources indicating the status of the services, systems and user jobs at grid sites. A meta-monitoring mobile application which automatically collects this information could give every administrator a sophisticated and flexible interface to the infrastructure. We describe such a solution: the MadFace mobile application developed at Goettingen. It is a HappyFace-compatible mobile application with a user-friendly interface. It also makes it feasible to automatically investigate status and problems from different sources and provides access to administration roles for non-experts.
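
    The aggregation idea can be illustrated with a minimal poller (a sketch only; the URLs and JSON layout are invented for illustration, and the real HappyFace/MadFace interfaces differ):

```python
import json
import urllib.request

# Hypothetical status sources; real grid monitoring pages have their own formats.
SOURCES = {
    "site_services": "https://example.org/status.json",
    "user_jobs": "https://example.org/jobs.json",
}

def collect_status(sources, timeout=10):
    """Poll each source and aggregate results; a failing source is recorded as an error."""
    summary = {}
    for name, url in sources.items():
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                summary[name] = json.load(resp)
        except Exception as exc:
            summary[name] = {"error": str(exc)}
    return summary

if __name__ == "__main__":
    print(json.dumps(collect_status(SOURCES), indent=2))
```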

  1. Computing in high energy physics

    International Nuclear Information System (INIS)

    Hertzberger, L.O.; Hoogland, W.

    1986-01-01

    This book deals with advanced computing applications in physics, and in particular in high energy physics environments. The main subjects covered are networking; vector and parallel processing; and embedded systems. Also examined are topics such as operating systems, future computer architectures and commercial computer products. The book presents solutions that are foreseen as coping, in the future, with computing problems in experimental and theoretical High Energy Physics. In the experimental environment the large amounts of data to be processed offer special problems on-line as well as off-line. For on-line data reduction, embedded special purpose computers, which are often used for trigger applications, are applied. For off-line processing, parallel computers such as emulator farms and the Cosmic Cube may be employed. The analysis of these topics is therefore a main feature of this volume.

  2. Computer Technology for Industry

    Science.gov (United States)

    1979-01-01

    In this age of the computer, more and more business firms are automating their operations for increased efficiency in a great variety of jobs, from simple accounting to managing inventories, from precise machining to analyzing complex structures. In the interest of national productivity, NASA is providing assistance both to longtime computer users and newcomers to automated operations. Through a special technology utilization service, NASA saves industry time and money by making available already developed computer programs which have secondary utility. A computer program is essentially a set of instructions which tells the computer how to produce desired information or effects by drawing upon its stored input. Developing a new program from scratch can be costly and time-consuming. Very often, however, a program developed for one purpose can readily be adapted to a totally different application. To help industry take advantage of existing computer technology, NASA operates the Computer Software Management and Information Center (COSMIC) (registered trademark), located at the University of Georgia. COSMIC maintains a large library of computer programs developed for NASA, the Department of Defense, the Department of Energy and other technology-generating agencies of the government. The Center gets a continual flow of software packages, screens them for adaptability to private sector usage, stores them and informs potential customers of their availability.

  3. NASA Game Changing Development Program Manufacturing Innovation Project

    Science.gov (United States)

    Tolbert, Carol; Vickers, John

    2011-01-01

    This presentation examines the new NASA Manufacturing Innovation Project. The project is a part of the Game Changing Development Program, which is one element of the Space Technology Programs managed by the Office of the Chief Technologist. The project includes innovative technologies in model-based manufacturing, digital additive manufacturing, and other next generation manufacturing tools. The project is also coupled with the larger federal initiatives in this area including the National Digital Engineering and Manufacturing Initiative and the Advanced Manufacturing Partnership. In addition to NASA, other interagency partners include the Department of Defense, Department of Commerce, NIST, Department of Energy, and the National Science Foundation. The development of game-changing manufacturing technologies is critical for NASA's mission of exploration, strengthening America's manufacturing competitiveness, and is highly related to current challenges in defense manufacturing activities. There is strong consensus across industry, academia, and government that the future competitiveness of U.S. industry will be determined, in large part, by a technologically advanced manufacturing sector. This presentation highlights the prospects of next generation manufacturing technologies for addressing the challenges faced by NASA and by the Department of Defense. The project focuses on maturing innovative/high payoff model-based manufacturing technologies that may lead to entirely new approaches for a broad array of future NASA missions and solutions to significant national needs. Digital manufacturing and computer-integrated manufacturing "virtually" guarantee advantages in quality, speed, and cost and offer many long-term benefits across the entire product lifecycle. This paper addresses key enablers and emerging strategies in areas such as: Current government initiatives, Model-based manufacturing, and Additive manufacturing.

  4. A Heterogeneous High-Performance System for Computational and Computer Science

    Science.gov (United States)

    2016-11-15

    expand the research infrastructure at the institution but also to enhance the high-performance computing training provided to both undergraduate and... cloud computing, supercomputing, and the availability of cheap memory and storage led to enormous amounts of data to be sifted through in forensic... High-Performance Computing (HPC) tools that can be integrated with existing curricula and support our research to modernize and dramatically advance

  5. Overview of the NASA/RECON educational, research, and development activities of the Computer Science Departments of the University of Southwestern Louisiana and Southern University

    Science.gov (United States)

    Dominick, Wayne D. (Editor)

    1984-01-01

    This document presents a brief overview of the scope of activities undertaken by the Computer Science Departments of the University of Southwestern Louisiana (USL) and Southern University (SU) pursuant to a contract with NASA. Presented are only basic identification data concerning the contract activities, since subsequent entries within the Working Paper Series will be oriented specifically toward a detailed development and presentation of plans, methodologies, and results of each contract activity. Also included is a table of contents of the entire USL/DBMS NASA/RECON Working Paper Series.

  6. A large-scale computer facility for computational aerodynamics

    International Nuclear Information System (INIS)

    Bailey, F.R.; Balhaus, W.F.

    1985-01-01

    The combination of computer system technology and numerical modeling has advanced to the point that computational aerodynamics has emerged as an essential element in aerospace vehicle design methodology. To provide for further advances in modeling of aerodynamic flow fields, NASA has initiated at the Ames Research Center the Numerical Aerodynamic Simulation (NAS) Program. The objective of the Program is to develop a leading-edge, large-scale computer facility, and make it available to NASA, DoD, other Government agencies, industry and universities as a necessary element in ensuring continuing leadership in computational aerodynamics and related disciplines. The Program will establish an initial operational capability in 1986 and systematically enhance that capability by incorporating evolving improvements in state-of-the-art computer system technologies as required to maintain a leadership role. This paper briefly reviews the present and future requirements for computational aerodynamics and discusses the Numerical Aerodynamic Simulation Program objectives, computational goals, and implementation plans.

  7. High energy physics computing in Japan

    International Nuclear Information System (INIS)

    Watase, Yoshiyuki

    1989-01-01

    A brief overview of the computing provision for high energy physics in Japan is presented. Most of the computing power for high energy physics is concentrated in KEK. Here there are two large scale systems: one providing a general computing service including vector processing and the other dedicated to TRISTAN experiments. Each university group has a smaller sized mainframe or VAX system to facilitate both their local computing needs and the remote use of the KEK computers through a network. The large computer system for the TRISTAN experiments is described. An overview of a prospective future large facility is also given. (orig.)

  8. SSR_pipeline: a bioinformatic infrastructure for identifying microsatellites from paired-end Illumina high-throughput DNA sequencing data

    Science.gov (United States)

    Miller, Mark P.; Knaus, Brian J.; Mullins, Thomas D.; Haig, Susan M.

    2013-01-01

    SSR_pipeline is a flexible set of programs designed to efficiently identify simple sequence repeats (e.g., microsatellites) from paired-end high-throughput Illumina DNA sequencing data. The program suite contains 3 analysis modules along with a fourth control module that can automate analyses of large volumes of data. The modules are used to 1) identify the subset of paired-end sequences that pass Illumina quality standards, 2) align paired-end reads into a single composite DNA sequence, and 3) identify sequences that possess microsatellites (both simple and compound) conforming to user-specified parameters. The microsatellite search algorithm is extremely efficient, and we have used it to identify repeats with motifs from 2 to 25bp in length. Each of the 3 analysis modules can also be used independently to provide greater flexibility or to work with FASTQ or FASTA files generated from other sequencing platforms (Roche 454, Ion Torrent, etc.). We demonstrate use of the program with data from the brine fly Ephydra packardi (Diptera: Ephydridae) and provide empirical timing benchmarks to illustrate program performance on a common desktop computer environment. We further show that the Illumina platform is capable of identifying large numbers of microsatellites, even when using unenriched sample libraries and a very small percentage of the sequencing capacity from a single DNA sequencing run. All modules from SSR_pipeline are implemented in the Python programming language and can therefore be used from nearly any computer operating system (Linux, Macintosh, and Windows).
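
    The core microsatellite search described above can be sketched with a short regular-expression scan (this is an illustration only, not SSR_pipeline's implementation; the motif-length and repeat-count thresholds are placeholders):

```python
import re

def find_ssrs(seq, min_motif=2, max_motif=6, min_repeats=4):
    """Yield (position, motif, repeat_count) for perfect tandem repeats in a DNA string."""
    seq = seq.upper()
    pattern = re.compile(
        r"(([ACGT]{%d,%d}?)\2{%d,})" % (min_motif, max_motif, min_repeats - 1)
    )
    for m in pattern.finditer(seq):
        run, motif = m.group(1), m.group(2)
        yield m.start(), motif, len(run) // len(motif)

if __name__ == "__main__":
    for pos, motif, count in find_ssrs("GGATCACACACACACACTTAGGCTGCTGCTGCTAA"):
        print(pos, motif, count)
```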

  9. SSR_pipeline: a bioinformatic infrastructure for identifying microsatellites from paired-end Illumina high-throughput DNA sequencing data.

    Science.gov (United States)

    Miller, Mark P; Knaus, Brian J; Mullins, Thomas D; Haig, Susan M

    2013-01-01

    SSR_pipeline is a flexible set of programs designed to efficiently identify simple sequence repeats (e.g., microsatellites) from paired-end high-throughput Illumina DNA sequencing data. The program suite contains 3 analysis modules along with a fourth control module that can automate analyses of large volumes of data. The modules are used to 1) identify the subset of paired-end sequences that pass Illumina quality standards, 2) align paired-end reads into a single composite DNA sequence, and 3) identify sequences that possess microsatellites (both simple and compound) conforming to user-specified parameters. The microsatellite search algorithm is extremely efficient, and we have used it to identify repeats with motifs from 2 to 25 bp in length. Each of the 3 analysis modules can also be used independently to provide greater flexibility or to work with FASTQ or FASTA files generated from other sequencing platforms (Roche 454, Ion Torrent, etc.). We demonstrate use of the program with data from the brine fly Ephydra packardi (Diptera: Ephydridae) and provide empirical timing benchmarks to illustrate program performance on a common desktop computer environment. We further show that the Illumina platform is capable of identifying large numbers of microsatellites, even when using unenriched sample libraries and a very small percentage of the sequencing capacity from a single DNA sequencing run. All modules from SSR_pipeline are implemented in the Python programming language and can therefore be used from nearly any computer operating system (Linux, Macintosh, and Windows).

  10. High yield polyol synthesis of round- and sharp-end silver nanowires with high aspect ratio

    Energy Technology Data Exchange (ETDEWEB)

    Nekahi, A.; Marashi, S.P.H., E-mail: pmarashi@aut.ac.ir; Fatmesari, D. Haghshenas

    2016-12-01

    Long silver nanowires (average length of 28 μm, average aspect ratio of 130) with uniform diameter along their length were produced by polyol synthesis of AgNO{sub 3} in ethylene glycol in the presence of PVP as preferential growth agent. Nanowires were produced with no addition of chloride salts such as NaCl or CuCl{sub 2} (or other additives such as Na{sub 2}S) which are usually used for lowering reduction rate of Ag ions by additional etchant of O{sub 2}/Cl{sup −}. Lower reduction rate was obtained by increasing the injection time of PVP and AgNO{sub 3} solutions, which was the significant factor in the formation of nanowires. Therefore, there was enough time for reduced Ag atoms to be deposited preferentially in the direction of PVP chains, resulting in high yield (the fraction of nanowires in the products) of nanowires (more than 95%) with high aspect ratio. The produced nanowires had both round- and sharp-ends with pentagonal cross section. Higher energy level of Ag atoms in borders of MTPs, which increases the dissolution rate of precipitated atoms, in addition to partial melting of MTPs at high synthesis temperatures, leads to the curving of the surfaces of exposed (111) crystalline planes in some MTPs and the formation of round-end silver nanowires. - Highlights: • Long silver nanowires with high aspect ratio of 130 were produced. • More than 95% nanowires were produced in products. • The produced nanowires had round- and sharp-ends with pentagonal cross section. • Additives were needed neither for high yield synthesis nor for round-end nanowires. • Melting and etching of MTPs in high energy borders resulted to round-end nanowires.

  11. High-performance computing in seismology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  12. NASA Space Engineering Research Center for VLSI systems design

    Science.gov (United States)

    1991-01-01

    This annual review reports the center's activities and findings on very large scale integration (VLSI) systems design for 1990, including project status, financial support, publications, the NASA Space Engineering Research Center (SERC) Symposium on VLSI Design, research results, and outreach programs. Processor chips completed or under development are listed. Research results summarized include a design technique to harden complementary metal oxide semiconductors (CMOS) memory circuits against single event upset (SEU); improved circuit design procedures; and advances in computer aided design (CAD), communications, computer architectures, and reliability design. Also described is a high school teacher program that exposes teachers to the fundamentals of digital logic design.

  13. NASA Microgravity Science Competition for High-school-aged Student Teams

    Science.gov (United States)

    DeLombard, Richard; Stocker, Dennis; Hodanbosi, Carol; Baumann, Eric

    2002-01-01

    NASA participates in a wide variety of educational activities including competitive events. There are competitive events sponsored by NASA and student teams which are mentored by NASA centers. This participation by NASA in public forums serves to bring the excitement of aerospace science to students and educators. A new competition for high-school-aged student teams involving projects in microgravity has completed two pilot years and will have national eligibility for teams during the 2002-2003 school year. A team participating in the Dropping In a Microgravity Environment (DIME) competition will research the field of microgravity, develop a hypothesis, and prepare a proposal for an experiment to be conducted in a microgravity drop tower facility. A team of NASA scientists and engineers will select the top proposals and those teams will then design and build their experiment apparatus. When the experiment apparatus are completed, team representatives will visit NASA Glenn in Cleveland, Ohio, for operation of their experiments in the facility and to participate in workshops and center tours. Presented in this paper will be a description of DIME, an overview of the planning and execution of such a program, results from the first two pilot years, and a status of the first national competition.

  14. Final Report on XStack: Software Synthesis for High Productivity ExaScale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Solar-Lezama, Armando [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States). Computer Science and Artificial Intelligence Lab.

    2016-07-12

    The goal of the project was to develop a programming model that would significantly improve productivity in the high-performance computing domain by bringing together three components: a) Automated equivalence checking, b) Sketch-based program synthesis, and c) Autotuning. The report provides an executive summary of the research accomplished through this project. At the end of the report is appended a paper that describes in more detail the key technical accomplishments from this project, and which was published in SC 2014.

  15. Update on NASA Microelectronics Activities

    Science.gov (United States)

    Label, Kenneth A.; Sampson, Michael J.; Casey, Megan; Lauenstein, Jean-Marie

    2017-01-01

    Mission Statement: The NASA Electronic Parts and Packaging (NEPP) Program provides NASA's leadership for developing and maintaining guidance for the screening, qualification, test, and usage of EEE parts by NASA as well as in collaboration with other government agencies and industry. NASA Space Technology Mission Directorate (STMD) "STMD rapidly develops, demonstrates, and infuses revolutionary, high-payoff technologies through transparent, collaborative partnerships, expanding the boundaries of the aerospace enterprise." Mission Statement: The Space Environments Testing Management Office (SETMO) will identify, prioritize, and manage a select suite of Agency key capabilities/assets that are deemed to be essential to the future needs of NASA or the nation, including some capabilities that lack an adequate business base over the budget horizon. The NESC mission is to perform value-added independent testing, analysis, and assessments of NASA's high-risk projects to ensure safety and mission success. NASA Space Environments and Avionics Fellows as well as Radiation and EEE Parts Community of Practice (CoP) leads.

  16. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Science.gov (United States)

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  17. An Overview of NASA's Integrated Design and Engineering Analysis (IDEA) Environment

    Science.gov (United States)

    Robinson, Jeffrey S.

    2011-01-01

    Historically, the design of subsonic and supersonic aircraft has been divided into separate technical disciplines (such as propulsion, aerodynamics and structures), each of which performs design and analysis in relative isolation from others. This is possible, in most cases, either because the amount of interdisciplinary coupling is minimal, or because the interactions can be treated as linear. The design of hypersonic airbreathing vehicles, like NASA's X-43, is quite the opposite. Such systems are dominated by strong non-linear interactions between disciplines. The design of these systems demands that a multi-disciplinary approach be taken. Furthermore, increased analytical fidelity at the conceptual design phase is highly desirable, as many of the non-linearities are not captured by lower fidelity tools. Only when these systems are designed from a true multi-disciplinary perspective, can the real performance benefits be achieved and complete vehicle systems be fielded. Toward this end, the Vehicle Analysis Branch at NASA Langley Research Center has been developing the Integrated Design and Engineering Analysis (IDEA) Environment. IDEA is a collaborative environment for parametrically modeling conceptual and preliminary designs for launch vehicle and high speed atmospheric flight configurations using the Adaptive Modeling Language (AML) as the underlying framework. The environment integrates geometry, packaging, propulsion, trajectory, aerodynamics, aerothermodynamics, engine and airframe subsystem design, thermal and structural analysis, and vehicle closure into a generative, parametric, unified computational model where data is shared seamlessly between the different disciplines. Plans are also in place to incorporate life cycle analysis tools into the environment which will estimate vehicle operability, reliability and cost. IDEA is currently being funded by NASA's Hypersonics Project, a part of the Fundamental Aeronautics Program within the Aeronautics

  18. The DEVELOP National Program: Building Dual Capacity in Decision Makers and Young Professionals Through NASA Earth Observations

    Science.gov (United States)

    Childs, L. M.; Rogers, L.; Favors, J.; Ruiz, M.

    2012-12-01

    Through the years, NASA has played a vital role in advancing Earth System Science to meet the challenges of environmental management and policy decision making. Within NASA's Earth Science Division's Applied Sciences Program, the DEVELOP National Program seeks to extend NASA Earth Science for societal benefit. DEVELOP is a capacity building program providing young professionals and students the opportunity to utilize NASA Earth observations and model output to demonstrate practical applications of those resources to society. Under the guidance of science advisors, DEVELOP teams work in alignment with local, regional, national and international partner organizations to identify the widest array of practical uses for NASA data to enhance related management decisions. The program's structure facilitates a two-fold approach to capacity building by fostering an environment of scientific and professional development opportunities for young professionals and students, while also providing end-user organizations enhanced management and decision making tools for issues impacting their communities. With the competitive nature and growing societal role of science and technology in today's global workplace, DEVELOP is building capacity in the next generation of scientists and leaders by fostering a learning and growing environment where young professionals possess an increased understanding of teamwork, personal development, and scientific/professional development and NASA's Earth Observation System. DEVELOP young professionals are partnered with end user organizations to conduct 10-week feasibility studies that demonstrate the use of NASA Earth science data for enhanced decision making. As a result of the partnership, end user organizations are introduced to NASA Earth Science technologies and capabilities, new methods to augment current practices, hands-on training with practical applications of remote sensing and NASA Earth science, improved remote

  19. INSPIRED High School Computing Academies

    Science.gov (United States)

    Doerschuk, Peggy; Liu, Jiangjiang; Mann, Judith

    2011-01-01

    If we are to attract more women and minorities to computing we must engage students at an early age. As part of its mission to increase participation of women and underrepresented minorities in computing, the Increasing Student Participation in Research Development Program (INSPIRED) conducts computing academies for high school students. The…

  20. Computer-Assisted, Self-Interviewing (CASI) Compared to Face-to-Face Interviewing (FTFI) with Open-Ended, Non-Sensitive Questions

    OpenAIRE

    John Fairweather PhD; Tiffany Rinne PhD; Gary Steel PhD

    2012-01-01

    This article reports results from research on cultural models, and assesses the effects of computers on data quality by comparing open-ended questions asked in two formats—face-to-face interviewing (FTFI) and computer-assisted, self-interviewing (CASI). We expected that for our non-sensitive topic, FTFI would generate fuller and richer accounts because the interviewer could facilitate the interview process. Although the interviewer indeed facilitated these interviews, which resulted in more w...

  1. A brain-computer interface as input channel for a standard assistive technology software.

    Science.gov (United States)

    Zickler, Claudia; Riccio, Angela; Leotta, Francesco; Hillian-Tress, Sandra; Halder, Sebastian; Holz, Elisa; Staiger-Sälzer, Pit; Hoogerwerf, Evert-Jan; Desideri, Lorenzo; Mattia, Donatella; Kübler, Andrea

    2011-10-01

    Recently brain-computer interface (BCI) control was integrated into the commercial assistive technology product QualiWORLD (QualiLife Inc., Paradiso-Lugano, CH). Usability of the first prototype was evaluated in terms of effectiveness (accuracy), efficiency (information transfer rate and subjective workload/NASA Task Load Index) and user satisfaction (Quebec User Evaluation of Satisfaction with assistive Technology, QUEST 2.0) by four end-users with severe disabilities. Three assistive technology experts evaluated the device from a third person perspective. The results revealed high performance levels in communication and internet tasks. Users and assistive technology experts were quite satisfied with the device. However, none could imagine using the device in daily life without improvements. Main obstacles were the EEG-cap and low speed.
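
    The efficiency measure mentioned above, information transfer rate, is commonly computed with the Wolpaw formula; a minimal, generic sketch (not the study's own analysis code) is shown below:

```python
from math import log2

def wolpaw_itr(n_choices, accuracy, seconds_per_selection):
    """Information transfer rate in bits/minute for an N-choice selection task."""
    n, p = n_choices, accuracy
    if not (1.0 / n) <= p <= 1.0:
        raise ValueError("accuracy below chance level or above 1")
    bits = log2(n)
    if p < 1.0:
        bits += p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
    return bits * 60.0 / seconds_per_selection

if __name__ == "__main__":
    # e.g. a 36-character speller at 80% accuracy, one selection every 15 s
    print(f"{wolpaw_itr(36, 0.80, 15.0):.2f} bits/min")
```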

  2. Architecture for Cognitive Networking within NASA's Future Space Communications Infrastructure

    Science.gov (United States)

    Clark, Gilbert J., III; Eddy, Wesley M.; Johnson, Sandra K.; Barnes, James; Brooks, David

    2016-01-01

    Future space mission concepts and designs pose many networking challenges for command, telemetry, and science data applications with diverse end-to-end data delivery needs. For future end-to-end architecture designs, a key challenge is meeting expected application quality of service requirements for multiple simultaneous mission data flows with options to use diverse onboard local data buses, commercial ground networks, and multiple satellite relay constellations in LEO, MEO, GEO, or even deep space relay links. Effectively utilizing a complex network topology requires orchestration and direction that spans the many discrete, individually addressable computer systems, which cause them to act in concert to achieve the overall network goals. The system must be intelligent enough to not only function under nominal conditions, but also adapt to unexpected situations, and reorganize or adapt to perform roles not originally intended for the system or explicitly programmed. This paper describes architecture features of cognitive networking within the future NASA space communications infrastructure, and interacting with the legacy systems and infrastructure in the meantime. The paper begins by discussing the need for increased automation, including inter-system collaboration. This discussion motivates the features of an architecture including cognitive networking for future missions and relays, interoperating with both existing endpoint-based networking models and emerging information-centric models. From this basis, we discuss progress on a proof-of-concept implementation of this architecture as a cognitive networking on-orbit application on the SCaN Testbed attached to the International Space Station.

  3. High performance parallel computing of flows in complex geometries: I. Methods

    International Nuclear Information System (INIS)

    Gourdain, N; Gicquel, L; Montagnac, M; Vermorel, O; Staffelbach, G; Garcia, M; Boussuge, J-F; Gazaix, M; Poinsot, T

    2009-01-01

    Efficient numerical tools, coupled with high-performance computers, have become a key element of the design process in the fields of energy supply and transportation. However, flow phenomena that occur in complex systems such as gas turbines and aircraft are still not fully understood, mainly because of the models that are needed. In fact, most computational fluid dynamics (CFD) predictions as found today in industry focus on a reduced or simplified version of the real system (such as a periodic sector) and are usually solved with a steady-state assumption. This paper shows how to overcome such barriers and how such a new challenge can be addressed by developing flow solvers running on high-end computing platforms, using thousands of computing cores. Parallel strategies used by modern flow solvers are discussed with particular emphasis on mesh-partitioning, load balancing and communication. Two examples are used to illustrate these concepts: a multi-block structured code and an unstructured code. Parallel computing strategies used with both flow solvers are detailed and compared. This comparison indicates that mesh-partitioning and load balancing are more straightforward with unstructured grids than with multi-block structured meshes. However, the mesh-partitioning stage can be challenging for unstructured grids, mainly due to memory limitations of the newly developed massively parallel architectures. Finally, detailed investigations show that the impact of mesh-partitioning on the numerical CFD solutions, due to rounding errors and block splitting, may be of importance and should be accurately addressed before qualifying massively parallel CFD tools for routine industrial use.
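
    The load-balancing step discussed here can be illustrated with a deliberately simplified sketch (it ignores communication cost and mesh connectivity, which production partitioners such as METIS or Scotch take into account):

```python
import heapq

def balance_cells(cell_weights, n_ranks):
    """Greedy longest-processing-time assignment of weighted cells to ranks."""
    loads = [(0.0, rank) for rank in range(n_ranks)]
    heapq.heapify(loads)
    assignment = [None] * len(cell_weights)
    for cell, w in sorted(enumerate(cell_weights), key=lambda kv: -kv[1]):
        load, rank = heapq.heappop(loads)       # currently lightest rank
        assignment[cell] = rank
        heapq.heappush(loads, (load + w, rank))
    return assignment

if __name__ == "__main__":
    weights = [5, 1, 3, 2, 8, 2, 4, 6]          # e.g. cells weighted by work per step
    print(balance_cells(weights, n_ranks=3))
```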

  4. Gettering high energy plasma in the end loss region of the Mirror Fusion Test Facility

    International Nuclear Information System (INIS)

    Goldner, A.I.; Margolies, D.S.

    1979-01-01

    The ions escaping from the end loss fan of the Mirror Fusion Test Facility (MFTF) neutralize when they hit the surface of the end dome. If the neutrals then bounce back into the oncoming plasma, they are likely to reionize, drawing power from the center of the plasma and reducing the overall electron temperature. In this paper we describe two methods for reducing the reionization rate and a computer code for estimating their effectiveness

  5. Scalability of a Low-Cost Multi-Teraflop Linux Cluster for High-End Classical Atomistic and Quantum Mechanical Simulations

    Science.gov (United States)

    Kikuchi, Hideaki; Kalia, Rajiv K.; Nakano, Aiichiro; Vashishta, Priya; Shimojo, Fuyuki; Saini, Subhash

    2003-01-01

    Scalability of a low-cost, Intel Xeon-based, multi-Teraflop Linux cluster is tested for two high-end scientific applications: classical atomistic simulation based on the molecular dynamics method and quantum mechanical calculation based on the density functional theory. These scalable parallel applications use space-time multiresolution algorithms and feature computational-space decomposition, wavelet-based adaptive load balancing, and space-filling-curve-based data compression for scalable I/O. Comparative performance tests are performed on a 1,024-processor Linux cluster and a conventional higher-end parallel supercomputer, the 1,184-processor IBM SP4. The results show that the performance of the Linux cluster is comparable to that of the SP4. We also study various effects, such as the sharing of memory and L2 cache among processors, on the performance.
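
    The space-filling-curve idea mentioned for scalable I/O can be illustrated with a Morton (Z-order) key (a generic sketch; the applications in the paper use their own, more elaborate schemes):

```python
def morton3d(x, y, z, bits=10):
    """Interleave the low `bits` bits of integer coordinates into one Z-order key.
    Sorting cells or particles by this key keeps spatial neighbours close in memory
    and on disk, which helps locality and contiguous I/O."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

if __name__ == "__main__":
    cells = [(3, 1, 0), (0, 0, 0), (1, 1, 1), (2, 0, 3)]
    print(sorted(cells, key=lambda c: morton3d(*c)))
```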

  6. Overview of NASA's In Space Robotic Servicing

    Science.gov (United States)

    Reed, Benjamin B.

    2015-01-01

    The panel discussion will start with a presentation of the work of the Satellite Servicing Capabilities Office (SSCO), a team responsible for the overall management, coordination, and implementation of satellite servicing technologies and capabilities for NASA. Born from the team that executed the five Hubble servicing missions, SSCO is now maturing a core set of technologies that support both servicing goals and NASA's exploration and science objectives, including: autonomous rendezvous and docking systems; dexterous robotics; high-speed, fault-tolerant computing; advanced robotic tools, and propellant transfer systems. SSCOs proposed Restore-L mission, under development since 2009, is rapidly advancing the core capabilities the fledgling satellite-servicing industry needs to jumpstart a new national industry. Restore-L is also providing key technologies and core expertise to the Asteroid Redirect Robotic Mission (ARRM), with SSCO serving as the capture module lead for the ARRM effort. Reed will present a brief overview of SSCOs history, capabilities and technologies.

  7. NASA-OAI HPCCP K-12 Program

    Science.gov (United States)

    1994-01-01

    The NASA-OAI High Performance Communication and Computing K-12 School Partnership program has been completed. Cleveland School of the Arts, Empire Computech Center, Grafton Local Schools and the Bug O Nay Ge Shig School have all received network equipment and connections. Each school is working toward integrating computer and communications technology into its classroom curriculum. Cleveland School of the Arts students are creating computer software. Empire Computech Center is a magnet school for technology education at the elementary school level. Grafton Local Schools is located in a rural community and is using communications technology to bring to its students some of the same benefits students from suburban and urban areas receive. The Bug O Nay Ge Shig School is located on an Indian Reservation in Cass Lake, MN. The students at this school are using the computer to help them with geological studies. A grant has been issued to the Friends of the Nashville Library. Nashville is a small township in Holmes County, Ohio. A community organization has been formed to turn their library into a state-of-the-art Media Center. Their goal is to have a place where rural students can learn about different career options and how to go about pursuing those careers. Taylor High School in Cincinnati, Ohio, was added to the schools involved in the Wind Tunnel Project. A mini grant has been awarded to Taylor High School for computer equipment. The computer equipment is utilized in the school's geometry class to computationally design objects which will be tested for their aerodynamic properties in the Barberton Wind Tunnel. The students who create the models can view the test in the wind tunnel via desktop conferencing. Two teachers received stipends for helping with the Regional Summer Computer Workshop. Both teachers were brought in to teach a session within the workshop. They were selected to teach the session based on their expertise in particular software applications.

  8. Global Reach: A View of International Cooperation in NASA's Earth Science Enterprise

    Science.gov (United States)

    2004-01-01

    Improving life on Earth and understanding and protecting our home planet are foremost in the Vision and Mission of the National Aeronautics and Space Administration (NASA). NASA's Earth Science Enterprise endeavors to use the unique vantage point of space to study the Earth system and improve the prediction of Earth system change. NASA and its international partners study Earth's land, atmosphere, ice, oceans, and biota and seek to provide objective scientific knowledge to decision makers and scientists worldwide. This book describes NASA's extensive cooperation with its international partners.

  9. Stirling Technology Development at NASA GRC

    Science.gov (United States)

    Thieme, Lanny G.; Schreiber, Jeffrey G.; Mason, Lee S.

    2001-01-01

    The Department of Energy, Stirling Technology Company (STC), and NASA Glenn Research Center (NASA Glenn) are developing a free-piston Stirling convertor for a high efficiency Stirling Radioisotope Generator (SRG) for NASA Space Science missions. The SRG is being developed for multimission use, including providing electric power for unmanned Mars rovers and deep space missions. NASA Glenn is conducting an in-house technology project to assist in developing the convertor for space qualification and mission implementation. Recent testing of 55-We Technology Demonstration Convertors (TDCs) built by STC includes mapping of a second pair of TDCs, single TDC testing, and TDC electromagnetic interference and electromagnetic compatibility characterization on a nonmagnetic test stand. Launch environment tests of a single TDC without its pressure vessel to better understand the convertor internal structural dynamics and of dual-opposed TDCs with several engineering mounting structures with different natural frequencies have recently been completed. A preliminary life assessment has been completed for the TDC heater head, and creep testing of the IN718 material to be used for the flight convertors is underway. Long-term magnet aging tests are continuing to characterize any potential aging in the strength or demagnetization resistance of the magnets used in the linear alternator (LA). Evaluations are now beginning on key organic materials used in the LA and piston/rod surface coatings. NASA Glenn is also conducting finite element analyses for the LA, in part to look at the demagnetization margin on the permanent magnets. The world's first known integrated test of a dynamic power system with electric propulsion was achieved at NASA Glenn when a Hall-effect thruster was successfully operated with a free-piston Stirling power source. Cleveland State University is developing a multidimensional Stirling computational fluid dynamics code to significantly improve Stirling loss

  10. High performance computing in Windows Azure cloud

    OpenAIRE

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility and lower maintenance costs have contributed substantially to the growing popularity of cloud computing in all spheres of life, especially in business. In fact, cloud computing offers even more than this. Using virtual computing clusters, a runtime environment for high-performance computing can also be implemented efficiently in a cloud. There are many advantages but also some disadvantages of cloud computing, some ...

  11. High-performance computing — an overview

    Science.gov (United States)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  12. TOWARD END-TO-END MODELING FOR NUCLEAR EXPLOSION MONITORING: SIMULATION OF UNDERGROUND NUCLEAR EXPLOSIONS AND EARTHQUAKES USING HYDRODYNAMIC AND ANELASTIC SIMULATIONS, HIGH-PERFORMANCE COMPUTING AND THREE-DIMENSIONAL EARTH MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Rodgers, A; Vorobiev, O; Petersson, A; Sjogreen, B

    2009-07-06

    This paper describes new research being performed to improve understanding of seismic waves generated by underground nuclear explosions (UNE) by using full waveform simulation, high-performance computing and three-dimensional (3D) earth models. The goal of this effort is to develop an end-to-end modeling capability to cover the range of wave propagation required for nuclear explosion monitoring (NEM) from the buried nuclear device to the seismic sensor. The goal of this work is to improve understanding of the physical basis and prediction capabilities of seismic observables for NEM including source and path-propagation effects. We are pursuing research along three main thrusts. Firstly, we are modeling the non-linear hydrodynamic response of geologic materials to underground explosions in order to better understand how source emplacement conditions impact the seismic waves that emerge from the source region and are ultimately observed hundreds or thousands of kilometers away. Empirical evidence shows that the amplitudes and frequency content of seismic waves at all distances are strongly impacted by the physical properties of the source region (e.g. density, strength, porosity). To model the near-source shock-wave motions of an UNE, we use GEODYN, an Eulerian Godunov (finite volume) code incorporating thermodynamically consistent non-linear constitutive relations, including cavity formation, yielding, porous compaction, tensile failure, bulking and damage. In order to propagate motions to seismic distances we are developing a one-way coupling method to pass motions to WPP (a Cartesian anelastic finite difference code). Preliminary investigations of UNE's in canonical materials (granite, tuff and alluvium) confirm that emplacement conditions have a strong effect on seismic amplitudes and the generation of shear waves. Specifically, we find that motions from an explosion in high-strength, low-porosity granite have high compressional wave amplitudes and weak

  13. Use of NASA Near Real-Time and Archived Satellite Data to Support Disaster Assessment

    Science.gov (United States)

    McGrath, Kevin M.; Molthan, Andrew L.; Burks, Jason E.

    2014-01-01

    NASA's Short-term Prediction Research and Transition (SPoRT) Center partners with the NWS to provide near real-time data in support of a variety of weather applications, including disasters. SPoRT supports the Disasters focus area of NASA's Applied Sciences Program by developing techniques that will aid the disaster monitoring, response, and assessment communities. SPoRT has explored a variety of techniques for utilizing archived and near real-time NASA satellite data. An increasing number of end-users - such as the NWS Damage Assessment Toolkit (DAT) - access geospatial data via a Web Mapping Service (WMS). SPoRT has begun developing open-standard Geographic Information Systems (GIS) data sets via WMS to respond to end-user needs.
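
    Serving geospatial products over WMS means end-users retrieve maps through standard OGC GetMap requests; a minimal sketch of building such a request follows (the endpoint, layer name, and bounding box are hypothetical):

```python
from urllib.parse import urlencode

def wms_getmap_url(endpoint, layer, bbox, width=1024, height=768,
                   crs="EPSG:4326", fmt="image/png"):
    """Build an OGC WMS 1.3.0 GetMap request URL (bbox axis order follows the CRS)."""
    params = {
        "SERVICE": "WMS", "VERSION": "1.3.0", "REQUEST": "GetMap",
        "LAYERS": layer, "STYLES": "",
        "CRS": crs, "BBOX": ",".join(str(v) for v in bbox),
        "WIDTH": width, "HEIGHT": height, "FORMAT": fmt,
    }
    return endpoint + "?" + urlencode(params)

if __name__ == "__main__":
    # Hypothetical server and layer, for illustration only.
    print(wms_getmap_url("https://example.gov/wms", "damage_swaths",
                         bbox=(33.0, -88.5, 35.0, -86.0)))
```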

  14. Evolving Metadata in NASA Earth Science Data Systems

    Science.gov (United States)

    Mitchell, A.; Cechini, M. F.; Walter, J.

    2011-12-01

    NASA's Earth Observing System (EOS) is a coordinated series of satellites for long-term global observations. NASA's Earth Observing System Data and Information System (EOSDIS) is a petabyte-scale archive of environmental data that supports global climate change research by providing end-to-end services from EOS instrument data collection to science data processing to full access to EOS and other earth science data. On a daily basis, the EOSDIS ingests, processes, archives and distributes over 3 terabytes of data from NASA's Earth Science missions representing over 3500 data products spanning various science disciplines. EOSDIS currently comprises 12 discipline-specific data centers that are collocated with centers of science discipline expertise. Metadata is used in all aspects of NASA's Earth Science data lifecycle from the initial measurement gathering to the accessing of data products. Missions use metadata in their science data products when describing information such as the instrument/sensor, operational plan, and geographic region. Acting as the curator of the data products, data centers employ metadata for preservation, access and manipulation of data. EOSDIS provides a centralized metadata repository called the Earth Observing System (EOS) ClearingHouse (ECHO) for data discovery and access via a service-oriented architecture (SOA) between data centers and science data users. ECHO receives inventory metadata from data centers, which generate metadata files that comply with the ECHO Metadata Model. NASA's Earth Science Data and Information System (ESDIS) Project established a Tiger Team to study and make recommendations regarding the adoption of the international metadata standard ISO 19115 in EOSDIS. The result was a technical report recommending an evolution of NASA data systems towards a consistent application of ISO 19115 and related standards including the creation of a NASA-specific convention for core ISO 19115 elements. Part of

  15. Integrated computer network high-speed parallel interface

    International Nuclear Information System (INIS)

    Frank, R.B.

    1979-03-01

    As the number and variety of computers within Los Alamos Scientific Laboratory's Central Computer Facility grow, the need for a standard, high-speed intercomputer interface has become more apparent. This report details the development of a High-Speed Parallel Interface from conceptual through implementation stages to meet current and future needs for large-scale network computing within the Integrated Computer Network. 4 figures

  16. Plasma Oscillation Characterization of NASA's HERMeS Hall Thruster via High Speed Imaging

    Science.gov (United States)

    Huang, Wensheng; Kamhawi, Hani; Haag, Thomas W.

    2016-01-01

    For missions beyond low Earth orbit, spacecraft size and mass can be dominated by onboard chemical propulsion systems and propellants that may constitute more than 50 percent of the spacecraft mass. This impact can be substantially reduced through the utilization of Solar Electric Propulsion (SEP) due to its substantially higher specific impulse. Studies performed for NASA's Human Exploration and Operations Mission Directorate and Science Mission Directorate have demonstrated that a 50kW-class SEP capability can be enabling for both near term and future architectures and science missions. A high-power SEP element is integral to the Evolvable Mars Campaign, which presents an approach to establish an affordable evolutionary human exploration architecture. To enable SEP missions at the power levels required for these applications, an in-space demonstration of an operational 50kW-class SEP spacecraft has been proposed as a SEP Technology Demonstration Mission (TDM). In 2010 NASA's Space Technology Mission Directorate (STMD) began developing high-power electric propulsion technologies. The maturation of these critical technologies has made mission concepts utilizing high-power SEP viable.

  17. NASA work unit system file maintenance manual

    Science.gov (United States)

    1972-01-01

    The NASA Work Unit System is a management information system for research tasks (i.e., work units) performed under NASA grants and contracts. It supplies profiles on research efforts and statistics on fund distribution. The file maintenance operator can add, delete and change records at a remote terminal or can submit punched cards to the computer room for batch update. The system is designed for file maintenance by a person with little or no knowledge of data processing techniques.

  18. The Fermilab central computing facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-01-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front-end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS cluster interactive front-end, an Amdahl VM Computing engine, ACP farms, and (primarily) VMS workstations. This paper will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. (orig.)

  19. The Fermilab Central Computing Facility architectural model

    International Nuclear Information System (INIS)

    Nicholls, J.

    1989-05-01

    The goal of the current Central Computing Upgrade at Fermilab is to create a computing environment that maximizes total productivity, particularly for high energy physics analysis. The Computing Department and the Next Computer Acquisition Committee decided upon a model which includes five components: an interactive front end, a Large-Scale Scientific Computer (LSSC, a mainframe computing engine), a microprocessor farm system, a file server, and workstations. With the exception of the file server, all segments of this model are currently in production: a VAX/VMS Cluster interactive front end, an Amdahl VM computing engine, ACP farms, and (primarily) VMS workstations. This presentation will discuss the implementation of the Fermilab Central Computing Facility Architectural Model. Implications for Code Management in such a heterogeneous environment, including issues such as modularity and centrality, will be considered. Special emphasis will be placed on connectivity and communications between the front-end, LSSC, and workstations, as practiced at Fermilab. 2 figs

  20. The Role of Synthetic Biology in NASA's Missions

    Science.gov (United States)

    Rothschild, Lynn J.

    2016-01-01

    The time has come for NASA to exploit synthetic biology in pursuit of its missions, including aeronautics, earth science, astrobiology and most notably, human exploration. Conversely, NASA advances the fundamental technology of synthetic biology as no one else can because of its unique expertise in the origin of life and life in extreme environments, including the potential for alternate life forms. This enables unique, creative "game changing" advances. NASA's requirement for minimizing upmass in flight will also drive the field toward miniaturization and automation. These drivers will greatly increase the utility of synthetic biology solutions for military, health in remote areas and commercial purposes. To this end, we have begun a program at NASA to explore the use of synthetic biology in NASA's missions, particularly space exploration. As part of this program, we began hosting an iGEM team of undergraduates drawn from Brown and Stanford Universities to conduct synthetic biology research at NASA Ames Research Center. The 2011 team (http://2011.igem.org/Team:Brown-Stanford) produced an award-winning project on using synthetic biology as a basis for a human Mars settlement.

  1. The complete guide to high-end audio

    CERN Document Server

    Harley, Robert

    2015-01-01

    An updated edition of what many consider the "bible of high-end audio"   In this newly revised and updated fifth edition, Robert Harley, editor in chief of the Absolute Sound magazine, tells you everything you need to know about buying and enjoying high-quality hi-fi. With this book, discover how to get the best sound for your money, how to identify the weak links in your system and upgrade where it will do the most good, how to set up and tweak your system for maximum performance, and how to become a more perceptive and appreciative listener. Just a few of the secrets you will learn cover hi

  2. DOE research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  3. Research on the development efficiency of regional high-end talent in China: A complex network approach.

    Science.gov (United States)

    Zhang, Zhen; Wang, Minggang; Tian, Lixin; Zhang, Wenbin

    2017-01-01

In this paper, based on panel data from 31 provinces and cities in China from 1991 to 2016, the regional development efficiency matrix of high-end talent is obtained by the DEA method, and the matrix is converted into a continuously evolving sequence of complex networks through the construction of a sliding window. Using the resulting series of complex network topology statistics, the characteristics of the regional high-end talent development efficiency system are analyzed. The results show that the average development efficiency of high-end talent in the western region is at a low level. After 2005, the national regional high-end talent development efficiency network exhibits both short-range and long-range relevance in its evolution. The central region plays an important intermediary role in the national regional high-end talent development system, and the western region has high clustering characteristics. With the implementation of high-end talent policies with regional characteristics by different provinces and cities, the relevance of high-end talent development efficiency across provinces and cities shows a weakening trend, and the geographical characteristics of high-end talent become more and more obvious.
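
    A minimal sketch of the kind of sliding-window network construction described above, assuming (purely for illustration) that regions are linked when their DEA efficiency series are strongly correlated within each window; the paper's actual construction rule is not specified here, and the data, window length, and threshold below are synthetic assumptions.

```python
import numpy as np
import networkx as nx

# Hypothetical data: efficiency[i, t] is the DEA development efficiency of
# region i in year t (31 regions, 1991-2016). Real values would come from DEA.
rng = np.random.default_rng(0)
efficiency = rng.random((31, 26))
WINDOW, THRESHOLD = 5, 0.8          # assumed window length and link threshold

def window_network(start):
    """Build one network from a window of the efficiency time series."""
    block = efficiency[:, start:start + WINDOW]
    corr = np.corrcoef(block)       # pairwise correlation of regional series
    g = nx.Graph()
    g.add_nodes_from(range(block.shape[0]))
    for i in range(block.shape[0]):
        for j in range(i + 1, block.shape[0]):
            if corr[i, j] > THRESHOLD:
                g.add_edge(i, j)
    return g

# Slide the window across the study period and track a topology statistic.
networks = [window_network(s) for s in range(efficiency.shape[1] - WINDOW + 1)]
print([round(nx.average_clustering(g), 3) for g in networks[:3]])
```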

  4. 49 CFR 231.2 - Hopper cars and high-side gondolas with fixed ends.

    Science.gov (United States)

    2010-10-01

§ 231.2 Hopper cars and high-side gondolas with fixed ends. (Cars with sides more than 36 inches above the floor are high-side cars.) (a) Hand brakes—(1) Number. Same as specified for “Box and other house cars” (see...

  5. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  6. Alterations of bone microstructure and strength in end-stage renal failure

    NARCIS (Netherlands)

    Trombetti, A.; Stoermann, C.; Chevalley, T.; Rietbergen, van B.; Hermann, F.R.; Martin, P.Y.; Rizzoli, R.

    2013-01-01

Summary: End-stage renal disease (ESRD) patients have a high risk of fractures. We evaluated bone microstructure and finite-element analysis-estimated strength and stiffness in patients with ESRD by high-resolution peripheral computed tomography. We observed an alteration of cortical and trabecular

  7. End-User Recommendations on LOGOMON - a Computer Based Speech Therapy System for Romanian Language

    Directory of Open Access Journals (Sweden)

    SCHIPOR, O. A.

    2010-11-01

Full Text Available In this paper we highlight the relations between LOGOMON - a Computer Based Speech Therapy System - and dyslalia's training steps. Dyslalia is a speech disorder that affects the pronunciation of one or more sounds. This presentation of the system is complemented by a study of end-user (i.e., teachers' and parents') attitudes toward speech-assisted therapy in general and the LOGOMON System in particular. The results of this research allow the improvement of our CBST system, because the information obtained can be a source of adaptability to the different expectations of the beneficiaries.

  8. Computer-automated evolution of an X-band antenna for NASA's Space Technology 5 mission.

    Science.gov (United States)

    Hornby, Gregory S; Lohn, Jason D; Linden, Derek S

    2011-01-01

    Whereas the current practice of designing antennas by hand is severely limited because it is both time and labor intensive and requires a significant amount of domain knowledge, evolutionary algorithms can be used to search the design space and automatically find novel antenna designs that are more effective than would otherwise be developed. Here we present our work in using evolutionary algorithms to automatically design an X-band antenna for NASA's Space Technology 5 (ST5) spacecraft. Two evolutionary algorithms were used: the first uses a vector of real-valued parameters and the second uses a tree-structured generative representation for constructing the antenna. The highest-performance antennas from both algorithms were fabricated and tested and both outperformed a hand-designed antenna produced by the antenna contractor for the mission. Subsequent changes to the spacecraft orbit resulted in a change in requirements for the spacecraft antenna. By adjusting our fitness function we were able to rapidly evolve a new set of antennas for this mission in less than a month. One of these new antenna designs was built, tested, and approved for deployment on the three ST5 spacecraft, which were successfully launched into space on March 22, 2006. This evolved antenna design is the first computer-evolved antenna to be deployed for any application and is the first computer-evolved hardware in space.
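
    As a rough illustration of the first of the two approaches mentioned (a vector of real-valued parameters evolved against a fitness function), the following hedged sketch shows a generic (mu + lambda)-style evolutionary loop. The parameter count, selection scheme, and placeholder fitness function are assumptions; the actual ST5 design loop scored candidate antennas with an electromagnetic simulator against mission requirements.

```python
import random

# Hypothetical setup: each candidate is a vector of real-valued antenna
# parameters (e.g., wire segment lengths and bend angles).
N_PARAMS = 8
POP_SIZE = 20
GENERATIONS = 50
MUTATION_SIGMA = 0.05

def fitness(params):
    # Placeholder objective; in practice this would call an EM solver and
    # score the simulated radiation pattern.
    return -sum((p - 0.5) ** 2 for p in params)

def mutate(params):
    # Gaussian perturbation of every parameter.
    return [p + random.gauss(0.0, MUTATION_SIGMA) for p in params]

def evolve():
    population = [[random.random() for _ in range(N_PARAMS)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:POP_SIZE // 2]           # truncation selection
        children = [mutate(p) for p in parents]    # one child per parent
        population = parents + children
    return max(population, key=fitness)

if __name__ == "__main__":
    best = evolve()
    print("best parameters:", [round(p, 3) for p in best])
```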

  9. Computing in high-energy physics

    International Nuclear Information System (INIS)

    Mount, Richard P.

    2016-01-01

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Lastly, I describe recent developments aimed at improving the overall coherence of high-energy physics software

  10. Computing in high-energy physics

    Science.gov (United States)

    Mount, Richard P.

    2016-04-01

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Finally, I describe recent developments aimed at improving the overall coherence of high-energy physics software.

  11. Coordinated Fault-Tolerance for High-Performance Computing Final Project Report

    Energy Technology Data Exchange (ETDEWEB)

    Panda, Dhabaleswar Kumar [The Ohio State University; Beckman, Pete

    2011-07-28

existing publish-subscribe tools. We enhanced the intrinsic fault tolerance capabilities of representative implementations of a variety of key HPC software subsystems and integrated them with the FTB. Targeted software subsystems included MPI communication libraries, checkpoint/restart libraries, resource managers and job schedulers, and system monitoring tools. Leveraging the aforementioned infrastructure, as well as developing and utilizing additional tools, we have examined issues associated with expanded, end-to-end fault response from both system and application viewpoints. From the standpoint of system operations, we have investigated log and root-cause analysis, anomaly detection and fault prediction, and generalized notification mechanisms. Our applications work has included libraries for fault-tolerant linear algebra, application frameworks for coupled multiphysics applications, and external frameworks to support monitoring and response for general applications. Our final goal was to engage the high-end computing community to increase awareness of tools and issues around coordinated end-to-end fault management.
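
    To illustrate the publish-subscribe pattern that underlies a fault notification backbone of this kind, here is a minimal, self-contained sketch; the class, topic, and event names are hypothetical, and the real FTB API differs from this toy event bus.

```python
from collections import defaultdict

class FaultEventBus:
    """Minimal in-process stand-in for a publish-subscribe fault backplane."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Deliver the event to every handler registered for this topic.
        for handler in self._subscribers[topic]:
            handler(event)

bus = FaultEventBus()

# A checkpoint/restart library might react to node-failure events ...
bus.subscribe("node.failure", lambda e: print("checkpointing before loss of", e["node"]))
# ... while a job scheduler uses the same event to reschedule work.
bus.subscribe("node.failure", lambda e: print("rescheduling jobs away from", e["node"]))

# A monitoring component publishes the fault once; all subscribers are notified.
bus.publish("node.failure", {"node": "n042", "reason": "ECC error"})
```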

  12. A Linux Workstation for High Performance Graphics

    Science.gov (United States)

    Geist, Robert; Westall, James

    2000-01-01

The primary goal of this effort was to provide a low-cost method of obtaining high-performance 3-D graphics using an industry standard library (OpenGL) on PC class computers. Previously, users interested in doing substantial visualization or graphical manipulation were constrained to using specialized, custom hardware most often found in computers from Silicon Graphics (SGI). We provided an alternative to expensive SGI hardware by taking advantage of third-party, 3-D graphics accelerators that have now become available at very affordable prices. To make use of this hardware, our goal was to provide a free, redistributable, and fully-compatible OpenGL work-alike library so that existing bodies of code could simply be recompiled for PC class machines running a free version of Unix. This should allow substantial cost savings while greatly expanding the population of people with access to a serious graphics development and viewing environment. This should offer a means for NASA to provide a spectrum of graphics performance to its scientists, supplying high-end specialized SGI hardware for high-performance visualization while fulfilling the requirements of medium and lower performance applications with generic, off-the-shelf components and still maintaining compatibility between the two.

  13. Integration Testing of a Modular Discharge Supply for NASA's High Voltage Hall Accelerator Thruster

    Science.gov (United States)

    Pinero, Luis R.; Kamhawi, hani; Drummond, Geoff

    2010-01-01

NASA's In-Space Propulsion Technology Program is developing a high performance Hall thruster that can fulfill the needs of future Discovery-class missions. The result of this effort is the High Voltage Hall Accelerator thruster that can operate over a power range from 0.3 to 3.5 kW and a specific impulse from 1,000 to 2,800 sec, and process 300 kg of xenon propellant. Simultaneously, a 4.0 kW discharge power supply comprised of two parallel modules was developed. These power modules use an innovative three-phase resonant topology that can efficiently supply full power to the thruster at an output voltage range of 200 to 700 V and an input voltage range of 80 to 160 V. Efficiencies as high as 95.9 percent were measured during an integration test with the NASA103M.XL thruster. The accuracy of the master/slave current sharing circuit and various thruster ignition techniques were evaluated.

  14. High Energy Astrophysics and Cosmology from Space: NASA's Physics of the Cosmos Program

    Science.gov (United States)

    Hornschemeier, Ann

    2016-03-01

    We summarize currently-funded NASA activities in high energy astrophysics and cosmology, embodied in the NASA Physics of the Cosmos program, including updates on technology development and mission studies. The portfolio includes development of a space mission for measuring gravitational waves from merging supermassive black holes, currently envisioned as a collaboration with the European Space Agency (ESA) on its L3 mission and development of an X-ray observatory that will measure X-ray emission from the final stages of accretion onto black holes, currently envisioned as a NASA collaboration on ESA's Athena observatory. The portfolio also includes the study of cosmic rays and gamma ray photons resulting from a range of processes, of the physical process of inflation associated with the birth of the universe and of the nature of the dark energy that dominates the mass-energy of the modern universe. The program is supported by an analysis group called the PhysPAG that serves as a forum for community input and analysis and the talk will include a description of activities of this group.

  15. Bridging the Gap between NASA Hydrological Data and the Geospatial Community

    Science.gov (United States)

    Rui, Hualan; Teng, Bill; Vollmer, Bruce; Mocko, David M.; Beaudoing, Hiroko K.; Nigro, Joseph; Gary, Mark; Maidment, David; Hooper, Richard

    2011-01-01

There is a vast and ever-increasing amount of data on the Earth's interconnected energy and hydrological systems, available from NASA remote sensing and modeling systems, and yet one challenge persists: increasing the usefulness of these data for, and thus their use by, the geospatial communities. The Hydrology Data and Information Services Center (HDISC), part of the Goddard Earth Sciences DISC, has continually worked to better understand the hydrological data needs of geospatial end users, to thus be better able to bridge the gap between NASA data and the geospatial communities. This paper will cover some of the hydrological data sets available from HDISC, and the various tools and services developed for data searching, data subsetting, format conversion, online visualization and analysis, interoperable access, etc., to facilitate the integration of NASA hydrological data by end users. The NASA Goddard data analysis and visualization system, Giovanni, is described. Two case examples of user-customized data services are given, involving the EPA BASINS (Better Assessment Science Integrating point & Non-point Sources) project and the CUAHSI Hydrologic Information System, with the common requirement of on-the-fly retrieval of long-duration time series for a geographical point.

  16. Computer-Assisted, Self-Interviewing (CASI Compared to Face-to-Face Interviewing (FTFI with Open-Ended, Non-Sensitive Questions

    Directory of Open Access Journals (Sweden)

    John Fairweather PhD

    2012-07-01

Full Text Available This article reports results from research on cultural models, and assesses the effects of computers on data quality by comparing open-ended questions asked in two formats - face-to-face interviewing (FTFI) and computer-assisted, self-interviewing (CASI). We expected that for our non-sensitive topic, FTFI would generate fuller and richer accounts because the interviewer could facilitate the interview process. Although the interviewer indeed facilitated these interviews, which resulted in more words in less time, the number of underlying themes found within the texts for each interview mode was the same, thus resulting in the same models of national culture and innovation being built for each mode. Our results, although based on an imperfect research design, suggest that CASI can be beneficial when using open-ended questions because CASI is easy to administer, capable of reaching a large sample more efficiently, and able to avoid the need to transcribe the recorded responses.

  17. Federal Plan for High-End Computing

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — Since the World War II era, when scientists, mathematicians, and engineers began using revolutionary electronic machinery that could rapidly perform complex...

  18. A Nanometer Aerosol Size Analyzer (nASA) for Rapid Measurement of High-concentration Size Distributions

    International Nuclear Information System (INIS)

    Han, H.-S.; Chen, D.-R.; Pui, David Y.H.; Anderson, Bruce E.

    2000-01-01

We have developed a fast-response nanometer aerosol size analyzer (nASA) that is capable of scanning 30 size channels between 3 and 100 nm in a total time of 3 s. The analyzer includes a bipolar charger (Po-210), an extended-length nanometer differential mobility analyzer (Nano-DMA), and an electrometer (TSI 3068). This combination of components provides particle size spectra at a scan rate of 0.1 s per channel, free of uncertainties caused by response-time-induced smearing. The nASA thus offers a fast response for aerosol size distribution measurements in high-concentration conditions and also eliminates the need for applying a de-smearing algorithm to resulting data. In addition, because of its thermodynamically stable means of particle detection, the nASA is useful for applications requiring measurements over a broad range of sample pressures and temperatures. Indeed, experimental transfer functions determined for the extended-length Nano-DMA using the tandem differential mobility analyzer (TDMA) technique indicate the nASA provides good size resolution at pressures as low as 200 Torr. Also, as was demonstrated in tests to characterize the soot emissions from the J85-GE engine of a T-38 aircraft, the broad dynamic concentration range of the nASA makes it particularly suitable for studies of combustion or particle formation processes. Further details of the nASA performance, as well as results from calibrations, laboratory tests and field applications, are presented below.

  19. Preparing for the High Frontier: The Role and Training of NASA Astronauts in the Post- Space Shuttle Era

    Science.gov (United States)

    2011-01-01

In May 2010, the National Research Council (NRC) was asked by NASA to address several questions related to the Astronaut Corps. The NRC's Committee on Human Spaceflight Crew Operations was tasked to answer several questions: 1. How should the role and size of the activities managed by the Johnson Space Center Flight Crew Operations Directorate change after space shuttle retirement and completion of the assembly of the International Space Station (ISS)? 2. What are the requirements for crew-related ground-based facilities after the Space Shuttle program ends? 3. Is the fleet of aircraft used for training the Astronaut Corps a cost-effective means of preparing astronauts to meet the requirements of NASA's human spaceflight program? Are there more cost-effective means of meeting these training requirements? Although the future of NASA's human spaceflight program has garnered considerable discussion in recent years and there is considerable uncertainty about what the program will involve in the coming years, the committee was not tasked to address whether human spaceflight should continue or what form it should take. The committee's task restricted it to studying activities managed by the Flight Crew Operations Directorate or those closely related to its activities, such as crew-related ground-based facilities and the training aircraft.

  20. Advanced Information Technology Investments at the NASA Earth Science Technology Office

    Science.gov (United States)

    Clune, T.; Seablom, M. S.; Moe, K.

    2012-12-01

The NASA Earth Science Technology Office (ESTO) regularly makes investments for nurturing advanced concepts in information technology to enable rapid, low-cost acquisition, processing and visualization of Earth science data in support of future NASA missions and climate change research. In 2012, the National Research Council published a mid-term assessment of the 2007 decadal survey for future space missions supporting Earth science and applications [1]. The report stated, "Earth sciences have advanced significantly because of existing observational capabilities and the fruit of past investments, along with advances in data and information systems, computer science, and enabling technologies." The report found that NASA had responded favorably and aggressively to the decadal survey and noted the role of the recent ESTO solicitation for information systems technologies that partnered with the NASA Applied Sciences Program to support the transition into operations. NASA's future missions are key stakeholders for the ESTO technology investments. Also driving these investments is the need for the Agency to properly address questions regarding the prediction, adaptation, and eventual mitigation of climate change. The Earth Science Division has championed interdisciplinary research, recognizing that the Earth must be studied as a complete system in order to address key science questions [2]. Information technology investments in the low-mid technology readiness level (TRL) range play a key role in meeting these challenges. ESTO's Advanced Information Systems Technology (AIST) program invests in higher risk / higher reward technologies that solve the most challenging problems of the information processing chain. This includes the space segment, where the information pipeline begins, to the end user, where knowledge is ultimately advanced. The objectives of the program are to reduce the risk, cost, size, and development time of Earth Science space-based and ground

  1. Architecture for Cognitive Networking within NASA's Future Space Communications Infrastructure

    Science.gov (United States)

    Clark, Gilbert; Eddy, Wesley M.; Johnson, Sandra K.; Barnes, James; Brooks, David

    2016-01-01

    Future space mission concepts and designs pose many networking challenges for command, telemetry, and science data applications with diverse end-to-end data delivery needs. For future end-to-end architecture designs, a key challenge is meeting expected application quality of service requirements for multiple simultaneous mission data flows with options to use diverse onboard local data buses, commercial ground networks, and multiple satellite relay constellations in LEO, GEO, MEO, or even deep space relay links. Effectively utilizing a complex network topology requires orchestration and direction that spans the many discrete, individually addressable computer systems, which cause them to act in concert to achieve the overall network goals. The system must be intelligent enough to not only function under nominal conditions, but also adapt to unexpected situations, and reorganize or adapt to perform roles not originally intended for the system or explicitly programmed. This paper describes an architecture enabling the development and deployment of cognitive networking capabilities into the envisioned future NASA space communications infrastructure. We begin by discussing the need for increased automation, including inter-system discovery and collaboration. This discussion frames the requirements for an architecture supporting cognitive networking for future missions and relays, including both existing endpoint-based networking models and emerging information-centric models. From this basis, we discuss progress on a proof-of-concept implementation of this architecture, and results of implementation and initial testing of a cognitive networking on-orbit application on the SCaN Testbed attached to the International Space Station.

  2. NASA and USGS invest in invasive species modeling to evaluate habitat for Africanized Honey Bees

    Science.gov (United States)

    2009-01-01

    Invasive non-native species, such as plants, animals, and pathogens, have long been an interest to the U.S. Geological Survey (USGS) and NASA. Invasive species cause harm to our economy (around $120 B/year), the environment (e.g., replacing native biodiversity, forest pathogens negatively affecting carbon storage), and human health (e.g., plague, West Nile virus). Five years ago, the USGS and NASA formed a partnership to improve ecological forecasting capabilities for the early detection and containment of the highest priority invasive species. Scientists from NASA Goddard Space Flight Center (GSFC) and the Fort Collins Science Center developed a longterm strategy to integrate remote sensing capabilities, high-performance computing capabilities and new spatial modeling techniques to advance the science of ecological invasions [Schnase et al., 2002].

  3. NASA's Use of Human Behavior Models for Concept Development and Evaluation

    Science.gov (United States)

    Gore, Brian F.

    2012-01-01

An overview of NASA's use of computational approaches and human performance models to support research goals, with a focus on examples of the methods used in Codes TH and TI at NASA Ames, followed by an in-depth review of MIDAS' current FAA work.

  4. High performance computing in linear control

    International Nuclear Information System (INIS)

    Datta, B.N.

    1993-01-01

    Remarkable progress has been made in both theory and applications of all important areas of control. The theory is rich and very sophisticated. Some beautiful applications of control theory are presently being made in aerospace, biomedical engineering, industrial engineering, robotics, economics, power systems, etc. Unfortunately, the same assessment of progress does not hold in general for computations in control theory. Control Theory is lagging behind other areas of science and engineering in this respect. Nowadays there is a revolution going on in the world of high performance scientific computing. Many powerful computers with vector and parallel processing have been built and have been available in recent years. These supercomputers offer very high speed in computations. Highly efficient software, based on powerful algorithms, has been developed to use on these advanced computers, and has also contributed to increased performance. While workers in many areas of science and engineering have taken great advantage of these hardware and software developments, control scientists and engineers, unfortunately, have not been able to take much advantage of these developments

  5. Computational Thinking and Practice - A Generic Approach to Computing in Danish High Schools

    DEFF Research Database (Denmark)

    Caspersen, Michael E.; Nowack, Palle

    2014-01-01

Internationally, there is a growing awareness of the necessity of providing relevant computing education in schools, particularly high schools. We present a new and generic approach to Computing in Danish High Schools based on a conceptual framework derived from ideas related to computational thinking...

  6. Applied high-speed imaging for the icing research program at NASA Lewis Research Center

    Science.gov (United States)

    Slater, Howard; Owens, Jay; Shin, Jaiwon

    1992-01-01

The Icing Research Tunnel at NASA Lewis Research Center provides scientists a scaled, controlled environment to simulate natural icing events. The closed-loop, low-speed, refrigerated wind tunnel offers the experimental capability to test for icing certification requirements, analytical model validation and calibration techniques, cloud physics instrumentation refinement, advanced ice protection systems, and rotorcraft icing methodology development. The test procedures for these objectives all require a high degree of visual documentation, both in real-time data acquisition and post-test image processing. Information about the high-speed and conventional imaging systems is provided to scientific, technical, and industrial imaging specialists as well as to research personnel, with emphasis on the recent ice protection technology program. Various imaging examples for some of the tests are presented. Additional imaging examples are available from the NASA Lewis Research Center's Photographic and Printing Branch.

  7. NASA's Earth Science Use of Commercially Availiable Remote Sensing Datasets: Cover Image

    Science.gov (United States)

    Underwood, Lauren W.; Goward, Samuel N.; Fearon, Matthew G.; Fletcher, Rose; Garvin, Jim; Hurtt, George

    2008-01-01

The cover image incorporates high resolution stereo pairs acquired from the DigitalGlobe(R) QuickBird sensor. It shows a digital elevation model of Meteor Crater, Arizona, at approximately 1.3 meter point-spacing. Image analysts used the Leica Photogrammetry Suite to produce the DEM. The outside portion was computed from two QuickBird panchromatic scenes acquired in October 2006, while an Optech laser scan dataset was used for the crater's interior elevations. The crater's terrain model and image drape were created in a NASA Constellation Program project focused on simulating lunar surface environments for prototyping and testing lunar surface mission analysis and planning tools. This work exemplifies NASA's Scientific Data Purchase legacy and commercial high resolution imagery applications, as scientists use commercial high resolution data to examine lunar-analog Earth landscapes for advanced planning and trade studies for future lunar surface activities. Other applications include landscape dynamics related to volcanism, hydrologic events, climate change, and ice movement.

  8. The NASA Ames PAH IR Spectroscopic Database: Computational Version 3.00 with Updated Content and the Introduction of Multiple Scaling Factors

    Science.gov (United States)

    Bauschlicher, Charles W., Jr.; Ricca, A.; Boersma, C.; Allamandola, L. J.

    2018-02-01

    Version 3.00 of the library of computed spectra in the NASA Ames PAH IR Spectroscopic Database (PAHdb) is described. Version 3.00 introduces the use of multiple scale factors, instead of the single scaling factor used previously, to align the theoretical harmonic frequencies with the experimental fundamentals. The use of multiple scale factors permits the use of a variety of basis sets; this allows new PAH species to be included in the database, such as those containing oxygen, and yields an improved treatment of strained species and those containing nitrogen. In addition, the computed spectra of 2439 new PAH species have been added. The impact of these changes on the analysis of an astronomical spectrum through database-fitting is considered and compared with a fit using Version 2.00 of the library of computed spectra. Finally, astronomical constraints are defined for the PAH spectral libraries in PAHdb.
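
    A hedged sketch of the two ideas described above: applying mode-dependent scale factors to computed harmonic frequencies, and fitting an observed spectrum as a non-negative combination of library spectra. The scale factor values, the frequency cutoff, and the synthetic library below are illustrative assumptions, not the values or tools actually used by PAHdb.

```python
import numpy as np
from scipy.optimize import nnls

# Assumed, illustrative scale factors: one for C-H stretching modes, one for
# everything else (PAHdb's actual factors and mode partitioning differ).
SCALE_CH_STRETCH = 0.960
SCALE_OTHER = 0.975

def scale_frequencies(freqs_cm1):
    """Align computed harmonic frequencies with experimental fundamentals."""
    return np.where(freqs_cm1 > 2500.0,
                    freqs_cm1 * SCALE_CH_STRETCH,
                    freqs_cm1 * SCALE_OTHER)

print("scaled:", scale_frequencies(np.array([3050.0, 1600.0])))

# Database fitting: library[:, k] is the k-th PAH spectrum on a common grid,
# and the observed spectrum is decomposed into non-negative weights.
grid = np.linspace(500.0, 2000.0, 300)
rng = np.random.default_rng(1)
library = rng.random((grid.size, 5))                   # synthetic stand-in
observed = library @ np.array([0.5, 0.0, 1.2, 0.0, 0.3])

weights, residual = nnls(library, observed)            # non-negative fit
print("fitted weights:", np.round(weights, 2), "residual:", round(residual, 3))
```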

  9. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

The research activities of computational physicists utilizing high performance computing are analyzed by bibliometric approaches. This study aims at providing computational physicists utilizing high-performance computing and policy planners with useful bibliometric results for an assessment of research activities. In order to achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles from the Scopus database from Elsevier covering the time period 2004-2013. We extracted the author rank in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.
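
    A minimal sketch of this kind of co-authorship analysis, assuming a small set of hypothetical paper records in place of the Scopus extraction; author rank is taken as the count of papers, and edges count joint papers.

```python
import networkx as nx
from itertools import combinations
from collections import Counter

# Hypothetical paper records (author lists); a real study would extract these
# from the Scopus database for 2004-2013.
papers = [
    ["Kim", "Lee", "Park"],
    ["Kim", "Lee"],
    ["Ahn", "Jung"],
    ["Kim", "Ahn", "Jung"],
]

# Author rank by number of papers published.
paper_counts = Counter(author for authors in papers for author in authors)
top_authors = [a for a, _ in paper_counts.most_common(3)]

# Co-authorship network: nodes are authors, edge weights count joint papers.
g = nx.Graph()
for authors in papers:
    for a, b in combinations(sorted(set(authors)), 2):
        if g.has_edge(a, b):
            g[a][b]["weight"] += 1
        else:
            g.add_edge(a, b, weight=1)

print("top authors by paper count:", top_authors)
print("degree of top authors:", {a: g.degree(a) for a in top_authors})
```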

  10. HEASARC - The High Energy Astrophysics Science Archive Research Center

    Science.gov (United States)

    Smale, Alan P.

    2011-01-01

    The High Energy Astrophysics Science Archive Research Center (HEASARC) is NASA's archive for high-energy astrophysics and cosmic microwave background (CMB) data, supporting the broad science goals of NASA's Physics of the Cosmos theme. It provides vital scientific infrastructure to the community by standardizing science data formats and analysis programs, providing open access to NASA resources, and implementing powerful archive interfaces. Over the next five years the HEASARC will ingest observations from up to 12 operating missions, while serving data from these and over 30 archival missions to the community. The HEASARC archive presently contains over 37 TB of data, and will contain over 60 TB by the end of 2014. The HEASARC continues to secure major cost savings for NASA missions, providing a reusable mission-independent framework for reducing, analyzing, and archiving data. This approach was recognized in the NRC Portals to the Universe report (2007) as one of the HEASARC's great strengths. This poster describes the past and current activities of the HEASARC and our anticipated developments in coming years. These include preparations to support upcoming high energy missions (NuSTAR, Astro-H, GEMS) and ground-based and sub-orbital CMB experiments, as well as continued support of missions currently operating (Chandra, Fermi, RXTE, Suzaku, Swift, XMM-Newton and INTEGRAL). In 2012 the HEASARC (which now includes LAMBDA) will support the final nine-year WMAP data release. The HEASARC is also upgrading its archive querying and retrieval software with the new Xamin system in early release - and building on opportunities afforded by the growth of the Virtual Observatory and recent developments in virtual environments and cloud computing.

  11. Stirling Technology Development at NASA GRC. Revised

    Science.gov (United States)

    Thieme, Lanny G.; Schreiber, Jeffrey G.; Mason, Lee S.

    2002-01-01

The Department of Energy, Stirling Technology Company (STC), and NASA Glenn Research Center (NASA Glenn) are developing a free-piston Stirling convertor for a high-efficiency Stirling Radioisotope Generator (SRG) for NASA Space Science missions. The SRG is being developed for multimission use, including providing electric power for unmanned Mars rovers and deep space missions. NASA Glenn is conducting an in-house technology project to assist in developing the convertor for space qualification and mission implementation. Recent testing of 55-We Technology Demonstration Convertors (TDCs) built by STC includes mapping of a second pair of TDCs, single-TDC testing, and TDC electromagnetic interference and electromagnetic compatibility characterization on a nonmagnetic test stand. Launch environment tests of a single TDC without its pressure vessel, to better understand the convertor internal structural dynamics, and of dual-opposed TDCs with several engineering mounting structures with different natural frequencies have recently been completed. A preliminary life assessment has been completed for the TDC heater head, and creep testing of the IN718 material to be used for the flight convertors is underway. Long-term magnet aging tests are continuing to characterize any potential aging in the strength or demagnetization resistance of the magnets used in the linear alternator (LA). Evaluations are now beginning on key organic materials used in the LA and piston/rod surface coatings. NASA Glenn is also conducting finite element analyses for the LA, in part to look at the demagnetization margin on the permanent magnets. The world's first known integrated test of a dynamic power system with electric propulsion was achieved at NASA Glenn when a Hall-effect thruster was successfully operated with a free-piston Stirling power source. Cleveland State University is developing a multidimensional Stirling computational fluid dynamics code to significantly improve Stirling loss

  12. Mathematical model and computer programme for theoretical calculation of calibration curves of neutron soil moisture probes with highly effective counters

    International Nuclear Information System (INIS)

    Kolev, N.A.

    1981-07-01

A mathematical model, based on three-group theory, for the theoretical calculation by computer of the calibration curves of neutron soil moisture probes with highly effective counters is described. Methods for experimental correction of the mathematical model are discussed and proposed. The computer programme described allows the calibration of neutron probes with highly effective or less effective counters, and central or end geometry, with or without linearization of the calibration curve. The use of two calculation variants and the printing of output data give the possibility not only of calibration, but also of other research. The separate data inputs for soil and probe temperature allow analysis of the temperature influence. The computer programme and calculation examples are given. (author)

  13. GRID computing for experimental high energy physics

    International Nuclear Information System (INIS)

    Moloney, G.R.; Martin, L.; Seviour, E.; Taylor, G.N.; Moorhead, G.F.

    2002-01-01

Full text: The Large Hadron Collider (LHC), to be completed at the CERN laboratory in 2006, will generate 11 petabytes of data per year. The processing of this large data stream requires a large, distributed computing infrastructure. A recent innovation in high performance distributed computing, the GRID, has been identified as an important tool in data analysis for the LHC. GRID computing has actual and potential application in many fields which require computationally intensive analysis of large, shared data sets. The Australian experimental High Energy Physics community has formed partnerships with the High Performance Computing community to establish a GRID node at the University of Melbourne. Through Australian membership of the ATLAS experiment at the LHC, Australian researchers have an opportunity to be involved in the European DataGRID project. This presentation will include an introduction to the GRID, and its application to experimental High Energy Physics. We will present the results of our studies, including participation in the first LHC data challenge

  14. QoC-based Optimization of End-to-End M-Health Data Delivery Services

    NARCIS (Netherlands)

    Widya, I.A.; van Beijnum, Bernhard J.F.; Salden, Alfons

    2006-01-01

    This paper addresses how Quality of Context (QoC) can be used to optimize end-to-end mobile healthcare (m-health) data delivery services in the presence of alternative delivery paths, which is quite common in a pervasive computing and communication environment. We propose min-max-plus based
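
    As a purely illustrative sketch of a min-max style choice among alternative end-to-end delivery paths, the snippet below picks the candidate path whose worst (maximum) per-link cost is smallest; the path names and costs are hypothetical, and this is a simplification rather than the paper's min-max-plus formulation.

```python
# Hypothetical per-link delay estimates (ms) for alternative delivery paths.
candidate_paths = {
    "wlan-then-3g":  [12.0, 80.0, 25.0],
    "3g-direct":     [95.0],
    "wlan-then-dsl": [12.0, 40.0, 30.0],
}

def min_max_path(paths):
    """Return the path whose worst single link cost is smallest."""
    return min(paths.items(), key=lambda item: max(item[1]))

best_name, best_links = min_max_path(candidate_paths)
print("selected path:", best_name, "worst-link cost:", max(best_links))
```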

  15. Computed Flow Through An Artificial Heart Valve

    Science.gov (United States)

    Rogers, Stewart E.; Kwak, Dochan; Kiris, Cetin; Chang, I-Dee

    1994-01-01

Report discusses computations of blood flow through a prosthetic tilting-disk valve. The computational procedure developed in the simulation is used to design better artificial hearts and valves by reducing or eliminating the following adverse flow characteristics: large pressure losses, which prevent hearts from working efficiently; separated and secondary flows, which cause clotting; and high turbulent shear stresses, which damage red blood cells. Report reiterates and expands upon part of NASA technical memorandum "Computed Flow Through an Artificial Heart and Valve" (ARC-12983). Also based partly on research described in "Numerical Simulation of Flow Through an Artificial Heart" (ARC-12478).

  16. Designing end-user interfaces

    CERN Document Server

    Heaton, N

    1988-01-01

Designing End-User Interfaces: State of the Art Report focuses on the field of human/computer interaction (HCI) and reviews the design of end-user interfaces. This compilation is divided into two parts. Part I examines specific aspects of the problem in HCI that range from basic definitions of the problem, evaluation of how to look at the problem domain, and fundamental work aimed at introducing human factors into all aspects of the design cycle. Part II consists of six main topics: definition of the problem, psychological and social factors, principles of interface design, computer intelligence

  17. COMPARATIVE STUDY OF CLOUD COMPUTING AND MOBILE CLOUD COMPUTING

    OpenAIRE

    Nidhi Rajak*, Diwakar Shukla

    2018-01-01

The present era is that of Information and Communication Technology (ICT), and a number of research efforts are underway on Cloud Computing and Mobile Cloud Computing, addressing issues such as security, data management, load balancing and so on. Cloud computing provides services to the end user over the Internet, and the primary objectives of this computing are resource sharing and pooling among the end users. Mobile Cloud Computing is a combination of Cloud Computing and Mobile Computing. Here, data is stored in...

  18. Ground-glass opacity: High-resolution computed tomography and 64-multi-slice computed tomography findings comparison

    International Nuclear Information System (INIS)

    Sergiacomi, Gianluigi; Ciccio, Carmelo; Boi, Luca; Velari, Luca; Crusco, Sonia; Orlacchio, Antonio; Simonetti, Giovanni

    2010-01-01

Objective: Comparative evaluation of ground-glass opacity using the conventional high-resolution computed tomography technique and volumetric computed tomography with a 64-row multi-slice scanner, verifying the advantage of the volumetric acquisition and post-processing techniques allowed by the 64-row CT scanner. Methods: Thirty-four patients, in whom a ground-glass opacity pattern had been assessed by previous high-resolution computed tomography during a clinical-radiological follow-up for their lung disease, were studied by means of 64-row multi-slice computed tomography. Comparative evaluation of image quality was done for both CT modalities. Results: Good inter-observer agreement (k value 0.78-0.90) was reported in the detection of ground-glass opacity with the high-resolution computed tomography technique and the volumetric computed tomography acquisition, with a moderate increase in intra-observer agreement (k value 0.46) using volumetric computed tomography rather than high-resolution computed tomography. Conclusions: In our experience, volumetric computed tomography with a 64-row scanner shows good accuracy in the detection of ground-glass opacity, providing better spatial and temporal resolution and more advanced post-processing techniques than high-resolution computed tomography.
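
    For readers unfamiliar with the reported k values, the short sketch below computes Cohen's kappa, the agreement statistic those values refer to, for two hypothetical readers; the ratings are invented for illustration and are unrelated to the study data.

```python
# Two hypothetical readers rating 10 scans: ground-glass opacity present (1) or absent (0).
reader_a = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
reader_b = [1, 1, 0, 1, 0, 0, 1, 0, 1, 1]

n = len(reader_a)
observed = sum(a == b for a, b in zip(reader_a, reader_b)) / n  # raw agreement

# Chance agreement from each reader's marginal rates of rating "present".
p_a1 = sum(reader_a) / n
p_b1 = sum(reader_b) / n
expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)

kappa = (observed - expected) / (1 - expected)
print(f"observed={observed:.2f} expected={expected:.2f} kappa={kappa:.2f}")
```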

  19. A brief overview of NASA Langley's research program in formal methods

    Science.gov (United States)

    1992-01-01

    An overview of NASA Langley's research program in formal methods is presented. The major goal of this work is to bring formal methods technology to a sufficiently mature level for use by the United States aerospace industry. Towards this goal, work is underway to design and formally verify a fault-tolerant computing platform suitable for advanced flight control applications. Also, several direct technology transfer efforts have been initiated that apply formal methods to critical subsystems of real aerospace computer systems. The research team consists of six NASA civil servants and contractors from Boeing Military Aircraft Company, Computational Logic Inc., Odyssey Research Associates, SRI International, University of California at Davis, and Vigyan Inc.

  20. Interfacing the Paramesh Computational Libraries to the Cactus Computational Framework, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We will design and implement an interface between the Paramesh computational libraries, developed and used by groups at NASA GSFC, and the Cactus computational...

  1. High prevalence of frailty in end-stage renal disease

    NARCIS (Netherlands)

    Drost, Diederik; Kalf, Annette; Vogtlander, Nils; van Munster, Barbara C.

Purpose: Prognosis of the increasing number of elderly patients with end-stage renal disease (ESRD) is poor, with high risk of functional decline and mortality. Frailty seems to be a good predictor for those patients that will not benefit from dialysis. Varying prevalences between populations are

  2. Networking at NASA. Johnson Space Center

    Science.gov (United States)

    Garman, John R.

    1991-01-01

    A series of viewgraphs on computer networks at the Johnson Space Center (JSC) are given. Topics covered include information resource management (IRM) at JSC, the IRM budget by NASA center, networks evolution, networking as a strategic tool, the Information Services Directorate charter, and SSC network requirements, challenges, and status.

  3. High-performance computing for airborne applications

    International Nuclear Information System (INIS)

    Quinn, Heather M.; Manuzatto, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-01-01

Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  4. Computer implemented land cover classification using LANDSAT MSS digital data: A cooperative research project between the National Park Service and NASA. 3: Vegetation and other land cover analysis of Shenandoah National Park

    Science.gov (United States)

    Cibula, W. G.

    1981-01-01

Four LANDSAT frames, each corresponding to one of the four seasons, were spectrally classified and processed using NASA-developed computer programs. One data set was selected, or two or more data sets were merged, to improve surface cover classifications. Selected areas representing each spectral class were chosen and transferred to USGS 1:62,500 topographic maps for field use. Ground truth data were gathered to verify the accuracy of the classifications. Acreages were computed for each of the land cover types. The application of elevational data to seasonal LANDSAT frames resulted in the separation of high-elevation meadows (both with and without recently emergent perennial vegetation) as well as areas in oak forests which have an evergreen understory as opposed to other areas which do not.

  5. VME as a front-end electronics system in high energy physics experiments

    International Nuclear Information System (INIS)

    Ohska, T.K.

    1990-01-01

It is only a few years since VME became a standard system, yet the VME system is already much more popular than other systems. The VME system was developed for industrial applications and not for scientific research, and the high energy physics field is a tiny market when compared with the industrial market. Considerations made here indicate that the VME system would be a good one for a rear-end system, but would not be a good candidate for front-end electronics in physics experiments. Furthermore, there is a fear that the VXI bus could become popular in this field of instrumentation, since the VXI system is backed by major suppliers of instrumentation in the high energy physics field. VXI would not be an adequate system for front-end electronics, yet it is advertised to be one. It would be worse to see the VXI system become a standard system for high energy physics instrumentation than the VME system to be one. The VXI system would do a mediocre job, so that people might be misled to think that the VXI system can be used as a front-end system. (N.K.)

  6. The Evolution of the NASA Commercial Crew Program Mission Assurance Process

    Science.gov (United States)

    Canfield, Amy C.

    2016-01-01

    In 2010, the National Aeronautics and Space Administration (NASA) established the Commercial Crew Program (CCP) in order to provide human access to the International Space Station and low Earth orbit via the commercial (non-governmental) sector. A particular challenge to NASA has been how to determine that the Commercial Provider's transportation system complies with programmatic safety requirements. The process used in this determination is the Safety Technical Review Board which reviews and approves provider submitted hazard reports. One significant product of the review is a set of hazard control verifications. In past NASA programs, 100% of these safety critical verifications were typically confirmed by NASA. The traditional Safety and Mission Assurance (S&MA) model does not support the nature of the CCP. To that end, NASA S&MA is implementing a Risk Based Assurance process to determine which hazard control verifications require NASA authentication. Additionally, a Shared Assurance Model is also being developed to efficiently use the available resources to execute the verifications.

  7. Portable Computer Technology (PCT) Research and Development Program Phase 2

    Science.gov (United States)

    Castillo, Michael; McGuire, Kenyon; Sorgi, Alan

    1995-01-01

This project report focused on: (1) Design and development of two Advanced Portable Workstation 2 (APW 2) units. These units incorporate advanced technology features such as a low power Pentium processor, a high resolution color display, National Television Standards Committee (NTSC) video handling capabilities, a Personal Computer Memory Card International Association (PCMCIA) interface, and Small Computer System Interface (SCSI) and ethernet interfaces. (2) Use of these units to integrate and demonstrate advanced wireless network and portable video capabilities. (3) Qualification of the APW 2 systems for use in specific experiments aboard the Mir Space Station. A major objective of the PCT Phase 2 program was to help guide future choices in computing platforms and techniques for meeting National Aeronautics and Space Administration (NASA) mission objectives. The focus was on the development of optimal configurations of computing hardware, software applications, and network technologies for use on NASA missions.

  8. Linear Subpixel Learning Algorithm for Land Cover Classification from WELD using High Performance Computing

    Science.gov (United States)

    Ganguly, S.; Kumar, U.; Nemani, R. R.; Kalia, S.; Michaelis, A.

    2017-12-01

In this work, we use a Fully Constrained Least Squares Subpixel Learning Algorithm to unmix global WELD (Web Enabled Landsat Data) to obtain fractions or abundances of substrate (S), vegetation (V) and dark objects (D) classes. Because of the sheer scale of the data and compute needs, we leveraged the NASA Earth Exchange (NEX) high performance computing architecture to optimize and scale our algorithm for large-scale processing. Subsequently, the S-V-D abundance maps were characterized into 4 classes, namely forest, farmland, water and urban areas (with NPP-VIIRS - National Polar-orbiting Partnership Visible Infrared Imaging Radiometer Suite - nighttime lights data) over California, USA using a Random Forest classifier. Validation of these land cover maps with NLCD (National Land Cover Database) 2011 products and NAFD (North American Forest Dynamics) static forest cover maps showed that an overall classification accuracy of over 91% was achieved, which is a 6% improvement in unmixing-based classification relative to per-pixel based classification. As such, abundance maps continue to offer a useful alternative to high-spatial-resolution data derived classification maps for forest inventory analysis, multi-class mapping for eco-climatic models and applications, fast multi-temporal trend analysis, and societal and policy-relevant applications needed at the watershed scale.
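
    A hedged sketch of fully constrained least squares (FCLS) unmixing as described above, using synthetic endmember spectra and a common implementation trick (a heavily weighted sum-to-one row solved with non-negative least squares); the actual WELD processing chain, endmember spectra, and NEX-scale parallelization are not reproduced here.

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic stand-ins: 6 spectral bands, 3 endmembers for substrate (S),
# vegetation (V) and dark objects (D); a real run would use Landsat/WELD bands.
rng = np.random.default_rng(2)
endmembers = rng.random((6, 3))
true_fractions = np.array([0.6, 0.3, 0.1])
pixel = endmembers @ true_fractions          # noiseless mixed-pixel spectrum

def fcls(E, y, delta=1e3):
    """FCLS unmixing: non-negative fractions that (approximately) sum to one."""
    # Append a heavily weighted row of ones to softly enforce sum-to-one,
    # then let NNLS enforce non-negativity.
    E_aug = np.vstack([E, delta * np.ones((1, E.shape[1]))])
    y_aug = np.append(y, delta)
    fractions, _ = nnls(E_aug, y_aug)
    return fractions

print("recovered fractions:", np.round(fcls(endmembers, pixel), 3))
```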

  9. Creating a Rackspace and NASA Nebula compatible cloud using the OpenStack project (Invited)

    Science.gov (United States)

    Clark, R.

    2010-12-01

NASA and Rackspace have both provided technology to the OpenStack project that allows anyone to create a private Infrastructure as a Service (IaaS) cloud using open source software and commodity hardware. OpenStack is designed and developed completely in the open and with an open governance process. NASA donated Nova, which powers the compute portion of the NASA Nebula Cloud Computing Platform, and Rackspace donated Swift, which powers Rackspace Cloud Files. The project is now in continuous development by NASA, Rackspace, and hundreds of other participants. When you create a private cloud using OpenStack, you will have the ability to easily interact with your private cloud, a government cloud, and an ecosystem of public cloud providers, using the same API.

  10. Many-core technologies: The move to energy-efficient, high-throughput x86 computing (TFLOPS on a chip)

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    With Moore's Law alive and well, more and more parallelism is introduced into all computing platforms at all levels of integration and programming to achieve higher performance and energy efficiency. Especially in the area of High-Performance Computing (HPC) users can entertain a combination of different hardware and software parallel architectures and programming environments. Those technologies range from vectorization and SIMD computation over shared memory multi-threading (e.g. OpenMP) to distributed memory message passing (e.g. MPI) on cluster systems. We will discuss HPC industry trends and Intel's approach to it from processor/system architectures and research activities to hardware and software tools technologies. This includes the recently announced new Intel(r) Many Integrated Core (MIC) architecture for highly-parallel workloads and general purpose, energy efficient TFLOPS performance, some of its architectural features and its programming environment. At the end we will have a br...

  11. Technical characteristics of the TRISPAL system high-energy end

    International Nuclear Information System (INIS)

    Meot, F.

    1996-04-01

This document presents an overview of the principal design of the high-energy end of the TRISPAL high-intensity LINAC system, with detailed schemes of the different constituent parts and of the beam envelopes. These schemes are presented with the geometric and magnetic parameters of the optical elements. The aim of this document is to allow the cost evaluation of the complete system. (J.S.). 5 refs., 5 figs., 5 tabs., 1 append

  12. A NASA-wide approach toward cost-effective, high-quality software through reuse

    Science.gov (United States)

    Scheper, Charlotte O. (Editor); Smith, Kathryn A. (Editor)

    1993-01-01

    NASA Langley Research Center sponsored the second Workshop on NASA Research in Software Reuse on May 5-6, 1992 at the Research Triangle Park, North Carolina. The workshop was hosted by the Research Triangle Institute. Participants came from the three NASA centers, four NASA contractor companies, two research institutes and the Air Force's Rome Laboratory. The purpose of the workshop was to exchange information on software reuse tool development, particularly with respect to tool needs, requirements, and effectiveness. The participants presented the software reuse activities and tools being developed and used by their individual centers and programs. These programs address a wide range of reuse issues. The group also developed a mission and goals for software reuse within NASA. This publication summarizes the presentations and the issues discussed during the workshop.

  13. Heads in the Cloud: A Primer on Neuroimaging Applications of High Performance Computing.

    Science.gov (United States)

    Shatil, Anwar S; Younas, Sohail; Pourreza, Hossein; Figley, Chase R

    2015-01-01

    With larger data sets and more sophisticated analyses, it is becoming increasingly common for neuroimaging researchers to push (or exceed) the limitations of standalone computer workstations. Nonetheless, although high-performance computing platforms such as clusters, grids and clouds are already in routine use by a small handful of neuroimaging researchers to increase their storage and/or computational power, the adoption of such resources by the broader neuroimaging community remains relatively uncommon. Therefore, the goal of the current manuscript is to: 1) inform prospective users about the similarities and differences between computing clusters, grids and clouds; 2) highlight their main advantages; 3) discuss when it may (and may not) be advisable to use them; 4) review some of their potential problems and barriers to access; and finally 5) give a few practical suggestions for how interested new users can start analyzing their neuroimaging data using cloud resources. Although the aim of cloud computing is to hide most of the complexity of the infrastructure management from end-users, we recognize that this can still be an intimidating area for cognitive neuroscientists, psychologists, neurologists, radiologists, and other neuroimaging researchers lacking a strong computational background. Therefore, with this in mind, we have aimed to provide a basic introduction to cloud computing in general (including some of the basic terminology, computer architectures, infrastructure and service models, etc.), a practical overview of the benefits and drawbacks, and a specific focus on how cloud resources can be used for various neuroimaging applications.

  14. Quantum Accelerators for High-performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL]; Britt, Keith A. [ORNL]; Mohiyaddin, Fahd A. [ORNL]

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system to manage these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.
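
    The host-plus-accelerator execution pattern described above can be illustrated with a toy example in which a classical driver loop repeatedly offloads a one-qubit kernel; here a NumPy statevector calculation stands in for the quantum processing unit, so this is a generic sketch of the hybrid model rather than the framework discussed in the paper.

        # Toy hybrid quantum-classical loop: the classical host sweeps a parameter
        # and offloads each kernel evaluation to a (simulated) quantum device.
        # A NumPy statevector stands in for the QPU; a real system would dispatch
        # the kernel to quantum hardware instead.
        import numpy as np

        def quantum_kernel(theta):
            """Offloaded kernel: prepare RY(theta)|0> and return the expectation <Z>."""
            ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                           [np.sin(theta / 2),  np.cos(theta / 2)]])
            state = ry @ np.array([1.0, 0.0])                    # start in |0>
            return float(abs(state[0])**2 - abs(state[1])**2)    # <Z> = P(0) - P(1)

        # Classical driver: search for the parameter that minimizes <Z>.
        thetas = np.linspace(0.0, np.pi, 64)
        best = min(thetas, key=quantum_kernel)
        print("theta minimizing <Z>:", round(float(best), 3))    # expect ~pi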

  15. Summary of Pressure Gain Combustion Research at NASA

    Science.gov (United States)

    Perkins, H. Douglas; Paxson, Daniel E.

    2018-01-01

    NASA has undertaken a systematic exploration of many different facets of pressure gain combustion over the last 25 years in an effort to exploit the inherent thermodynamic advantage of pressure gain combustion over the constant pressure combustion process used in most aerospace propulsion systems. Applications as varied as small-scale UAVs, rotorcraft, subsonic transports, hypersonics and launch vehicles have been considered. In addition to studying pressure gain combustor concepts such as wave rotors, pulse detonation engines, pulsejets, and rotating detonation engines, NASA has studied inlets, nozzles, ejectors and turbines which must also process unsteady flow in an integrated propulsion system. Other design considerations such as acoustic signature, combustor material life and heat transfer that are unique to pressure gain combustors have also been addressed in NASA research projects. In addition to a wide range of experimental studies, a number of computer codes, from 0-D up through 3-D, have been developed or modified to specifically address the analysis of unsteady flow fields. Loss models have also been developed and incorporated into these codes that improve the accuracy of performance predictions and decrease computational time. These codes have been validated numerous times across a broad range of operating conditions, and it has been found that once validated for one particular pressure gain combustion configuration, these codes are readily adaptable to the others. All in all, the documentation of this work has encompassed approximately 170 NASA technical reports, conference papers and journal articles to date. These publications are very briefly summarized herein, providing a single point of reference for all of NASA's pressure gain combustion research efforts. This documentation does not include the significant contributions made by NASA research staff to the programs of other agencies, universities, industrial partners and professional society

  16. Compilation of Abstracts for SC12 Conference Proceedings

    Science.gov (United States)

    Morello, Gina Francine (Compiler)

    2012-01-01

    1 A Breakthrough in Rotorcraft Prediction Accuracy Using Detached Eddy Simulation; 2 Adjoint-Based Design for Complex Aerospace Configurations; 3 Simulating Hypersonic Turbulent Combustion for Future Aircraft; 4 From a Roar to a Whisper: Making Modern Aircraft Quieter; 5 Modeling of Extended Formation Flight on High-Performance Computers; 6 Supersonic Retropropulsion for Mars Entry; 7 Validating Water Spray Simulation Models for the SLS Launch Environment; 8 Simulating Moving Valves for Space Launch System Liquid Engines; 9 Innovative Simulations for Modeling the SLS Solid Rocket Booster Ignition; 10 Solid Rocket Booster Ignition Overpressure Simulations for the Space Launch System; 11 CFD Simulations to Support the Next Generation of Launch Pads; 12 Modeling and Simulation Support for NASA's Next-Generation Space Launch System; 13 Simulating Planetary Entry Environments for Space Exploration Vehicles; 14 NASA Center for Climate Simulation Highlights; 15 Ultrascale Climate Data Visualization and Analysis; 16 NASA Climate Simulations and Observations for the IPCC and Beyond; 17 Next-Generation Climate Data Services: MERRA Analytics; 18 Recent Advances in High-Resolution Global Atmospheric Modeling; 19 Causes and Consequences of Turbulence in the Earth's Protective Shield; 20 NASA Earth Exchange (NEX): A Collaborative Supercomputing Platform; 21 Powering Deep Space Missions: Thermoelectric Properties of Complex Materials; 22 Meeting NASA's High-End Computing Goals Through Innovation; 23 Continuous Enhancements to the Pleiades Supercomputer for Maximum Uptime; 24 Live Demonstrations of 100-Gbps File Transfers Across LANs and WANs; 25 Untangling the Computing Landscape for Climate Simulations; 26 Simulating Galaxies and the Universe; 27 The Mysterious Origin of Stellar Masses; 28 Hot-Plasma Geysers on the Sun; 29 Turbulent Life of Kepler Stars; 30 Modeling Weather on the Sun; 31 Weather on Mars: The Meteorology of Gale Crater; 32 Enhancing Performance of NASA's High-End

  17. Computer simulations for the Mars Atmospheric and Volatile EvolutioN (MAVEN) mission through NASA's "Project Spectra!"

    Science.gov (United States)

    Christofferson, R.; Wood, E. L.; Euler, G.

    2012-12-01

    "Project Spectra!" is a standards-based light science and engineering program on solar system exploration that includes both hands-on paper and pencil activities as well as Flash-based computer games that help students solidify understanding of high-level planetary and solar physics. Using computer interactive games where students experience and manipulate the information makes abstract concepts accessible. Visualizing lessons with multi-media tools solidifies understanding and retention of knowledge. Since students can choose what to watch and explore, the interactives accommodate a broad range of learning styles. Students can go back and forth through the interactives if they've missed a concept or wish to view something again. In the end, students are asked critical thinking questions and conduct web-based research. As a part of the Mars Atmospheric and Volatile EvolutioN (MAVEN) mission education programming, we've developed two new "Project Spectra!" interactives that go hand-in-hand with a paper and pencil activity. The MAVEN mission will study volatiles in the upper atmosphere to help piece together Mars' climate history. In the first interactive, students explore black body radiation, albedo, and a simplified greenhouse effect to establish what factors contribute to overall planetary temperature and how they contribute. Students are asked to create a scenario in which a planet they build and design is able to maintain liquid water on the surface. In the second interactive, students are asked to consider Mars and the conditions needed for Mars to support water on the surface, keeping some variables fixed. Ideally, students will walk away with the very basic and critical elements required for climate studies, which has far-reaching implications beyond the study of Mars. These interactives are currently being pilot tested at Arvada High School in Colorado.

  18. Computer simulations for the Mars Atmospheric and Volatile EvolutioN (MAVEN) mission through NASA's 'Project Spectra!'

    Science.gov (United States)

    Wood, E. L.

    2013-12-01

    'Project Spectra!' is a standards-based light science and engineering program on solar system exploration that includes both hands-on paper and pencil activities as well as Flash-based computer games that help students solidify understanding of high-level planetary and solar physics. Using computer interactive games where students experience and manipulate the information makes abstract concepts accessible. Visualizing lessons with multi-media tools solidifies understanding and retention of knowledge. Since students can choose what to watch and explore, the interactives accommodate a broad range of learning styles. Students can go back and forth through the interactives if they've missed a concept or wish to view something again. In the end, students are asked critical thinking questions and conduct web-based research. As a part of the Mars Atmospheric and Volatile EvolutioN (MAVEN) mission education programming, we've developed two new 'Project Spectra!' interactives that go hand-in-hand with a paper and pencil activity. The MAVEN mission will study volatiles in the upper atmosphere to help piece together Mars' climate history. In the first interactive, students explore black body radiation, albedo, and a simplified greenhouse effect to establish what factors contribute to overall planetary temperature and how they contribute. Students are asked to create a scenario in which a planet they build and design is able to maintain liquid water on the surface. In the second interactive, students are asked to consider Mars and the conditions needed for Mars to support water on the surface, keeping some variables fixed. Ideally, students will walk away with the very basic and critical elements required for climate studies, which has far-reaching implications beyond the study of Mars. These interactives were pilot tested at Arvada High School in Colorado.

  19. The Untold Story of NASA's Trailblazers

    Indian Academy of Sciences (India)

    Johnson, played by Taraji P. Henson, a young African-American 'computer' (the term computer at the time referred to women who manually completed calculations relevant to the scientific problems being considered at NASA at the time). Under the supervision of Dorothy Vaughan, the first woman of color supervisor.

  20. NASA's High Mountain Asia Team (HiMAT): collaborative research to study changes of the High Asia region

    Science.gov (United States)

    Arendt, A. A.; Houser, P.; Kapnick, S. B.; Kargel, J. S.; Kirschbaum, D.; Kumar, S.; Margulis, S. A.; McDonald, K. C.; Osmanoglu, B.; Painter, T. H.; Raup, B. H.; Rupper, S.; Tsay, S. C.; Velicogna, I.

    2017-12-01

    The High Mountain Asia Team (HiMAT) is an assembly of 13 research groups funded by NASA to improve understanding of cryospheric and hydrological changes in High Mountain Asia (HMA). Our project goals are to quantify historical and future variability in weather and climate over the HMA, partition the components of the water budget across HMA watersheds, explore physical processes driving changes, and predict couplings and feedbacks between physical and human systems through assessment of hazards and downstream impacts. These objectives are being addressed through analysis of remote sensing datasets combined with modeling and assimilation methods to enable data integration across multiple spatial and temporal scales. Our work to date has focused on developing improved high resolution precipitation, snow cover and snow water equivalence products through a variety of statistical uncertainty analysis, dynamical downscaling and assimilation techniques. These and other high resolution climate products are being used as input and validation for an assembly of land surface and General Circulation Models. To quantify glacier change in the region we have calculated multidecadal mass balances of a subset of HMA glaciers by comparing commercial satellite imagery with earlier elevation datasets. HiMAT is using these tools and datasets to explore the impact of atmospheric aerosols and surface impurities on surface energy exchanges, to determine drivers of glacier and snowpack melt rates, and to improve our capacity to predict future hydrological variability. Outputs from the climate and land surface assessments are being combined with landslide and glacier lake inventories to refine our ability to predict hazards in the region. Economic valuation models are also being used to assess impacts on water resources and hydropower. Field data of atmospheric aerosol, radiative flux and glacier lake conditions are being collected to provide ground validation for models and remote sensing

  1. K-12 Project Management Education: NASA Hunch Projects

    Science.gov (United States)

    Morgan, Joe; Zhan, Wei; Leonard, Matt

    2013-01-01

    To increase the interest in science, technology, engineering, and math (STEM) among high school students, the National Aeronautics and Space Administration (NASA) created the "High Schools United with NASA to Create Hardware" (HUNCH) program. To enhance the experience of the students, NASA sponsored two additional projects that require…

  2. An Evaluation of a High Pressure Regulator for NASA's Robotic Lunar Lander Spacecraft

    Science.gov (United States)

    Burnside, Christopher G.; Trinh, Huu P.; Pedersen, Kevin W.

    2013-01-01

    The Robotic Lunar Lander (RLL) development project office at NASA Marshall Space Flight Center is currently studying several lunar surface science mission concepts. The focus is on spacecraft carrying multiple science instruments and power systems that will allow extended operations on the lunar surface or other air-less bodies in the solar system. Initial trade studies of launch vehicle options indicate the spacecraft will be significantly mass and volume constrained. Because of the investment by the DOD in low mass, highly volume efficient components, NASA has investigated the potential integration of some of these technologies in space science applications. A 10,000 psig helium pressure regulator test activity has been conducted as part of the overall risk reduction testing for the RLL spacecraft. The regulator was subjected to typical NASA acceptance testing to assess the regulator response to the expected RLL mission requirements. The test results show the regulator can supply helium at a stable outlet pressure of 740 psig within a +/- 5% tolerance band and maintain a lock-up pressure less than 5% above the nominal outlet pressure for all tests conducted. Numerous leak tests demonstrated leakage less than 10^-3 standard cubic centimeters per second (SCCS) for the internal seat leakage at lock-up and less than 10^-5 SCCS for external leakage through the regulator body. The successful test has shown the potential for 10,000 psig helium systems in NASA spacecraft and has reduced risk associated with hardware availability and hardware ability to meet RLL mission requirements.

  3. Front-end data processing using the bit-sliced microprocessor

    International Nuclear Information System (INIS)

    Machen, D.R.

    1979-01-01

    A state-of-the-art computing device, based upon the high-speed bit-sliced microprocessor, was developed into hardware for front-end data processing in both control and experiment applications at the Los Alamos Scientific Laboratory. The CAMAC Instrumentation Standard provides the framework for the high-speed hardware, allowing data acquisition and processing to take place at the data source in a CAMAC crate. 5 figures

  4. Recent Electric Propulsion Development Activities for NASA Science Missions

    Science.gov (United States)

    Pencil, Eric J.

    2009-01-01

    Electric propulsion development within NASA is managed primarily by the In-Space Propulsion Technology Project at the NASA Glenn Research Center for the Science Mission Directorate. The objective of the Electric Propulsion project area is to develop near-term electric propulsion technology to enhance or enable science missions while minimizing risk and cost to the end user. Major hardware tasks include developing NASA's Evolutionary Xenon Thruster (NEXT), developing a long-life High Voltage Hall Accelerator (HIVHAC), developing an advanced feed system, and developing cross-platform components. The objective of the NEXT task is to advance next generation ion propulsion technology readiness. The baseline NEXT system consists of a high-performance, 7-kW ion thruster; a high-efficiency, 7-kW power processor unit (PPU); a highly flexible advanced xenon propellant management system (PMS); a lightweight engine gimbal; and key elements of a digital control interface unit (DCIU) including software algorithms. This design approach was selected to provide future NASA science missions with the greatest value in mission performance benefit at a low total development cost. The objective of the HIVHAC task is to advance the Hall thruster technology readiness for science mission applications. The task seeks to increase specific impulse, throttle-ability and lifetime to make Hall propulsion systems applicable to deep space science missions. The primary application focus for the resulting Hall propulsion system would be cost-capped missions, such as competitively selected, Discovery-class missions. The objective of the advanced xenon feed system task is to demonstrate novel manufacturing techniques that will significantly reduce mass, volume, and footprint size of xenon feed systems over conventional feed systems. This task has focused on the development of a flow control module, which consists of a three-channel flow system based on a piezo-electrically actuated

  5. End-to-End Airplane Detection Using Transfer Learning in Remote Sensing Images

    Directory of Open Access Journals (Sweden)

    Zhong Chen

    2018-01-01

    Full Text Available Airplane detection in remote sensing images remains a challenging problem due to the complexity of backgrounds. In recent years, with the development of deep learning, object detection has also obtained great breakthroughs. For object detection tasks in natural images, such as the PASCAL (Pattern Analysis, Statistical Modelling and Computational Learning) VOC (Visual Object Classes) Challenge, the major trend of current development is to use a large amount of labeled classification data to pre-train the deep neural network as a base network, and then use a small amount of annotated detection data to fine-tune the network for detection. In this paper, we use object detection technology based on deep learning for airplane detection in remote sensing images. In addition to using some characteristics of remote sensing images, some new data augmentation techniques have been proposed. We also use transfer learning and adopt a single deep convolutional neural network and limited training samples to implement end-to-end trainable airplane detection. Classification and positioning are no longer divided into multistage tasks; end-to-end detection attempts to combine them for optimization, which ensures an optimal solution for the final stage. In our experiment, we use remote sensing images of airports collected from Google Earth. The experimental results show that the proposed algorithm is highly accurate and meaningful for remote sensing object detection.
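
    A common way to realize the pre-train-then-fine-tune recipe the abstract describes is shown below; note that it uses torchvision's off-the-shelf Faster R-CNN (requiring a recent torchvision) rather than the authors' own network, and the two-class airplane/background setup and dummy training step are purely illustrative.

        # Transfer-learning sketch for object detection with torchvision: start
        # from a detector pre-trained on a large dataset, then replace and
        # fine-tune the box predictor for a small annotated airplane dataset.
        import torch
        import torchvision
        from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

        model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
        in_features = model.roi_heads.box_predictor.cls_score.in_features
        model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)

        optimizer = torch.optim.SGD([p for p in model.parameters() if p.requires_grad],
                                    lr=0.005, momentum=0.9, weight_decay=5e-4)

        # One illustrative training step on a dummy image with one airplane box.
        model.train()
        images = [torch.rand(3, 512, 512)]
        targets = [{"boxes": torch.tensor([[100.0, 120.0, 260.0, 240.0]]),
                    "labels": torch.tensor([1])}]
        losses = model(images, targets)      # dict of detection losses
        sum(losses.values()).backward()
        optimizer.step()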

  6. Grid computing in high energy physics

    CERN Document Server

    Avery, P

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software r...

  7. Implementation of the Two-Point Angular Correlation Function on a High-Performance Reconfigurable Computer

    Directory of Open Access Journals (Sweden)

    Volodymyr V. Kindratenko

    2009-01-01

    Full Text Available We present a parallel implementation of an algorithm for calculating the two-point angular correlation function as applied in the field of computational cosmology. The algorithm has been specifically developed for a reconfigurable computer. Our implementation utilizes a microprocessor and two reconfigurable processors on a dual-MAP SRC-6 system. The two reconfigurable processors are used as two application-specific co-processors. Two independent computational kernels are simultaneously executed on the reconfigurable processors while data pre-fetching from disk and initial data pre-processing are executed on the microprocessor. The overall end-to-end algorithm execution speedup achieved by this implementation is over 90× as compared to a sequential implementation of the algorithm executed on a single 2.8 GHz Intel Xeon microprocessor.
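
    For readers unfamiliar with the underlying computation, the two-point angular correlation function is built from histograms of pairwise angular separations; a minimal NumPy version of that pair-counting kernel (the portion offloaded to the reconfigurable processors in the paper) might look like the following, assuming the input points are unit vectors. A full TPACF analysis would also require random catalogs and an estimator such as Landy-Szalay.

        # Pair-counting kernel for the two-point angular correlation function:
        # histogram the angular separations between all pairs of points on the sky.
        import numpy as np

        def pair_count_histogram(points, bin_edges_deg):
            dots = np.clip(points @ points.T, -1.0, 1.0)    # pairwise cos(separation)
            iu = np.triu_indices(len(points), k=1)          # unique pairs only
            sep_deg = np.degrees(np.arccos(dots[iu]))
            counts, _ = np.histogram(sep_deg, bins=bin_edges_deg)
            return counts

        rng = np.random.default_rng(0)
        v = rng.normal(size=(1000, 3))
        v /= np.linalg.norm(v, axis=1, keepdims=True)       # random points on a sphere
        print(pair_count_histogram(v, np.linspace(0.0, 180.0, 19)))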

  8. An overview of the NASA electronic components information management system

    Science.gov (United States)

    Kramer, G.; Waterbury, S.

    1991-01-01

    The NASA Parts Project Office (NPPO) comprehensive data system to support all NASA Electric, Electronic, and Electromechanical (EEE) parts management and technical data requirements is described. A phase delivery approach is adopted, comprising four principal phases. Phases 1 and 2 support Space Station Freedom (SSF) and use a centralized architecture with all data and processing kept on a mainframe computer. Phases 3 and 4 support all NASA centers and projects and implement a distributed system architecture, in which data and processing are shared among networked database servers. The Phase 1 system, which became operational in February of 1990, implements a core set of functions. Phase 2, scheduled for release in 1991, adds functions to the Phase 1 system. Phase 3, to be prototyped beginning in 1991 and delivered in 1992, introduces a distributed system, separate from the Phase 1 and 2 system, with a refined semantic data model. Phase 4 extends the data model and functionality of the Phase 3 system to provide support for the NASA design community, including integration with Computer Aided Design (CAD) environments. Phase 4 is scheduled for prototyping in 1992 to 93 and delivery in 1994.

  9. Evaluation of External Memory Access Performance on a High-End FPGA Hybrid Computer

    Directory of Open Access Journals (Sweden)

    Konstantinos Kalaitzis

    2016-10-01

    Full Text Available The motivation of this research was to evaluate the main memory performance of a hybrid supercomputer such as the Convey HC-x, and ascertain how the controller performs in several access scenarios, vis-à-vis hand-coded memory prefetches. Such memory patterns are very useful in stencil computations. The theoretical bandwidth of the memory of the Convey is compared with the results of our measurements. The accurate study of the memory subsystem is particularly useful for users when they are developing their application-specific personality. Experiments were performed to measure the bandwidth between the coprocessor and the memory subsystem. The experiments aimed mainly at measuring the reading access speed of the memory from Application Engines (FPGAs). Different ways of accessing data were used in order to find the most efficient way to access memory. This way was proposed for future work in the Convey HC-x. When performing a series of accesses to memory, non-uniform latencies occur. The Memory Controller of the Convey HC-x in the coprocessor attempts to cover this latency. We measure memory efficiency as the ratio of the number of memory accesses to the number of execution cycles. The result of this measurement converges to one in most cases. In addition, we performed experiments with hand-coded memory accesses. The analysis of the experimental results shows how the memory subsystem and Memory Controllers work. From this work we conclude that the memory controllers do an excellent job, largely because (transparently to the user) they seem to cache large amounts of data, and hence hand-coding is not needed in most situations.

  10. NASA space geodesy program: Catalogue of site information

    Science.gov (United States)

    Bryant, M. A.; Noll, C. E.

    1993-01-01

    This is the first edition of the NASA Space Geodesy Program: Catalogue of Site Information. This catalogue supersedes all previous versions of the Crustal Dynamics Project: Catalogue of Site Information, last published in May 1989. This document is prepared under the direction of the Space Geodesy and Altimetry Projects Office (SGAPO), Code 920.1, Goddard Space Flight Center. SGAPO has assumed the responsibilities of the Crustal Dynamics Project, which officially ended December 31, 1991. The catalog contains information on all NASA supported sites as well as sites from cooperating international partners. This catalog is designed to provide descriptions and occupation histories of high-accuracy geodetic measuring sites employing space-related techniques. The emphasis of the catalog has been in the past, and continues to be with this edition, station information for facilities and remote locations utilizing the Satellite Laser Ranging (SLR), Lunar Laser Ranging (LLR), and Very Long Baseline Interferometry (VLBI) techniques. With the proliferation of high-quality Global Positioning System (GPS) receivers and Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS) transponders, many co-located at established SLR and VLBI observatories, the requirement for accurate station and localized survey information for an ever broadening base of scientists and engineers has been recognized. It is our objective to provide accurate station information to scientific groups interested in these facilities.

  11. Using Long-Distance Scientist Involvement to Enhance NASA Volunteer Network Educational Activities

    Science.gov (United States)

    Ferrari, K.

    2012-12-01

    Since 1999, the NASA/JPL Solar System Ambassadors (SSA) and Solar System Educators (SSEP) programs have used specially-trained volunteers to expand education and public outreach beyond the immediate NASA center regions. Integrating nationwide volunteers in these highly effective programs has helped optimize agency funding set aside for education. Since these volunteers were trained by NASA scientists and engineers, they acted as "stand-ins" for the mission team members in communities across the country. Through the efforts of these enthusiastic volunteers, students gained an increased awareness of NASA's space exploration missions through Solar System Ambassador classroom visits, and teachers across the country became familiarized with NASA's STEM (Science, Technology, Engineering and Mathematics) educational materials through Solar System Educator workshops; however the scientist was still distant. In 2003, NASA started the Digital Learning Network (DLN) to bring scientists into the classroom via videoconferencing. The first equipment was expensive and only schools that could afford the expenditure were able to benefit; however, recent advancements in software allow classrooms to connect to the DLN via personal computers and an internet connection. Through collaboration with the DLN at NASA's Jet Propulsion Laboratory and the Goddard Space Flight Center, Solar System Ambassadors and Solar System Educators in remote parts of the country are able to bring scientists into their classroom visits or workshops as guest speakers. The goals of this collaboration are to provide special elements to the volunteers' event, allow scientists opportunities for education involvement with minimal effort, acquaint teachers with DLN services and enrich students' classroom learning experience.

  12. NASA low-speed centrifugal compressor for 3-D viscous code assessment and fundamental flow physics research

    Science.gov (United States)

    Hathaway, M. D.; Wood, J. R.; Wasserbauer, C. A.

    1991-01-01

    A low speed centrifugal compressor facility recently built by the NASA Lewis Research Center is described. The purpose of this facility is to obtain detailed flow field measurements for computational fluid dynamic code assessment and flow physics modeling in support of Army and NASA efforts to advance small gas turbine engine technology. The facility is heavily instrumented with pressure and temperature probes, both in the stationary and rotating frames of reference, and has provisions for flow visualization and laser velocimetry. The facility will accommodate rotational speeds to 2400 rpm and is rated at pressures to 1.25 atm. The initial compressor stage being tested is geometrically and dynamically representative of modern high-performance centrifugal compressor stages with the exception of Mach number levels. Preliminary experimental investigations of inlet and exit flow uniformity and measurement repeatability are presented. These results demonstrate the high quality of the data which may be expected from this facility. The significance of synergism between computational fluid dynamic analysis and experimentation throughout the development of the low speed centrifugal compressor facility is demonstrated.

  13. Grid Computing in High Energy Physics

    International Nuclear Information System (INIS)

    Avery, Paul

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software resources, regardless of location); (4) collaboration (providing tools that allow members full and fair access to all collaboration resources and enable distributed teams to work effectively, irrespective of location); and (5) education, training and outreach (providing resources and mechanisms for training students and for communicating important information to the public). It is believed that computing infrastructures based on Data Grids and optical networks can meet these challenges and can offer data intensive enterprises in high energy physics and elsewhere a comprehensive, scalable framework for collaboration and resource sharing. A number of Data Grid projects have been underway since 1999. Interestingly, the most exciting and far ranging of these projects are led by collaborations of high energy physicists, computer scientists and scientists from other disciplines in support of experiments with massive, near-term data needs. I review progress in this

  14. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  15. NASA Tech Briefs, October 2013

    Science.gov (United States)

    2013-01-01

    Topics include: A Short-Range Distance Sensor with Exceptional Linearity; Miniature Trace Gas Detector Based on Microfabricated Optical Resonators; Commercial Non-Dispersive Infrared Spectroscopy Sensors for Sub-Ambient Carbon Dioxide Detection; Fast, Large-Area, Wide-Bandgap UV Photodetector for Cherenkov Light Detection; Mission Data System Java Edition Version 7; Adaptive Distributed Environment for Procedure Training (ADEPT); LEGEND, a LEO-to-GEO Environment Debris Model; Electronics/Computers; Millimeter-Wave Localizers for Aircraft-to-Aircraft Approach Navigation; Impedance Discontinuity Reduction Between High-Speed Differential Connectors and PCB Interfaces; SpaceCube Version 1.5; High-Pressure Lightweight Thrusters; Non-Magnetic, Tough, Corrosion- and Wear-Resistant Knives From Bulk Metallic Glasses and Composites; Ambient Dried Aerogels; Applications for Gradient Metal Alloys Fabricated Using Additive Manufacturing; Passivation of Flexible YBCO Superconducting Current Lead With Amorphous SiO2 Layer; Propellant-Flow-Actuated Rocket Engine Igniter; Lightweight Liquid Helium Dewar for High-Altitude Balloon Payloads; Method to Increase Performance of Foil Bearings Through Passive Thermal Management; Unibody Composite Pressurized Structure; JWST Integrated Science Instrument Module Alignment Optimization Tool; Radar Range Sidelobe Reduction Using Adaptive Pulse Compression Technique; Digitally Calibrated TR Modules Enabling Real-Time Beamforming SweepSAR Architectures; Electro-Optic Time-to-Space Converter for Optical Detector Jitter Mitigation; Partially Transparent Petaled Mask/Occulter for Visible-Range Spectrum; Educational NASA Computational and Scientific Studies (enCOMPASS); Coarse-Grain Bandwidth Estimation Scheme for Large-Scale Network; Detection of Moving Targets Using Soliton Resonance Effect; High-Efficiency Nested Hall Thrusters for Robotic Solar System Exploration; High-Voltage Clock Driver for Photon-Counting CCD Characterization; Development of

  16. CHEP95: Computing in high energy physics. Abstracts

    International Nuclear Information System (INIS)

    1995-01-01

    These proceedings cover the technical papers on computation in High Energy Physics, including computer codes, computer devices, control systems, simulations, data acquisition systems. New approaches on computer architectures are also discussed

  17. SPoRT: Transitioning NASA and NOAA Experimental Data to the Operational Weather Community

    Science.gov (United States)

    Jedlovec, Gary J.

    2013-01-01

    Established in 2002 to demonstrate the weather and forecasting application of real-time EOS measurements, the NASA Short-term Prediction Research and Transition (SPoRT) program has grown to be an end-to-end research to operations activity focused on the use of advanced NASA modeling and data assimilation approaches, nowcasting techniques, and unique high-resolution multispectral data from EOS satellites to improve short-term weather forecasts on a regional and local scale. With the ever-broadening application of real-time high resolution satellite data from current EOS, Suomi NPP, and planned JPSS and GOES-R sensors to weather forecast problems, significant challenges arise in the acquisition, delivery, and integration of the new capabilities into the decision making process of the operational weather community. For polar orbiting sensors such as MODIS, AIRS, VIIRS, and CRiS, the use of direct broadcast ground stations is key to the real-time delivery of the data and derived products in a timely fashion. With the ABI on the geostationary GOES-R satellite, the data volumes will likely increase by a factor of 5-10 from current data streams. However, the high data volume and limited bandwidth of end user facilities presents a formidable obstacle to timely access to the data. This challenge can be addressed through the use of subsetting techniques, innovative web services, and the judicious selection of data formats. Many of these approaches have been implemented by SPoRT for the delivery of real-time products to NWS forecast offices and other weather entities. Once available in decision support systems like AWIPS II, these new data and products must be integrated into existing and new displays that allow for the integration of the data with existing operational products in these systems. SPoRT is leading the way in demonstrating this enhanced capability. This paper will highlight the ways SPoRT is overcoming many of the challenges presented by the enormous data

  18. Nonlinear Aeroelastic Analysis of the HIAD TPS Coupon in the NASA 8' High Temperature Tunnel: Theory and Experiment

    Science.gov (United States)

    Goldman, Benjamin D.; Scott, Robert C.; Dowell, Earl H.

    2014-01-01

    The purpose of this work is to develop a set of theoretical and experimental techniques to characterize the aeroelasticity of the thermal protection system (TPS) on the NASA Hypersonic Inflatable Aerodynamic Decelerator (HIAD). A square TPS coupon experiences trailing edge oscillatory behavior during experimental testing in the 8' High Temperature Tunnel (HTT), which may indicate the presence of aeroelastic flutter. Several theoretical aeroelastic models have been developed, each corresponding to a different experimental test configuration. Von Karman large deflection theory is used for the plate-like components of the TPS, along with piston theory for the aerodynamics. The constraints between the individual TPS layers and the presence of a unidirectional foundation at the back of the coupon are included by developing the necessary energy expressions and using the Rayleigh-Ritz method to derive the nonlinear equations of motion. Free vibrations and limit cycle oscillations are computed and the frequencies and amplitudes are compared with accelerometer and photogrammetry data from the experiments.

  19. STARS: An Integrated, Multidisciplinary, Finite-Element, Structural, Fluids, Aeroelastic, and Aeroservoelastic Analysis Computer Program

    Science.gov (United States)

    Gupta, K. K.

    1997-01-01

    A multidisciplinary, finite element-based, highly graphics-oriented, linear and nonlinear analysis capability that includes such disciplines as structures, heat transfer, linear aerodynamics, computational fluid dynamics, and controls engineering has been achieved by integrating several new modules in the original STARS (STructural Analysis RoutineS) computer program. Each individual analysis module is general-purpose in nature and is effectively integrated to yield aeroelastic and aeroservoelastic solutions of complex engineering problems. Examples of advanced NASA Dryden Flight Research Center projects analyzed by the code in recent years include the X-29A, F-18 High Alpha Research Vehicle/Thrust Vectoring Control System, B-52/Pegasus Generic Hypersonics, National AeroSpace Plane (NASP), SR-71/Hypersonic Launch Vehicle, and High Speed Civil Transport (HSCT) projects. Extensive graphics capabilities exist for convenient model development and postprocessing of analysis results. The program is written in modular form in standard FORTRAN language to run on a variety of computers, such as the IBM RISC/6000, SGI, DEC, Cray, and personal computer; associated graphics codes use OpenGL and IBM/graPHIGS language for color depiction. This program is available from COSMIC, the NASA agency for distribution of computer programs.

  20. Microtechnology in Space: NASA's Lab-on-a-Chip Applications Development Program

    Science.gov (United States)

    Monaco, Lisa; Spearing, Scott; Jenkins, Andy; Symonds, Wes; Mayer, Derek; Gouldie, Edd; Wainwright, Norm; Fries, Marc; Maule, Jake; Toporski, Jan

    2004-01-01

    NASA's Marshall Space Flight Center (MSFC) Lab on a Chip Application Development (LOCAD) team has worked with microfluidic technology for the past few years in an effort to support NASA's Mission. In that time, such microfluidic based Lab-on-a-Chip (LOC) systems have become common technology in clinical and diagnostic laboratories. The approach is most attractive due to its highly miniaturized platform and ability to perform reagent handling (i.e., dilution, mixing, separation) and diagnostics for multiple reactions in an integrated fashion. LOCAD, along with Caliper Life Sciences, has successfully developed the first LOC device for macromolecular crystallization using a workstation acquired specifically for designing custom chips, the Caliper 42. LOCAD uses this, along with a novel MSFC-designed and built workstation for microfluidic development. The team has a cadre of LOC devices that can be used to perform initial feasibility testing to determine the efficacy of the LOC approach for a specific application. Once applicability has been established, the LOCAD team, along with the Army's Aviation and Missile Command microfabrication facility, can then begin to custom design and fabricate a device per the user's specifications. This presentation will highlight the LOCAD team's proven and unique expertise that has been utilized to provide end to end capabilities associated with applying microfluidics for applications that include robotic life detection instrumentation, crew health monitoring and microbial and environmental monitoring for human exploration.

  1. The Trick Simulation Toolkit: A NASA/Opensource Framework for Running Time Based Physics Models

    Science.gov (United States)

    Penn, John M.

    2016-01-01

    The Trick Simulation Toolkit is a simulation development environment used to create high fidelity training and engineering simulations at the NASA Johnson Space Center and many other NASA facilities. Its purpose is to generate a simulation executable from a collection of user-supplied models and a simulation definition file. For each Trick-based simulation, Trick automatically provides job scheduling, numerical integration, the ability to write and restore human readable checkpoints, data recording, interactive variable manipulation, a run-time interpreter, and many other commonly needed capabilities. This allows simulation developers to concentrate on their domain expertise and the algorithms and equations of their models. Also included in Trick are tools for plotting recorded data and various other supporting utilities and libraries. Trick is written in C/C++ and Java and supports both Linux and MacOSX computer operating systems. This paper describes Trick's design and use at NASA Johnson Space Center.
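
    To give a feel for the scheduling-plus-integration pattern that Trick automates for its users, here is a generic plain-Python sketch of a fixed-rate simulation loop with a lower-rate data-recording job; it is not Trick's actual API, which is C/C++ based and configured through a simulation definition file.

        # Generic fixed-rate simulation loop of the kind Trick generates for its
        # users: an integration job every step plus a lower-rate data-recording
        # job. Plain-Python sketch only.
        DT = 0.01                 # integration step (s)
        RECORD_EVERY_STEPS = 10   # record every 10 steps (0.1 s)
        G = -9.81                 # gravitational acceleration (m/s^2)

        state = {"t": 0.0, "pos": 100.0, "vel": 0.0}
        log = []

        def integrate(s, dt):
            """Semi-implicit Euler step for a falling-object model."""
            s["vel"] += G * dt
            s["pos"] += s["vel"] * dt
            s["t"] += dt

        step = 0
        while state["pos"] > 0.0:
            integrate(state, DT)
            step += 1
            if step % RECORD_EVERY_STEPS == 0:     # scheduled data recording
                log.append((round(state["t"], 2), round(state["pos"], 2)))

        print(log[-3:])   # last few recorded samples before ground impact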

  2. NASA Technologies that Benefit Society

    Science.gov (United States)

    Griffin, Amanda

    2012-01-01

    Applications developed on Earth of technology needed for space flight have produced thousands of spinoffs that contribute to improving national security, the economy, productivity and lifestyle. Over the course of its history, NASA has nurtured partnerships with the private sector to facilitate the transfer of NASA-developed technology. For every dollar spent on research and development in the space program, it receives $7 back in the form of corporate and personal income taxes from increased jobs and economic growth. A new technology, known as Liquid-metal alloy, is the result of a project funded by NASA's Jet Propulsion Lab. The unique technology is a blend of titanium, zirconium, nickel, copper and beryllium that achieves a strength greater than titanium. NASA plans to use this metal in the construction of a drill that will help in the search for water beneath the surface of Mars. Many other applications include opportunities in aerospace, defense, military, automotive, medical instrumentation and sporting goods. Developed in the 1980s, the original Sun Tigers Inc sunlight-filtering lens has withstood the test of time. This technology was first reported in 1987 by NASA's JPL. Two scientists from JPL were later tasked with studying the harmful effects of radiation produced during laser and welding work. They came up with a transparent welding curtain that absorbs, filters and scatters light to maximize protection of human eyes. The two scientists then began doing business as Eagle Eye Optics. Each pair of sunglasses comes complete with ultraviolet protection, dual layer scratch resistant coating, polarized filters for maximum protection against glare and high visual clarity. Sufficient evidence shows that damage to the eye, especially to the retina, starts much earlier than most people realize. Sun filtering sunglasses are important. Winglets seen at the tips of airplane wings are among aviation's most visible fuel-saving, performance enhancing technology

  3. Low Cost Automated Manufacture of High Efficiency THINS ZTJ PV Blanket Technology (P-NASA12-007), Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA needs lower cost solar arrays with high performance for a variety of missions. While high efficiency, space-qualified solar cells are in themselves costly, >...

  4. NASA-VOF3D: A three-dimensional computer program for incompressible flows with free surfaces

    Science.gov (United States)

    Torrey, M. D.; Mjolsness, R. C.; Stein, L. R.

    1987-07-01

    Presented is the NASA-VOF3D three-dimensional, transient, free-surface hydrodynamics program. This three-dimensional extension of NASA-VOF2D will, in principle, permit treatment in full three-dimensional generality of the wide variety of applications that could be treated by NASA-VOF2D only within the two-dimensional idealization. In particular, it, like NASA-VOF2D, is specifically designed to calculate confined flows in a low g environment. The code is presently restricted to cylindrical geometry. The code is based on the fractional volume-of-fluid method and allows multiple free surfaces with surface tension and wall adhesion. It also has a partial cell treatment that allows curved boundaries and internal obstacles. This report provides a brief discussion of the numerical method, a code listing, and some sample problems.
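
    To convey the fractional volume-of-fluid idea in miniature (each cell carries a fluid volume fraction between 0 and 1 that is transported with the flow), the following deliberately simplified one-dimensional sketch advects the fraction field with a first-order upwind scheme at constant velocity; NASA-VOF3D itself tracks free surfaces in three dimensions with far more sophisticated numerics, so this illustrates only the fraction-field concept, not the program's method.

        # Much-simplified illustration of volume-of-fluid transport in 1-D: each
        # cell stores a fluid volume fraction f in [0, 1], advected here with
        # first-order upwind fluxes at constant positive velocity.
        import numpy as np

        nx, dx, u, dt, steps = 100, 1.0, 1.0, 0.5, 60    # CFL number c = u*dt/dx = 0.5
        c = u * dt / dx
        f = np.zeros(nx)
        f[10:30] = 1.0                                   # initial slug of fluid

        for _ in range(steps):
            f[1:] -= c * (f[1:] - f[:-1])                # donor-cell (upwind) update
            f[0] -= c * f[0]                             # empty inflow at left boundary

        # The slug should have drifted about u*dt*steps/dx = 30 cells downstream.
        print("fluid center of mass:", round(float((f * np.arange(nx)).sum() / f.sum()), 1))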

  5. Heads in the Cloud: A Primer on Neuroimaging Applications of High Performance Computing

    Science.gov (United States)

    Shatil, Anwar S.; Younas, Sohail; Pourreza, Hossein; Figley, Chase R.

    2015-01-01

    With larger data sets and more sophisticated analyses, it is becoming increasingly common for neuroimaging researchers to push (or exceed) the limitations of standalone computer workstations. Nonetheless, although high-performance computing platforms such as clusters, grids and clouds are already in routine use by a small handful of neuroimaging researchers to increase their storage and/or computational power, the adoption of such resources by the broader neuroimaging community remains relatively uncommon. Therefore, the goal of the current manuscript is to: 1) inform prospective users about the similarities and differences between computing clusters, grids and clouds; 2) highlight their main advantages; 3) discuss when it may (and may not) be advisable to use them; 4) review some of their potential problems and barriers to access; and finally 5) give a few practical suggestions for how interested new users can start analyzing their neuroimaging data using cloud resources. Although the aim of cloud computing is to hide most of the complexity of the infrastructure management from end-users, we recognize that this can still be an intimidating area for cognitive neuroscientists, psychologists, neurologists, radiologists, and other neuroimaging researchers lacking a strong computational background. Therefore, with this in mind, we have aimed to provide a basic introduction to cloud computing in general (including some of the basic terminology, computer architectures, infrastructure and service models, etc.), a practical overview of the benefits and drawbacks, and a specific focus on how cloud resources can be used for various neuroimaging applications. PMID:27279746

  6. Heads in the Cloud: A Primer on Neuroimaging Applications of High Performance Computing

    Directory of Open Access Journals (Sweden)

    Anwar S. Shatil

    2015-01-01

    Full Text Available With larger data sets and more sophisticated analyses, it is becoming increasingly common for neuroimaging researchers to push (or exceed) the limitations of standalone computer workstations. Nonetheless, although high-performance computing platforms such as clusters, grids and clouds are already in routine use by a small handful of neuroimaging researchers to increase their storage and/or computational power, the adoption of such resources by the broader neuroimaging community remains relatively uncommon. Therefore, the goal of the current manuscript is to: 1) inform prospective users about the similarities and differences between computing clusters, grids and clouds; 2) highlight their main advantages; 3) discuss when it may (and may not) be advisable to use them; 4) review some of their potential problems and barriers to access; and finally 5) give a few practical suggestions for how interested new users can start analyzing their neuroimaging data using cloud resources. Although the aim of cloud computing is to hide most of the complexity of the infrastructure management from end-users, we recognize that this can still be an intimidating area for cognitive neuroscientists, psychologists, neurologists, radiologists, and other neuroimaging researchers lacking a strong computational background. Therefore, with this in mind, we have aimed to provide a basic introduction to cloud computing in general (including some of the basic terminology, computer architectures, infrastructure and service models, etc.), a practical overview of the benefits and drawbacks, and a specific focus on how cloud resources can be used for various neuroimaging applications.

  7. End to end adaptive congestion control in TCP/IP networks

    CERN Document Server

    Houmkozlis, Christos N

    2012-01-01

    This book provides an adaptive control theory perspective on designing congestion controls for packet-switching networks. Relevant to a wide range of disciplines and industries, including the music industry, computers, image trading, and virtual groups, the text extensively discusses source oriented, or end to end, congestion control algorithms. The book empowers readers with clear understanding of the characteristics of packet-switching networks and their effects on system stability and performance. It provides schemes capable of controlling congestion and fairness and presents real-world app
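
    As a minimal illustration of source-oriented (end-to-end) congestion control, the toy loop below lets a sender adjust its window using only the loss feedback it observes, following the classic additive-increase/multiplicative-decrease rule; it is a generic example, not one of the adaptive-control schemes developed in the book.

        # Toy end-to-end congestion control: the sender sees only end-to-end
        # feedback (whether its offered load exceeded capacity) and applies AIMD:
        # additive increase, multiplicative decrease. Generic illustration only.
        import random

        capacity = 50.0      # bottleneck capacity in packets per RTT (unknown to sender)
        cwnd = 1.0           # congestion window, packets
        history = []

        random.seed(1)
        for rtt in range(200):
            offered = cwnd + random.uniform(-2, 2)      # background traffic jitter
            if offered > capacity:                      # end-to-end loss signal
                cwnd = max(1.0, cwnd / 2.0)             # multiplicative decrease
            else:
                cwnd += 1.0                             # additive increase per RTT
            history.append(cwnd)

        # The window oscillates in a sawtooth around the bottleneck capacity.
        print("mean window over last 100 RTTs:", round(sum(history[-100:]) / 100, 1))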

  8. Eclipse 2017: Through the Eyes of NASA

    Science.gov (United States)

    Mayo, Louis; NASA Heliophysics Education Consortium

    2017-10-01

    The August 21, 2017 total solar eclipse across America was, by all accounts, the biggest science education program ever carried out by NASA, significantly larger than the Curiosity Mars landing and the New Horizons Pluto flyby. Initial accounting estimates that over two billion people were reached, with website hits exceeding five billion. The NASA Science Mission Directorate spent over two years planning and developing this enormous public education program, establishing over 30 official NASA sites along the path of totality, providing imagery from 11 NASA space assets, two high altitude aircraft, and over 50 high altitude balloons. In addition, a special four focal plane ground based solar telescope was developed in partnership with Lunt Solar Systems that observed and processed the eclipse in 6K resolution. NASA EDGE and NASA TV broadcasts during the entirety of totality across the country reached hundreds of millions worldwide. This talk will discuss NASA's strategy, results, and lessons learned; and preview some of the big events we plan to feature in the near future.

  9. An end-to-end computing model for the Square Kilometre Array

    NARCIS (Netherlands)

    Jongerius, R.; Wijnholds, S.; Nijboer, R.; Corporaal, H.

    2014-01-01

    For next-generation radio telescopes such as the Square Kilometre Array, seemingly minor changes in scientific constraints can easily push computing requirements into the exascale domain. The authors propose a model for engineers and astronomers to understand these relations and make tradeoffs in

  10. NASA/CARES dual-use ceramic technology spinoff applications

    Science.gov (United States)

    Powers, Lynn M.; Janosik, Lesley A.; Gyekenyesi, John P.; Nemeth, Noel N.

    1994-01-01

    NASA has developed software that enables American industry to establish the reliability and life of ceramic structures in a wide variety of 21st Century applications. Designing ceramic components to survive at higher temperatures than the capability of most metals and in severe loading environments involves the disciplines of statistics and fracture mechanics. Successful application of advanced ceramics requires knowledge of material properties and the use of a probabilistic brittle material design methodology. The NASA program, known as CARES (Ceramics Analysis and Reliability Evaluation of Structures), is a comprehensive general purpose design tool that predicts the probability of failure of a ceramic component as a function of its time in service. The latest version of this software, CARES/Life, is coupled to several commercially available finite element analysis programs (ANSYS, MSC/NASTRAN, ABAQUS, COSMOS/N4, MARC), resulting in an advanced integrated design tool which is adapted to the computing environment of the user. The NASA-developed CARES software has been successfully used by industrial, government, and academic organizations to design and optimize ceramic components for many demanding applications. Industrial sectors impacted by this program include aerospace, automotive, electronic, medical, and energy applications. Dual-use applications include engine components, graphite and ceramic high temperature valves, TV picture tubes, ceramic bearings, electronic chips, glass building panels, infrared windows, radiant heater tubes, heat exchangers, and artificial hips, knee caps, and teeth.
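
    The probabilistic brittle-material idea at the heart of CARES can be sketched with the simplest two-parameter Weibull model below; CARES/Life itself uses volume-integrated, multiaxial, and time-dependent reliability formulations, so both the formula's simplicity and the parameter values here are illustrative only.

        # Simplest Weibull view of brittle-material reliability: the probability
        # that a ceramic component fails at an applied stress sigma. CARES/Life
        # generalizes this with volume integration, multiaxial stress states, and
        # slow crack growth; the parameters below are illustrative.
        import math

        def failure_probability(sigma, sigma_0, m):
            """Two-parameter Weibull probability of failure at stress sigma (MPa)."""
            return 1.0 - math.exp(-((sigma / sigma_0) ** m))

        sigma_0 = 400.0   # characteristic strength, MPa (illustrative)
        m = 10.0          # Weibull modulus (illustrative)
        for sigma in (200.0, 300.0, 400.0):
            print(f"P_f at {sigma:.0f} MPa = {failure_probability(sigma, sigma_0, m):.4f}")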

  11. NASA's Astronaut Family Support Office

    Science.gov (United States)

    Beven, Gary; Curtis, Kelly D.; Holland, Al W.; Sipes, Walter; VanderArk, Steve

    2014-01-01

    During the NASA-Mir program of the 1990s and due to the challenges inherent in the International Space Station training schedule and operations tempo, it was clear that a special focus on supporting families was a key to overall mission success for the ISS crewmembers pre-, in- and post-flight. To that end, in January 2001 the first Family Services Coordinator was hired by the Behavioral Health and Performance group at NASA JSC and matrixed from Medical Operations into the Astronaut Office's organization. The initial roles and responsibilities were driven by critical needs, including facilitating family communication during training deployments, providing mission-specific and other relevant trainings for spouses, serving as liaison for families with NASA organizations such as Medical Operations, NASA management and the Astronaut Office, and providing assistance to ensure success of an Astronaut Spouses Group. The role of the Family Support Office (FSO) has evolved as the ISS Program has matured and the needs of families changed. The FSO is currently an integral part of the Astronaut Office's ISS Operations Branch. It still serves the critical function of providing information to families, as well as being the primary contact for US and international partner families with resources at JSC. Since crews launch and return on Russian vehicles, the FSO has the added responsibility for coordinating with Flight Crew Operations, the families, and their guests for Soyuz launches, landings, and Direct Return to Houston post-flight. This presentation will provide a summary of the family support services provided for astronauts, and how they have changed with the Program and families the FSO serves. Considerations for future FSO services will be discussed briefly as NASA proposes one year missions and beyond ISS missions. Learning Objective: 1) Obtain an understanding of the reasons a Family Support Office was important for NASA. 2) Become familiar with the services provided for

  12. NASA Airborne Astronomy Ambassadors (AAA) Professional Development and NASA Connections

    Science.gov (United States)

    Backman, D. E.; Clark, C.; Harman, P. K.

    2017-12-01

    NASA's Airborne Astronomy Ambassadors (AAA) program is a three-part professional development (PD) experience for high school physics, astronomy, and earth science teachers. AAA PD consists of: (1) blended learning via webinars, asynchronous content learning, and in-person workshops, (2) a STEM immersion experience at NASA Armstrong's B703 science research aircraft facility in Palmdale, California, and (3) ongoing opportunities for connection with NASA astrophysics and planetary science Subject Matter Experts (SMEs). AAA implementation in 2016-18 involves partnerships between the SETI Institute and seven school districts in northern and southern California. AAAs in the current cohort were selected by the school districts based on criteria developed by AAA program staff working with WestEd evaluation consultants. The selected teachers were then randomly assigned by WestEd to a Group A or B to support controlled testing of student learning. Group A completed their PD during January - August 2017, then participated in NASA SOFIA science flights during fall 2017. Group B will act as a control during the 2017-18 school year, then will complete their professional development and SOFIA flights during 2018. A two-week AAA electromagnetic spectrum and multi-wavelength astronomy curriculum aligned with the Science Framework for California Public Schools and Next Generation Science Standards was developed by program staff for classroom delivery. The curriculum (as well as the AAA's pre-flight PD) capitalizes on NASA content by using "science snapshot" case studies regarding astronomy research conducted by SOFIA. AAAs also interact with NASA SMEs during flight weeks and will translate that interaction into classroom content. The AAA program will make controlled measurements of student gains in standards-based learning plus changes in student attitudes towards STEM, and observe & record the AAAs' implementation of curricular changes. Funded by NASA: NNX16AC51

  13. NASA Aerosciences Activities to Support Human Space Flight

    Science.gov (United States)

    LeBeau, Gerald J.

    2011-01-01

    The Lyndon B. Johnson Space Center (JSC) has been a critical element of the United States' human space flight program for over 50 years. It is home to NASA's Mission Control Center, the astronaut corps, and many major programs and projects including the Space Shuttle Program, International Space Station Program, and the Orion Project. As part of JSC's Engineering Directorate, the Applied Aeroscience and Computational Fluid Dynamics Branch is chartered to provide aerosciences support to all human spacecraft designs and missions for all phases of flight, including ascent, exo-atmospheric, and entry. The presentation will review past and current aeroscience applications and how NASA works to apply a balanced philosophy that leverages ground testing, computational modeling and simulation, and flight testing, to develop and validate related products. The speaker will address associated aspects of aerodynamics, aerothermodynamics, rarefied gas dynamics, and decelerator systems, involving both spacecraft vehicle design and analysis, and operational mission support. From these examples some of NASA's leading aerosciences challenges will be identified. These challenges will be used to provide foundational motivation for the development of specific advanced modeling and simulation capabilities, and will also be used to highlight how development activities are increasingly becoming aligned with flight projects. NASA's efforts to apply principles of innovation and inclusion towards improving its ability to support the myriad of vehicle design and operational challenges will also be briefly reviewed.

  14. Design for reliability: NASA reliability preferred practices for design and test

    Science.gov (United States)

    Lalli, Vincent R.

    1994-01-01

    This tutorial summarizes reliability experience from both NASA and industry and reflects engineering practices that support current and future civil space programs. These practices were collected from various NASA field centers and were reviewed by a committee of senior technical representatives from the participating centers (members are listed at the end). The material for this tutorial was taken from the publication issued by the NASA Reliability and Maintainability Steering Committee (NASA Reliability Preferred Practices for Design and Test. NASA TM-4322, 1991). Reliability must be an integral part of the systems engineering process. Although both disciplines must be weighed equally with other technical and programmatic demands, the application of sound reliability principles will be the key to the effectiveness and affordability of America's space program. Our space programs have shown that reliability efforts must focus on the design characteristics that affect the frequency of failure. Herein, we emphasize that these identified design characteristics must be controlled by applying conservative engineering principles.

  15. New coil end design for the RHIC Arc dipole

    Energy Technology Data Exchange (ETDEWEB)

    Morgan, G.H.; Morgillo, A.; Power, K.; Thompson, P.

    1994-06-01

    To simplify production, the number of parts in the ends, about 64 in each coil end, was reduced by using thicker spacers between the turns, to about 23. A new computer program was written which gives a description of each turn closely resembling the turn as made. The output of this program is processed by newly written computer programs which change the parts descriptions into forms which are used by a computer-controlled, 5-axis milling machine. The solid spacers replace spacers assembled from laminations and improve the fit as well. The parts will be molded during production. The calculated harmonic content of the ends is compared with measurements on the first magnets built with the new ends.

  16. New coil end design for the RHIC Arc dipole

    International Nuclear Information System (INIS)

    Morgan, G.H.; Morgillo, A.; Power, K.; Thompson, P.

    1994-01-01

    To simplify production, the number of parts in the ends, about 64 in each coil end, was reduced by using thicker spacers between the turns, to about 23. A new computer program was written which gives a description of each turn closely resembling the turn as made. The output of this program is processed by newly written computer programs which change the parts descriptions into forms which are used by a computer-controlled, 5-axis milling machine. The solid spacers replace spacers assembled from laminations and improve the fit as well. The parts will be molded during production. The calculated harmonic content of the ends is compared with measurements on the first magnets built with the new ends

  17. The Roots of Beowulf

    Science.gov (United States)

    Fischer, James R.

    2014-01-01

    The first Beowulf Linux commodity cluster was constructed at NASA's Goddard Space Flight Center in 1994 and its origins are a part of the folklore of high-end computing. In fact, the conditions within Goddard that brought the idea into being were shaped by rich historical roots, strategic pressures brought on by the ramp up of the Federal High-Performance Computing and Communications Program, growth of the open software movement, microprocessor performance trends, and the vision of key technologists. This multifaceted story is told here for the first time from the point of view of NASA project management.

  18. Back-end interconnection. A generic concept for high volume manufacturing

    Energy Technology Data Exchange (ETDEWEB)

    Bosman, J.; Budel, T.; De Kok, C.J.G.M.

    2013-10-15

    The general method to realize series connection in thin film PV modules is monolithic interconnection through a sequence of laser scribes (P1, P2 and P3) and layer depositions. This method, however, implies that the deposition processes are interrupted several times, an undesirable situation in high volume processing. In order to eliminate this drawback we focus our developments on the so-called 'back-end interconnection concept' in which series interconnection takes place AFTER the deposition of the functional layers of the thin film PV device. The process of making a back-end interconnection combines laser scribing, curing, sintering and inkjet processes. These different processes interact with each other and are investigated in order to create robust processing strategies that ensure high-volume production. The generic approach created a technology base that can be applied to any thin film PV technology.

  19. High Power MPD Thruster Development at the NASA Glenn Research Center

    Science.gov (United States)

    LaPointe, Michael R.; Mikellides, Pavlos G.; Reddy, Dhanireddy (Technical Monitor)

    2001-01-01

    Propulsion requirements for large platform orbit raising, cargo and piloted planetary missions, and robotic deep space exploration have rekindled interest in the development and deployment of high power electromagnetic thrusters. Magnetoplasmadynamic (MPD) thrusters can effectively process megawatts of power over a broad range of specific impulse values to meet these diverse in-space propulsion requirements. As NASA's lead center for electric propulsion, the Glenn Research Center has established an MW-class pulsed thruster test facility and is refurbishing a high-power steady-state facility to design, build, and test efficient gas-fed MPD thrusters. A complementary numerical modeling effort based on the robust MACH2 code provides a well-balanced program of numerical analysis and experimental validation leading to improved high power MPD thruster performance. This paper reviews the current and planned experimental facilities and numerical modeling capabilities at the Glenn Research Center and outlines program plans for the development of new, efficient high power MPD thrusters.

  20. High Performance Computing in Science and Engineering '14

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2015-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS). The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  1. Robotic end-effector for rewaterproofing shuttle tiles

    Science.gov (United States)

    Manouchehri, Davoud; Hansen, Joseph M.; Wu, Cheng M.; Yamamoto, Brian S.; Graham, Todd

    1992-11-01

    This paper summarizes work by Rockwell International's Space Systems Division's Robotics Group at Downey, California. The work is part of a NASA-led team effort to automate Space Shuttle rewaterproofing in the Orbiter Processing Facility at the Kennedy Space Center and the ferry facility at the Ames-Dryden Flight Research Facility. Rockwell's effort focuses on the rewaterproofing end-effector, whose function is to inject hazardous dimethylethoxysilane into thousands of ceramic tiles on the underside of the orbiter after each flight. The paper has five sections. First, it presents background on the present manual process. Second, end-effector requirements are presented, including safety and interface control. Third, a design is presented for the five end-effector systems: positioning, delivery, containment, data management, and command and control. Fourth, end-effector testing and integration into the total system are described. Lastly, future applications for this technology are discussed.

  2. Adaptive Coding and Modulation Experiment With NASA's Space Communication and Navigation Testbed

    Science.gov (United States)

    Downey, Joseph; Mortensen, Dale; Evans, Michael; Briones, Janette; Tollis, Nicholas

    2016-01-01

    National Aeronautics and Space Administration (NASA)'s Space Communication and Navigation Testbed is an advanced integrated communication payload on the International Space Station. This paper presents results from an adaptive coding and modulation (ACM) experiment over S-band using a direct-to-earth link between the SCaN Testbed and the Glenn Research Center. The testing leverages the established Digital Video Broadcasting Second Generation (DVB-S2) standard to provide various modulation and coding options, and uses the Space Data Link Protocol (Consultative Committee for Space Data Systems (CCSDS) standard) for the uplink and downlink data framing. The experiment was conducted in a challenging environment due to the multipath and shadowing caused by the International Space Station structure. Several approaches for improving the ACM system are presented, including predictive and learning techniques to accommodate signal fades. Performance of the system is evaluated as a function of end-to-end system latency (round-trip delay), and compared to the capacity of the link. Finally, improvements over standard NASA waveforms are presented.

  3. DUKSUP: A Computer Program for High Thrust Launch Vehicle Trajectory Design and Optimization

    Science.gov (United States)

    Spurlock, O. Frank; Williams, Craig H.

    2015-01-01

    From the late 1960s through 1997, the leadership of NASA's Intermediate and Large class unmanned expendable launch vehicle projects resided at the NASA Lewis (now Glenn) Research Center (LeRC). One of LeRC's primary responsibilities --- trajectory design and performance analysis --- was accomplished by an internally-developed analytic three-dimensional computer program called DUKSUP. Because of its Calculus of Variations-based optimization routine, this code was generally more capable of finding optimal solutions than its contemporaries. A derivation of optimal control using the Calculus of Variations is summarized including transversality, intermediate, and final conditions. The two-point boundary value problem is explained. A brief summary of the code's operation is provided, including iteration via the Newton-Raphson scheme and integration of variational and motion equations via a 4th order Runge-Kutta scheme. Main subroutines are discussed. The history of the LeRC trajectory design efforts in the early 1960s is explained within the context of supporting the Centaur upper stage program. How the code was constructed based on the operation of the Atlas/Centaur launch vehicle, the limits of the computers of that era, the limits of the computer programming languages, and the missions it supported are discussed. The vehicles DUKSUP supported (Atlas/Centaur, Titan/Centaur, and Shuttle/Centaur) are briefly described. The types of missions, including Earth orbital and interplanetary, are described. The roles of flight constraints and their impact on launch operations are detailed (such as jettisoning hardware on heating, Range Safety, ground station tracking, and elliptical parking orbits). The computer mainframes on which the code was hosted are described. The applications of the code are detailed, including independent check of contractor analysis, benchmarking, leading edge analysis, and vehicle performance improvement assessments. Several of DUKSUP's many major impacts on
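
    The abstract names two standard numerical building blocks: a 4th-order Runge-Kutta integrator for the motion and variational equations, and Newton-Raphson iteration on the two-point boundary value problem. As a hedged illustration of how those pieces fit together (a toy one-dimensional shooting problem in Python, not DUKSUP's actual code or its Calculus-of-Variations formulation), one might write:

```python
import numpy as np

def rk4_step(f, t, y, h):
    """One classical 4th-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def shoot(f, t0, t1, y0, guess, target, steps=100, iters=20, eps=1e-6):
    """Toy shooting method: adjust an unknown initial value with Newton-Raphson
    (finite-difference derivative) until the integrated end state hits `target`."""
    def end_error(g):
        y = np.array([y0, g], dtype=float)
        h = (t1 - t0) / steps
        t = t0
        for _ in range(steps):
            y = rk4_step(f, t, y, h)
            t += h
        return y[0] - target
    g = guess
    for _ in range(iters):
        r = end_error(g)
        if abs(r) < eps:
            break
        dr = (end_error(g + 1e-6) - r) / 1e-6   # numerical derivative
        g -= r / dr                              # Newton-Raphson update
    return g

# Example: find the initial velocity so that y(1) = 2 for y'' = -9.81 (projectile).
f = lambda t, y: np.array([y[1], -9.81])
v0 = shoot(f, 0.0, 1.0, 0.0, 1.0, 2.0)
print(f"required initial velocity: {v0:.3f} m/s")
```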

  4. Workload assessment of surgeons: correlation between NASA TLX and blinks.

    Science.gov (United States)

    Zheng, Bin; Jiang, Xianta; Tien, Geoffrey; Meneghetti, Adam; Panton, O Neely M; Atkins, M Stella

    2012-10-01

    Blinks are known as an indicator of visual attention and mental stress. In this study, surgeons' mental workload was evaluated utilizing a paper assessment instrument (National Aeronautics and Space Administration Task Load Index, NASA TLX) and by examining their eye blinks. Correlation between these two assessments was reported. Surgeons' eye motions were video-recorded using a head-mounted eye-tracker while the surgeons performed a laparoscopic procedure on a virtual reality trainer. Blink frequency and duration were computed using computer vision technology. The level of workload experienced during the procedure was reported by surgeons using the NASA TLX. A total of 42 valid videos were recorded from 23 surgeons. After blinks were computed, videos were divided into two groups based on the blink frequency: infrequent group (≤ 6 blinks/min) and frequent group (more than 6 blinks/min). Surgical performance (measured by task time and trajectories of tool tips) was not significantly different between these two groups, but NASA TLX scores were significantly different. Surgeons who blinked infrequently reported a higher level of frustration (46 vs. 34, P = 0.047) and higher overall level of workload (57 vs. 47, P = 0.045) than those who blinked more frequently. The correlation coefficients (Pearson test) between NASA TLX and the blink frequency and duration were -0.17 and 0.446. Reduction of blink frequency and shorter blink duration matched the increasing level of mental workload reported by surgeons. The value of using eye-tracking technology for assessment of surgeon mental workload was shown.
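
    As a rough sketch of the statistics described in this abstract (splitting recordings at the 6 blinks/min threshold and computing Pearson correlations between blink measures and NASA TLX scores), the following Python fragment uses invented data and standard SciPy routines; it is not the authors' analysis code and the numbers are illustrative only:

```python
import numpy as np
from scipy import stats

# Hypothetical per-video data: blink frequency (blinks/min), blink duration (s),
# and the corresponding NASA TLX overall workload score (0-100).
blink_freq = np.array([3.1, 5.2, 8.4, 12.0, 4.7, 9.8, 2.5, 7.3])
blink_dur  = np.array([0.21, 0.25, 0.32, 0.35, 0.22, 0.30, 0.19, 0.28])
tlx_score  = np.array([62, 55, 48, 41, 58, 45, 66, 50])

# Split into infrequent (<= 6 blinks/min) and frequent (> 6 blinks/min) groups.
infrequent = tlx_score[blink_freq <= 6]
frequent   = tlx_score[blink_freq > 6]
t, p = stats.ttest_ind(infrequent, frequent)
print(f"mean TLX: infrequent={infrequent.mean():.1f}, "
      f"frequent={frequent.mean():.1f}, p={p:.3f}")

# Pearson correlations between workload and the two blink measures.
r_freq, _ = stats.pearsonr(tlx_score, blink_freq)
r_dur, _  = stats.pearsonr(tlx_score, blink_dur)
print(f"r(TLX, blink frequency) = {r_freq:.2f}, r(TLX, blink duration) = {r_dur:.2f}")
```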

  5. Bringing together high energy physicist and computer scientist

    International Nuclear Information System (INIS)

    Bock, R.K.

    1989-01-01

    The Oxford Conference on Computing in High Energy Physics approached the physics and computing issues with the question, "Can computer science help?" always in mind. This summary is a personal recollection of what I considered to be the highlights of the conference: the parts which contributed to my own learning experience. It can be used as a general introduction to the following papers, or as a brief overview of the current state of computer science within high energy physics. (orig.)

  6. HTMT-class Latency Tolerant Parallel Architecture for Petaflops Scale Computation

    Science.gov (United States)

    Sterling, Thomas; Bergman, Larry

    2000-01-01

    Computational Aero Sciences and other numerically intensive computation disciplines demand computing throughputs substantially greater than the Teraflops scale systems only now becoming available. The related fields of fluids, structures, thermal, combustion, and dynamic controls are among the interdisciplinary areas that in combination with sufficient resolution and advanced adaptive techniques may force performance requirements towards Petaflops. This will be especially true for compute-intensive models such as Navier-Stokes, or when such system models are only part of a larger design optimization computation involving many design points. Yet recent experience with conventional MPP configurations comprising commodity processing and memory components has shown that larger scale frequently results in higher programming difficulty and lower system efficiency. While important advances in system software and algorithmic techniques have had some impact on efficiency and programmability for certain classes of problems, in general it is unlikely that software alone will resolve the challenges to higher scalability. As in the past, future generations of high-end computers may require a combination of hardware architecture and system software advances to enable efficient operation at a Petaflops level. The NASA-led HTMT project has engaged the talents of a broad interdisciplinary team to develop a new strategy in high-end system architecture to deliver petaflops scale computing in the 2004/5 timeframe. The Hybrid-Technology, MultiThreaded parallel computer architecture incorporates several advanced technologies in combination with an innovative dynamic adaptive scheduling mechanism to provide unprecedented performance and efficiency within practical constraints of cost, complexity, and power consumption. The emerging superconductor Rapid Single Flux Quantum electronics can operate at 100 GHz (the record is 770 GHz) and one percent of the power required by convention

  7. NASA program planning on nuclear electric propulsion

    International Nuclear Information System (INIS)

    Bennett, G.L.; Miller, T.J.

    1992-03-01

    As part of the focused technology planning for future NASA space science and exploration missions, NASA has initiated a focused technology program to develop the technologies for nuclear electric propulsion and nuclear thermal propulsion. Beginning in 1990, NASA began a series of interagency planning workshops and meetings to identify key technologies and program priorities for nuclear propulsion. The high-priority, near-term technologies that must be developed to make NEP operational for space exploration include scaling thrusters to higher power, developing high-temperature power processing units, and developing high power, low-mass, long-lived nuclear reactors. 28 refs

  8. High Throughput, High Yield Fabrication of High Quantum Efficiency Back-Illuminated Photon Counting, Far UV, UV, and Visible Detector Arrays

    Science.gov (United States)

    Nikzad, Shouleh; Hoenk, M. E.; Carver, A. G.; Jones, T. J.; Greer, F.; Hamden, E.; Goodsall, T.

    2013-01-01

    In this paper we discuss the high-throughput end-to-end post-fabrication processing of high-performance delta-doped and superlattice-doped silicon imagers for UV, visible, and NIR applications. As an example, we present our results on far ultraviolet and ultraviolet quantum efficiency (QE) in a photon-counting detector array. We have improved the QE by nearly an order of magnitude over microchannel plates (MCPs) that are the state-of-the-art UV detectors for many NASA space missions as well as defense applications. These achievements are made possible by precision interface band engineering of Molecular Beam Epitaxy (MBE) and Atomic Layer Deposition (ALD).

  9. Overview of NASA/OAST efforts related to manufacturing technology

    Science.gov (United States)

    Saunders, N. T.

    1976-01-01

    An overview of some of NASA's current efforts related to manufacturing technology and some possible directions for the future are presented. The topics discussed are: computer-aided design, composite structures, and turbine engine components.

  10. WaterNet: The NASA Water Cycle Solutions Network

    Science.gov (United States)

    Houser, P. R.; Belvedere, D. R.; Pozzi, W. H.; Imam, B.; Schiffer, R.; Lawford, R.; Schlosser, C. A.; Gupta, H.; Welty, C.; Vorosmarty, C.; Matthews, D.

    2007-12-01

    Water is essential to life and directly impacts and constrains society's welfare, progress, and sustainable growth, and is continuously being transformed by climate change, erosion, pollution, and engineering practices. The water cycle is a critical resource for industry, agriculture, natural ecosystems, fisheries, aquaculture, hydroelectric power, recreation, and water supply, and is central to drought, flood, transportation-aviation, and disease hazards. It is therefore a national priority to use advancements in scientific observations and knowledge to develop solutions to the water challenges faced by society. NASA's unique role is to use its view from space to improve water and energy cycle monitoring and prediction. NASA has collected substantial water cycle information and knowledge that must be transitioned to develop solutions for all twelve National Priority Application (NPA) areas. NASA cannot achieve this goal alone; it must establish collaborations and interoperability with existing networks and nodes of research organizations, operational agencies, science communities, and private industry. Therefore, the goal of WaterNet: The NASA Water Cycle Solutions Network is to improve and optimize the sustained ability of water cycle researchers, stakeholders, organizations and networks to interact, identify, harness, and extend NASA research results to augment decision support tools and meet national needs. WaterNet is a catalyst for discovery and sharing of creative solutions to water problems. It serves as a creative, discovery process that is the entry-path for a research-to-solutions systems engineering NASA framework, with the end result of ultimately improving decision support.

  11. Compact, High Energy 2-micron Coherent Doppler Wind Lidar Development for NASA's Future 3-D Winds Measurement from Space

    Science.gov (United States)

    Singh, Upendra N.; Koch, Grady; Yu, Jirong; Petros, Mulugeta; Beyon, Jeffrey; Kavaya, Michael J.; Trieu, Bo; Chen, Songsheng; Bai, Yingxin; Petzar, Paul

    2010-01-01

    This paper presents an overview of 2-micron laser transmitter development at NASA Langley Research Center for coherent-detection lidar profiling of winds. The novel high-energy, 2-micron, Ho:Tm:LuLiF laser technology developed at NASA Langley was employed to study laser technology currently envisioned by NASA for future global coherent Doppler lidar winds measurement. The 250 mJ, 10 Hz laser was designed as an integral part of a compact lidar transceiver developed for future aircraft flight. Ground-based wind profiles made with this transceiver will be presented. NASA Langley is currently funded to build complete Doppler lidar systems using this transceiver for the DC-8 aircraft in autonomous operation. Recently, the LaRC 2-micron coherent Doppler wind lidar system was selected to contribute to the NASA Science Mission Directorate (SMD) Earth Science Division (ESD) hurricane field experiment in 2010 titled Genesis and Rapid Intensification Processes (GRIP). The Doppler lidar system will measure vertical profiles of horizontal vector winds from the DC-8 aircraft using NASA Langley's existing 2-micron, pulsed, coherent detection, Doppler wind lidar system that is ready for DC-8 integration. The measurements will typically extend from the DC-8 to the earth's surface. They will be highly accurate in both wind magnitude and direction. Displays of the data will be provided in real time on the DC-8. The pulsed Doppler wind lidar of NASA Langley Research Center is much more powerful than past Doppler lidars. The operating range, accuracy, range resolution, and time resolution will be unprecedented. We expect the data to play a key role, combined with the other sensors, in improving understanding and predictive algorithms for hurricane strength and track.

  12. End-to-End Performance of the Future MOMA Instrument Aboard the ExoMars Mission

    Science.gov (United States)

    Buch, A.; Pinnick, V. T.; Szopa, C.; Grand, N.; Danell, R.; van Amerom, F. H. W.; Freissinet, C.; Glavin, D. P.; Stalport, F.; Arevalo, R. D., Jr.; Coll, P. J.; Steininger, H.; Raulin, F.; Goesmann, F.; Mahaffy, P. R.; Brinckerhoff, W. B.

    2016-12-01

    After the SAM experiment aboard the Curiosity rover, the Mars Organic Molecule Analyzer (MOMA) experiment aboard the future ExoMars mission will continue the search for organic compounds on the Mars surface, with the advantage that the sample will be extracted as deep as 2 meters below the Martian surface to minimize the effects of radiation and oxidation on organic materials. To analyse the wide range of organic compounds (volatile and non-volatile) in the Martian soil, MOMA combines UV laser desorption/ionization (LDI) with pyrolysis gas chromatography ion trap mass spectrometry (pyr-GC-ITMS). In order to analyse refractory organic compounds and chirality, samples which undergo GC-ITMS analysis may be submitted to a derivatization process, consisting of the reaction of the sample components with specific reactants (MTBSTFA [1], DMF-DMA [2] or TMAH [3]). To optimize and test the performance of the GC-ITMS instrument we have performed several coupling test campaigns between the GC, provided by the French team (LISA, LATMOS, CentraleSupelec), and the MS, provided by the US team (NASA, GSFC). The last campaign was performed with the ITU model, which is similar to the flight model and which includes the oven and the taping station provided by the German team (MPS). The results obtained demonstrate the current status of the end-to-end performance of the gas chromatography-mass spectrometry mode of operation. References: [1] Buch, A. et al. (2009) J. Chrom. A, 43, 143-151. [2] Freissinet et al. (2011) J. Chrom. A, 1306, 59-71. [3] Geffroy-Rodier, C. et al. (2009) JAAP, 85, 454-459. Acknowledgements: Funding provided by the Mars Exploration Program (point of contact, George Tahu, NASA/HQ). MOMA is a collaboration between NASA and ESA (PI Goesmann, MPS). The MOMA-GC team acknowledges support from the French Space Agency (CNES), French National Programme of Planetology (PNP), National French Council (CNRS), and Pierre Simon Laplace Institute.

  13. High Performance Computing-Accelerated Metrology for Large Optical Telescopes, Phase II

    Data.gov (United States)

    National Aeronautics and Space Administration — NASA has unique non-contact precision metrology requirements for dimensionally inspecting the global position and orientation of large and highly-polished...

  14. Debugging a high performance computing program

    Science.gov (United States)

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
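
    The grouping step described in this abstract - gather the calling-instruction address of every thread, bucket threads by address, and display the buckets so outlier threads stand out - can be illustrated in a few lines. This Python sketch operates on hypothetical (thread id, call address) pairs and is only an illustration of the idea, not the patented tooling:

```python
from collections import defaultdict

def group_threads_by_call_address(samples):
    """Group thread ids by the address of the instruction they are executing.

    `samples` is an iterable of (thread_id, call_address) pairs, e.g. gathered
    by attaching a debugger to every rank/thread and reading the program counter.
    """
    groups = defaultdict(list)
    for thread_id, address in samples:
        groups[address].append(thread_id)
    return groups

# Hypothetical snapshot: most threads wait in a barrier at 0x4008F0,
# while thread 7 is stuck somewhere else - a likely defective thread.
samples = [(t, 0x4008F0) for t in range(16) if t != 7] + [(7, 0x41BEEF)]

for address, threads in sorted(group_threads_by_call_address(samples).items(),
                               key=lambda kv: len(kv[1])):
    print(f"{len(threads):3d} thread(s) at {hex(address)}: {threads}")
```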

  15. Development of a space radiation Monte Carlo computer simulation based on the FLUKA and ROOT codes

    CERN Document Server

    Pinsky, L; Ferrari, A; Sala, P; Carminati, F; Brun, R

    2001-01-01

    This NASA funded project is proceeding to develop a Monte Carlo-based computer simulation of the radiation environment in space. With actual funding only initially in place at the end of May 2000, the study is still in the early stage of development. The general tasks have been identified and personnel have been selected. The code to be assembled will be based upon two major existing software packages. The radiation transport simulation will be accomplished by updating the FLUKA Monte Carlo program, and the user interface will employ the ROOT software being developed at CERN. The end-product will be a Monte Carlo-based code which will complement the existing analytic codes such as BRYNTRN/HZETRN presently used by NASA to evaluate the effects of radiation shielding in space. The planned code will possess the ability to evaluate the radiation environment for spacecraft and habitats in Earth orbit, in interplanetary space, on the lunar surface, or on a planetary surface such as Mars. Furthermore, it will be usef...

  16. A Pilot Computer-Aided Design and Manufacturing Curriculum that Promotes Engineering

    Science.gov (United States)

    2002-01-01

    Elizabeth City State University (ECSU) is located in a community that is mostly rural in nature. The area is economically deprived when compared to the rest of the state. Many businesses lack the computerized equipment and skills needed to propel upward in today's technologically advanced society. This project will close the ever-widening gap between advantaged and disadvantaged workers as well as increase their participation with industry, NASA and/or other governmental agencies. Everyone recognizes computer technology as the catalyst for advances in design, prototyping, and manufacturing or the art of machining. Unprecedented quality control and cost-efficiency improvements are recognized through the use of computer technology. This technology has changed the manufacturing industry with advanced high-tech capabilities needed by NASA. With the ever-widening digital divide, we must continue to provide computer technology to those who are socio-economically disadvantaged.

  17. Science@NASA: Direct to People!

    Science.gov (United States)

    Koczor, Ronald J.; Adams, Mitzi; Gallagher, Dennis; Whitaker, Ann (Technical Monitor)

    2002-01-01

    Science@NASA is a science communication effort sponsored by NASA's Marshall Space Flight Center. It is the result of a four year research project between Marshall, the University of Florida College of Journalism and Communications and the internet communications company, Bishop Web Works. The goals of Science@NASA are to inform, inspire, and involve people in the excitement of NASA science by bringing that science directly to them. We stress not only the reporting of the facts of a particular topic, but also the context and importance of the research. Science@NASA involves several levels of activity from academic communications research to production of content for 6 websites, in an integrated process involving all phases of production. A Science Communications Roundtable Process is in place that includes scientists, managers, writers, editors, and Web technical experts. The close connection between the scientists and the writers/editors assures a high level of scientific accuracy in the finished products. The websites each have unique characters and are aimed at different audience segments: 1. http://science.nasa.gov. (SNG) Carries stories featuring various aspects of NASA science activity. The site carries 2 or 3 new stories each week in written and audio formats for science-attentive adults. 2. http://liftoff.msfc.nasa.gov. Features stories from SNG that are recast for a high school level audience. J-Track and J-Pass applets for tracking satellites are our most popular product. 3. http://kids.msfc.nasa.gov. This is the Nursemaids site and is aimed at a middle school audience. The NASAKids Club is a new feature at the site. 4. http://www.thursdaysclassroom.com. This site features lesson plans and classroom activities for educators centered around one of the science stories carried on SNG. 5. http://www.spaceweather.com. This site gives the status of solar activity and its interactions with the Earth's ionosphere and magnetosphere.

  18. Accessing NASA Technology with the World Wide Web

    Science.gov (United States)

    Nelson, Michael L.; Bianco, David J.

    1995-01-01

    NASA Langley Research Center (LaRC) began using the World Wide Web (WWW) in the summer of 1993, becoming the first NASA installation to provide a Center-wide home page. This coincided with a reorganization of LaRC to provide a more concentrated focus on technology transfer to both aerospace and non-aerospace industry. Use of WWW and NCSA Mosaic not only provides automated information dissemination, but also allows for the implementation, evolution and integration of many technology transfer and technology awareness applications. This paper describes several of these innovative applications, including the on-line presentation of the entire Technology OPportunities Showcase (TOPS), an industrial partnering showcase that exists on the Web long after the actual 3-day event ended. The NASA Technical Report Server (NTRS) provides uniform access to many logically similar, yet physically distributed NASA report servers. WWW is also the foundation of the Langley Software Server (LSS), an experimental software distribution system which will distribute LaRC-developed software. In addition to the more formal technology distribution projects, WWW has been successful in connecting people with technologies and people with other people.

  19. Agglomeration Economies and the High-Tech Computer

    OpenAIRE

    Wallace, Nancy E.; Walls, Donald

    2004-01-01

    This paper considers the effects of agglomeration on the production decisions of firms in the high-tech computer cluster. We build upon an alternative definition of the high-tech computer cluster developed by Bardhan et al. (2003) and we exploit a new data source, the National Establishment Time-Series (NETS) Database, to analyze the spatial distribution of firms in this industry. An essential contribution of this research is the recognition that high-tech firms are heterogeneous collections ...

  20. Terahertz Computed Tomography of NASA Thermal Protection System Materials

    Science.gov (United States)

    Roth, D. J.; Reyes-Rodriguez, S.; Zimdars, D. A.; Rauser, R. W.; Ussery, W. W.

    2011-01-01

    A terahertz axial computed tomography system has been developed that uses time domain measurements in order to form cross-sectional image slices and three-dimensional volume renderings of terahertz-transparent materials. The system can inspect samples as large as 0.0283 cubic meters (1 cubic foot) without the safety concerns associated with x-ray computed tomography. In this study, the system is evaluated for its ability to detect and characterize flat-bottom holes, drilled holes, and embedded voids in foam materials utilized as thermal protection on the external fuel tanks for the Space Shuttle. X-ray micro-computed tomography was also performed on the samples to compare against the terahertz computed tomography results and better define embedded voids. Limits of detectability based on depth and size for the samples used in this study are loosely defined. Image sharpness and morphology characterization ability for terahertz computed tomography are qualitatively described.

  1. Status report of the end-to-end ASKAP software system: towards early science operations

    Science.gov (United States)

    Guzman, Juan Carlos; Chapman, Jessica; Marquarding, Malte; Whiting, Matthew

    2016-08-01

    The Australian SKA Pathfinder (ASKAP) is a novel centimetre radio synthesis telescope currently in the commissioning phase and located in the midwest region of Western Australia. It comprises 36 x 12 m diameter reflector antennas, each equipped with state-of-the-art, award-winning Phased Array Feed (PAF) technology. The PAFs provide a wide, 30 square degree field-of-view by forming up to 36 separate dual-polarisation beams at once. This results in a high data rate: 70 TB of correlated visibilities in an 8-hour observation, requiring custom-written, high-performance software running in dedicated High Performance Computing (HPC) facilities. The first six antennas equipped with first-generation PAF technology (Mark I), named the Boolardy Engineering Test Array (BETA), have been in use since 2014 as a platform to test PAF calibration and imaging techniques, and along the way it has been producing some great science results. Commissioning of the ASKAP Array Release 1, that is, the first six antennas with second-generation PAFs (Mark II), is currently under way. An integral part of the instrument is the Central Processor platform hosted at the Pawsey Supercomputing Centre in Perth, which executes custom-written software pipelines, designed specifically to meet the ASKAP imaging requirements of wide field of view and high dynamic range. There are three key hardware components of the Central Processor: The ingest nodes (16 x node cluster), the fast temporary storage (1 PB Lustre file system) and the processing supercomputer (200 TFlop system). This High-Performance Computing (HPC) platform is managed and supported by the Pawsey support team. Due to the limited amount of data generated by BETA and the first ASKAP Array Release, the Central Processor platform has been running in a more "traditional" or user-interactive mode. But this is about to change: integration and verification of the online ingest pipeline starts in early 2016, which is required to support the full

  2. The NASA Plan: To award eight percent of prime and subcontracts to socially and economically disadvantaged businesses

    Science.gov (United States)

    1990-01-01

    It is NASA's intent to provide small disadvantaged businesses, including women-owned businesses, historically black colleges and universities, and minority education institutions, the maximum practicable opportunity to receive a fair proportion of NASA prime and subcontracted awards. Annually, NASA will establish socioeconomic procurement goals including small disadvantaged business goals, with a target of reaching the eight percent level by the end of FY 1994. The NASA Associate Administrators, who are responsible for the programs at the various NASA Centers, will be held accountable for full implementation of the socioeconomic procurement plans. Various aspects of this plan, including its history, are discussed.

  3. Free-time and fixed end-point multi-target optimal control theory: Application to quantum computing

    International Nuclear Information System (INIS)

    Mishima, K.; Yamashita, K.

    2011-01-01

    Graphical abstract: The two-state Deutsch-Jozsa algorithm is used to demonstrate the utility of free-time and fixed end-point multi-target optimal control theory. Research highlights: → Free-time and fixed end-point multi-target optimal control theory (FRFP-MTOCT) was constructed. → The features of our theory include optimization of the external time-dependent perturbations with high transition probabilities, that of the temporal duration, the monotonic convergence, and the ability to optimize multiple-laser pulses simultaneously. → The advantage of the theory and a comparison with conventional fixed-time and fixed end-point multi-target optimal control theory (FIFP-MTOCT) are presented by comparing data calculated using the present theory with those published previously [K. Mishima, K. Yamashita, Chem. Phys. 361 (2009) 106]. → The qubit system of our interest consists of two polar NaCl molecules coupled by dipole-dipole interaction. → The calculation examples show that our theory is useful for minor adjustment of the external fields. - Abstract: An extension of free-time and fixed end-point optimal control theory (FRFP-OCT) to monotonically convergent free-time and fixed end-point multi-target optimal control theory (FRFP-MTOCT) is presented. The features of our theory include optimization of the external time-dependent perturbations with high transition probabilities, that of the temporal duration, the monotonic convergence, and the ability to optimize multiple-laser pulses simultaneously. The advantage of the theory and a comparison with conventional fixed-time and fixed end-point multi-target optimal control theory (FIFP-MTOCT) are presented by comparing data calculated using the present theory with those published previously [K. Mishima, K. Yamashita, Chem. Phys. 361, (2009), 106]. The qubit system of our interest consists of two polar NaCl molecules coupled by dipole-dipole interaction. The calculation examples show that our theory is useful for minor
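
    For orientation, a commonly used fixed-time, fixed end-point optimal control functional in quantum control maximizes the overlap with a target state while penalizing laser fluence; it is shown here for a single target and is not necessarily the exact functional of the cited work:

```latex
% Generic fluence-penalized target-overlap functional (single target, fixed final time T)
J[E] = \bigl|\langle \psi(T) \mid \phi_{\mathrm{tar}} \rangle\bigr|^{2}
       - \alpha \int_{0}^{T} \bigl|E(t)\bigr|^{2}\,\mathrm{d}t ,
\qquad
i\hbar\,\partial_t \lvert \psi(t)\rangle = \hat{H}[E(t)]\,\lvert \psi(t)\rangle .
```

    The multi-target and free-time variants discussed in the abstract sum such overlap terms over several target states and additionally treat the final time T itself as an optimization variable.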

  4. Software Systems for High-performance Quantum Computing

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL]; Britt, Keith A. [ORNL]

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present the quantum programming and execution models, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  5. CSP: A Multifaceted Hybrid Architecture for Space Computing

    Science.gov (United States)

    Rudolph, Dylan; Wilson, Christopher; Stewart, Jacob; Gauvin, Patrick; George, Alan; Lam, Herman; Crum, Gary Alex; Wirthlin, Mike; Wilson, Alex; Stoddard, Aaron

    2014-01-01

    Research on the CHREC Space Processor (CSP) takes a multifaceted hybrid approach to embedded space computing. Working closely with the NASA Goddard SpaceCube team, researchers at the National Science Foundation (NSF) Center for High-Performance Reconfigurable Computing (CHREC) at the University of Florida and Brigham Young University are developing hybrid space computers that feature an innovative combination of three technologies: commercial-off-the-shelf (COTS) devices, radiation-hardened (RadHard) devices, and fault-tolerant computing. Modern COTS processors provide the utmost in performance and energy-efficiency but are susceptible to ionizing radiation in space, whereas RadHard processors are virtually immune to this radiation but are more expensive, larger, less energy-efficient, and generations behind in speed and functionality. By featuring COTS devices to perform the critical data processing, supported by simpler RadHard devices that monitor and manage the COTS devices, and augmented with novel uses of fault-tolerant hardware, software, information, and networking within and between COTS devices, the resulting system can maximize performance and reliability while minimizing energy consumption and cost. NASA Goddard has adopted the CSP concept and technology with plans underway to feature flight-ready CSP boards on two upcoming space missions.

  6. Department of Energy research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-08-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models, the execution of which is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex, and consequently it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  7. The NASA/Baltimore Applications Project (BAP). Computer aided dispatch and communications system for the Baltimore Fire Department: A case study of urban technology application

    Science.gov (United States)

    Levine, A. L.

    1981-01-01

    An engineer and a computer expert from Goddard Space Flight Center were assigned to provide technical assistance in the design and installation of a computer assisted system for dispatching and communicating with fire department personnel and equipment in Baltimore City. Primary contributions were in decision making and management processes. The project is analyzed from four perspectives: (1) fire service; (2) technology transfer; (3) public administration; and (4) innovation. The city benefitted substantially from the approach and competence of the NASA personnel. Given the proper conditions, there are distinct advantages in having a nearby Federal laboratory provide assistance to a city on a continuing basis, as is done in the Baltimore Applications Project.

  8. Integrating aerodynamic surface modeling for computational fluid dynamics with computer aided structural analysis, design, and manufacturing

    Science.gov (United States)

    Thorp, Scott A.

    1992-01-01

    This presentation will discuss the development of a NASA Geometry Exchange Specification for transferring aerodynamic surface geometry between LeRC systems and grid generation software used for computational fluid dynamics research. The proposed specification is based on a subset of the Initial Graphics Exchange Specification (IGES). The presentation will include discussion of how the NASA-IGES standard will accommodate improved computer aided design inspection methods and reverse engineering techniques currently being developed. The presentation is in viewgraph format.

  9. NASA PEMFC Development Background and History

    Science.gov (United States)

    Hoberecht, Mark

    2011-01-01

    NASA has been developing proton-exchange-membrane (PEM) fuel cell power systems for the past decade, as an upgraded technology to the alkaline fuel cells which presently provide power for the Shuttle Orbiter. All fuel cell power systems consist of one or more fuel cell stacks in combination with appropriate balance-of-plant hardware. Traditional PEM fuel cells are characterized as flow-through, in which recirculating reactant streams remove product water from the fuel cell stack. NASA recently embarked on the development of non-flow-through fuel cell systems, in which reactants are dead-ended into the fuel cell stack and product water is removed by internal wicks. This simplifies the fuel cell power system by eliminating the need for pumps to provide reactant circulation, and mechanical water separators to remove the product water from the recirculating reactant streams. By eliminating these mechanical components, the resulting fuel cell power system has lower mass, volume, and parasitic power requirements, along with higher reliability and longer life. Four vendors have designed and fabricated non-flow-through fuel cell stacks under NASA funding. One of these vendors is considered the "baseline" vendor, and the remaining three vendors are competing for the "alternate" role. Each has undergone testing of their stack hardware integrated with a NASA balance-of-plant. Future Exploration applications for this hardware include primary fuel cells for a Lunar Lander and regenerative fuel cells for Surface Systems.

  10. Rapid prototyping of soil moisture estimates using the NASA Land Information System

    Science.gov (United States)

    Anantharaj, V.; Mostovoy, G.; Li, B.; Peters-Lidard, C.; Houser, P.; Moorhead, R.; Kumar, S.

    2007-12-01

    The Land Information System (LIS), developed at the NASA Goddard Space Flight Center, is a functional Land Data Assimilation System (LDAS) that incorporates a suite of land models in an interoperable computational framework. LIS has been integrated into a computational Rapid Prototyping Capabilities (RPC) infrastructure. LIS consists of a core, a number of community land models, data servers, and visualization systems - integrated in a high-performance computing environment. The land surface models (LSM) in LIS incorporate surface and atmospheric parameters of temperature, snow/water, vegetation, albedo, soil conditions, topography, and radiation. Many of these parameters are available from in-situ observations, numerical model analysis, and from NASA, NOAA, and other remote sensing satellite platforms at various spatial and temporal resolutions. The computational resources, available to LIS via the RPC infrastructure, support e-Science experiments involving the global modeling of land-atmosphere studies at 1 km spatial resolution as well as regional studies at finer resolutions. The Noah Land Surface Model, available within LIS, is being used to rapidly prototype soil moisture estimates in order to evaluate the viability of other science applications for decision making purposes. For example, LIS has been used to further extend the utility of the USDA Soil Climate Analysis Network of in-situ soil moisture observations. In addition, LIS also supports data assimilation capabilities that are used to assimilate remotely sensed soil moisture retrievals from the AMSR-E instrument onboard the Aqua satellite. The rapid prototyping of soil moisture estimates using LIS and their applications will be illustrated during the presentation.

  11. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1991-03-15

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour.

  12. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour

  13. Meeting Report--NASA Radiation Biomarker Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Straume, Tore; Amundson, Sally A.; Blakely, William F.; Burns, Frederic J.; Chen, Allen; Dainiak, Nicholas; Franklin, Stephen; Leary, Julie A.; Loftus, David J.; Morgan, William F.; Pellmar, Terry C.; Stolc, Viktor; Turteltaub, Kenneth W.; Vaughan, Andrew T.; Vijayakumar, Srinivasan; Wyrobek, Andrew J.

    2008-05-01

    A summary is provided of presentations and discussions from the NASA Radiation Biomarker Workshop held September 27-28, 2007, at NASA Ames Research Center in Mountain View, California. Invited speakers were distinguished scientists representing key sectors of the radiation research community. Speakers addressed recent developments in the biomarker and biotechnology fields that may provide new opportunities for health-related assessment of radiation-exposed individuals, including for long-duration space travel. Topics discussed include the space radiation environment, biomarkers of radiation sensitivity and individual susceptibility, molecular signatures of low-dose responses, multivariate analysis of gene expression, biomarkers in biodefense, biomarkers in radiation oncology, biomarkers and triage following large-scale radiological incidents, integrated and multiple biomarker approaches, advances in whole-genome tiling arrays, advances in mass-spectrometry proteomics, radiation biodosimetry for estimation of cancer risk in a rat skin model, and confounding factors. Summary conclusions are provided at the end of the report.

  14. Use of a silicon surface-barrier detector for measurement of high-energy end loss electrons in a tandem mirror

    International Nuclear Information System (INIS)

    Saito, T.; Kiwamoto, Y.; Honda, T.; Kasugai, A.; Kurihara, K.; Miyoshi, S.

    1991-01-01

    An apparatus for the measurement of high-energy electrons (10–500 keV) with a silicon surface-barrier detector is described. The apparatus has special features. In particular, a fast CAMAC transient digitizer is used to directly record the waveform of a pulse train from the detector, and pulse heights are then analyzed with a computer instead of with a conventional pulse height analyzer. With this method the system is capable of detecting electrons at a count rate as high as ∼300–400 kilocounts/s without serious deterioration of performance. Moreover, piled-up signals are reliably eliminated from the analysis. The system has been applied to measure electron-cyclotron-resonance-heating-induced end loss electrons in the GAMMA 10 tandem mirror and has yielded information relating to electron heating and diffusion in velocity space.

  15. A wideband high-linearity RF receiver front-end in CMOS

    NARCIS (Netherlands)

    Arkesteijn, V.J.; Klumperink, Eric A.M.; Nauta, Bram

    This paper presents a wideband high-linearity RF receiver-front-end, implemented in standard 0.18 μm CMOS technology. The design employs a noise-canceling LNA in combination with two passive mixers, followed by lowpass-filtering and amplification at IF. The achieved bandwidth is >2 GHz, with a noise

  16. GPU-based high-performance computing for radiation therapy

    International Nuclear Information System (INIS)

    Jia, Xun; Jiang, Steve B; Ziegenhein, Peter

    2014-01-01

    Recent developments in radiation therapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development. A tremendous amount of work has been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we will first give a brief introduction to the GPU hardware structure and programming model. We will then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms will also be presented. (topical review)
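
    As a hedged illustration of the GPU offloading discussed in this review, the sketch below uses CuPy (one possible platform, assuming a CUDA-capable GPU and the cupy package are available) to run an FFT-based convolution, a typical building block of a dose calculation, with a NumPy fallback on the CPU. The array names and sizes are hypothetical and not taken from the paper.

        import numpy as np

        try:
            import cupy as cp      # GPU arrays, if CuPy and a CUDA device are available
            xp = cp
        except ImportError:
            xp = np                # otherwise fall back to the CPU

        # Illustrative fluence map and radial kernel (made-up data).
        fluence = xp.asarray(np.random.rand(512, 512).astype(np.float32))
        kernel = xp.exp(-xp.arange(64, dtype=xp.float32) / 8.0)

        # FFT-based convolution along one axis as a stand-in for one dose-calculation step.
        n_fft = fluence.shape[1] + kernel.size - 1
        F = xp.fft.rfft(fluence, n=n_fft, axis=1)
        K = xp.fft.rfft(kernel, n=n_fft)
        dose = xp.fft.irfft(F * K, n=n_fft, axis=1)

        # Copy the result back to host memory when it was computed on the GPU.
        dose_host = cp.asnumpy(dose) if xp is not np else dose
        print(dose_host.shape)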

  17. Navier-Stokes Simulation of the Air-Conditioning Facility of a Large Modern Computer Room

    Science.gov (United States)

    2005-01-01

    NASA recently assembled one of the world's fastest operational supercomputers to meet the agency's new high performance computing needs. This large-scale system, named Columbia, consists of 20 interconnected SGI Altix 512-processor systems, for a total of 10,240 Intel Itanium-2 processors. High-fidelity CFD simulations were performed for the NASA Advanced Supercomputing (NAS) computer room at Ames Research Center. The purpose of the simulations was to assess the adequacy of the existing air handling and conditioning system and make recommendations for changes in the design of the system if needed. The simulations were performed with NASA's OVERFLOW-2 CFD code which utilizes overset structured grids. A new set of boundary conditions was developed and added to the flow solver for modeling the room's air-conditioning and proper cooling of the equipment. Boundary condition parameters for the flow solver are based on cooler CFM (flow rate) ratings and some reasonable assumptions of flow and heat transfer data for the floor and central processing units (CPU). The geometry modeling from blueprints and grid generation were handled by the NASA Ames software package Chimera Grid Tools (CGT). This geometric model was developed as a CGT-scripted template, which can be easily modified to accommodate any changes in shape and size of the room, locations and dimensions of the CPU racks, disk racks, coolers, power distribution units, and mass-storage system. The compute nodes are grouped in pairs of racks with an aisle in the middle. High-speed connection cables connect the racks with overhead cable trays. The cool air from the cooling units is pumped into the computer room from a sub-floor through perforated floor tiles. The CPU cooling fans draw cool air from the floor tiles, which run along the outside length of each rack, and eject warm air into the center aisle between the racks. This warm air is eventually drawn into the cooling units located near the walls of the room. One

  18. Innovative Educational Aerospace Research at the Northeast High School Space Research Center

    Science.gov (United States)

    Luyet, Audra; Matarazzo, Anthony; Folta, David

    1997-01-01

    Northeast High Magnet School of Philadelphia, Pennsylvania is a proud sponsor of the Space Research Center (SPARC). SPARC, a model program of the Medical, Engineering, and Aerospace Magnet school, provides talented students the capability to successfully exercise full simulations of NASA manned missions. These simulations included low-Earth Shuttle missions and Apollo lunar missions in the past, and will focus on a planetary mission to Mars this year. At the end of each scholastic year, a simulated mission, lasting between one and eight days, is performed involving 75 students as specialists in seven teams. The groups are comprised of Flight Management, Spacecraft Communications (SatCom), Computer Networking, Spacecraft Design and Engineering, Electronics, Rocketry, Robotics, and Medical teams in either the mission operations center or onboard the spacecraft. Software development activities are also required in support of these simulations. The objective of this paper is to present the accomplishments, technology innovations, interactions, and an overview of SPARC with an emphasis on how the program's educational activities parallel NASA mission support and how this education is preparing students for the space frontier.

  19. Computing, Information and Communications Technology (CICT) Website

    Science.gov (United States)

    Hardman, John; Tu, Eugene (Technical Monitor)

    2002-01-01

    The Computing, Information and Communications Technology Program (CICT) was established in 2001 to ensure NASA's continuing leadership in emerging technologies. It is a coordinated, Agency-wide effort to develop and deploy key enabling technologies for a broad range of mission-critical tasks. The NASA CICT program is designed to address Agency-specific computing, information, and communications technology requirements beyond the projected capabilities of commercially available solutions. The areas of technical focus have been chosen for their impact on NASA's missions, their national importance, and the technical challenge they provide to the Program. In order to meet its objectives, the CICT Program is organized into the following four technology-focused projects: 1) Computing, Networking and Information Systems (CNIS); 2) Intelligent Systems (IS); 3) Space Communications (SC); 4) Information Technology Strategic Research (ITSR).

  20. NASA Intelligent Systems Project: Results, Accomplishments and Impact on Science Missions

    Science.gov (United States)

    Coughlan, Joseph C.

    2005-01-01

    The Intelligent Systems Project was responsible for much of NASA's programmatic investment in artificial intelligence and advanced information technologies. IS has completed three major project milestones which demonstrated increased capabilities in autonomy, human centered computing, and intelligent data understanding. Autonomy involves the ability of a robot to place an instrument on a remote surface with a single command cycle. Human centered computing supported a collaborative, mission centric data and planning system for the Mars Exploration Rovers, and data understanding has produced key components of a terrestrial satellite observation system with automated modeling and data analysis capabilities. This paper summarizes the technology demonstrations and the metrics which quantify these new technologies, which are now available for future NASA missions.

  1. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    Science.gov (United States)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The engineering development of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS) requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The nominal and off-nominal characteristics of SLS's elements and subsystems must be understood and matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through FSW certification are an important focus of SLS's development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. To test and validate these M&FM algorithms a dedicated test-bed was developed for full Vehicle Management End-to-End Testing (VMET). For addressing fault management (FM

  2. Design of an end station for a high current ion implantation system

    International Nuclear Information System (INIS)

    Kranik, J.R.

    1979-01-01

    During the last 4 to 5 years IBM has been involved in an effort to develop a high current Ion Implantation system with pre-deposition capabilities. The system is dedicated to Arsenic implants, involving doses > 1 × 10¹⁵ ions/cm² in the energy range of 30 to 60 keV. A major portion of this effort involved the design of an associated end station capable of producing high uniformity implants with beam currents in the 0.5 to 6.0 mA range. The end station contains all components from the exit of the analyzing magnet, including the exit beamline, process chamber, scan system, wafer handling system, high vacuum pumping package, beam optics, dosimetry system, and associated electronic controls. The unit was restricted to a six-wafer (82 mm) batch size to maintain process line compatibility. In addition, implant dose non-uniformity objectives were established at ±3% (2σ) within a wafer and ±2% (2σ) wafer-to-wafer. Also, the system was to be capable of implanting 24 wafers/hour at a dose of 7.5 × 10¹⁵ ions/cm². Major consideration in the design was afforded to high reliability, ease of maintenance and production level throughput capabilities. The rationale and evolution of the final end station design is described. (author)

  3. Development of a global computable general equilibrium model coupled with detailed energy end-use technology

    International Nuclear Information System (INIS)

    Fujimori, Shinichiro; Masui, Toshihiko; Matsuoka, Yuzuru

    2014-01-01

    Highlights: • Detailed energy end-use technology information is considered within a CGE model. • Aggregated macro results of the detailed model are similar to those of the traditional model. • The detailed model shows unique characteristics in the household sector. - Abstract: A global computable general equilibrium (CGE) model integrating detailed energy end-use technologies is developed in this paper. The paper (1) presents how energy end-use technologies are treated within the model and (2) analyzes the characteristics of the model’s behavior. Energy service demand and end-use technologies are explicitly considered, and the share of technologies is determined by a discrete probabilistic function, namely a Logit function, to meet the energy service demand. Coupling with detailed technology information enables the CGE model to give a more realistic representation of energy consumption. The proposed model in this paper is compared with the aggregated traditional model under the same assumptions in scenarios with and without mitigation, roughly consistent with the two-degree climate mitigation target. Although the results of aggregated energy supply and greenhouse gas emissions are similar, there are three main differences between the aggregated and the detailed technologies models. First, GDP losses in mitigation scenarios are lower in the detailed technology model (2.8% in 2050) as compared with the aggregated model (3.2%). Second, price elasticity and autonomous energy efficiency improvement are heterogeneous across regions and sectors in the detailed technology model, whereas the traditional aggregated model generally utilizes a single value for each of these variables. Third, the magnitude of emissions reduction and factors (energy intensity and carbon factor reduction) related to climate mitigation also varies among sectors in the detailed technology model. The household sector in the detailed technology model has a relatively higher reduction for both energy
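
    The discrete Logit choice mentioned in the abstract can be sketched in a few lines of Python. The costs, sensitivity parameter, and demand value below are hypothetical and only illustrate how an energy service demand might be split across competing end-use technologies; the actual model's calibration is not reproduced here.

        import math

        def logit_shares(costs, lam=2.0):
            """Technology shares that decline smoothly with technology cost.

            costs : unit costs of the competing technologies
            lam   : sensitivity; larger values concentrate demand on the cheapest option
            """
            weights = [math.exp(-lam * c) for c in costs]
            total = sum(weights)
            return [w / total for w in weights]

        demand = 100.0                           # energy service demand (arbitrary units)
        shares = logit_shares([1.0, 1.2, 1.5])   # three hypothetical technologies
        print(shares, [demand * s for s in shares])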

  4. End-to-End Trade-space Analysis for Designing Constellation Missions

    Science.gov (United States)

    LeMoigne, J.; Dabney, P.; Foreman, V.; Grogan, P.; Hache, S.; Holland, M. P.; Hughes, S. P.; Nag, S.; Siddiqi, A.

    2017-12-01

    Multipoint measurement missions can provide a significant advancement in science return, and this science interest, coupled with many recent technological advances, is driving a growing trend in exploring distributed architectures for future NASA missions. Distributed Spacecraft Missions (DSMs) leverage multiple spacecraft to achieve one or more common goals. In particular, a constellation is the most general form of DSM, with two or more spacecraft placed into specific orbit(s) for the purpose of serving a common objective (e.g., CYGNSS). Because a DSM architectural trade-space includes both monolithic and distributed design variables, DSM optimization is a large and complex problem with multiple conflicting objectives. Over the last two years, our team has been developing a Trade-space Analysis Tool for Constellations (TAT-C), implemented in common programming languages for pre-Phase A constellation mission analysis. By evaluating alternative mission architectures, TAT-C seeks to minimize cost and maximize performance for pre-defined science goals. This presentation will describe the overall architecture of TAT-C including: a User Interface (UI) at several levels of detail and user expertise; Trade-space Search Requests that are created from the science requirements gathered by the UI and validated by a Knowledge Base; a Knowledge Base to compare the current requests to prior mission concepts to potentially prune the trade-space; and a Trade-space Search Iterator which, with inputs from the Knowledge Base and in collaboration with the Orbit & Coverage, Reduction & Metrics, and Cost & Risk modules, generates multiple potential architectures and their associated characteristics. TAT-C leverages the Goddard Mission Analysis Tool (GMAT) to compute coverage and ancillary data, modeling orbits to balance accuracy and performance. The current version includes uniform and non-uniform Walker constellations as well as Ad-Hoc and precessing constellations, and its
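
    The sketch below illustrates, in Python, the kind of constellation trade-space enumeration TAT-C automates: generating Walker-pattern candidates (total satellites, planes, relative phasing) and pairing them with a few orbit options. It is not TAT-C code, and all parameter ranges are hypothetical.

        from itertools import product

        def walker_candidates(max_sats=8):
            """Yield (t, p, f): total satellites, number of planes, and relative phasing."""
            for t in range(1, max_sats + 1):
                for p in range(1, t + 1):
                    if t % p:              # satellites must divide evenly among planes
                        continue
                    for f in range(p):     # phasing parameter runs from 0 to p-1
                        yield t, p, f

        # Pair each orbital pattern with illustrative altitude/inclination options.
        orbit_options = [(500, 45.0), (600, 97.8)]      # (altitude km, inclination deg)
        architectures = [
            {"t": t, "p": p, "f": f, "alt_km": alt, "inc_deg": inc}
            for (t, p, f), (alt, inc) in product(walker_candidates(6), orbit_options)
        ]
        print(len(architectures), "candidate architectures")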

  5. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  6. NASA Goddard Space Flight Center presents Enhancing Standards Based Science Curriculum through NASA Content Relevancy: A Model for Sustainable Teaching-Research Integration Dr. Robert Gabrys, Raquel Marshall, Dr. Evelina Felicite-Maurice, Erin McKinley

    Science.gov (United States)

    Marshall, R. H.; Gabrys, R.

    2016-12-01

    NASA Goddard Space Flight Center has developed a systemic educator professional development model for the integration of NASA climate change resources into the K-12 classroom. The desired outcome of this model is to prepare teachers in STEM disciplines to be globally engaged and knowledgeable of current climate change research and its potential for content relevancy alignment to standards-based curriculum. The application and mapping of the model are based on the state education needs assessment, alignment to the Next Generation Science Standards (NGSS), and an implementation framework developed by the consortium of district superintendents and their science supervisors. In this presentation, we will demonstrate best practices for extending the concept of inquiry-based and project-based learning through the integration of current NASA climate change research into curriculum unit lessons. This model includes a significant teacher development component focused on capacity development for teacher instruction and pedagogy aimed at aligning NASA climate change research to related NGSS student performance expectations and subsequent Crosscutting Concepts, Science and Engineering Practices, and Disciplinary Core Ideas, a need that was presented by the district steering committee as critical for ensuring sustainability and high impact in the classroom. This model offers a collaborative and inclusive learning community that connects classroom teachers to NASA climate change researchers via an ongoing consultant/mentoring approach. As a result of the first year of implementation of this model, Maryland teachers are implementing NGSS unit lessons that guide students in open-ended research based on current NASA climate change research.

  7. NASA Strategy to Safely Live and Work in the Space Radiation Environment

    Science.gov (United States)

    Cucinotta, Francis; Wu, Honglu; Corbin, Barbara; Sulzman, Frank; Kreneck, Sam

    2007-01-01

    This viewgraph document reviews the space radiation environment, which is a significant potential hazard to NASA's goals for space exploration and for living and working in space. NASA has initiated a peer-reviewed research program that is charged with arriving at an understanding of the space radiation problem. To this end, the NASA Space Radiation Laboratory (NSRL) was constructed to simulate the harsh cosmic and solar radiation found in space. Another piece of the work was to develop a risk modeling tool that integrates the results from research efforts into models of human risk to reduce uncertainties in predicting the risk of carcinogenesis, central nervous system damage, degenerative tissue disease, and acute radiation effects.

  8. Computation of hypersonic flows with finite rate condensation and evaporation of water

    Science.gov (United States)

    Perrell, Eric R.; Candler, Graham V.; Erickson, Wayne D.; Wieting, Alan R.

    1993-01-01

    A computer program for modelling 2D hypersonic flows of gases containing water vapor and liquid water droplets is presented. The effects of interphase mass, momentum and energy transfer are studied. Computations are compared with existing quasi-1D calculations on the nozzle of the NASA Langley Eight Foot High Temperature Tunnel, a hypersonic wind tunnel driven by combustion of natural gas in oxygen enriched air.

  9. NASA Engineering Safety Center NASA Aerospace Flight Battery Systems Working Group 2007 Proactive Task Status

    Science.gov (United States)

    Manzo, Michelle A.

    2007-01-01

    In 2007, the NASA Engineering Safety Center (NESC) chartered the NASA Aerospace Flight Battery Systems Working Group to bring forth and address critical battery-related performance/manufacturing issues for NASA and the aerospace community. A suite of tasks identifying and addressing issues related to Ni-H2 and Li-ion battery chemistries was submitted and selected for implementation. The currently NESC-funded tasks are: (1) Wet Life of Ni-H2 Batteries (2) Binding Procurement (3) NASA Lithium-Ion Battery Guidelines (3a) Li-Ion Performance Assessment (3b) Li-Ion Guidelines Document (3b-i) Assessment of Applicability of Pouch Cells for Aerospace Missions (3b-ii) High Voltage Risk Assessment (3b-iii) Safe Charge Rates for Li-Ion Cells (4) Availability of Source Material for Li-Ion Cells (5) NASA Aerospace Battery Workshop. This presentation provides a brief overview of the tasks in the 2007 plan and serves as an introduction to more detailed discussions on each of the specific tasks.

  10. Disruption Tolerant Networking Flight Validation Experiment on NASA's EPOXI Mission

    Science.gov (United States)

    Wyatt, Jay; Burleigh, Scott; Jones, Ross; Torgerson, Leigh; Wissler, Steve

    2009-01-01

    In October and November of 2008, the Jet Propulsion Laboratory installed and tested essential elements of Delay/Disruption Tolerant Networking (DTN) technology on the Deep Impact spacecraft. This experiment, called Deep Impact Network Experiment (DINET), was performed in close cooperation with the EPOXI project which has responsibility for the spacecraft. During DINET some 300 images were transmitted from the JPL nodes to the spacecraft. Then they were automatically forwarded from the spacecraft back to the JPL nodes, exercising DTN's bundle origination, transmission, acquisition, dynamic route computation, congestion control, prioritization, custody transfer, and automatic retransmission procedures, both on the spacecraft and on the ground, over a period of 27 days. All transmitted bundles were successfully received, without corruption. The DINET experiment demonstrated DTN readiness for operational use in space missions. This activity was part of a larger NASA space DTN development program to mature DTN to flight readiness for a wide variety of mission types by the end of 2011. This paper describes the DTN protocols, the flight demo implementation, validation metrics which were created for the experiment, and validation results.

  11. Batteries at NASA - Today and Beyond

    Science.gov (United States)

    Reid, Concha M.

    2015-01-01

    NASA uses batteries for virtually all of its space missions. Batteries can be bulky and heavy, and some chemistries are more prone to safety issues than others. To meet NASA's needs for safe, lightweight, compact and reliable batteries, scientists and engineers at NASA develop advanced battery technologies that are suitable for space applications and that can satisfy these multiple objectives. Many times, these objectives compete with one another, as the demand for more and more energy in smaller packages dictates that we use higher energy chemistries that are also more energetic by nature. NASA partners with companies and universities, like Xavier University of Louisiana, to pool our collective knowledge and discover innovative technical solutions to these challenges. This talk will discuss a little about NASA's use of batteries and why NASA seeks more advanced chemistries. A short primer on battery chemistries and their chemical reactions is included. Finally, the talk will touch on how the work under the Solid High Energy Lithium Battery (SHELiB) grant to develop solid lithium-ion conducting electrolytes and solid-state batteries can contribute to NASA's mission.

  12. Fuzzy-TLX: using fuzzy integrals for evaluating human mental workload with NASA-Task Load indeX in laboratory and field studies.

    Science.gov (United States)

    Mouzé-Amady, Marc; Raufaste, Eric; Prade, Henri; Meyer, Jean-Pierre

    2013-01-01

    The aim of this study was to assess mental workload in which various load sources must be integrated to derive reliable workload estimates. We report a new algorithm for computing weights from qualitative fuzzy integrals and apply it to the National Aeronautics and Space Administration-Task Load indeX (NASA-TLX) subscales in order to replace the standard pair-wise weighting technique (PWT). In this paper, two empirical studies were reported: (1) In a laboratory experiment, age- and task-related variables were investigated in 53 male volunteers and (2) In a field study, task- and job-related variables were studied on aircrews during 48 commercial flights. The results found in this study were as follows: (i) in the experimental setting, fuzzy estimates were highly correlated with classical (using PWT) estimates; (ii) in real work conditions, replacing PWT by automated fuzzy treatments simplified the NASA-TLX completion; (iii) the algorithm for computing fuzzy estimates provides a new classification procedure sensitive to various variables of work environments and (iv) subjective and objective measures can be used for the fuzzy aggregation of NASA-TLX subscales. NASA-TLX, a classical tool for mental workload assessment, is based on a weighted sum of ratings from six subscales. A new algorithm, which impacts on input data collection and computes weights and indexes from qualitative fuzzy integrals, is evaluated through laboratory and field studies. Pros and cons are discussed.
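
    For reference, the classical weighted NASA-TLX score that the fuzzy procedure replaces can be computed as in the Python sketch below: six 0-100 subscale ratings are averaged with weights taken from the 15 pairwise importance comparisons. The ratings and tallies in the example are invented for illustration.

        SUBSCALES = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

        def tlx_weighted_score(ratings, pairwise_wins):
            """ratings: 0-100 per subscale; pairwise_wins: number of times each
            subscale was picked as more important over the 15 pairwise comparisons."""
            assert sum(pairwise_wins.values()) == 15
            return sum(ratings[s] * pairwise_wins[s] for s in SUBSCALES) / 15.0

        ratings = {"mental": 70, "physical": 20, "temporal": 55,
                   "performance": 40, "effort": 65, "frustration": 30}
        wins = {"mental": 5, "physical": 0, "temporal": 3,
                "performance": 2, "effort": 4, "frustration": 1}
        print(tlx_weighted_score(ratings, wins))    # overall workload on a 0-100 scale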

  13. Public Access to NASA's Earth Science Data

    Science.gov (United States)

    Behnke, J.; James, N.

    2013-12-01

    Many steps have been taken over the past 20 years to make NASA's Earth Science data more accessible to the public. The data collected by NASA represent a significant public investment in research. NASA holds these data in a public trust to promote comprehensive, long-term Earth science research. Consequently, NASA developed a free, open and non-discriminatory policy consistent with existing international policies to maximize access to data and to keep user costs as low as possible. These policies apply to all data archived, maintained, distributed or produced by NASA data systems. The Earth Observing System Data and Information System (EOSDIS) is a major core capability within NASA Earth Science Data System Program. EOSDIS is designed to ingest, process, archive, and distribute data from approximately 90 instruments. Today over 6800 data products are available to the public through the EOSDIS. Last year, EOSDIS distributed over 636 million science data products to the user community, serving over 1.5 million distinct users. The system supports a variety of science disciplines including polar processes, land cover change, radiation budget, and most especially global climate change. A core philosophy of EOSDIS is that the general user is best served by providing discipline specific support for the data. To this end, EOSDIS has collocated NASA Earth science data with centers of science discipline expertise, called Distributed Active Archive Centers (DAACs). DAACs are responsible for data management, archive and distribution of data products. There are currently twelve DAACs in the EOSDIS system. The centralized entrance point to the NASA Earth Science data collection can be found at http://earthdata.nasa.gov. Over the years, we have developed several methods for determining needs of the user community including use of the American Customer Satisfaction Index survey and a broad metrics program. Annually, we work with an independent organization (CFI Group) to send this

  14. High-Degree Neurons Feed Cortical Computations.

    Directory of Open Access Journals (Sweden)

    Nicholas M Timme

    2016-05-01

    Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This preparation and recording method combination produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. These are the first results to show that the extent to
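
    A small Python sketch of the transfer entropy measure used in this study is given below, for binary spike trains with a history length of one sample. The random data are purely illustrative and bear no relation to the 512-electrode recordings analyzed in the paper.

        from collections import Counter
        from math import log2
        import random

        def transfer_entropy(x, y):
            """TE from x to y (bits), history length 1: sum over states of
            p(y1, y0, x0) * log2[ p(y1 | y0, x0) / p(y1 | y0) ]."""
            triples = Counter(zip(y[1:], y[:-1], x[:-1]))
            pairs_source = Counter(zip(y[:-1], x[:-1]))
            pairs_self = Counter(zip(y[1:], y[:-1]))
            singles = Counter(y[:-1])
            n = len(y) - 1
            te = 0.0
            for (y1, y0, x0), count in triples.items():
                p_joint = count / n
                p_full = count / pairs_source[(y0, x0)]
                p_self = pairs_self[(y1, y0)] / singles[y0]
                te += p_joint * log2(p_full / p_self)
            return te

        random.seed(0)
        x = [random.randint(0, 1) for _ in range(10000)]
        y = [0] + x[:-1]                    # y copies x with a one-step delay
        print(transfer_entropy(x, y))       # close to 1 bit for this toy example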

  15. Improvements to the Ionizing Radiation Risk Assessment Program for NASA Astronauts

    Science.gov (United States)

    Semones, E. J.; Bahadori, A. A.; Picco, C. E.; Shavers, M. R.; Flores-McLaughlin, J.

    2011-01-01

    To perform dosimetry and risk assessment, NASA collects astronaut ionizing radiation exposure data from space flight, medical imaging and therapy, aviation training activities and prior occupational exposure histories. Career risk of exposure-induced death (REID) from radiation is limited to 3 percent at a 95 percent confidence level. The Radiation Health Office at Johnson Space Center (JSC) is implementing a program to integrate the gathering, storage, analysis and reporting of astronaut ionizing radiation dose and risk data and records. This work has several motivations, including more efficient analyses and greater flexibility in testing and adopting new methods for evaluating risks. The foundation for these improvements is a set of software tools called the Astronaut Radiation Exposure Analysis System (AREAS). AREAS is a series of MATLAB(Registered TradeMark)-based dose and risk analysis modules that interface with an enterprise level SQL Server database by means of a secure web service. It communicates with other JSC medical and space weather databases to maintain data integrity and consistency across systems. AREAS is part of a larger NASA Space Medicine effort, the Mission Medical Integration Strategy, with the goal of collecting accurate, high-quality and detailed astronaut health data, and then presenting it to medical support personnel securely, reliably, and in a timely manner. The modular approach to the AREAS design accommodates past, current, and future sources of data from active and passive detectors, space radiation transport algorithms, computational phantoms and cancer risk models. Revisions of the cancer risk model, new radiation detection equipment and improved anthropomorphic computational phantoms can be incorporated. Notable hardware updates include the Radiation Environment Monitor (which uses Medipix technology to report real-time, on-board dosimetry measurements), an updated Tissue-Equivalent Proportional Counter, and the Southwest Research Institute

  16. NASA's Internal Space Weather Working Group

    Science.gov (United States)

    St. Cyr, O. C.; Guhathakurta, M.; Bell, H.; Niemeyer, L.; Allen, J.

    2011-01-01

    Measurements from many of NASA's scientific spacecraft are used routinely by space weather forecasters, both in the U.S. and internationally. ACE, SOHO (an ESA/NASA collaboration), STEREO, and SDO provide images and in situ measurements that are assimilated into models and cited in alerts and warnings. A number of years ago, the Space Weather laboratory was established at NASA-Goddard, along with the Community Coordinated Modeling Center. Within that organization, a space weather service center has begun issuing alerts for NASA's operational users. NASA's operational user community includes flight operations for human and robotic explorers; atmospheric drag concerns for low-Earth orbit; interplanetary navigation and communication; and the fleet of unmanned aerial vehicles, high altitude aircraft, and launch vehicles. Over the past three years we have identified internal stakeholders within NASA and formed a Working Group to better coordinate their expertise and their needs. In this presentation we will describe this activity and some of the challenges in forming a diverse working group.

  17. High-performance scientific computing in the cloud

    Science.gov (United States)

    Jorissen, Kevin; Vila, Fernando; Rehr, John

    2011-03-01

    Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.

  18. Quantum Accelerators for High-Performance Computing Systems

    OpenAIRE

    Britt, Keith A.; Mohiyaddin, Fahd A.; Humble, Travis S.

    2017-01-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantu...

  19. Computer-aided engineering in High Energy Physics

    International Nuclear Information System (INIS)

    Bachy, G.; Hauviller, C.; Messerli, R.; Mottier, M.

    1988-01-01

    Computing, long a standard tool in the High Energy Physics community, is being slowly introduced at CERN in the mechanical engineering field. The first major application was structural analysis, followed by Computer-Aided Design (CAD). Development work is now progressing towards Computer-Aided Engineering around a powerful data base. This paper gives examples of the power of this approach applied to engineering for accelerators and detectors.

  20. Turbine Seal Research at NASA GRC

    Science.gov (United States)

    Proctor, Margaret P.; Steinetz, Bruce M.; Delgado, Irebert R.; Hendricks, Robert C.

    2011-01-01

    Low-leakage, long-life turbomachinery seals are important to both Space and Aeronautics Missions: (1) increased payload capability, (2) decreased specific fuel consumption and emissions, and (3) decreased direct operating costs. NASA GRC has a history of significant accomplishments and collaboration with industry and academia in seals research. NASA's unique, state-of-the-art High Temperature, High Speed Turbine Seal Test Facility is an asset to the U.S. Engine / Seal Community. Current focus is on developing experimentally validated compliant, non-contacting, high temperature seal designs, analysis, and design methodologies to enable commercialization.

  1. Computer controlled high voltage system

    Energy Technology Data Exchange (ETDEWEB)

    Kunov, B; Georgiev, G; Dimitrov, L [and others]

    1996-12-31

    A multichannel computer controlled high-voltage power supply system is developed. The basic technical parameters of the system are: output voltage - 100-3000 V, output current - 0-3 mA, maximum number of channels in one crate - 78. 3 refs.

  2. FERMI: a digital Front End and Readout MIcrosystem for high resolution calorimetry

    International Nuclear Information System (INIS)

    Alexanian, H.; Appelquist, G.; Bailly, P.

    1995-01-01

    We present a digital solution for the front-end electronics of high resolution calorimeters at future colliders. It is based on analogue signal compression, high speed A/D converters, a fully programmable pipeline and a digital signal processing (DSP) chain with local intelligence and system supervision. This digital solution is aimed at providing maximal front-end processing power by performing waveform analysis using DSP methods. For the system integration of the multichannel device a multi-chip, silicon-on-silicon multi-chip module (MCM) has been adopted. This solution allows a high level of integration of complex analogue and digital functions, with excellent flexibility in mixing technologies for the different functional blocks. This type of multichip integration provides a high degree of reliability and programmability at both the function and the system level, with the additional possibility of customising the microsystem to detector-specific requirements. For enhanced reliability in high radiation environments, fault tolerance strategies, i.e. redundancy, reconfigurability, majority voting and coding for error detection and correction, are integrated into the design. ((orig.))

  3. High-Precision Computation and Mathematical Physics

    International Nuclear Information System (INIS)

    Bailey, David H.; Borwein, Jonathan M.

    2008-01-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, Ising theory, quantum field theory and experimental mathematics. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
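
    A brief example of the kind of extended-precision arithmetic surveyed here, using the mpmath package (one of several such libraries), is shown below; the precision setting and test expression are arbitrary choices for illustration.

        from mpmath import mp, mpf, sin, pi

        mp.dps = 64                 # work with 64 significant decimal digits

        print(+pi)                  # pi evaluated to the working precision
        x = mpf("1e-8")
        print(sin(x) - x)           # about -1.6667e-25, a difference far below
                                    # the resolution of IEEE 64-bit floating point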

  4. Transverse axial plane anatomy of the temporal bone employing high spatial resolution computed tomography

    International Nuclear Information System (INIS)

    Russell, E.J.; Koslow, M.; Lasjaunias, P.; Bergeron, R.T.; Chase, N.

    1982-01-01

    Anatomical relationships of temporal bone structures are demonstrated by thin-section edge-detection computed tomography. Many otic structures are best appreciated in the axial view, but reorientation to the anatomy as seen in this plane is needed for optimal diagnosis. A level-by-level review of key structures is presented toward this end. The limitations and advantages of computed tomography are discussed. (orig.)

  5. High performance parallel computers for science

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1989-01-01

    This paper reports that Fermilab's Advanced Computer Program (ACP) has been developing cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 Mflops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter and intra crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction.

  6. Computer proficiency questionnaire: assessing low and high computer proficient seniors.

    Science.gov (United States)

    Boot, Walter R; Charness, Neil; Czaja, Sara J; Sharit, Joseph; Rogers, Wendy A; Fisk, Arthur D; Mitzner, Tracy; Lee, Chin Chin; Nair, Sankaran

    2015-06-01

    Computers and the Internet have the potential to enrich the lives of seniors and aid in the performance of important tasks required for independent living. A prerequisite for reaping these benefits is having the skills needed to use these systems, which is highly dependent on proper training. One prerequisite for efficient and effective training is being able to gauge current levels of proficiency. We developed a new measure (the Computer Proficiency Questionnaire, or CPQ) to measure computer proficiency in the domains of computer basics, printing, communication, Internet, calendaring software, and multimedia use. Our aim was to develop a measure appropriate for individuals with a wide range of proficiencies from noncomputer users to extremely skilled users. To assess the reliability and validity of the CPQ, a diverse sample of older adults, including 276 older adults with no or minimal computer experience, was recruited and asked to complete the CPQ. The CPQ demonstrated excellent reliability (Cronbach's α = .98), with subscale reliabilities ranging from .86 to .97. Age, computer use, and general technology use all predicted CPQ scores. Factor analysis revealed three main factors of proficiency related to Internet and e-mail use; communication and calendaring; and computer basics. Based on our findings, we also developed a short-form CPQ (CPQ-12) with similar properties but 21 fewer questions. The CPQ and CPQ-12 are useful tools to gauge computer proficiency for training and research purposes, even among low computer proficient older adults.
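
    The internal-consistency statistic reported above (Cronbach's alpha) can be computed as in the short Python sketch below; the item-response matrix is made up and much smaller than the study's sample.

        import numpy as np

        def cronbach_alpha(items):
            """items: respondents x items matrix of numeric scores."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_var = items.var(axis=0, ddof=1)           # variance of each item
            total_var = items.sum(axis=1).var(ddof=1)      # variance of summed scores
            return (k / (k - 1)) * (1.0 - item_var.sum() / total_var)

        # Hypothetical responses: 6 respondents x 4 items on a 1-5 scale.
        scores = [[5, 4, 5, 4],
                  [2, 2, 3, 2],
                  [4, 4, 4, 5],
                  [1, 2, 1, 2],
                  [3, 3, 4, 3],
                  [5, 5, 5, 4]]
        print(round(cronbach_alpha(scores), 3))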

  7. Improved OMI Nitrogen Dioxide Retrievals Aided by NASA's A-Train High-Resolution Data

    Science.gov (United States)

    Lamsal, L. N.; Krotkov, N. A.; Vasilkov, A. P.; Marchenko, S. V.; Qin, W.; Yang, E. S.; Fasnacht, Z.; Haffner, D. P.; Swartz, W. H.; Spurr, R. J. D.; Joiner, J.

    2017-12-01

    Space-based global observation of nitrogen dioxide (NO2) is among the main objectives of the NASA Aura Ozone Monitoring Instrument (OMI) mission, aimed at advancing our understanding of the sources and trends of nitrogen oxides (NOx). These applications benefit from improved retrieval techniques and enhancement in data quality. Here, we describe our recent and planned updates to the NASA OMI standard NO2 products. The products and documentation are publicly available from the NASA Goddard Earth Sciences Data and Information Services Center (https://disc.gsfc.nasa.gov/datasets/OMNO2_V003/summary/). The major changes include (1) improvements in spectral fitting algorithms for NO2 and cloud, (2) improved information in the vertical distribution of NO2, and (3) use of geometry-dependent surface reflectivity information derived from NASA's Aqua MODIS over land and the Cox-Munk slope distribution over ocean with a contribution from water-leaving radiance. These algorithm updates, which lead to more accurate tropospheric NO2 retrievals from OMI, are relevant for other past, contemporary, and future satellite instruments.

  8. Evolution of data stewardship over two decades at a NASA data center

    Science.gov (United States)

    Armstrong, E. M.; Moroni, D. F.; Hausman, J.; Tsontos, V. M.

    2013-12-01

    physical domain was still critical, especially relevant to assessments of data quality, additional skills in computer science, statistics and system engineering also became necessary. Furthermore, the level of effort to implement data curation has not expanded linearly either. Management of ongoing data operations demands increased productivity on a continual basis, and larger volumes of data, with constraints on funding, must be managed with proportionately fewer human resources. The role of data curation has also changed within the perspective of satellite missions. In many early missions, data management and curation was an afterthought (since there were no explicit data management plans written into the proposals), while current NASA mission proposals must have explicit data management plans to identify resources and funds for archiving, distribution and implementing overall data stewardship. In conclusion, the role of the data scientist/engineer at the PO.DAAC has shifted from supporting singular missions and primarily representing a point of contact for the science community to complete end-to-end stewardship through the implementation of a robust set of dataset lifecycle policies from ingest, to archiving, including data quality assessment for a broad swath of parameter-based datasets that can number in the hundreds.

  9. A model-based software development methodology for high-end automotive components

    NARCIS (Netherlands)

    Ravanan, Mahmoud

    2014-01-01

    This report provides a model-based software development methodology for high-end automotive components. The V-model is used as a process model throughout the development of the software platform. It offers a framework that simplifies the relation between requirements, design, implementation,

  10. The rationale/benefits of nuclear thermal rocket propulsion for NASA's lunar space transportation system

    Science.gov (United States)

    Borowski, Stanley K.

    1994-09-01

    The solid core nuclear thermal rocket (NTR) represents the next major evolutionary step in propulsion technology. With its attractive operating characteristics, which include high specific impulse (approximately 850-1000 s) and engine thrust-to-weight (approximately 4-20), the NTR can form the basis for an efficient lunar space transportation system (LTS) capable of supporting both piloted and cargo missions. Studies conducted at the NASA Lewis Research Center indicate that an NTR-based LTS could transport a fully-fueled, cargo-laden, lunar excursion vehicle to the Moon, and return it to low Earth orbit (LEO) after mission completion, for less initial mass in LEO than an aerobraked chemical system of the type studied by NASA during its '90-Day Study.' The all-propulsive NTR-powered LTS would also be 'fully reusable' and would have a 'return payload' mass fraction of approximately 23 percent, twice that of the 'partially reusable' aerobraked chemical system. Two NTR technology options are examined: one derived from the graphite-moderated reactor concept developed by NASA and the AEC under the Rover/NERVA (Nuclear Engine for Rocket Vehicle Application) programs, and a second concept, the Particle Bed Reactor (PBR). The paper also summarizes NASA's lunar outpost scenario, compares relative performance provided by different LTS concepts, and discusses important operational issues (e.g., reusability, engine 'end-of-life' disposal, etc.) associated with using this important propulsion technology.

  11. NASA/NOAA: Earth Science Electronic Theater 1999

    Science.gov (United States)

    Hasler, A. Fritz

    1999-01-01

    new Earth sensing satellites, HyperImage datasets, because they have such high resolution in the spectral, temporal, spatial, and dynamic range domains. The traditional numerical spreadsheet paradigm has been extended to develop a scientific visualization approach for processing HyperImage datasets and 3D model results interactively. The advantages of extending the powerful spreadsheet style of computation to multiple sets of images and organizing image processing were demonstrated using the Distributed image SpreadSheet (DISS). The DISS is being used as a high performance testbed Next Generation Internet (NGI) VisAnalysis of: 1) El Nino SSTs and NDVI response, 2) the latest GOES 10 5-min rapid scans in a 26-day, 5000-frame movie of March & April '98 weather and tornadic storms, 3) TRMM rainfall and lightning, 4) GOES 9 satellite images/winds and NOAA aircraft radar of hurricane Luis, 5) lightning detector data merged with GOES image sequences, 6) Japanese GMS, TRMM, & ADEOS data, 7) Chinese FY2 data, 8) Meteosat & ERS/ATSR data, 9) synchronized manipulation of multiple 3D numerical model views; and others will be illustrated. The Image SpreadSheet has been highly successful in producing Earth science visualizations for public outreach. Many of these visualizations have been widely disseminated through the world wide web pages of the HPCC/LTP/RSD program, which can be found at http://rsd.gsfc.nasa.gov/rsd. The one-minute-interval animations of Hurricane Luis on ABC Nightline and the color perspective rendering of Hurricane Fran published by TIME, LIFE, Newsweek, Popular Science, National Geographic, Scientific American, and the "Weekly Reader" are some of the examples which will be shown.

  12. After Installation: Ubiquitous Computing and High School Science in Three Experienced, High-Technology Schools

    Science.gov (United States)

    Drayton, Brian; Falk, Joni K.; Stroud, Rena; Hobbs, Kathryn; Hammerman, James

    2010-01-01

    There are few studies of the impact of ubiquitous computing on high school science, and the majority of studies of ubiquitous computing report only on the early stages of implementation. The present study presents data on 3 high schools with carefully elaborated ubiquitous computing systems that have gone through at least one "obsolescence cycle"…

  13. NASA Airborne Science Program: NASA Stratospheric Platforms

    Science.gov (United States)

    Curry, Robert E.

    2010-01-01

    The National Aeronautics and Space Administration conducts a wide variety of remote sensing projects using several unique aircraft platforms. These vehicles have been selected and modified to provide capabilities that are particularly important for geophysical research, in particular, routine access to very high altitudes, long range, long endurance, precise trajectory control, and the payload capacity to operate multiple, diverse instruments concurrently. While the NASA program has been in operation for over 30 years, new aircraft and technological advances that will expand the capabilities for airborne observation are continually being assessed and implemented. This presentation will review the current state of NASA's science platforms, recent improvements and new mission concepts, as well as provide a survey of emerging technologies, such as unmanned aerial vehicles for long-duration observations (Global Hawk and Predator). Applications of information technology that allow more efficient use of flight time and the ability to rapidly reconfigure systems for different mission objectives are addressed.

  14. Optical interconnection networks for high-performance computing systems

    International Nuclear Information System (INIS)

    Biberman, Aleksandr; Bergman, Keren

    2012-01-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. (review article)

  15. Computer simulation of processes in the dead–end furnace

    International Nuclear Information System (INIS)

    Zavorin, A S; Khaustov, S A; Zaharushkin, N A

    2014-01-01

    We study turbulent combustion of natural gas in the reverse flame of a fire-tube boiler, simulated with the ANSYS Fluent 12.1.4 engineering simulation software. The aerodynamic structure and volumetric pressure fields of the flame were calculated. The results are presented in graphical form. The effect of the twist parameter on the drag coefficient of the dead-end furnace was estimated. The finite element method was used for simulating the following processes: the combustion of methane in air oxygen, radiant and convective heat transfer, and turbulence. A complete geometric model of the dead-end furnace based on boiler drawings was considered

  16. Unique Education and Workforce Development for NASA Engineers

    Science.gov (United States)

    Forsgren, Roger C.; Miller, Lauren L.

    2010-01-01

    NASA engineers are some of the world's best-educated graduates, responsible for technically complex, highly significant scientific programs. Even though these professionals are highly proficient in traditional analytical competencies, there is a unique opportunity to offer continuing education that further enhances their overall scientific minds. With a goal of maintaining the Agency's passionate, "best in class" engineering workforce, the NASA Academy of Program/Project & Engineering Leadership (APPEL) provides educational resources encouraging foundational learning, professional development, and knowledge sharing. NASA APPEL is currently partnering with the scientific community's most respected subject matter experts to expand its engineering curriculum beyond the analytics and specialized subsystems in the areas of: understanding NASA's overall vision and its fundamental basis, and the Agency initiatives supporting them; sharing NASA's vast reservoir of engineering experience, wisdom, and lessons learned; and innovatively designing hardware for manufacturability, assembly, and servicing. It takes collaboration and innovation to educate an organization that possesses such a rich and important history and a future that is of great global interest. NASA APPEL strives to intellectually nurture the Agency's technical professionals, build its capacity for future performance, and exemplify its core values to better enable NASA to meet its strategic vision and beyond.

  17. Semi-automated categorization of open-ended questions

    Directory of Open Access Journals (Sweden)

    Matthias Schonlau

    2016-08-01

    Text data from open-ended questions in surveys are difficult to analyze and are frequently ignored. Yet open-ended questions are important because they do not constrain respondents’ answer choices. Where open-ended questions are necessary, sometimes multiple human coders hand-code answers into one of several categories. At the same time, computer scientists have made impressive advances in text mining that may allow automation of such coding. Automated algorithms do not achieve an overall accuracy high enough to entirely replace humans. We categorize open-ended questions soliciting narrative responses using text mining for easy-to-categorize answers and humans for the remainder using expected accuracies to guide the choice of the threshold delineating between “easy” and “hard”. Employing multinomial boosting avoids the common practice of converting machine learning “confidence scores” into pseudo-probabilities. This approach is illustrated with examples from open-ended questions related to respondents’ advice to a patient in a hypothetical dilemma, a follow-up probe related to respondents’ perception of disclosure/privacy risk, and from a question on reasons for quitting smoking from a follow-up survey from the Ontario Smoker’s Helpline. Targeting 80% combined accuracy, we found that 54%-80% of the data could be categorized automatically in research surveys.
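    A minimal sketch of the routing idea described above, assuming a hypothetical classifier that returns a category and a confidence score for each answer: answers above a target threshold are coded automatically and the rest are left for human coders. The function names, the toy classifier, and the 0.8 threshold are illustrative assumptions, not taken from the paper.

```python
# Sketch: route open-ended survey answers to automatic or human coding based
# on classifier confidence. Classifier and threshold are illustrative only.

def route_answers(answers, classify, threshold=0.8):
    """classify(text) -> (category, confidence); split answers into
    automatically coded cases and hard cases left for human coders."""
    auto_coded, needs_human = [], []
    for text in answers:
        category, confidence = classify(text)
        if confidence >= threshold:
            auto_coded.append((text, category))
        else:
            needs_human.append(text)
    return auto_coded, needs_human

# Toy classifier standing in for a trained multinomial boosting model.
def toy_classify(text):
    if "quit" in text.lower():
        return "health reasons", 0.9
    return "other", 0.5

auto, manual = route_answers(["I quit for my health", "hard to say"], toy_classify)
print(len(auto), "auto-coded;", len(manual), "left for human coders")
```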

  18. Current and Future Parts Management at NASA

    Science.gov (United States)

    Sampson, Michael J.

    2011-01-01

    This presentation provides a high level view of current and future electronic parts management at NASA. It describes a current perspective of the new human space flight direction that NASA is beginning to take and how that could influence parts management in the future. It provides an overview of current NASA electronic parts policy and how that is implemented at the NASA flight Centers. It also describes some of the technical challenges that lie ahead and suggests approaches for their mitigation. These challenges include: advanced packaging, obsolescence and counterfeits, the global supply chain and Commercial Crew, a new direction by which NASA will utilize commercial launch vehicles to get astronauts to the International Space Station.

  19. End of paper registration forms for new computer users

    CERN Multimedia

    2007-01-01

    As of 3rd December 2007 it will be possible for new users to sign the Computer Centre User Registration Form electronically. As before, new users will still need to go to their computing group administrator, who will make the electronic request for account creation using CRA and give the new user his or her initial password. The difference is that the requested accounts will be created and usable almost immediately. Users will then have 3 days within which they must go to the web page http://cern.ch/cernaccount and click on ‘New User’. They will be required to follow a short computer security awareness training course, read the CERN Computing Rules and then confirm that they accept the rules. If this is not completed within 3 days all their computer accounts will be blocked and they will have to contact the Helpdesk to unblock their accounts and get a second chance to complete the registration. During the introductory phase the existing paper forms will also be accepted ...

  20. Evaluation of strategies for end storage of high-level reactor fuel

    International Nuclear Information System (INIS)

    2001-01-01

    This report evaluates a national strategy for end-storage of used high-level reactor fuel from the research reactors at Kjeller and in Halden. This strategy presupposes that all the important phases in handling the high-level material, including temporary storage and deposition, are covered. The quantity of spent fuel from Norwegian reactors is quite small. In addition to the technological issues, ethical, environmental, safety and economical requirements are emphasized

  1. Strategic project selection based on evidential reasoning approach for high-end equipment manufacturing industry

    Directory of Open Access Journals (Sweden)

    Lu Guangyan

    2017-01-01

    With the rapid development of science and technology, emerging information technologies have significantly changed people's daily lives. In this context, strategic project selection for high-end equipment manufacturing industries faces growing complexity and uncertainty across several interrelated criteria. For example, a group of experts rather than a single expert should be invited to select strategic projects for high-end equipment manufacturing industries, and the experts may find it difficult to express their preferences towards different strategic projects because of their limited cognitive capabilities. To handle these complexities and uncertainties, a criteria framework for strategic project selection is first constructed based on the characteristics of high-end equipment manufacturing industries, and the evidential reasoning (ER) approach is then introduced to help experts express their uncertain preferences and to aggregate these preferences into the selection of an appropriate strategic project. A real case of strategic project selection in a high-speed train manufacturing enterprise is investigated to demonstrate the validity of the ER approach in solving the strategic project selection problem.
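    As a rough illustration of aggregating weighted, uncertain expert assessments, the sketch below combines belief degrees over evaluation grades into a single score per project. It is a deliberately simplified stand-in, not the recursive ER algorithm used in the paper, and all criteria, weights, grades, and numbers are hypothetical.

```python
# Simplified illustration: combine belief degrees over evaluation grades into
# a weighted score per project. (Not the full evidential reasoning algorithm.)

GRADES = {"poor": 0.0, "average": 0.5, "good": 1.0}

def project_score(assessments, criterion_weights):
    """assessments: {criterion: {grade: belief degree}} for one project."""
    score = 0.0
    for criterion, weight in criterion_weights.items():
        beliefs = assessments[criterion]
        expected = sum(GRADES[g] * b for g, b in beliefs.items())
        score += weight * expected
    return score

weights = {"technology readiness": 0.6, "market potential": 0.4}  # hypothetical
train_project = {
    "technology readiness": {"good": 0.7, "average": 0.3},
    "market potential": {"good": 0.5, "average": 0.3, "poor": 0.2},
}
print(round(project_score(train_project, weights), 3))
```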

  2. 49 CFR 231.3 - Drop-end high-side gondola cars.

    Science.gov (United States)

    2010-10-01

    ... 49 Transportation 4 2010-10-01 2010-10-01 false Drop-end high-side gondola cars. 231.3 Section 231... gondola cars. (a) Hand brakes—(1) Number. Same as specified for “Box and other house cars” (see § 231.1(a)(1)). (2) Dimensions. Same as specified for “Box and other house cars” (see § 231.1(a)(2)). (3...

  3. NASA Thesaurus

    Data.gov (United States)

    National Aeronautics and Space Administration — The NASA Thesaurus contains the authorized NASA subject terms used to index and retrieve materials in the NASA Technical Reports Server (NTRS) and the NTRS...

  4. High-Precision Computation: Mathematical Physics and Dynamics

    International Nuclear Information System (INIS)

    Bailey, D.H.; Barrio, R.; Borwein, J.M.

    2010-01-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, studies of the fine structure constant, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, experimental mathematics, evaluation of orthogonal polynomials, numerical integration of ODEs, computation of periodic orbits, studies of the splitting of separatrices, detection of strange nonchaotic attractors, Ising theory, quantum field theory, and discrete dynamical systems. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.

  5. High-Precision Computation: Mathematical Physics and Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, D. H.; Barrio, R.; Borwein, J. M.

    2010-04-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, studies of the fine structure constant, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, experimental mathematics, evaluation of orthogonal polynomials, numerical integration of ODEs, computation of periodic orbits, studies of the splitting of separatrices, detection of strange nonchaotic attractors, Ising theory, quantum field theory, and discrete dynamical systems. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
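    As a small illustration of the high-precision facilities this survey refers to, the sketch below uses the Python mpmath package (chosen here as an example; it is not a package named in the record) to evaluate a Gaussian integral at 50-digit working precision and compare it with its closed form.

```python
# Sketch: arbitrary-precision arithmetic with mpmath, as one example of the
# kind of high-precision software packages the survey discusses.
from mpmath import mp, exp, inf, quad, sqrt, pi

mp.dps = 50                            # 50 significant digits (vs ~16 for IEEE double)

integral = quad(lambda x: exp(-x**2), [-inf, inf])   # Gaussian integral
closed_form = sqrt(pi)

print(integral)
print(closed_form)
print(abs(integral - closed_form))     # agreement far beyond double precision
```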

  6. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  7. Software Engineering Tools for Scientific Models

    Science.gov (United States)

    Abrams, Marc; Saboo, Pallabi; Sonsini, Mike

    2013-01-01

    Software tools were constructed to address issues the NASA Fortran development community faces, and they were tested on real models currently in use at NASA. These proof-of-concept tools address the High-End Computing Program and the Modeling, Analysis, and Prediction Program. Two examples are the NASA Goddard Earth Observing System Model, Version 5 (GEOS-5) atmospheric model in Cell Fortran on the Cell Broadband Engine, and the Goddard Institute for Space Studies (GISS) coupled atmosphere-ocean model called ModelE, written in fixed-format Fortran.

  8. NASA High-Reynolds Number Circulation Control Research - Overview of CFD and Planned Experiments

    Science.gov (United States)

    Milholen, W. E., II; Jones, Greg S.; Cagle, Christopher M.

    2010-01-01

    A new capability to test active flow control concepts and propulsion simulations at high Reynolds numbers in the National Transonic Facility at the NASA Langley Research Center is being developed. This technique is focused on the use of semi-span models due to their increased model size and relative ease of routing high-pressure air to the model. A new dual flow-path high-pressure air delivery station has been designed, along with a new high performance transonic semi-span wing model. The modular wind tunnel model is designed for testing circulation control concepts at both transonic cruise and low-speed high-lift conditions. The ability of the model to test other active flow control techniques will be highlighted. In addition, a new higher capacity semi-span force and moment wind tunnel balance has been completed and calibrated to enable testing at transonic conditions.

  9. Human Centered Design and Development for NASA's MerBoard

    Science.gov (United States)

    Trimble, Jay

    2003-01-01

    This viewgraph presentation provides an overview of the design and development process for NASA's MerBoard. These devices are large interactive display screens whose content can also be shown on a user's own computer, allowing scientists in many locations to interpret and evaluate mission data in real time. These tools are scheduled to be used during the 2003 Mars Exploration Rover (MER) expeditions. Topics covered include: mission overview, Mer Human Centered Computers, FIDO 2001 observations, and MerBoard prototypes.

  10. Cloud@Home: A New Enhanced Computing Paradigm

    Science.gov (United States)

    Distefano, Salvatore; Cunsolo, Vincenzo D.; Puliafito, Antonio; Scarpa, Marco

    Cloud computing is a distributed computing paradigm that mixes aspects of Grid computing ("… hardware and software infrastructure that provides dependable, consistent, pervasive, and inexpensive access to high-end computational capabilities" (Foster, 2002)), Internet computing ("… a computing platform geographically distributed across the Internet" (Milenkovic et al., 2003)), Utility computing ("a collection of technologies and business practices that enables computing to be delivered seamlessly and reliably across multiple computers, ... available as needed and billed according to usage, much like water and electricity are today" (Ross & Westerman, 2004)), Autonomic computing ("computing systems that can manage themselves given high-level objectives from administrators" (Kephart & Chess, 2003)), Edge computing ("… provides a generic template facility for any type of application to spread its execution across a dedicated grid, balancing the load …" (Davis, Parikh, & Weihl, 2004)) and Green computing (a new frontier of ethical computing, starting from the assumption that in the near future energy costs will be related to environmental pollution).

  11. High-energy high-efficiency Nd:YLF laser end-pump by 808 nm diode

    Science.gov (United States)

    Ma, Qinglei; Mo, Haiding; Zhao, Jay

    2018-04-01

    A model is developed to calculate the optimal pump position for end-pump configuration. The 808 nm wing pump is employed to spread the absorption inside the crystal. By the optimal laser cavity design, a high-energy high-efficiency Nd:YLF laser operating at 1053 nm is presented. In cw operation, a 13.6 W power is obtained with a slope efficiency of 51% with respect to 30 W incident pump power. The beam quality is near diffraction limited with M2 ∼ 1.02. In Q-switch operation, a pulse energy of 5 mJ is achieved with a peak power of 125 kW at 1 kHz repetition rate.

  12. Science panel to study mega-computers to assess potential energy contributions

    CERN Multimedia

    Jones, D

    2003-01-01

    "Energy Department advisers plan to examine high-end computing in the coming year and assess how computing power could be used to further DOE's basic research agenda on combustion, fusion and other topics" (1 page).

  13. Distributed Large Data-Object Environments: End-to-End Performance Analysis of High Speed Distributed Storage Systems in Wide Area ATM Networks

    Science.gov (United States)

    Johnston, William; Tierney, Brian; Lee, Jason; Hoo, Gary; Thompson, Mary

    1996-01-01

    We have developed and deployed a distributed-parallel storage system (DPSS) in several high speed asynchronous transfer mode (ATM) wide area networks (WAN) testbeds to support several different types of data-intensive applications. Architecturally, the DPSS is a network striped disk array, but is fairly unique in that its implementation allows applications complete freedom to determine optimal data layout, replication and/or coding redundancy strategy, security policy, and dynamic reconfiguration. In conjunction with the DPSS, we have developed a 'top-to-bottom, end-to-end' performance monitoring and analysis methodology that has allowed us to characterize all aspects of the DPSS operating in high speed ATM networks. In particular, we have run a variety of performance monitoring experiments involving the DPSS in the MAGIC testbed, which is a large scale, high speed, ATM network and we describe our experience using the monitoring methodology to identify and correct problems that limit the performance of high speed distributed applications. Finally, the DPSS is part of an overall architecture for using high speed, WAN's for enabling the routine, location independent use of large data-objects. Since this is part of the motivation for a distributed storage system, we describe this architecture.
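    The layout idea behind a network-striped disk array can be sketched as below: blocks of a large data object are assigned round-robin to storage servers. The server names, block size, and round-robin policy are illustrative assumptions only; the DPSS itself leaves layout, replication, and redundancy policy to the application.

```python
# Sketch: round-robin striping of a large data object across storage servers.
# Server names and block size are illustrative placeholders.

BLOCK_SIZE = 64 * 1024                       # bytes per stripe block
SERVERS = ["dpss-a", "dpss-b", "dpss-c", "dpss-d"]

def stripe_layout(object_size):
    """Map each block of the object to (block_index, server, byte_offset)."""
    n_blocks = -(-object_size // BLOCK_SIZE)  # ceiling division
    return [(b, SERVERS[b % len(SERVERS)], b * BLOCK_SIZE) for b in range(n_blocks)]

for entry in stripe_layout(300 * 1024)[:6]:  # first few blocks of a 300 KB object
    print(entry)
```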

  14. Automatic Coregistration and orthorectification (ACRO) and subsequent mosaicing of NASA high-resolution imagery over the Mars MC11 quadrangle, using HRSC as a baseline

    Science.gov (United States)

    Sidiropoulos, Panagiotis; Muller, Jan-Peter; Watson, Gillian; Michael, Gregory; Walter, Sebastian

    2018-02-01

    This work presents the coregistered, orthorectified and mosaiced high-resolution products of the MC11 quadrangle of Mars, which have been processed using novel, fully automatic techniques. We discuss the development of a pipeline that achieves fully automatic and parameter-independent geometric alignment of high-resolution planetary images, starting from raw input images in NASA PDS format and following all required steps to produce a coregistered GeoTIFF image, a corresponding footprint and useful metadata. Additionally, we describe the development of a radiometric calibration technique that post-processes coregistered images to make them radiometrically consistent. Finally, we present a batch-mode application of the developed techniques over the MC11 quadrangle to validate their potential, as well as to generate end products, which are released to the planetary science community, thus assisting in the analysis of static and dynamic features on Mars. This case study is a step towards the full automation of signal processing tasks that are essential to increase the usability of planetary data but currently require extensive use of human resources.

  15. The new generation of PowerPC VMEbus front end computers for the CERN SPS and LEP accelerators system

    OpenAIRE

    Charrue, P; Bland, A; Ghinet, F; Ribeiro, P

    1995-01-01

    The CERN SPS and LEP PowerPC project is aimed at introducing a new generation of PowerPC VMEbus processor modules running the LynxOS real-time operating system. This new generation of front end computers using the state-of-the-art microprocessor technology will first replace the obsolete XENIX PC based systems (about 140 installations) successfully used since 1988 to control the LEP accelerator. The major issues addressed in the scope of this large scale project are the technical specificatio...

  16. Crew and Thermal Systems Strategic Communications Initiatives in Support of NASA's Strategic Goals

    Science.gov (United States)

    Paul, Heather L.; Lamberth, Erika Guillory; Jennings, Mallory A.

    2012-01-01

    NASA has defined strategic goals to invest in next-generation technologies and innovations, inspire students to become the future leaders of space exploration, and expand partnerships with industry and academia around the world. The Crew and Thermal Systems Division (CTSD) at the NASA Johnson Space Center actively supports these NASA initiatives. In July 2011, CTSD created a strategic communications team to communicate CTSD capabilities, technologies, and personnel to external technical audiences for business development and collaborative initiatives, and to students, educators, and the general public for education and public outreach efforts. This paper summarizes the CTSD Strategic Communications efforts and metrics through the first half of fiscal year 2012 with projections for end of fiscal year data.

  17. Operating Systems for Low-End Devices in the Internet of Things: a Survey

    OpenAIRE

    Hahm , Oliver; Baccelli , Emmanuel; Petersen , Hauke; Tsiftes , Nicolas

    2016-01-01

    The Internet of Things (IoT) is projected to soon interconnect tens of billions of new devices, in large part also connected to the Internet. IoT devices include both high-end devices which can use traditional go-to operating systems (OS) such as Linux, and low-end devices which cannot, due to stringent resource constraints, e.g. very limited memory, computational power, and power supply. However, large-scale IoT software development, deployment, and maintenance requir...

  18. Influence of suture technique and suture material selection on the mechanics of end-to-end and end-to-side anastomoses.

    Science.gov (United States)

    Baumgartner, N; Dobrin, P B; Morasch, M; Dong, Q S; Mrkvicka, R

    1996-05-01

    Experiments were performed in dogs to evaluate the mechanics of 26 end-to-end and 42 end-to-side artery-vein graft anastomoses constructed with continuous polypropylene sutures (Surgilene; Davis & Geck, Division of American Cyanamid Co., Danbury, Conn.), continuous polybutester sutures (Novafil; Davis & Geck), and interrupted stitches with either suture material. After construction, the grafts and adjoining arteries were excised, mounted in vitro at in situ length, filled with a dilute barium sulfate suspension, and pressurized in 25 mm Hg steps up to 200 mm Hg. Radiographs were obtained at each pressure. The computed cross-sectional areas of the anastomoses were compared with those of the native arteries at corresponding pressures. Results showed that for the end-to-end anastomoses at 100 mm Hg the cross-sectional areas of the continuous Surgilene anastomoses were 70% of the native artery cross-sectional areas, the cross-sectional areas of the continuous Novafil anastomoses were 90% of the native artery cross-sectional areas, and the cross-sectional areas of the interrupted anastomoses were 107% of the native artery cross-sectional areas. The end-to-side anastomoses demonstrated no differences in cross-sectional areas or compliance for the three suture techniques. This suggests that, unlike with end-to-end anastomoses, when constructing an end-to-side anastomosis in patients any of the three suture techniques may be acceptable.

  19. National Aeronautics and Space Administration (NASA) Earth Science Research for Energy Management. Part 1; Overview of Energy Issues and an Assessment of the Potential for Application of NASA Earth Science Research

    Science.gov (United States)

    Zell, E.; Engel-Cox, J.

    2005-01-01

    Effective management of energy resources is critical for the U.S. economy, the environment, and, more broadly, for sustainable development and alleviating poverty worldwide. The scope of energy management is broad, ranging from energy production and end use to emissions monitoring and mitigation and long-term planning. Given the extensive NASA Earth science research on energy and related weather and climate-related parameters, and rapidly advancing energy technologies and applications, there is great potential for increased application of NASA Earth science research to selected energy management issues and decision support tools. The NASA Energy Management Program Element is already involved in a number of projects applying NASA Earth science research to energy management issues, with a focus on solar and wind renewable energy and developing interests in energy modeling, short-term load forecasting, energy efficient building design, and biomass production.

  20. Parallel computing for event reconstruction in high-energy physics

    International Nuclear Information System (INIS)

    Wolbers, S.

    1993-01-01

    Parallel computing has been recognized as a solution to large computing problems. In High Energy Physics, offline event reconstruction of detector data is a very large computing problem that has been solved with parallel computing techniques. A review is given of the parallel programming package CPS (Cooperative Processes Software), developed and used at Fermilab for offline reconstruction of terabytes of data requiring the delivery of hundreds of VAX-years per experiment. The Fermilab UNIX farms, consisting of 180 Silicon Graphics workstations and 144 IBM RS6000 workstations, are used to provide the computing power for the experiments. Fermilab has had a long history of providing production parallel computing, starting with the ACP (Advanced Computer Project) farms in 1986. The Fermilab UNIX farms have been in production for over 2 years with 24 hour/day service to experimental user groups. Additional tools for managing, controlling, and monitoring these large systems will be described. Possible future directions for parallel computing in High Energy Physics will be given.
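    The farm-style, event-parallel processing described above can be sketched with a simple worker pool: each event is reconstructed independently, so events are just distributed across processes. The reconstruct() stand-in and the event layout are hypothetical placeholders; this is not the CPS package itself.

```python
# Sketch: embarrassingly parallel event reconstruction across worker processes,
# in the spirit of farm-style offline processing. reconstruct() is a stand-in.
from multiprocessing import Pool

def reconstruct(event):
    """Stand-in for per-event reconstruction of raw detector data."""
    raw_hits = event["hits"]
    return {"event_id": event["id"], "n_tracks": len(raw_hits) // 3}

if __name__ == "__main__":
    events = [{"id": i, "hits": list(range(i % 12))} for i in range(1000)]
    with Pool(processes=8) as pool:              # one worker per core
        results = pool.map(reconstruct, events, chunksize=50)
    print(len(results), "events reconstructed")
```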

  1. NASA ERA Integrated CFD for Wind Tunnel Testing of Hybrid Wing-Body Configuration

    Science.gov (United States)

    Garcia, Joseph A.; Melton, John E.; Schuh, Michael; James, Kevin D.; Long, Kurt R.; Vicroy, Dan D.; Deere, Karen A.; Luckring, James M.; Carter, Melissa B.; Flamm, Jeffrey D.; hide

    2016-01-01

    NASA's Environmentally Responsible Aviation (ERA) Project explores enabling technologies to reduce aviation's impact on the environment. One research challenge area for the project has been to study advanced airframe and engine integration concepts to reduce community noise and fuel burn. In order to achieve this, complex wind tunnel experiments at both the NASA Langley Research Center's (LaRC) 14x22 and the Ames Research Center's 40x80 low-speed wind tunnel facilities were conducted on a Boeing Hybrid Wing Body (HWB) configuration. These wind tunnel tests entailed various entries to evaluate the propulsion airframe interference effects including aerodynamic performance and aeroacoustics. In order to assist these tests in producing high quality data with minimal hardware interference, extensive Computational Fluid Dynamics (CFD) simulations were performed for everything from sting design and placement for both the wing body and powered ejector nacelle systems to the placement of aeroacoustic arrays to minimize their impact on the vehicle's aerodynamics. This paper will provide a high level summary of the CFD simulations that NASA performed in support of the model integration hardware design as well as some simulation guideline development based on post-test aerodynamic data. In addition, the paper includes details on how multiple CFD codes (OVERFLOW, STAR-CCM+, USM3D, and FUN3D) were efficiently used to provide timely insight into the wind tunnel experimental setup and execution.

  2. The Living Universe: NASA and the Development of Astrobiology

    Science.gov (United States)

    Dick, Steven J.; Strick, James E.

    2004-01-01

    In the opening weeks of 1998 a news article in the British journal Nature reported that NASA was about to enter biology in a big way. A "virtual" Astrobiology Institute was gearing up for business, and NASA administrator Dan Goldin told his external advisory council that he would like to see spending on the new institute eventually reach $100 million per year. "You just wait for the screaming from the physical scientists (when that happens)," Goldin was quoted as saying. Nevertheless, by the time of the second Astrobiology Science Conference in 2002, attended by seven hundred scientists from many disciplines, NASA spending on astrobiology had reached nearly half that amount and was growing at a steady pace. Under NASA leadership numerous institutions around the world applied the latest scientific techniques in the service of astrobiology's ambitious goal: the study of what NASA's 1996 Strategic Plan termed the "living universe." This goal embraced nothing less than an understanding of the origin, history, and distribution of life in the universe, including Earth. Astrobiology, conceived as a broad interdisciplinary research program, held the prospect of being the science for the twenty-first century which would unlock the secrets to some of the great questions of humanity. It is no surprise that these age-old questions should continue into the twenty-first century. But that the effort should be spearheaded by NASA was not at all obvious to those - inside and outside the agency - who thought NASA's mission was human spaceflight, rather than science, especially biological science. NASA had, in fact, been involved for four decades in "exobiology," a field that embraced many of the same questions but which had stagnated after the 1976 Viking missions to Mars. In this volume we tell the colorful story of the rise of the discipline of exobiology, how and why it morphed into astrobiology at the end of the twentieth century, and why NASA was the engine for both the

  3. Computer architecture evaluation for structural dynamics computations: Project summary

    Science.gov (United States)

    Standley, Hilda M.

    1989-01-01

    The intent of the proposed effort is the examination of the impact of the elements of parallel architectures on the performance realized in a parallel computation. To this end, three major projects are developed: a language for the expression of high level parallelism, a statistical technique for the synthesis of multicomputer interconnection networks based upon performance prediction, and a queueing model for the analysis of shared memory hierarchies.
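    For the queueing-model component mentioned above, a textbook M/M/1 queue gives a feel for how contention at a shared memory module might be analyzed; the arrival and service rates below are illustrative numbers, not values from the project.

```python
# Sketch: a textbook M/M/1 queue as a stand-in for modeling contention at a
# shared memory module. Rates are illustrative placeholders.

def mm1_metrics(arrival_rate, service_rate):
    """Return utilization, mean requests in system, and mean response time."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate >= service rate")
    rho = arrival_rate / service_rate              # utilization
    mean_in_system = rho / (1.0 - rho)             # mean number of requests present
    mean_response = 1.0 / (service_rate - arrival_rate)
    return rho, mean_in_system, mean_response

# Requests from 4 processors sharing one memory module (illustrative rates).
rho, n, t = mm1_metrics(arrival_rate=4 * 0.15, service_rate=1.0)
print(f"utilization={rho:.2f}  mean outstanding requests={n:.2f}  mean latency={t:.2f}")
```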

  4. Information Power Grid: Distributed High-Performance Computing and Large-Scale Data Management for Science and Engineering

    Science.gov (United States)

    Johnston, William E.; Gannon, Dennis; Nitzberg, Bill

    2000-01-01

    We use the term "Grid" to refer to distributed, high performance computing and data handling infrastructure that incorporates geographically and organizationally dispersed, heterogeneous resources that are persistent and supported. This infrastructure includes: (1) Tools for constructing collaborative, application oriented Problem Solving Environments / Frameworks (the primary user interfaces for Grids); (2) Programming environments, tools, and services providing various approaches for building applications that use aggregated computing and storage resources, and federated data sources; (3) Comprehensive and consistent set of location independent tools and services for accessing and managing dynamic collections of widely distributed resources: heterogeneous computing systems, storage systems, real-time data sources and instruments, human collaborators, and communications systems; (4) Operational infrastructure including management tools for distributed systems and distributed resources, user services, accounting and auditing, strong and location independent user authentication and authorization, and overall system security services The vision for NASA's Information Power Grid - a computing and data Grid - is that it will provide significant new capabilities to scientists and engineers by facilitating routine construction of information based problem solving environments / frameworks. Such Grids will knit together widely distributed computing, data, instrument, and human resources into just-in-time systems that can address complex and large-scale computing and data analysis problems. Examples of these problems include: (1) Coupled, multidisciplinary simulations too large for single systems (e.g., multi-component NPSS turbomachine simulation); (2) Use of widely distributed, federated data archives (e.g., simultaneous access to metrological, topological, aircraft performance, and flight path scheduling databases supporting a National Air Space Simulation systems}; (3

  5. n x 10 Gbps Offload NIC for NASA, NLR, Grid Computing, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — This Phase 1 proposal addresses the 2008 NASA SBIR Research Topic S6.04 Data Management - Storage, Mining and Visualization (GSFC). The subtopic we address is...

  6. An Analysis of Cloud Computing with Amazon Web Services for the Atmospheric Science Data Center

    Science.gov (United States)

    Gleason, J. L.; Little, M. M.

    2013-12-01

    NASA science and engineering efforts rely heavily on compute and data handling systems. The nature of NASA science data is such that it is not restricted to NASA users; instead, it is widely shared across a globally distributed user community including scientists, educators, policy decision makers, and the public. Therefore, NASA science computing is a candidate use case for cloud computing, where compute resources are outsourced to an external vendor. Amazon Web Services (AWS) is a commercial cloud computing service developed to use excess computing capacity at Amazon, and potentially provides an alternative to costly and potentially underutilized dedicated acquisitions whenever NASA scientists or engineers require additional data processing. AWS desires to provide a simplified avenue for NASA scientists and researchers to share large, complex data sets with external partners and the public. AWS has been extensively used by JPL for a wide range of computing needs and was previously tested on a NASA Agency basis during the Nebula testing program. Its ability to support the Langley Science Directorate needs to be evaluated by integrating it with real world operational needs across NASA and the associated maturity that would come with that. The strengths and weaknesses of this architecture and its ability to support general science and engineering applications have been demonstrated during the previous testing. The Langley Office of the Chief Information Officer, in partnership with the Atmospheric Sciences Data Center (ASDC), has established a pilot business interface to utilize AWS cloud computing resources on an organization- and project-level, pay-per-use model. This poster discusses an effort to evaluate the feasibility of the pilot business interface from a project-level perspective by specifically using a processing scenario involving the Clouds and Earth's Radiant Energy System (CERES) project.
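    A back-of-the-envelope comparison of the pay-per-use model against a dedicated acquisition, of the sort such a feasibility evaluation might start from, can be sketched as below. All instance counts, hours, and prices are hypothetical placeholders, not CERES or ASDC figures.

```python
# Sketch: comparing a pay-per-use cloud burst against an always-on dedicated
# system for a periodic processing campaign. All numbers are hypothetical.

def cloud_cost(n_instances, hours_per_campaign, campaigns_per_year, hourly_rate):
    return n_instances * hours_per_campaign * campaigns_per_year * hourly_rate

def dedicated_cost(annual_amortized_hardware, annual_ops):
    return annual_amortized_hardware + annual_ops

cloud = cloud_cost(n_instances=100, hours_per_campaign=48,
                   campaigns_per_year=4, hourly_rate=0.50)
onprem = dedicated_cost(annual_amortized_hardware=60_000, annual_ops=40_000)
print(f"cloud burst: ${cloud:,.0f}/yr   dedicated: ${onprem:,.0f}/yr")
```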

  7. User Interface Technology Transfer to NASA's Virtual Wind Tunnel System

    Science.gov (United States)

    vanDam, Andries

    1998-01-01

    Funded by NASA grants for four years, the Brown Computer Graphics Group has developed novel 3D user interfaces for desktop and immersive scientific visualization applications. This past grant period supported the design and development of a software library, the 3D Widget Library, which supports the construction and run-time management of 3D widgets. The 3D Widget Library is a mechanism for transferring user interface technology from the Brown Graphics Group to the Virtual Wind Tunnel system at NASA Ames as well as the public domain.

  8. A computer control system for the PNC high power cw electron linac. Concept and hardware

    Energy Technology Data Exchange (ETDEWEB)

    Emoto, T.; Hirano, K.; Takei, Hayanori; Nomura, Masahiro; Tani, S. [Power Reactor and Nuclear Fuel Development Corp., Oarai, Ibaraki (Japan). Oarai Engineering Center; Kato, Y.; Ishikawa, Y.

    1998-06-01

    Design and construction of a high power cw (Continuous Wave) electron linac for studying the feasibility of nuclear waste transmutation was started in 1989 at PNC. The PNC accelerator (10 MeV, 20 mA average current, 4 ms pulse width, 50 Hz repetition) is a dedicated machine for developing the high current acceleration technology needed in the future. The computer control system is responsible for accelerator control and for supporting experiments during high power operation. The features of the system are simultaneous measurement of the accelerator status and modularity of the software and hardware, so that the system can easily be modified or expanded. A high speed network (SCRAMNet, approximately 15 MB/s), Ethernet, and front end processors (Digital Signal Processors) were employed for high speed data taking and control. The system was designed around standard modules and a software-implemented man-machine interface. Thanks to the graphical user interface and object-oriented programming, the software development environment allows effortless programming and maintenance. (author)

  9. Climate Modeling Computing Needs Assessment

    Science.gov (United States)

    Petraska, K. E.; McCabe, J. D.

    2011-12-01

    This paper discusses early findings of an assessment of computing needs for NASA science, engineering and flight communities. The purpose of this assessment is to document a comprehensive set of computing needs that will allow us to better evaluate whether our computing assets are adequately structured to meet evolving demand. The early results are interesting, already pointing out improvements we can make today to get more out of the computing capacity we have, as well as potential game changing innovations for the future in how we apply information technology to science computing. Our objective is to learn how to leverage our resources in the best way possible to do more science for less money. Our approach in this assessment is threefold: Development of use case studies for science workflows; Creating a taxonomy and structure for describing science computing requirements; and characterizing agency computing, analysis, and visualization resources. As projects evolve, science data sets increase in a number of ways: in size, scope, timelines, complexity, and fidelity. Generating, processing, moving, and analyzing these data sets places distinct and discernable requirements on underlying computing, analysis, storage, and visualization systems. The initial focus group for this assessment is the Earth Science modeling community within NASA's Science Mission Directorate (SMD). As the assessment evolves, this focus will expand to other science communities across the agency. We will discuss our use cases, our framework for requirements and our characterizations, as well as our interview process, what we learned and how we plan to improve our materials after using them in the first round of interviews in the Earth Science Modeling community. We will describe our plans for how to expand this assessment, first into the Earth Science data analysis and remote sensing communities, and then throughout the full community of science, engineering and flight at NASA.

  10. EGI-EUDAT integration activity - Pair data and high-throughput computing resources together

    Science.gov (United States)

    Scardaci, Diego; Viljoen, Matthew; Vitlacil, Dejan; Fiameni, Giuseppe; Chen, Yin; sipos, Gergely; Ferrari, Tiziana

    2016-04-01

    EGI (www.egi.eu) is a publicly funded e-infrastructure put together to give scientists access to more than 530,000 logical CPUs, 200 PB of disk capacity and 300 PB of tape storage to drive research and innovation in Europe. The infrastructure provides both high throughput computing and cloud compute/storage capabilities. Resources are provided by about 350 resource centres which are distributed across 56 countries in Europe, the Asia-Pacific region, Canada and Latin America. EUDAT (www.eudat.eu) is a collaborative Pan-European infrastructure providing research data services, training and consultancy for researchers, research communities, research infrastructures and data centres. EUDAT's vision is to enable European researchers and practitioners from any research discipline to preserve, find, access, and process data in a trusted environment, as part of a Collaborative Data Infrastructure (CDI) conceived as a network of collaborating, cooperating centres, combining the richness of numerous community-specific data repositories with the permanence and persistence of some of Europe's largest scientific data centres. EGI and EUDAT, in the context of their flagship projects, EGI-Engage and EUDAT2020, started in March 2015 a collaboration to harmonise the two infrastructures, including technical interoperability, authentication, authorisation and identity management, policy and operations. The main objective of this work is to provide end-users with a seamless access to an integrated infrastructure offering both EGI and EUDAT services and, then, pairing data and high-throughput computing resources together. To define the roadmap of this collaboration, EGI and EUDAT selected a set of relevant user communities, already collaborating with both infrastructures, which could bring requirements and help to assign the right priorities to each of them. In this way, from the beginning, this activity has been really driven by the end users. The identified user communities are

  11. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    Science.gov (United States)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The development of the Space Launch System (SLS) launch vehicle requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The characteristics of these systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large complex systems engineering challenge being addressed in part by focusing on the specific subsystems handling of off-nominal mission and fault tolerance. Using traditional model based system and software engineering design principles from the Unified Modeling Language (UML), the Mission and Fault Management (M&FM) algorithms are crafted and vetted in specialized Integrated Development Teams composed of multiple development disciplines. NASA also has formed an M&FM team for addressing fault management early in the development lifecycle. This team has developed a dedicated Vehicle Management End-to-End Testbed (VMET) that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. The flexibility of VMET enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the algorithms utilizing actual subsystem models. The intent is to validate the algorithms and substantiate them with performance baselines for each of the vehicle subsystems in an independent platform exterior to flight software test processes. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test processes. Risk reduction is addressed by working with other organizations such as S

  12. Packaging a successful NASA mission to reach a large audience within a small budget. Earth's Dynamic Space: Solar-Terrestrial Physics & NASA's Polar Mission

    Science.gov (United States)

    Fox, N. J.; Goldberg, R.; Barnes, R. J.; Sigwarth, J. B.; Beisser, K. B.; Moore, T. E.; Hoffman, R. A.; Russell, C. T.; Scudder, J.; Spann, J. F.; Newell, P. T.; Hobson, L. J.; Gribben, S. P.; Obrien, J. E.; Menietti, J. D.; Germany, G. G.; Mobilia, J.; Schulz, M.

    2004-12-01

    To showcase the on-going and wide-ranging scope of the Polar science discoveries, the Polar science team has created a one-stop shop for a thorough introduction to geospace physics, in the form of a DVD with supporting website. The DVD, Earth's Dynamic Space: Solar-Terrestrial Physics & NASA's Polar Mission, can be viewed as an end-to-end product or split into individual segments and tailored to lesson plans. Capitalizing on the Polar mission and its amazing science return, the Polar team created an exciting multi-use DVD intended for audiences ranging from a traditional classroom and after school clubs, to museums and science centers. The DVD tackles subjects such as the aurora, the magnetosphere and space weather, whilst highlighting the science discoveries of the Polar mission. This platform introduces the learner to key team members as well as the science principles. Dramatic visualizations are used to illustrate the complex principles that describe Earth’s dynamic space. In order to produce such a wide-ranging product on a shoe-string budget, the team pored over existing NASA resources to package them into the Polar story, and visualizations were created using Polar data to complement the NASA stock footage. Scientists donated their time to create and review scripts in order to make this a real team effort, working closely with the award winning audio-visual group at JHU/Applied Physics Laboratory. The team was excited to be invited to join NASA’s Sun-Earth Day 2005 E/PO program, and the DVD will be distributed as part of the supporting educational packages.

  13. Embedded High Performance Scalable Computing Systems

    National Research Council Canada - National Science Library

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, A Lockheed Martin Company and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  14. NASA Applied Sciences' DEVELOP National Program: Training the Next Generation of Remote Sensing Scientists

    Science.gov (United States)

    Childs, Lauren; Brozen, Madeline; Hillyer, Nelson

    2010-01-01

    Since its inception over a decade ago, the DEVELOP National Program has provided students with experience in utilizing and integrating satellite remote sensing data into real-world applications. In 1998, DEVELOP began with three students and has evolved into a nationwide internship program with over 200 students participating each year. DEVELOP is a NASA Applied Sciences training and development program extending NASA Earth science research and technology to society. Part of the NASA Science Mission Directorate's Earth Science Division, the Applied Sciences Program focuses on bridging the gap between NASA technology and the public by conducting projects that innovatively use NASA Earth science resources to research environmental issues. Project outcomes focus on assisting communities to better understand environmental change over time. This is accomplished through research with global, national, and regional partners to identify the widest array of practical uses of NASA data. DEVELOP students conduct research in areas that examine how NASA science can better serve society. Projects focus on practical applications of NASA's Earth science research results. Each project is designed to address at least one of the Applied Sciences focus areas, use NASA's Earth observation sources and meet partners' needs. DEVELOP research teams partner with end-users and organizations who use project results for policy analysis and decision support, thereby extending the benefits of NASA science and technology to the public.

  15. High-performance computing in accelerating structure design and analysis

    International Nuclear Information System (INIS)

    Li Zenghai; Folwell, Nathan; Ge Lixin; Guetz, Adam; Ivanov, Valentin; Kowalski, Marc; Lee, Lie-Quan; Ng, Cho-Kuen; Schussman, Greg; Stingelin, Lukas; Uplenchwar, Ravindra; Wolf, Michael; Xiao, Liling; Ko, Kwok

    2006-01-01

    Future high-energy accelerators such as the Next Linear Collider (NLC) will accelerate multi-bunch beams of high current and low emittance to obtain high luminosity, which put stringent requirements on the accelerating structures for efficiency and beam stability. While numerical modeling has been quite standard in accelerator R and D, designing the NLC accelerating structure required a new simulation capability because of the geometric complexity and level of accuracy involved. Under the US DOE Advanced Computing initiatives (first the Grand Challenge and now SciDAC), SLAC has developed a suite of electromagnetic codes based on unstructured grids and utilizing high-performance computing to provide an advanced tool for modeling structures at accuracies and scales previously not possible. This paper will discuss the code development and computational science research (e.g. domain decomposition, scalable eigensolvers, adaptive mesh refinement) that have enabled the large-scale simulations needed for meeting the computational challenges posed by the NLC as well as projects such as the PEP-II and RIA. Numerical results will be presented to show how high-performance computing has made a qualitative improvement in accelerator structure modeling for these accelerators, either at the component level (single cell optimization), or on the scale of an entire structure (beam heating and long-range wakefields)
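    As a toy stand-in for the scalable eigensolver side of this modeling work, the sketch below finds the lowest eigenpairs of a small sparse operator with SciPy's Lanczos-type solver; the 1-D Laplacian is only a placeholder for the unstructured-grid finite-element matrices used for real accelerating cavities.

```python
# Toy stand-in for large-scale eigenmode computation: lowest eigenvalues of a
# sparse 1-D Laplacian. Real cavity problems use unstructured finite-element
# meshes and parallel eigensolvers at much larger scale.
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import eigsh

n = 500
laplacian = diags([-1.0, 2.0, -1.0], offsets=[-1, 0, 1], shape=(n, n), format="csr")

# Smallest eigenvalues approximate the lowest "modes" of this toy operator.
eigenvalues, _ = eigsh(laplacian, k=4, which="SM")
print(np.sort(eigenvalues))
```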

  16. A Comparison of High- and Low-Distress Marriages that End in Divorce

    Science.gov (United States)

    Amato, Paul R.; Hohmann-Marriott, Bryndl

    2007-01-01

    We used data from Waves 1 and 2 of the National Survey of Families and Households to study high- and low-distress marriages that end in divorce. A cluster analysis of 509 couples who divorced between waves revealed that about half were in high-distress relationships and the rest in low-distress relationships. These 2 groups were not artifacts of…

  17. Collaborative Aerospace Research and Fellowship Program at NASA Glenn Research Center

    Science.gov (United States)

    Heyward, Ann O.; Kankam, Mark D.

    2004-01-01

    During the summer of 2004, a 10-week activity for university faculty entitled the NASA-OAI Collaborative Aerospace Research and Fellowship Program (CFP) was conducted at the NASA Glenn Research Center in collaboration with the Ohio Aerospace Institute (OAI). This is a companion program to the highly successful NASA Faculty Fellowship Program and its predecessor, the NASA-ASEE Summer Faculty Fellowship Program that operated for 38 years at Glenn. The objectives of CFP parallel those of its companion, viz., (1) to further the professional knowledge of qualified engineering and science faculty, (2) to stimulate an exchange of ideas between teaching participants and employees of NASA, (3) to enrich and refresh the research and teaching activities of participants' institutions, and (4) to contribute to the research objectives of Glenn. However, CFP, unlike the NASA program, permits faculty to be in residence for more than two summers and does not limit participation to United States citizens. Selected fellows spend 10 weeks at Glenn working on research problems in collaboration with NASA colleagues and participating in related activities of the NASA-ASEE program. This year's program began officially on June 1, 2004 and continued through August 7, 2004. Several fellows had program dates that differed from the official dates because university schedules vary and because some of the summer research projects warranted a time extension beyond the 10 weeks for satisfactory completion of the work. The stipend paid to the fellows was $1200 per week and a relocation allowance of $1000 was paid to those living outside a 50-mile radius of the Center. In post-program surveys from this and previous years, the faculty cited numerous instances where participation in the program has led to new courses, new research projects, new laboratory experiments, and grants from NASA to continue the work initiated during the summer. Many of the fellows mentioned amplifying material, both in

  18. NASA/DOD Aerospace Knowledge Diffusion Research Project. Paper 6: Aerospace knowledge diffusion in the academic community: A report of phase 3 activities of the NASA/DOD Aerospace Knowledge Diffusion Research Project

    Science.gov (United States)

    Pinelli, Thomas E.; Kennedy, John M.

    1990-01-01

    Descriptive and analytical data regarding the flow of aerospace-based scientific and technical information (STI) in the academic community are presented. An overview is provided of the Federal Aerospace Knowledge Diffusion Research Project, illustrating a five-year program on aerospace knowledge diffusion. Preliminary results are presented of the project's research concerning the information-seeking habits, practices, and attitudes of U.S. aerospace engineering and science students and faculty. The type and amount of education and training in the use of information sources are examined. The use and importance ascribed to various information products by U.S. aerospace faculty and students including computer and other information technology is assessed. An evaluation of NASA technical reports is presented and it is concluded that NASA technical reports are rated high in terms of quality and comprehensiveness, citing Engineering Index and IAA as the most frequently used materials by faculty and students.

  19. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Barker, Ashley D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bland, Arthur S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Gary, Jeff D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Hack, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; McNally, Stephen T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Rogers, James H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Smith, Brian E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Straatsma, T. P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Sukumar, Sreenivas Rangan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Thach, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Tichenor, Suzy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Wells, Jack C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility

    2016-03-01

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution Cybershake map for Southern

  20. The End-of-Life Phase of High-Grade Glioma Patients: Dying With Dignity?

    NARCIS (Netherlands)

    Sizoo, E.M.; Taphoorn, M.J.B.; Uitdehaag, B.M.J.; Heimans, J.J.; Deliens, L.; Reijneveld, J.C.; Pasman, H.R.W.

    2013-01-01

    Background. In the end-of-life (EOL) phase, high-grade glioma (HGG) patients have a high symptom burden and often lose independence because of physical and cognitive dysfunction. This might affect the patient's personal dignity. We aimed to (a) assess the proportion of HGG patients dying with

  1. Preliminary data from lithium hydride ablation tests conducted by NASA, Ames Research Center

    International Nuclear Information System (INIS)

    Elliott, R.D.

    1970-01-01

    A series of ablation tests of lithium hydride has been made by NASA-Ames in one of their high-enthalpy arc-heated wind tunnels. Two-inch diameter cylindrical samples of the hydride, supplied by A. I., were subjected to heating on their ends for time periods up to 10 seconds. After each test, the amount of material removed from each sample was measured. The rates of loss of material were correlated with the heat input rates in terms of a heat of ablation, which ranged from 2100 to 3500 Btu/lb. The higher values were obtained when the hydride contained a matrix such as steel honeycomb or steel wool. (U.S.)
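
    A minimal sketch of the correlation used above, assuming the effective heat of ablation is simply total heat input divided by the mass of material removed; the exposure time, heating rate, and mass loss below are hypothetical, not data from the NASA-Ames runs:

```python
# Effective heat of ablation (Btu/lb) = total heat input / mass removed.
# The exposure time, heat input rate, and mass loss below are hypothetical.

def heat_of_ablation(heat_input_btu: float, mass_removed_lb: float) -> float:
    return heat_input_btu / mass_removed_lb

if __name__ == "__main__":
    q_total = 50.0 * 5.0  # 50 Btu/s heating rate over a 5 s exposure
    print(heat_of_ablation(q_total, 0.1))  # 2500 Btu/lb, inside the reported 2100-3500 range
```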

  2. NASA's "Eyes" Focus on Education

    Science.gov (United States)

    Hussey, K.

    2016-12-01

    NASA's "Eyes on…" suite of products continues to grow in capability and popularity. The "Eyes on the Earth", "Eyes on the Solar System" and "Eyes on Exoplanets" real-time, 3D interactive visualization products have proven themselves as highly effective demonstration and communication tools for NASA's Earth and Space Science missions. This presentation will give a quick look at the latest updates to the "Eyes" suite plus what is being done to make them tools for STEM Education.

  3. AGU testifies on NASA Budget

    Science.gov (United States)

    Simarski, Lynn Teo

    Witnesses from outside the U.S. government—including Frank Eden, representing AGU—testified about the National Aeronautics and Space Administration's budget on March 12 before the House Science Committee's subcommittee on space. One major topic of the hearing was familiar: what should NASA's top priority be, space science or human exploration of space? “Obviously this committee has a huge job of trying to set priorities—consistent with the budget restraints—that will end up giving the American taxpayer the most bang for his buck, as well as providing direction for our space program,” said F. James Sensenbrenner, Jr. (R-Wis.), the subcommittee's ranking Republican. Another recurring topic, cited by the subcommittee's new chairman, Ralph M. Hall (D-Tex.), as well as by other committee members, was how to translate NASA-developed technologies into commercial gain for the U.S. in the global marketplace. Hall and others also posed a number of questions on a topic the chairman called a special concern of his: whether it would be economically and scientifically plausible for the U.S. to use the Soviet space station Mir for certain activities, such as medical applications.

  4. NASA Astrophysics Technology Needs

    Science.gov (United States)

    Stahl, H. Philip

    2012-01-01

    In July 2010, the NASA Office of the Chief Technologist (OCT) initiated an activity to create and maintain an integrated NASA roadmap for 15 key technology areas that recommends an overall technology investment strategy and prioritizes NASA's technology programs to meet NASA's strategic goals. The Science Instruments, Observatories and Sensor Systems (SIOSS) roadmap addresses the technology needs for achieving NASA's highest priority objectives -- not only for the Science Mission Directorate (SMD), but for all of NASA.

  5. Embracing Open Source for NASA's Earth Science Data Systems

    Science.gov (United States)

    Baynes, Katie; Pilone, Dan; Boller, Ryan; Meyer, David; Murphy, Kevin

    2017-01-01

    The overarching purpose of NASA's Earth Science program is to develop a scientific understanding of Earth as a system. Scientific knowledge is most robust and actionable when resulting from transparent, traceable, and reproducible methods. Reproducibility includes open access to the data as well as the software used to arrive at results. Additionally, software that is custom-developed for NASA should be open to the greatest degree possible, to enable re-use across Federal agencies, reduce overall costs to the government, remove barriers to innovation, and promote consistency through the use of uniform standards. Finally, Open Source Software (OSS) practices facilitate collaboration between agencies and the private sector. To best meet these ends, NASA's Earth Science Division promotes the full and open sharing of not only all data, metadata, products, information, documentation, models, images, and research results but also the source code used to generate, manipulate and analyze them. This talk focuses on the challenges to open sourcing NASA-developed software within ESD and the growing pains associated with establishing policies running the gamut of tracking issues, properly documenting build processes, engaging the open source community, maintaining internal compliance, and accepting contributions from external sources. This talk also covers the adoption of existing open source technologies and standards to enhance our custom solutions and our contributions back to the community. Finally, we will be introducing the most recent OSS contributions from the NASA Earth Science program and promoting these projects for wider community review and adoption.

  6. Experiences with a high-blockage model tested in the NASA Ames 12-foot pressure wind tunnel

    Science.gov (United States)

    Coder, D. W.

    1984-01-01

    Representation of the flow around full-scale ships was sought in subsonic wind tunnels in order to attain Reynolds numbers as high as possible. As part of the quest to attain the largest possible Reynolds number, large models with high blockage are used, which results in significant wall interference effects. Some experiences with such a high-blockage model tested in the NASA Ames 12-foot pressure wind tunnel are summarized. The main results of the experiment relating to wind tunnel wall interference effects are also presented.
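
    A minimal sketch of the solid-blockage ratio behind the wall-interference concern above, assuming a circular 12 ft test section (taken only from the tunnel's name) and a hypothetical model frontal area:

```python
import math

# Solid-blockage ratio = model frontal area / test-section cross-sectional area.
# Only the 12 ft diameter comes from the tunnel's name; the model area is hypothetical.

def blockage_ratio(model_frontal_area_ft2: float, tunnel_diameter_ft: float = 12.0) -> float:
    test_section_area = math.pi * (tunnel_diameter_ft / 2.0) ** 2
    return model_frontal_area_ft2 / test_section_area

if __name__ == "__main__":
    print(f"{blockage_ratio(11.3):.1%}")  # ~10% blockage for an 11.3 ft^2 model
```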

  7. NASA satellite communications application research. Phase 2: Efficient high power, solid state amplifier for EHF communications

    Science.gov (United States)

    Benet, James

    1993-01-01

    The final report describes the work performed from 9 Jun. 1992 to 31 Jul. 1993 on the NASA Satellite Communications Application Research (SCAR) Phase 2 program, Efficient High Power, Solid State Amplifier for EHF Communications. The purpose of the program was to demonstrate the feasibility of high-efficiency, high-power, EHF solid state amplifiers that are smaller, lighter, more efficient, and less costly than existing traveling wave tube (TWT) amplifiers by combining the output power from up to several hundred solid state amplifiers using a unique orthomode spatial power combiner (OSPC).
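
    A minimal sketch of the power-combining arithmetic implied above, assuming the combined output is roughly the device count times per-device power times a lumped combiner efficiency; all values below are hypothetical, not figures from the SCAR Phase 2 report:

```python
# Combined RF output of N power-combined solid state devices, assuming an
# ideal split and a single lumped combiner efficiency. Numbers are illustrative.

def combined_output_w(n_devices: int, per_device_w: float, combiner_efficiency: float) -> float:
    return n_devices * per_device_w * combiner_efficiency

if __name__ == "__main__":
    # e.g., 200 devices at 0.5 W each through a 90%-efficient spatial combiner
    print(combined_output_w(200, 0.5, 0.90))  # 90.0 W
```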

  8. NASA's Impacts Towards Improving International Water Management Using Satellites

    Science.gov (United States)

    Toll, D. L.; Doorn, B.; Searby, N. D.; Entin, J. K.; Lawford, R. G.; Mohr, K. I.; Lee, C. M.

    2013-12-01

    Key objectives of NASA's Water Resources and Capacity Building Programs are to discover and demonstrate innovative uses and practical benefits of NASA's advanced system technologies for improved water management. This presentation will emphasize NASA's water research, applications, and capacity building activities using satellites and models to contribute to water issues including water availability, transboundary water, flooding and droughts for international partners, particularly developing countries. NASA's free and open exchange of Earth data observations and products helps engage and improve integrated observation networks and enables national and multi-national regional water cycle research and applications that are especially useful in data sparse regions of most developing countries. NASA satellite and modeling products provide a huge volume of valuable data extending back over 50 years across a broad range of spatial (local to global) and temporal (hourly to decadal) scales and include many products that are available in near real time (see earthdata.nasa.gov). To further accomplish these objectives NASA works to actively partner with public and private groups (e.g. federal agencies, universities, NGO's, and industry) in the U.S. and internationally to ensure the broadest use of its satellites and related information and products and to collaborate with regional end users who know the regions and their needs best. The event will help demonstrate the strong partnering and the use of satellite data to provide synoptic and repetitive spatial coverage helping water managers deal with complex issues. This presentation will outline and describe NASA's international water-related research, applications and capacity building programs' efforts to address developing countries' critical water challenges in Asia, Africa, and Latin America. This will specifically highlight impacts and case studies from NASA's programs in Water Resources (e.g., drought, snow

  9. The NASA Advanced Space Power Systems Project

    Science.gov (United States)

    Mercer, Carolyn R.; Hoberecht, Mark A.; Bennett, William R.; Lvovich, Vadim F.; Bugga, Ratnakumar

    2015-01-01

    The goal of the NASA Advanced Space Power Systems Project is to develop advanced, game-changing technologies that will provide future NASA space exploration missions with safe, reliable, lightweight, and compact power generation and energy storage systems. The development effort is focused on maturing the technologies from a technology readiness level of approximately 2-3 to approximately 5-6, as defined in NASA Procedural Requirement 7123.1B. Currently, the project is working on two critical technology areas: high specific energy batteries, and regenerative fuel cell systems with passive fluid management. Examples of target applications for these technologies are: extending the duration of extravehicular activities (EVA) with high specific energy and energy density batteries; and providing reliable, long-life power for rovers with passive fuel cell and regenerative fuel cell systems that enable reduced system complexity. Recent results from the high energy battery and regenerative fuel cell technology development efforts will be presented. The technical approach, the key performance parameters, and the technical results achieved to date in each of these new elements will be included. The Advanced Space Power Systems Project is part of the Game Changing Development Program under NASA's Space Technology Mission Directorate.
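
    A minimal sketch of why higher specific energy extends EVA duration, the first target application named above; the battery mass, suit power draw, and cell specific energies below are illustrative assumptions, not project figures:

```python
# EVA duration (hours) = specific energy (Wh/kg) * battery mass (kg) / average load (W).
# All numbers below are illustrative assumptions.

def eva_duration_hours(specific_energy_wh_per_kg: float,
                       battery_mass_kg: float,
                       avg_power_draw_w: float) -> float:
    return specific_energy_wh_per_kg * battery_mass_kg / avg_power_draw_w

if __name__ == "__main__":
    # Same 5 kg battery and 100 W suit load with two hypothetical cell technologies:
    print(eva_duration_hours(180.0, 5.0, 100.0))  # baseline cell  -> 9.0 h
    print(eva_duration_hours(400.0, 5.0, 100.0))  # advanced cell -> 20.0 h
```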

  10. A Bayes Theory-Based Modeling Algorithm to End-to-end Network Traffic

    OpenAIRE

    Zhao Hong-hao; Meng Fan-bo; Zhao Si-wen; Zhao Si-hang; Lu Yi

    2016-01-01

    Recently, network traffic has been increasing exponentially due to all kinds of applications, such as mobile Internet, smart cities, smart transportation, the Internet of Things, and so on. End-to-end network traffic has therefore become more important for traffic engineering, yet it is usually highly difficult to estimate. This paper proposes a Bayes theory-based method to model the end-to-end network traffic. Firstly, the end-to-end network traffic is described as an independent identically distrib...
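
    The record does not give the paper's algorithm, so the sketch below illustrates a generic Bayes update (a conjugate normal prior on an unknown mean traffic rate, with an assumed-known noise variance) rather than the authors' specific method:

```python
import numpy as np

# Generic conjugate normal-normal update: traffic samples are assumed Gaussian
# with known noise variance, and the unknown mean rate gets a Gaussian prior.
# This illustrates Bayes-based estimation; it is not the paper's algorithm.

def posterior_mean_rate(samples: np.ndarray,
                        prior_mean: float, prior_var: float,
                        noise_var: float) -> tuple:
    n = len(samples)
    post_var = 1.0 / (1.0 / prior_var + n / noise_var)
    post_mean = post_var * (prior_mean / prior_var + samples.sum() / noise_var)
    return post_mean, post_var

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    observed = rng.normal(120.0, 15.0, size=50)  # hypothetical Mb/s samples on one path
    print(posterior_mean_rate(observed, prior_mean=100.0, prior_var=400.0, noise_var=225.0))
```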

  11. Installation of new Generation General Purpose Computer (GPC) compact unit

    Science.gov (United States)

    1991-01-01

    In the Kennedy Space Center's (KSC's) Orbiter Processing Facility (OPF) high bay 2, Spacecraft Electronics technician Ed Carter (right), wearing clean suit, prepares for (26864) and installs (26865) the new Generation General Purpose Computer (GPC) compact IBM unit in Atlantis', Orbiter Vehicle (OV) 104's, middeck avionics bay as Orbiter Systems Quality Control technician Doug Snider looks on. Both men work for NASA contractor Lockheed Space Operations Company. All three orbiters are being outfitted with the compact IBM unit, which replaces a two-unit earlier generation computer.

  12. The NASA Low-Pressure Turbine Flow Physics Program: A Review

    Science.gov (United States)

    Ashpis, David E.

    2002-01-01

    An overview of the NASA Glenn Low-Pressure Turbine (LPT) Flow Physics Program will be presented. The flow in the LPT is unique within the gas turbine. It is characterized by low Reynolds number and high freestream turbulence intensity and is dominated by the interplay of three basic mechanisms: transition, separation, and wake interaction. The flow of most interest is on the suction surface, where large losses are generated due to separation. The LPT is a large, multistage, heavy, jet engine component that suffers efficiency degradation between takeoff and cruise conditions due to the decrease in Reynolds number with altitude. The performance penalty is around 2 points for large commercial bypass engines and as much as 7 points for small, high cruise altitude, military engines. The gas-turbine industry is very interested in improving the performance of the LPT and in reducing its weight, part count, and cost. Many improvements can be accomplished by improved airfoil design, mainly by increasing the airfoil loading, which can yield a reduction in airfoil count and improved performance. In addition, there is a strong interest in reducing the design cycle time and cost. Key enablers of the needed improvements are computational tools that can accurately predict LPT flows. Current CFD tools in use cannot yet satisfactorily predict the unsteady, transitional and separated flow in the LPT. The main reasons are inadequate transition and turbulence models and incomplete understanding of the LPT flow physics. NASA Glenn has established its LPT program to answer these needs. The main goal of the program is to develop and assess models for unsteady CFD of LPT flows. An approach that consists of complementing and augmenting experimental and computational work elements has been adopted. The work is performed in-house and by several academic institutions, in cooperation and interaction with industry. The program was reviewed at the Minnowbrook II meeting in 1997. This review will summarize the progress
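
    A minimal sketch of the Reynolds-number lapse mentioned above (Re = rho*V*c/mu falls with altitude mainly through density); the blade chord, velocity, and air properties below are illustrative assumptions, not engine data:

```python
# Chord Reynolds number Re = rho * V * c / mu. Density (and hence Re) drops
# sharply from sea level to cruise altitude; all inputs below are illustrative.

def reynolds(rho_kg_m3: float, velocity_m_s: float, chord_m: float, mu_pa_s: float) -> float:
    return rho_kg_m3 * velocity_m_s * chord_m / mu_pa_s

if __name__ == "__main__":
    chord = 0.04  # hypothetical LPT blade chord, m
    takeoff = reynolds(1.225, 150.0, chord, 1.79e-5)  # sea-level air
    cruise = reynolds(0.36, 150.0, chord, 1.42e-5)    # roughly 11 km standard atmosphere
    print(f"takeoff Re ~ {takeoff:.0f}, cruise Re ~ {cruise:.0f}")
```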

  13. Lessons from NASA Applied Sciences Program: Success Factors in Applying Earth Science in Decision Making

    Science.gov (United States)

    Friedl, L. A.; Cox, L.

    2008-12-01

    The NASA Applied Sciences Program collaborates with organizations to discover and demonstrate applications of NASA Earth science research and technology to decision making. The desired outcome is for public and private organizations to use NASA Earth science products in innovative applications for sustained, operational uses to enhance their decisions. In addition, the program facilitates the end-user feedback to Earth science to improve products and demands for research. The Program thus serves as a bridge between Earth science research and technology and the applied organizations and end-users with management, policy, and business responsibilities. Since 2002, the Applied Sciences Program has sponsored over 115 applications-oriented projects to apply Earth observations and model products to decision making activities. Projects have spanned numerous topics - agriculture, air quality, water resources, disasters, public health, aviation, etc. The projects have involved government agencies, private companies, universities, non-governmental organizations, and foreign entities in multiple types of teaming arrangements. The paper will examine this set of applications projects and present specific examples of successful use of Earth science in decision making. The paper will discuss scientific, organizational, and management factors that contribute to or impede the integration of the Earth science research in policy and management. The paper will also present new methods the Applied Sciences Program plans to implement to improve linkages between science and end users.

  14. NASA Remote Sensing Technologies for Improved Integrated Water Resources Management

    Science.gov (United States)

    Toll, D. L.; Doorn, B.; Searby, N. D.; Entin, J. K.; Lee, C. M.

    2014-12-01

    This presentation will emphasize NASA's water research, applications, and capacity building activities using satellites and models to contribute to water issues including water availability, transboundary water, flooding and droughts for improved Integrated Water Resources Management (IWRM). NASA's free and open exchange of Earth data observations and products helps engage and improve integrated observation networks and enables national and multi-national regional water cycle research and applications that are especially useful in data sparse regions of most developing countries. NASA satellite and modeling products provide a huge volume of valuable data extending back over 50 years across a broad range of spatial (local to global) and temporal (hourly to decadal) scales and include many products that are available in near real time (see earthdata.nasa.gov). To further accomplish these objectives NASA works to actively partner with public and private groups (e.g. federal agencies, universities, NGO's, and industry) in the U.S. and international community to ensure the broadest use of its satellites and related information and products and to collaborate with regional end users who know the regions and their needs best. Key objectives of this talk will highlight NASA's Water Resources and Capacity Building Programs with their objective to discover and demonstrate innovative uses and practical benefits of NASA's advanced system technologies for improved water management in national and international applications. The event will help demonstrate the strong partnering and the use of satellite data to provide synoptic and repetitive spatial coverage helping water managers deal with complex issues. The presentation will also demonstrate how NASA is a major contributor to water tasks and activities in GEOSS (Global Earth Observing System of Systems) and GEO (Group on Earth Observations).

  15. An Intelligent Computer-aided Training System (CAT) for Diagnosing Adult Illiterates: Integrating NASA Technology into Workplace Literacy

    Science.gov (United States)

    Yaden, David B., Jr.

    1991-01-01

    An important part of NASA's mission involves the secondary application of its technologies in the public and private sectors. One current application being developed is The Adult Literacy Evaluator, a simulation-based diagnostic tool designed to assess the operant literacy abilities of adults having difficulties in learning to read and write. Using Intelligent Computer-Aided Training (ICAT) system technology in addition to speech recognition, closed-captioned television (CCTV), live video and other state-of-the-art graphics and storage capabilities, this project attempts to overcome the negative effects of adult literacy assessment by allowing the client to interact with an intelligent computer system which simulates real-life literacy activities and materials and which measures literacy performance in the actual context of its use. The specific objectives of the project are as follows: (1) to develop a simulation-based diagnostic tool to assess adults' prior knowledge about reading and writing processes in actual contexts of application; (2) to provide a profile of readers' strengths and weaknesses; and (3) to suggest instructional strategies and materials which can be used as a beginning point for remediation. In the first and development phase of the project, descriptions of literacy events and environments are being written and functional literacy documents analyzed for their components. From these descriptions, scripts are being generated which define the interaction between the student, an on-screen guide and the simulated literacy environment.

  16. NASA research in aeropropulsion

    Energy Technology Data Exchange (ETDEWEB)

    Stewart, W.L.; Weber, R.J.

    1981-12-01

    Future advances in aircraft propulsion systems will be aided by the research performed by NASA and its contractors. This paper gives selected examples of recent accomplishments and current activities relevant to the principal classes of civil and military aircraft. Some instances of new emerging technologies with potential high impact on further progress are discussed. NASA research described includes noise abatement and fuel economy measures for commercial subsonic, supersonic, commuter, and general aviation aircraft; aircraft engines of the jet, turboprop, diesel, and rotary types; VTOL; X-wing rotorcraft; helicopters; and 'stealth' aircraft. Applications to military aircraft are also discussed.

  17. The end-of-life phase of high-grade glioma patients: dying with dignity?

    NARCIS (Netherlands)

    Sizoo, Eefje M.; Taphoorn, Martin J. B.; Uitdehaag, Bernard; Heimans, Jan J.; Deliens, Luc; Reijneveld, Jaap C.; Pasman, H. Roeline W.

    2013-01-01

    In the end-of-life (EOL) phase, high-grade glioma (HGG) patients have a high symptom burden and often lose independence because of physical and cognitive dysfunction. This might affect the patient's personal dignity. We aimed to (a) assess the proportion of HGG patients dying with dignity as

  18. Risk Management of NASA Projects

    Science.gov (United States)

    Sarper, Hueseyin

    1997-01-01

    Various NASA Langley Research Center and other center projects were examined in an attempt to obtain historical data comparing the pre-phase A study with the final outcome of each project. This attempt, however, was abandoned once it became clear that very little documentation was available. Next, an extensive literature search was conducted on the role of risk and reliability concepts in project management. Probabilistic risk assessment (PRA) techniques are being used with increasing regularity both inside and outside of NASA. The value and usage of PRA techniques were reviewed for large projects. It was found that both the civilian and military branches of the space industry have traditionally refrained from using PRA, which was developed and expanded by the nuclear industry. Although much has changed with the end of the Cold War and the Challenger disaster, it was found that an ingrained anti-PRA culture is hard to overcome. Examples of skepticism against the use of risk management and assessment techniques were found both in the literature and in conversations with some technical staff. Program and project managers need to be convinced that the applicability and use of risk management and risk assessment techniques is much broader than just the traditional safety-related areas of application. The time has come to begin to apply these techniques uniformly. A risk-based approach can maximize the 'return on investment' that the public demands. Also, it would be very useful if all project documents of NASA Langley Research Center, pre-phase A through final report, were carefully stored in a central repository, preferably in electronic format.
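
    A minimal sketch of the arithmetic at the core of PRA, combining independent component failure probabilities through the AND/OR gates of a hypothetical fault tree; the structure and numbers are illustrative, not drawn from any NASA project:

```python
# Fault-tree arithmetic on independent events: OR gate = at least one input
# fails; AND gate = all inputs fail. Structure and numbers are hypothetical.

def or_gate(*probs: float) -> float:
    survive = 1.0
    for p in probs:
        survive *= (1.0 - p)
    return 1.0 - survive

def and_gate(*probs: float) -> float:
    fail = 1.0
    for p in probs:
        fail *= p
    return fail

if __name__ == "__main__":
    # Top event: loss of power = both redundant strings fail OR the shared bus fails.
    string_failure = or_gate(1e-3, 5e-4)  # each string: converter fault OR battery fault
    print(or_gate(and_gate(string_failure, string_failure), 2e-5))
```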

  19. NASA's Bio-Inspired Acoustic Absorber Concept

    Science.gov (United States)

    Koch, L. Danielle

    2017-01-01

    Transportation noise pollutes our world's cities, suburbs, parks, and wilderness areas. NASA's fundamental research in aviation acoustics is helping to find innovative solutions to this multifaceted problem. NASA is learning from nature to develop the next generation of quiet aircraft. The number of road vehicles and airplanes has roughly tripled since the 1960s. Transportation noise is audible in nearly all the counties across the US. Noise can damage your hearing, raise your heart rate and blood pressure, disrupt your sleep, and make communication difficult. Noise pollution threatens wildlife when it prevents animals from hearing prey, predators, and mates. Noise regulations help drive industry to develop quieter aircraft. Noise standards for aircraft have been developed by the International Civil Aviation Organization and adopted by the US Federal Aviation Administration. The US National Park Service is working with the Federal Aviation Administration to try to balance the demand for access to the parks and wilderness areas with preservation of the natural soundscape. NASA is helping by conceptualizing quieter, more efficient aircraft of the future and performing the fundamental research to make these concepts a reality someday. Recently, NASA has developed synthetic structures that can absorb sound well over a wide frequency range, particularly below 1000 Hz, and which mimic the acoustic performance of bundles of natural reeds. We are adapting these structures to control noise on aircraft and spacecraft. This technology might be used in many other industrial or architectural applications where acoustic absorbers have tight constraints on weight and thickness, and may be exposed to high temperatures or liquids. Information about this technology is being made available through reports and presentations available through the NASA Technical Report Server, https://ntrs.nasa.gov. Organizations who would like to collaborate with NASA or commercialize NASA's technology

  20. The Principles and Practice of Distributed High Throughput Computing

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The potential of Distributed Processing Systems to deliver computing capabilities with qualities ranging from high availability and reliability to easy expansion in functionality and capacity was recognized and formalized in the 1970s. For more than three decades these principles of Distributed Computing have guided the development of the HTCondor resource and job management system. The widely adopted suite of software tools offered by HTCondor is based on novel distributed computing technologies and is driven by the evolving needs of High Throughput scientific applications. We will review the principles that underpin our work, the distributed computing frameworks and technologies we developed, and the lessons we learned from delivering effective and dependable software tools in an ever-changing landscape of computing technologies and needs that range today from a desktop computer to tens of thousands of cores offered by commercial clouds. About the speaker Miron Livny received a B.Sc. degree in Physics and Mat...
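
    The sketch below is a local stand-in for the high-throughput pattern HTCondor manages at scale (many small, independent jobs dispatched and gathered as they finish); it uses only the Python standard library and is not HTCondor's API:

```python
from concurrent.futures import ProcessPoolExecutor, as_completed

# Many small, independent "jobs" submitted at once and collected as they finish,
# the workload shape that high throughput computing systems like HTCondor serve.

def job(seed: int) -> float:
    x = seed
    for _ in range(10_000):                      # a tiny computation standing in for real work
        x = (1103515245 * x + 12345) % (2 ** 31)
    return x / 2 ** 31

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        futures = [pool.submit(job, seed) for seed in range(100)]
        results = [f.result() for f in as_completed(futures)]
    print(f"completed {len(results)} independent jobs")
```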