WorldWideScience

Sample records for radar performance computer

  1. Radar Landmass Simulation Computer Programming (Interim Report).

    Science.gov (United States)

    (*RADAR SCANNING, TERRAIN), (*NAVAL TRAINING, RADAR OPERATORS), (*FLIGHT SIMULATORS, TERRAIN AVOIDANCE), (*COMPUTER PROGRAMMING, INSTRUCTION MANUALS), PLAN POSITION INDICATORS, REAL TIME, DISPLAY SYSTEMS, RADAR IMAGES, SIMULATION

  2. An Implementation of real-time phased array radar fundamental functions on DSP-focused, high performance embedded computing platform

    Science.gov (United States)

    Yu, Xining; Zhang, Yan; Patel, Ankit; Zahrai, Allen; Weber, Mark

    2016-05-01

    This paper investigates the feasibility of real-time, multiple-channel processing in a digital phased array system backend design, with a focus on high-performance embedded computing (HPEC) platforms built around general-purpose digital signal processors (DSPs). Serial RapidIO (SRIO) is used as the inter-chip backend protocol to support inter-core communication and parallelism. Performance benchmarks were obtained on an SRIO system chassis with an emulated configuration similar to a field-scale demonstrator of the Multi-functional Phased Array Radar (MPAR). An interesting aspect of this work is the comparison between "raw", low-level DSP processing and emerging tools that systematically exploit parallelism and multi-core capability, such as OpenCL and OpenMP. Comparisons with other backend HPEC solutions, such as FPGAs and GPUs, are also provided through analysis and experiments.

  3. Performance indicators modern surveillance radar

    NARCIS (Netherlands)

    Nooij, P.N.C.; Theil, A.

    2014-01-01

    Blake chart computations are widely employed to rank detection coverage capabilities of competitive search radar systems. Developed for comparable 2D radar systems with a mechanically rotating reflector antenna, it was not necessary to regard update rate and plot quality in Blake's chart. To

  4. Performance indicators modern surveillance radar

    NARCIS (Netherlands)

    Nooij, P.N.C.; Theil, A.

    2014-01-01

    Blake chart computations are widely employed to rank detection coverage capabilities of competitive search radar systems. Developed for comparable 2D radar systems with a mechanically rotating reflector antenna, it was not necessary to regard update rate and plot quality in Blake's chart. To charact

  5. An Implementation of Real-Time Phased Array Radar Fundamental Functions on a DSP-Focused, High-Performance, Embedded Computing Platform

    Directory of Open Access Journals (Sweden)

    Xining Yu

    2016-09-01

    Full Text Available This paper investigates the feasibility of a backend design for a real-time, multiple-channel digital phased array system, particularly for high-performance embedded computing platforms constructed of general-purpose digital signal processors. First, we obtained a lab-scale backend performance benchmark by simulating beamforming, pulse compression, and Doppler filtering on a Micro Telecom Computing Architecture (MTCA) chassis using the Serial RapidIO protocol for backplane communication. Next, a field-scale demonstrator of a multifunctional phased array radar is emulated using a similar configuration. Interestingly, the performance of a barebones design is compared to that of emerging tools that systematically take advantage of parallelism and multicore capabilities, including the Open Computing Language.

  6. Distributed Computing Framework for Synthetic Radar Application

    Science.gov (United States)

    Gurrola, Eric M.; Rosen, Paul A.; Aivazis, Michael

    2006-01-01

    We are developing an extensible software framework in response to Air Force and NASA needs for distributed computing facilities for a variety of radar applications. The objective of this work is to develop a Python-based software framework, that is, the framework elements of the middleware that allow developers to control processing flow on a grid in a distributed computing environment. Framework architectures to date allow developers to connect processing functions together as interchangeable objects, thereby allowing a data flow graph to be devised for a specific problem to be solved. The Pyre framework, developed at the California Institute of Technology (Caltech) and now being used as the basis for next-generation radar processing at JPL, is a Python-based software framework. We have extended the Pyre framework to include new facilities to deploy processing components as services, including components that monitor and assess the state of the distributed network for eventual real-time control of grid resources.

  7. Analysis of the computational requirements of a pulse-doppler radar signal processor

    CSIR Research Space (South Africa)

    Broich, R

    2012-05-01

    Full Text Available These simplifications are often degrading to algorithmic performance and thus to the entire radar system [1]. In this paper the different computational operations that are used in pulse-Doppler radar signal processing are explored, with reference to general-purpose computer architectures [3]. An abstract machine is assumed in which only memory reads, writes, additions and multiplications are considered to be significant operations. (Figure 1 of the original shows the radar signal processor (RSP) flow of operations.)

  8. Sea clutter scattering, the K distribution and radar performance

    CERN Document Server

    Ward, Keith; Watts, Simon

    2013-01-01

    Sea Clutter: Scattering, the K Distribution and Radar Performance, 2nd Edition gives an authoritative account of our current understanding of radar sea clutter. Topics covered include the characteristics of radar sea clutter, modelling radar scattering by the ocean surface, statistical models of sea clutter, the simulation of clutter and other random processes, detection of small targets in sea clutter, imaging ocean surface features, radar detection performance calculations, CFAR detection, and the specification and measurement of radar performance. The calculation of the performance of pract

  9. Computing Optimum Heights for Balloon-Borne Radar

    Science.gov (United States)

    1993-11-01

    Under ducting, a "radar hole" could develop. The model is compared against other raytrace models (IREPS, EREPS) that are considered accurate. References cited include TD-1369, Naval Ocean Systems Center, San Diego, CA, October 1985, and Squires, M.F., Caribbean Basin Radar Network Raytrace Study, USAFETAC/PR-91/005. (Report: Computing Optimum Heights for Balloon-Borne Radar, by Michael F. Squires, November 1993; USAFETAC/PR-93/005, AD-A286 832.)

  10. Comparison of mimo radar concepts: Detection performance

    NARCIS (Netherlands)

    Rossum, W.L. van; Huizing, A.G.

    2007-01-01

    In this paper, four different array radar concepts are compared: pencil beam, floodlight, monostatic MIMO, and multistatic MIMO. The array radar concepts show an increase in complexity accompanied by an increase in diversity. The comparison between the radar concepts is made by investigating the

  11. Comparison of mimo radar concepts: Detection performance

    NARCIS (Netherlands)

    Rossum, W.L. van; Huizing, A.G.

    2007-01-01

    In this paper, four different array radar concepts are compared: pencil beam, floodlight, monostatic MIMO, and multistatic MIMO. The array radar concepts show an increase in complexity accompanied by an increase in diversity. The comparison between the radar concepts is made by investigating the det

  12. Comparison of mimo radar concepts: Detection performance

    NARCIS (Netherlands)

    Rossum, W.L. van; Huizing, A.G.

    2007-01-01

    In this paper, four different array radar concepts are compared: pencil beam, floodlight, monostatic MIMO, and multistatic MIMO. The array radar concepts show an increase in complexity accompanied by an increase in diversity. The comparison between the radar concepts is made by investigating the det

  13. HF Over-the-Horizon Radar System Performance Analysis

    Science.gov (United States)

    2007-09-01

    A target detection technique and radar equations are applied. Chapter V uses the PROPLAB model simulation to bring in the principle of raytracing and ... (Thesis: HF Over-the-Horizon Radar System Performance Analysis, by Bin-Yi Liu, September 2007; Thesis Co-Advisors: Phillip E. Pace and Jeffrey B. Knorr.)

  14. On detection performance and system configuration of MIMO radar

    Institute of Scientific and Technical Information of China (English)

    TANG Jun; WU Yong; PENG YingNing; WANG XiuTan

    2009-01-01

    Multiple-input multiple-output (MIMO) radar is a new concept with some new characteristics, such as multiple orthogonal waveforms and omnidirectional coverage. Based on Stein's lemma, we use relative entropy as a precise and general measure of the error exponent to study detection performance for both MIMO radar and phased array radar. Based on the derived analytical results, we further study the system configuration problem of bistatic MIMO radar systems, where transmitters and receivers are located in different positions. Some interesting results are presented. For phased array radar, when the total number of transmitters and receivers is fixed, we should always make the number of transmitters equal to the number of receivers. For MIMO radar, we should use a small number of transmitters in the low signal-to-noise ratio (SNR) region, and make the number of transmitters equal to the number of receivers in the high SNR region. These results are instructive for the deployment of bistatic MIMO radar systems in the future.
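
    To make the relative-entropy (Stein's lemma) argument above concrete, the sketch below computes the Kullback-Leibler divergence between a noise-only and a signal-plus-noise Gaussian hypothesis; with the false-alarm probability held fixed, the miss probability then decays roughly as exp(-N*D) in the number of independent samples N. The noise variance and SNR values are illustrative assumptions, not taken from the paper.

      import numpy as np

      def kl_gaussian(v_p, v_q):
          """D( N(0, v_p) || N(0, v_q) ) between zero-mean Gaussian densities."""
          return 0.5 * (v_p / v_q - 1.0 + np.log(v_q / v_p))

      noise_var = 1.0
      for snr_db in (-5.0, 0.0, 10.0):
          snr = 10.0 ** (snr_db / 10.0)
          # H0: noise only; H1: the sample variance grows by the target power (toy model).
          d = kl_gaussian(noise_var, noise_var * (1.0 + snr))
          print(f"SNR = {snr_db:5.1f} dB -> error exponent D = {d:.4f} nats per sample")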

  15. Tactile Radar: experimenting a computer game with visually disabled.

    Science.gov (United States)

    Kastrup, Virgínia; Cassinelli, Alvaro; Quérette, Paulo; Bergstrom, Niklas; Sampaio, Eliana

    2017-09-18

    Visually disabled people increasingly use computers in everyday life, thanks to novel assistive technologies better tailored to their cognitive functioning. Like sighted people, many are interested in computer games - videogames and audio-games. Tactile games are beginning to emerge. The Tactile Radar is a device through which a visually disabled person is able to detect distal obstacles. In this study, it is connected to a computer running a tactile game. The game consists in finding and collecting randomly arranged coins in a virtual room. The study was conducted with nine congenitally blind people of both sexes, aged 20-64 years. Complementary first- and third-person methods were used: the debriefing interview and the quasi-experimental design. The results indicate that the Tactile Radar is suitable for the creation of computer games specifically tailored for visually disabled people. Furthermore, the device seems capable of eliciting a powerful immersive experience. Methodologically speaking, this research contributes to the consolidation and development of complementary first- and third-person methods, particularly useful in the disability research field, including users' evaluation of the Tactile Radar's effectiveness in a virtual reality context. Implications for rehabilitation: Despite the growing interest in virtual games for visually disabled people, they still find barriers to accessing such games. Through the development of assistive technologies such as the Tactile Radar, applied in virtual games, we can create new opportunities for leisure, socialization and education for visually disabled people. The results of our study indicate that the Tactile Radar is adapted to the creation of video games for visually disabled people, providing a playful interaction with the players.

  16. Low level range coverage performance prediction for VHF radar

    Science.gov (United States)

    Kuschel, H.

    1989-09-01

    At VHF radar frequencies the range coverage is not strictly limited by the quasi-optical horizon, as it is at microwave radar frequencies, but is extended by diffraction propagation. This effect, here called beyond-the-horizon (BTH) detection capability, is strongly dependent on the propagation path and thus on the terrain structure. The availability of digital terrain maps opens the way to computerized methods for the prediction of radar range coverage in a real environment. In combination with wave propagation models suited to diffraction at terrain structures, digital terrain data can even be used to predict BTH target detectability for VHF radar. Here the digital landmass system (DLSS) terrain database was used in combination with a multiple-knife-edge diffraction model to predict the diffraction attenuation between the radar and the potential target positions, especially beyond the optical horizon. The propagation paths extracted from the database are modeled as a sequence of diffraction screens suited to the application of a Fresnel-Kirchhoff algorithm yielding the knife-edge diffraction attenuation. This terrain-related propagation model was verified by a large number of measurements at different frequencies. Implemented on a fast computer system, the prediction model can be used for mission planning of air operations, considering hostile VHF radar coverage and terrain conditions for flight path optimization; on the other hand, it can assist in siting mobile radars for gap filling according to the actual threat situation. Calculations of the diffraction propagation using the prediction model yield range coverage patterns in real terrain situations, allowing the BTH detection advantage of VHF radar over microwave radar to be quantified. An experimental large-wavelength (VHF) radar, LARA, was built, and the detection of flying targets beyond the close horizon, especially hiding helicopters, was examined by exploiting diffractive wave propagation.
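
    The multiple-knife-edge Fresnel-Kirchhoff approach mentioned above reduces, for a single obstacle, to the classical knife-edge diffraction loss as a function of the Fresnel parameter. The sketch below is a minimal single-edge illustration only (the paper uses a multiple-edge model driven by a terrain database); it uses the widely quoted ITU-style approximation for the loss, and the geometry values are arbitrary assumptions.

      import math

      def fresnel_parameter(h, d1, d2, wavelength):
          """Fresnel diffraction parameter v for an obstacle of height h (m) above the
          line of sight, at distances d1 and d2 (m) from transmitter and receiver."""
          return h * math.sqrt(2.0 * (d1 + d2) / (wavelength * d1 * d2))

      def knife_edge_loss_db(v):
          """Approximate single knife-edge diffraction loss (dB), valid for v > -0.78."""
          if v <= -0.78:
              return 0.0
          return 6.9 + 20.0 * math.log10(math.sqrt((v - 0.1) ** 2 + 1.0) + v - 0.1)

      # Illustrative numbers: 50 MHz VHF (wavelength ~6 m), ridge 30 m above the path,
      # 10 km from the radar and 5 km from the target.
      wavelength = 3e8 / 50e6
      v = fresnel_parameter(30.0, 10e3, 5e3, wavelength)
      print(f"v = {v:.2f}, extra diffraction loss ~ {knife_edge_loss_db(v):.1f} dB")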

  17. Magellan: radar performance and data products.

    Science.gov (United States)

    Pettengill, G H; Ford, P G; Johnson, W T; Raney, R K; Soderblom, L A

    1991-04-12

    The Magellan Venus orbiter carries only one scientific instrument: a 12.6-centimeter wavelength radar system shared among three data-taking modes. The synthetic-aperture mode images radar echoes from the Venus surface at a resolution of between 120 and 300 meters, depending on spacecraft altitude. In the altimetric mode, relative height measurement accuracies may approach 5 meters, depending on the terrain's roughness, although orbital uncertainties place a floor of about 50 meters on the absolute uncertainty. In areas of extremely rough topography, accuracy is limited by the inherent line-of-sight radar resolution of about 88 meters. The maximum elevation observed to date, corresponding to a planetary radius of 6062 kilometers, lies within Maxwell Mons. When used as a thermal emission radiometer, the system can determine surface emissivities to an absolute accuracy of about 0.02. Mosaicked and archival digital data products will be released in compact disk (CDROM) format.

  18. Magellan: Radar performance and data products

    Science.gov (United States)

    Pettengill, G.H.; Ford, P.G.; Johnson, W.T.K.; Raney, R.K.; Soderblom, L.A.

    1991-01-01

    The Magellan Venus orbiter carries only one scientific instrument: a 12.6-centimeter-wavelength radar system shared among three data-taking modes. The syntheticaperture mode images radar echoes from the Venus surface at a resolution of between 120 and 300 meters, depending on spacecraft altitude. In the altimetric mode, relative height measurement accuracies may approach 5 meters, depending on the terrain's roughness, although orbital uncertainties place a floor of about 50 meters on the absolute uncertainty. In areas of extremely rough topography, accuracy is limited by the inherent line-of-sight radar resolution of about 88 meters. The maximum elevation observed to date, corresponding to a planetary radius of 6062 kilometers, lies within Maxwell Mons. When used as a thermal emission radiometer, the system can determine surface emissivities to an absolute accuracy of about 0.02. Mosaicked and archival digital data products will be released in compact disk (CDROM) format.

  19. Polarization differences in airborne ground penetrating radar performance for landmine detection

    Science.gov (United States)

    Dogaru, Traian; Le, Calvin

    2016-05-01

    The U.S. Army Research Laboratory (ARL) has investigated ultra-wideband (UWB) radar technology for the detection of landmines, improvised explosive devices and unexploded ordnance for over two decades. This paper presents a phenomenological study of the radar signature of buried landmines in realistic environments and the performance of airborne synthetic aperture radar (SAR) in detecting these targets as a function of multiple parameters: polarization, depression angle, soil type and burial depth. The investigation is based on advanced computer models developed at ARL. The analysis includes both the signature of the targets of interest and the clutter produced by the rough ground surface. Based on our numerical simulations, we conclude that low depression angles and H-H polarization offer the highest target-to-clutter ratio in the SAR images and therefore the best radar performance of all the scenarios investigated.

  20. On detection performance of MIMO radar for Rician target

    Institute of Scientific and Technical Information of China (English)

    TANG Jun; WU Yong; PENG YingNing; WANG XiuTan

    2009-01-01

    By using spatial diversity, multiple-input multiple-output (MIMO) radar can improve detection performance for fluctuating targets. In this paper, we propose a spatial fluctuation target model for MIMO radar, where targets are classified as non-fluctuating targets, Rayleigh targets and Rician targets. Based on Stein's lemma, we use relative entropy to study the detection performance of the optimum detector for a Rician target. It is found that in the low signal-to-noise ratio (SNR) region, the performance improvement of MIMO radar for detecting a Rician target depends on the array gain, which is related to the number of receivers. In the high SNR region, the improvement depends on the diversity gain, which is related to the product of the number of receivers and the number of transmitters. The conclusions of this paper are important for designing MIMO radar systems.

  1. Preliminary performance analysis of the advanced pulse compression noise radar waveform

    Science.gov (United States)

    Govoni, Mark A.; Moyer, Lee R.

    2012-06-01

    Noise radar systems encounter target fluctuation behavior similar to that of conventional systems. For noise radar systems, however, the fluctuations are not only dictated by target composition and geometry, but also by the non-uniform power envelope of their random transmit signals. This third dependency is of interest and serves as the basis for the preliminary analysis conducted in this manuscript. General conclusions are drawn on the implications of having a random power envelope and the impacts it could have on both the transmit and receive processes. Using an advanced pulse compression noise (APCN) radar waveform as the constituent signal, a computer simulation aids in quantifying potential losses and the impacts they might have on the detection performance of a real radar system.
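
    To illustrate the random power envelope that this abstract highlights, the sketch below generates a band-limited Gaussian noise pulse and compares its instantaneous power with the flat envelope of a constant-modulus chirp. The bandwidth, pulse length and sample rate are illustrative assumptions, and the waveform is ordinary filtered noise rather than the actual APCN construction.

      import numpy as np

      rng = np.random.default_rng(0)
      fs, T, B = 200e6, 10e-6, 50e6          # sample rate, pulse length, bandwidth (assumed)
      n = int(fs * T)
      t = np.arange(n) / fs

      # Band-limited complex Gaussian noise pulse (stand-in for a noise-radar waveform).
      spec = rng.standard_normal(n) + 1j * rng.standard_normal(n)
      freqs = np.fft.fftfreq(n, 1 / fs)
      spec[np.abs(freqs) > B / 2] = 0.0
      noise_pulse = np.fft.ifft(spec)
      noise_pulse /= np.sqrt(np.mean(np.abs(noise_pulse) ** 2))

      # Constant-modulus LFM chirp for comparison.
      chirp = np.exp(1j * np.pi * (B / T) * (t - T / 2) ** 2)

      for name, x in (("noise pulse", noise_pulse), ("LFM chirp", chirp)):
          p = np.abs(x) ** 2
          print(f"{name}: mean power {p.mean():.2f}, peak-to-mean ratio {p.max() / p.mean():.2f}")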

  2. Computationally efficient DOD and DOA estimation for bistatic MIMO radar with propagator method

    Science.gov (United States)

    Zhang, Xiaofei; Wu, Hailang; Li, Jianfeng; Xu, Dazhuan

    2012-09-01

    In this article, we consider a computationally efficient direction-of-departure and direction-of-arrival estimation problem for a bistatic multiple-input multiple-output (MIMO) radar. The computational load of the propagator method (PM) can be significantly smaller since the PM does not require any eigenvalue decomposition of the cross-correlation matrix or singular value decomposition of the received data. An improved PM algorithm is proposed to obtain automatically paired transmit and receive angle estimates in the MIMO radar. The proposed algorithm has angle estimation performance very close to that of the conventional PM, which has a much higher complexity than our algorithm. For high signal-to-noise ratio, the proposed algorithm has angle estimation performance very close to that of the estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm. The variance of the estimation error and the Cramér-Rao bound of angle estimation are derived. Simulation results verify the usefulness of our algorithm.

  3. Agile beam laser radar using computational imaging for robotic perception

    Science.gov (United States)

    Powers, Michael A.; Stann, Barry L.; Giza, Mark M.

    2015-05-01

    This paper introduces a new concept that applies computational imaging techniques to laser radar for robotic perception. We observe that nearly all contemporary laser radars for robotic (i.e., autonomous) applications use pixel basis scanning where there is a one-to-one correspondence between world coordinates and the measurements directly produced by the instrument. In such systems this is accomplished through beam scanning and/or the imaging properties of focal-plane optics. While these pixel-basis measurements yield point clouds suitable for straightforward human interpretation, the purpose of robotic perception is the extraction of meaningful features from a scene, making human interpretability and its attendant constraints mostly unnecessary. The imposing size, weight, power and cost of contemporary systems is problematic, and relief from factors that increase these metrics is important to the practicality of robotic systems. We present a system concept free from pixel basis sampling constraints that promotes efficient and adaptable sensing modes. The cornerstone of our approach is agile and arbitrary beam formation that, when combined with a generalized mathematical framework for imaging, is suited to the particular challenges and opportunities of robotic perception systems. Our hardware concept looks toward future systems with optical device technology closely resembling modern electronically-scanned-array radar that may be years away from practicality. We present the design concept and results from a prototype system constructed and tested in a laboratory environment using a combination of developed hardware and surrogate devices for beam formation. The technological status and prognosis for key components in the system is discussed.

  4. A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing

    Directory of Open Access Journals (Sweden)

    Zhixin Li

    2017-01-01

    Full Text Available Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered a comprehensive data-intensive and computing-intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, which greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves a 4x speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy can improve performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing-intensive and data-intensive issues in SAR raw data simulation, and is easily extended to large-scale computing to achieve higher acceleration.

  5. A Fast Synthetic Aperture Radar Raw Data Simulation Using Cloud Computing.

    Science.gov (United States)

    Li, Zhixin; Su, Dandan; Zhu, Haijiang; Li, Wei; Zhang, Fan; Li, Ruirui

    2017-01-08

    Synthetic Aperture Radar (SAR) raw data simulation is a fundamental problem in radar system design and imaging algorithm research. The growth of surveying swath and resolution results in a significant increase in data volume and simulation period, which can be considered a comprehensive data-intensive and computing-intensive issue. Although several high performance computing (HPC) methods have demonstrated their potential for accelerating simulation, the input/output (I/O) bottleneck of huge raw data has not been eased. In this paper, we propose a cloud computing based SAR raw data simulation algorithm, which employs the MapReduce model to accelerate the raw data computing and the Hadoop distributed file system (HDFS) for fast I/O access. The MapReduce model is designed for the irregular parallel accumulation of raw data simulation, which greatly reduces the parallel efficiency of graphics processing unit (GPU) based simulation methods. In addition, three kinds of optimization strategies are put forward from the aspects of programming model, HDFS configuration and scheduling. The experimental results show that the cloud computing based algorithm achieves a 4x speedup over the baseline serial approach in an 8-node cloud environment, and each optimization strategy can improve performance by about 20%. This work proves that the proposed cloud algorithm is capable of solving the computing-intensive and data-intensive issues in SAR raw data simulation, and is easily extended to large-scale computing to achieve higher acceleration.
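
    As a toy illustration of the map/reduce decomposition described above (not the authors' Hadoop implementation), the sketch below maps each simulated point target to its per-pulse raw-data contribution and reduces by summation into the raw data matrix. The scene, geometry and signal model are drastically simplified assumptions.

      from functools import reduce
      import numpy as np

      c, fc, fs = 3e8, 9.6e9, 100e6           # illustrative constants
      prf, n_pulses, n_fast = 1000.0, 64, 256
      v = 150.0                                # platform velocity (m/s), assumed

      def map_target(target):
          """Map step: one point target -> its raw-echo contribution over all pulses."""
          x, r0, amp = target                  # along-track position, closest range, amplitude
          echo = np.zeros((n_pulses, n_fast), dtype=complex)
          for m in range(n_pulses):
              xa = (m - n_pulses / 2) * v / prf          # antenna along-track position
              r = np.hypot(r0, xa - x)                   # instantaneous slant range
              n = int(round((2 * r / c) * fs)) % n_fast  # fast-time bin (wrapped for brevity)
              echo[m, n] += amp * np.exp(-1j * 4 * np.pi * fc * r / c)
          return echo

      def reduce_echoes(a, b):
          """Reduce step: accumulate contributions into the raw data matrix."""
          return a + b

      targets = [(0.0, 5000.0, 1.0), (3.0, 5010.0, 0.5)]
      raw = reduce(reduce_echoes, map(map_target, targets))
      print("raw data matrix:", raw.shape, "total energy:", np.sum(np.abs(raw) ** 2))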

  6. Performance analysis of acceleration resolution for radar signal

    Institute of Scientific and Technical Information of China (English)

    ZHAO Hongzhong (赵宏钟); FU Qiang (付强)

    2003-01-01

    The high acceleration of moving targets brings severe problems in radar signal processing, such as a decrease in the output signal-to-noise ratio and a deterioration of the Doppler resolution. This paper presents an acceleration ambiguity function (AAF) for characterizing the acceleration effects and the acceleration resolution property in radar signal processing. A definition of the acceleration resolution based on the AAF is also presented. Using the AAF as an analysis tool, several factors are derived, including the loss factor of the output SNR, the broadening factor of the Doppler resolution, and the optimal accumulative time (OPT) caused by acceleration in linear-phase matched filtering. The convergence property of the quadratic-phase matched filter for searching for and estimating the acceleration is discussed. The results and conclusions are helpful for the quantitative analysis of acceleration effects on signal processing, and for evaluating the role of acceleration in radar waveform design.
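
    For orientation, one plausible way to write such an acceleration ambiguity function is to extend the classical narrowband ambiguity function with a quadratic (Doppler-rate) phase term; the paper's exact definition may differ, so the expression below is a hedged sketch in which f_d is the Doppler shift and f_a = 2a/lambda is the Doppler rate produced by a radial acceleration a.

      \chi(\tau, f_d, f_a) \;=\; \int_{-\infty}^{\infty} u(t)\, u^{*}(t+\tau)\,
          \exp\!\Big[\, j 2\pi \big( f_d\, t + \tfrac{1}{2} f_a\, t^{2} \big) \Big]\, dt,
      \qquad f_a = \frac{2a}{\lambda}

    Setting f_a = 0 recovers the ordinary delay-Doppler ambiguity function.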

  7. Detection Performance of Compressive Sensing Applied to Radar

    NARCIS (Netherlands)

    Anitori, L.; Otten, M.P.G.; Hoogeboom, P.

    2011-01-01

    In this paper some results are presented on detection performance of radar using Compressive Sensing. Compressive sensing is a recently developed theory which allows reconstruction of sparse signals with a number of measurements much lower than implied by the Nyquist rate. In this work the behavior

  8. High Performance Computing Today

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, Jack; Meuer,Hans; Simon,Horst D.; Strohmaier,Erich

    2000-04-01

    In the last 50 years, the field of scientific computing has seen a rapid change of vendors, architectures, technologies and the usage of systems. Despite all these changes, the evolution of performance on a large scale seems to be a very steady and continuous process. Moore's Law is often cited in this context. If the authors plot the peak performance of the machines of the last 5 decades that could have been called the supercomputers of their time (Figure 1), they indeed see how well this law holds for almost the complete lifespan of modern computing. On average they see an increase in performance of two orders of magnitude every decade.
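
    A quick back-of-the-envelope check of the growth rate quoted above: the sketch below converts "two orders of magnitude per decade" (taken directly from the abstract) into an equivalent performance-doubling period.

      import math

      growth_per_decade = 100.0            # two orders of magnitude every 10 years
      years = 10.0
      doubling_time = years * math.log(2) / math.log(growth_per_decade)
      print(f"100x per decade corresponds to doubling roughly every {doubling_time:.2f} years")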

  9. Simulating Radar Signals for Detection Performance Evaluation.

    Science.gov (United States)

    1981-02-01

    constant false alarm rate (CFAR) processing, and non-linear operations. The detection performance is determined by Monte Carlo sampling techniques. The listed FFT routine is credited to a program by Norman Brenner, from the basic algorithm by Charles Rader (both of MIT Lincoln Laboratory), May 1967.

  10. Effects of Stereoscopic 3D Digital Radar Displays on Air Traffic Controller Performance

    Science.gov (United States)

    2013-03-01

    Effects of Stereoscopic 3D Digital Radar Displays on Air Traffic Controller Performance. Thesis by Jason G. Russi, Technical Sergeant, USAF (AFIT-ENV-13-M-24); the work is declared not subject to copyright protection in the United States.

  11. Computationally Efficient DOA Tracking Algorithm in Monostatic MIMO Radar with Automatic Association

    Directory of Open Access Journals (Sweden)

    Huaxin Yu

    2014-01-01

    Full Text Available We consider the problem of tracking the directions of arrival (DOA) of multiple moving targets in monostatic multiple-input multiple-output (MIMO) radar. A low-complexity DOA tracking algorithm in monostatic MIMO radar is proposed. The proposed algorithm obtains DOA estimates via the difference between the previous and current covariance matrices of the reduced-dimension transformation signal, and it reduces the computational complexity and realizes automatic association in DOA tracking. Error analysis and the Cramér-Rao lower bound (CRLB) of DOA tracking are derived in the paper. The proposed algorithm can not only be regarded as an extension of the array-signal-processing DOA tracking algorithm in Zhang et al. (2008), but is also an improved version of that algorithm. Furthermore, the proposed algorithm has better DOA tracking performance than the DOA tracking algorithm in Zhang et al. (2008). The simulation results demonstrate the effectiveness of the proposed algorithm. Our work provides technical support for the practical application of MIMO radar.

  12. Cognitive MIMO Frequency Diverse Array Radar with High LPI Performance

    Directory of Open Access Journals (Sweden)

    Ling Huang

    2016-01-01

    Full Text Available Frequency diverse array (FDA) has a unique advantage in realizing low probability of intercept (LPI) technology because of its range-dependent beam pattern. In this paper, we propose a cognitive radar based on the frequency diverse array multiple-input multiple-output (MIMO) configuration. To implement LPI for the FDA MIMO transmit signals, a scheme for array weighting design is proposed, which minimizes the energy at the target location and maximizes the energy at the receiver. This is based on the range-dependent characteristics of the frequency diverse array transmit beam pattern. To solve the resulting problem, the second-order nonconvex optimization problem is converted into a convex problem and solved by the bisection method and convex optimization. To obtain the target information, the FDA MIMO radar is used to estimate the target parameters. Simulation results show that the proposed approach is effective in decreasing the probability that the radar is detected, with no loss in detection performance for the received signal.
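
    To give a feel for the range-dependent transmit pattern that FDA relies on (the property exploited above), the sketch below evaluates the standard first-order FDA array factor for a uniform linear array in which element n transmits at f0 + n*df. The carrier, frequency increment, spacing and evaluation instant are illustrative assumptions, not the paper's parameters.

      import numpy as np

      c = 3e8
      f0, df, N = 10e9, 3e3, 16             # carrier, frequency increment, elements (assumed)
      d = c / (2 * f0)                       # half-wavelength element spacing
      t = 0.0                                # snapshot instant

      def fda_array_factor(r, theta_deg, t):
          """First-order FDA array factor magnitude at range r (m) and angle theta (deg)."""
          theta = np.deg2rad(theta_deg)
          n = np.arange(N)
          phase = 2 * np.pi * (n * df * t - n * df * r / c + f0 * n * d * np.sin(theta) / c)
          return np.abs(np.sum(np.exp(1j * phase))) / N

      # The same angle gives different gains at different ranges (unlike a phased array).
      for r in (50e3, 95e3, 100e3):
          print(f"range {r/1e3:5.1f} km, broadside gain {fda_array_factor(r, 0.0, t):.2f}")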

  13. Analytical Research by Computer Simulation of Developmental Polarimetric/Frequency Agile Pulsed Radars.

    Science.gov (United States)

    1982-12-01

    The simulated target is one and one half meters in radar length, made up of five randomly spaced reflectors, each having a radar cross section of five square meters (Figures ...). Scattering matrices considered include: 1. the odd-bounce scattering matrix (flat plate, trihedral corner reflector) for linear polarization (see Figure 6), and 2. the even-bounce scattering matrix. In the radar equation terms used, sigma is the radar cross-section in square meters, R is the range to the target in meters, and Ls is the system loss (unitless). Because this analysis is performed in the voltage

  14. Performance limits for maritime Inverse Synthetic Aperture Radar (ISAR).

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin Walter

    2013-11-01

    The performance of an Inverse Synthetic Aperture Radar (ISAR) system depends on a variety of factors, many of which are interdependent in some manner. In this report we specifically examine ISAR as applied to maritime targets (e.g. ships). It is often difficult to get your arms around the problem of ascertaining achievable performance limits, and yet those limits exist and are dictated by physics. This report identifies and explores those limits, and how they depend on hardware system parameters and environmental conditions. Ultimately, this leads to a characterization of parameters that offer optimum performance for the overall ISAR system. While the information herein is not new to the literature, its collection into a single report hopes to offer some value in reducing the seek time.

  15. Multiple Target Localization with Bistatic Radar Using Heuristic Computational Intelligence Techniques

    Directory of Open Access Journals (Sweden)

    Fawad Zaman

    2015-01-01

    Full Text Available We assume a bistatic phased multiple-input multiple-output (MIMO) radar having a passive Centrosymmetric Cross Shape Sensor Array (CSCA) at its receiver. The transmitter of this bistatic radar sends coherent signals using a subarray that gives a fairly wide beam with a large solid angle so as to cover any potentially relevant target in the near field. We developed Heuristic Computational Intelligence (HCI) based techniques to jointly estimate the range, amplitude, and elevation and azimuth angles of multiple targets impinging on the CSCA. In this connection, the global search optimizers Particle Swarm Optimization (PSO) and Differential Evolution (DE) are first developed separately and, to further enhance performance, each is hybridized with a local search optimizer called the Active Set Algorithm (ASA). The performance of PSO, DE, PSO hybridized with ASA, and DE hybridized with ASA is first compared among the four variants and then with some traditional techniques available in the literature, using root mean square error (RMSE) as the figure of merit.
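
    For readers unfamiliar with the global search step, the sketch below is a minimal particle swarm optimizer applied to a generic least-squares cost. It is a generic PSO illustration only, not the authors' formulation: the cost function, swarm size and coefficients are arbitrary assumptions, and the DE and active-set hybridization steps are omitted.

      import numpy as np

      rng = np.random.default_rng(1)

      def cost(x):
          """Toy least-squares cost; stands in for the localization fitness function."""
          target = np.array([2.0, -1.0, 0.5])
          return np.sum((x - target) ** 2)

      n_particles, n_dims, n_iters = 30, 3, 200
      w, c1, c2 = 0.7, 1.5, 1.5                      # inertia and acceleration coefficients

      pos = rng.uniform(-5, 5, (n_particles, n_dims))
      vel = np.zeros_like(pos)
      pbest = pos.copy()
      pbest_cost = np.array([cost(p) for p in pos])
      gbest = pbest[np.argmin(pbest_cost)].copy()

      for _ in range(n_iters):
          r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = pos + vel
          costs = np.array([cost(p) for p in pos])
          improved = costs < pbest_cost
          pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
          gbest = pbest[np.argmin(pbest_cost)].copy()

      print("PSO estimate:", np.round(gbest, 3), "cost:", round(cost(gbest), 6))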

  16. Radiosonde pressure sensor performance - Evaluation using tracking radars

    Science.gov (United States)

    Parsons, C. L.; Norcross, G. A.; Brooks, R. L.

    1984-01-01

    The standard balloon-borne radiosonde employed for synoptic meteorology provides vertical profiles of temperature, pressure, and humidity as a function of elapsed time. These parameters are used in the hypsometric equation to calculate the geopotential altitude at each sampling point during the balloon's flight. It is important that the vertical location information be accurate. The present investigation was conducted with the objective to evaluate the altitude determination accuracy of the standard radiosonde throughout the entire balloon profile. The tests included two other commercially available pressure sensors to see if they could provide improved accuracy in the stratosphere. The pressure-measuring performance of standard baroswitches, premium baroswitches, and hypsometers in balloon-borne sondes was correlated with tracking radars. It was found that the standard and premium baroswitches perform well up to about 25 km altitude, while hypsometers provide more reliable data above 25 km.

  17. Computer monitors drilling performance

    Energy Technology Data Exchange (ETDEWEB)

    1984-05-01

    Computer systems that can monitor over 40 drilling variables, display them graphically, record and transmit the information have been developed separately by two French companies. The systems, Vigigraphic and Visufora, involve the linking of a master computer with various surface and downhole sensors to measure the data on a real-time (as experienced) basis and compute the information. Vigigraphic is able to produce graphic displays grouped on four screens - drilling, tripping, geological and mud data. It computes at least 200 variables from the sensor readings, and it can store over 100 variables. Visufora allows the operator to group the drilling variables as desired. It can monitor and analyze surface and downhole parameters. The system can be linked with MWD tools. Twenty channels of input are assigned to surface values and the remaining 20 channels can be used to monitor downhole instrumentation.

  18. Effect of radar undesirable characteristics on the performance of spectral feature landmine detection technique

    Science.gov (United States)

    Ho, K. C.; Gader, P. D.; Wilson, J. N.; Frigui, H.

    2010-04-01

    A factor that could affect the performance of ground penetrating radar for landmine detection is the radar self-signature. The radar self-signature is created by the internal coupling of the radar itself, and it appears nearly constant across different scans. Although it does not vary much, the radar self-signature can create hyperbolic shapes or anomaly patterns after ground alignment, thereby increasing the number of false detections. This paper examines the effect of the radar self-signature on the performance of the subspace spectral feature landmine detection algorithm. Experimental results in the presence of strong radar self-signatures are given, and a performance comparison is made with the pre-screener that is based on anomaly detection.
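
    Because the self-signature described above is approximately the same in every scan, a common and simple mitigation is to subtract the per-depth mean (or a running background estimate) across scans before detection. The sketch below illustrates that idea on synthetic data; the array sizes, signature shape and target response are invented for illustration and are not from the paper.

      import numpy as np

      rng = np.random.default_rng(2)
      n_scans, n_depth = 200, 128

      # Synthetic GPR B-scan: a scan-invariant self-signature plus one localized target.
      depth = np.arange(n_depth)
      self_signature = 0.8 * np.exp(-((depth - 20) / 5.0) ** 2)      # same in every scan
      data = np.tile(self_signature, (n_scans, 1))
      data[95:105, 60:70] += 1.0                                      # buried target response
      data += 0.05 * rng.standard_normal(data.shape)

      # Background (self-signature) removal: subtract the mean trace over all scans.
      background = data.mean(axis=0)
      cleaned = data - background

      for name, d in (("raw", data), ("mean-subtracted", cleaned)):
          print(f"{name}: mean |amplitude| in signature rows {np.abs(d[:, 15:25]).mean():.3f}, "
                f"in target region {np.abs(d[95:105, 60:70]).mean():.3f}")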

  19. High-performance computers for unmanned vehicles

    Science.gov (United States)

    Toms, David; Ettinger, Gil J.

    2005-10-01

    The present trend of increasing functionality onboard unmanned vehicles is made possible by rapid advances in high-performance computers (HPCs). An HPC is characterized by very high computational capability (100s of billions of operations per second) contained in lightweight, rugged, low-power packages. HPCs are critical to the processing of sensor data onboard these vehicles. Operations such as radar image formation, target tracking, target recognition, signal intelligence signature collection and analysis, electro-optic image compression, and onboard data exploitation are provided by these machines. The net effect of an HPC is to minimize communication bandwidth requirements and maximize mission flexibility. This paper focuses on new and emerging technologies in the HPC market. Emerging capabilities include new lightweight, low-power computing systems: multi-mission computing (using a common computer to support several sensors); onboard data exploitation; and large image data storage capacities. These new capabilities will enable an entirely new generation of deployed capabilities at reduced cost. New software tools and architectures available to unmanned vehicle developers will enable them to rapidly develop optimum solutions with maximum productivity and return on investment. These new technologies effectively open the trade space for unmanned vehicle designers.

  20. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  1. Ground Object Recognition using Laser Radar Data : Geometric Fitting, Performance Analysis, and Applications

    OpenAIRE

    Grönwall, Christina

    2006-01-01

    This thesis concerns detection and recognition of ground objects using data from laser radar systems. Typical ground objects are vehicles and land mines. For these objects, the orientation and articulation are unknown. The objects are placed in natural or urban areas where the background is unstructured and complex. The performance of laser radar systems is analyzed to obtain models of the uncertainties in laser radar data. A ground object recognition method is presented. It handles general,...

  2. Computer-Related Task Performance

    DEFF Research Database (Denmark)

    Longstreet, Phil; Xiao, Xiao; Sarker, Saonee

    2016-01-01

    The existing information system (IS) literature has acknowledged computer self-efficacy (CSE) as an important factor contributing to enhancements in computer-related task performance. However, the empirical results of CSE on performance have not always been consistent, and increasing an individua...

  3. Laser Radar Receiver Performance Improvement by Inter Symbol Interference

    Science.gov (United States)

    Mao, Xuesong; Inoue, Daisuke; Matsubara, Hiroyuki; Kagami, Manabu

    The power of laser radar received echoes varies over a large range due to many factors such as target distance, size, reflection ratio, etc., which makes it difficult to decode the codes from the noise-buried received signals in spectrum-code-modulated laser radar. First, a pseudo-random noise (PN) code modulated laser radar model is given, and the problem to be addressed is discussed. Then, a novel method based on Inter Symbol Interference (ISI) is proposed for resolving the problem, provided that only Additive White Gaussian Noise (AWGN) is present. The ISI effect is introduced by using a high pass filter (HPF). The results show that ISI improves the laser radar receiver decoding ratio, and thus the peak of the correlation function between the decoded codes and the modulation codes. Finally, the effect of the proposed method is verified by a simple experiment.
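
    The core receive operation described above is correlating the (possibly filtered) received sequence against the transmitted PN code and looking for a peak. The sketch below shows that baseline correlation step on a noisy, delayed binary PN-like sequence, with a simple first-order high-pass filter standing in for the HPF; the code length, noise level, delay and filter coefficient are illustrative assumptions and the filter is not the authors' design.

      import numpy as np

      rng = np.random.default_rng(3)
      code = rng.choice([-1.0, 1.0], size=511)        # binary PN-like code (assumed length)
      delay, noise_std = 137, 2.0

      # Received signal: delayed code buried in additive white Gaussian noise.
      rx = np.zeros(2 * code.size)
      rx[delay:delay + code.size] += code
      rx += noise_std * rng.standard_normal(rx.size)

      def highpass(x, alpha=0.9):
          """Simple first-order high-pass filter: y[n] = alpha*(y[n-1] + x[n] - x[n-1])."""
          y = np.zeros_like(x)
          for n in range(1, x.size):
              y[n] = alpha * (y[n - 1] + x[n] - x[n - 1])
          return y

      for name, sig in (("raw", rx), ("high-pass filtered", highpass(rx))):
          corr = np.correlate(sig, code, mode="valid")
          print(f"{name}: correlation peak at lag {int(np.argmax(np.abs(corr)))} (true delay {delay})")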

  4. Computationally efficient beampattern synthesis for dual-function radar-communications

    Science.gov (United States)

    Hassanien, Aboulnasr; Amin, Moeness G.; Zhang, Yimin D.

    2016-05-01

    The essence of amplitude-modulation based dual-function radar-communications is to modulate the sidelobes of the transmit beampattern while keeping the main beam, where the radar function takes place, unchanged during the entire processing interval. The number of distinct sidelobe levels (SLLs) required for information embedding grows exponentially with the number of bits being embedded. We propose a simple and computationally cheap method for transmit beampattern synthesis which requires designing and storing only two beamforming weight vectors. The proposed method first designs a principal transmit beamforming weight vector based on the requirements dictated by the radar function of the DFRC system. Then, a second weight vector is obtained by enforcing a deep null towards the intended communication directions. Additional SLLs can be realized by simply taking weighted linear combinations of the two available weight vectors. The effectiveness of the proposed method for beampattern synthesis is verified using simulation examples.
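
    A minimal numerical sketch of the two-weight idea for a uniform linear array: w1 is a conventional beamformer steered at the radar main beam, w2 is w1 with a null forced toward an assumed communication direction (via projection onto the complement of that steering vector), and intermediate sidelobe levels toward that direction are obtained from convex combinations of the two. The array size, directions and combination weights are assumptions; this is not the paper's exact synthesis procedure, and in this toy version the main-beam gain drifts slightly across combinations, unlike the constrained design described above.

      import numpy as np

      N = 16                                    # array elements (assumed)
      def steer(theta_deg):
          """Half-wavelength ULA steering vector toward theta (deg from broadside)."""
          n = np.arange(N)
          return np.exp(1j * np.pi * n * np.sin(np.deg2rad(theta_deg)))

      theta_main, theta_comm = 0.0, 40.0        # radar main beam and communication user (assumed)
      a_main, a_comm = steer(theta_main), steer(theta_comm)

      w1 = a_main / N                           # principal (conventional) beamformer
      # w2: remove the component of w1 along the communication steering vector -> deep null there.
      w2 = w1 - a_comm * (np.vdot(a_comm, w1) / np.vdot(a_comm, a_comm))

      def level_db(w, theta_deg):
          return 20 * np.log10(np.abs(np.vdot(w, steer(theta_deg))) + 1e-12)

      for alpha in (0.0, 0.25, 0.5, 0.75, 1.0):  # convex combinations give distinct SLLs
          w = alpha * w1 + (1 - alpha) * w2
          print(f"alpha={alpha:4.2f}: main beam {level_db(w, theta_main):6.2f} dB, "
                f"comm direction {level_db(w, theta_comm):8.2f} dB")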

  5. Power versus performance tradeoffs of GPU-accelerated backprojection-based synthetic aperture radar image formation

    Science.gov (United States)

    Portillo, Ricardo; Arunagiri, Sarala; Teller, Patricia J.; Park, Song J.; Nguyen, Lam H.; Deroba, Joseph C.; Shires, Dale

    2011-06-01

    The continuing miniaturization and parallelization of computer hardware has facilitated the development of mobile and field-deployable systems that can accommodate terascale processing within once prohibitively small size and weight constraints. General-purpose Graphics Processing Units (GPUs) are prominent examples of such terascale devices. Unfortunately, the added computational capability of these devices often comes at the cost of larger demands on power, an already strained resource in these systems. This study explores power versus performance issues for a workload that can take advantage of GPU capability and is targeted to run in field-deployable environments, i.e., Synthetic Aperture Radar (SAR). Specifically, we focus on the Image Formation (IF) computational phase of SAR, often the most compute intensive, and evaluate two different state-of-the-art GPU implementations of this IF method. Using real and simulated data sets, we evaluate performance tradeoffs for single- and double-precision versions of these implementations in terms of time-to-solution, image output quality, and total energy consumption. We employ fine-grain direct-measurement techniques to capture isolated power utilization and energy consumption of the GPU device, and use general and radar-specific metrics to evaluate image output quality. We show that double-precision IF can provide slight image improvement to low-reflective areas of SAR images, but note that the added quality may not be worth the higher power and energy costs associated with higher precision operations.

  6. Synergetic Optimization of Missile Shapes for Aerodynamic and Radar Cross-Section Performance Based on Multi- objective Evolutionary Algorithm

    Institute of Scientific and Technical Information of China (English)

    刘洪

    2004-01-01

    A multiple-objective evolutionary algorithm (MOEA) with a new Decision Making (DM) scheme for MOD of conceptual missile shapes was presented, which is contrived to determine suitable tradeoffs from the Pareto optimal set using interactive preference articulation. There are two objective functions: to maximize the lift-to-drag ratio and to minimize the radar cross-section (RCS) value. A 3D computational electromagnetics solver was used to evaluate the RCS (electromagnetic performance), and a 3D Navier-Stokes flow solver was adopted to evaluate the aerodynamic performance. A flight mechanics solver was used to analyze the stability of the missile. Based on the MOEA, a synergetic optimization of missile shapes for aerodynamic and radar cross-section performance is completed. The results show that the proposed approach can be used in more complex optimization cases of flight vehicles.

  7. Design, Performance and Optimization for Multimodal Radar Operation

    Directory of Open Access Journals (Sweden)

    Surendra S. Bhat

    2012-09-01

    Full Text Available This paper describes the underlying methodology behind an adaptive multimodal radar sensor that is capable of progressively optimizing its range resolution depending upon the target scattering features. It consists of a test-bed that enables the generation of linear frequency modulated waveforms of various bandwidths. This paper discusses a theoretical approach to optimizing the bandwidth used by the multimodal radar. It also discusses the various experimental results obtained from measurement. The resolution predicted from theory agrees quite well with that obtained from experiments for different target arrangements.
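
    Since the mode here is a linear frequency modulated waveform whose bandwidth sets the achievable range resolution, a one-line relation worth keeping in mind is delta_R ~ c/(2B). The sketch below tabulates that relation for a few illustrative bandwidths; the values are assumptions, not the test-bed's actual settings.

      c = 3e8  # speed of light (m/s)
      for bandwidth_hz in (10e6, 50e6, 200e6, 600e6):
          delta_r = c / (2 * bandwidth_hz)   # nominal slant-range resolution of an LFM pulse
          print(f"B = {bandwidth_hz/1e6:6.1f} MHz -> range resolution ~ {delta_r:6.3f} m")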

  8. Partially Adaptive Phased Array Fed Cylindrical Reflector Technique for High Performance Synthetic Aperture Radar System

    Science.gov (United States)

    Hussein, Z.; Hilland, J.

    2001-01-01

    Spaceborne microwave radar instruments demand a high-performance antenna with a large aperture to address key science themes such as climate variations and predictions and global water and energy cycles.

  9. Computer Architecture Performance Evaluation Methods

    CERN Document Server

    Eeckhout, Lieven

    2010-01-01

    Performance evaluation is at the foundation of computer architecture research and development. Contemporary microprocessors are so complex that architects cannot design systems based on intuition and simple models only. Adequate performance evaluation methods are absolutely crucial to steer the research and development process in the right direction. However, rigorous performance evaluation is non-trivial as there are multiple aspects to performance evaluation, such as picking workloads, selecting an appropriate modeling or simulation approach, running the model and interpreting the results usi

  10. Cued search algorithm with uncertain detection performance for phased array radars

    Institute of Scientific and Technical Information of China (English)

    Jianbin Lu; Hui Xiao; Zemin Xi; Mingmin Zhang

    2013-01-01

    A cued search algorithm with uncertain detection performance is proposed for phased array radars. First, a target search model based on the information gain criterion is presented for the case of known detection performance, and the statistical characteristic of the detection probability is calculated by using the fluctuation model of the target radar cross section (RCS). Second, when the detection probability is completely unknown, its probability density function is modeled with a beta distribution, and its posterior probability distribution given the radar observations is derived based on Bayesian theory. Finally, simulation results show that the cued search algorithm with a known RCS fluctuation model achieves the best performance, and the algorithm with the detection probability modeled as a beta distribution is better than that with a randomly selected detection probability, because the model parameters can be updated from the radar observations to approach the true value of the detection probability.
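
    The Bayesian update described above is the standard conjugate Beta-Bernoulli recursion: with a Beta(alpha, beta) prior on the unknown detection probability, each detection outcome increments alpha (detect) or beta (miss). The sketch below runs that recursion on simulated outcomes; the true detection probability and prior parameters are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(4)
      true_pd = 0.7                    # unknown detection probability (assumed for simulation)
      alpha, beta = 1.0, 1.0           # Beta(1, 1) = uniform prior

      for look in range(1, 31):
          detected = rng.random() < true_pd          # one cued-look detection outcome
          alpha += detected
          beta += not detected
          if look % 10 == 0:
              mean = alpha / (alpha + beta)
              print(f"after {look:2d} looks: posterior mean Pd = {mean:.3f} "
                    f"(Beta({alpha:.0f}, {beta:.0f}))")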

  11. Study of detection performance of passive bistatic radars based on FM broadcast

    Institute of Scientific and Technical Information of China (English)

    Shan Tao; Tao Ran; Wang Yue; Zhou Siyong

    2007-01-01

    The passive bistatic radar based on FM broadcast has inherent superiority with respect to survivability. In this article, the ambiguity function (AF) and the cross ambiguity function (CAF) of the FM radio signal are analyzed and illustrated. The Kolmogorov-Smirnov (K-S) test verifies that the amplitude probability density function of the CAF sidelobes is exponential; the distribution of the target is also deduced. Finally, the detection performance of the passive radar is studied, and the result shows that this new type of bistatic radar has favorable detection capability.

  12. Impact of frequency and polarization diversity on a terahertz radar's imaging performance

    Science.gov (United States)

    Cooper, Ken B.; Dengler, Robert J.; Llombart, Nuria

    2011-05-01

    The Jet Propulsion Laboratory's 675 GHz, 25 m standoff imaging radar can achieve >1 Hz real-time frame rates over 40x40 cm fields of view for rapid detection of person-borne concealed weapons. In its normal mode of operation, the radar generates imagery based solely on the time-of-flight, or range, between the radar and target. With good clothing penetration at 675 GHz, a hidden object will be detectable as an anomaly in the range-to-surface profile of a subject. Here we report on results of two modifications to the radar system that were made to assess its performance using somewhat different detection approaches. First, the radar's operating frequency and bandwidth were cut in half, to 340 GHz and 13 GHz, where the potential system advantages include superior transmit power and clothing penetration, as well as a lower cost of components. In this case, we found that the twofold reduction in range and cross-range resolution sharply limited the quality of through-clothes imagery, although some improvement is observed for detection of large targets concealed by very thick clothing. The second radar modification tested involved operation in a fully polarimetric mode, where enhanced image contrast might occur between surfaces with different material or geometric characteristics. Results from these tests indicated that random speckle dominates polarimetric power imagery, making it an unattractive approach for contrast improvement. Taken together, the experiments described here underscore the primary importance of high resolution imaging in THz radar applications for concealed weapons detection.

  13. Test and Evaluation of the Airport Surveillance Radar Performance Monitor.

    Science.gov (United States)

    1978-09-01

    During a portion of the data run, with the front-panel calibrate switch set to a nominal or minimum error as indicated, results showed that alarms remained within 0.1 dB of nominal; the remaining alarms appeared to be due to ambient temperature changes occurring in the radar equipment.

  14. High-performance synthetic aperture radar image formation on commodity multicore architectures

    Science.gov (United States)

    McFarlin, Daniel S.; Franchetti, Franz; Püschel, Markus; Moura, José M. F.

    2009-05-01

    Synthetic Aperture Radar (SAR) image processing platforms have to process increasingly large datasets under hard real-time deadlines. Upgrading these platforms is expensive. An attractive solution to this problem is to couple high-performance, general-purpose Commercial-Off-The-Shelf (COTS) architectures such as IBM's Cell BE and Intel's Core with software implementations of SAR algorithms. While this approach provides great flexibility, achieving the requisite performance is difficult and time-consuming. The reason is the highly parallel nature and general complexity of modern COTS microarchitectures. To achieve the best performance, developers have to interweave various complex optimizations including multithreading, the use of SIMD vector extensions, and careful tuning to the memory hierarchy. In this paper, we demonstrate the computer generation of high performance code for SAR implementations on Intel's multicore platforms based on the Spiral framework and system. The key is to express SAR and its building blocks in Spiral's formal domain-specific language to enable automatic vectorization, parallelization, and memory hierarchy tuning through rewriting at a high abstraction level and automatic exploration of choices. We show that Spiral produces code for the latest Intel quadcore platforms that surpasses competing hand-tuned implementations on the Cell Blade, an architecture with twice as many cores and three times the memory bandwidth. Specifically, we show an average performance of 39 Gigaflops/sec for 16-Megapixel and 100-Megapixel SAR images with runtimes of 0.56 and 3.76 seconds respectively.

  15. Detection performance analysis for MIMO radar with distributed apertures in Gaussian colored noise

    Institute of Scientific and Technical Information of China (English)

    GUAN Jian; HUANG Yong

    2009-01-01

    This paper establishes the classic linear signal model of a MIMO radar system with distributed apertures. Based on this model, the design principle and detection performance of MIMO radar detectors are investigated under conditions of Gaussian colored noise and partially correlated observation channels. First, the research on the design principle of the detector shows that clutter suppression and matched filtering can be implemented independently at each receiving aperture, which greatly reduces the difficulty of implementing these detectors. Based on these results, a Max detector is proposed for the case where some channels are disabled due to strong noise and stealth techniques. The second part is the performance analysis of the detector. The Fisher divergence coefficient and the statistical equivalent decomposition of the limit statistics are used to theoretically analyze the detection performance of the AMF detector, and the analytical expressions for the detection performance of the AMF detector are derived. Analysis results show that both the colored nature of the noise and the correlation among observation channels can reduce the spatial diversity capability of the MIMO radar system, change the target RCSs among observation channels from quick fluctuation to slow fluctuation, and degrade the detection performance of this radar system to that of a phased array radar system at high signal-to-noise ratio.
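
    For context on the AMF detector analyzed above, the standard adaptive matched filter test statistic compares |s^H R^-1 x|^2 / (s^H R^-1 s) against a threshold, with R an estimated (colored) noise covariance. The sketch below evaluates that statistic for one aperture on synthetic data; the array size, AR(1)-style covariance, training size and target amplitude are illustrative assumptions, not values from the paper.

      import numpy as np

      rng = np.random.default_rng(5)
      N, K, rho = 8, 64, 0.9                 # elements, training snapshots, noise correlation (assumed)

      # Colored (AR(1)-style) noise covariance and its Cholesky factor for sample generation.
      R_true = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))
      L = np.linalg.cholesky(R_true)

      def colored_noise(m):
          w = (rng.standard_normal((N, m)) + 1j * rng.standard_normal((N, m))) / np.sqrt(2)
          return L @ w

      s = np.exp(1j * np.pi * np.arange(N) * np.sin(np.deg2rad(10.0)))   # steering vector, 10 deg
      Z = colored_noise(K)
      R_hat = Z @ Z.conj().T / K                                          # sample covariance estimate

      def amf_statistic(x):
          """Adaptive matched filter: |s^H R^-1 x|^2 / (s^H R^-1 s)."""
          Ri_x = np.linalg.solve(R_hat, x)
          Ri_s = np.linalg.solve(R_hat, s)
          return np.abs(np.vdot(s, Ri_x)) ** 2 / np.real(np.vdot(s, Ri_s))

      x_h0 = colored_noise(1)[:, 0]           # noise-only snapshot
      x_h1 = x_h0 + 2.0 * s                   # snapshot with a target present
      print(f"AMF statistic: H0 = {amf_statistic(x_h0):.2f}, H1 = {amf_statistic(x_h1):.2f}")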

  16. A parametric study of rate of advance and area coverage rate performance of synthetic aperture radar.

    Energy Technology Data Exchange (ETDEWEB)

    Raynal, Ann Marie [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Hensley, Jr., William H. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Burns, Bryan L. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Doerry, Armin Walter [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2014-11-01

    The linear ground distance per unit time and ground area covered per unit time of producing synthetic aperture radar (SAR) imagery, termed rate of advance (ROA) and area coverage rate (ACR), are important metrics for platform and radar performance in surveillance applications. These metrics depend on many parameters of a SAR system such as wavelength, aircraft velocity, resolution, antenna beamwidth, imaging mode, and geometry. Often the effects of these parameters on rate of advance and area coverage rate are non-linear. This report addresses the impact of different parameter spaces as they relate to rate of advance and area coverage rate performance.
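    To make the two metrics concrete, the sketch below computes them for a simple broadside stripmap case; it does not reproduce the report's parameter couplings (beamwidth, mode, geometry), and the numbers are placeholders.

    # Minimal sketch of the two headline metrics for a continuous stripmap pass.
    def rate_of_advance(ground_speed_mps):
        # Linear ground distance imaged per unit time (m/s).
        return ground_speed_mps

    def area_coverage_rate(ground_speed_mps, swath_width_m):
        # Ground area imaged per unit time (m^2/s) when the swath is covered continuously.
        return ground_speed_mps * swath_width_m

    # Placeholder example: a 100 m/s platform with a 5 km swath covers 0.5 km^2 per second.
    print(area_coverage_rate(100.0, 5000.0))   # 500000.0 m^2/s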

  17. Coherent Performance Analysis of the HJ-1-C Synthetic Aperture Radar

    OpenAIRE

    Li Hai-ying; Zhang Shan-shan; Li Shi-qiang; Zhang Hua-chun

    2014-01-01

    Synthetic Aperture Radar (SAR) is a coherent imaging radar. Hence, coherence is critical in SAR imaging. In a coherent system, several sources can degrade performance. Based on the HJ-1-C SAR system implementation and sensor characteristics, this study evaluates the effect of frequency stability and pulse-to-pulse timing jitter on the SAR coherent performance. A stable crystal oscillator with short-term stability of 1.0×10−10 / 5 ms is used to generate the reference frequency by using a direc...

  18. A Comparative Study on different AI Techniques towards Performance Evaluation in RRM(Radar Resource Management

    Directory of Open Access Journals (Sweden)

    Madhusudhan H S

    2012-08-01

    Full Text Available The multifunction radar (MFR) has to make a decision as to which functions are to be performed first, or which must be degraded or even not done at all, when there are not enough resources to be allocated. The process of making these decisions and determining their allocation as a function of time is known as Radar Resource Management (RRM). The RRM has two basic issues: task prioritization and task scheduling. The task prioritization is an important factor in the task scheduler. The other factor is the required scheduling time, which is decided by the environment, the target scenario and the performance requirements of radar functions. The required scheduling time could be improved by using advanced algorithms [1, 6].
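    A minimal sketch of the prioritization-plus-scheduling split described above is given below; the priority values, task names, and greedy policy are illustrative assumptions, not any of the AI techniques compared in the paper.

    # Minimal greedy sketch: pick the most urgent radar tasks that fit a dwell budget.
    import heapq

    def schedule(tasks, dwell_budget_ms):
        # tasks: (priority, dwell_ms, name) tuples; lower priority value = more urgent.
        heap = list(tasks)
        heapq.heapify(heap)
        chosen, used = [], 0.0
        while heap:
            priority, dwell, name = heapq.heappop(heap)
            if used + dwell <= dwell_budget_ms:
                chosen.append(name)
                used += dwell
        return chosen                         # tasks left out are degraded or dropped

    tasks = [(0, 4.0, "missile-track"), (1, 2.5, "horizon-search"), (2, 6.0, "volume-search")]
    print(schedule(tasks, dwell_budget_ms=8.0))   # ['missile-track', 'horizon-search']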

  19. Radar equations for modern radar

    CERN Document Server

    Barton, David K

    2012-01-01

    Based on the classic Radar Range-Performance Analysis from 1980, this practical volume extends that work to ensure applicability of radar equations to the design and analysis of modern radars. This unique book helps you identify what information on the radar and its environment is needed to predict detection range. Moreover, it provides equations and data to improve the accuracy of range calculations. You find detailed information on propagation effects, methods of range calculation in environments that include clutter, jamming and thermal noise, as well as loss factors that reduce radar perfo
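    As a reminder of the kind of calculation the book supports, the sketch below evaluates the classical radar range equation; the parameter values are placeholders and the single lumped loss factor is an assumption, not the book's refined treatment of propagation, clutter, and jamming.

    # Minimal sketch of the classical radar range equation:
    # R_max = [Pt G^2 lambda^2 sigma / ((4 pi)^3 S_min L)]^(1/4)
    import math

    def max_detection_range(pt_w, gain, wavelength_m, rcs_m2, smin_w, losses=1.0):
        num = pt_w * gain**2 * wavelength_m**2 * rcs_m2
        den = (4 * math.pi) ** 3 * smin_w * losses
        return (num / den) ** 0.25

    # Placeholder values: 100 kW peak, 35 dB gain, 10 cm wavelength, 1 m^2 target,
    # -110 dBm minimum detectable signal, 6 dB total losses.
    r = max_detection_range(1e5, 10**3.5, 0.10, 1.0, 1e-14, losses=10**0.6)
    print(f"{r / 1e3:.0f} km")   # roughly 106 km for these numbers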

  20. Performance Estimates of the Pseudo-Random Method for Radar Detection

    OpenAIRE

    2014-01-01

    A performance of the pseudo-random method for the radar detection is analyzed. The radar sends a pseudo-random sequence of length $N$, and receives echo from $r$ targets. We assume the natural assumptions of uniformity on the channel and of the square root cancellation on the noise. Then for $r \\leq N^{1-\\delta}$, where $\\delta > 0$, the following holds: (i) the probability of detection goes to one, and (ii) the expected number of false targets goes to zero, as $N$ goes to infinity.
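    In the spirit of the result above, the sketch below simulates the basic mechanism: transmit a random ±1 sequence of length N, receive delayed echoes in noise, and declare targets wherever the correlation exceeds a threshold. The delays, amplitudes, and the 4-sigma threshold are illustrative assumptions, not the paper's analysis.

    # Minimal simulation sketch: correlation detection with a random +/-1 code.
    import numpy as np

    rng = np.random.default_rng(7)
    N = 4096
    code = rng.choice([-1.0, 1.0], size=N)                    # pseudo-random transmit sequence
    true_delays = [50, 300, 1200]

    rx = rng.standard_normal(N)                               # receiver noise
    for d in true_delays:
        rx += np.roll(code, d)                                # unit-amplitude echoes (circular delays)

    # Correlate against all N circular shifts via FFT and normalize by sqrt(N).
    corr = np.fft.ifft(np.fft.fft(rx) * np.conj(np.fft.fft(code))).real / np.sqrt(N)
    detections = np.flatnonzero(corr > 4.0 * np.std(corr))    # assumed ~4-sigma threshold
    print(sorted(detections.tolist()))                        # peaks at the true delays 50, 300, 1200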

  1. Intelligent Motion Compensation for Improving the Tracking Performance of Shipborne Phased Array Radar

    Directory of Open Access Journals (Sweden)

    J. Mar

    2013-01-01

    Full Text Available The shipborne phased array radar must be able to compensate for the ship's motion and track maneuvering targets automatically. In this paper, the real-time beam pointing error compensation mechanism of a planar array antenna for the ship's motion is designed to combine with the Kalman filtering. The effect of beam pointing error on the tracking performance of shipborne phased array radar is examined. A compensation mechanism, which can automatically correct the beam pointing error of the planar antenna array, is proposed for shipborne phased array radar in order to achieve the required tracking accuracy over the long dwell time. The automatic beam pointing error compensation mechanism employs the parallel fuzzy basis function network (FBFN) architecture to estimate the beam pointing error caused by roll and pitch of the ship. In the simulation, the models of roll and pitch are used to evaluate the performance of the beam pointing error estimation mechanism based on the proposed parallel FBFN architecture. In addition, the effect of the automatic beam pointing error compensation mechanism on the tracking performance of the adaptive extended Kalman filter (AEKF) implemented in shipborne phased array radar is also investigated. Simulations show that the proposed algorithms are stable and accurate.

  2. Interference Suppression Performance of Automotive UWB Radars Using Pseudo Random Sequences

    Directory of Open Access Journals (Sweden)

    I. Pasya

    2015-12-01

    Full Text Available Ultra wideband (UWB) automotive radars have attracted attention from the viewpoint of reducing traffic accidents. The performance of automotive radars may be degraded by interference from nearby radars using the same frequency. In this study, a scenario where two cars pass each other on a road was considered. Considering the utilization of cross-polarization, the desired-to-undesired signal power ratio (DUR) was found to vary approximately from -10 to 30 dB. Different pseudo random sequences were employed for spectrum spreading of the different radar signals to mitigate the interference effects. This paper evaluates the interference suppression provided by the maximum length sequence (MLS) and the Gold sequence (GS) through numerical simulations of the radar's performance in terms of probability of false alarm and probability of detection. It was found that MLS and GS yielded nearly the same performance when the DUR is -10 dB (worst case); for example, when fixing the probability of false alarm to 0.0001, the probabilities of detection were 0.964 and 0.946, respectively. GS is more advantageous than MLS because a larger number of different sequences of the same length is available with GS than with MLS.
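    The sketch below illustrates the two code families compared above: an m-sequence generated with a linear-feedback shift register and a Gold sequence formed from a preferred pair of m-sequences. The degree-5 feedback taps (x^5+x^2+1 and x^5+x^4+x^3+x^2+1) are a commonly quoted preferred pair for length-31 codes and, like the code length itself, are an assumption of this illustration rather than the codes used in the paper.

    # Minimal sketch: MLS via LFSR, Gold sequence via product of a preferred pair.
    import numpy as np

    def lfsr_mls(taps, nbits):
        # Fibonacci LFSR; returns a +/-1 m-sequence of length 2**nbits - 1.
        state = [1] * nbits
        out = []
        for _ in range(2 ** nbits - 1):
            out.append(state[-1])
            fb = 0
            for t in taps:
                fb ^= state[t - 1]
            state = [fb] + state[:-1]
        return 1.0 - 2.0 * np.array(out)                      # map {0,1} -> {+1,-1}

    m1 = lfsr_mls([5, 2], 5)                                  # x^5 + x^2 + 1
    m2 = lfsr_mls([5, 4, 3, 2], 5)                            # x^5 + x^4 + x^3 + x^2 + 1
    gold = m1 * m2                                            # XOR in {0,1} == product in {+1,-1}

    def peak_xcorr(a, b):
        # Largest circular cross-correlation magnitude between two codes.
        return np.max(np.abs(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))).real))

    # Autocorrelation peak versus bounded cross-correlation peaks.
    print(peak_xcorr(m1, m1), peak_xcorr(m1, m2), peak_xcorr(m1, gold))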

  3. Study of high-altitude radar altimeter model accuracy and SITAN performance using HAAFT data

    Energy Technology Data Exchange (ETDEWEB)

    Shieves, T.C.; Callahan, M.W.

    1979-07-01

    Radar altimetry data, inertial navigation data, and scoring data were collected under the HAAFT program by Martin Marietta Corporation for the United States Air Force over several areas in the western United States at altitudes ranging from 3 to 20 km. The study reported here uses the HAAFT data in conjunction with Defense Mapping Agency (DMA) topographic data to evaluate the accuracy of a high-altitude pulsed-radar altimeter model and the resulting performance of the terrain-aided guidance concept SITAN. Previous SITAN flight tests at low altitudes (less than 1500 m AGL) have demonstrated 6-20 m CEP. The high-altitude flight test data analyzed herein show a SITAN CEP of 120 m. The radar altimeter model required to achieve this performance includes the effects of the internal track loop, AGC loop, antenna beamwidth, and the terrain radar cross section, and provided a factor of 6 improvement over simple nadir ground clearance for rough terrain. It is postulated that high-altitude CEP could be reduced to 50 m or less if an altimeter were designed specifically for high-altitude terrain sensing.

  4. Efficient Backprojection-Based Synthetic Aperture Radar Computation with Many-Core Processors

    Directory of Open Access Journals (Sweden)

    Jongsoo Park

    2013-01-01

    Full Text Available Tackling computationally challenging problems with high efficiency often requires the combination of algorithmic innovation, advanced architecture, and thorough exploitation of parallelism. We demonstrate this synergy through synthetic aperture radar (SAR via backprojection, an image reconstruction method that can require hundreds of TFLOPS. Computation cost is significantly reduced by our new algorithm of approximate strength reduction; data movement cost is economized by software locality optimizations facilitated by advanced architecture support; parallelism is fully harnessed in various patterns and granularities. We deliver over 35 billion backprojections per second throughput per compute node on an Intel® Xeon® processor E5-2670-based cluster, equipped with Intel® Xeon Phi™ coprocessors. This corresponds to processing a 3K×3K image within a second using a single node. Our study can be extended to other settings: backprojection is applicable elsewhere including medical imaging, approximate strength reduction is a general code transformation technique, and many-core processors are emerging as a solution to energy-efficient computing.
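    To make the kernel concrete, here is a deliberately plain (unoptimized) backprojection sketch of the operation the paper accelerates; the flat-ground geometry, nearest-neighbor range interpolation, and variable names are simplifying assumptions, not the paper's implementation.

    # Minimal, unoptimized backprojection sketch; illustrative geometry only.
    import numpy as np

    def backproject(rc_data, plat_pos, pixels, fc, fs, c=3e8):
        # rc_data : (num_pulses, num_bins) complex range-compressed pulses
        # plat_pos: (num_pulses, 3) platform position per pulse
        # pixels  : (num_pixels, 3) ground pixel positions
        image = np.zeros(pixels.shape[0], dtype=complex)
        dr = c / (2.0 * fs)                                    # range-bin spacing
        for p in range(rc_data.shape[0]):
            r = np.linalg.norm(pixels - plat_pos[p], axis=1)   # pulse-to-pixel ranges
            bins = np.clip(np.round(r / dr).astype(int), 0, rc_data.shape[1] - 1)
            phase = np.exp(4j * np.pi * fc * r / c)            # remove two-way carrier phase
            image += rc_data[p, bins] * phase                  # nearest-neighbor accumulation
        return image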

  5. Direction synthesis in DOA estimation for monostatic multiple input multiple output (MIMO) radar based on synthetic impulse and aperture radar (SIAR) and its performance analysis

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    A new direction synthesis method for monostatic multiple input multiple output (MIMO) radar is presented based on the synthetic impulse and aperture radar (SIAR) system. Concerned with the monostatic MIMO radar which simultaneously emits orthogonal signals with multiple carrier frequencies and possesses sparsely distributed transmitting and receiving arrays at their respective locations, as well as the presence of multipath propagation in the low-flying target's echo, the method integrates the aperture of the transmitting arrays with the receiving arrays to form the digital beam-forming (DBF) in the azimuth and elevation dimensions. A study has also been made of a planar general MUSIC algorithm based on decorrelating the multipath signals of the multi-carrier-frequency MIMO radar. Through compensating the phase delay of both the transmitting and the receiving arrays and synthesizing the transmitting beam in two dimensions at the receiver, the angular resolution and measurement accuracy are improved and the computational complexity is reduced after transforming the three-dimensional (3D) parameter estimation problem into a two-dimensional (2D) one. Finally, the Cramer-Rao Bounds (CRBs) of DOA estimation for azimuth and elevation are put forward in the presence of multipath propagation. Results of computer simulation demonstrate the validity of the new method.

  6. Direction synthesis in DOA estimation for monostatic multiple input multiple output(MIMO) radar based on synthetic impulse and aperture radar (SIAR) and its performance analysis

    Institute of Scientific and Technical Information of China (English)

    ZHAO GuangHui; CHEN BaiXiao; ZHU ShouPing

    2008-01-01

    A new direction synthesis method for monostatic multiple input multiple output (MIMO) radar is presented based on the synthetic impulse and aperture radar (SIAR) system. Concerned with the monostatic MIMO radar which simultaneously emits orthogonal signals with multiple carrier frequencies and possesses sparsely distributed transmitting and receiving arrays at their respective locations, as well as the presence of multipath propagation in the low-flying target's echo, the method integrates the aperture of the transmitting arrays with the receiving arrays to form the digital beam-forming (DBF) in the azimuth and elevation dimensions. A study has also been made of a planar general MUSIC algorithm based on decorrelating the multipath signals of the multi-carrier-frequency MIMO radar. Through compensating the phase delay of both the transmitting and the receiving arrays and synthesizing the transmitting beam in two dimensions at the receiver, the angular resolution and measurement accuracy are improved and the computational complexity is reduced after transforming the three-dimensional (3D) parameter estimation problem into a two-dimensional (2D) one. Finally, the Cramer-Rao Bounds (CRBs) of DOA estimation for azimuth and elevation are put forward in the presence of multipath propagation. Results of computer simulation demonstrate the validity of the new method.
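    For orientation, the sketch below runs plain 1-D MUSIC on a uniform linear array; the planar generalized MUSIC with multipath decorrelation and transmit/receive aperture synthesis described above is not reproduced, and every parameter here is illustrative.

    # Minimal 1-D MUSIC sketch for a half-wavelength uniform linear array.
    import numpy as np

    def steering(n_elem, theta_deg, d=0.5):
        return np.exp(2j * np.pi * d * np.sin(np.deg2rad(theta_deg)) * np.arange(n_elem))

    def music_spectrum(snapshots, n_sources, angles_deg):
        R = snapshots @ snapshots.conj().T / snapshots.shape[1]
        _, eigvec = np.linalg.eigh(R)                          # eigenvalues in ascending order
        En = eigvec[:, : snapshots.shape[0] - n_sources]       # noise subspace
        spec = []
        for theta in angles_deg:
            proj = En.conj().T @ steering(snapshots.shape[0], theta)
            spec.append(1.0 / np.real(np.vdot(proj, proj)))
        return np.array(spec)

    # Toy scene: two sources at -20 and +35 degrees seen by an 8-element array.
    rng = np.random.default_rng(3)
    n_elem, n_snap = 8, 200
    A = np.column_stack([steering(n_elem, -20.0), steering(n_elem, 35.0)])
    S = rng.standard_normal((2, n_snap)) + 1j * rng.standard_normal((2, n_snap))
    X = A @ S + 0.1 * (rng.standard_normal((n_elem, n_snap)) + 1j * rng.standard_normal((n_elem, n_snap)))
    angles = np.arange(-90.0, 90.0, 0.5)
    spectrum = music_spectrum(X, 2, angles)                    # sharp peaks near -20 and +35 degrees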

  7. Performance limits for exo-clutter Ground Moving Target Indicator (GMTI) radar.

    Energy Technology Data Exchange (ETDEWEB)

    Doerry, Armin Walter

    2010-09-01

    The performance of a Ground Moving Target Indicator (GMTI) radar system depends on a variety of factors, many of which are interdependent in some manner. It is often difficult to 'get your arms around' the problem of ascertaining achievable performance limits, and yet those limits exist and are dictated by physics. This report identifies and explores those limits, and how they depend on hardware system parameters and environmental conditions. Ultimately, this leads to a characterization of parameters that offer optimum performance for the overall GMTI radar system. While the information herein is not new to the literature, its collection into a single report hopes to offer some value in reducing the 'seek time'.

  8. Performance of the Dual-frequency Precipitation Radar on the GPM core satellite

    Science.gov (United States)

    Iguchi, Toshio; Seto, Shinta; Awaka, Jun; Meneghini, Robert; Kubota, Takuji; Oki, Riko; Chandra, Venkatchalam; Kawamoto, Nozomi

    2016-04-01

    The GPM core satellite was launched on February 28, 2014. This paper describes some of the results of precipitation measurements with the Dual-Frequency Precipitation Radar (DPR) on the GPM core satellite. The DPR, which was developed by the Japan Aerospace Exploration Agency (JAXA) and the National Institute of Information and Communications Technology (NICT), consists of two radars: the Ku-band precipitation radar (KuPR) and the Ka-band radar (KaPR). The performance of the DPR is evaluated by comparing the level 2 products with the corresponding TRMM/PR data and surface rain measurements. The scanning geometry and footprint size of KuPR and those of PR are nearly identical. The major differences between them are the sensitivity, visiting frequency, and the rain retrieval algorithm. KuPR's sensitivity is twice as good as that of PR. The increase in sensitivity reduces the cases of missed light rain. Since relatively light rain prevails in Japan, the difference in sensitivity may cause a bias of a few percentage points. Comparisons of the rain estimates by GPM/DPR with AMeDAS rain gauge data over Japan show that annual KuPR estimates over Japan agree quite well with the rain gauge estimates, although the monthly or local statistics of these two kinds of data scatter substantially. KuPR's estimates are closer to the gauge estimates than those of TRMM/PR. Possible sources of the differences, which include sampling errors, sensitivity, and the algorithm, are examined.

  9. Principles of modern radar systems

    CERN Document Server

    Carpentier, Michel H

    1988-01-01

    Introduction to random functions; signal and noise: the ideal receiver; performance of radar systems equipped with ideal receivers; analysis of the operating principles of some types of radar; behavior of real targets, fluctuation of targets; angle measurement using radar; data processing of radar information, radar coverage; application of electronic scanning antennas to radar; introduction to Hilbert spaces.

  10. Validation of vertical refractivity profiles as required for performance prediction of coastal surveillance radars

    CSIR Research Space (South Africa)

    Naicker, K

    2011-04-01

    Full Text Available ... for modeling the detection performance of coastal surveillance radars. Validation is provided through meteorological and radio wave propagation measurements undertaken in False Bay, South Africa. Keywords: vertical refractive profiles, evaporation ducts. ... An atmospheric duct is a horizontal layer in the troposphere where the refractive conditions are such that an EM wave will be channeled or guided, as discussed above. This results in EM wave propagation over great ranges. In a littoral environment, evaporation...

  11. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware, in the form of Field Programmable Gate Arrays (FPGAs), in High Performance Computing (HPC) applications. It presents the latest developments in this field from the applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g. computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation; seven architecture chapters which...

  12. Coherent Performance Analysis of the HJ-1-C Synthetic Aperture Radar

    Directory of Open Access Journals (Sweden)

    Li Hai-ying

    2014-06-01

    Full Text Available Synthetic Aperture Radar (SAR) is a coherent imaging radar. Hence, coherence is critical in SAR imaging. In a coherent system, several sources can degrade performance. Based on the HJ-1-C SAR system implementation and sensor characteristics, this study evaluates the effect of frequency stability and pulse-to-pulse timing jitter on the SAR coherent performance. A stable crystal oscillator with short-term stability of 1.0×10−10 / 5 ms is used to generate the reference frequency by using a direct multiplier and divider. Azimuth ISLR degradation owing to the crystal oscillator phase noise is negligible. The standard deviation of the pulse-to-pulse timing jitter of HJ-1-C SAR is lower than 2 ns (rms), and the azimuth random phase error in the synthetic aperture time slightly degrades the side lobe of the azimuth impulse response. The mathematical expressions and simulation results are presented and suggest that the coherent performance of the HJ-1-C SAR system meets the requirements of synthetic aperture radar imaging.

  13. High-performance scientific computing

    CERN Document Server

    Berry, Michael W; Gallopoulos, Efstratios

    2012-01-01

    This book presents the state of the art in parallel numerical algorithms, applications, architectures, and system software. The book examines various solutions for issues of concurrency, scale, energy efficiency, and programmability, which are discussed in the context of a diverse range of applications. Features: includes contributions from an international selection of world-class authorities; examines parallel algorithm-architecture interaction through issues of computational capacity-based codesign and automatic restructuring of programs using compilation techniques; reviews emerging applic

  14. Ablation and radar-wave transmission performances of the nitride ceramic matrix composites

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The 2.5 dimensional silica fiber reinforced nitride matrix composites (2.5D SiO2f/Si3N4-BN) were prepared through the preceramic polymer impregnation pyrolysis (PIP) method. The ablation and radar-wave transparent performances of the composite at high temperature were evaluated under an arc jet. The composition and ablation surface microstructures were studied by X-ray diffraction (XRD) and scanning electron microscopy (SEM). The results show that the 2.5D SiO2f/Si3N4-BN composites have a linear ablation rate of 0.33 mm/s and a high radar-wave transparency of 98.6%. The fused layer and the matrix protect each other, and no fused layer accumulates on the ablation surface. The nitride composite is a high-temperature ablation-resistant and microwave-transparent material.

  15. Ablation and radar-wave transmission performances of the nitride ceramic matrix composites

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The 2.5 dimensional silica fiber reinforced nitride matrix composites (2.5D SiO2f/Si3N4-BN) were prepared through the preceramic polymer impregnation pyrolysis (PIP) method. The ablation and radar-wave transparent performances of the composite at high temperature were evaluated under an arc jet. The composition and ablation surface microstructures were studied by X-ray diffraction (XRD) and scanning electron microscopy (SEM). The results show that the 2.5D SiO2f/Si3N4-BN composites have a linear ablation rate of 0.33 mm/s and a high radar-wave transparency of 98.6%. The fused layer and the matrix protect each other, and no fused layer accumulates on the ablation surface. The nitride composite is a high-temperature ablation-resistant and microwave-transparent material.

  16. Introduction to High Performance Scientific Computing

    OpenAIRE

    2016-01-01

    The field of high performance scientific computing lies at the crossroads of a number of disciplines and skill sets, and correspondingly, being successful at using high performance computing in science requires at least elementary knowledge of, and skills in, all these areas. Computations stem from an application context, so some acquaintance with physics and engineering sciences is desirable. Then, problems in these application areas are typically translated into linear algebraic, ...

  17. China's High Performance Computer Standard Commission Established

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    China's High Performance Computer Standard Commission was established on March 28, 2007, under the guidance of the Science and Technology Bureau of the Ministry of Information Industry. It will prepare relevant professional standards on high performance computers to break through the monopoly in the field by foreign manufacturers and vendors.

  18. A Weather Radar Simulator for the Evaluation of Polarimetric Phased Array Performance

    Energy Technology Data Exchange (ETDEWEB)

    Byrd, Andrew D.; Ivic, Igor R.; Palmer, Robert D.; Isom, Bradley M.; Cheong, Boon Leng; Schenkman, Alexander D.; Xue, Ming

    2016-07-01

    A radar simulator capable of generating time series data for a polarimetric phased array weather radar has been designed and implemented. The received signals are composed from a high-resolution numerical prediction weather model. Thousands of scattering centers, each with an independent randomly generated Doppler spectrum, populate the field of view of the radar. The moments of the scattering center spectra are derived from the numerical weather model, and the scattering center positions are updated based on the three-dimensional wind field. In order to accurately emulate the effects of the system-induced cross-polar contamination, the array is modeled using a complete set of dual-polarization radiation patterns. The simulator offers reconfigurable element patterns and positions as well as access to independent time series data for each element, resulting in easy implementation of any beamforming method. It also allows for arbitrary waveform designs and is able to model the effects of quantization on waveform performance. Simultaneous, alternating, quasi-simultaneous, and pulse-to-pulse phase coded modes of polarimetric signal transmission have been implemented. This framework allows for realistic emulation of the effects of cross-polar fields on weather observations, as well as the evaluation of possible techniques for the mitigation of those effects.
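    In the same spirit, though far simpler than the simulator described above, the sketch below builds the IQ time series of a single resolution volume by summing scattering centers whose Doppler velocities are drawn from assumed spectral moments; the wavelength, PRT, and moment values are placeholders rather than values from a numerical weather model.

    # Minimal sketch: sum scattering centers with random phases and Doppler
    # velocities, then check the pulse-pair velocity estimate; values are placeholders.
    import numpy as np

    rng = np.random.default_rng(42)
    wavelength, prt = 0.10, 1e-3                  # wavelength (m), pulse repetition time (s)
    n_scat, n_pulses = 2000, 64
    mean_v, spectrum_width = 8.0, 2.0             # moments a weather model would supply (m/s)

    v = rng.normal(mean_v, spectrum_width, n_scat)            # radial velocity per scatterer
    amp = rng.rayleigh(1.0, n_scat)                            # amplitude per scatterer
    phase0 = rng.uniform(0.0, 2.0 * np.pi, n_scat)             # independent initial phases

    t = np.arange(n_pulses) * prt
    iq = (amp[:, None] * np.exp(1j * (phase0[:, None] - 4.0 * np.pi * v[:, None] * t / wavelength))).sum(axis=0)

    r1 = np.mean(iq[1:] * np.conj(iq[:-1]))                    # lag-1 autocorrelation
    v_est = -wavelength * np.angle(r1) / (4.0 * np.pi * prt)   # pulse-pair estimate, close to mean_v
    print(v_est)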

  19. US QCD computational performance studies with PERI

    Science.gov (United States)

    Zhang, Y.; Fowler, R.; Huck, K.; Malony, A.; Porterfield, A.; Reed, D.; Shende, S.; Taylor, V.; Wu, X.

    2007-07-01

    We report on some of the interactions between two SciDAC projects: The National Computational Infrastructure for Lattice Gauge Theory (USQCD), and the Performance Engineering Research Institute (PERI). Many modern scientific programs consistently report the need for faster computational resources to maintain global competitiveness. However, as the size and complexity of emerging high end computing (HEC) systems continue to rise, achieving good performance on such systems is becoming ever more challenging. In order to take full advantage of the resources, it is crucial to understand the characteristics of relevant scientific applications and the systems these applications are running on. Using tools developed under PERI and by other performance measurement researchers, we studied the performance of two applications, MILC and Chroma, on several high performance computing systems at DOE laboratories. In the case of Chroma, we discuss how the use of C++ and modern software engineering and programming methods are driving the evolution of performance tools.

  20. US QCD computational performance studies with PERI

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Y [Renaissance Computing Institute, Chapel Hill NC (United States); Fowler, R [Renaissance Computing Institute, Chapel Hill NC (United States); Huck, K [University of Oregon, Eugene OR (United States); Malony, A [University of Oregon, Eugene OR (United States); Porterfield, A [Renaissance Computing Institute, Chapel Hill NC (United States); Reed, D [Renaissance Computing Institute, Chapel Hill NC (United States); Shende, S [University of Oregon, Eugene OR (United States); Taylor, V [Texas A and M University, College Station TX (United States); Wu, X [Texas A and M University, College Station TX (United States)

    2007-07-15

    We report on some of the interactions between two SciDAC projects: The National Computational Infrastructure for Lattice Gauge Theory (USQCD), and the Performance Engineering Research Institute (PERI). Many modern scientific programs consistently report the need for faster computational resources to maintain global competitiveness. However, as the size and complexity of emerging high end computing (HEC) systems continue to rise, achieving good performance on such systems is becoming ever more challenging. In order to take full advantage of the resources, it is crucial to understand the characteristics of relevant scientific applications and the systems these applications are running on. Using tools developed under PERI and by other performance measurement researchers, we studied the performance of two applications, MILC and Chroma, on several high performance computing systems at DOE laboratories. In the case of Chroma, we discuss how the use of C++ and modern software engineering and programming methods are driving the evolution of performance tools.

  1. A Management System for Computer Performance Evaluation.

    Science.gov (United States)

    1981-12-01

    ... large unused capacity indicates a potential cost-performance improvement (i.e., the potential to perform more within current costs or to reduce costs) ... necessary to bring the performance of the computer system in line with operational goals (Ref. 18:7). The General Accounting Office estimates that the ... tasks in attempting to improve the efficiency and effectiveness of their computer systems. Cost began to play an important role in the life of a ...

  2. Dawning4000A high performance computer

    Institute of Scientific and Technical Information of China (English)

    SUN Ninghui; MENG Dan

    2007-01-01

    Dawning4000A is an AMD Opteron-based Linux cluster with 11.2 Tflops peak performance and 8.06 Tflops Linpack performance. It was developed for the Shanghai Supercomputer Center (SSC) as one of the computing power stations of the China National Grid (CNGrid) project. The Massively Cluster Computer (MCC) architecture is proposed to add value to the industry-standard system. Several grid-enabling components were developed to support the running environment of the CNGrid. It is an achievement for a high performance computer built with a low-cost approach.

  3. An integrated radar model solution for mission level performance and cost trades

    Science.gov (United States)

    Hodge, John; Duncan, Kerron; Zimmerman, Madeline; Drupp, Rob; Manno, Mike; Barrett, Donald; Smith, Amelia

    2017-05-01

    A fully integrated Mission-Level Radar model is in development as part of a multi-year effort under the Northrop Grumman Mission Systems (NGMS) sector's Model Based Engineering (MBE) initiative to digitally interconnect and unify previously separate performance and cost models. In 2016, an NGMS internal research and development (IR and D) funded multidisciplinary team integrated radio frequency (RF), power, control, size, weight, thermal, and cost models together using a commercial-off-the-shelf software, ModelCenter, for an Active Electronically Scanned Array (AESA) radar system. Each represented model was digitally connected with standard interfaces and unified to allow end-to-end mission system optimization and trade studies. The radar model was then linked to the Air Force's own mission modeling framework (AFSIM). The team first had to identify the necessary models, and with the aid of subject matter experts (SMEs) understand and document the inputs, outputs, and behaviors of the component models. This agile development process and collaboration enabled rapid integration of disparate models and the validation of their combined system performance. This MBE framework will allow NGMS to design systems more efficiently and affordably, optimize architectures, and provide increased value to the customer. The model integrates detailed component models that validate cost and performance at the physics level with high-level models that provide visualization of a platform mission. This connectivity of component to mission models allows hardware and software design solutions to be better optimized to meet mission needs, creating cost-optimal solutions for the customer, while reducing design cycle time through risk mitigation and early validation of design decisions.

  4. Debugging a high performance computing program

    Science.gov (United States)

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  5. POLARIS: ESA's airborne ice sounding radar front-end design, performance assessment and first results

    DEFF Research Database (Denmark)

    Hernández, Carlos Cilla; Krozer, Viktor; Vidkjær, Jens;

    2009-01-01

    This paper addresses the design, implementation and experimental performance assessment of the RF front-end of an airborne P-band ice sounding radar. The ice sounder design comprises commercial-off-the-shelf modules and newly purpose-built components at a centre frequency of 435 MHz with 20% relative bandwidth. The transmitter uses two amplifiers combined in parallel to generate more than 128 W peak power, with >60% system PAE and 47 dB in-band to out-of-band signal ratio. The four-channel receiver features digitally controlled variable gain to achieve more than 100 dB dynamic range, 2.4 d...

  6. Surveillance Radar Design Options as a Function of Cataloguing Performance Requirements

    Science.gov (United States)

    Krag, H.; Klinkrad, H.

    2009-03-01

    Europe is preparing for the development of an autonomous space surveillance and situational awareness system. First concept and capability analysis studies have led to a draft proposal for the surveillance and tracking part of the system. This foresees, in a first deployment step, ground-based surveillance and tracking radar systems, a network of optical telescopes and a data centre. In a second step the system is planned to be extended by adding space-based assets and the associated ground-segment. The terrestrial part of the system will be responsible for the build-up and maintenance of a catalogue of space objects. Studies showed that one large phased array radar alone could act as the single means for the generation of a catalogue of LEO objects (apogee altitudes radar search window for a minimum time span to enable orbit determination of sufficient accuracy. Catalogue maintenance requires objects to be re-observable after limited time spans so that they can be clearly correlated. Today, the user requirements on the performance of the system are under definition. Different options to specify the desired system performance in terms of the resulting object catalogue have been proposed. One of them is to specify a certain coverage level of the existing NORAD catalogue. A second one is to request full coverage of all objects above a certain diameter threshold. Both approaches have certain advantages (e.g. the first one being verifiable by tests and the second one leading to an unbiased/independent catalogue). However, these requirements might impose different system designs in terms of the sensor location, and the dimensions and the orientation of the search field. This paper outlines the consequences of the cataloguing performance requirements on the high-level system design. Simulation tools are used to investigate key parameters such as the optimum radar wavelength, the viewing direction and search field dimensions as a function of these specifications. First

  7. Cloud Computing for Complex Performance Codes.

    Energy Technology Data Exchange (ETDEWEB)

    Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Klein, Brandon Thorin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Miner, John Gifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 was to demonstrate that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.

  8. PERFORMANCE EVALUATION OF THE RADAR DETECTION WITH VARIOUS METHODS INFORMATION PROCESSING INTERFERENCE

    Directory of Open Access Journals (Sweden)

    V. E. Emelyanov

    2015-01-01

    Full Text Available A model is presented for evaluating the characteristics of radar detection when exposed to impulse noise, together with an algorithm for determining acceptable electromagnetic environment conditions in the presence of unintentional electromagnetic interference for ATC radars.

  9. An Adaptive Middleware for Improved Computational Performance

    DEFF Research Database (Denmark)

    Bonnichsen, Lars Frydendal

    The performance improvements in computer systems over the past 60 years have been fueled by an exponential increase in energy efficiency. In recent years, the phenomenon known as the end of Dennard's scaling has slowed energy efficiency improvements, but improving computer energy efficiency is more important now than ever. Traditionally, most improvements in computer energy efficiency have come from improvements in lithography (the ability to produce smaller transistors) and computer architecture (the ability to apply those transistors efficiently). Since the end of scaling, we have seen ... we are improving computational performance by exploiting modern hardware features, such as dynamic voltage-frequency scaling and transactional memory. Adapting software is an iterative process, requiring that we continually revisit it to meet new requirements or realities; a time-consuming process ...

  10. Numeric Computation of the Radar Cross Section of In flight Projectiles

    Science.gov (United States)

    2016-11-01

    Figure-caption fragments recovered from the report: the motion of a spinning ballistic projectile in the mobile i-j-k frame and that of a spinning top in the fixed x-y-z ground frame; the pitch and yaw ...; when no spin, pitch, and yaw motions are accounted for, these pictures describe the radar-projectile relative orientation in the AFDTD radar ...; Fig. 19, dynamic RCS vs. time curves obtained for the ...

  11. High-performance Scientific Computing using Parallel Computing to Improve Performance Optimization Problems

    Directory of Open Access Journals (Sweden)

    Florica Novăcescu

    2011-10-01

    Full Text Available HPC (High Performance Computing) has become essential for accelerating innovation and for assisting companies in creating new inventions, better models and more reliable products, as well as obtaining processes and services at low cost. The information in this paper focuses particularly on describing the field of high performance scientific computing, parallel computing, scientific computing, parallel computers, and trends in the HPC field; the material presented here reveals important new directions toward the realization of a high performance computational society. The practical part of the work is an example of using the HPC tool to accelerate the solution of an electrostatic optimization problem with the Parallel Computing Toolbox, which allows solving computational and data-intensive problems using MATLAB and Simulink on multicore and multiprocessor computers.

  12. Deep Stochastic Radar Models

    OpenAIRE

    Wheeler, Tim Allan; Holder, Martin; Winner, Hermann; Kochenderfer, Mykel

    2017-01-01

    Accurate simulation and validation of advanced driver assistance systems requires accurate sensor models. Modeling automotive radar is complicated by effects such as multipath reflections, interference, reflective surfaces, discrete cells, and attenuation. Detailed radar simulations based on physical principles exist but are computationally intractable for realistic automotive scenes. This paper describes a methodology for the construction of stochastic automotive radar models based on deep l...

  13. Function Follows Performance in Evolutionary Computational Processing

    DEFF Research Database (Denmark)

    Pasold, Anke; Foged, Isak Worre

    2011-01-01

    As the title ‘Function Follows Performance in Evolutionary Computational Processing’ suggests, this paper explores the potentials of employing multiple design and evaluation criteria within one processing model in order to account for a number of performative parameters desired within varied...

  14. Function Follows Performance in Evolutionary Computational Processing

    DEFF Research Database (Denmark)

    Pasold, Anke; Foged, Isak Worre

    2011-01-01

    As the title ‘Function Follows Performance in Evolutionary Computational Processing’ suggests, this paper explores the potentials of employing multiple design and evaluation criteria within one processing model in order to account for a number of performative parameters desired within varied architectural projects. At the core lies the formulation of a methodology that is based upon the idea of human and computational selection in accordance with pre-defined performance criteria that can be adapted to different requirements by the mere change of parameter input in order to reach location specific ...

  15. Performability evaluation of the SIFT computer

    Science.gov (United States)

    Meyer, J. F.; Furchtgott, D. G.; Wu, L. T.

    1979-01-01

    Performability modeling and evaluation techniques are applied to the SIFT computer as it might operate in the computational environment of an air transport mission. User-visible performance of the total system (SIFT plus its environment) is modeled as a random variable taking values in a set of levels of accomplishment. These levels are defined in terms of four attributes of total system behavior: safety, no change in mission profile, no operational penalties, and no economic ... process whose states describe the internal structure of SIFT as well as relevant conditions of the environment. Base model state trajectories are related to accomplishment levels via a capability function which is formulated in terms of a 3-level model hierarchy. Performability evaluation algorithms are then applied to determine the performability of the total system for various choices of computer and environment parameter values. Numerical results of those evaluations are presented and, in conclusion, some implications of this effort are discussed.

  16. A Parallel, High-Fidelity Radar Model

    Science.gov (United States)

    Horsley, M.; Fasenfest, B.

    2010-09-01

    Accurate modeling of Space Surveillance sensors is necessary for a variety of applications. Accurate models can be used to perform trade studies on sensor designs, locations, and scheduling. In addition, they can be used to predict system-level performance of the Space Surveillance Network to a collision or satellite break-up event. A high fidelity physics-based radar simulator has been developed for Space Surveillance applications. This simulator is designed in a modular fashion, where each module describes a particular physical process or radar function (radio wave propagation & scattering, waveform generation, noise sources, etc.) involved in simulating the radar and its environment. For each of these modules, multiple versions are available in order to meet the end-users needs and requirements. For instance, the radar simulator supports different atmospheric models in order to facilitate different methods of simulating refraction of the radar beam. The radar model also has the capability to use highly accurate radar cross sections generated by the method of moments, accelerated by the fast multipole method. To accelerate this computationally expensive model, it is parallelized using MPI. As a testing framework for the radar model, it is incorporated into the Testbed Environment for Space Situational Awareness (TESSA). TESSA is based on a flexible, scalable architecture, designed to exploit high-performance computing resources and allow physics-based simulation of the SSA enterprise. In addition to the radar models, TESSA includes hydrodynamic models of satellite intercept and debris generation, orbital propagation algorithms, optical brightness calculations, optical system models, object detection algorithms, orbit determination algorithms, simulation analysis and visualization tools. Within this framework, observations and tracks generated by the new radar model are compared to results from a phenomenological radar model. In particular, the new model will be

  17. Creating illusion in computer aided performance

    OpenAIRE

    Marshall, Joe

    2009-01-01

    This thesis studies the creation of illusion in computer aided performance. Illusion is created here by using deceptions, and a design framework is presented which suggests several different deception strategies which may be useful. The framework has been developed in an iterative process in tandem with the development of 3 real world performances which were used to explore deception strategies. The first case study presents a system for augmenting juggling performance. The techniques tha...

  18. The Financial Performance of Cloud Computing

    OpenAIRE

    I-Cheng Chang; Bao-Ru Guo; Chuang-Chun Liu

    2014-01-01

    With advanced information technology, the significant difference compared with past operations is that companies no longer need to set up their own costly large servers; instead, "cloud computing" technology from external professional suppliers is increasingly adopted nowadays. The main purpose of this study is to examine whether financial performance improves after enterprises implement cloud computing technology. According to the analysis results, we found that the cost ...

  19. Performance Assessment of a Microwave Tomographic Approach for the Forward Looking Radar Configuration

    Science.gov (United States)

    Catapano, Ilaria; Soldovieri, Francesco; González-Huici, María A.

    2014-11-01

    This paper deals with the application and the performance analysis of a microwave tomography approach for Forward-Looking Radar (FLR) bistatic illumination. The imaging problem is addressed by adopting an inverse scattering algorithm based on an approximated model of the electromagnetic scattering. In particular, the Born Approximation is used to describe the wave-material interaction, and the targets are assumed to be embedded in a homogeneous medium. The adoption of a simplified model of the electromagnetic scattering allows us to analyse how the reconstruction capabilities depend on the measurement configuration. An investigation of the resolution limits in the FLR case is performed and some numerical results are provided in order to show the effectiveness of the proposed approach in cases resembling the ones occurring in real situations.

  20. Adaptive radar resource management

    CERN Document Server

    Moo, Peter

    2015-01-01

    Radar Resource Management (RRM) is vital for optimizing the performance of modern phased array radars, which are the primary sensor for aircraft, ships, and land platforms. Adaptive Radar Resource Management gives an introduction to radar resource management (RRM), presenting a clear overview of different approaches and techniques, making it very suitable for radar practitioners and researchers in industry and universities. Coverage includes: RRM's role in optimizing the performance of modern phased array radars The advantages of adaptivity in implementing RRMThe role that modelling and

  1. A Methodology for Determining Statistical Performance Compliance for Airborne Doppler Radar with Forward-Looking Turbulence Detection Capability

    Science.gov (United States)

    Bowles, Roland L.; Buck, Bill K.

    2009-01-01

    The objective of the research developed and presented in this document was to statistically assess turbulence hazard detection performance employing airborne pulse Doppler radar systems. The FAA certification methodology for forward looking airborne turbulence radars will require estimating the probabilities of missed and false hazard indications under operational conditions. Analytical approaches must be used due to the near impossibility of obtaining sufficient statistics experimentally. This report describes an end-to-end analytical technique for estimating these probabilities for Enhanced Turbulence (E-Turb) Radar systems under noise-limited conditions, for a variety of aircraft types, as defined in FAA TSO-C134. This technique provides for one means, but not the only means, by which an applicant can demonstrate compliance to the FAA directed ATDS Working Group performance requirements. Turbulence hazard algorithms were developed that derived predictive estimates of aircraft hazards from basic radar observables. These algorithms were designed to prevent false turbulence indications while accurately predicting areas of elevated turbulence risks to aircraft, passengers, and crew; and were successfully flight tested on a NASA B757-200 and a Delta Air Lines B737-800. Application of this defined methodology for calculating the probability of missed and false hazard indications taking into account the effect of the various algorithms used, is demonstrated for representative transport aircraft and radar performance characteristics.

  2. The Warp computer: Architecture, implementation, and performance

    Energy Technology Data Exchange (ETDEWEB)

    Annaratone, M.; Arnould, E.; Gross, T.; Kung, H.T.; Lam, M.; Menzilcioglu, O.; Webb, J.A.

    1987-12-01

    The Warp machine is a systolic array computer of linearly connected cells, each of which is a programmable processor capable of performing 10 million floating-point operations per second (10 MFLOPS). A typical Warp array includes ten cells, thus having a peak computation rate of 100 MFLOPS. The Warp array can be extended to include more cells to accommodate applications capable of using the increased computational bandwidth. Warp is integrated as an attached processor into a Unix host system. Programs for Warp are written in a high-level language supported by an optimizing compiler. This paper describes the architecture, implementation, and performance of the Warp machine. Each major architectural decision is discussed and evaluated with system, software, and application considerations. The programming model and tools developed for the machine are also described. The paper concludes with performance data for a large number of applications.

  3. Misleading Performance Claims in Parallel Computations

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.

    2009-05-29

    In a previous humorous note entitled 'Twelve Ways to Fool the Masses,' I outlined twelve common ways in which performance figures for technical computer systems can be distorted. In this paper and accompanying conference talk, I give a reprise of these twelve 'methods' and give some actual examples that have appeared in peer-reviewed literature in years past. I then propose guidelines for reporting performance, the adoption of which would raise the level of professionalism and reduce the level of confusion, not only in the world of device simulation but also in the larger arena of technical computing.

  4. High-performance computing for airborne applications

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Heather M [Los Alamos National Laboratory; Manuzzato, Andrea [Los Alamos National Laboratory; Fairbanks, Tom [Los Alamos National Laboratory; Dallmann, Nicholas [Los Alamos National Laboratory; Desgeorges, Rose [Los Alamos National Laboratory

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  5. Linear algebra on high-performance computers

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, J.J.; Sorensen, D.C.

    1986-01-01

    This paper surveys work recently done at Argonne National Laboratory in an attempt to discover ways to construct numerical software for high-performance computers. The numerical algorithms are taken from several areas of numerical linear algebra. We discuss certain architectural features of advanced-computer architectures that will affect the design of algorithms. The technique of restructuring algorithms in terms of certain modules is reviewed. This technique has proved successful in obtaining a high level of transportability without severe loss of performance on a wide variety of both vector and parallel computers. The module technique is demonstrably effective for dense linear algebra problems. However, in the case of sparse and structured problems it may be difficult to identify general modules that will be as effective. New algorithms have been devised for certain problems in this category. We present examples in three important areas: banded systems, sparse QR factorization, and symmetric eigenvalue problems. 32 refs., 10 figs., 6 tabs.

  6. Phased-array radar design application of radar fundamentals

    CERN Document Server

    Jeffrey, Thomas

    2009-01-01

    Phased-Array Radar Design is a text-reference designed for electrical engineering graduate students in colleges and universities as well as for corporate in-house training programs for radar design engineers, especially systems engineers and analysts who would like to gain hands-on, practical knowledge and skills in radar design fundamentals, advanced radar concepts, trade-offs for radar design and radar performance analysis.

  7. Performative Computation-aided Design Optimization

    Directory of Open Access Journals (Sweden)

    Ming Tang

    2012-12-01

    Full Text Available This article discusses a collaborative research and teaching project between the University of Cincinnati, Perkins+Will’s Tech Lab, and the University of North Carolina Greensboro. The primary investigation focuses on the simulation, optimization, and generation of architectural designs using performance-based computational design approaches. The projects examine various design methods, including relationships between building form, performance and the use of proprietary software tools for parametric design.

  8. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  9. Performance of high-resolution X-band radar for rainfall measurement in The Netherlands

    Directory of Open Access Journals (Sweden)

    C. Z. van de Beek

    2010-02-01

    Full Text Available This study presents an analysis of 195 rainfall events gathered with the X-band weather radar SOLIDAR and a tipping bucket rain gauge network near Delft, The Netherlands, between May 1993 and April 1994. The aim of this paper is to present a thorough analysis of a climatological dataset using a high spatial (120 m) and temporal (16 s) resolution X-band radar. This makes it a study of the potential for high-resolution rainfall measurements with non-polarimetric X-band radar over flat terrain. An appropriate radar reflectivity – rain rate relation is derived from measurements of raindrop size distributions and compared with radar – rain gauge data. The radar calibration is assessed using a long-term comparison of rain gauge measurements with corresponding radar reflectivities as well as by analyzing the evolution of the stability of ground clutter areas over time. Three different methods for ground clutter correction as well as the effectiveness of forward and backward attenuation correction algorithms have been studied. Five individual rainfall events are discussed in detail to illustrate the strengths and weaknesses of high-resolution X-band radar and the effectiveness of the presented correction methods. X-band radar is found to be able to measure the space-time variation of rainfall at high resolution, far greater than what can be achieved by rain gauge networks or a typical operational C-band weather radar. On the other hand, SOLIDAR can suffer from receiver saturation, wet radome attenuation as well as signal loss along the path. During very strong convective situations the signal can even be lost completely. In combination with several rain gauges for quality control, high resolution X-band radar is considered to be suitable for rainfall monitoring over relatively small (urban) catchments. These results offer great prospects for the new high resolution polarimetric Doppler X-band radar IDRA.

  10. Performance of high-resolution X-band radar for rainfall measurement in The Netherlands

    Directory of Open Access Journals (Sweden)

    C. Z. van de Beek

    2009-09-01

    Full Text Available This study presents an analysis of 195 rainfall events gathered with the X-band weather radar SOLIDAR and a tipping bucket rain gauge network near Delft, The Netherlands, between May 1993 and April 1994. The high spatial (120 m) and temporal (16 s) resolution of the radar combined with the extent of the database make this study a climatological analysis of the potential for high-resolution rainfall measurement with non-polarimetric X-band radar over completely flat terrain. An appropriate radar reflectivity – rain rate relation is derived from measurements of raindrop size distributions and compared with radar – rain gauge data. The radar calibration is assessed using a long-term comparison of rain gauge measurements with corresponding radar reflectivities as well as by analyzing the evolution of the stability of ground clutter areas over time. Three different methods for ground clutter correction as well as the effectiveness of forward and backward attenuation correction algorithms have been studied. Five individual rainfall events are discussed in detail to illustrate the strengths and weaknesses of high-resolution X-band radar and the effectiveness of the presented correction methods. X-band radar is found to be able to measure the space-time variation of rainfall at high resolution, far greater than can be achieved by rain gauge networks or a typical operational C-band weather radar. On the other hand, SOLIDAR can suffer from receiver saturation, wet radome attenuation as well as signal loss along the path. During very strong convective situations the signal can even be lost completely. In combination with several rain gauges for quality control, high resolution X-band radar is considered to be suitable for rainfall monitoring over relatively small (urban) catchments. These results offer great prospects for the new high resolution polarimetric Doppler X-band radar IDRA.

  11. Coded continuous wave meteor radar

    Science.gov (United States)

    Vierinen, Juha; Chau, Jorge L.; Pfeffer, Nico; Clahsen, Matthias; Stober, Gunter

    2016-03-01

    The concept of a coded continuous wave specular meteor radar (SMR) is described. The radar uses a continuously transmitted pseudorandom phase-modulated waveform, which has several advantages compared to conventional pulsed SMRs. The coding avoids range and Doppler aliasing, which are in some cases problematic with pulsed radars. Continuous transmissions maximize pulse compression gain, allowing operation at lower peak power than a pulsed system. With continuous coding, the temporal and spectral resolution are not dependent on the transmit waveform and they can be fairly flexibly changed after performing a measurement. The low signal-to-noise ratio before pulse compression, combined with independent pseudorandom transmit waveforms, allows multiple geographically separated transmitters to be used in the same frequency band simultaneously without significantly interfering with each other. Because the same frequency band can be used by multiple transmitters, the same interferometric receiver antennas can be used to receive multiple transmitters at the same time. The principles of the signal processing are discussed, in addition to discussion of several practical ways to increase computation speed, and how to optimally detect meteor echoes. Measurements from a campaign performed with a coded continuous wave SMR are shown and compared with two standard pulsed SMR measurements. The type of meteor radar described in this paper would be suited for use in a large-scale multi-static network of meteor radar transmitters and receivers. Such a system would be useful for increasing the number of meteor detections to obtain improved meteor radar data products.
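
    A minimal sketch of the pulse-compression idea behind such a coded continuous wave radar: a pseudorandom binary phase code is transmitted continuously, and the echo delay is recovered by correlating the return against the transmit replica, so that a per-sample SNR well below 0 dB still yields a clear peak. The code length, echo amplitude and noise level below are illustrative assumptions, not values from the campaign described.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Pseudorandom binary phase code (transmitted continuously in the real system).
    n_code = 1000
    code = rng.choice([-1.0, 1.0], size=n_code)          # BPSK chips

    # Simulated echo: the code circularly delayed by 37 chips, attenuated, plus noise.
    delay = 37
    echo = 0.2 * np.roll(code, delay) + 0.5 * rng.standard_normal(n_code)

    # Pulse compression: correlate the received signal with the transmit replica
    # at every lag (circular correlation, for brevity).
    xcorr = np.array([np.dot(echo, np.roll(code, lag)) for lag in range(n_code)])
    print("estimated delay (chips):", int(np.argmax(xcorr)))   # -> 37
    ```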

  12. High-performance computing in seismology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  13. High performance computing and communications panel report

    Energy Technology Data Exchange (ETDEWEB)

    1992-12-01

    In FY92, a presidential initiative entitled High Performance Computing and Communications (HPCC) was launched, aimed at securing U.S. preeminence in high performance computing and related communication technologies. The stated goal of the initiative is threefold: extend U.S. technological leadership in high performance computing and computer communications; provide wide dissemination and application of the technologies; and spur gains in U.S. productivity and industrial competitiveness, all within the context of the mission needs of federal agencies. Because of the importance of the HPCC program to the national well-being, especially its potential implication for industrial competitiveness, the Assistant to the President for Science and Technology has asked that the President's Council of Advisors in Science and Technology (PCAST) establish a panel to advise PCAST on the strengths and weaknesses of the HPCC program. The report presents a program analysis based on strategy, balance, management, and vision. Both constructive recommendations for program improvement and positive reinforcement of successful program elements are contained within the report.

  14. Ranging and target detection performance through lossy media using an ultrawideband S-band through-wall sensing noise radar

    Science.gov (United States)

    Smith, Sonny; Narayanan, Ram M.

    2013-05-01

    An S-band noise radar has been developed for through-wall ranging and tracking of targets. Ranging to target is achieved by the cross-correlation between the time-delayed reflected return signal and the replica of the transmit signal; both are bandlimited ultrawideband (UWB) noise signals. Furthermore, successive scene subtraction allows for target tracking using the range profiles created by the cross-correlation technique. In this paper, we explore the performance of the radar system for target detection through varied, lossy media (e.g., a 4-inch thick brick wall and an 8-inch thick cinder-block wall) via correlation measurements using the S-band radar system. Moreover, we present a qualitative analysis of the S-band noise radar as operated under disparate testing configurations (i.e., different walls, targets, and distances) with different antennas (e.g., dual-polarized horns, helical antennas with different ground planes, etc.). In addition, we discuss key concepts of the noise radar design, considerations for antenna choice, as well as experimental results for a few scenarios.
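
    The ranging and tracking steps described above can be sketched as follows: a range profile is formed by cross-correlating the return with the noise transmit replica, and successive scene subtraction cancels the static wall echo so that only the moving target remains. All delays, amplitudes and noise levels are invented for illustration and do not correspond to the S-band system's measurements.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 4096
    tx = rng.standard_normal(n)                     # wideband noise transmit replica

    def scene(target_delay):
        """Return a received signal with a strong static wall echo and a weaker target."""
        rx = 0.8 * np.roll(tx, 120)                 # stationary reflection (wall)
        rx += 0.2 * np.roll(tx, target_delay)       # moving target
        return rx + 0.1 * rng.standard_normal(n)    # receiver noise

    def range_profile(rx):
        """Cross-correlate the return with the transmit replica over the first 400 lags."""
        return np.array([np.dot(rx, np.roll(tx, lag)) for lag in range(400)])

    p1 = range_profile(scene(target_delay=250))
    p2 = range_profile(scene(target_delay=260))

    # Successive scene subtraction cancels the static wall return, leaving the
    # moving target's old and new range bins as the dominant entries.
    diff = p2 - p1
    print("strongest bin in raw profile (wall):", int(np.argmax(np.abs(p2))))
    print("strongest bins after subtraction   :", np.sort(np.argsort(np.abs(diff))[-2:]))
    ```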

  15. Computer performance optimization systems, applications, processes

    CERN Document Server

    Osterhage, Wolfgang W

    2013-01-01

    Computing power performance was important at times when hardware was still expensive, because hardware had to be put to the best use. Later on this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, application level and business processes level. On each, optimizations can be achieved and cost-cutting p

  16. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in.Describing a complicated system abstractly with mathematical equations requires a careful choice of assumpti

  17. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  18. Performance Analysis of Ultra-Wideband Channel for Short-Range Monopulse Radar at Ka-Band

    Directory of Open Access Journals (Sweden)

    Naohiko Iwakiri

    2012-01-01

    Full Text Available High range resolution is inherently provided by Ka-band ultra-wideband (UWB) vehicular radars. The authors have developed a prototype UWB monopulse radar equipped with a two-element receiving antenna array and reported its measurement results. In this paper, a more detailed verification using these measurements is presented. The measurements were analyzed employing matched filtering and eigendecomposition, and multipath components were then extracted to examine the behavior of the received UWB monopulse signals. Next, conventional direction-finding algorithms based on the narrowband assumption were evaluated using the extracted multipath components, yielding acceptable angle-of-arrival (AOA) estimates from the UWB monopulse signal despite its wide bandwidth. Performance degradation as a function of the number of received monopulses averaged was also examined to support the design of suitable radar waveforms.
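
    A hedged sketch of the kind of conventional narrowband direction finding evaluated above, using phase comparison between two receive elements spaced half a wavelength apart. The carrier frequency, spacing, snapshot count and noise level are assumed values, not parameters of the prototype radar.

    ```python
    import numpy as np

    c, f = 3e8, 26.5e9                  # an assumed Ka-band carrier, for illustration
    lam = c / f
    d = lam / 2                         # two-element spacing of half a wavelength

    true_aoa_deg = 12.0
    phi = 2 * np.pi * d * np.sin(np.radians(true_aoa_deg)) / lam   # inter-element phase

    # Narrowband snapshots at the two receive elements with additive complex noise.
    rng = np.random.default_rng(2)
    n_snap = 64
    s = np.exp(1j * 2 * np.pi * rng.random(n_snap))                # unit-power signal
    noise = lambda: 0.1 * (rng.standard_normal(n_snap) + 1j * rng.standard_normal(n_snap))
    x1 = s + noise()
    x2 = s * np.exp(1j * phi) + noise()

    # Phase-comparison (interferometric) angle-of-arrival estimate.
    phi_hat = np.angle(np.vdot(x1, x2))                            # averaged phase difference
    aoa_hat = np.degrees(np.arcsin(phi_hat * lam / (2 * np.pi * d)))
    print(f"estimated AOA: {aoa_hat:.1f} deg")                     # close to 12 deg
    ```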

  19. Numerical simulation of imaging laser radar system

    Science.gov (United States)

    Han, Shaokun; Lu, Bo; Jiang, Ming; Liu, Xunliang

    2008-03-01

    Rational and effective design is the key to imaging laser radar system research. The design must fully consider the interrelationships between the various parameters and, according to these parameters, select a suitable laser, detector and other components. Mathematical modeling and computer simulation are effective methods for imaging laser radar system design. Based on the radar range equation and statistical detection methods, this paper builds mathematical models of the laser radar system covering range coverage, detection probability, false-alarm rate and SNR. In setting up these models, the influence of the laser, the atmosphere, the detector and other factors on performance is fully considered, so that the models accurately reflect the real situation. On this basis, simulation software was designed using C# and Matlab.
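
    The kind of model described above can be sketched as follows: a simplified monostatic laser radar range equation feeding a Gaussian detection/false-alarm calculation. The equation form and every parameter value are illustrative assumptions and are not taken from the paper (whose software was written in C# and Matlab).

    ```python
    import numpy as np
    from math import erfc, sqrt

    def received_power(p_t, rho, d_rx, r, eta=0.8, alpha=1e-4):
        """Simplified monostatic range equation for a Lambertian target filling the
        beam: P_r = P_t * rho * (D^2 / 4R^2) * eta * exp(-2*alpha*R). Illustrative only."""
        return p_t * rho * (d_rx**2 / (4.0 * r**2)) * eta * np.exp(-2.0 * alpha * r)

    def erfc_inv(y, lo=0.0, hi=10.0):
        """Bisection inverse of erfc on [lo, hi] (erfc is decreasing)."""
        for _ in range(80):
            mid = 0.5 * (lo + hi)
            lo, hi = (mid, hi) if erfc(mid) > y else (lo, mid)
        return 0.5 * (lo + hi)

    def detection_probability(snr, pfa=1e-6):
        """Gaussian (non-fluctuating) approximation: threshold from the false-alarm
        rate, then Pd from the shifted Gaussian."""
        threshold = sqrt(2.0) * erfc_inv(2.0 * pfa)
        return 0.5 * erfc((threshold - sqrt(2.0 * snr)) / sqrt(2.0))

    p_r = received_power(p_t=1e4, rho=0.3, d_rx=0.1, r=2000.0)   # watts
    snr = p_r / 1e-7                                             # assumed noise-equivalent power
    print(f"received power {p_r:.2e} W, SNR {snr:.1f}, Pd {detection_probability(snr):.2f}")
    ```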

  20. Cramer-Rao Bounds and Coherence Performance Analysis for Next Generation Radar with Pulse Trains

    Directory of Open Access Journals (Sweden)

    Ning Zhang

    2013-04-01

    Full Text Available We study the Cramer-Rao bounds of parameter estimation and coherence performance for the next generation radar (NGR). In order to enhance the performance of NGR, the signal model of NGR with master-slave architecture based on a single pulse is extended to the case of pulse trains, in which multiple pulses are emitted from all sensors and then integrated spatially and temporally in a unique master sensor. For the MIMO mode of NGR where orthogonal waveforms are emitted, we derive the closed-form Cramer-Rao bound (CRB) for the estimates of generalized coherence parameters (GCPs), including the time delay differences, total phase differences and Doppler frequencies with respect to different sensors. For the coherent mode of NGR where the coherent waveforms are emitted after pre-compensation using the estimates of GCPs, we develop a performance bound of signal-to-noise ratio (SNR) gain for NGR based on the aforementioned CRBs, taking all the estimation errors into consideration. It is shown that greatly improved estimation accuracy and coherence performance can be obtained with pulse trains employed in NGR. Numerical examples demonstrate the validity of the theoretical results.
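
    For orientation, the sketch below evaluates the textbook single-pulse Cramer-Rao bound on time-delay estimation, which shows how SNR and waveform bandwidth together set the achievable delay accuracy; it is only a reference point and is not the paper's closed-form CRB for the generalized coherence parameters.

    ```python
    import numpy as np

    def delay_crb(signal, fs, snr_linear):
        """Classical bound var(tau_hat) >= 1 / (snr * beta^2), where beta^2 is the
        mean-square (Gabor) bandwidth of the waveform in (rad/s)^2."""
        spectrum = np.fft.fft(signal)
        omega = 2 * np.pi * np.fft.fftfreq(len(signal), d=1.0 / fs)   # rad/s
        power = np.abs(spectrum) ** 2
        beta_sq = np.sum(omega ** 2 * power) / np.sum(power)
        return 1.0 / (snr_linear * beta_sq)

    # A 10 MHz linear FM chirp sampled at 100 MHz, post-integration SNR of 20 dB.
    fs, duration, bandwidth = 100e6, 10e-6, 10e6
    t = np.arange(0, duration, 1 / fs)
    chirp = np.cos(np.pi * (bandwidth / duration) * t ** 2)
    std_ns = np.sqrt(delay_crb(chirp, fs, snr_linear=100.0)) * 1e9
    print(f"delay estimation standard deviation bound: {std_ns:.2f} ns")
    ```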

  1. Coded continuous wave meteor radar

    Science.gov (United States)

    Chau, J. L.; Vierinen, J.; Pfeffer, N.; Clahsen, M.; Stober, G.

    2016-12-01

    The concept of a coded continuous wave specular meteor radar (SMR) is described. The radar uses a continuously transmitted pseudorandom phase-modulated waveform, which has several advantages compared to conventional pulsed SMRs. The coding avoids range and Doppler aliasing, which are in some cases problematic with pulsed radars. Continuous transmissions maximize pulse compression gain, allowing operation at lower peak power than a pulsed system. With continuous coding, the temporal and spectral resolution are not dependent on the transmit waveform and they can be fairly flexibly changed after performing a measurement. The low signal-to-noise ratio before pulse compression, combined with independent pseudorandom transmit waveforms, allows multiple geographically separated transmitters to be used in the same frequency band simultaneously without significantly interfering with each other. Because the same frequency band can be used by multiple transmitters, the same interferometric receiver antennas can be used to receive multiple transmitters at the same time. The principles of the signal processing are discussed, in addition to discussion of several practical ways to increase computation speed, and how to optimally detect meteor echoes. Measurements from a campaign performed with a coded continuous wave SMR are shown and compared with two standard pulsed SMR measurements. The type of meteor radar described in this paper would be suited for use in a large-scale multi-static network of meteor radar transmitters and receivers. Such a system would be useful for increasing the number of meteor detections to obtain improved meteor radar data products, such as wind fields. This type of a radar would also be useful for over-the-horizon radar, ionosondes, and observations of field-aligned-irregularities.

  2. The effects of phased-array antennas on the performance of radars utilizing pseudo-random noise coding

    Science.gov (United States)

    Howard, R. L.; Belcher, M. L.; Corey, L. E.

    This paper examines how the phased-array antenna affects a radar's performance when pseudorandom noise (PRN)-coded waveforms are used. Dispersion loss, compressed pulse shapes, and suppression of wideband sidelobes or grating lobes are examined, and their interdependencies for systems using PRN-coded waveforms are considered. It is shown that these performance characteristics are a function of signal bandwidth, subarray size, and antenna scan angle. The choice of filtering schemes in the receiver can also impact the performance.

  3. HIGH PERFORMANCE PHOTOGRAMMETRIC PROCESSING ON COMPUTER CLUSTERS

    Directory of Open Access Journals (Sweden)

    V. N. Adrov

    2012-07-01

    Full Text Available Most CPU-consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent pieces of data as input and produce independent pieces of data as output; this independence comes from the nature of the algorithms, since images, stereopairs or small image blocks can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human intervention. Photogrammetric workstations can perform tie-point measurements, DTM calculations, orthophoto construction, mosaicing and many other service operations in parallel using distributed calculations. Distributed calculations save time, reducing computations that would take several days to several hours. Modern trends in computer technology show an increase in the number of CPU cores in workstations and in local network speeds and, as a result, a drop in the price of supercomputers or computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in a DPW is usually targeted at interactive work with a limited number of CPU cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be the limited LAN throughput and storage performance, since the processing of huge amounts of large raster images is needed.
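
    The embarrassingly parallel pattern described above can be sketched with a process pool distributing independent image tiles to workers; the per-tile function is a placeholder standing in for real photogrammetric tasks such as tie-point matching or orthophoto resampling, not code from an actual DPW.

    ```python
    from multiprocessing import Pool
    import numpy as np

    def process_tile(tile):
        """Stand-in for an independent photogrammetric task applied to one image block."""
        tile_id, data = tile
        return tile_id, float(np.mean(data))        # placeholder computation

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        tiles = [(i, rng.random((512, 512))) for i in range(32)]   # independent image blocks
        with Pool(processes=8) as pool:                            # distribute over 8 workers
            results = pool.map(process_tile, tiles)
        print(results[:3])
    ```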

  4. Monitoring SLAC High Performance UNIX Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC

    2005-12-15

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.

  5. Numerical Computation of the Radar Cross Section of Rockets and Artillery Rounds

    Science.gov (United States)

    2015-09-01

    [The indexed text for this record is garbled; only fragments of the report body and reference list survive, citing Computational Electrodynamics: The Finite-Difference Time-Domain Method (Artech, 2000), the ARL DSRC web page (http://www.arl.hpc.mil), Principles of Modern Radar (Richards, Scheer and Holm; SciTech Publishing, 2010), and Hubral and Tygel's analysis of the Rayleigh pulse.]

  6. Compression Techniques for Improved Algorithm Computational Performance

    Science.gov (United States)

    Zalameda, Joseph N.; Howell, Patricia A.; Winfree, William P.

    2005-01-01

    Analysis of thermal data requires the processing of large amounts of temporal image data. The processing of the data for quantitative information can be time intensive especially out in the field where large areas are inspected resulting in numerous data sets. By applying a temporal compression technique, improved algorithm performance can be obtained. In this study, analysis techniques are applied to compressed and non-compressed thermal data. A comparison is made based on computational speed and defect signal to noise.

  7. Large/Complex Antenna Performance Validation for Spaceborne Radar/Radiometeric Instruments

    Science.gov (United States)

    Focardi, Paolo; Harrell, Jefferson; Vacchione, Joseph

    2013-01-01

    Over the past decade, Earth observing missions which employ spaceborne combined radar & radiometric instruments have been developed and implemented. These instruments include the use of large and complex deployable antennas whose radiation characteristics need to be accurately determined over 4π steradians. Given the size and complexity of these antennas, the performance of the flight units cannot be readily measured. In addition, the radiation performance is impacted by the presence of the instrument's service platform, which cannot easily be included in any measurement campaign. In order to meet the system performance knowledge requirements, a two-pronged approach has been employed. The first is to use modeling tools to characterize the system and the second is to build a scale model of the system and use RF measurements to validate the results of the modeling tools. This paper demonstrates the resulting level of agreement between scale model and numerical modeling for two recent missions: (1) the earlier Aquarius instrument currently in Earth orbit and (2) the upcoming Soil Moisture Active Passive (SMAP) mission. The results from two modeling approaches, Ansoft's High Frequency Structure Simulator (HFSS) and TICRA's General RF Applications Software Package (GRASP), were compared with measurements of approximately 1/10th scale models of the Aquarius and SMAP systems. Generally good agreement was found between the three methods but each approach had its shortcomings as will be detailed in this paper.

  9. GPU Computing to Improve Game Engine Performance

    Directory of Open Access Journals (Sweden)

    Abu Asaduzzaman

    2014-07-01

    Full Text Available Although the graphics processing unit (GPU) was originally designed to accelerate the image creation for output to display, today's general purpose GPU (GPGPU) computing offers unprecedented performance by offloading computing-intensive portions of the application to the GPGPU, while running the remainder of the code on the central processing unit (CPU). The highly parallel structure of a many-core GPGPU can process large blocks of data faster using multithreaded concurrent processing. A game engine has many "components" and multithreading can be used to implement their parallelism. However, effective implementation of multithreading in a multicore processor has challenges, such as data and task parallelism. In this paper, we investigate the impact of using a GPGPU with a CPU to design high-performance game engines. First, we implement a separable convolution filter (heavily used in image processing) with the GPGPU. Then, we implement a multiobject interactive game console in an eight-core workstation using a multithreaded asynchronous model (MAM), a multithreaded synchronous model (MSM), and an MSM with data parallelism (MSMDP). According to the experimental results, speedup of about 61x and 5x is achieved due to GPGPU and MSMDP implementation, respectively. Therefore, GPGPU-assisted parallel computing has the potential to improve multithreaded game engine performance.
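
    The first benchmark mentioned, a separable convolution filter, relies on the fact that a separable k x k kernel can be applied as two 1-D passes, cutting the per-pixel cost from about k*k to 2k multiplies; the NumPy/SciPy sketch below only illustrates that decomposition on the CPU and is not the authors' GPGPU implementation (SciPy is assumed to be available).

    ```python
    import numpy as np
    from scipy.ndimage import convolve1d

    def separable_gaussian_blur(image, sigma=2.0, radius=6):
        """Apply a 2-D Gaussian blur as two 1-D convolutions (rows, then columns)."""
        x = np.arange(-radius, radius + 1)
        g = np.exp(-x ** 2 / (2 * sigma ** 2))
        g /= g.sum()
        tmp = convolve1d(image, g, axis=0, mode="reflect")   # vertical pass
        return convolve1d(tmp, g, axis=1, mode="reflect")    # horizontal pass

    image = np.random.default_rng(0).random((256, 256))
    print(separable_gaussian_blur(image).shape)
    ```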

  10. Computational Tools to Assess Turbine Biological Performance

    Energy Technology Data Exchange (ETDEWEB)

    Richmond, Marshall C.; Serkowski, John A.; Rakowski, Cynthia L.; Strickler, Brad; Weisbeck, Molly; Dotson, Curtis L.

    2014-07-24

    Public Utility District No. 2 of Grant County (GCPUD) operates the Priest Rapids Dam (PRD), a hydroelectric facility on the Columbia River in Washington State. The dam contains 10 Kaplan-type turbine units that are now more than 50 years old. Plans are underway to refit these aging turbines with new runners. The Columbia River at PRD is a migratory pathway for several species of juvenile and adult salmonids, so passage of fish through the dam is a major consideration when upgrading the turbines. In this paper, a method for turbine biological performance assessment (BioPA) is demonstrated. Using this method, a suite of biological performance indicators is computed based on simulated data from a CFD model of a proposed turbine design. Each performance indicator is a measure of the probability of exposure to a certain dose of an injury mechanism. Using known relationships between the dose of an injury mechanism and frequency of injury (dose–response) from laboratory or field studies, the likelihood of fish injury for a turbine design can be computed from the performance indicator. By comparing the values of the indicators from proposed designs, the engineer can identify the more-promising alternatives. We present an application of the BioPA method for baseline risk assessment calculations for the existing Kaplan turbines at PRD that will be used as the minimum biological performance that a proposed new design must achieve.
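
    A hedged sketch of the indicator calculation described above: doses sampled along simulated trajectories are folded through a dose-response curve to yield an injury probability. The gamma dose distribution and logistic dose-response below are invented for illustration and are not BioPA or laboratory values.

    ```python
    import numpy as np

    def injury_probability(doses, dose_response):
        """Average the dose-response (probability of injury at a given dose) over the
        distribution of doses sampled along simulated fish trajectories."""
        return float(np.mean(dose_response(doses)))

    rng = np.random.default_rng(0)
    sampled_doses = rng.gamma(shape=4.0, scale=15.0, size=10_000)   # hypothetical dose samples

    # Hypothetical dose-response: injury likelihood rises with dose (logistic form).
    dose_response = lambda d: 1.0 / (1.0 + np.exp(-(d - 120.0) / 15.0))

    print(f"estimated injury probability: {injury_probability(sampled_doses, dose_response):.4f}")
    ```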

  11. PREFACE: High Performance Computing Symposium 2011

    Science.gov (United States)

    Talon, Suzanne; Mousseau, Normand; Peslherbe, Gilles; Bertrand, François; Gauthier, Pierre; Kadem, Lyes; Moitessier, Nicolas; Rouleau, Guy; Wittig, Rod

    2012-02-01

    HPCS (High Performance Computing Symposium) is a multidisciplinary conference that focuses on research involving High Performance Computing and its application. Attended by Canadian and international experts and renowned researchers in the sciences, all areas of engineering, the applied sciences, medicine and life sciences, mathematics, the humanities and social sciences, it is Canada's pre-eminent forum for HPC. The 25th edition was held in Montréal, at the Université du Québec à Montréal, from 15-17 June and focused on HPC in Medical Science. The conference was preceded by tutorials held at Concordia University, where 56 participants learned about HPC best practices, GPU computing, parallel computing, debugging and a number of high-level languages. 274 participants from six countries attended the main conference, which involved 11 invited and 37 contributed oral presentations, 33 posters, and an exhibit hall with 16 booths from our sponsors. The work that follows is a collection of papers presented at the conference covering HPC topics ranging from computer science to bioinformatics. They are divided here into four sections: HPC in Engineering, Physics and Materials Science, HPC in Medical Science, HPC Enabling to Explore our World and New Algorithms for HPC. We would once more like to thank the participants and invited speakers, the members of the Scientific Committee, the referees who spent time reviewing the papers and our invaluable sponsors. To hear the invited talks and learn about 25 years of HPC development in Canada visit the Symposium website: http://2011.hpcs.ca/lang/en/conference/keynote-speakers/ Enjoy the excellent papers that follow, and we look forward to seeing you in Vancouver for HPCS 2012! Gilles Peslherbe Chair of the Scientific Committee Normand Mousseau Co-Chair of HPCS 2011 Suzanne Talon Chair of the Organizing Committee UQAM Sponsors The PDF also contains photographs from the conference banquet.

  12. Rainfall Estimation and Performance Characterization Using an X-band Dual-Polarization Radar in the San Francisco Bay Area

    Science.gov (United States)

    Cifelli, R.; Chen, H.; Chandra, C. V.

    2016-12-01

    estimation (QPE) in the Bay Area. The radar rainfall products are evaluated with rain gauge observations collected by SCVWD. The comparison with gauges shows the excellent performance of X-band radar for rainfall monitoring in the Bay Area.

  13. The path toward HEP High Performance Computing

    Science.gov (United States)

    Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-06-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of the peak one. Although several successful attempts have been made to port selected codes on GPUs, no major HEP code suite has a "High Performance" implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP cannot any longer neglect the less-than-optimal performance of its code and it has to try making the best usage of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 project. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in terms of memory consumption compared to the single threaded version, together with sub-optimal handling of event processing tails. Besides this, the low level instruction pipelining of modern processors cannot be used efficiently to speedup the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine grain parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit best from

  14. NINJA: Java for High Performance Numerical Computing

    Directory of Open Access Journals (Sweden)

    José E. Moreira

    2002-01-01

    Full Text Available When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java now can be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.

  15. Benchmarking: More Aspects of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Ravindrudu, Rahul [Iowa State Univ., Ames, IA (United States)

    2004-01-01

    pattern for the left-looking factorization. The right-looking algorithm performs better for in-core data, but the left-looking one will perform better for out-of-core data due to the reduced I/O operations. Hence the conclusion that out-of-core algorithms will perform better when designed as such from the start. The out-of-core and thread-based computations do not interact in this case, since I/O is not done by the threads. The performance of the thread-based computation does not depend on I/O, as the algorithms are BLAS algorithms, which assume all the data to be in memory. This is the reason the out-of-core results and the OpenMP thread results were presented separately and no attempt to combine them was made. In general, the modified HPL performs better with larger block sizes, due to less I/O for the out-of-core part and better cache utilization for the thread-based computation.

  16. Computational studies for C-band polarimetric radar parameters of ensembles of tumbling and melting ice particles and comparison with measurements; Modellrechnungen fuer polarimetrische Radarparameter im C-Band fuer Ensembles taumelnder und schmelzender Eispartikeln und Vergleich mit Messungen

    Energy Technology Data Exchange (ETDEWEB)

    Doelling, I. [Deutsche Forschungsanstalt fuer Luft- und Raumfahrt e.V. (DLR), Wessling (Germany). Inst. fuer Physik der Atmosphaere

    1997-12-31

    The dependence of radar polarimetric parameters on the characteristics of an ensemble of melting and tumbling particles was investigated by model calculations. The particles were defined by their sizes, shapes, and tumbling and melting behaviour. The separate influences of these variables on the radar parameters are described. The particles were treated as oblate spheroids. The melting behaviour was described by the Maxwell Garnett and Bruggeman mixing rules. The distribution function for the tumbling angle was assumed to be Gaussian; all other distributions were assumed to be monodisperse. The calculations were performed with the T-matrix method. For particles with large diameters, resonance effects depending on the melting state of the particles were observed. Calculation results indicate that melting particles tumble to a much higher degree than rain drops. During the field experiment CLEOPATRA, coordinated radar and in situ data in a melting layer were gathered. The radar measurements and model calculations for Z_DR, D_LDR and D_CDR were compared with in situ measurements. The axis ratios derived in this way are in good agreement with the in situ data. The computational results and the particle classification scheme by Hoeller (1995) show qualitatively good agreement. (orig.) 90 refs.

  17. An investigation of the RCS (radar cross section) computation of grid cavities

    Science.gov (United States)

    Sabihi, Ahmad

    2014-12-01

    In this paper, the aperture of a cavity is covered by a metallic grid net. The metallic grid is intended to reduce the RCS produced by radar rays impinging on the aperture. A radar ray incident on a grid net installed over a cavity may create six types of propagation: (1) incident rays entering the cavity and backscattered from it; (2) incident rays on the grid net creating reflected rays as from an array of scatterers, which may produce a wave with a phase difference of 180 degrees with respect to the rays exiting the cavity; (3) incident rays on the grid net creating surface currents flowing on the net and forming travelling waves, which regenerate magnetic and electric fields and thereby produce waves propagating against the incident ones; (4) creeping waves; (5) rays diffracted by the leading edges of the net's elements; and (6) mutual impedance among the elements of the net, which can affect the resultant RCS. The author therefore compares the effects of three of these six mechanisms with those of a cavity without a grid net. This comparison shows that the predicted RCS of a cavity with a grid net is much lower than that of one without.

  18. An investigation of the RCS (radar cross section) computation of grid cavities

    Energy Technology Data Exchange (ETDEWEB)

    Sabihi, Ahmad [Department of Mathematical Sciences, Sharif University of Technology, Tehran (Iran, Islamic Republic of)

    2014-12-10

    In this paper, the aperture of a cavity is covered by a metallic grid net. The metallic grid is intended to reduce the RCS produced by radar rays impinging on the aperture. A radar ray incident on a grid net installed over a cavity may create six types of propagation: (1) incident rays entering the cavity and backscattered from it; (2) incident rays on the grid net creating reflected rays as from an array of scatterers, which may produce a wave with a phase difference of 180 degrees with respect to the rays exiting the cavity; (3) incident rays on the grid net creating surface currents flowing on the net and forming travelling waves, which regenerate magnetic and electric fields and thereby produce waves propagating against the incident ones; (4) creeping waves; (5) rays diffracted by the leading edges of the net's elements; and (6) mutual impedance among the elements of the net, which can affect the resultant RCS. The author therefore compares the effects of three of these six mechanisms with those of a cavity without a grid net. This comparison shows that the predicted RCS of a cavity with a grid net is much lower than that of one without.

  19. Assessment of radar interferometry performance for ground subsidence monitoring due to underground mining

    Energy Technology Data Exchange (ETDEWEB)

    Ng, A.H.M.; Chang, H.C.; Ge, L.L.; Rizos, C.; Omura, M. [Cooperative Research Centre for Spatial Information, Carlton, Vic. (Australia)

    2009-07-01

    This paper describes the results from the recently launched SAR satellites for the purpose of subsidence monitoring over underground coal mine sites in the state of New South Wales, Australia, using the differential interferometric synthetic aperture radar (DInSAR) technique. The quality of the mine subsidence monitoring results is mainly constrained by noise due to the spatial and temporal decorrelation between the interferometric pair and by phase discontinuities in the interferogram. This paper reports on the analysis of the impact of these two factors on the performance of DInSAR for monitoring ground deformation. Simulations were carried out prior to real data analyses. SAR data acquired at different operating frequencies, for example X-, C- and L-band, from the TerraSAR-X, ERS-1/2, ENVISAT, JERS-1 and ALOS satellite missions, were examined. The simulation results showed that the new satellites ALOS, TerraSAR-X and COSMO-SkyMed perform much better than the satellites launched before 2006. ALOS and ENVISAT satellite SAR images with similar temporal coverage were searched for the test site. The ALOS PALSAR DInSAR results were compared to DInSAR results obtained from ENVISAT ASAR data to investigate the performance of both satellites for ground subsidence monitoring. Strong phase discontinuities and decorrelation were observed in almost all ENVISAT interferograms, and hence it was not possible to generate the displacement maps without errors. However, these problems are minimal in ALOS PALSAR interferograms due to its spatial resolution and longer wavelength. Hence ALOS PALSAR is preferred for ground subsidence monitoring in areas covered by vegetation and where there is a high rate of ground deformation.
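
    Behind these displacement maps is the basic DInSAR relation converting unwrapped differential phase to line-of-sight displacement, d = -lambda * phi / (4*pi); the sketch below compares C-band and L-band sensitivity using approximate ENVISAT ASAR and ALOS PALSAR wavelengths, which also illustrates why the longer L-band wavelength tolerates larger deformation per fringe.

    ```python
    import numpy as np

    def los_displacement(unwrapped_phase_rad, wavelength_m):
        """Differential InSAR: one 2*pi cycle of unwrapped phase corresponds to half a
        wavelength of line-of-sight displacement, d = -lambda * phi / (4*pi)."""
        return -wavelength_m * unwrapped_phase_rad / (4.0 * np.pi)

    phase = np.pi   # half a fringe of unwrapped differential phase
    for name, lam in [("ENVISAT ASAR (C-band, ~5.6 cm)", 0.056),
                      ("ALOS PALSAR (L-band, ~23.6 cm)", 0.236)]:
        print(f"{name}: {los_displacement(phase, lam) * 1000.0:.1f} mm per half fringe")
    ```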

  20. Evaluation of high-performance computing software

    Energy Technology Data Exchange (ETDEWEB)

    Browne, S.; Dongarra, J. [Univ. of Tennessee, Knoxville, TN (United States); Rowan, T. [Oak Ridge National Lab., TN (United States)

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations difficult to find elsewhere.

  2. Multidimensional radar picture

    Science.gov (United States)

    Waz, Mariusz

    2010-05-01

    In marine navigation systems, three-dimensional (3D) visualization is used more and more often. Echo sounders and sonars working in hydroacoustic systems can present pictures in three dimensions. Currently, vector maps also offer 3D presentation. This presentation is used in aviation and underwater navigation. In the near future, three-dimensional presentation may become the obligatory presentation mode in the displays of navigation systems. Some of these systems work with radar and communicate with it by transmitting data in digital form. 3D presentation of the radar picture requires new technology to be developed. The first step is to compile a digital form of the radar signal. Modern navigation radars do not present data in three-dimensional form. Progress in digital signal processing technology makes it possible to create multidimensional radar pictures. For instance, the RSC (Radar Scan Converter), a tool for recording and transforming the digital radar picture, can be used to create a new picture online. Using the RSC and modern computer graphics techniques, multidimensional radar pictures can be generated. The radar pictures mentioned should be readable by ECDIS. The paper presents a method for generating a multidimensional radar picture from the original signal coming from the radar receiver.
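
    A minimal sketch of the scan-conversion step an RSC performs, resampling the range/bearing picture onto a Cartesian raster with nearest-neighbour lookup; such a raster is a natural starting point for further 3D rendering or ECDIS overlay. Grid sizes and the toy echo are illustrative assumptions.

    ```python
    import numpy as np

    def scan_convert(polar, max_range, out_size=256):
        """Nearest-neighbour conversion of a range/bearing picture (rows = bearings,
        columns = range bins) to a Cartesian raster, the basic job of an RSC."""
        n_bearings, n_bins = polar.shape
        xs = np.linspace(-max_range, max_range, out_size)
        x, y = np.meshgrid(xs, xs)
        r = np.hypot(x, y)
        theta = np.mod(np.arctan2(x, y), 2 * np.pi)      # bearing from north, clockwise
        bin_idx = np.clip((r / max_range * n_bins).astype(int), 0, n_bins - 1)
        brg_idx = np.clip((theta / (2 * np.pi) * n_bearings).astype(int), 0, n_bearings - 1)
        raster = polar[brg_idx, bin_idx]
        raster[r > max_range] = 0.0                      # blank outside the instrumented range
        return raster

    polar = np.zeros((360, 512))                         # toy picture: echo near 45 deg, mid range
    polar[44:47, 250:260] = 1.0
    print(scan_convert(polar, max_range=6000.0).shape)
    ```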

  3. Coded continuous wave meteor radar

    Directory of Open Access Journals (Sweden)

    J. Vierinen

    2015-07-01

    Full Text Available The concept of coded continuous wave meteor radar is introduced. The radar uses a continuously transmitted pseudo-random waveform, which has several advantages: coding avoids range aliased echoes, which are often seen with commonly used pulsed specular meteor radars (SMRs); continuous transmissions maximize pulse compression gain, allowing operation with significantly lower peak transmit power; the temporal resolution can be changed after performing a measurement, as it does not depend on pulse spacing; and the low signal to noise ratio allows multiple geographically separated transmitters to be used in the same frequency band without significantly interfering with each other. The latter allows the same receiver antennas to be used to receive multiple transmitters. The principles of the signal processing are discussed, in addition to discussion of several practical ways to increase computation speed, and how to optimally detect meteor echoes. Measurements from a campaign performed with a coded continuous wave SMR are shown and compared with two standard pulsed SMR measurements. The type of meteor radar described in this paper would be suited for use in a large scale multi-static network of meteor radar transmitters and receivers. This would, for example, provide higher spatio-temporal resolution for mesospheric wind field measurements.

  4. Coded continuous wave meteor radar

    Science.gov (United States)

    Vierinen, J.; Chau, J. L.; Pfeffer, N.; Clahsen, M.; Stober, G.

    2015-07-01

    The concept of coded continuous wave meteor radar is introduced. The radar uses a continuously transmitted pseudo-random waveform, which has several advantages: coding avoids range aliased echoes, which are often seen with commonly used pulsed specular meteor radars (SMRs); continuous transmissions maximize pulse compression gain, allowing operation with significantly lower peak transmit power; the temporal resolution can be changed after performing a measurement, as it does not depend on pulse spacing; and the low signal to noise ratio allows multiple geographically separated transmitters to be used in the same frequency band without significantly interfering with each other. The latter allows the same receiver antennas to be used to receive multiple transmitters. The principles of the signal processing are discussed, in addition to discussion of several practical ways to increase computation speed, and how to optimally detect meteor echoes. Measurements from a campaign performed with a coded continuous wave SMR are shown and compared with two standard pulsed SMR measurements. The type of meteor radar described in this paper would be suited for use in a large scale multi-static network of meteor radar transmitters and receivers. This would, for example, provide higher spatio-temporal resolution for mesospheric wind field measurements.

  5. Investigating Performance of Various Natural Computing Algorithms

    Directory of Open Access Journals (Sweden)

    Bharat V. Chawda

    2017-01-01

    Full Text Available Nature has existed for millennia. Natural elements have withstood harsh complexities for ages and have proved their efficiency in tackling them. This aspect has inspired many researchers over the last couple of decades to design algorithms based on phenomena in the natural world. Such algorithms are known as natural computing algorithms or nature-inspired algorithms. These algorithms have established their ability to solve a large number of real-world complex problems by providing optimal solutions within a reasonable time. This paper presents an investigation assessing the performance of some well-known natural computing algorithms and their variations. These algorithms include Genetic Algorithms, Ant Colony Optimization, River Formation Dynamics, the Firefly Algorithm and Cuckoo Search. The Traveling Salesman Problem (TSP) is used here as a test-bed problem for the performance evaluation of these algorithms. It is a combinatorial optimization problem and one of the most famous NP-hard problems. It is simple and easy to understand but, at the same time, very difficult to solve optimally in a reasonable time, particularly as the number of cities increases. The source code for the above natural computing algorithms was developed in MATLAB R2015b and applied to several TSP instances given in the TSPLIB library. The results obtained are analyzed based on various criteria such as tour length, required iterations, convergence time and quality of solutions. Conclusions derived from this analysis help to establish the superiority of the Firefly Algorithm over the other algorithms in comparative terms.
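
    For reference, the sketch below (in Python rather than the paper's MATLAB) gives the common tour-length objective such comparisons use, plus a greedy nearest-neighbour baseline against which the nature-inspired tours can be judged; the random 50-city instance is illustrative, not a TSPLIB case.

    ```python
    import numpy as np

    def tour_length(cities, tour):
        """Total length of the closed tour visiting the city indices in order."""
        ordered = cities[np.array(tour)]
        return float(np.sum(np.linalg.norm(ordered - np.roll(ordered, -1, axis=0), axis=1)))

    def nearest_neighbour_tour(cities, start=0):
        """Greedy baseline: always visit the closest unvisited city next."""
        unvisited = set(range(len(cities))) - {start}
        tour = [start]
        while unvisited:
            last = cities[tour[-1]]
            nxt = min(unvisited, key=lambda j: np.linalg.norm(cities[j] - last))
            tour.append(nxt)
            unvisited.remove(nxt)
        return tour

    cities = np.random.default_rng(0).random((50, 2)) * 100.0   # random 50-city instance
    tour = nearest_neighbour_tour(cities)
    print(f"nearest-neighbour tour length: {tour_length(cities, tour):.1f}")
    ```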

  6. The path toward HEP High Performance Computing

    CERN Document Server

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of the peak one. Although several successful attempts have been made to port selected codes on GPUs, no major HEP code suite has a 'High Performance' implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP cannot any longer neglect the less-than-optimal performance of its code and it has to try making the best usage of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 project. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on th...

  7. Using phase for radar scatterer classification

    Science.gov (United States)

    Moore, Linda J.; Rigling, Brian D.; Penno, Robert P.; Zelnio, Edmund G.

    2017-04-01

    Traditional synthetic aperture radar (SAR) systems tend to discard phase information of formed complex radar imagery prior to automatic target recognition (ATR). This practice has historically been driven by available hardware storage, processing capabilities, and data link capacity. Recent advances in high performance computing (HPC) have enabled extremely dense storage and processing solutions. Therefore, previous motives for discarding radar phase information in ATR applications have been mitigated. First, we characterize the value of phase in one-dimensional (1-D) radar range profiles with respect to the ability to correctly estimate target features, which are currently employed in ATR algorithms for target discrimination. These features correspond to physical characteristics of targets through radio frequency (RF) scattering phenomenology. Physics-based electromagnetic scattering models developed from the geometrical theory of diffraction are utilized for the information analysis presented here. Information is quantified by the error of target parameter estimates from noisy radar signals when phase is either retained or discarded. Operating conditions (OCs) of signal-to-noise ratio (SNR) and bandwidth are considered. Second, we investigate the value of phase in 1-D radar returns with respect to the ability to correctly classify canonical targets. Classification performance is evaluated via logistic regression for three targets (sphere, plate, tophat). Phase information is demonstrated to improve radar target classification rates, particularly at low SNRs and low bandwidths.
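
    One way to see the value of phase discussed above: two scattering-center configurations that differ only in a reflection phase (for example, odd- versus even-bounce mechanisms) produce nearly identical magnitude range profiles but clearly different complex profiles, so the discriminating feature survives only if phase is retained. The stepped-frequency band and scatterer geometry below are assumptions for illustration, not the paper's scenarios.

    ```python
    import numpy as np

    freqs = np.linspace(9e9, 10e9, 128)      # assumed 1 GHz stepped-frequency band
    c = 3e8
    r1, r2 = 5.0, 8.0                        # two well-separated scattering centers (m)

    def range_profile(psi):
        """Complex 1-D range profile of two point scatterers; psi is the relative
        reflection phase of the second scatterer (0 = odd bounce, pi = even bounce)."""
        data = (np.exp(-1j * 4 * np.pi * freqs * r1 / c)
                + np.exp(1j * psi) * np.exp(-1j * 4 * np.pi * freqs * r2 / c))
        return np.fft.ifft(data)

    p_odd, p_even = range_profile(0.0), range_profile(np.pi)

    # Magnitude-only profiles are nearly indistinguishable, so the bounce feature is
    # lost; the complex profiles keep it in the phase of the second peak.
    print("max |magnitude difference|:", float(np.max(np.abs(np.abs(p_odd) - np.abs(p_even)))))
    print("max |complex difference|  :", float(np.max(np.abs(p_odd - p_even))))
    ```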

  8. 76 FR 60939 - Metal Fatigue Analysis Performed by Computer Software

    Science.gov (United States)

    2011-09-30

    ... COMMISSION Metal Fatigue Analysis Performed by Computer Software AGENCY: Nuclear Regulatory Commission... applicants' analyses and methodologies using the computer software package, WESTEMS TM , to demonstrate... by Computer Software Addressees All holders of, and applicants for, a power reactor operating...

  9. Performance Evaluation of Target Detection with a Near-Space Vehicle-Borne Radar in Blackout Condition.

    Science.gov (United States)

    Li, Yanpeng; Li, Xiang; Wang, Hongqiang; Deng, Bin; Qin, Yuliang

    2016-01-06

    Radar is a very important sensor in surveillance applications. Near-space vehicle-borne radar (NSVBR) is a novel installation of a radar system, which offers many benefits, like being highly suited to the remote sensing of extremely large areas, having a rapidly deployable capability and having low vulnerability to electronic countermeasures. Unfortunately, a target detection challenge arises because of complicated scenarios, such as nuclear blackout, rain attenuation, etc. In these cases, extra care is needed to evaluate the detection performance in blackout situations, since this is a classical problem associated with the application of an NSVBR. However, the existing evaluation measures are the probability of detection and the receiver operating curve (ROC), which cannot offer detailed information in such a complicated application. This work focuses on such requirements. We first investigate the effect of blackout on an electromagnetic wave. Performance evaluation indexes are then built: three evaluation indexes on the detection capability and two evaluation indexes on the robustness of the detection process. Simulation results show that the proposed measure will offer information on the detailed performance of detection. These measures are therefore very useful in detecting the target of interest in a remote sensing system and are helpful for both the NSVBR designers and users.

  10. Performance Evaluation of Target Detection with a Near-Space Vehicle-Borne Radar in Blackout Condition

    Directory of Open Access Journals (Sweden)

    Yanpeng Li

    2016-01-01

    Full Text Available Radar is a very important sensor in surveillance applications. Near-space vehicle-borne radar (NSVBR) is a novel installation of a radar system, which offers many benefits, like being highly suited to the remote sensing of extremely large areas, having a rapidly deployable capability and having low vulnerability to electronic countermeasures. Unfortunately, a target detection challenge arises because of complicated scenarios, such as nuclear blackout, rain attenuation, etc. In these cases, extra care is needed to evaluate the detection performance in blackout situations, since this is a classical problem associated with the application of an NSVBR. However, the existing evaluation measures are the probability of detection and the receiver operating curve (ROC), which cannot offer detailed information in such a complicated application. This work focuses on such requirements. We first investigate the effect of blackout on an electromagnetic wave. Performance evaluation indexes are then built: three evaluation indexes on the detection capability and two evaluation indexes on the robustness of the detection process. Simulation results show that the proposed measure will offer information on the detailed performance of detection. These measures are therefore very useful in detecting the target of interest in a remote sensing system and are helpful for both the NSVBR designers and users.

  11. Performance Aspects of Synthesizable Computing Systems

    DEFF Research Database (Denmark)

    Schleuniger, Pascal

    . However, high setup and design costs make ASICs economically viable only for high volume production. Therefore, FPGAs are increasingly being used in low and medium volume markets. The evolution of FPGAs has reached a point where multiple processor cores, dedicated accelerators, and a large number...... of interfaces can be integrated on a single device. This thesis consists of five parts that address performance aspects of synthesizable computing systems on FPGAs. First, it is evaluated how synthesizable processor cores can exploit current state-of-the-art FPGA architectures. This evaluation results...... in a processor architecture optimized for a high throughput on modern FPGA architectures. The current hardware implementation, the Tinuso I core, can be clocked as high as 376 MHz on a Xilinx Virtex 6 device and consumes fewer hardware resources than similar commercial processor configurations. The Tinuso...

  12. Performance evaluation of a computed radiography system

    Energy Technology Data Exchange (ETDEWEB)

    Roussilhe, J.; Fallet, E. [Carestream Health France, 71 - Chalon/Saone (France); Mango, St.A. [Carestream Health, Inc. Rochester, New York (United States)

    2007-07-01

    Computed radiography (CR) standards have been formalized and published in Europe and in the US. The CR system classification is defined in those standards by the minimum normalized signal-to-noise ratio (SNRN) and the maximum basic spatial resolution (SRb). Both the signal-to-noise ratio (SNR) and the contrast sensitivity of a CR system depend on the dose (exposure time and conditions) at the detector. Because of its wide dynamic range, the same storage phosphor imaging plate can qualify for all six CR system classes. The exposure characteristics from 30 to 450 kV, the contrast sensitivity, and the spatial resolution of the KODAK INDUSTREX CR Digital System have been thoroughly evaluated. This paper will present some of the factors that determine the system's spatial resolution performance. (authors)

  13. Computer modeling of thermoelectric generator performance

    Science.gov (United States)

    Chmielewski, A. B.; Shields, V.

    1982-01-01

    Features of the DEGRA 2 computer code for simulating the operations of a spacecraft thermoelectric generator are described. The code models the physical processes occurring during operation. Input variables include the thermoelectric couple geometry and composition, the thermoelectric materials' properties, interfaces and insulation in the thermopile, the heat source characteristics, mission trajectory, and generator electrical requirements. Time steps can be specified and sublimation of the leg and hot shoe is accounted for, as are shorts between legs. Calculations are performed for conduction, Peltier, Thomson, and Joule heating, the cold junction can be adjusted for solar radiation, and the legs of the thermoelectric couple are segmented to enhance the approximation accuracy. A trial run covering 18 couple modules yielded data with 0.3% accuracy with regard to test data. The model has been successful with selenide materials, SiGe, and SiN4, with output of all critical operational variables.

  14. Scalable resource management in high performance computers.

    Energy Technology Data Exchange (ETDEWEB)

    Frachtenberg, E. (Eitan); Petrini, F. (Fabrizio); Fernandez Peinador, J. (Juan); Coll, S. (Salvador)

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly-available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and for distributing the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movements from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12MB on a 64 processor/32 node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.

  15. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book allows the performance levels and usability of a variety of supercomputer architectures to be compared. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  16. Development and Performance of an Ultrawideband Stepped-Frequency Radar for Landmine and Improvised Explosive Device (IED) Detection

    Science.gov (United States)

    Phelan, Brian R.; Gallagher, Kyle A.; Sherbondy, Kelly D.; Ranney, Kenneth I.; Narayanan, Ram M.

    2014-11-01

    Under support from the Army Research Laboratory's Partnerships in Research Transition program, a stepped-frequency radar (SFR) is currently under development, which allows for manipulation of the radiated spectrum while still maintaining an effective ultra-wide bandwidth. The SFR is a vehicle-mounted forward-looking ground-penetrating radar designed for high-resolution detection of buried landmines and improvised explosive devices. The SFR can be configured to precisely excise prohibited or interfering frequency bands and also possesses frequency-hopping capabilities. This paper discusses the expected performance features of the SFR as derived from laboratory testing and characterization. Ghosts and artifacts appearing in the range profile arise from gaps in the operating band when the system is configured to omit specific frequencies. An analysis of these effects is discussed and our current solution is presented. Future prospects for the SFR are also discussed, including data collection campaigns at the Army's Adelphi Laboratory Center and the Countermine Test Site.

  17. A Novel Blind Source Separation Algorithm and Performance Analysis of Weak Signal against Strong Interference in Passive Radar Systems

    Directory of Open Access Journals (Sweden)

    Chengjie Li

    2016-01-01

    Full Text Available In passive radar systems, obtaining the mixed weak object signal against a super-power signal (jamming) is still a challenging task. In this paper, a novel framework based on a passive radar system is designed for weak object signal separation. Firstly, we propose an Interference Cancellation algorithm (IC-algorithm) to extract the mixed weak object signals from the strong jamming. Then, an improved FastICA algorithm with K-means clustering is designed to separate each weak signal from the mixed weak object signals. Finally, we discuss the performance of the proposed method and verify the novel method based on several simulations. The experimental results demonstrate the effectiveness of the proposed method.

  18. International Conference on Modern Mathematical Methods and High Performance Computing in Science and Technology

    CERN Document Server

    Srivastava, HM; Venturino, Ezio; Resch, Michael; Gupta, Vijay

    2016-01-01

    The book discusses important results in modern mathematical models and high performance computing, such as applied operations research, simulation of operations, statistical modeling and applications, invisibility regions and regular meta-materials, unmanned vehicles, modern radar techniques/SAR imaging, satellite remote sensing, coding, and robotic systems. Furthermore, it is valuable as a reference work and as a basis for further study and research. All contributing authors are respected academicians, scientists and researchers from around the globe. All the papers were presented at the international conference on Modern Mathematical Methods and High Performance Computing in Science & Technology (M3HPCST 2015), held at Raj Kumar Goel Institute of Technology, Ghaziabad, India, from 27–29 December 2015, and peer-reviewed by international experts. The conference provided an exceptional platform for leading researchers, academicians, developers, engineers and technocrats from a broad range of disciplines ...

  19. Compressive Detection Using Sub-Nyquist Radars for Sparse Signals

    Directory of Open Access Journals (Sweden)

    Ying Sun

    2016-01-01

    Full Text Available This paper investigates the compressive detection problem using sub-Nyquist radars, which are well suited to high-bandwidth, real-time processing scenarios because they significantly reduce the computational burden, power consumption and computation time. A compressive generalized likelihood ratio test (GLRT) detector for sparse signals is proposed for sub-Nyquist radars without ever reconstructing the signal involved. The performance of the compressive GLRT detector is analyzed and the theoretical bounds are presented. The compressive GLRT detection performance of sub-Nyquist radars is also compared to the traditional GLRT detection performance of conventional radars, which employ traditional analog-to-digital conversion (ADC) at Nyquist sampling rates. Simulation results demonstrate that the former can perform almost as well as the latter with a very small fraction of the number of measurements required by traditional detection in relatively high signal-to-noise ratio (SNR) cases.
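    As a rough illustration of the kind of statistic involved (generic notation, assumed here rather than taken from the paper), a compressive GLRT for a known signature s with unknown amplitude, observed as y = \Phi(\theta s + n) with white noise and a measurement matrix \Phi whose rows are orthonormal, reduces to

        T(\mathbf{y}) = \frac{\lvert(\boldsymbol{\Phi}\mathbf{s})^{H}\mathbf{y}\rvert^{2}}{\sigma^{2}\,\lVert\boldsymbol{\Phi}\mathbf{s}\rVert^{2}} \;\underset{H_0}{\overset{H_1}{\gtrless}}\; \eta,

    i.e. a matched filter applied directly in the compressed domain; no Nyquist-rate reconstruction of the signal is required, which is the point made above.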

  20. Performance of high-resolution X-band radar for rainfall measurement in the Netherlands

    NARCIS (Netherlands)

    Beek, van de C.Z.; Leijnsel, H.; Stricker, J.N.M.; Uijlenhoet, R.; Russchenberg, H.W.J.

    2010-01-01

    This study presents an analysis of 195 rainfall events gathered with the X-band weather radar SOLIDAR and a tipping bucket rain gauge network near Delft, The Netherlands, between May 1993 and April 1994. The aim of this paper is to present a thorough analysis of a climatological dataset using a high

  1. Performance of high-resolution X-band radar for rainfall measurement in The Netherlands

    NARCIS (Netherlands)

    Van de Beek, C.Z.; Leijnse, H.; Stricker, J.N.M.; Uijlenhoet, R.; Russchenberg, H.W.J.

    2010-01-01

    This study presents an analysis of 195 rainfall events gathered with the X-band weather radar SOLIDAR and a tipping bucket rain gauge network near Delft, The Netherlands, between May 1993 and April 1994. The aim of this paper is to present a thorough analysis of a climatological dataset using a high

  2. Performance Analysis of a High Resolution Airborne FM-CW Synthetic Aperture Radar

    NARCIS (Netherlands)

    Wit, J.J.M. de; Hoogeboom, P.

    2003-01-01

    Compact FM-CW technology combined with high resolution SAR techniques should pave the way for a small and cost effective imaging radar. A research project has been initiated to investigate the feasibility of FM-CW SAR. Within the framework of the project an operational airborne FM-CW SAR demonstrator

  3. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  4. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  5. Evaluating iterative reconstruction performance in computed tomography.

    Science.gov (United States)

    Chen, Baiyu; Ramirez Giraldo, Juan Carlos; Solomon, Justin; Samei, Ehsan

    2014-12-01

    Iterative reconstruction (IR) offers notable advantages in computed tomography (CT). However, its performance characterization is complicated by its potentially nonlinear behavior, impacting performance in terms of specific tasks. This study aimed to evaluate the performance of IR with both task-specific and task-generic strategies. The performance of IR in CT was mathematically assessed with an observer model that predicted the detection accuracy in terms of the detectability index (d'). d' was calculated based on the properties of the image noise and resolution, the observer, and the detection task. The characterizations of image noise and resolution were extended to accommodate the nonlinearity of IR. A library of tasks was mathematically modeled at a range of sizes (radius 1-4 mm), contrast levels (10-100 HU), and edge profiles (sharp and soft). Unique d' values were calculated for each task with respect to five radiation exposure levels (volume CT dose index, CTDIvol: 3.4-64.8 mGy) and four reconstruction algorithms (filtered backprojection reconstruction, FBP; iterative reconstruction in imaging space, IRIS; and sinogram affirmed iterative reconstruction with strengths of 3 and 5, SAFIRE3 and SAFIRE5; all provided by Siemens Healthcare, Forchheim, Germany). The d' values were translated into the areas under the receiver operating characteristic curve (AUC) to represent human observer performance. For each task and reconstruction algorithm, a threshold dose was derived as the minimum dose required to achieve a threshold AUC of 0.9. A task-specific dose reduction potential of IR was calculated as the difference between the threshold doses for IR and FBP. A task-generic comparison was further made between IR and FBP in terms of the percent of all tasks yielding an AUC higher than the threshold. IR required less dose than FBP to achieve the threshold AUC. In general, SAFIRE5 showed the most significant dose reduction potentials (11-54 mGy, 77%-84%), followed by
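    For context, task-based detectability indices of this kind are typically computed from a task function W_task, the task transfer function (TTF) and the noise power spectrum (NPS); a representative non-prewhitening form (a generic illustration, not necessarily the exact observer model used in the study) is

        d'^2 = \frac{\left[\iint \lvert W_{task}(u,v)\rvert^{2}\,\mathrm{TTF}^{2}(u,v)\,du\,dv\right]^{2}}{\iint \lvert W_{task}(u,v)\rvert^{2}\,\mathrm{TTF}^{2}(u,v)\,\mathrm{NPS}(u,v)\,du\,dv},

    where the TTF and NPS are characterized separately for each algorithm and dose level, which is how the nonlinearity of IR enters the calculation.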

  6. Radar clutter classification

    Science.gov (United States)

    Stehwien, Wolfgang

    1989-11-01

    The problem of classifying radar clutter as found on air traffic control radar systems is studied. An algorithm based on Bayes decision theory and the parametric maximum a posteriori probability classifier is developed to perform this classification automatically. This classifier employs a quadratic discriminant function and is optimum for feature vectors that are distributed according to the multivariate normal density. Separable clutter classes are most likely to arise from the analysis of the Doppler spectrum. Specifically, a feature set based on the complex reflection coefficients of the lattice prediction error filter is proposed. The classifier is tested using data recorded from L-band air traffic control radars. The Doppler spectra of these data are examined; the properties of the feature set computed using these data are studied in terms of both the marginal and multivariate statistics. Several strategies involving different numbers of features, class assignments, and data set pretesting according to Doppler frequency and signal to noise ratio were evaluated before settling on a workable algorithm. Final results are presented in terms of experimental misclassification rates and simulated and classified plane position indicator displays.
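    The quadratic discriminant function referred to above has the standard Gaussian-Bayes form (written here in generic notation):

        g_i(\mathbf{x}) = -\tfrac{1}{2}(\mathbf{x}-\boldsymbol{\mu}_i)^{T}\boldsymbol{\Sigma}_i^{-1}(\mathbf{x}-\boldsymbol{\mu}_i) - \tfrac{1}{2}\ln\lvert\boldsymbol{\Sigma}_i\rvert + \ln P(\omega_i),

    where each clutter class \omega_i is described by the mean \mu_i and covariance \Sigma_i of its feature vectors (here, reflection coefficients of the lattice prediction error filter), and a measured vector x is assigned to the class with the largest g_i(x).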

  7. Interference Suppression Performance Comparison between Colocated MIMO Radar and Phased Array Radar

    Institute of Scientific and Technical Information of China (English)

    李涛

    2016-01-01

    In order to compare the interference suppression performance of traditional phased array radar and colocated multiple-input multiple-output (MIMO) radar, this paper derives the signal-to-interference-plus-noise ratio (SINR) output and the improvement factor for colocated MIMO radar and phased array radar in theory. Numerical simulation indicates that colocated MIMO radar achieves better interference suppression performance through an increased SINR output.

  8. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems...... that involve several types of numerical computations. The computers considered in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216...

  9. Computational Analysis of Safety Injection Tank Performance

    Energy Technology Data Exchange (ETDEWEB)

    Cho, Jai Oan; Nietiadia, Yohanes Setiawan; Lee, Jeong Ik [KAIST, Daejeon (Korea, Republic of); Addad, Yacine; Yoon, Ho Joon [Khalifa University of Science Technology and Research, Abu Dhabi (United Arab Emirates)

    2015-10-15

    The APR 1400 is a large pressurized water reactor (PWR). Just like many other water reactors, it has an emergency core cooling system (ECCS). One of the most important components in the ECCS is the safety injection tank (SIT). Inside the SIT, a fluidic device is installed, which passively controls the mass flow of the safety injection and eliminates the need for low pressure safety injection pumps. As more passive safety mechanisms are being pursued, it has become more important to understand the flow structure and the loss mechanism within the fluidic device. Current computational fluid dynamics (CFD) calculations have had limited success in predicting the fluid flow accurately. This study proposes to find a more exact result using CFD and more realistic modeling. The SIT of the APR1400 was analyzed using MARS and CFD. A CFD calculation was executed first to obtain the form loss factor. Using the two form loss factors, one from the vendor and one from the CFD calculation, MARS calculations were performed for comparison with experiment. The accumulator model in MARS was quite accurate in predicting the water level. The pipe model showed some difference with the experimental data in the water level.

  10. Efficient High Performance Computing on Heterogeneous Platforms

    NARCIS (Netherlands)

    Shen, J.

    2015-01-01

    Heterogeneous platforms are mixes of different processing units in a compute node (e.g., CPUs+GPUs, CPU+MICs) or a chip package (e.g., APUs). This type of platforms keeps gaining popularity in various computer systems ranging from supercomputers to mobile devices. In this context, improving their

  11. Efficient High Performance Computing on Heterogeneous Platforms

    NARCIS (Netherlands)

    Shen, J.

    2015-01-01

    Heterogeneous platforms are mixes of different processing units in a compute node (e.g., CPUs+GPUs, CPU+MICs) or a chip package (e.g., APUs). This type of platforms keeps gaining popularity in various computer systems ranging from supercomputers to mobile devices. In this context, improving their ef

  12. Computation of Three-Dimensional Combustor Performance

    Science.gov (United States)

    Srivatsa, S.

    1985-01-01

    Existing steady-state 3-D computer program for calculating gas-turbine flow fields modified to include computation of soot and nitrogen oxide emissions. In addition, radiation calculation corrected for soot particles. These advanced tools offer the potential of reducing design and development time required for gas-turbine combustors.

  13. Performance comparison of pulse-pair and wavelets methods for the pulse Doppler weather radar spectrum

    CERN Document Server

    Lagha, Mohand; Bergheul, Said; Rezoug, Tahar; Bettayeb, Maamar

    2012-01-01

    In the civilian aviation field, the radar detection of hazardous weather phenomena (winds) is very important. This detection will allow the avoidance of these phenomena and consequently will enhance the safety of flights. In this work, we have used the wavelets method to estimate the mean velocity of winds. The results showed that the application of this method is promising compared with the classical estimators (pulse-pair, Fourier).

  14. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems...

  15. Parallel Backprojection: A Case Study in High-Performance Reconfigurable Computing

    Directory of Open Access Journals (Sweden)

    Cordes Ben

    2009-01-01

    Full Text Available High-performance reconfigurable computing (HPRC) is a novel approach to provide large-scale computing power to modern scientific applications. Using both general-purpose processors and FPGAs allows application designers to exploit fine-grained and coarse-grained parallelism, achieving high degrees of speedup. One scientific application that benefits from this technique is backprojection, an image formation algorithm that can be used as part of a synthetic aperture radar (SAR) processing system. We present an implementation of backprojection for SAR on an HPRC system. Using simulated data taken at a variety of ranges, our implementation runs over 200 times faster than a similar software program, with an overall application speedup better than 50x. The backprojection application is easily parallelizable, achieving near-linear speedup when run on multiple nodes of a clustered HPRC system. The results presented can be applied to other systems and other algorithms with similar characteristics.
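    To make the parallelization argument concrete, the sketch below shows the pixel-level structure of time-domain backprojection that such implementations exploit; it is a simplified, self-contained Python illustration (array names and geometry are assumptions, not the authors' code).

        import numpy as np

        def backproject(range_profiles, platform_pos, pixels, fc, dr, c=3e8):
            """Naive time-domain SAR backprojection (illustrative only).

            range_profiles : (n_pulses, n_bins) complex range-compressed pulses
            platform_pos   : (n_pulses, 3) antenna phase-centre position per pulse
            pixels         : (n_pixels, 3) scene pixel positions
            fc             : carrier frequency [Hz]
            dr             : range-bin spacing [m]
            """
            n_pulses, n_bins = range_profiles.shape
            image = np.zeros(len(pixels), dtype=complex)
            for p in range(n_pulses):                               # pulses are independent: the
                d = np.linalg.norm(pixels - platform_pos[p], axis=1)  # natural unit of parallel work
                idx = np.clip(np.round(d / dr).astype(int), 0, n_bins - 1)
                phase = np.exp(4j * np.pi * fc * d / c)             # undo two-way propagation phase
                image += range_profiles[p, idx] * phase
            return image

    Because the accumulation over pulses (or, equivalently, over pixels) has no data dependencies, the work can be split across FPGA kernels or cluster nodes, which is the source of the near-linear speedup reported above.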

  16. Parallel Backprojection: A Case Study in High-Performance Reconfigurable Computing

    Directory of Open Access Journals (Sweden)

    2009-03-01

    Full Text Available High-performance reconfigurable computing (HPRC) is a novel approach to provide large-scale computing power to modern scientific applications. Using both general-purpose processors and FPGAs allows application designers to exploit fine-grained and coarse-grained parallelism, achieving high degrees of speedup. One scientific application that benefits from this technique is backprojection, an image formation algorithm that can be used as part of a synthetic aperture radar (SAR) processing system. We present an implementation of backprojection for SAR on an HPRC system. Using simulated data taken at a variety of ranges, our implementation runs over 200 times faster than a similar software program, with an overall application speedup better than 50x. The backprojection application is easily parallelizable, achieving near-linear speedup when run on multiple nodes of a clustered HPRC system. The results presented can be applied to other systems and other algorithms with similar characteristics.

  17. Monte Carlo computations of F-region incoherent radar spectra at high latitudes and the use of a simple method for non-Maxwellian spectral calculations

    Science.gov (United States)

    Kikuchi, K.; Barakat, A.; St-Maurice, J.-P.

    1989-01-01

    Monte Carlo simulations of ion velocity distributions in the high-latitude F region have been performed in order to improve the calculation of incoherent radar spectra in the auroral ionosphere. The results confirm that when the ion temperature becomes large due to frictional heating in the presence of collisions with the neutral background constituent, F region spectra evolve from a normal double hump, to a triple hump, to a spectrum with a single maximum. An empirical approach is developed to overcome the inadequacy of the Maxwellian assumption for the case of radar aspect angles of between 30 and 70 deg.

  18. Tomographic measurement of temperature change in phantoms of the human body by chirp radar-type microwave computed tomography.

    Science.gov (United States)

    Miyakawa, M

    1993-07-01

    The chirp radar-type microwave computed tomograph (CT) measures the temperature change in a human body noninvasively. The paper examines its feasibility. A chirp pulse signal between 1 and 2 GHz is radiated from the transmitting antenna to the phantom. The transmitted waves are detected by the receiving antenna, which is placed on the opposite side of the object, and the beat signal between the incident wave and the transmitted wave is produced by the mixer. By spectral analysis of the beat signal, only those signals transmitted on the straight line between the transmitting antenna and the receiving antenna are discriminated from multipath signals. The microwave tomogram can therefore be reconstructed easily using the conventional algorithms for an X-ray CT image. The microwave CT can use the chirp signal to remove the influence of multipath signals caused by diffraction and reflection. The imaging of dielectric materials with complicated structures is thus possible. The experimental results using phantoms show that the spatial resolution of this microwave CT is about 10 mm and that a two-dimensional distribution of temperature change can be measured.
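    The underlying chirp (FM-CW) relationship is worth recalling (symbols assumed for illustration): for a chirp of bandwidth B swept over duration T, a propagation delay \tau between the transmitting and receiving antennas produces a beat frequency

        f_b = \frac{B}{T}\,\tau,

    so spectral analysis of the beat signal separates the straight-line (shortest-delay) contribution from longer multipath contributions, which is what allows conventional straight-ray CT reconstruction algorithms to be applied.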

  19. Detecting Faults in Southern California using Computer-Vision Techniques and Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) Interferometry

    Science.gov (United States)

    Barba, M.; Rains, C.; von Dassow, W.; Parker, J. W.; Glasscoe, M. T.

    2013-12-01

    Knowing the location and behavior of active faults is essential for earthquake hazard assessment and disaster response. In Interferometric Synthetic Aperture Radar (InSAR) images, faults are revealed as linear discontinuities. Currently, interferograms are manually inspected to locate faults. During the summer of 2013, the NASA-JPL DEVELOP California Disasters team contributed to the development of a method to expedite fault detection in California using remote-sensing technology. The team utilized InSAR images created from polarimetric L-band data from NASA's Uninhabited Aerial Vehicle Synthetic Aperture Radar (UAVSAR) project. A computer-vision technique known as 'edge-detection' was used to automate the fault-identification process. We tested and refined an edge-detection algorithm under development through NASA's Earthquake Data Enhanced Cyber-Infrastructure for Disaster Evaluation and Response (E-DECIDER) project. To optimize the algorithm we used both UAVSAR interferograms and synthetic interferograms generated through Disloc, a web-based modeling program available through NASA's QuakeSim project. The edge-detection algorithm detected seismic, aseismic, and co-seismic slip along faults that were identified and compared with databases of known fault systems. Our optimization process was the first step toward integration of the edge-detection code into E-DECIDER to provide decision support for earthquake preparation and disaster management. E-DECIDER partners that will use the edge-detection code include the California Earthquake Clearinghouse and the US Department of Homeland Security through delivery of products using the Unified Incident Command and Decision Support (UICDS) service. Through these partnerships, researchers, earthquake disaster response teams, and policy-makers will be able to use this new methodology to examine the details of ground and fault motions for moderate to large earthquakes. Following an earthquake, the newly discovered faults can

  20. Power centroid radar and its rise from the universal cybernetics duality

    Science.gov (United States)

    Feria, Erlan H.

    2014-05-01

    Power centroid radar (PC-Radar) is a fast and powerful adaptive radar scheme that naturally surfaced from the recent discovery of the time dual of information theory, which has been named "latency theory." Latency theory itself was born from the universal cybernetics duality (UC-Duality), first identified in the late 1970s, that has also delivered a time dual for thermodynamics, named "lingerdynamics," which anchors an emerging lifespan theory for biological systems. In this paper the rise of PC-Radar from the UC-Duality is described. The development of PC-Radar, which is US patented, started with Defense Advanced Research Projects Agency (DARPA) funded research on knowledge-aided (KA) adaptive radar of the last decade. The outstanding signal to interference plus noise ratio (SINR) performance of PC-Radar under severely taxing environmental disturbances will be established. More specifically, it will be seen that the SINR performance of PC-Radar, either KA or knowledge-unaided (KU), approximates that of an optimum KA radar scheme. The explanation for this remarkable result is that PC-Radar inherently arises from the UC-Duality, which advances a "first principles" duality guidance theory for the derivation of synergistic storage-space/computational-time compression solutions. Real-world synthetic aperture radar (SAR) images will be used as prior knowledge to illustrate these results.

  1. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  2. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    Science.gov (United States)

    Faraj, Ahmad [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
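    A minimal sketch of the two-level structure described above (a sequential Python simulation of the logical rings, not the patented implementation; the layout and function names are assumptions):

        # Hypothetical layout: nodes[i][j] is the contribution of core j on node i.
        def allreduce_two_level(nodes, op=lambda a, b: a + b):
            """Two-level allreduce: one global ring per core index, then a local
            reduction on every node (illustrative, executed sequentially)."""
            n_nodes = len(nodes)
            n_cores = len(nodes[0])
            # Phase 1: each logical ring contains core j of every compute node.
            ring_results = []
            for j in range(n_cores):
                acc = nodes[0][j]
                for i in range(1, n_nodes):        # pass partial results around the ring
                    acc = op(acc, nodes[i][j])
                ring_results.append(acc)
            # Phase 2: each node combines the per-ring results locally.
            total = ring_results[0]
            for r in ring_results[1:]:
                total = op(total, r)
            return total

        # Example: 4 compute nodes x 2 cores, each core contributes 1 -> result is 8.
        print(allreduce_two_level([[1, 1]] * 4))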

  3. Performance Measurement of Cloud Computing Services

    CERN Document Server

    Suakanto, Sinung; Suhardi,; Saragih, Roberd

    2012-01-01

    Cloud computing has now grown into both a new technology and a new business model. From a distributed-systems perspective, cloud computing most resembles client-server services such as web-based or web-service applications, but it uses virtual resources for execution. Currently, cloud computing relies on the use of elastic virtual machines and on the network for data exchange. We conduct an experimental setup to measure the quality of service received by cloud computing customers. The experimental setup is done by creating an HTTP service that runs in the cloud computing infrastructure. We are interested in the impact of increasing the number of users on the average quality received by users. The quality received by users is measured with two parameters: the average response time and the number of request timeouts. Experimental results of this study show that increasing the number of users increases the average response time. Similarly, the number of request timeouts increases with an increasing number of users. It m...

  4. Micropower impulse radar imaging

    Energy Technology Data Exchange (ETDEWEB)

    Hall, M.S.

    1995-11-01

    Designs developed at the Lawrence Livermore National Laboratory (LLNL) in radar and imaging technologies have the potential for a variety of applications in both the public and private sectors. Presently, tests are being conducted for the detection of buried mines and the analysis of civil structures. These new systems use a patented ultra-wideband (impulse) radar technology known as Micropower Impulse Radar (MIR), used in ground-penetrating radar (GPR) imaging systems. LLNL has also developed signal processing software capable of producing 2-D and 3-D images of objects embedded in materials such as soil, wood and concrete. My assignment while at LLNL has focused on the testing of different radar configurations and applications, as well as assisting in the creation of computer algorithms which enable the radar to scan target areas of different geometries.

  5. Gigaflop (billion floating point operations per second) performance for computational electromagnetics

    Science.gov (United States)

    Shankar, V.; Rowell, C.; Hall, W. F.; Mohammadian, A. H.; Schuh, M.; Taylor, K.

    1992-01-01

    Accurate and rapid evaluation of radar signature for alternative aircraft/store configurations would be of substantial benefit in the evolution of integrated designs that meet radar cross-section (RCS) requirements across the threat spectrum. Finite-volume time domain methods offer the possibility of modeling the whole aircraft, including penetrable regions and stores, at longer wavelengths on today's gigaflop supercomputers and at typical airborne radar wavelengths on the teraflop computers of tomorrow. A structured-grid finite-volume time domain computational fluid dynamics (CFD)-based RCS code has been developed at the Rockwell Science Center, and this code incorporates modeling techniques for general radar absorbing materials and structures. Using this work as a base, the goal of the CFD-based CEM effort is to define, implement and evaluate various code development issues suitable for rapid prototype signature prediction.

  6. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Science.gov (United States)

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  7. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Science.gov (United States)

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  8. High-Performance Cloud Computing: A View of Scientific Applications

    CERN Document Server

    Vecchiola, Christian; Buyya, Rajkumar

    2009-01-01

    Scientific computing often requires the availability of a massive number of computers for performing large scale experiments. Traditionally, these needs have been addressed by using high-performance computing solutions and installed facilities such as clusters and supercomputers, which are difficult to set up, maintain, and operate. Cloud computing provides scientists with a completely new model of utilizing the computing infrastructure. Compute resources, storage resources, as well as applications, can be dynamically provisioned (and integrated within the existing infrastructure) on a pay-per-use basis. These resources can be released when they are no longer needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensures the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers to users the desired QoS. Its flexible and service based infrastructure...

  9. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    Science.gov (United States)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two part model framework characterizes both the demand using a probability distribution for each type of service request as well as enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.

  10. Progress and Challenges in High Performance Computer Technology

    Institute of Scientific and Technical Information of China (English)

    Xue-Jun Yang; Yong Dou; Qing-Feng Hu

    2006-01-01

    High performance computers provide strategic computing power for the construction of the national economy and defense, and have become one of the symbols of a country's overall strength. Over the past 30 years, with the support of governments, high performance computer technology has developed rapidly, during which computing performance has increased nearly 3 million times and the number of processors has expanded by more than a factor of one million. To solve the critical issues related to parallel efficiency and scalability, scientific researchers pursued extensive theoretical studies and technical innovations. The paper briefly looks back at the course of building high performance computer systems both at home and abroad, and summarizes the significant breakthroughs in international high performance computer technology. We also review China's technological progress in the areas of parallel computer architecture, parallel operating systems and resource management, parallel compilers and performance optimization, and environments for parallel programming and network computing. Finally, we examine the challenging issues of the "memory wall", system scalability and the "power wall", and discuss high productivity computers, which are the trend in building next-generation high performance computers.

  11. Java Radar Analysis Tool

    Science.gov (United States)

    Zaczek, Mariusz P.

    2005-01-01

    Java Radar Analysis Tool (JRAT) is a computer program for analyzing two-dimensional (2D) scatter plots derived from radar returns showing pieces of the disintegrating Space Shuttle Columbia. JRAT can also be applied to similar plots representing radar returns showing aviation accidents, and to scatter plots in general. The 2D scatter plots include overhead map views and side altitude views. The superposition of points in these views makes searching difficult. JRAT enables three-dimensional (3D) viewing: by use of a mouse and keyboard, the user can rotate to any desired viewing angle. The 3D view can include overlaid trajectories and search footprints to enhance situational awareness in searching for pieces. JRAT also enables playback: time-tagged radar-return data can be displayed in time order and an animated 3D model can be moved through the scene to show the locations of the Columbia (or other vehicle) at the times of the corresponding radar events. The combination of overlays and playback enables the user to correlate a radar return with a position of the vehicle to determine whether the return is valid. JRAT can optionally filter single radar returns, enabling the user to selectively hide or highlight a desired radar return.

  12. Advanced Certification Program for Computer Graphic Specialists. Final Performance Report.

    Science.gov (United States)

    Parkland Coll., Champaign, IL.

    A pioneer program in computer graphics was implemented at Parkland College (Illinois) to meet the demand for specialized technicians to visualize data generated on high performance computers. In summer 1989, 23 students were accepted into the pilot program. Courses included C programming, calculus and analytic geometry, computer graphics, and…

  13. Quantum radar

    CERN Document Server

    Lanzagorta, Marco

    2011-01-01

    This book offers a concise review of quantum radar theory. Our approach is pedagogical, making emphasis on the physics behind the operation of a hypothetical quantum radar. We concentrate our discussion on the two major models proposed to date: interferometric quantum radar and quantum illumination. In addition, this book offers some new results, including an analytical study of quantum interferometry in the X-band radar region with a variety of atmospheric conditions, a derivation of a quantum radar equation, and a discussion of quantum radar jamming.This book assumes the reader is familiar w

  14. The missing cone problem in computer tomography and a model for interpolation in synthetic aperture radar

    Science.gov (United States)

    Hayner, D. A.

    1983-07-01

    The first part of this thesis considers the missing cone problem in computer tomography. In this problem, an incomplete set of projection data is available from which an image must be reconstructed. The object of the algorithms presented in this thesis is to reconstruct a higher quality image than that obtainable by treating the projections as the only source of information concerning the image to be generated. This is accomplished by treating the problem in terms of spectral extrapolation. With this interpretation, various assumptions concerning the image and other forms of a priori information can be included in the data set to increase the total information available. In order to understand the subtleties of these enhancement algorithms, the spectral extrapolation techniques employed must be well understood. A result of studying the Gerchberg and Papoulis extrapolation techniques is that either can be characterized as a contraction mapping for any realizable discrete implementation. Furthermore, it is theoretically derived and experimentally verified that these algorithms will in general obtain an optimal solution prior to converging to the unique fixed point.
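    As background, the Gerchberg-Papoulis iteration alternates between enforcing the measured spectral samples and the known spatial support. A compact sketch, assuming a 1-D signal and boolean masks purely for illustration:

        import numpy as np

        def gerchberg_papoulis(known_spectrum, known_mask, support_mask, n_iter=50):
            """Extrapolate missing spectral samples of a support-limited signal.

            known_spectrum : complex spectrum, valid only where known_mask is True
            known_mask     : boolean mask of measured frequency samples
            support_mask   : boolean mask of the signal's spatial support
            """
            spectrum = np.where(known_mask, known_spectrum, 0.0)
            for _ in range(n_iter):
                x = np.fft.ifft(spectrum)
                x *= support_mask                                    # enforce spatial-support constraint
                spectrum = np.fft.fft(x)
                spectrum[known_mask] = known_spectrum[known_mask]    # re-impose measured data
            return spectrum

    Each pass projects onto a constraint set, which is consistent with the thesis' observation that any realizable discrete implementation behaves as a contraction mapping.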

  15. Performing quantum computing experiments in the cloud

    Science.gov (United States)

    Devitt, Simon J.

    2016-09-01

    Quantum computing technology has reached a second renaissance in the past five years. Increased interest from both the private and public sector combined with extraordinary theoretical and experimental progress has solidified this technology as a major advancement in the 21st century. As anticipated by many, some of the first realizations of quantum computing technology have occurred over the cloud, with users logging onto dedicated hardware over the classical internet. Recently, IBM has released the Quantum Experience, which allows users to access a five-qubit quantum processor. In this paper we take advantage of this online availability of actual quantum hardware and present four quantum information experiments. We utilize the IBM chip to realize protocols in quantum error correction, quantum arithmetic, quantum graph theory, and fault-tolerant quantum computation by accessing the device remotely through the cloud. While the results are subject to significant noise, the correct results are returned from the chip. This demonstrates the power of experimental groups opening up their technology to a wider audience and will hopefully allow for the next stage of development in quantum information technology.

  16. High Performance Spaceflight Computing (HPSC) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In 2012, the NASA Game Changing Development Program (GCDP), residing in the NASA Space Technology Mission Directorate (STMD), commissioned a High Performance...

  17. A high performance scientific cloud computing environment for materials simulations

    CERN Document Server

    Jorissen, Kevin; Rehr, John J

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditi...

  18. Intro - High Performance Computing for 2015 HPC Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Klitsner, Tom [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    The recent Executive Order creating the National Strategic Computing Initiative (NSCI) recognizes the value of high performance computing for economic competitiveness and scientific discovery and commits to accelerate delivery of exascale computing. The HPC programs at Sandia, the NNSA ASC program and Sandia's Institutional HPC Program, are focused on ensuring that Sandia has the resources necessary to deliver computation in the national interest.

  19. Parallel Structure Based on Multi-Core Computing for Radar System Simulation

    Institute of Scientific and Technical Information of China (English)

    王磊; 卢显良; 陈明燕; 张伟; 张顺生

    2014-01-01

    To address the efficiency bottleneck of radar echo generation and signal processing under a serial simulation architecture, a multi-data-link computing model based on a shared-memory multi-core platform is proposed. This method can greatly improve simulation efficiency by taking advantage of multiple cores. Exploiting the mutual independence of radar tasks within the same scheduling interval, the model addresses data division, task allocation, time synchronization, and load monitoring and measurement to characterize its parallel behavior. A Pentium(R) Dual-Core E5200 CPU with 2 GB of memory is used to test a target scene with 20 batches. Simulation results demonstrate that, compared with serial simulation, the average data frame processing time of the parallel model decreases by 37.5% and the data frame processing speedup ratio curve shows good acceleration characteristics. This parallel algorithm can greatly reduce radar system simulation time.
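    A rough sense of the data-frame parallelism described above (a Python multiprocessing stand-in for the shared-memory multi-core model; the function is a placeholder, not the authors' code):

        from multiprocessing import Pool

        def process_frame(frame):
            """Placeholder for echo generation + signal processing of one radar data frame."""
            # A real simulator would perform pulse compression, Doppler processing, etc.
            return sum(x * x for x in frame)

        if __name__ == "__main__":
            frames = [[float(i + j) for j in range(1024)] for i in range(20)]  # 20 independent batches
            with Pool(processes=2) as pool:            # one worker per physical core (e.g. a dual-core CPU)
                results = pool.map(process_frame, frames)
            print(len(results), "frames processed")

    Because frames within a scheduling interval are independent, the map over frames is the natural unit of parallel work, mirroring the data-division strategy described above.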

  20. CRPC research into linear algebra software for high performance computers

    Energy Technology Data Exchange (ETDEWEB)

    Choi, J.; Walker, D.W. [Oak Ridge National Lab., TN (United States). Mathematical Sciences Section; Dongarra, J.J. [Univ. of Tennessee, Knoxville, TN (United States). Dept. of Computer Science]|[Oak Ridge National Lab., TN (United States). Mathematical Sciences Section; Pozo, R. [Univ. of Tennessee, Knoxville, TN (United States). Dept. of Computer Science; Sorensen, D.C. [Rice Univ., Houston, TX (United States). Dept. of Computational and Applied Mathematics

    1994-12-31

    In this paper the authors look at a number of approaches being investigated in the Center for Research on Parallel Computation (CRPC) to develop linear algebra software for high-performance computers. These approaches are exemplified by the LAPACK, templates, and ARPACK projects. LAPACK is a software library for performing dense and banded linear algebra computations, and was designed to run efficiently on high-performance computers. The authors focus on the design of the distributed-memory version of LAPACK, and on an object-oriented interface to LAPACK.

  1. Characteristics and performance of L-band radar-based soil moisture retrievals using Soil Moisture Active Passive (SMAP) synthetic aperture radar observations

    Science.gov (United States)

    Kim, S.; Johnson, J. T.; Moghaddam, M.; Tsang, L.; Colliander, A.

    2016-12-01

    Surface soil moisture of the top 5-cm was estimated at 3-km spatial resolution using L-band dual-copolarized Soil Moisture Active Passive (SMAP) synthetic aperture radar (SAR) data that mapped the globe every three days from mid-April to early July, 2015. Radar observations of soil moisture offer the advantage of high spatial resolution, but have been challenging in the past due to the complicating factors of surface roughness and vegetation scattering. In this work, physically-based forward models of radar scattering for individual vegetation types are inverted using a time-series approach to retrieve soil moisture while correcting for the effects of roughness and dynamic vegetation. The predictions of the forward models used agree with SMAP measurements to within 0.5 dB unbiased-RMSE (root mean square error, ubRMSE) and -0.05 dB (bias). The forward models further allow the mechanisms of radar scattering to be examined to identify the sensitivity of radar scattering to soil moisture. Global patterns of the soil moistures retrieved by the algorithm generally match well with those from other satellite sensors. However biases exist in dry regions, and discrepancies are found in thick vegetation areas. The retrievals are compared with in situ measurements of soil moisture in locations characterized as cropland, grassland, and woody vegetation. Terrain slopes, subpixel heterogeneity, tillage practices, and vegetation growth influence the retrievals, but are largely corrected by the retrieval processes. Soil moisture retrievals agree with the in-situ measurements at 0.052 m3/m3 ubRMSE, -0.015 m3/m3 bias, and a correlation of 0.50. These encouraging retrieval results demonstrate the feasibility of a physically-based time-series retrieval with L-band SAR data for characterizing soil moisture over diverse conditions of soil moisture, surface roughness, and vegetation types. The findings are important for future L-band radar missions with frequent revisits that permit time

  2. Satellite communication performance evaluation: Computational techniques based on moments

    Science.gov (United States)

    Omura, J. K.; Simon, M. K.

    1980-01-01

    Computational techniques that efficiently compute bit error probabilities when only moments of the various interference random variables are available are presented. The approach taken is a generalization of the well known Gauss-Quadrature rules used for numerically evaluating single or multiple integrals. In what follows, basic algorithms are developed. Some of their properties and generalizations are shown and their many potential applications are described. Some typical interference scenarios for which the results are particularly applicable include: intentional jamming, adjacent and cochannel interference; radar pulses (RFI); multipath; and intersymbol interference. While the examples presented stress evaluation of bit error probabilities in uncoded digital communication systems, the moment techniques can also be applied to the evaluation of other parameters, such as computational cutoff rate under both normal and mismatched receiver cases in coded systems. Another important application is the determination of the probability distributions of the output of a discrete time dynamical system. This type of model occurs widely in control systems, queueing systems, and synchronization systems (e.g., discrete phase locked loops).
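    The central idea can be summarized as follows (generic notation, not the paper's): given the first 2N moments of an interference variable X, an N-point Gauss quadrature rule \{x_k, w_k\} is constructed from the associated orthogonal polynomials, and the average error probability is approximated as

        \bar{P}_e = \mathbb{E}\left[\,Q(\alpha + X)\,\right] \approx \sum_{k=1}^{N} w_k\, Q(\alpha + x_k),

    where Q(\cdot) is the Gaussian tail function and \alpha collects the signal and noise terms; only the moments of X, not its full distribution, are required.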

  3. SMAP RADAR Processing and Calibration

    Science.gov (United States)

    West, R. D.; Jaruwatanadilok, S.; Kwoun, O.; Chaubell, M. J.

    2013-12-01

    The Soil Moisture Active Passive (SMAP) mission uses L-band radar and radiometer measurements to estimate soil moisture with 4% volumetric accuracy at a resolution of 10 km, and freeze-thaw state at a resolution of 1-3 km. Model sensitivities translate the soil moisture accuracy to a radar backscatter accuracy of 1 dB at 3 km resolution and a brightness temperature accuracy of 1.3 K at 40 km resolution. This presentation will describe the level 1 radar processing and calibration challenges and the choices made so far for the algorithms and software implementation. To obtain the desired high spatial resolution the level 1 radar ground processor employs synthetic aperture radar (SAR) imaging techniques. Part of the challenge of the SMAP data processing comes from doing SAR imaging on a conically scanned system with rapidly varying squint angles. The radar echo energy will be divided into range/Doppler bins using time domain processing algorithms that can easily follow the varying squint angle. For SMAP, projected range resolution is about 250 meters, while azimuth resolution varies from 400 meters to 1.2 km. Radiometric calibration of the SMAP radar means measuring, characterizing, and where necessary correcting the gain and noise contributions from every part of the system from the antenna radiation pattern all the way to the ground processing algorithms. The SMAP antenna pattern will be computed using an accurate antenna model, and then validated post-launch using homogeneous external targets such as the Amazon rain forest to look for uncorrected gain variation. Noise subtraction is applied after image processing using measurements from a noise only channel. Variations of the internal electronics are tracked by a loopback measurement which will capture most of the time and temperature variations of the transmit power and receiver gain. Long-term variations of system performance due to component aging will be tracked and corrected using stable external reference

  4. High Performance Computing in Science and Engineering '08 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2009-01-01

    The discussions and plans on all scientific, advisory, and political levels to realize an even larger “European Supercomputer” in Germany, where the hardware costs alone will be hundreds of millions of Euros – much more than in the past – are getting closer to realization. As part of the strategy, the three national supercomputing centres HLRS (Stuttgart), NIC/JSC (Jülich) and LRZ (Munich) have formed the Gauss Centre for Supercomputing (GCS) as a new virtual organization enabled by an agreement between the Federal Ministry of Education and Research (BMBF) and the state ministries for research of Baden-Württemberg, Bayern, and Nordrhein-Westfalen. Already today, the GCS provides the most powerful high-performance computing infrastructure in Europe. Through GCS, HLRS participates in the European project PRACE (Partnership for Advanced Computing in Europe) and extends its reach to all European member countries. These activities align well with the activities of HLRS in the European HPC infrastructure...

  5. Radar Chart

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Radar Chart collection is an archived product of summarized radar data. The geographic coverage is the 48 contiguous states of the United States. These hourly...

  6. Ambiguity Function and Resolution Characteristic Analysis of DVB-S Signal for Passive Radar

    Directory of Open Access Journals (Sweden)

    Jin Wei

    2012-12-01

    This paper studies the ambiguity function and resolution performance of passive radar based on the DVB-S (Digital Video Broadcasting-Satellite) signal. The radar system structure and signal model of the DVB-S signal are first described, and the ambiguity function of the DVB-S signal is then analyzed. Finally, the influence of the bistatic radar geometry on resolution is derived. Theoretical analysis and computer simulation show that the DVB-S signal is suitable as an illuminator of opportunity for passive radar.
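    A minimal sketch of how such an ambiguity analysis can be carried out numerically is given below: the waveform is correlated against delayed and Doppler-shifted copies of itself and the resulting magnitude surface is inspected for its mainlobe width (resolution) and sidelobe structure. The pseudo-random QPSK test signal and all parameters are stand-ins, not an actual DVB-S capture.

```python
import numpy as np

def ambiguity_function(s, fs, max_delay, doppler_bins):
    """Discrete (narrowband) ambiguity function |A(tau, fd)| of a baseband
    signal s sampled at fs, evaluated over integer-sample delays and a grid
    of Doppler shifts. A sketch for inspecting the range/Doppler resolution
    of an illuminator waveform."""
    n = np.arange(len(s))
    delays = np.arange(-max_delay, max_delay + 1)
    amb = np.zeros((len(doppler_bins), len(delays)))
    for i, fd in enumerate(doppler_bins):
        s_dopp = s * np.exp(2j * np.pi * fd * n / fs)      # Doppler-shifted copy
        for j, d in enumerate(delays):
            s_del = np.roll(s, d)                           # delayed copy (circular)
            amb[i, j] = np.abs(np.vdot(s_del, s_dopp))      # correlation magnitude
    return delays / fs, doppler_bins, amb / np.max(amb)

# Illustrative use: QPSK-like noise signal, 1 MHz sampling, +/-20 sample delays.
rng = np.random.default_rng(1)
sig = (rng.choice([1, -1], 4096) + 1j * rng.choice([1, -1], 4096)) / np.sqrt(2)
tau, fd, A = ambiguity_function(sig, fs=1e6, max_delay=20,
                                doppler_bins=np.linspace(-500, 500, 41))
print(A.shape, A.max())   # peak normalised to 1 at (tau=0, fd=0)
```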

  7. High performance computing for beam physics applications

    Science.gov (United States)

    Ryne, R. D.; Habib, S.

    Several countries are now involved in efforts aimed at utilizing accelerator-driven technologies to solve problems of national and international importance. These technologies have both economic and environmental implications. The technologies include waste transmutation, plutonium conversion, neutron production for materials science and biological science research, neutron production for fusion materials testing, fission energy production systems, and tritium production. All of these projects require a high-intensity linear accelerator that operates with extremely low beam loss. This presents a formidable computational challenge: One must design and optimize over a kilometer of complex accelerating structures while taking into account beam loss to an accuracy of 10 parts per billion per meter. Such modeling is essential if one is to have confidence that the accelerator will meet its beam loss requirement, which ultimately affects system reliability, safety and cost. At Los Alamos, the authors are developing a capability to model ultra-low loss accelerators using the CM-5 at the Advanced Computing Laboratory. They are developing PIC, Vlasov/Poisson, and Langevin/Fokker-Planck codes for this purpose. With slight modification, they have also applied their codes to modeling mesoscopic systems and astrophysical systems. In this paper, they will first describe HPC activities in the accelerator community. Then they will discuss the tools they have developed to model classical and quantum evolution equations. Lastly they will describe how these tools have been used to study beam halo in high current, mismatched charged particle beams.

  8. High-Performance Computer Modeling of the Cosmos-Iridium Collision

    Energy Technology Data Exchange (ETDEWEB)

    Olivier, S; Cook, K; Fasenfest, B; Jefferson, D; Jiang, M; Leek, J; Levatin, J; Nikolaev, S; Pertica, A; Phillion, D; Springer, K; De Vries, W

    2009-08-28

    This paper describes the application of a new, integrated modeling and simulation framework, encompassing the space situational awareness (SSA) enterprise, to the recent Cosmos-Iridium collision. This framework is based on a flexible, scalable architecture to enable efficient simulation of the current SSA enterprise, and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel, high-performance computer systems available, for example, at Lawrence Livermore National Laboratory. We will describe the application of this framework to the recent collision of the Cosmos and Iridium satellites, including (1) detailed hydrodynamic modeling of the satellite collision and resulting debris generation, (2) orbital propagation of the simulated debris and analysis of the increased risk to other satellites, (3) calculation of the radar and optical signatures of the simulated debris and modeling of debris detection with space surveillance radar and optical systems, (4) determination of simulated debris orbits from modeled space surveillance observations and analysis of the resulting orbital accuracy, and (5) comparison of these modeling and simulation results with Space Surveillance Network observations. We will also discuss the use of this integrated modeling and simulation framework to analyze the risks and consequences of future satellite collisions and to assess strategies for mitigating or avoiding future incidents, including the addition of new sensor systems, used in conjunction with the Space Surveillance Network, for improving space situational awareness.

  9. Model-Based Radar Power Calculations for Ultra-Wideband (UWB) Synthetic Aperture Radar (SAR)

    Science.gov (United States)

    2013-06-01

    This report presents model-based radar power calculations for assessing ultra-wideband (UWB) synthetic aperture radar (SAR) performance in complex scenarios. Among these scenarios are ground-penetrating radar and forward-looking radar for landmine and improvised explosive device detection (Traian Dogaru, ARL-TN-0548, U.S. Army Research Laboratory, June 2013; only fragments of the abstract are available).

  10. Software Systems for High-performance Quantum Computing

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; Britt, Keith A [ORNL

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for quantum programming and execution, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  11. Computer Self-Efficacy, Computer Anxiety, Performance and Personal Outcomes of Turkish Physical Education Teachers

    Science.gov (United States)

    Aktag, Isil

    2015-01-01

    The purpose of this study is to determine the computer self-efficacy, performance outcome, personal outcome, and affect and anxiety level of physical education teachers. Influence of teaching experience, computer usage and participation of seminars or in-service programs on computer self-efficacy level were determined. The subjects of this study…

  13. High Performance Computing in Science and Engineering '14

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2015-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS). The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and   engineers. The book comes with a wealth of color illustrations and tables of results.  

  14. Real-time implementation of frequency-modulated continuous-wave synthetic aperture radar imaging using field programmable gate array.

    Science.gov (United States)

    Quan, Yinghui; Li, Yachao; Hu, Guibin; Xing, Mengdao

    2015-06-01

    A new miniature linear frequency-modulated continuous-wave radar mounted on an unmanned aerial vehicle is presented. It accomplishes high-resolution synthetic aperture radar imaging in real time. A single Kintex-7 field-programmable gate array from Xilinx performs the entire signal processing chain for the sophisticated radar imaging algorithms. The proposed hardware architecture achieves remarkable improvements in integration, power consumption, volume, and computing performance over its predecessor designs. The realized design is verified by flight campaigns.
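    The abstract does not detail the processing chain, but the core of FMCW ranging, which such imaging algorithms build on, is dechirp-and-FFT range compression: mixing the received sweep with the transmitted one yields a beat frequency proportional to range. The sketch below illustrates that step with purely illustrative radar parameters; it is not the algorithm or hardware of the cited system.

```python
import numpy as np

# Minimal sketch of FMCW range compression: mixing the received chirp with the
# transmitted chirp yields a beat signal whose frequency is proportional to
# target range, so an FFT of the beat signal gives a range profile.  All radar
# parameters below are illustrative, not those of the system in the abstract.
c = 3e8
B, T, fs = 150e6, 1e-3, 2e6          # sweep bandwidth, sweep time, ADC rate
k = B / T                            # chirp slope (Hz/s)
t = np.arange(int(T * fs)) / fs
R_target = 750.0                     # true target range (m)
tau = 2 * R_target / c               # round-trip delay

tx_phase = np.pi * k * t ** 2
rx_phase = np.pi * k * (t - tau) ** 2
beat = np.cos(tx_phase - rx_phase)               # dechirped (beat) signal

spectrum = np.abs(np.fft.rfft(beat * np.hanning(len(beat))))
beat_freqs = np.fft.rfftfreq(len(beat), d=1 / fs)
ranges = beat_freqs * c / (2 * k)                # f_beat = 2 k R / c
print(f"estimated range: {ranges[np.argmax(spectrum)]:.1f} m")   # ~750 m
```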

  15. Radar illusion via metamaterials

    Science.gov (United States)

    Jiang, Wei Xiang; Cui, Tie Jun

    2011-02-01

    An optical illusion is an image of a real target perceived by the eye that is deceptive or misleading due to a physiological illusion or a specific visual trick. The recently developed metamaterials provide efficient approaches to generate a perfect optical illusion. However, all existing research on metamaterial illusions has been limited to theory and numerical simulations. Here, we propose the concept of a radar illusion, which can make the electromagnetic (EM) image of a target gathered by radar look like a different target, and we realize a radar illusion device experimentally to change the radar image of a metallic target into a dielectric target with predesigned size and material parameters. It is well known that the radar signatures of metallic and dielectric objects are significantly different. However, when a metallic target is enclosed by the proposed illusion device, its EM scattering characteristics will be identical to that of a predesigned dielectric object under the illumination of radar waves. Such an illusion device will confuse the radar, and hence the real EM properties of the metallic target cannot be perceived. We designed and fabricated the radar illusion device using artificial metamaterials in the microwave frequency, and good illusion performances are observed in the experimental results.

  16. The Cloud Radar System

    Science.gov (United States)

    Racette, Paul; Heymsfield, Gerald; Li, Lihua; Tian, Lin; Zenker, Ed

    2003-01-01

    Improvement in our understanding of the radiative impact of clouds on the climate system requires a comprehensive view of clouds including their physical dimensions, dynamical generation processes, and detailed microphysical properties. To this end, millimeter-wave radar is a powerful tool by which clouds can be remotely sensed. The NASA Goddard Space Flight Center has developed the Cloud Radar System (CRS). CRS is a highly sensitive 94 GHz (W-band) pulsed-Doppler polarimetric radar that is designed to fly on board the NASA high-altitude ER-2 aircraft. The instrument is currently the only millimeter-wave radar capable of cloud and precipitation measurements from above nearly all clouds. Because it operates from high altitude, the CRS provides a unique measurement perspective for cirrus cloud studies. The CRS emulates a satellite view of clouds and precipitation systems, thus providing valuable measurements for the implementation and algorithm validation of the upcoming NASA CloudSat mission, which is designed to measure ice cloud distributions on the global scale using a spaceborne 94 GHz radar. This paper describes the CRS instrument and preliminary data from the recent Cirrus Regional Study of Tropical Anvils and Cirrus Layers - Florida Area Cirrus Experiment (CRYSTAL-FACE). The radar design is discussed. Characteristics of the radar are given. A block diagram illustrating functional components of the radar is shown. The performance of the CRS during the CRYSTAL-FACE campaign is discussed.

  17. High Performance Computing for Medical Image Interpretation

    Science.gov (United States)

    1993-10-01

    Only a fragment of the abstract is recoverable (translated from Dutch): "… do not achieve the performance required in the clinical setting, so routine use is typically carried out at high personnel cost."

  18. Computer Program Predicts Turbine-Stage Performance

    Science.gov (United States)

    Boyle, Robert J.; Haas, Jeffrey E.; Katsanis, Theodore

    1988-01-01

    MTSBL is an updated version of the flow-analysis programs MERIDL and TSONIC coupled to the boundary-layer program BLAYER. The method uses a quasi-three-dimensional, inviscid, stream-function flow analysis iteratively coupled to calculated losses, so that changes in losses result in changes in flow distribution. In this manner, both the effect of configuration on flow distribution and the effect of flow distribution on losses are taken into account in predicting stage performance. Written in FORTRAN IV.

  19. Domain Decomposition Based High Performance Parallel Computing

    CERN Document Server

    Raju, Mandhapati P

    2009-01-01

    The study deals with the parallelization of finite-element-based Navier-Stokes codes using domain decomposition and state-of-the-art sparse direct solvers. Although there has been significant improvement in the performance of sparse direct solvers, parallel sparse direct solvers are not found to exhibit good scalability. Hence, parallelization is achieved using domain decomposition techniques. The highly efficient sparse direct solver PARDISO is used in this study. The scalability of both Newton and modified Newton algorithms is tested.

  20. Improving Weather Radar Precipitation Estimates by Combining two Types of Radars

    DEFF Research Database (Denmark)

    Nielsen, Jesper Ellerbæk; Thorndahl, Søren Liedtke; Rasmussen, Michael R.

    2014-01-01

    the two radar types achieves a radar product with both long range and high temporal resolution. It is validated that the blended radar product performs better than the individual radars based on ground observations from laser disdrometers. However, the data combination is challenged by lower performance...

  1. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of the computational physicists utilizing high performance computing are analyzed by bibliometric approaches. This study aims at providing the computational physicists utilizing high-performance computing and policy planners with useful bibliometric results for an assessment of research activities. In order to achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers for high-performance computational physics as a case study. For this study, we used journal articles of the Scopus database from Elsevier covering the time period of 2004-2013. We extracted the author rank in the physics field utilizing high-performance computing by the number of papers published during ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.

  2. Performance evaluation of computer and communication systems

    CERN Document Server

    Le Boudec, Jean-Yves

    2011-01-01

    … written by a scientist successful in performance evaluation, it is based on his experience and provides many ideas not only to laymen entering the field, but also to practitioners looking for inspiration. The work can be read systematically as a textbook on how to model and test the derived hypotheses on the basis of simulations. Also, separate parts can be studied, as the chapters are self-contained. … the book can be successfully used either for self-study or as a supplementary book for a lecture. I believe that different types of readers will like it: practicing engineers and resea

  3. Radar Fundamentals, Presentation

    OpenAIRE

    Jenn, David

    2008-01-01

    Topics include: introduction, radar functions, antennas basics, radar range equation, system parameters, electromagnetic waves, scattering mechanisms, radar cross section and stealth, and sample radar systems.

  5. Performance Predictable ServiceBSP Model for Grid Computing

    Institute of Scientific and Technical Information of China (English)

    TONG Weiqin; MIAO Weikai

    2007-01-01

    This paper proposes a performance prediction model for the grid computing model ServiceBSP to support the development of high-quality applications in grid environments. In the ServiceBSP model, agents carrying computing tasks are dispatched to the local domains of the selected computation services. Using an integer programming (IP) approach, the Service Selection Agent selects computation services with globally optimized QoS (quality of service). The performance of a ServiceBSP application can then be predicted from the QoS of the selected services. The prediction model helps users analyze their applications and improve them by optimizing the factors that affect performance. Experiments show that the Service Selection Agent provides ServiceBSP users with satisfactory application QoS.

  6. Performance Models for Split-execution Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; McCaskey, Alex [ORNL; Schrock, Jonathan [ORNL; Seddiqi, Hadayat [ORNL; Britt, Keith A [ORNL; Imam, Neena [ORNL

    2016-01-01

    Split-execution computing leverages the capabilities of multiple computational models to solve problems, but splitting program execution across different computational models incurs costs associated with the translation between domains. We analyze the performance of a split-execution computing system developed from conventional and quantum processing units (QPUs) by using behavioral models that track resource usage. We focus on asymmetric processing models built using conventional CPUs and a family of special-purpose QPUs that employ quantum computing principles. Our performance models account for the translation of a classical optimization problem into the physical representation required by the quantum processor while also accounting for hardware limitations and conventional processor speed and memory. We conclude that the bottleneck in this split-execution computing system lies at the quantum-classical interface and that the primary time cost is independent of quantum processor behavior.

  7. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  8. Nuclear Forces and High-Performance Computing: The Perfect Match

    Energy Technology Data Exchange (ETDEWEB)

    Luu, T; Walker-Loud, A

    2009-06-12

    High-performance computing is now enabling the calculation of certain nuclear interaction parameters directly from Quantum Chromodynamics, the quantum field theory that governs the behavior of quarks and gluons and is ultimately responsible for the nuclear strong force. We briefly describe the state of the field and describe how progress in this field will impact the greater nuclear physics community. We give estimates of computational requirements needed to obtain certain milestones and describe the scientific and computational challenges of this field.

  9. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  10. Compact high performance spectrometers using computational imaging

    Science.gov (United States)

    Morton, Kenneth; Weisberg, Arel

    2016-05-01

    Compressive sensing technology can theoretically be used to develop low cost compact spectrometers with the performance of larger and more expensive systems. Indeed, compressive sensing for spectroscopic systems has been previously demonstrated using coded aperture techniques, wherein a mask is placed between the grating and a charge coupled device (CCD) and multiple measurements are collected with different masks. Although proven effective for some spectroscopic sensing paradigms (e.g. Raman), this approach requires that the signal being measured is static between shots (low noise and minimal signal fluctuation). Many spectroscopic techniques applicable to remote sensing are inherently noisy and thus coded aperture compressed sensing will likely not be effective. This work explores an alternative approach to compressed sensing that allows for reconstruction of a high resolution spectrum in sensing paradigms featuring significant signal fluctuations between measurements. This is accomplished through relatively minor changes to the spectrometer hardware together with custom super-resolution algorithms. Current results indicate that a potential overall reduction in CCD size of up to a factor of 4 can be attained without a loss of resolution. This reduction can result in significant improvements in cost, size, and weight of spectrometers incorporating the technology.
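    The coded-aperture measurement model described above can be written as y = A x, where each row of A is a random mask and x is the (sparse) spectrum; a standard sparse solver then reconstructs x from m << n shots. The sketch below uses ISTA with illustrative dimensions, mask statistics, and sparsity; it is a generic compressive-sensing example, not the super-resolution method proposed in the paper.

```python
import numpy as np

def ista(A, y, lam, n_iter=2000):
    """Iterative shrinkage-thresholding (ISTA) for
    min_x (1/2)||Ax - y||^2 + lam*||x||_1.
    A basic solver for the coded-aperture model sketched below."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

# Coded-aperture measurement model: each shot uses a different random binary
# mask, so m shots of an n-channel spectrum give y = A @ spectrum with m << n.
rng = np.random.default_rng(2)
n, m, k = 256, 64, 5                        # spectrum length, shots, nonzero lines
spectrum = np.zeros(n)
spectrum[rng.choice(n, k, replace=False)] = rng.uniform(1, 3, k)   # sparse lines
A = rng.integers(0, 2, size=(m, n)).astype(float) / np.sqrt(m)     # random masks
y = A @ spectrum + 0.01 * rng.standard_normal(m)                   # noisy shots
estimate = ista(A, y, lam=0.02)
print("relative reconstruction error:",
      np.linalg.norm(estimate - spectrum) / np.linalg.norm(spectrum))
```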

  11. Identifying Key Challenges in Performance Issues in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Ashraf Zia

    2012-10-01

    Cloud computing is a harbinger of a new era in the field of computing, in which distributed and centralized services are used in a unique way. In cloud computing, the computational resources of different vendors and IT service providers are managed to provide an enormous, scalable computing platform that offers efficient data processing coupled with better QoS at a lower cost. On-demand, dynamic, and scalable resource allocation is the main motive behind the development and deployment of cloud computing. The potential growth in this area and the presence of some dominant organizations with abundant resources (like Google, Amazon, Salesforce, Rackspace, Azure, and GoGrid) make the field of cloud computing even more fascinating. All cloud computing processes need to work in concert to deliver better QoS, i.e., to provide better software functionality, meet the tenants' requirements for processing power, and exploit elevated bandwidth. However, several technical and functional issues, e.g., pervasive access to resources, dynamic discovery, and on-the-fly access and composition of resources, pose serious challenges for cloud computing. In this study, performance issues in cloud computing are discussed. A number of schemes pertaining to QoS issues are critically analyzed to point out their strengths and weaknesses. Some of the performance parameters at the three basic layers of the cloud — Infrastructure as a Service, Platform as a Service and Software as a Service — are also discussed in this paper.

  12. Individual Differences and Learning Performance in Computer-based Training

    Science.gov (United States)

    2011-02-01

  13. Multicore Challenges and Benefits for High Performance Scientific Computing

    Directory of Open Access Journals (Sweden)

    Ida M.B. Nielsen

    2008-01-01

    Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.

  14. A Component Architecture for High-Performance Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, D E; Allan, B A; Armstrong, R; Bertrand, F; Chiu, K; Dahlgren, T L; Damevski, K; Elwasif, W R; Epperly, T W; Govindaraju, M; Katz, D S; Kohl, J A; Krishnan, M; Kumfert, G; Larson, J W; Lefantzi, S; Lewis, M J; Malony, A D; McInnes, L C; Nieplocha, J; Norris, B; Parker, S G; Ray, J; Shende, S; Windus, T L; Zhou, S

    2004-12-14

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  15. A Component Architecture for High-Performance Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, David E; Allan, Benjamin A; Armstrong, Robert C; Bertrand, Felipe; Chiu, Kenneth; Dahlgren, Tamara L; Damevski, Kostadin; Elwasif, Wael R; Epperly, Thomas G; Govindaraju, Madhusudhan; Katz, Daniel S; Kohl, James A; Krishnan, Manoj Kumar; Kumfert, Gary K; Larson, J Walter; Lefantzi, Sophia; Lewis, Michael J; Malony, Allen D; McInnes, Lois C; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G; Ray, Jaideep; Shende, Sameer; Windus, Theresa L; Zhou, Shujia

    2006-07-03

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  16. Resource estimation in high performance medical image computing.

    Science.gov (United States)

    Banalagay, Rueben; Covington, Kelsie Jade; Wilkes, D M; Landman, Bennett A

    2014-10-01

    Medical imaging analysis processes often involve the concatenation of many steps (e.g., multi-stage scripts) to integrate and realize advancements from image acquisition, image processing, and computational analysis. With the dramatic increase in data size for medical imaging studies (e.g., improved resolution, higher throughput acquisition, shared databases), interesting study designs are becoming intractable or impractical on individual workstations and servers. Modern pipeline environments provide control structures to distribute computational load in high performance computing (HPC) environments. However, high performance computing environments are often shared resources, and scheduling computation across these resources necessitates higher level modeling of resource utilization. Submission of 'jobs' requires an estimate of the CPU runtime and memory usage. The resource requirements for medical image processing algorithms are difficult to predict since the requirements can vary greatly between different machines, different execution instances, and different data inputs. Poor resource estimates can lead to wasted resources in high performance environments due to incomplete executions and extended queue wait times. Hence, resource estimation is becoming a major hurdle for medical image processing algorithms to efficiently leverage high performance computing environments. Herein, we present our implementation of a resource estimation system to overcome these difficulties and ultimately provide users with the ability to more efficiently utilize high performance computing resources.
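    The general idea of resource estimation can be sketched as a regression from input characteristics to observed resource usage: fit a model on past executions, then request resources for a new job with a safety margin. The voxel-count feature, the log-linear model form, the numbers, and the 25% margin below are assumptions for illustration, not the estimator implemented in the cited system.

```python
import numpy as np

# Sketch of resource estimation by regression: fit a log-linear model that maps
# an input-data characteristic (here, voxel count) to observed peak memory and
# runtime from past executions, then predict requirements for a new job so the
# scheduler request can include a safety margin.  The feature and model form
# are illustrative assumptions, not the estimator used in the cited work.
past_voxels = np.array([1e6, 2e6, 4e6, 8e6, 1.6e7, 3.2e7])
past_mem_gb = np.array([0.9, 1.7, 3.2, 6.5, 12.8, 25.0])      # observed peak memory
past_time_s = np.array([60, 130, 250, 520, 1100, 2300])       # observed runtimes

def fit_loglog(x, y):
    """Least-squares fit of log y = a*log x + b; returns a prediction function."""
    a, b = np.polyfit(np.log(x), np.log(y), 1)
    return lambda x_new: np.exp(a * np.log(x_new) + b)

predict_mem = fit_loglog(past_voxels, past_mem_gb)
predict_time = fit_loglog(past_voxels, past_time_s)

new_voxels = 2.4e7
margin = 1.25                                   # 25% head-room on the request
print(f"request {margin * predict_mem(new_voxels):.1f} GB, "
      f"{margin * predict_time(new_voxels):.0f} s walltime")
```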

  17. High Performance Computing Assets for Ocean Acoustics Research

    Science.gov (United States)

    2016-11-18

    … that make them easily parallelizable in the manner that, for example, atmospheric or ocean general circulation models (GCMs) are parallel. The remaining available text is front matter from the final report for ONR DURIP Grant No. N00014-15-1-2840, "High Performance Computing Assets for Ocean Acoustics Research," Principal Investigator Timothy F. Duda (approved for public release; distribution is unlimited).

  18. A review of High Performance Computing foundations for scientists

    CERN Document Server

    García-Risueño, Pablo; Ibáñez, Pablo E.

    2012-01-01

    The increase of existing computational capabilities has made simulation emerge as a third discipline of Science, lying midway between experimental and purely theoretical branches [1, 2]. Simulation enables the evaluation of quantities which otherwise would not be accessible, helps to improve experiments and provides new insights on systems which are analysed [3-6]. Knowing the fundamentals of computation can be very useful for scientists, for it can help them to improve the performance of their theoretical models and simulations. This review includes some technical essentials that can be useful to this end, and it is devised as a complement for researchers whose education is focused on scientific issues and not on technological respects. In this document we attempt to discuss the fundamentals of High Performance Computing (HPC) [7] in a way which is easy to understand without much previous background. We sketch the way standard computers and supercomputers work, as well as discuss distributed computing and di...

  19. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    Science.gov (United States)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.

  20. 46 CFR 15.815 - Radar observers.

    Science.gov (United States)

    2010-10-01

    Title 46, Shipping, § 15.815 Radar observers. (a) Each person in the required complement of deck officers, including … endorsement as radar observer. (b) Each person who is employed or serves as pilot in accordance with …

  1. Planetary Radar

    Science.gov (United States)

    Neish, Catherine D.; Carter, Lynn M.

    2015-01-01

    This chapter describes the principles of planetary radar, and the primary scientific discoveries that have been made using this technique. The chapter starts by describing the different types of radar systems and how they are used to acquire images and accurate topography of planetary surfaces and probe their subsurface structure. It then explains how these products can be used to understand the properties of the target being investigated. Several examples of discoveries made with planetary radar are then summarized, covering solar system objects from Mercury to Saturn. Finally, opportunities for future discoveries in planetary radar are outlined and discussed.

  2. High performance computing: Clusters, constellations, MPPs, and future directions

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, Jack; Sterling, Thomas; Simon, Horst; Strohmaier, Erich

    2003-06-10

    Last year's paper by Bell and Gray [1] examined past trends in high performance computing and asserted likely future directions based on market forces. While many of the insights drawn from this perspective have merit and suggest elements governing likely future directions for HPC, there are a number of points put forth that we feel require further discussion and, in certain cases, suggest alternative, more likely views. One area of concern relates to the nature and use of key terms to describe and distinguish among classes of high end computing systems, in particular the authors' use of "cluster" to relate to essentially all parallel computers derived through the integration of replicated components. The taxonomy implicit in their previous paper, while arguable and supported by some elements of our community, fails to provide the essential semantic discrimination critical to the effectiveness of descriptive terms as tools in managing the conceptual space of consideration. In this paper, we present a perspective that retains the descriptive richness while providing a unifying framework. A second area of discourse that calls for additional commentary is the likely future path of system evolution that will lead to effective and affordable Petaflops-scale computing, including the future role of computer centers as facilities for supporting high performance computing environments. This paper addresses the key issues of taxonomy, future directions towards Petaflops computing, and the important role of computer centers in the 21st century.

  3. C(G)-Band and X(I)-Band Noncoherent Radar Transponder Performance Specification Standard

    Science.gov (United States)

    2014-06-01

    Only fragments of the specification text are available: Part 1 specifies a resonance search, resonance dwell, and sinusoidal vibration cycling to the level of curve C from Figure 3, while Part 2 uses curve B; Note 1 concerns sinusoidal vibration resonance and cycling tests of items mounted in airplanes and weighing more than 80 pounds.

  4. Optical interconnection networks for high-performance computing systems.

    Science.gov (United States)

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  5. Software defined noise radar with low sampling rate

    Science.gov (United States)

    Lukin, K.; Vyplavin, P.; Savkovich, Elena; Lukin, S.

    2011-10-01

    Preliminary results of our investigations of Software Defined Noise Radar are presented; namely, results on the design and implementation of an FPGA-based noise radar with digital generation of the random signal and coherent reception of radar returns. Parallelization of computations in the FPGA enabled a time-domain implementation of the cross-correlation evaluation that is comparable in efficiency with the frequency-domain algorithm. Moreover, implementation of a relay-type correlator algorithm enabled a cross-correlation algorithm that can operate much faster. We present a comparison of the performance and limitations of the different designs considered. The digital correlator has been implemented in the Altera Stratix evaluation board having 1 million gates and up to 300 MHz clock frequency. We also realized a software-defined CW noise radar on the basis of the RVI Development Board from ICTP M-LAB.
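    The frequency-domain correlation algorithm mentioned above can be sketched in a few lines: the received echo is correlated against the stored transmit noise waveform via FFTs, and the correlation peak gives the target delay; the time-domain alternative computes the same lags directly, which is what the FPGA design parallelizes. The waveform length, delay, and SNR below are illustrative assumptions.

```python
import numpy as np

def xcorr_freq(reference, echo):
    """Cross-correlation of the transmitted noise reference with the received
    echo, computed via FFTs (the 'frequency-domain algorithm'). The time-domain
    alternative is the direct lag-by-lag sum that an FPGA design can parallelize."""
    n = len(reference)
    R = np.fft.fft(reference, 2 * n)
    E = np.fft.fft(echo, 2 * n)
    return np.fft.ifft(np.conj(R) * E)[:n].real

# Illustrative use: a random (noise) waveform, an attenuated echo delayed by
# 137 samples plus receiver noise; the correlation peak recovers the delay.
rng = np.random.default_rng(3)
n, delay = 8192, 137
tx = rng.standard_normal(n)                               # transmitted noise waveform
rx = 0.2 * np.roll(tx, delay) + rng.standard_normal(n)    # echo + receiver noise
corr = xcorr_freq(tx, rx)
print("estimated delay:", int(np.argmax(corr)), "samples")   # -> 137
```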

  6. An Exploratory Investigation of Computer Simulations, Student Preferences, and Performance.

    Science.gov (United States)

    Vaidyanathan, Rajiv; Rochford, Linda

    1998-01-01

    Marketing students (n = 99) used a computer simulation; 34 did not. Students who performed well on traditional exams also did well on the simulation. Students who preferred working with others seemed to perform more poorly on both the exam and the simulation. (SK)

  7. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bremer, Peer-Timo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mohr, Bernd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schulz, Martin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pascucci, Valerio [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gamblin, Todd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brunst, Holger [Dresden Univ. of Technology (Germany)

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  8. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2015-01-01

    A continuation of Contemporary High Performance Computing: From Petascale toward Exascale, this second volume continues the discussion of HPC flagship systems, major application workloads, facilities, and sponsors. The book includes figures and pictures that capture the state of existing systems: pictures of buildings, systems in production, floorplans, and many block diagrams and charts to illustrate system design and performance.

  9. Computational Fluid Dynamics and Building Energy Performance Simulation

    DEFF Research Database (Denmark)

    Nielsen, Peter Vilhelm; Tryggvason, T.

    1998-01-01

    An interconnection between a building energy performance simulation program and a Computational Fluid Dynamics program (CFD) for room air distribution will be introduced for improvement of the predictions of both the energy consumption and the indoor environment. The building energy performance...

  10. Routing performance analysis and optimization within a massively parallel computer

    Science.gov (United States)

    Archer, Charles Jens; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen

    2013-04-16

    An apparatus, program product and method optimize the operation of a massively parallel computer system by, in part, receiving actual performance data concerning an application executed by the plurality of interconnected nodes, and analyzing the actual performance data to identify an actual performance pattern. A desired performance pattern may be determined for the application, and an algorithm may be selected from among a plurality of algorithms stored within a memory, the algorithm being configured to achieve the desired performance pattern based on the actual performance data.

  11. Computer-Aided Design of Drugs on Emerging Hybrid High Performance Computers

    Science.gov (United States)

    2013-09-01

    Only progress-report fragments are available. They reference middleware packages for polarizable force fields on multi-core and GPU systems, supported by the MapReduce paradigm (NSF MRI #0922657, $451,051), and an invited talk, "High-throughput Molecular Datasets for Scalable Clustering using MapReduce," at the Workshop on Trends in High-Performance Distributed Computing, Vrije Universiteit, Amsterdam, NL.

  12. 5th International Conference on High Performance Scientific Computing

    CERN Document Server

    Hoang, Xuan; Rannacher, Rolf; Schlöder, Johannes

    2014-01-01

    This proceedings volume gathers a selection of papers presented at the Fifth International Conference on High Performance Scientific Computing, which took place in Hanoi on March 5-9, 2012. The conference was organized by the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) of Heidelberg University, Ho Chi Minh City University of Technology, and the Vietnam Institute for Advanced Study in Mathematics. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and practical applications. Subjects covered include mathematical modeling; numerical simulation; methods for optimization and control; parallel computing; software development; and applications of scientific computing in physics, mechanics and biomechanics, material science, hydrology, chemistry, biology, biotechnology, medicine, sports, psychology, transport, logistics, com...

  13. 3rd International Conference on High Performance Scientific Computing

    CERN Document Server

    Kostina, Ekaterina; Phu, Hoang; Rannacher, Rolf

    2008-01-01

    This proceedings volume contains a selection of papers presented at the Third International Conference on High Performance Scientific Computing held at the Hanoi Institute of Mathematics, Vietnamese Academy of Science and Technology (VAST), March 6-10, 2006. The conference has been organized by the Hanoi Institute of Mathematics, Interdisciplinary Center for Scientific Computing (IWR), Heidelberg, and its International PhD Program ``Complex Processes: Modeling, Simulation and Optimization'', and Ho Chi Minh City University of Technology. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and applications in practice. Subjects covered are mathematical modelling, numerical simulation, methods for optimization and control, parallel computing, software development, applications of scientific computing in physics, chemistry, biology and mechanics, environmental and hydrology problems, transport, logistics and site loca...

  14. GPU-based high-performance computing for radiation therapy.

    Science.gov (United States)

    Jia, Xun; Ziegenhein, Peter; Jiang, Steve B

    2014-02-21

    Recent developments in radiotherapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid developments. A tremendous amount of study has been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we will first give a brief introduction to the GPU hardware structure and programming model. We will then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms will also be presented.

  15. Does familiarity with computers affect computerized neuropsychological test performance?

    Science.gov (United States)

    Iverson, Grant L; Brooks, Brian L; Ashton, V Lynn; Johnson, Lynda G; Gualtieri, C Thomas

    2009-07-01

    The purpose of this study was to determine whether self-reported computer familiarity is related to performance on computerized neurocognitive testing. Participants were 130 healthy adults who self-reported whether their computer use was "some" (n = 65) or "frequent" (n = 65). The two groups were individually matched on age, education, sex, and race. All completed the CNS Vital Signs (Gualtieri & Johnson, 2006b) computerized neurocognitive battery. There were significant differences on 6 of the 23 scores, including scores derived from the Symbol-Digit Coding Test, Stroop Test, and the Shifting Attention Test. The two groups were also significantly different on the Psychomotor Speed (Cohen's d = 0.37), Reaction Time (d = 0.68), Complex Attention (d = 0.40), and Cognitive Flexibility (d = 0.64) domain scores. People with "frequent" computer use performed better than people with "some" computer use on some tests requiring rapid visual scanning and keyboard work.

  16. A High Performance SOAP Engine for Grid Computing

    Science.gov (United States)

    Wang, Ning; Welzl, Michael; Zhang, Liang

    Web Service technology still has many defects that make its usage for Grid computing problematic, most notably the low performance of the SOAP engine. In this paper, we develop a novel SOAP engine called SOAPExpress, which adopts two key techniques for improving processing performance: SCTP data transport and dynamic early binding based data mapping. Experimental results show a significant and consistent performance improvement of SOAPExpress over Apache Axis.

  17. Pocket radar guide key facts, equations, and data

    CERN Document Server

    Curry, G Richard

    2010-01-01

    The Pocket Radar Guide is a concise collection of key radar facts and important radar data that provides you with necessary radar information when you are away from your office or references. It includes statements and comments on radar design, operation, and performance; equations describing the characteristics and performance of radar systems and their components; and tables with data on radar characteristics and key performance issues. It is intended to supplement other radar information sources by providing a pocket companion to refresh memory and provide details whenever you need them such a

  18. Dynamic Resource Management and Job Scheduling for High Performance Computing

    OpenAIRE

    2016-01-01

    Job scheduling and resource management plays an essential role in high-performance computing. Supercomputing resources are usually managed by a batch system, which is responsible for the effective mapping of jobs onto resources (i.e., compute nodes). From the system perspective, a batch system must ensure high system utilization and throughput, while from the user perspective it must ensure fast response times and fairness when allocating resources across jobs. Parallel jobs can be divide...

  19. Computer-Related Success and Failure: A Longitudinal Field Study of the Factors Influencing Computer-Related Performance.

    Science.gov (United States)

    Rozell, E. J.; Gardner, W. L., III

    1999-01-01

    A model of the intrapersonal processes impacting computer-related performance was tested using data from 75 manufacturing employees in a computer training course. Gender, computer experience, and attributional style were predictive of computer attitudes, which were in turn related to computer efficacy, task-specific performance expectations, and…

  20. High-Performance Java Codes for Computational Fluid Dynamics

    Science.gov (United States)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  1. Terahertz radar cross section measurements

    DEFF Research Database (Denmark)

    Iwaszczuk, Krzysztof; Heiselberg, Henning; Jepsen, Peter Uhd

    2010-01-01

    We perform angle- and frequency-resolved radar cross section (RCS) measurements on objects at terahertz frequencies. Our RCS measurements are performed on a scale model aircraft of size 5-10 cm in polar and azimuthal configurations, and correspond closely to RCS measurements with conventional radar...

  2. Visualization and Data Analysis for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  3. High performance computing and communications: FY 1997 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-12-01

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies` FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  4. Radio Synthesis Imaging - A High Performance Computing and Communications Project

    Science.gov (United States)

    Crutcher, Richard M.

    The National Science Foundation has funded a five-year High Performance Computing and Communications project at the National Center for Supercomputing Applications (NCSA) for the direct implementation of several of the computing recommendations of the Astronomy and Astrophysics Survey Committee (the "Bahcall report"). This paper is a summary of the project goals and a progress report. The project will implement a prototype of the next generation of astronomical telescope systems - remotely located telescopes connected by high-speed networks to very high performance, scalable architecture computers and on-line data archives, which are accessed by astronomers over Gbit/sec networks. Specifically, a data link has been installed between the BIMA millimeter-wave synthesis array at Hat Creek, California and NCSA at Urbana, Illinois for real-time transmission of data to NCSA. Data are automatically archived, and may be browsed and retrieved by astronomers using the NCSA Mosaic software. In addition, an on-line digital library of processed images will be established. BIMA data will be processed on a very high performance distributed computing system, with I/O, user interface, and most of the software system running on the NCSA Convex C3880 supercomputer or Silicon Graphics Onyx workstations connected by HiPPI to the high performance, massively parallel Thinking Machines Corporation CM-5. The very computationally intensive algorithms for calibration and imaging of radio synthesis array observations will be optimized for the CM-5 and new algorithms which utilize the massively parallel architecture will be developed. Code running simultaneously on the distributed computers will communicate using the Data Transport Mechanism developed by NCSA. The project will also use the BLANCA Gbit/s testbed network between Urbana and Madison, Wisconsin to connect an Onyx workstation in the University of Wisconsin Astronomy Department to the NCSA CM-5, for development of long

  5. Computer task performance by subjects with Duchenne muscular dystrophy

    Directory of Open Access Journals (Sweden)

    Malheiros SRP

    2015-12-01

    Full Text Available Silvia Regina Pinheiro Malheiros,1 Talita Dias da Silva,2 Francis Meire Favero,2 Luiz Carlos de Abreu,1 Felipe Fregni,3 Denise Cardoso Ribeiro,4 Carlos Bandeira de Mello Monteiro1,4,5 1School of Medicine of ABC, Santo Andre, Brazil; 2Department of Medicine, Paulista School of Medicine, Federal University of São Paulo, São Paulo, Brazil; 3Center for Neurosciences, University of São Paulo, São Paulo, Brazil; 4Post-graduate Program in Rehabilitation Sciences, Faculty of Medicine, University of São Paulo, São Paulo, Brazil; 5School of Arts, Sciences and Humanities, University of São Paulo, São Paulo, Brazil Aims: Two specific objectives were established to quantify computer task performance among people with Duchenne muscular dystrophy (DMD). First, we compared simple computational task performance between subjects with DMD and age-matched typically developing (TD) subjects. Second, we examined correlations between the ability of subjects with DMD to learn the computational task and their motor functionality, age, and initial task performance. Method: The study included 84 individuals (42 with DMD, mean age of 18±5.5 years, and 42 age-matched controls). They executed a computer maze task; all participants performed the acquisition (20 attempts) and retention (five attempts) phases, repeating the same maze. A different maze was used to verify transfer performance (five attempts). The Motor Function Measure Scale was applied, and the results were compared with maze task performance. Results: In the acquisition phase, a significant decrease was found in movement time (MT) between the first and last acquisition block, but only for the DMD group. For the DMD group, MT during transfer was shorter than during the first acquisition block, indicating improvement from the first acquisition block to transfer. In addition, the TD group showed shorter MT than the DMD group across the study. Conclusion: DMD participants improved their performance after practicing

  6. Evaluation of Detection Performance under Employment of the Generalized Detector in Radar Sensor Systems

    Directory of Open Access Journals (Sweden)

    M. S. Shbat

    2014-04-01

    Full Text Available The detection performance of the generalized detector (GD) constructed based on the generalized approach to signal processing in noise is evaluated under homogeneous and non-homogeneous noise. The GD adaptive threshold is derived and defined by applying an appropriate noise power estimation using the sliding window technique. Direct closed-form expressions for the GD average probability of detection and probability of false alarm are also derived. Typical constant false alarm rate (CFAR) detectors, namely the cell averaging CFAR (CA-CFAR) detector, the ordered statistic CFAR (OS-CFAR) detector, the generalized censored mean level (GCML) detector, and the adaptive censored greatest-of CFAR (ACGO-CFAR) detector, are compared with the GD in detection performance under both homogeneous and non-homogeneous noise conditions, i.e. when the interfering targets are absent or present, respectively. Simulation results demonstrate the superiority of the GD in detection performance in comparison with the above-mentioned CFAR detectors under both homogeneous and non-homogeneous noise conditions.
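
    The paper's generalized detector is not reproduced here, but the cell-averaging CFAR baseline it is compared against follows a simple recipe: estimate the noise power from reference cells around the cell under test and scale it to meet a desired false-alarm probability. A minimal sketch, assuming a square-law detected (exponential) noise profile; the window sizes and Pfa below are illustrative, not values from the paper:

    ```python
    import numpy as np

    def ca_cfar(power, num_ref=16, num_guard=2, pfa=1e-4):
        """Cell-averaging CFAR over a 1-D square-law detected power profile.

        Returns a boolean detection mask; cells too close to the edges to have
        a full reference window are left undetected.
        """
        n = len(power)
        half_ref = num_ref // 2
        # Threshold multiplier for exponential noise and the requested false-alarm rate.
        alpha = num_ref * (pfa ** (-1.0 / num_ref) - 1.0)
        detections = np.zeros(n, dtype=bool)
        for cut in range(half_ref + num_guard, n - half_ref - num_guard):
            lead = power[cut - num_guard - half_ref : cut - num_guard]
            lag = power[cut + num_guard + 1 : cut + num_guard + 1 + half_ref]
            noise_est = (lead.sum() + lag.sum()) / num_ref
            detections[cut] = power[cut] > alpha * noise_est
        return detections

    # Toy example: exponential noise with two injected point targets.
    rng = np.random.default_rng(0)
    profile = rng.exponential(scale=1.0, size=1000)
    profile[300] += 40.0
    profile[700] += 25.0
    print(np.flatnonzero(ca_cfar(profile)))
    ```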

  7. Visual Analysis of Cloud Computing Performance Using Behavioral Lines.

    Science.gov (United States)

    Muelder, Chris; Zhu, Biao; Chen, Wei; Zhang, Hongxin; Ma, Kwan-Liu

    2016-02-29

    Cloud computing is an essential technology to Big Data analytics and services. A cloud computing system is often comprised of a large number of parallel computing and storage devices. Monitoring the usage and performance of such a system is important for efficient operations, maintenance, and security. Tracing every application on a large cloud system is untenable due to scale and privacy issues. But profile data can be collected relatively efficiently by regularly sampling the state of the system, including properties such as CPU load, memory usage, network usage, and others, creating a set of multivariate time series for each system. Adequate tools for studying such large-scale, multidimensional data are lacking. In this paper, we present a visual based analysis approach to understanding and analyzing the performance and behavior of cloud computing systems. Our design is based on similarity measures and a layout method to portray the behavior of each compute node over time. When visualizing a large number of behavioral lines together, distinct patterns often appear suggesting particular types of performance bottleneck. The resulting system provides multiple linked views, which allow the user to interactively explore the data by examining the data or a selected subset at different levels of detail. Our case studies, which use datasets collected from two different cloud systems, show that this visual based approach is effective in identifying trends and anomalies of the systems.

  8. On the performances of computer vision algorithms on mobile platforms

    Science.gov (United States)

    Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.

    2012-01-01

    Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is a growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we have considered different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been done to compare the performance of the involved mobile platforms: Nokia N900, LG Optimus One, Samsung Galaxy SII.

  9. Evolving cellular automata to perform computations. Final technical report

    Energy Technology Data Exchange (ETDEWEB)

    Crutchfield, J.P.; Mitchell, M.

    1998-04-01

    The overall goals of the project are to determine the usefulness of genetic algorithms (GAs) in designing spatially extended parallel systems to perform computational tasks and to develop theoretical frameworks both for understanding the computation in the systems evolved by the GA and for understanding the evolutionary process by which successful systems are designed. In the original proposal the authors scheduled the first year of the project to be devoted to experimental grounding. During the first year they developed the simulation and graphics software necessary for doing experiments and analysis on one-dimensional cellular automata (CAs), and they performed extensive experiments and analysis concerning two computational tasks: density classification and synchronization. Details of these experiments and results, and a list of resulting publications, were given in the 1994-1995 report. The authors scheduled the second year to be devoted to theoretical development. (A third year, to be funded by the National Science Foundation, will be devoted to applications.) Accordingly, most of the effort during the second year was spent on theory, both of GAs and of the CAs that they evolve. A central notion is that of the computational strategy of a CA, which they formalize in terms of domains, particles, and particle interactions. This formalization builds on the computational mechanics framework developed by Crutchfield and Hanson for understanding intrinsic computation in spatially extended dynamical systems. They have made significant progress in the following areas: (1) statistical dynamics of GAs; (2) formalizing particle-based computation in cellular automata; and (3) computation in two-dimensional CAs.

  10. High performance computing and communications: FY 1996 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-05-16

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies` HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

  11. A High-Performance Communication Service for Parallel Servo Computing

    Directory of Open Access Journals (Sweden)

    Cheng Xin

    2010-11-01

    Full Text Available The complexity of algorithms for servo control in multi-dimensional, ultra-precise stage applications has made multi-processor parallel computing technology necessary. Considering the specific communication requirements of parallel servo computing, we propose a communication service scheme based on the VME bus, which provides high-performance data transmission and precise synchronization trigger support for the processors involved. The communication service is implemented on both the standard VME bus and a user-defined Internal Bus (IB), and can be redefined online. This paper introduces the parallel servo computing architecture and communication service, describes the structure and implementation details of each module in the service, and finally provides a data transmission model and analysis. Experimental results show that the communication service can provide high-speed data transmission with sub-nanosecond-level error in transmission latency, and synchronous triggering with nanosecond-level synchronization error. Moreover, the performance of the communication service is not affected by an increasing number of processors.

  12. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D.; Schiess, Adrian B.; Howell, Jamie; Baca, Michael J.; Partridge, L. Donald; Finnegan, Patrick Sean; Wolfley, Steven L.; Dagel, Daryl James; Spahn, Olga Blum; Harper, Jason C.; Pohl, Kenneth Roy; Mickel, Patrick R.; Lohn, Andrew; Marinella, Matthew

    2013-10-01

    The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing > 10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.

  13. Enabling High-Performance Computing as a Service

    KAUST Repository

    AbdelBaky, Moustafa

    2012-10-01

    With the right software infrastructure, clouds can provide scientists with as a service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a representative HPC application. © 2012 IEEE.

  14. High Performance Computing tools for the Integrated Tokamak Modelling project

    Energy Technology Data Exchange (ETDEWEB)

    Guillerminet, B., E-mail: bernard.guillerminet@cea.f [Association Euratom-CEA sur la Fusion, IRFM, DSM, CEA Cadarache (France); Plasencia, I. Campos [Instituto de Fisica de Cantabria (IFCA), CSIC, Santander (Spain); Haefele, M. [Universite Louis Pasteur, Strasbourg (France); Iannone, F. [EURATOM/ENEA Fusion Association, Frascati (Italy); Jackson, A. [University of Edinburgh (EPCC) (United Kingdom); Manduchi, G. [EURATOM/ENEA Fusion Association, Padova (Italy); Plociennik, M. [Poznan Supercomputing and Networking Center (PSNC) (Poland); Sonnendrucker, E. [Universite Louis Pasteur, Strasbourg (France); Strand, P. [Chalmers University of Technology (Sweden); Owsiak, M. [Poznan Supercomputing and Networking Center (PSNC) (Poland)

    2010-07-15

    Fusion Modelling and Simulation are very challenging and the High Performance Computing issues are addressed here. Toolset for jobs launching and scheduling, data communication and visualization have been developed by the EUFORIA project and used with a plasma edge simulation code.

  15. Seeking Solution: High-Performance Computing for Science. Background Paper.

    Science.gov (United States)

    Congress of the U.S., Washington, DC. Office of Technology Assessment.

    This is the second publication from the Office of Technology Assessment's assessment on information technology and research, which was requested by the House Committee on Science and Technology and the Senate Committee on Commerce, Science, and Transportation. The first background paper, "High Performance Computing & Networking for…

  16. Computer science of the high performance; Informatica del alto rendimiento

    Energy Technology Data Exchange (ETDEWEB)

    Moraleda, A.

    2008-07-01

    High performance computing is taking shape as a powerful accelerator of innovation, drastically reducing the waiting times for access to results and findings in a growing number of processes and activities as complex and important as medicine, genetics, pharmacology, the environment, natural resources management, or the simulation of complex processes in a wide variety of industries. (Author)

  17. Replica-Based High-Performance Tuple Space Computing

    DEFF Research Database (Denmark)

    Andric, Marina; De Nicola, Rocco; Lluch Lafuente, Alberto

    2015-01-01

    We present the tuple-based coordination language RepliKlaim, which enriches Klaim with primitives for replica-aware coordination. Our overall goal is to offer suitable solutions to the challenging problems of data distribution and locality in large-scale high performance computing. In particular,...

  18. Adaptive filters applied on radar signals

    OpenAIRE

    2013-01-01

    This master thesis has been performed at SAAB AB in Järfälla, Sweden. A radar warning receiver must alert the user when someone highlights it with radar signals. Radar signals used today vary and cover a wide frequency band. In order to detect all possible radar signals, the radar warning receiver must have a wide bandwidth. This results in high noise power in the radar warning receiver, so weak radar signals will be hard to detect or may go undetected. The aim of the thesis work was ...

  19. Performance measurement and modeling of component applications in a high performance computing environment : a case study.

    Energy Technology Data Exchange (ETDEWEB)

    Armstrong, Robert C.; Ray, Jaideep; Malony, A. (University of Oregon, Eugene, OR); Shende, Sameer (University of Oregon, Eugene, OR); Trebon, Nicholas D.

    2003-11-01

    We present a case study of performance measurement and modeling of a CCA (Common Component Architecture) component-based application in a high performance computing environment. We explore issues peculiar to component-based HPC applications and propose a performance measurement infrastructure for HPC based loosely on recent work done for Grid environments. A prototypical implementation of the infrastructure is used to collect data for three components in a scientific application and construct performance models for two of them. Both computational and message-passing performance are addressed.

  20. Studying an Eulerian Computer Model on Different High-performance Computer Platforms and Some Applications

    Science.gov (United States)

    Georgiev, K.; Zlatev, Z.

    2010-11-01

    The Danish Eulerian Model (DEM) is an Eulerian model for studying the transport of air pollutants on a large scale. Originally, the model was developed at the National Environmental Research Institute of Denmark. The model's computational domain covers Europe and neighbouring parts of the Atlantic Ocean, Asia, and Africa. If the DEM is to be applied on fine grids, its discretization leads to a huge computational problem, which implies that a model such as DEM must be run on high-performance computer architectures. The implementation and tuning of such a complex large-scale model on each different computer is a non-trivial task. Here, comparison results from running this model on different kinds of vector computers (CRAY C92A, Fujitsu, etc.), parallel computers with distributed memory (IBM SP, CRAY T3E, Beowulf clusters, Macintosh G4 clusters, etc.), parallel computers with shared memory (SGI Origin, SUN, etc.), and parallel computers with two levels of parallelism (IBM SMP, IBM BlueGene/P, clusters of multiprocessor nodes, etc.) are presented. The main idea in the parallel version of DEM is a domain partitioning approach. The effective use of the caches and hierarchical memories of modern computers is discussed, as well as the performance, speed-ups, and efficiency achieved. The parallel code of DEM, created with the standard MPI library, is highly portable and shows good efficiency and scalability on different kinds of vector and parallel computers. Some important applications of the model output are presented briefly.
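
    The domain-partitioning idea mentioned above can be illustrated in a few lines: the grid rows are split into contiguous strips, one per MPI rank, and each strip is extended by the halo rows needed by a nearest-neighbour advection stencil. This is a hedged sketch of the general technique, not the DEM source; in the real code each rank would exchange its halo rows with its neighbours via MPI.

    ```python
    def partition_rows(n_rows, n_ranks, halo=1):
        """Split n_rows of a 2-D grid into contiguous strips, one per rank.

        For every rank, return the rows it owns and the extended strip that
        includes the halo rows required by a nearest-neighbour stencil.
        """
        base, extra = divmod(n_rows, n_ranks)
        strips, start = [], 0
        for rank in range(n_ranks):
            rows = base + (1 if rank < extra else 0)
            own = (start, start + rows)
            ext = (max(0, own[0] - halo), min(n_rows, own[1] + halo))
            strips.append({"rank": rank, "own": own, "with_halo": ext})
            start += rows
        return strips

    # Illustrative grid and rank counts only.
    for strip in partition_rows(n_rows=480, n_ranks=7):
        print(strip)
    ```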

  1. The performance of low-cost commercial cloud computing as an alternative in computational chemistry.

    Science.gov (United States)

    Thackston, Russell; Fortenberry, Ryan C

    2015-05-05

    The growth of commercial cloud computing (CCC) as a viable means of computational infrastructure is largely unexplored for the purposes of quantum chemistry. In this work, the PSI4 suite of computational chemistry programs is installed on five different types of Amazon Web Services CCC platforms. The performance for a set of electronically excited state single-point energies is compared between these CCC platforms and typical, "in-house" physical machines. Further considerations are made for the number of cores or virtual CPUs (vCPUs, for the CCC platforms), but no considerations are made for full parallelization of the program (even though parallelization of the BLAS library is implemented), complete high-performance computing cluster utilization, or steal time. Even with this most pessimistic view of the computations, CCC resources are shown to be more cost effective for significant numbers of typical quantum chemistry computations. Large numbers of large computations are still best utilized by more traditional means, but smaller-scale research may be more effectively undertaken through CCC services.

  2. Numerics of High Performance Computers and Benchmark Evaluation of Distributed Memory Computers

    Directory of Open Access Journals (Sweden)

    H. S. Krishna

    2004-07-01

    Full Text Available The internal representation of numerical data, and the speed with which they are manipulated to generate the desired result through efficient utilisation of the central processing unit, memory, and communication links, are essential aspects of all high-performance scientific computations. Machine parameters, in particular, reveal the accuracy and error bounds of computation required for performance tuning of codes. This paper reports diagnosis of machine parameters, measurement of the computing power of several workstations, serial and parallel computers, and a component-wise test procedure for distributed memory computers. Hierarchical memory structure is illustrated by block copying and unrolling techniques. Locality of reference for cache reuse of data is amply demonstrated by fast Fourier transform codes. The cache- and register-blocking technique results in their optimum utilisation, with a consequent gain in throughput during vector-matrix operations. Implementation of these memory management techniques reduces cache inefficiency loss, which is known to be proportional to the number of processors. Of the Linux clusters ANUP16, HPC22, and HPC64, it has been found from the measurement of intrinsic parameters and from an application benchmark of a multi-block Euler code test run that ANUP16 is suitable for problems that exhibit fine-grained parallelism. The delivered performance of ANUP16 is of immense utility for developing high-end PC clusters like HPC64 and customised parallel computers, with the added advantage of speed and a high degree of parallelism.
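
    The block-copying and cache-blocking techniques referred to above amount to restructuring loops so that small sub-blocks of the operands are reused while they are still resident in cache. A minimal, hedged sketch (the block size is illustrative; production code would rely on a tuned BLAS rather than Python loops):

    ```python
    import numpy as np

    def blocked_matmul(a, b, block=64):
        """Cache-blocked matrix multiply: operate on block x block tiles so the
        working set of each innermost update fits in cache."""
        n, k = a.shape
        k2, m = b.shape
        assert k == k2
        c = np.zeros((n, m), dtype=a.dtype)
        for i0 in range(0, n, block):
            for j0 in range(0, m, block):
                for k0 in range(0, k, block):
                    c[i0:i0 + block, j0:j0 + block] += (
                        a[i0:i0 + block, k0:k0 + block] @ b[k0:k0 + block, j0:j0 + block]
                    )
        return c

    a = np.random.rand(256, 200)
    b = np.random.rand(200, 192)
    print(np.allclose(blocked_matmul(a, b), a @ b))
    ```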

  3. The design of linear algebra libraries for high performance computers

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, J.J. [Tennessee Univ., Knoxville, TN (United States). Dept. of Computer Science]|[Oak Ridge National Lab., TN (United States); Walker, D.W. [Oak Ridge National Lab., TN (United States)

    1993-08-01

    This paper discusses the design of linear algebra libraries for high performance computers. Particular emphasis is placed on the development of scalable algorithms for MIMD distributed memory concurrent computers. A brief description of the EISPACK, LINPACK, and LAPACK libraries is given, followed by an outline of ScaLAPACK, which is a distributed memory version of LAPACK currently under development. The importance of block-partitioned algorithms in reducing the frequency of data movement between different levels of hierarchical memory is stressed. The use of such algorithms helps reduce the message startup costs on distributed memory concurrent computers. Other key ideas in our approach are the use of distributed versions of the Level 3 Basic Linear Algebra Subprograms (BLAS) as computational building blocks, and the use of Basic Linear Algebra Communication Subprograms (BLACS) as communication building blocks. Together the distributed BLAS and the BLACS can be used to construct higher-level algorithms, and hide many details of the parallelism from the application developer. The block-cyclic data distribution is described, and adopted as a good way of distributing block-partitioned matrices. Block-partitioned versions of the Cholesky and LU factorizations are presented, and optimization issues associated with the implementation of the LU factorization algorithm on distributed memory concurrent computers are discussed, together with its performance on the Intel Delta system. Finally, approaches to the design of library interfaces are reviewed.
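
    The block-cyclic distribution described above can be pinned down with a one-line mapping: blocks of the matrix are dealt out to a two-dimensional process grid in round-robin order, which balances work while keeping whole blocks local for the Level 3 BLAS. A small sketch under the usual ScaLAPACK-style conventions (the grid and block sizes are illustrative):

    ```python
    def block_cyclic_owner(i, j, mb, nb, p_rows, p_cols):
        """Process-grid coordinates owning matrix element (i, j) when the matrix
        is split into mb x nb blocks distributed block-cyclically over a
        p_rows x p_cols process grid."""
        return (i // mb) % p_rows, (j // nb) % p_cols

    # 4 x 2 process grid with 32 x 32 blocks: where does element (100, 70) live?
    print(block_cyclic_owner(100, 70, 32, 32, 4, 2))   # -> (3, 0)
    ```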

  4. PERFORMANCE IMPROVEMENT IN CLOUD COMPUTING USING RESOURCE CLUSTERING

    Directory of Open Access Journals (Sweden)

    G. Malathy

    2013-01-01

    Full Text Available Cloud computing is a computing paradigm in which various tasks are assigned to a combination of connections, software, and services that can be accessed over the network. The computing resources and services can be efficiently delivered and utilized, making the vision of computing as a utility realizable. In various applications, services with a large number of tasks have to execute with minimal intertask communication. The applications are likely to exhibit different patterns and levels of resource usage, and the distributed resources organize into various topologies for information and query dissemination. In a distributed system, resource discovery is a significant process for finding appropriate nodes. Earlier resource discovery mechanisms in cloud systems rely on recent observations. In this study, resource usage distributions for groups of nodes with identical resource usage patterns are identified and kept as clusters, an approach named resource clustering. The resource clustering approach is modeled using CloudSim, a toolkit for modeling and simulating cloud computing environments, and the evaluation shows improved performance of the system in the usage of resources. Results show that resource clusters are able to provide high accuracy for resource discovery.

  5. Performance Comparision of Dynamic Load Balancing Algorithm in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Yogita kaushik

    2016-08-01

    Full Text Available Cloud computing, as a distributed paradigm, has the potential to transform a large part of the industry. Cloud computing draws together several technologies, such as distributed computing, virtualization, software, web services, and networking. We review current cloud computing technologies and indicate the main challenges for their future development, among which the load balancing problem stands out and attracts our attention. The concepts of load balancing in networking and in a cloud environment are widely different. In networking, load balancing is concerned with avoiding the overloading and underloading of any server; in cloud computing it involves additional metrics such as security, reliability, throughput, fault tolerance, on-demand service, and cost. Through these metrics we address the problem in distributed systems where some nodes sit waiting for requests while others are heavily loaded, which increases response time and degrades performance. In this paper we first classify algorithms as static or dynamic, and then analyze the dynamic algorithms applied in dynamic cloud environments. We present a comparison of various dynamic algorithms, including the honey bee algorithm, the throttled algorithm, and the biased random algorithm, and describe which performs best in a cloud environment with respect to the metrics mainly used: performance, resource utilization, and minimum cost. The main focus of the paper is the analysis of various load balancing algorithms and their applicability in a cloud environment.
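
    As a concrete reference point for one of the policies compared in this record, a minimal sketch of a throttled load balancer: each VM accepts at most a fixed number of concurrent tasks, new requests go to the first VM with spare capacity, and the rest wait in a queue. The class and parameter names are illustrative, not taken from the paper.

    ```python
    from collections import deque

    class ThrottledBalancer:
        """Toy throttled load balancer: per-VM concurrency limit plus a wait queue."""

        def __init__(self, num_vms, limit=2):
            self.active = [0] * num_vms      # tasks currently running on each VM
            self.limit = limit
            self.waiting = deque()           # task ids waiting for a free slot

        def request(self, task_id):
            for vm, load in enumerate(self.active):
                if load < self.limit:
                    self.active[vm] += 1
                    return vm                # dispatched immediately
            self.waiting.append(task_id)     # all VMs saturated
            return None

        def complete(self, vm):
            self.active[vm] -= 1
            if self.waiting:                 # hand the freed slot to a queued task
                self.active[vm] += 1
                return self.waiting.popleft()
            return None

    lb = ThrottledBalancer(num_vms=2, limit=1)
    print([lb.request(t) for t in range(4)])   # -> [0, 1, None, None]
    print(lb.complete(0))                      # queued task 2 gets the freed slot
    ```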

  6. Establishing performance requirements of computer based systems subject to uncertainty

    Energy Technology Data Exchange (ETDEWEB)

    Robinson, D.

    1997-02-01

    An organized systems design approach is dictated by the increasing complexity of computer based systems. Computer based systems are unique in many respects but share many of the same problems that have plagued design engineers for decades. The design of complex systems is difficult at best, but as a design becomes intensely dependent on the computer processing of external and internal information, the design process quickly borders on chaos. This situation is exacerbated by the requirement that these systems operate with a minimal quantity of information, generally corrupted by noise, regarding the current state of the system. Establishing performance requirements for such systems is particularly difficult. This paper briefly sketches a general systems design approach with emphasis on the design of computer based decision processing systems subject to parameter and environmental variation. The approach will be demonstrated with application to an on-board diagnostic (OBD) system for automotive emissions systems now mandated by the state of California and the Federal Clean Air Act. The emphasis is on an approach for establishing probabilistically based performance requirements for computer based systems.

  7. Opportunities and challenges of high-performance computing in chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Guest, M.F.; Kendall, R.A.; Nichols, J.A. [eds.] [and others

    1995-06-01

    The field of high-performance computing is developing at an extremely rapid pace. Massively parallel computers offering orders of magnitude increase in performance are under development by all the major computer vendors. Many sites now have production facilities that include massively parallel hardware. Molecular modeling methodologies (both quantum and classical) are also advancing at a brisk pace. The transition of molecular modeling software to a massively parallel computing environment offers many exciting opportunities, such as the accurate treatment of larger, more complex molecular systems in routine fashion, and a viable, cost-effective route to study physical, biological, and chemical `grand challenge` problems that are impractical on traditional vector supercomputers. This will have a broad effect on all areas of basic chemical science at academic research institutions and chemical, petroleum, and pharmaceutical industries in the United States, as well as chemical waste and environmental remediation processes. But, this transition also poses significant challenges: architectural issues (SIMD, MIMD, local memory, global memory, etc.) remain poorly understood and software development tools (compilers, debuggers, performance monitors, etc.) are not well developed. In addition, researchers that understand and wish to pursue the benefits offered by massively parallel computing are often hindered by lack of expertise, hardware, and/or information at their site. A conference and workshop organized to focus on these issues was held at the National Institute of Health, Bethesda, Maryland (February 1993). This report is the culmination of the organized workshop. The main conclusion: a drastic acceleration in the present rate of progress is required for the chemistry community to be positioned to exploit fully the emerging class of Teraflop computers, even allowing for the significant work to date by the community in developing software for parallel architectures.

  8. Static Memory Deduplication for Performance Optimization in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Gangyong Jia

    2017-04-01

    Full Text Available In a cloud computing environment, the number of virtual machines (VMs on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand of memory capacity and subsequent increase in the energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD technique which can reduce memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of the response time is negligible.

  9. Static Memory Deduplication for Performance Optimization in Cloud Computing.

    Science.gov (United States)

    Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan

    2017-04-27

    In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand of memory capacity and subsequent increase in the energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of the response time is negligible.
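
    The page-sharing step at the heart of the SMD technique can be sketched in a few lines: fixed-size pages are hashed, identical pages are stored once, and a page map records which unique copy each original page refers to. This is a hedged illustration of page-level deduplication in general, not the authors' implementation; the paper's contribution is performing this detection offline and only over the code segment.

    ```python
    import hashlib

    PAGE_SIZE = 4096

    def deduplicate_pages(segment: bytes):
        """Offline page-level deduplication sketch: identical 4 KiB pages are kept
        once and referenced by index.  Returns (unique_pages, page_map)."""
        unique, index_of, page_map = [], {}, []
        for off in range(0, len(segment), PAGE_SIZE):
            page = segment[off:off + PAGE_SIZE]
            digest = hashlib.sha256(page).digest()
            if digest not in index_of:
                index_of[digest] = len(unique)
                unique.append(page)
            page_map.append(index_of[digest])
        return unique, page_map

    # Toy "code segment": 8 pages but only 3 distinct page contents.
    segment = b"A" * PAGE_SIZE * 3 + b"B" * PAGE_SIZE * 4 + b"C" * PAGE_SIZE
    unique, page_map = deduplicate_pages(segment)
    print(len(unique), page_map)   # -> 3 [0, 0, 0, 1, 1, 1, 1, 2]
    ```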

  10. Maximizing sparse matrix vector product performance in MIMD computers

    Energy Technology Data Exchange (ETDEWEB)

    McLay, R.T.; Kohli, H.S.; Swift, S.L.; Carey, G.F.

    1994-12-31

    A considerable component of the computational effort involved in conjugate gradient solution of structured sparse matrix systems is expended during the Matrix-Vector Product (MVP), and hence it is the focus of most efforts at improving performance. Such efforts are hindered on MIMD machines due to constraints on memory, cache and speed of memory-cpu data transfer. This paper describes a strategy for maximizing the performance of the local computations associated with the MVP. The method focuses on single stride memory access, and the efficient use of cache by pre-loading it with data that is re-used while bypassing it for other data. The algorithm is designed to behave optimally for varying grid sizes and number of unknowns per gridpoint. Results from an assembly language implementation of the strategy on the iPSC/860 show a significant improvement over the performance using FORTRAN.
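
    The kernel being optimised in this record is easy to state precisely: with the matrix held in compressed sparse row (CSR) form, each output element is a dot product over that row's stored nonzeros, traversed with unit stride. A minimal sketch of the CSR matrix-vector product (pure Python/NumPy, not the paper's iPSC/860 assembly implementation):

    ```python
    import numpy as np

    def csr_matvec(data, indices, indptr, x):
        """y = A @ x with A stored in CSR format (values, column indices, row pointers)."""
        n_rows = len(indptr) - 1
        y = np.zeros(n_rows, dtype=data.dtype)
        for i in range(n_rows):
            start, stop = indptr[i], indptr[i + 1]
            # data[start:stop] is read with unit stride; x is gathered indirectly.
            y[i] = np.dot(data[start:stop], x[indices[start:stop]])
        return y

    # 3x3 example:  [[2, 0, 1], [0, 3, 0], [4, 0, 5]]
    data = np.array([2.0, 1.0, 3.0, 4.0, 5.0])
    indices = np.array([0, 2, 1, 0, 2])
    indptr = np.array([0, 2, 3, 5])
    print(csr_matvec(data, indices, indptr, np.array([1.0, 1.0, 1.0])))  # -> [3. 3. 9.]
    ```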

  11. Component-based software for high-performance scientific computing

    Science.gov (United States)

    Alexeev, Yuri; Allan, Benjamin A.; Armstrong, Robert C.; Bernholdt, David E.; Dahlgren, Tamara L.; Gannon, Dennis; Janssen, Curtis L.; Kenny, Joseph P.; Krishnan, Manojkumar; Kohl, James A.; Kumfert, Gary; Curfman McInnes, Lois; Nieplocha, Jarek; Parker, Steven G.; Rasmussen, Craig; Windus, Theresa L.

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  12. Performance Assessment of OVERFLOW on Distributed Computing Environment

    Science.gov (United States)

    Djomehri, M. Jahed; Rizk, Yehia M.

    2000-01-01

    The aerodynamic computer code, OVERFLOW, with a multi-zone overset grid feature, has been parallelized to enhance its performance on distributed and shared memory paradigms. Practical application benchmarks have been set to assess the efficiency of code's parallelism on high-performance architectures. The code's performance has also been experimented with in the context of the distributed computing paradigm on distant computer resources using the Information Power Grid (IPG) toolkit, Globus. Two parallel versions of the code, namely OVERFLOW-MPI and -MLP, have developed around the natural coarse grained parallelism inherent in a multi-zonal domain decomposition paradigm. The algorithm invokes a strategy that forms a number of groups, each consisting of a zone, a cluster of zones and/or a partition of a large zone. Each group can be thought of as a process with one or multithreads assigned to it and that all groups run in parallel. The -MPI version of the code uses explicit message-passing based on the standard MPI library for sending and receiving interzonal boundary data across processors. The -MLP version employs no message-passing paradigm; the boundary data is transferred through the shared memory. The -MPI code is suited for both distributed and shared memory architectures, while the -MLP code can only be used on shared memory platforms. The IPG applications are implemented by the -MPI code using the Globus toolkit. While a computational task is distributed across multiple computer resources, the parallelism can be explored on each resource alone. Performance studies are achieved with some practical aerodynamic problems with complex geometries, consisting of 2.5 up to 33 million grid points and a large number of zonal blocks. The computations were executed primarily on SGI Origin 2000 multiprocessors and on the Cray T3E. OVERFLOW's IPG applications are carried out on NASA homogeneous metacomputing machines located at three sites, Ames, Langley and Glenn. Plans

  13. Digital Beamforming Synthetic Aperture Radar (DBSAR): Performance Analysis During the Eco-3D 2011 and Summer 2012 Flight Campaigns

    Science.gov (United States)

    Rincon, Rafael F.; Fatoyinbo, Temilola; Carter, Lynn; Ranson, K. Jon; Vega, Manuel; Osmanoglu, Batuhan; Lee, SeungKuk; Sun, Guoqing

    2014-01-01

    The Digital Beamforming Synthetic Aperture Radar (DBSAR) is a state-of-the-art airborne radar developed at NASA/Goddard for the implementation and testing of digital beamforming techniques applicable to Earth and planetary sciences. The DBSAR measurements have been employed to study the estimation of vegetation biomass and structure, critical parameters in the study of the carbon cycle, and the measurement of geological features, to explore its applicability to planetary science by measuring planetary analogue targets. The instrument flew two test campaigns over the East coast of the United States in 2011 and 2012. During the campaigns the instrument operated in full polarimetric mode, collecting data from vegetation and topographic features.
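
    Digital beamforming of the kind DBSAR performs can be illustrated, in a much simplified narrowband form, by phase-aligning and summing the samples of a uniform linear array. The sketch below is a generic delay-and-sum beamformer under idealised assumptions (half-wavelength spacing, single plane wave), not DBSAR's actual processing chain.

    ```python
    import numpy as np

    def steering_vector(n_elem, d_over_lambda, theta_deg):
        """Array response of an n-element uniform linear array for look angle theta."""
        k = np.arange(n_elem)
        return np.exp(2j * np.pi * d_over_lambda * k * np.sin(np.radians(theta_deg)))

    def beamform(snapshots, theta_deg, d_over_lambda=0.5):
        """Phase-align and sum per-element samples; snapshots has shape (n_elem, n_samples)."""
        n_elem = snapshots.shape[0]
        w = steering_vector(n_elem, d_over_lambda, theta_deg) / n_elem
        return w.conj() @ snapshots

    # A plane wave arriving from 20 degrees gives the highest output power
    # when the digital beam is steered to 20 degrees.
    rng = np.random.default_rng(1)
    signal = np.exp(2j * np.pi * 0.01 * np.arange(256))
    snapshots = np.outer(steering_vector(8, 0.5, 20.0), signal)
    snapshots += 0.1 * (rng.standard_normal((8, 256)) + 1j * rng.standard_normal((8, 256)))
    for look in (0.0, 10.0, 20.0):
        print(look, round(float(np.mean(np.abs(beamform(snapshots, look)) ** 2)), 3))
    ```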

  14. Synthetic aperture radar capabilities in development

    Energy Technology Data Exchange (ETDEWEB)

    Miller, M. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    The Imaging and Detection Program (IDP) within the Laser Program is currently developing an X-band Synthetic Aperture Radar (SAR) to support the Joint US/UK Radar Ocean Imaging Program. The radar system will be mounted in the program's Airborne Experimental Test-Bed (AETB), where the initial mission is to image ocean surfaces and better understand the physics of low grazing angle backscatter. The Synthetic Aperture Radar presentation will cover its overall functionality and briefly discuss the AETB's capabilities. Vital subsystems including radar, computer, navigation, antenna stabilization, and SAR focusing algorithms will be examined in more detail.

  15. Overview of Parallel Platforms for Common High Performance Computing

    Directory of Open Access Journals (Sweden)

    T. Fryza

    2012-04-01

    Full Text Available The paper deals with various parallel platforms used for high performance computing in the signal processing domain. More precisely, methods exploiting multicore central processing units, such as the message passing interface and OpenMP, are taken into account. The properties of the programming methods are experimentally proved in the application of a fast Fourier transform and a discrete cosine transform, and they are compared with the possibilities of MATLAB's built-in functions and Texas Instruments digital signal processors with very long instruction word architectures. New FFT and DCT implementations were proposed and tested. The implementation phase was compared with CPU-based computing methods and with the possibilities of the Texas Instruments digital signal processing library on C6747 floating-point DSPs. The optimal combination of computing methods in the signal processing domain and the implementation of new, fast routines are proposed as well.

  16. ABOUT THE SUITABILITY OF CLOUDS IN HIGH-PERFORMANCE COMPUTING

    Directory of Open Access Journals (Sweden)

    Harald Richter

    2016-01-01

    Full Text Available Cloud computing has become the ubiquitous computing and storage paradigm. It is also attractive for scientists, because they no longer have to care for their own IT infrastructure, but can outsource it to a Cloud Service Provider of their choice. However, for the case of High-Performance Computing (HPC) in a cloud, as is needed for simulations or Big Data analysis, things get more intricate, because HPC codes must stay highly efficient, even when executed by many virtual cores (vCPUs). Older clouds or new standard clouds can fulfil this only under special precautions, which are given in this article. The results can be extrapolated to cloud OSes other than OpenStack and to codes other than OpenFOAM, which were used as examples.

  17. Developing Computer Network Based on EIGRP Performance Comparison and OSPF

    Directory of Open Access Journals (Sweden)

    Lalu Zazuli Azhar Mardedi

    2015-09-01

    Full Text Available One of the computer network technologies growing most rapidly at this time is the internet. In building networks, a routing mechanism is needed to integrate all computers with a high degree of flexibility. Routing is a major factor in network performance. With many routing protocols available, network administrators need a reference comparison of the performance of each type of routing protocol. Two such protocols are the Enhanced Interior Gateway Routing Protocol (EIGRP) and Open Shortest Path First (OSPF). This paper focuses only on the performance of both routing protocols on a hybrid network topology. The existing network's internet access speed averages 8.0 KB/sec with 2 MB of bandwidth. The backbone network is shared by two academies, the Academy of Information Management and Computer (AIMC) and the Academy of Secretary and Management (ASM), with 2041 clients, which caused slow internet access. To address this, an analysis and comparison of the performance of EIGRP and OSPF is carried out. The simulation software Cisco Packet Tracer 6.0.1 is used to obtain the values and verify the results.

  18. Scout: high-performance heterogeneous computing made simple

    Energy Technology Data Exchange (ETDEWEB)

    Jablin, James [Los Alamos National Laboratory; Mc Cormick, Patrick [Los Alamos National Laboratory; Herlihy, Maurice [BROWN UNIV.

    2011-01-26

    Researchers must often write their own simulation and analysis software. During this process they simultaneously confront both computational and scientific problems. Current strategies for aiding the generation of performance-oriented programs do not abstract the software development from the science. Furthermore, the problem is becoming increasingly complex and pressing with the continued development of many-core and heterogeneous (CPU-GPU) architectures. To achieve high performance, scientists must expertly navigate both software and hardware. Co-design between computer scientists and research scientists can alleviate but not solve this problem. The science community requires better tools for developing, optimizing, and future-proofing codes, allowing scientists to focus on their research while still achieving high computational performance. Scout is a parallel programming language and extensible compiler framework targeting heterogeneous architectures. It provides the abstraction required to buffer scientists from the constantly-shifting details of hardware while still realizing high performance by encapsulating software and hardware optimization within a compiler framework.

  19. High performance stream computing for particle beam transport simulations

    Energy Technology Data Exchange (ETDEWEB)

    Appleby, R; Bailey, D; Higham, J; Salt, M [School of Physics and Astronomy, University of Manchester, Oxford Road, Manchester, M13 9PL (United Kingdom)], E-mail: Robert.Appleby@manchester.ac.uk, E-mail: David.Bailey-2@manchester.ac.uk

    2008-07-15

    Understanding modern particle accelerators requires simulating charged particle transport through the machine elements. These simulations can be very time consuming due to the large number of particles and the need to consider many turns of a circular machine. Stream computing offers an attractive way to dramatically improve the performance of such simulations by calculating the simultaneous transport of many particles using dedicated hardware. Modern Graphics Processing Units (GPUs) are powerful and affordable stream computing devices. The results of simulations of particle transport through the booster-to-storage-ring transfer line of the DIAMOND synchrotron light source using an NVidia GeForce 7900 GPU are compared to the standard transport code MAD. It is found that particle transport calculations are suitable for stream processing and large performance increases are possible. The accuracy and potential speed gains are compared and the prospects for future work in the area are discussed.
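
    The data-parallel pattern the authors exploit on the GPU can be shown with a toy transfer-line model: each beamline element is a small transfer matrix, and applying it to a whole bunch of particles at once is a single dense product over the particle axis. The sketch below uses NumPy as a stand-in for the stream processor and thin-lens/drift matrices as stand-ins for the real machine elements (all element parameters are illustrative).

    ```python
    import numpy as np

    def drift(length):
        """2x2 transfer matrix of a drift of given length acting on (x, x')."""
        return np.array([[1.0, length], [0.0, 1.0]])

    def thin_quad(focal_length):
        """2x2 transfer matrix of a thin quadrupole of given focal length."""
        return np.array([[1.0, 0.0], [-1.0 / focal_length, 1.0]])

    def transport(particles, elements):
        """Push every particle through every element; particles has shape (2, n)."""
        for m in elements:
            particles = m @ particles        # one product moves the whole bunch
        return particles

    rng = np.random.default_rng(2)
    bunch = rng.normal(scale=[[1e-3], [1e-4]], size=(2, 100_000))   # x [m], x' [rad]
    line = [drift(2.0), thin_quad(5.0), drift(2.0), thin_quad(-5.0), drift(2.0)]
    out = transport(bunch, line)
    print("rms x before/after [m]:", bunch[0].std(), out[0].std())
    ```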

  20. High performance stream computing for particle beam transport simulations

    Science.gov (United States)

    Appleby, R.; Bailey, D.; Higham, J.; Salt, M.

    2008-07-01

    Understanding modern particle accelerators requires simulating charged particle transport through the machine elements. These simulations can be very time consuming due to the large number of particles and the need to consider many turns of a circular machine. Stream computing offers an attractive way to dramatically improve the performance of such simulations by calculating the simultaneous transport of many particles using dedicated hardware. Modern Graphics Processing Units (GPUs) are powerful and affordable stream computing devices. The results of simulations of particle transport through the booster-to-storage-ring transfer line of the DIAMOND synchrotron light source using an NVidia GeForce 7900 GPU are compared to the standard transport code MAD. It is found that particle transport calculations are suitable for stream processing and large performance increases are possible. The accuracy and potential speed gains are compared and the prospects for future work in the area are discussed.

  1. Weather Radar Stations

    Data.gov (United States)

    Department of Homeland Security — These data represent Next-Generation Radar (NEXRAD) and Terminal Doppler Weather Radar (TDWR) weather radar stations within the US. The NEXRAD radar stations are...

  2. Coordinated Radar Resource Management for Networked Phased Array Radars

    Science.gov (United States)

    2014-12-01

    computed, and the detection of a target is determined based on a Monte Carlo test. For each successful target confirmation, a measurement report is generated with appropriate random perturbations added to the detection measurements.
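
    The Monte Carlo detection step can be made concrete with a small sketch: draw one square-law sample for a nonfluctuating target in complex Gaussian noise, compare it to the threshold implied by a chosen false-alarm probability, and perturb the reported measurement of a confirmed target. The SNR, Pfa, and measurement-error values below are illustrative assumptions, not numbers from the report.

    ```python
    import numpy as np

    def monte_carlo_detect(snr_linear, pfa=1e-6, rng=None):
        """Single-pulse detection test: square-law sample of signal-plus-noise
        against the exponential-noise threshold for the requested Pfa."""
        rng = rng or np.random.default_rng()
        threshold = -np.log(pfa)                       # unit-mean exponential noise
        noise = (rng.standard_normal() + 1j * rng.standard_normal()) / np.sqrt(2.0)
        sample = abs(np.sqrt(snr_linear) + noise) ** 2
        return sample > threshold

    def measurement_report(true_range_m, true_azimuth_deg, rng, sigma_r=10.0, sigma_az=0.2):
        """Perturb the true target state to emulate measurement noise."""
        return (true_range_m + sigma_r * rng.standard_normal(),
                true_azimuth_deg + sigma_az * rng.standard_normal())

    rng = np.random.default_rng(3)
    trials = 10_000
    pd = np.mean([monte_carlo_detect(10 ** (13 / 10), rng=rng) for _ in range(trials)])
    print("estimated Pd at 13 dB single-pulse SNR:", pd)
    print("example report for a confirmed target:", measurement_report(42_000.0, 87.5, rng))
    ```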

  3. Effect of mindfulness meditation on brain-computer interface performance

    OpenAIRE

    Tan, Lee-Fan; Dienes, Zoltan; Jansari, Ashok S.; Goh, Sing-Yau

    2014-01-01

    Electroencephalogram based Brain-Computer Interfaces (BCIs) enable stroke and motor neuron disease patients to communicate and control devices. Mindfulness meditation has been claimed to enhance metacognitive regulation. The current study explores whether mindfulness meditation training can thus improve the performance of BCI users. To eliminate the possibility of expectation of improvement influencing the results, we introduced a music training condition. A norming study found...

  4. The role of interpreters in high performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Naumann, Axel; /CERN; Canal, Philippe; /Fermilab

    2008-01-01

    Compiled code is fast, interpreted code is slow. There is not much we can do about it, and it is the reason why the use of interpreters in high performance computing is usually restricted to job submission. We show where interpreters make sense even in the context of analysis code, and what aspects have to be taken into account to make this combination a success.

  5. Simulation of Space-borne Radar Observation from High Resolution Cloud Model - for GPM Dual frequency Precipitation Radar -

    Science.gov (United States)

    Kim, H.; Meneghini, R.; Jones, J.; Liao, L.

    2011-12-01

    A comprehensive space-borne radar simulator has been developed to support active microwave sensor satellite missions. The two major objectives of this study are: 1) to develop a radar simulator optimized for the Dual-frequency Precipitation Radar (KuPR and KaPR) on the Global Precipitation Measurement Mission satellite (GPM-DPR) and 2) to generate the synthetic test datasets for DPR algorithm development. This simulator consists of two modules: a DPR scanning configuration module and a forward module that generates atmospheric and surface radar observations. To generate realistic DPR test data, the scanning configuration module specifies the technical characteristics of DPR sensor and emulates the scanning geometry of the DPR with a inner swath of about 120 km, which contains matched-beam data from both frequencies, and an outer swath from 120 to 245 km over which only Ku-band data will be acquired. The second module is a forward model used to compute radar observables (reflectivity, attenuation and polarimetric variables) from input model variables including temperature, pressure and water content (rain water, cloud water, cloud ice, snow, graupel and water vapor) over the radar resolution volume. Presently, the input data to the simulator come from the Goddard Cumulus Ensemble (GCE) and Weather Research and Forecast (WRF) models where a constant mass density is assumed for each species with a particle size distribution given by an exponential distribution with fixed intercept parameter (N0) and a slope parameter (Λ) determined from the equivalent water content. Although the model data do not presently contain mixed phase hydrometeors, the Yokoyama-Tanaka melting model is used along with the Bruggeman effective dielectric constant to replace rain and snow particles, where both are present, with mixed phase particles while preserving the snow/water fraction. For testing one of the DPR retrieval algorithms, the Surface Reference Technique (SRT), the simulator uses
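
    The microphysical forward step described above has a closed form in the Rayleigh regime: with N(D) = N0 exp(-ΛD) and a fixed intercept N0, the slope follows from the rain-water content and the reflectivity factor is the sixth moment of the distribution. A worked sketch (the Marshall-Palmer intercept and the pure-rain, Rayleigh-scattering assumptions below are illustrative simplifications of the simulator's full forward model):

    ```python
    import numpy as np

    RHO_W = 1000.0   # density of liquid water [kg m^-3]
    N0 = 8.0e6       # assumed fixed intercept of the exponential PSD [m^-4]

    def slope_from_water_content(w_kg_m3):
        """Invert w = (pi/6) * rho_w * N0 * Gamma(4) / Lambda^4 for the slope Lambda [m^-1]."""
        return (np.pi * RHO_W * N0 / w_kg_m3) ** 0.25

    def reflectivity_dbz(w_kg_m3):
        """Rayleigh reflectivity factor Z = N0 * Gamma(7) / Lambda^7, expressed in dBZ."""
        lam = slope_from_water_content(w_kg_m3)
        z_m6_per_m3 = 720.0 * N0 / lam ** 7
        return 10.0 * np.log10(z_m6_per_m3 * 1.0e18)   # convert m^6 m^-3 to mm^6 m^-3

    for w_g_m3 in (0.1, 0.5, 1.0, 3.0):
        print(f"{w_g_m3:4.1f} g/m^3 rain water  ->  {reflectivity_dbz(w_g_m3 * 1e-3):5.1f} dBZ")
    ```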

  6. Integrating Reconfigurable Hardware-Based Grid for High Performance Computing

    Directory of Open Access Journals (Sweden)

    Julio Dondo Gazzano

    2015-01-01

    Full Text Available FPGAs have shown several characteristics that make them very attractive for high performance computing (HPC). The impressive speed-up factors that they are able to achieve, the reduced power consumption, and the ease and flexibility of the design process, with fast iterations between consecutive versions, are examples of the benefits obtained with their use. However, there are still some difficulties when using reconfigurable platforms as accelerators that need to be addressed: the need for an in-depth application study to identify potential acceleration, the lack of tools for the deployment of computational problems on distributed hardware platforms, and the low portability of components, among others. This work proposes a complete grid infrastructure for distributed high performance computing based on dynamically reconfigurable FPGAs. In addition, a set of services designed to facilitate application deployment is described. An example application and a comparison with other hardware and software implementations are shown. Experimental results show that the proposed architecture offers encouraging advantages for the deployment of high performance distributed applications while simplifying the development process.

  7. Computational Approach for Multi Performances Optimization of EDM

    Directory of Open Access Journals (Sweden)

    Yusoff Yusliza

    2016-01-01

    Full Text Available This paper proposes a new computational approach for obtaining optimal parameters of multi-performance EDM. Regression and an artificial neural network (ANN) are used as the modeling techniques, while a multi-objective genetic algorithm (multiGA) is used as the optimization technique. An L256 orthogonal array is implemented in the procedure for network function and network architecture selection. Experimental studies are carried out to verify the machining performances suggested by this approach. The highest MRR value obtained from OrthoANN-MPR-MultiGA is 205.619 mg/min and the lowest Ra value is 0.0223 μm.

  8. Solving Human Performance Problems with Computers. A Case Study: Building an Electronic Performance Support System.

    Science.gov (United States)

    Raybould, Barry

    1990-01-01

    Describes the design of an electronic performance support system (PSS) that was developed to help sales and support personnel access relevant information needed for good job performance. Highlights include expert systems, databases, interactive video discs, formatting information online, information retrieval techniques, HyperCard, computer-based…

  9. A Component Architecture for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, D E; Elwasif, W R; Kohl, J A; Epperly, T G W

    2003-01-21

    The Common Component Architecture (CCA) provides a means for developers to manage the complexity of large-scale scientific software systems and to move toward a "plug and play" environment for high-performance computing. The CCA model allows for a direct connection between components within the same process to maintain performance on inter-component calls. It is neutral with respect to parallelism, allowing components to use whatever means they desire to communicate within their parallel "cohort." We will discuss in detail the importance of performance in the design of the CCA and will analyze the performance costs associated with features of the CCA.

  10. Micropower impulse radar technology and applications

    Energy Technology Data Exchange (ETDEWEB)

    Mast, J., LLNL

    1998-04-15

    The LLNL-developed Micropower Impulse Radar (MIR) technology has quickly gone from laboratory concept to embedded circuitry in numerous government and commercial systems in the last few years [1]. The main ideas behind MIR, invented by T. McEwan in the Laser Program, are the generation and detection systems for extremely low-power ultra-wideband pulses in the gigahertz regime using low-cost components. These ideas, coupled with new antenna systems, timing and radio-frequency (RF) circuitry, computer interfaces, and signal processing, have provided the catalyst for a new generation of compact radar systems. Over the past several years we have concentrated on a number of MIR applications that address remote-sensing needs relevant to emerging programs in defense, transportation, medical, and environmental research. Some of the past commercial successes have been widely publicized [2] and are only now starting to become available on the market. Over 30 patents have been filed and over 15 licenses have been signed on various aspects of the MIR technology. In addition, higher performance systems are under development for specific laboratory programs and government reimbursables. The MIR is an ultra-wideband, range-gated radar system that provides the enabling hardware technology used in the research areas mentioned above. It has numerous performance parameters that can be selected by careful design to fit the requirements. We have improved the baseline, short-range MIR system to demonstrate its effectiveness. The radar operates over the band from approximately 1 to 4 GHz with pulse repetition frequencies up to 10 MHz. It provides a potential range resolution of 1 cm at ranges of greater than 20 m. We have developed a suite of algorithms for using MIR for image formation. These algorithms currently support synthetic aperture and multistatic array geometries. This baseline MIR radar imaging system has been used for several programmatic applications.

  11. Design and performance of an ultra-wideband stepped-frequency radar with precise frequency control for landmine and IED detection

    Science.gov (United States)

    Phelan, Brian R.; Sherbondy, Kelly D.; Ranney, Kenneth I.; Narayanan, Ram M.

    2014-05-01

    The Army Research Laboratory (ARL) has developed an impulse-based, vehicle-mounted, forward-looking ultra-wideband (UWB) radar for imaging buried landmines and improvised explosive devices (IEDs). However, there is no control of the radiated spectrum in this system. As part of ARL's Partnerships in Research Transition (PIRT) program, this deficiency is addressed by the design of a Stepped-Frequency Radar (SFR) which allows precise control over the radiated spectrum while still maintaining an effective ultra-wide bandwidth. The SFR utilizes a frequency synthesizer which can be configured to excise prohibited and interfering frequency bands and also implement frequency-hopping capabilities. The SFR is designed as a forward-looking ground-penetrating radar (FLGPR) utilizing a uniform linear array of sixteen (16) Vivaldi notch receive antennas and two (2) quad-ridge horn transmit antennas. While a preliminary SFR consisting of four (4) receive channels has been designed, this paper describes major improvements to the system and an analysis of expected system performance. The 4-channel system will be used to validate the SFR design, which will eventually be augmented into the full 16-channel system. The SFR has an operating frequency band which ranges from 300 to 2000 MHz and a minimum frequency step-size of 1 MHz. The radar system is capable of illuminating range swaths that have maximum extents of 30 to 150 meters (programmable). The transmitter is able to produce approximately -2 dBm/MHz average power over the entire operating frequency range. The SFR will be used to determine the practicality of detecting and classifying buried and concealed landmines and IEDs from safe stand-off distances.
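
    Two standard stepped-frequency relations connect the waveform parameters quoted above to range coverage: the unambiguous range (maximum swath extent) is c/(2Δf) for a frequency step Δf, and the range resolution is c/(2B) for a synthesized bandwidth B. The sketch below is an illustrative calculation of those quantities, not part of the ARL system; the step sizes used are example values.

```python
C = 299_792_458.0  # speed of light, m/s

def unambiguous_range(step_hz):
    """Maximum unambiguous range (swath extent) of a stepped-frequency waveform."""
    return C / (2.0 * step_hz)

def range_resolution(bandwidth_hz):
    """Range resolution obtained from the total synthesized bandwidth."""
    return C / (2.0 * bandwidth_hz)

if __name__ == "__main__":
    f_lo, f_hi = 300e6, 2000e6                                        # operating band, Hz
    print(f"Swath at 1 MHz step : {unambiguous_range(1e6):.1f} m")    # ~150 m
    print(f"Swath at 5 MHz step : {unambiguous_range(5e6):.1f} m")    # ~30 m
    print(f"Resolution (1.7 GHz): {range_resolution(f_hi - f_lo) * 100:.1f} cm")
```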

  12. Performance Evaluation of Communication Software Systems for Distributed Computing

    Science.gov (United States)

    Fatoohi, Rod

    1996-01-01

    In recent years there has been an increasing interest in object-oriented distributed computing since it is better equipped to deal with complex systems while providing extensibility, maintainability, and reusability. At the same time, several new high-speed network technologies have emerged for local and wide area networks. However, the performance of networking software is not improving as fast as the networking hardware and the workstation microprocessors. This paper gives an overview and evaluates the performance of the Common Object Request Broker Architecture (CORBA) standard in a distributed computing environment at NASA Ames Research Center. The environment consists of two testbeds of SGI workstations connected by four networks: Ethernet, FDDI, HiPPI, and ATM. The performance results for three communication software systems are presented, analyzed and compared. These systems are: BSD socket programming interface, IONA's Orbix, an implementation of the CORBA specification, and the PVM message passing library. The results show that high-level communication interfaces, such as CORBA and PVM, can achieve reasonable performance under certain conditions.

  13. THE FAILURE OF TCP IN HIGH-PERFORMANCE COMPUTATIONAL GRIDS

    Energy Technology Data Exchange (ETDEWEB)

    W. FENG; ET AL

    2000-08-01

    Distributed computational grids depend on TCP to ensure reliable end-to-end communication between nodes across the wide-area network (WAN). Unfortunately, TCP performance can be abysmal even when buffers on the end hosts are manually optimized. Recent studies blame the self-similar nature of aggregate network traffic for TCP's poor performance because such traffic is not readily amenable to statistical multiplexing in the Internet, and hence computational grids. In this paper we identify a source of self-similarity previously ignored, a source that is readily controllable--TCP. Via an experimental study, we examine the effects of the TCP stack on network traffic using different implementations of TCP. We show that even when aggregate application traffic ought to smooth out as traffic from more applications is multiplexed, TCP induces burstiness into the aggregate traffic load, thus adversely impacting network performance. Furthermore, our results indicate that TCP performance will worsen as WAN speeds continue to increase.

  14. Compressive CFAR Radar Processing

    NARCIS (Netherlands)

    Anitori, L.; Rossum, W.L. van; Otten, M.P.G.; Maleki, A.; Baraniuk, R.

    2013-01-01

    In this paper we investigate the performance of a combined Compressive Sensing (CS) Constant False Alarm Rate (CFAR) radar processor under different interference scenarios using both the Cell Averaging (CA) and Order Statistic (OS) CFAR detectors. Using the properties of the Complex Approximate Mess
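
    For readers unfamiliar with the CFAR processors compared here, the sketch below shows a conventional cell-averaging (CA) CFAR detector applied to a one-dimensional power profile. It is a generic textbook illustration, not the compressive-sensing CFAR processor studied in the paper; the window sizes and false-alarm probability are arbitrary illustrative choices.

```python
import numpy as np

def ca_cfar(power, num_train=16, num_guard=2, pfa=1e-4):
    """Cell-averaging CFAR on a 1-D power profile.

    For each cell under test (CUT), the noise level is estimated by averaging
    `num_train` training cells on each side (excluding `num_guard` guard cells),
    and the threshold is alpha * noise_estimate, with alpha set from the desired
    probability of false alarm for exponentially distributed noise.
    Returns a boolean detection mask.
    """
    n = len(power)
    n_train_total = 2 * num_train
    alpha = n_train_total * (pfa ** (-1.0 / n_train_total) - 1.0)
    detections = np.zeros(n, dtype=bool)
    for cut in range(num_train + num_guard, n - num_train - num_guard):
        lead = power[cut - num_guard - num_train : cut - num_guard]
        lag = power[cut + num_guard + 1 : cut + num_guard + 1 + num_train]
        noise = (lead.sum() + lag.sum()) / n_train_total
        detections[cut] = power[cut] > alpha * noise
    return detections

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    profile = rng.exponential(scale=1.0, size=512)   # noise-only power samples
    profile[200] += 30.0                             # injected target
    print("Detections at range cells:", np.flatnonzero(ca_cfar(profile)))
```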

  15. High performance computing and communications: FY 1995 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-04-01

    The High Performance Computing and Communications (HPCC) Program was formally established following passage of the High Performance Computing Act of 1991 signed on December 9, 1991. Ten federal agencies in collaboration with scientists and managers from US industry, universities, and laboratories have developed the HPCC Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1994 and FY 1995. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency. Although the Department of Education is an official HPCC agency, its current funding and reporting of crosscut activities goes through the Committee on Education and Health Resources, not the HPCC Program. For this reason the Implementation Plan covers nine HPCC agencies.

  16. Bistatic radar

    CERN Document Server

    Willis, Nick

    2004-01-01

    This book is a major extension of a chapter on bistatic radar written by the author for the Radar Handbook, 2nd edition, edited by Merrill Skolnik. It provides a history of bistatic systems that points out to potential designers the applications that have worked and the dead-ends not worth pursuing. The text reviews the basic concepts and definitions, and explains the mathematical development of relationships, such as geometry, Ovals of Cassini, dynamic range, isorange and isodoppler contours, target doppler, and clutter doppler spread. Key Features * All development and analysis are
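
    The Ovals of Cassini mentioned above arise because the bistatic radar equation makes the received SNR proportional to 1/(R_T^2 R_R^2), so constant-SNR contours are curves on which the product of the transmitter-to-target and receiver-to-target ranges is constant. The sketch below simply evaluates that range product over a grid; the baseline length and grid extent are arbitrary illustrative values, and the code is not drawn from the book.

```python
import numpy as np

def bistatic_range_product(x, y, baseline=50e3):
    """Product R_T * R_R for a transmitter at (-L/2, 0) and a receiver at (+L/2, 0).

    Constant-SNR contours of the bistatic radar equation, SNR ~ 1 / (R_T^2 * R_R^2),
    are the Ovals of Cassini: loci on which R_T * R_R is constant.
    """
    half = baseline / 2.0
    r_t = np.hypot(x + half, y)   # transmitter-to-target range
    r_r = np.hypot(x - half, y)   # receiver-to-target range
    return r_t * r_r

if __name__ == "__main__":
    x, y = np.meshgrid(np.linspace(-100e3, 100e3, 5), np.linspace(-100e3, 100e3, 5))
    prod = bistatic_range_product(x, y)
    # Cells with the same value of `prod` lie on the same Oval of Cassini,
    # i.e. they would be seen with the same SNR by this bistatic pair.
    print(np.round(prod / 1e6, 1))   # range product in km^2
```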

  17. High-performance computing and networking as tools for accurate emission computed tomography reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Passeri, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Universita` di Firenze (Italy); Formiconi, A.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Universita` di Firenze (Italy); De Cristofaro, M.T.E.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Universita` di Firenze (Italy); Pupi, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Universita` di Firenze (Italy); Meldolesi, U. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Universita` di Firenze (Italy)

    1997-04-01

    It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms which are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported to the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64 x 64) slices could be reconstructed from a set of 90 (64 x 64) projections with ten iterations of the conjugate gradients algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols without particular problems for routine use. The achievable speed-up factors allow the assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden for the compensation of effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods. (orig.). With 4 figs., 1 tab.

  18. Netted LPI RADARs

    Science.gov (United States)

    2011-09-01

    easier and, since they cover most of the space around the antenna, can expose it easily at various bearings). Typical sidelobe levels for conventional...modern radar systems results in an electromagnetic environment where the receiver should expect very few pulses. Staggered PRF and frequency agility...detector, a logarithmic amplitude compressor, and a signal encoder. All subunits are digitally controlled by computer as to frequency, sweep rate, and

  19. Computational Fluid Dynamics and Building Energy Performance Simulation

    DEFF Research Database (Denmark)

    Nielsen, Peter V.; Tryggvason, Tryggvi

    An interconnection between a building energy performance simulation program and a Computational Fluid Dynamics program (CFD) for room air distribution will be introduced for improvement of the predictions of both the energy consumption and the indoor environment. The building energy performance...... simulation program requires a detailed description of the energy flow in the air movement which can be obtained by a CFD program. The paper describes an energy consumption calculation in a large building, where the building energy simulation program is modified by CFD predictions of the flow between three...... program and a building energy performance simulation program will improve both the energy consumption data and the prediction of thermal comfort and air quality in a selected area of the building....

  20. Application Specific Performance Technology for Productive Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    Malony, Allen D. [Univ. of Oregon, Eugene, OR (United States); Shende, Sameer [Univ. of Oregon, Eugene, OR (United States)

    2008-09-30

    Our accomplishments over the last three years of the DOE project Application-Specific Performance Technology for Productive Parallel Computing (DOE Agreement: DE-FG02-05ER25680) are described below. The project will have met all of its objectives by the time of its completion at the end of September 2008. Two extensive yearly progress reports were produced in March 2006 and March 2007 and were previously submitted to the DOE Office of Advanced Scientific Computing Research (OASCR). Following an overview of the objectives of the project, we summarize for each of the project areas the achievements in the first two years, and then describe in more detail the project's accomplishments this past year. At the end, we discuss the relationship of the proposed renewal application to the work done on the current project.

  1. High Performance Computing - Power Application Programming Interface Specification.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  2. Polarization changing technique in macrocosm and it's application to radar

    Institute of Scientific and Technical Information of China (English)

    潘健; 毛二可

    2004-01-01

    A new model of air-surveillance radar (termed the polarization changing in macrocosm radar, or PCM radar), which makes use of the polarization changing technique in macrocosm, is presented in this paper. On the basis of a careful selection of 98 representative polarization states in macrocosm, the PCM radar can not only perform transmit and receive polarization matching for various targets, thereby making full use of the radar's transmitted and received signals, but also improve its capability against active interference and jamming. Experimental tests in an air-defense early-warning radar system demonstrate that the technique can effectively enhance radar performance.

  3. Performance Evaluation of an Air-Coupled Phased-Array Radar for Near-Field Detection of Steel

    Science.gov (United States)

    2014-05-01

    reinforcing material. The degradation of concrete can vary depending on the environment. Concrete as it cures normally shrinks over time and its...Antenna Setup (Figure 12) and (Figure 13) show the setup of the phased-array radar system that emits microwaves into a concrete slab. Two A.H...its design life. The health and state of the concrete roadways and bridge decks that commuters rely on daily can be efficiently examined and

  4. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    Science.gov (United States)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly increasing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered as a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors, as well as massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  5. Memory Benchmarks for SMP-Based High Performance Parallel Computers

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, A B; de Supinski, B; Mueller, F; Mckee, S A

    2001-11-20

    As the speed gap between CPU and main memory continues to grow, memory accesses increasingly dominate the performance of many applications. The problem is particularly acute for symmetric multiprocessor (SMP) systems, where the shared memory may be accessed concurrently by a group of threads running on separate CPUs. Unfortunately, several key issues governing memory system performance in current systems are not well understood. Complex interactions between the levels of the memory hierarchy, buses or switches, DRAM back-ends, system software, and application access patterns can make it difficult to pinpoint bottlenecks and determine appropriate optimizations, and the situation is even more complex for SMP systems. To partially address this problem, we formulated a set of multi-threaded microbenchmarks for characterizing and measuring the performance of the underlying memory system in SMP-based high-performance computers. We report our use of these microbenchmarks on two important SMP-based machines. This paper has four primary contributions. First, we introduce a microbenchmark suite to systematically assess and compare the performance of different levels in SMP memory hierarchies. Second, we present a new tool based on hardware performance monitors to determine a wide array of memory system characteristics, such as cache sizes, quickly and easily; by using this tool, memory performance studies can be targeted to the full spectrum of performance regimes with many fewer data points than is otherwise required. Third, we present experimental results indicating that the performance of applications with large memory footprints remains largely constrained by memory. Fourth, we demonstrate that thread-level parallelism further degrades memory performance, even for the latest SMPs with hardware prefetching and switch-based memory interconnects.

  6. High Performance Input/Output Systems for High Performance Computing and Four-Dimensional Data Assimilation

    Science.gov (United States)

    Fox, Geoffrey C.; Ou, Chao-Wei

    1997-01-01

    The approach of this task was to apply leading parallel computing research to a number of existing techniques for assimilation, and extract parameters indicating where and how input/output limits computational performance. Detailed knowledge of the application problems was developed by: 1. developing a parallel input/output system specifically for this application; 2. extracting the important input/output characteristics of data assimilation problems; and 3. building these characteristics as parameters into our runtime library (Fortran D/High Performance Fortran) for parallel input/output support.

  7. High Performance Computing for probabilistic distributed slope stability analysis, an early example

    Science.gov (United States)

    Rossi, Guglielmo; Catani, Filippo

    2010-05-01

    The term shallow landslide is widely used in the literature to describe a slope movement of limited size that mainly develops in soils up to a few meters thick. Shallow landslides are usually triggered by heavy rainfall because, as the water starts to infiltrate into the soil, the pore-water pressure increases so that the shear strength of the soil is reduced, leading to slope failure. We have developed a distributed hydrological-geotechnical model for forecasting the temporal and spatial distribution of shallow landslides to be used as a real-time warning system for civil protection purposes. The stability simulator is developed to use High Performance Computing (HPC) resources and in this way can manage large areas, with high spatial and temporal resolution, at computational times useful for a warning system. The output of the model is a probabilistic value of slope instability. In its current stage the model applied for predicting the expected location of shallow landslides involves several stand-alone components. The base solution suggested by Iverson for the Richards equation is adapted for use in a real-time simulator to estimate the probabilistic distribution of the transient groundwater pressure head according to radar-detected rainfall intensity. The use of radar-detected rainfall intensity as the input for the hydrological simulation of infiltration allows a more accurate computation of the redistribution of the groundwater pressure associated with transient infiltration of rain. A soil depth prediction scheme and a limit-equilibrium infinite slope stability algorithm are used to calculate the distributed factor of safety (FS) at different depths and to record the probability distribution of slope instability in the final output file. The additional ancillary data required have been collected during fieldwork and with standard laboratory tests. The model deals with both saturated and unsaturated conditions taking into account the effect of
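
    The limit-equilibrium infinite-slope model referred to above computes, at each depth, a factor of safety of the standard form FS = [c' + (γ z cos²β - u) tanφ'] / (γ z sinβ cosβ), with the pore pressure u supplied by the hydrological component. The sketch below is a generic implementation of that textbook formula, not the authors' HPC code; all parameter values are illustrative.

```python
import math

def infinite_slope_fs(slope_deg, depth_m, cohesion_pa, friction_deg,
                      unit_weight=18000.0, pore_pressure_pa=0.0):
    """Factor of safety of an infinite slope at a given depth.

    FS = [c' + (gamma*z*cos^2(beta) - u) * tan(phi')] / [gamma*z*sin(beta)*cos(beta)]
    where beta is the slope angle, z the vertical soil depth, gamma the soil unit
    weight (N/m^3), c' the effective cohesion and u the pore-water pressure.
    FS < 1 indicates predicted instability.
    """
    beta = math.radians(slope_deg)
    phi = math.radians(friction_deg)
    normal = unit_weight * depth_m * math.cos(beta) ** 2 - pore_pressure_pa
    driving = unit_weight * depth_m * math.sin(beta) * math.cos(beta)
    return (cohesion_pa + normal * math.tan(phi)) / driving

if __name__ == "__main__":
    # Same slope, dry vs. after a transient pore-pressure pulse from infiltration.
    print(round(infinite_slope_fs(35, 1.5, 2000, 32), 2))                         # dry
    print(round(infinite_slope_fs(35, 1.5, 2000, 32, pore_pressure_pa=8000), 2))  # wet
```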

  8. Energy Proportionality and Performance in Data Parallel Computing Clusters

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Jinoh; Chou, Jerry; Rotem, Doron

    2011-02-14

    Energy consumption in datacenters has recently become a major concern due to rising operational costs and scalability issues. Recent solutions to this problem propose the principle of energy proportionality, i.e., the amount of energy consumed by the server nodes must be proportional to the amount of work performed. For data parallelism and fault tolerance purposes, most common file systems used in MapReduce-type clusters maintain a set of replicas for each data block. A covering set is a group of nodes that together contain at least one replica of the data blocks needed for performing computing tasks. In this work, we develop and analyze algorithms to maintain energy proportionality by discovering a covering set that minimizes energy consumption while placing the remaining nodes in low-power standby mode. Our algorithms can also discover covering sets in heterogeneous computing environments. In order to allow more data parallelism, we generalize our algorithms so that they can discover k-covering sets, i.e., sets of nodes that contain at least k replicas of the data blocks. Our experimental results show that we can achieve substantial energy savings without significant performance loss in diverse cluster configurations and working environments.
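
    The covering-set idea, a group of nodes that together hold at least one replica of every needed block so that the remaining nodes can be placed in low-power standby, is an instance of set cover. The sketch below is a simple greedy approximation offered only as an illustration of the concept; it is not the algorithm developed in the paper and ignores heterogeneity and the k-covering generalization.

```python
def greedy_covering_set(block_replicas):
    """Greedy approximation of a covering set.

    `block_replicas` maps each data block to the set of nodes holding a replica.
    Returns a set of nodes that together contain at least one replica of every
    block; nodes outside this set could be placed in low-power standby.
    """
    uncovered = set(block_replicas)          # blocks not yet covered
    node_blocks = {}                         # node -> blocks it holds
    for block, nodes in block_replicas.items():
        for node in nodes:
            node_blocks.setdefault(node, set()).add(block)

    cover = set()
    while uncovered:
        # Pick the node covering the most still-uncovered blocks.
        best = max(node_blocks, key=lambda n: len(node_blocks[n] & uncovered))
        cover.add(best)
        uncovered -= node_blocks[best]
    return cover

if __name__ == "__main__":
    replicas = {
        "b1": {"n1", "n2", "n3"},
        "b2": {"n2", "n4"},
        "b3": {"n3", "n4"},
        "b4": {"n1", "n4"},
    }
    print(greedy_covering_set(replicas))   # e.g. {'n4', 'n2'} or {'n4', 'n1'}
```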

  9. Soil properties and performance of landmine detection by metal detector and ground-penetrating radar — Soil characterisation and its verification by a field test

    Science.gov (United States)

    Takahashi, Kazunori; Preetz, Holger; Igel, Jan

    2011-04-01

    Metal detectors have commonly been used for landmine detection, and ground-penetrating radar (GPR) is about to be deployed for this purpose. These devices are influenced by the magnetic and electric properties of soil, since both employ electromagnetic techniques. Various soil properties and their spatial distributions were measured and determined with geophysical methods in four soil types where a test of metal detectors and GPR systems took place. By analysing the soil properties, these four soils were classified based on the expected influence of each detection technique and predicted soil difficulty. This classification was compared to the detection performance of the detectors and a clear correlation between the predicted soil difficulty and performance was observed. The detection performance of the metal detector and target identification performance of the GPR systems degraded in soils that were expected to be problematic. Therefore, this study demonstrated that the metal detector and GPR performance for landmine detection can be assessed qualitatively by geophysical analyses.

  10. Performance comparison of hierarchical checkpoint protocols grid computing

    Directory of Open Access Journals (Sweden)

    Ndeye Massata NDIAYE

    2012-06-01

    Full Text Available Grid infrastructure is a large set of nodes geographically distributed and connected by a communication network. In this context, fault tolerance is a necessity imposed by the distribution, which poses a number of problems related to the heterogeneity of hardware, operating systems, networks, middleware and applications, dynamic resources, scalability, the lack of common memory, the lack of a common clock, and asynchronous communication between processes. To improve the robustness of supercomputing applications in the presence of failures, many techniques have been developed to provide resistance to these system faults. Fault tolerance is intended to allow the system to provide service as specified in spite of occurrences of faults, and it appears as an indispensable element in distributed systems. To meet this need, several techniques have been proposed in the literature. We study the protocols based on rollback recovery, which are classified into two categories: coordinated checkpointing and rollback protocols, and log-based independent checkpointing protocols (message logging protocols). However, the performance of a protocol depends on the characteristics of the system, the network and the applications running. Faced with the constraints of large-scale environments, many algorithms from the literature have proved inadequate. Given an application environment and a system, it is not easy to identify the recovery protocol that is most appropriate for a cluster or hierarchical environment such as grid computing. While some protocols have been used successfully at small scale, they are not suitable for use at large scale. Hence there is a need to implement these protocols in a hierarchical fashion and to compare their performance in grid computing. In this paper, we propose hierarchical versions of four well-known protocols. We have implemented and compared the performance of these protocols in clusters and grid computing using the OMNeT++ simulator

  11. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both the private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project address these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also performed testing of HA2lloc in a simulation environment and found that the approach is capable of preventing common memory vulnerabilities.

  12. Triangulation using synthetic aperture radar images

    Science.gov (United States)

    Wu, Sherman S. C.; Howington-Kraus, Annie E.

    1991-01-01

    For the extraction of topographic information about Venus from stereoradar images obtained from the Magellan Mission, a Synthetic Aperture Radar (SAR) compilation system was developed on analytical stereoplotters. The system software was extensively tested by using stereoradar images from various spacecraft and airborne radar systems, including Seasat, SIR-B, ERIM XCL, and STAR-1. Stereomodeling from radar images was proven feasible, and development is proceeding on the correct approach. During testing, the software was enhanced and modified to obtain more flexibility and better precision. Triangulation software for establishing control points by using SAR images was also developed through a joint effort with the Defense Mapping Agency. The SAR triangulation system comprises four main programs: TRIDATA, MODDATA, TRISAR, and SHEAR. The first two programs are used to sort and update the data; the third program, the main one, performs iterative statistical adjustment; and the fourth program analyzes the results. Additional inputs are flight data and navigation data from the Global Positioning System and inertial system. The SAR triangulation system was tested with six strips of STAR-1 radar images on a VAX-750 computer. Each strip contains images of 10 minutes flight time (equivalent to a ground distance of 73.5 km); the images cover a ground width of 22.5 km. All images were collected from the same side. With an input of 44 primary control points, 441 ground control points were produced. The adjustment process converged after eight iterations. With a 6-m/pixel resolution of the radar images, the triangulation adjustment has an average standard elevation error of 81 m. Development of Magellan radargrammetry will be continued to convert both the SAR compilation and triangulation systems into digital form.

  13. Analysis of Network Performance for Computer Communication Systems with Benchmark

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    This paper introduced a performance evaluation approach for computer communication systems based on simulation and measurement technology, and discussed its evaluation models. The results of our experiment showed that the outcome of practical measurements on an Ethernet LAN agreed well with the theoretical analysis. The approach we present can be used to define various kinds of artificially simulated load models conveniently, to build all kinds of network application environments in a flexible way, and to exploit fully both the widely used, high-precision features of traditional simulation technology and the realism, reliability and adaptability of measurement technology.

  14. Computational studies on small wind turbine performance characteristics

    Science.gov (United States)

    Karthikeyan, N.; Suthakar, T.

    2016-10-01

    To optimize the selection of suitable airfoils for small wind turbine applications, computational investigations of the aerodynamic characteristics of the low-Re airfoils MID321a, MID321d, SG6040, SG6041, SG6042 and SG6043 are carried out for the Reynolds number range of (0.5-2)×10^5. The BEM method is used to determine the power coefficient of the rotor from the airfoil characteristics; in addition, the blade parameters such as chord and twist are also determined. The newly designed MID321a airfoil shows better aerodynamic performance and a higher maximum power coefficient than the other investigated airfoils over wider operating ranges.
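
    For context, the rotor power coefficient determined by the BEM analysis is defined as Cp = P / (0.5 ρ A V^3), the ratio of extracted power to the kinetic power of the wind passing through the rotor disc, bounded above by the Betz limit of 16/27 ≈ 0.593. The sketch below only evaluates this definition for hypothetical numbers; it is not the authors' BEM code.

```python
import math

RHO_AIR = 1.225   # air density at sea level, kg/m^3

def power_coefficient(power_w, wind_speed, rotor_diameter, rho=RHO_AIR):
    """Rotor power coefficient Cp = P / (0.5 * rho * A * V^3)."""
    area = math.pi * (rotor_diameter / 2.0) ** 2   # swept rotor area, m^2
    return power_w / (0.5 * rho * area * wind_speed ** 3)

if __name__ == "__main__":
    # Hypothetical small turbine: 1.0 kW output, 3 m rotor diameter, 8 m/s wind.
    cp = power_coefficient(1000.0, 8.0, 3.0)
    print(f"Cp = {cp:.2f} (Betz limit = {16 / 27:.3f})")
```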

  15. Radar detection

    CERN Document Server

    DiFranco, Julius

    2004-01-01

    This book presents a comprehensive tutorial exposition of radar detection using the methods and techniques of mathematical statistics. The material presented is as current and useful to today's engineers as when the book was first published by Prentice-Hall in 1968 and then republished by Artech House in 1980. The book is divided into six parts.

  16. Skin artifact removal technique for breast cancer radar detection

    Science.gov (United States)

    Caorsi, S.; Lenzi, C.

    2016-06-01

    In this paper we propose a new model-based skin artifact cleaning technique with the aim of removing skin reflections effectively, without introducing significant signal distortions and without assuming a priori information on the real structure of the breast. The reference cleaning model, consisting of a two-layer skin-adipose tissue geometry, is applicable to all ultrawideband radar methods able to detect the tumor starting from the knowledge of each trace recorded around the breast. All the radar signal measurements were simulated by using realistic breast models derived from the University of Wisconsin computational electromagnetics laboratory database and the finite difference time domain (FDTD)-based open source software GprMax. First, we searched for the best configuration of the reference cleaning model with the aim of minimizing the distortions introduced into the radar signal. Second, the performance of the proposed cleaning technique was assessed by using a breast cancer radar detection technique based on an artificial neural network (ANN). In order to minimize the signal distortions, we found that it was necessary to use the real skin thickness and the static Debye parameters of both skin and adipose tissue. In that case the ANN-based radar approach was able to detect the tumor with an accuracy of 87%. When the performance assessment was extended to the case where only average standard values are used to characterize the reference cleaning model, the detection accuracy was 84%.
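
    The "static Debye parameters" mentioned above refer to the single-pole Debye dispersion model commonly used in FDTD breast phantoms (including GprMax material definitions), ε(ω) = ε_∞ + Δε/(1 + jωτ) + σ_s/(jωε_0). The sketch below evaluates that model; the tissue parameter values shown are rough illustrative placeholders, not the values used in the paper.

```python
import math

EPS0 = 8.8541878128e-12   # vacuum permittivity, F/m

def debye_permittivity(freq_hz, eps_inf, delta_eps, tau_s, sigma_s):
    """Complex relative permittivity of a single-pole Debye medium:
        eps(w) = eps_inf + delta_eps / (1 + j*w*tau) + sigma_s / (j*w*eps0)
    """
    w = 2.0 * math.pi * freq_hz
    return eps_inf + delta_eps / (1.0 + 1j * w * tau_s) + sigma_s / (1j * w * EPS0)

if __name__ == "__main__":
    # Illustrative (hypothetical) parameter sets for skin-like and fat-like tissue.
    tissues = {
        "skin-like": dict(eps_inf=15.9, delta_eps=23.8, tau_s=13.0e-12, sigma_s=0.83),
        "fat-like":  dict(eps_inf=3.1,  delta_eps=1.6,  tau_s=13.0e-12, sigma_s=0.05),
    }
    for name, params in tissues.items():
        eps = debye_permittivity(3e9, **params)   # evaluate at 3 GHz
        print(f"{name}: eps' = {eps.real:6.2f}, eps'' = {-eps.imag:6.2f}")
```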

  17. Chip-to-board interconnects for high-performance computing

    Science.gov (United States)

    Riester, Markus B. K.; Houbertz-Krauss, Ruth; Steenhusen, Sönke

    2013-02-01

    Supercomputing is reaching toward ExaFLOP processing speeds, creating fundamental challenges for the way that computing systems are designed and built. One governing topic is the reduction of the power used to operate the system and the elimination of the excess heat generated by the system. Current thinking sees optical interconnects at most interconnect levels as a feasible solution to many of the challenges, although there are still limitations to the technical solutions, in particular with regard to manufacturability. This paper explores drivers for enabling optical interconnect technologies to advance into the module and chip level. The introduction of optical links into High Performance Computing (HPC) could be an option that allows scaling of the manufacturing technology to large-volume manufacturing. This will drive the need for manufacturability of optical interconnects, giving rise to other challenges that add to the realization of this type of interconnection. This paper describes a solution that allows the creation of optical components at the module level, integrating optical chips, laser diodes or PIN diodes as components, much like the well-known SMD components used for electrical parts. The paper shows the main challenges and potential solutions and proposes a fundamental paradigm shift in the manufacturing of 3-dimensional optical links for the level 1 interconnect (chip package).

  18. SCEC Earthquake System Science Using High Performance Computing

    Science.gov (United States)

    Maechling, P. J.; Jordan, T. H.; Archuleta, R.; Beroza, G.; Bielak, J.; Chen, P.; Cui, Y.; Day, S.; Deelman, E.; Graves, R. W.; Minster, J. B.; Olsen, K. B.

    2008-12-01

    The SCEC Community Modeling Environment (SCEC/CME) collaboration performs basic scientific research using high performance computing with the goal of developing a predictive understanding of earthquake processes and seismic hazards in California. SCEC/CME research areas include dynamic rupture modeling, wave propagation modeling, probabilistic seismic hazard analysis (PSHA), and full 3D tomography. SCEC/CME computational capabilities are organized around the development and application of robust, reusable, well-validated simulation systems we call computational platforms. The SCEC earthquake system science research program includes a wide range of numerical modeling efforts and we continue to extend our numerical modeling codes to include more realistic physics and to run at higher and higher resolution. During this year, the SCEC/USGS OpenSHA PSHA computational platform was used to calculate PSHA hazard curves and hazard maps using the new UCERF2.0 ERF and new 2008 attenuation relationships. Three SCEC/CME modeling groups ran 1Hz ShakeOut simulations using different codes and computer systems and carefully compared the results. The DynaShake Platform was used to calculate several dynamic rupture-based source descriptions equivalent in magnitude and final surface slip to the ShakeOut 1.2 kinematic source description. A SCEC/CME modeler produced 10Hz synthetic seismograms for the ShakeOut 1.2 scenario rupture by combining 1Hz deterministic simulation results with 10Hz stochastic seismograms. SCEC/CME modelers ran an ensemble of seven ShakeOut-D simulations to investigate the variability of ground motions produced by dynamic rupture-based source descriptions. The CyberShake Platform was used to calculate more than 15 new probabilistic seismic hazard analysis (PSHA) hazard curves using full 3D waveform modeling and the new UCERF2.0 ERF. The SCEC/CME group has also produced significant computer science results this year. Large-scale SCEC/CME high performance codes

  19. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    Science.gov (United States)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enables accurate climate projections. NCI makes available 10+ PB major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), a HPC class 3000 core OpenStack cloud system and several highly connected large scale and high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  20. Accurate Characterization of Winter Precipitation Using Multi-Angle Snowflake Camera, Visual Hull, Advanced Scattering Methods and Polarimetric Radar

    Directory of Open Access Journals (Sweden)

    Branislav M. Notaroš

    2016-06-01

    Full Text Available This article proposes and presents a novel approach to the characterization of winter precipitation and modeling of radar observables through a synergistic use of advanced optical disdrometers for microphysical and geometrical measurements of ice and snow particles (in particular, a multi-angle snowflake camera—MASC), image processing methodology, advanced method-of-moments scattering computations, and state-of-the-art polarimetric radars. The article also describes the newly built and established MASCRAD (MASC + Radar) in-situ measurement site, under the umbrella of the CSU-CHILL radar, as well as the MASCRAD project and the 2014/2015 winter campaign. We apply a visual hull method to reconstruct 3D shapes of ice particles based on high-resolution MASC images, and perform “particle-by-particle” scattering computations to obtain polarimetric radar observables. The article also presents and discusses selected illustrative observation data, results, and analyses for three cases with widely-differing meteorological settings that involve contrasting hydrometeor forms. Illustrative results of scattering calculations based on MASC images captured during these events, in comparison with radar data, as well as selected comparative studies of snow habits from MASC, 2D video-disdrometer, and CHILL radar data, are presented, along with the analysis of microphysical characteristics of particles. In the longer term, this work has potential to significantly improve the radar-based quantitative winter-precipitation estimation.
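
    The visual hull reconstruction mentioned above can be thought of as voxel carving: a voxel is kept only if its projection falls inside the particle silhouette in every camera view. The sketch below is a heavily simplified, orthographic-projection illustration of that idea with synthetic silhouettes; it is not the MASCRAD image-processing pipeline, and the grid size and projections are arbitrary.

```python
import numpy as np

def visual_hull(silhouettes, projectors, grid):
    """Carve a voxel grid using binary silhouettes from several views.

    `silhouettes` is a list of 2-D boolean masks, `projectors` a list of functions
    mapping (N, 3) voxel centers to integer (row, col) pixel indices for the
    corresponding view, and `grid` an (N, 3) array of voxel centers. A voxel
    survives only if it projects inside the silhouette in every view.
    """
    keep = np.ones(len(grid), dtype=bool)
    for sil, project in zip(silhouettes, projectors):
        rows, cols = project(grid)
        inside = (rows >= 0) & (rows < sil.shape[0]) & (cols >= 0) & (cols < sil.shape[1])
        ok = np.zeros(len(grid), dtype=bool)
        ok[inside] = sil[rows[inside], cols[inside]]
        keep &= ok
    return keep

if __name__ == "__main__":
    # Toy example: a spherical particle viewed from +z and +x (orthographic views).
    ax = np.linspace(-1, 1, 32)
    grid = np.stack(np.meshgrid(ax, ax, ax, indexing="ij"), axis=-1).reshape(-1, 3)
    disc = (ax[:, None] ** 2 + ax[None, :] ** 2) <= 0.5 ** 2   # circular silhouette
    to_pix = lambda v: np.clip(((v + 1) / 2 * 31).round().astype(int), 0, 31)
    proj_z = lambda g: (to_pix(g[:, 0]), to_pix(g[:, 1]))      # drop z
    proj_x = lambda g: (to_pix(g[:, 1]), to_pix(g[:, 2]))      # drop x
    kept = visual_hull([disc, disc], [proj_z, proj_x], grid)
    print("voxels in hull:", int(kept.sum()))
```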

  1. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    Science.gov (United States)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and Data-intensive Science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress so far to harmonise the underlying data collections for future interdisciplinary research across these large volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), a HPC class 3000 core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  2. Play for Performance: Using Computer Games to Improve Motivation and Test-Taking Performance

    Science.gov (United States)

    Dennis, Alan R.; Bhagwatwar, Akshay; Minas, Randall K.

    2013-01-01

    The importance of testing, especially certification and high-stakes testing, has increased substantially over the past decade. Building on the "serious gaming" literature and the psychology "priming" literature, we developed a computer game designed to improve test-taking performance using psychological priming. The game primed…

  3. Performance Evaluation of Resource Management in Cloud Computing Environments.

    Science.gov (United States)

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.

  4. Molecular Dynamics Simulations on High-Performance Reconfigurable Computing Systems.

    Science.gov (United States)

    Chiu, Matt; Herbordt, Martin C

    2010-11-01

    The acceleration of molecular dynamics (MD) simulations using high-performance reconfigurable computing (HPRC) has been much studied. Given the intense competition from multicore and GPUs, there is now a question whether MD on HPRC can be competitive. We concentrate here on the MD kernel computation: determining the short-range force between particle pairs. In one part of the study, we systematically explore the design space of the force pipeline with respect to arithmetic algorithm, arithmetic mode, precision, and various other optimizations. We examine simplifications and find that some have little effect on simulation quality. In the other part, we present the first FPGA study of the filtering of particle pairs with nearly zero mutual force, a standard optimization in MD codes. There are several innovations, including a novel partitioning of the particle space, and new methods for filtering and mapping work onto the pipelines. As a consequence, highly efficient filtering can be implemented with only a small fraction of the FPGA's resources. Overall, we find that, for an Altera Stratix-III EP3ES260, 8 force pipelines running at nearly 200 MHz can fit on the FPGA, and that they can perform at 95% efficiency. This results in an 80-fold per core speed-up for the short-range force, which is likely to make FPGAs highly competitive for MD.
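
    The kernel discussed here, the short-range pairwise force together with filtering of pairs whose separation exceeds a cutoff (and whose mutual force is therefore nearly zero), can be stated compactly in software. The sketch below is a plain Lennard-Jones reference implementation meant only to illustrate what the force pipelines and pair filters compute; it is not the authors' FPGA design, and the cutoff and unit choices are illustrative.

```python
import numpy as np

def lj_forces(positions, r_cut=2.5, epsilon=1.0, sigma=1.0):
    """Short-range Lennard-Jones forces with cutoff filtering.

    Pairs with separation r > r_cut are filtered out (their mutual force is
    negligible); this is the pair-filtering step discussed in the paper.
    Returns an (N, 3) array of forces in reduced units.
    """
    n = len(positions)
    forces = np.zeros_like(positions)
    for i in range(n - 1):
        rij = positions[i + 1:] - positions[i]            # vectors to all later particles
        r2 = np.einsum("ij,ij->i", rij, rij)
        keep = r2 < r_cut * r_cut                         # filter: discard distant pairs
        rij, r2 = rij[keep], r2[keep]
        sr6 = ((sigma * sigma) / r2) ** 3
        fmag = 24.0 * epsilon * (2.0 * sr6 * sr6 - sr6) / r2   # |F| / r
        fij = fmag[:, None] * rij
        forces[i] -= fij.sum(axis=0)                      # force on particle i
        forces[i + 1:][keep] += fij                       # Newton's third law
    return forces

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pos = rng.uniform(0.0, 6.0, size=(64, 3))
    print("max |F| =", np.abs(lj_forces(pos)).max())
```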

  5. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    Energy Technology Data Exchange (ETDEWEB)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1988-08-01

    Fermilab's Advanced Computer Program (ACP) has been developing highly cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 MFlops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter and intra crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction. 10 refs., 7 figs.

  6. Multi-Language Programming Environments for High Performance Java Computing

    Directory of Open Access Journals (Sweden)

    Vladimir Getov

    1999-01-01

    Full Text Available Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community's reliance on established scientific packages. As a consequence, programmers of high-performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java-to-C Interface (JCI) tool, which provides application programmers wishing to use Java with immediate accessibility to existing scientific packages. The JCI tool also facilitates rapid development and reuse of existing code. These benefits are provided at minimal cost to the programmer. While beneficial to the programmer, the additional advantages of mixed-language programming in terms of application performance and portability are addressed in detail within the context of this paper. In addition, we discuss how the JCI tool complements other ongoing projects such as IBM's High-Performance Compiler for Java (HPCJ) and IceT's metacomputing environment.

  7. Integrated modeling tool for performance engineering of complex computer systems

    Science.gov (United States)

    Wright, Gary; Ball, Duane; Hoyt, Susan; Steele, Oscar

    1989-01-01

    This report summarizes Advanced System Technologies' accomplishments on the Phase 2 SBIR contract NAS7-995. The technical objectives of the report are: (1) to develop an evaluation version of a graphical, integrated modeling language according to the specification resulting from the Phase 2 research; and (2) to determine the degree to which the language meets its objectives by evaluating ease of use, utility of two sets of performance predictions, and the power of the language constructs. The technical approach followed to meet these objectives was to design, develop, and test an evaluation prototype of a graphical, performance prediction tool. The utility of the prototype was then evaluated by applying it to a variety of test cases found in the literature and in AST case histories. Numerous models were constructed and successfully tested. The major conclusion of this Phase 2 SBIR research and development effort is that complex, real-time computer systems can be specified in a non-procedural manner using combinations of icons, windows, menus, and dialogs. Such a specification technique provides an interface that system designers and architects find natural and easy to use. In addition, PEDESTAL's multiview approach provides system engineers with the capability to perform the trade-offs necessary to produce a design that meets timing performance requirements. Sample system designs analyzed during the development effort showed that models could be constructed in a fraction of the time required by non-visual system design capture tools.

  8. Department of Energy Mathematical, Information, and Computational Sciences Division: High Performance Computing and Communications Program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-11-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, The DOE Program in HPCC), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW).

  9. Department of Energy: MICS (Mathematical Information, and Computational Sciences Division). High performance computing and communications program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-06-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, "The DOE Program in HPCC"), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW). The information pointed to by the URL is updated frequently, and the interested reader is urged to access the WWW for the latest information.

  10. Experiment in Onboard Synthetic Aperture Radar Data Processing

    Science.gov (United States)

    Holland, Matthew

    2011-01-01

    Single event upsets (SEUs) are a threat to any computing system running on hardware that has not been physically radiation hardened. In addition to mandating the use of performance-limited, hardened heritage equipment, prior techniques for dealing with the SEU problem often involved hardware-based error detection and correction (EDAC). With limited computing resources, software-based EDAC, or any more elaborate recovery methods, were often not feasible. Synthetic aperture radars (SARs), when operated in the space environment, are interesting due to their relevance to NASA's objectives, but problematic in the sense of producing prodigious amounts of raw data. Prior implementations of the SAR data processing algorithm have been too slow, too computationally intensive, and required too much application memory for onboard execution to be a realistic option when using the type of heritage processing technology described above. This standard C-language implementation of SAR data processing is distributed over many cores of a Tilera Multicore Processor, and employs novel Radiation Hardening by Software (RHBS) techniques designed to protect the component processes (one per core) and their shared application memory from the sort of SEUs expected in the space environment. The source code includes calls to Tilera APIs, and a specialized Tilera compiler is required to produce a Tilera executable. The compiled application reads input data describing the position and orientation of a radar platform, as well as its radar-burst data, over time and writes out processed data in a form that is useful for analysis of the radar observations.
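
    As a hedged illustration of the general idea behind software-based SEU protection (not the specific RHBS techniques or Tilera APIs used in this work), the sketch below shows triple modular redundancy in plain Python: critical state is stored three times, and every read majority-votes and scrubs the copies so that a single upset copy is out-voted and silently repaired.

      # Generic illustration of one common Radiation Hardening by Software idea:
      # triple modular redundancy with majority voting. Not the Tilera design above.

      def tmr_read(copies):
          """Majority-vote over three redundant copies; repair any dissenting copy."""
          a, b, c = copies
          value = a if a == b or a == c else b   # at least two copies always agree
          copies[0] = copies[1] = copies[2] = value
          return value

      state = [42, 42, 42]
      state[1] = 17            # simulate a single-event upset in one copy
      print(tmr_read(state))   # 42, and state is scrubbed back to [42, 42, 42]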

  11. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools

  12. Optimizing performance per watt on GPUs in High Performance Computing: temperature, frequency and voltage effects

    CERN Document Server

    Price, D C; Barsdell, B R; Babich, R; Greenhill, L J

    2014-01-01

    The magnitude of the real-time digital signal processing challenge attached to large radio astronomical antenna arrays motivates use of high performance computing (HPC) systems. The need for high power efficiency (performance per watt) at remote observatory sites parallels that in HPC broadly, where efficiency is an emerging critical metric. We investigate how the performance per watt of graphics processing units (GPUs) is affected by temperature, core clock frequency and voltage. Our results highlight how the underlying physical processes that govern transistor operation affect power efficiency. In particular, we show experimentally that GPU power consumption grows non-linearly with both temperature and supply voltage, as predicted by physical transistor models. We show lowering GPU supply voltage and increasing clock frequency while maintaining a low die temperature increases the power efficiency of an NVIDIA K20 GPU by up to 37-48% over default settings when running xGPU, a compute-bound code used in radio...
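
    The non-linear growth of power with supply voltage and temperature that the study measures can be illustrated with the textbook CMOS power model P ≈ C_eff·V²·f plus a leakage term that grows roughly exponentially with die temperature. The constants below are illustrative assumptions, not fitted K20 values.

      import numpy as np

      def gpu_power(v, f, temp_c, c_eff=1.5e-7, i_leak_25=8.0, alpha=0.02):
          """Textbook CMOS power model used only to illustrate the measured trends:
          dynamic power ~ C_eff * V^2 * f, leakage ~ exponential in temperature.
          All constants are illustrative assumptions."""
          dynamic = c_eff * v ** 2 * f                                # watts
          leakage = v * i_leak_25 * np.exp(alpha * (temp_c - 25.0))   # watts
          return dynamic + leakage

      # same clock, lower voltage and cooler die -> better performance per watt
      for v, t in [(1.15, 80.0), (1.00, 50.0)]:
          p = gpu_power(v, 0.8e9, t)
          print(f"V={v:.2f} V, T={t:.0f} C: {p:6.1f} W")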

  13. Limitations of Radar Coordinates

    OpenAIRE

    Bini, Donato; Lusanna, Luca; Mashhoon, Bahram

    2004-01-01

    The construction of a radar coordinate system about the world line of an observer is discussed. Radar coordinates for a hyperbolic observer as well as a uniformly rotating observer are described in detail. The utility of the notion of radar distance and the admissibility of radar coordinates are investigated. Our results provide a critical assessment of the physical significance of radar coordinates.

  14. Improving Software Performance in the Compute Unified Device Architecture

    Directory of Open Access Journals (Sweden)

    Alexandru PIRJAN

    2010-01-01

    This paper analyzes several aspects of improving software performance for applications written in the Compute Unified Device Architecture (CUDA). We address an issue of great importance when programming a CUDA application: the Graphics Processing Unit's (GPU's) memory management through transpose kernels. We also benchmark and evaluate the performance of progressively optimizing a matrix-transposing application in CUDA. One particular interest was to research how well the optimization techniques, applied to software applications written in CUDA, scale to the latest generation of general-purpose graphics processing units (GPGPUs), like the Fermi architecture implemented in the GTX480 and the previous architecture implemented in the GTX280. Lately, there has been a lot of interest in the literature in this type of optimization analysis, but none of the works so far (to the best of our knowledge) has tried to validate whether the optimizations apply to a GPU from the latest Fermi architecture and how well the Fermi architecture scales to these software performance-improving techniques.

  15. Synthetic aperture radar autofocus via semidefinite relaxation.

    Science.gov (United States)

    Liu, Kuang-Hung; Wiesel, Ami; Munson, David C

    2013-06-01

    The autofocus problem in synthetic aperture radar imaging amounts to estimating unknown phase errors caused by unknown platform or target motion. At the heart of three state-of-the-art autofocus algorithms, namely, phase gradient autofocus, multichannel autofocus (MCA), and Fourier-domain multichannel autofocus (FMCA), is the solution of a constant modulus quadratic program (CMQP). Currently, these algorithms solve a CMQP by using an eigenvalue relaxation approach. We propose an alternative relaxation approach based on semidefinite programming, which has recently attracted considerable attention in other signal processing problems. Experimental results show that our proposed methods provide promising performance improvements for MCA and FMCA through an increase in computational complexity.
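
    For readers unfamiliar with the constant modulus quadratic program (CMQP) mentioned above, the following sketch shows the generic eigenvalue-relaxation step in NumPy: the per-entry modulus constraint is relaxed to a norm constraint, the leading eigenvector is taken, and the result is projected back onto the constant-modulus set. This is a minimal illustration of the relaxation idea only, not the authors' semidefinite-programming solver or any specific autofocus code.

      import numpy as np

      def cmqp_eigen_relax(M):
          """Approximately solve max_x x^H M x s.t. |x_i| = 1 (a CMQP) via the
          eigenvalue relaxation: replace the per-entry modulus constraint with
          ||x||^2 = N, take the leading eigenvector, then project back onto the
          constant-modulus set by keeping only the phases. M is assumed Hermitian."""
          n = M.shape[0]
          w, V = np.linalg.eigh(M)                 # eigenvalues in ascending order
          x_relax = np.sqrt(n) * V[:, -1]          # leading eigenvector, ||x||^2 = n
          return np.exp(1j * np.angle(x_relax))    # constant-modulus projection

      # toy example: M built from a random constant-modulus vector plus a small ridge
      rng = np.random.default_rng(0)
      phi = np.exp(1j * rng.uniform(0, 2 * np.pi, 16))
      M = np.outer(phi, phi.conj()) + 0.1 * np.eye(16)
      x_hat = cmqp_eigen_relax(M)
      print(abs(np.vdot(x_hat, phi)) / 16)   # close to 1 when x_hat aligns with phi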

  16. Matrix element method for high performance computing platforms

    Science.gov (United States)

    Grasseau, G.; Chamont, D.; Beaudette, F.; Bianchini, L.; Davignon, O.; Mastrolorenzo, L.; Ochando, C.; Paganini, P.; Strebler, T.

    2015-12-01

    Much effort has been devoted by the ATLAS and CMS teams to improving the quality of LHC event analysis with the Matrix Element Method (MEM). Up to now, very few implementations have tried to confront the huge computing resources required by this method. We propose here a highly parallel version, combining MPI and OpenCL, which makes MEM exploitation reachable for the whole CMS dataset at a moderate cost. In the article, we describe the status of two software projects under development, one focused on physics and one focused on computing. We also showcase their preliminary performance obtained with classical multi-core processors, CUDA accelerators, and MIC co-processors. This lets us extrapolate that, with the help of 6 high-end accelerators, we should be able to reprocess the whole LHC Run 1 within 10 days, and that we have a satisfying metric for the upcoming Run 2. Future work will consist of finalizing a single merged system including all the physics and all the parallelism infrastructure, thus optimizing the implementation for the best hardware platforms.

  17. Optimizing high performance computing workflow for protein functional annotation.

    Science.gov (United States)

    Stanberry, Larissa; Rekepalli, Bhanu; Liu, Yuan; Giblock, Paul; Higdon, Roger; Montague, Elizabeth; Broomall, William; Kolker, Natali; Kolker, Eugene

    2014-09-10

    Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the protein sequence universe is rapidly expanding. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible, whereas a high compute cost limits the utility of existing automated approaches. In this work, we present an improved and optimized automated workflow to enable large-scale protein annotation. The workflow uses high performance computing architectures and a low-complexity classification algorithm to assign proteins into existing clusters of orthologous groups of proteins. On the basis of the Position-Specific Iterative Basic Local Alignment Search Tool, the algorithm ensures at least 80% specificity and sensitivity of the resulting classifications. The workflow utilizes highly scalable parallel applications for classification and sequence alignment. Using Extreme Science and Engineering Discovery Environment supercomputers, the workflow processed 1,200,000 newly sequenced bacterial proteins. With the rapid expansion of the protein sequence universe, the proposed workflow will enable scientists to annotate big genome data.

  18. wolfPAC: building a high-performance distributed computing network for phylogenetic analysis using 'obsolete' computational resources.

    Science.gov (United States)

    Reeves, Patrick A; Friedman, Philip H; Richards, Christopher M

    2005-01-01

    wolfPAC is an AppleScript-based software package that facilitates the use of numerous, remotely located Macintosh computers to perform computationally-intensive phylogenetic analyses using the popular application PAUP* (Phylogenetic Analysis Using Parsimony). It has been designed to utilise readily available, inexpensive processors and to encourage sharing of computational resources within the worldwide phylogenetics community.

  19. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Barker, Ashley D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bland, Arthur S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Gary, Jeff D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Hack, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; McNally, Stephen T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Rogers, James H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Smith, Brian E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Straatsma, T. P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Sukumar, Sreenivas Rangan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Thach, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Tichenor, Suzy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Wells, Jack C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility

    2016-03-01

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution Cybershake map for Southern

  20. Development of the Application techniques for KMA dual-pol. radar network in Korea

    Science.gov (United States)

    Suk, Mi-Kyung; Nam, Kyung-Yeub; Jung, Sung-A.; Ko, Jeong-Seok

    2016-04-01

    Korea is located between the Eurasian continent and the northwestern Pacific, so the East Asian monsoon affects the country every season and every year with the rainy season (Chang-ma front), convective storms, snow storms, and sometimes typhoons. The Korea Meteorological Administration (KMA) operates many kinds of meteorological observation networks, including 10 operational radars and 1 testbed radar. Since 2013, when the three agencies (KMA, the Ministry of Land, Infrastructure and Transport, and the Ministry of National Defense) signed an MOU for the co-utilization of cross-governmental dual-pol. radar, the Weather Radar Center (WRC) of KMA has carried out the development and application of cross-governmental dual-pol. radar harmonization for the effective use of national resources. This task develops techniques for high-quality data processing, forecasting support, and related applications. The high-quality data processing techniques include quality control for the removal of non-meteorological echoes and the classification of hydrometeors. The forecasting support techniques include the computation and verification of rainfall estimates from dual-pol. and single-pol. radars. Application techniques are also being developed using the Yong-In testbed dual-pol. radar, merged rainfall fields from radars and satellites, and related data. Further work includes computation of high-resolution 3-dimensional wind fields, quantitative precipitation forecasting, and the development of application and information service techniques for hydrology, climate, industry, and aviation, as well as severe-weather prevention techniques using the multi-wavelength (X-, C-, and S-band) radars of the cooperating agencies.
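
    As a hedged sketch of the kind of rainfall-estimation computation mentioned above, the snippet below evaluates two common power-law estimators, a single-pol Z-R relation and a dual-pol R(Kdp) relation. The coefficients are generic textbook-style values chosen purely for illustration, not KMA's operational relations.

      import numpy as np

      def rain_rate_z(dbz, a=200.0, b=1.6):
          """Single-pol Z-R estimator of the form Z = a * R^b (Marshall-Palmer-style
          coefficients, used here only for illustration)."""
          z_lin = 10.0 ** (dbz / 10.0)          # dBZ -> mm^6 m^-3
          return (z_lin / a) ** (1.0 / b)       # rain rate in mm/h

      def rain_rate_kdp(kdp, c=44.0, d=0.822):
          """Dual-pol R(Kdp) estimator of the common power-law form R = c * Kdp^d;
          c and d are illustrative assumptions."""
          return c * np.sign(kdp) * np.abs(kdp) ** d

      print(rain_rate_z(45.0))     # roughly 24 mm/h for a 45 dBZ cell
      print(rain_rate_kdp(1.0))    # 44 mm/h for Kdp = 1 deg/km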

  1. Electromagnetic Model Reliably Predicts Radar Scattering Characteristics of Airborne Organisms

    Science.gov (United States)

    Mirkovic, Djordje; Stepanian, Phillip M.; Kelly, Jeffrey F.; Chilson, Phillip B.

    2016-10-01

    The radar scattering characteristics of aerial animals are typically obtained from controlled laboratory measurements of a freshly harvested specimen. These measurements are tedious to perform, difficult to replicate, and typically yield only a small subset of the full azimuthal, elevational, and polarimetric radio scattering data. As an alternative, biological applications of radar often assume that the radar cross sections of flying animals are isotropic, since sophisticated computer models are required to estimate the 3D scattering properties of objects having complex shapes. Using the method of moments implemented in the WIPL-D software package, we show for the first time that such electromagnetic modeling techniques (typically applied to man-made objects) can accurately predict organismal radio scattering characteristics from an anatomical model: here the Brazilian free-tailed bat (Tadarida brasiliensis). The simulated scattering properties of the bat agree with controlled measurements and radar observations made during a field study of bats in flight. This numerical technique can produce the full angular set of quantitative polarimetric scattering characteristics, while eliminating many practical difficulties associated with physical measurements. Such a modeling framework can be applied for bird, bat, and insect species, and will help drive a shift in radar biology from a largely qualitative and phenomenological science toward quantitative estimation of animal densities and taxonomic identification.

  2. Next-generation sequencing: big data meets high performance computing.

    Science.gov (United States)

    Schmidt, Bertil; Hildebrandt, Andreas

    2017-02-02

    The progress of next-generation sequencing has a major impact on medical and genomic research. This high-throughput technology can now produce billions of short DNA or RNA fragments in excess of a few terabytes of data in a single run. This leads to massive datasets used by a wide range of applications including personalized cancer treatment and precision medicine. In addition to the hugely increased throughput, the cost of using high-throughput technologies has been dramatically decreasing. A low sequencing cost of around US$1000 per genome has now rendered large population-scale projects feasible. However, to make effective use of the produced data, the design of big data algorithms and their efficient implementation on modern high performance computing systems is required.

  3. Effect of mindfulness meditation on brain-computer interface performance.

    Science.gov (United States)

    Tan, Lee-Fan; Dienes, Zoltan; Jansari, Ashok; Goh, Sing-Yau

    2014-01-01

    Electroencephalogram-based brain-computer interfaces (BCIs) enable stroke and motor neuron disease patients to communicate and control devices. Mindfulness meditation has been claimed to enhance metacognitive regulation. The current study explores whether mindfulness meditation training can thus improve the performance of BCI users. To eliminate the possibility of expectation of improvement influencing the results, we introduced a music training condition. A norming study found that both meditation and music interventions elicited clear expectations for improvement on the BCI task, with the strength of expectation being closely matched. In the main 12-week intervention study, seventy-six healthy volunteers were randomly assigned to three groups: a meditation training group; a music training group; and a no-treatment control group. The mindfulness meditation training group obtained a significantly higher BCI accuracy compared to both the music training and no-treatment control groups after the intervention, indicating effects of meditation above and beyond expectancy effects.

  4. Performance Evaluation of Three Distributed Computing Environments for Scientific Applications

    Science.gov (United States)

    Fatoohi, Rod; Weeratunga, Sisira; Lasinski, T. A. (Technical Monitor)

    1994-01-01

    We present performance results for three distributed computing environments using the three simulated CFD applications in the NAS Parallel Benchmark suite. These environments are the DCF cluster, the LACE cluster, and an Intel iPSC/860 machine. The DCF is a prototypic cluster of loosely coupled SGI R3000 machines connected by Ethernet. The LACE cluster is a tightly coupled cluster of 32 IBM RS6000/560 machines connected by Ethernet as well as by either FDDI or an IBM Allnode switch. Results of several parallel algorithms for the three simulated applications are presented and analyzed based on the interplay between the communication requirements of an algorithm and the characteristics of the communication network of a distributed system.

  5. Power/energy use cases for high performance computing.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H.,; Kelly, Suzanne M; Hammond, Steven; Elmore, Ryan; Munch, Kristin

    2013-12-01

    Power and energy have been identified as a first-order challenge for future extreme-scale high performance computing (HPC) systems. In practice the breakthroughs will need to be provided by the hardware vendors, but making the best use of the solutions in an HPC environment will likely require periodic tuning by facility operators and software components. This document describes the actions and interactions needed to maximize power resources. It strives to cover the entire operational space that an HPC system occupies. The descriptions are presented as formal use cases, as documented in the Unified Modeling Language Specification [1]. The document is intended to provide the HPC community with a common understanding of the necessary management and control capabilities. Assuming a common understanding can be achieved, the next step will be to develop a set of Application Programming Interfaces (APIs) that hardware vendors and software developers could use to steer power consumption.

  6. Tablet computer enhanced training improves internal medicine exam performance

    Science.gov (United States)

    Wende, Ilja; Grittner, Ulrike

    2017-01-01

    Background: Traditional teaching concepts in medical education do not take full advantage of current information technology. We aimed to objectively determine the impact of Tablet PC-enhanced training on learning experience and MKSAP® (medical knowledge self-assessment program) exam performance. Methods: In this single-center, prospective, controlled study, final-year medical students and medical residents doing an inpatient service rotation were alternately assigned to either the active test group (Tablet PC with a custom multimedia education software package) or the traditional education (control) group. All completed an extensive questionnaire to collect socio-demographic data and to evaluate educational status, computer affinity and skills, problem solving, eLearning knowledge, and self-rated medical knowledge. Both groups were MKSAP® tested at the beginning and the end of their rotation. The MKSAP® score at the final exam was the primary endpoint. Results: Data from 55 participants (tablet n = 24, controls n = 31; 36.4% male; median age 28 years; 65.5% students) were evaluable. The mean MKSAP® score improved in the Tablet PC group (score Δ +8, SD 11) but not in the control group (score Δ −7, SD 11). After adjustment for baseline score and confounders, the Tablet PC group showed on average 11% better MKSAP® test results than the control group. Computer-based integrated training and clinical practice enhances medical education and exam performance. Larger, multicenter trials are required to independently validate our data. Residency and fellowship directors are encouraged to consider adding portable computer devices and multimedia content and to introduce blended learning to their respective training programs. PMID:28369063

  7. 15 CFR 743.2 - High performance computers: Post shipment verification reporting.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false High performance computers: Post... ADMINISTRATION REGULATIONS SPECIAL REPORTING § 743.2 High performance computers: Post shipment verification... certain computers to destinations in Computer Tier 3, see § 740.7(d) for a list of these destinations...

  8. Computer literacy in secondary education: The performance and engagement of girls

    NARCIS (Netherlands)

    Voogt, Joke

    1987-01-01

    This research study examines the performance and engagement in computer literacy of boys and girls (N = 873). Performance and engagement in computer literacy are established with CAST (Computer Alfabetisme Schalen Twente), a Dutch version of the Minnesota Computer Literacy Awareness Assessment. The resul

  9. Addressing inaccuracies in BLOSUM computation improves homology search performance.

    Science.gov (United States)

    Hess, Martin; Keul, Frank; Goesele, Michael; Hamacher, Kay

    2016-04-27

    BLOSUM matrices belong to the most commonly used substitution matrix series for protein homology search and sequence alignments since their publication in 1992. In 2008, Styczynski et al. discovered miscalculations in the clustering step of the matrix computation. Still, the RBLOSUM64 matrix based on the corrected BLOSUM code was reported to perform worse at a statistically significant level than the BLOSUM62. Here, we present a further correction of the (R)BLOSUM code and provide a thorough performance analysis of BLOSUM-, RBLOSUM- and the newly derived CorBLOSUM-type matrices. Thereby, we assess homology search performance of these matrix-types derived from three different BLOCKS databases on all versions of the ASTRAL20, ASTRAL40 and ASTRAL70 subsets resulting in 51 different benchmarks in total. Our analysis is focused on two of the most popular BLOSUM matrices - BLOSUM50 and BLOSUM62. Our study shows that fixing small errors in the BLOSUM code results in substantially different substitution matrices with a beneficial influence on homology search performance when compared to the original matrices. The CorBLOSUM matrices introduced here performed at least as good as their BLOSUM counterparts in ∼75 % of all test cases. On up-to-date ASTRAL databases BLOSUM matrices were even outperformed by CorBLOSUM matrices in more than 86 % of the times. In contrast to the study by Styczynski et al., the tested RBLOSUM matrices also outperformed the corresponding BLOSUM matrices in most of the cases. Comparing the CorBLOSUM with the RBLOSUM matrices revealed no general performance advantages for either on older ASTRAL releases. On up-to-date ASTRAL databases however CorBLOSUM matrices performed better than their RBLOSUM counterparts in ∼74 % of the test cases. Our results imply that CorBLOSUM type matrices outperform the BLOSUM matrices on a statistically significant level in most of the cases, especially on up-to-date databases such as ASTRAL ≥2.01. Additionally, Cor
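
    To make the matrix-computation step discussed above concrete, here is a minimal NumPy sketch of the final log-odds scoring step behind BLOSUM-type matrices. It starts from a toy symmetric matrix of observed residue-pair counts and deliberately omits the BLOCKS clustering step where the reported miscalculations occurred, so it illustrates the scoring formula only, not the corrected (R)BLOSUM or CorBLOSUM code.

      import numpy as np

      def log_odds_matrix(pair_counts, scale=2.0):
          """BLOSUM-style scoring: s_ij = round(scale * log2(q_ij / (p_i * p_j))),
          where q are observed pair frequencies (both orders counted) and p are
          the residue marginals. Clustering and sequence weighting are omitted."""
          q = pair_counts / pair_counts.sum()     # observed pair frequencies
          p = q.sum(axis=1)                       # marginal residue frequencies
          expected = np.outer(p, p)               # pair frequencies expected by chance
          return np.round(scale * np.log2(q / expected)).astype(int)

      # toy 3-letter alphabet with symmetric pair counts
      counts = np.array([[90., 10.,  5.],
                         [10., 60., 15.],
                         [ 5., 15., 40.]])
      print(log_odds_matrix(counts))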

  10. Alternatives for Military Space Radar

    Science.gov (United States)

    2007-01-01

    Because the characteristics and performance of Discoverer II's radar are well documented, CBO based the design of its notional Space Radar on that of...2005, report to accompany H.R. 4613, Report 108-553 (June 18, 2004). 13. Air Force Space and Missile Systems Center, "Fact Sheet: Discoverer II...360-degree coverage in GMTI mode. See Federation of American Scientists, Space Policy Project, "Discoverer II STARLITE" (January 24, 2000

  11. Perform - A performance optimizing computer program for dynamic systems subject to transient loadings

    Science.gov (United States)

    Pilkey, W. D.; Wang, B. P.; Yoo, Y.; Clark, B.

    1973-01-01

    A description and applications of a computer capability for determining the ultimate optimal behavior of a dynamically loaded structural-mechanical system are presented. This capability provides characteristics of the theoretically best, or limiting, design concept according to response criteria dictated by design requirements. Equations of motion of the system in first- or second-order form include incompletely specified elements whose characteristics are determined in the optimization of one or more performance indices subject to the response criteria in the form of constraints. The system is subject to deterministic transient inputs, and the computer capability is designed to operate with a large off-the-shelf linear programming software package which performs the desired optimization. The report contains user-oriented program documentation in engineering, problem-oriented form. Applications cover a wide variety of dynamics problems including those associated with such diverse configurations as a missile-silo system, impacting freight cars, and an aircraft ride control system.
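
    The record does not spell out its formulation, so as a hedged stand-in the sketch below poses one classic limiting-performance question, the minimum peak force that brings a discretized double integrator to rest in a fixed time, as a linear program solved with SciPy. The dynamics, horizon, and solver are all illustrative assumptions rather than the PERFORM program itself.

      import numpy as np
      from scipy.optimize import linprog

      # Double integrator (unit mass) sampled with a zero-order hold
      dt, N = 0.1, 30
      A = np.array([[1.0, dt], [0.0, 1.0]])
      B = np.array([dt ** 2 / 2.0, dt])
      x0 = np.array([1.0, 0.0])                      # initial displacement, at rest

      # Reachability matrix: x_N = A^N x0 + sum_k A^(N-1-k) B u_k
      G = np.column_stack([np.linalg.matrix_power(A, N - 1 - k) @ B for k in range(N)])

      # Variables z = [u_0..u_{N-1}, s]; minimise peak force s with |u_k| <= s
      c = np.r_[np.zeros(N), 1.0]
      A_ub = np.block([[np.eye(N), -np.ones((N, 1))],
                       [-np.eye(N), -np.ones((N, 1))]])
      b_ub = np.zeros(2 * N)
      A_eq = np.c_[G, np.zeros(2)]                   # bring the system exactly to rest
      b_eq = -np.linalg.matrix_power(A, N) @ x0
      res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                    bounds=[(None, None)] * N + [(0, None)])
      print(res.x[-1])                               # minimum achievable peak force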

  12. FY 1995 Blue Book: High Performance Computing and Communications: Technology for the National Information Infrastructure

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — The Federal High Performance Computing and Communications (HPCC) Program was created to accelerate the development of future generations of high performance computers...

  13. RADAR PPI Scope Overlay

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — RADAR PPI Scope Overlays are used to position a RADAR image over a station at the correct resolution. The archive maintains several different RADAR resolution types,...

  14. Analysis of high resolution land clutter using an X-band radar

    CSIR Research Space (South Africa)

    Melebari, A

    2015-10-01

    Full Text Available In modern radar systems with high range resolution, the statistical properties of clutter have a significant effect on the performance of the radar. Analyzing the radar returns from various clutter terrains is essential when aiming to optimize...

  15. Detection Performance of the Circular Correlation Coefficient Receiver,

    Science.gov (United States)

    The distribution of the squared modulus of the circular serial correlation coefficient is found when no signal is present, allowing computation of the detection threshold. For small data records, as is typical in radar applications, the performance of the correlation coefficient detector is compared to a standard... Keywords: Correlation Coefficient, Autoregressive, CFAR, Autocorrelation Estimation, Radar Receiver, Digital Signal Processing.
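
    A hedged sketch of the detector concept: the squared modulus of the lag-1 circular serial correlation coefficient serves as the test statistic, and the threshold for a chosen false-alarm probability is obtained here by Monte Carlo over noise-only records rather than from the closed-form distribution derived in the report.

      import numpy as np

      def circ_corr_stat(x):
          """Squared modulus of the lag-1 circular serial correlation coefficient."""
          x = x - x.mean()
          r = np.vdot(x, np.roll(x, 1)) / np.vdot(x, x)
          return abs(r) ** 2

      rng = np.random.default_rng(1)
      N, pfa, trials = 32, 1e-2, 20000

      def cwgn(n):
          """Unit-power complex white Gaussian noise."""
          return (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

      # Monte Carlo detection threshold under the noise-only hypothesis (CFAR-style)
      noise_stats = np.array([circ_corr_stat(cwgn(N)) for _ in range(trials)])
      threshold = np.quantile(noise_stats, 1 - pfa)

      # test record: a slow complex exponential (correlated return) buried in noise
      t = np.arange(N)
      x = 2.0 * np.exp(1j * 2 * np.pi * 0.1 * t) + cwgn(N)
      print(threshold, circ_corr_stat(x) > threshold)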

  16. Enabling Efficient Climate Science Workflows in High Performance Computing Environments

    Science.gov (United States)

    Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.

    2015-12-01

    A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provide a myriad of challenges when running on a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project we highlight a stack of tools our team utilizes and has developed to ensure that large scale simulation and analysis work are commonplace and provide operations that assist in everything from generation/procurement of data (HTAR/Globus) to automating publication of results to portals like the Earth Systems Grid Federation (ESGF), all while executing everything in between in a scalable environment in a task parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases they have been applied to.

  17. Computer simulation of diffractive optical element (DOE) performance

    Science.gov (United States)

    Delacour, Jacques F.; Venturino, Jean-Claude; Gouedard, Yannick

    2004-02-01

    Diffractive optical elements (DOEs), also known as computer-generated holograms (CGHs), can transform an illuminating laser beam into a specified intensity distribution by diffraction rather than refraction or reflection. They are widely used in coherent light systems for beam-shaping purposes, as alignment tools, or as structured-light generators. The diffractive surface is split into an array of sub-wavelength-depth cells, each of which locally transforms the beam by phase adaptation. Based on the work of the LSP lab of the University of Strasbourg, France, we have developed a unique industry-oriented tool. It first allows the user to optimize a DOE using the Gerchberg-Saxton algorithm. This part can handle sources ranging from a simple plane wave to high-order Gaussian modes or beams defined by complex maps, and objective patterns based on BMP images. A simulation part then lets the user test the performance of the DOE with regard to system parameters concerning the beam, the DOE itself, and the system organization, which meets the needs of people concerned with tolerancing issues. Focusing on the industrial problem of beam shaping, we present the whole DOE design sequence, from the generation of a DOE up to the study of the sensitivity of its performance to variations in several system parameters. For example, we show the influence of the beam position on diffraction efficiency. This unique feature, formerly neglected in the industrial design process, will lead the way to production quality improvement.
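
    As a hedged illustration of the optimization step named above, the snippet below runs a bare-bones Gerchberg-Saxton loop in NumPy, using a single FFT as the propagation model. The commercial tool's source handling, propagation models, and tolerancing features are, of course, far richer than this sketch.

      import numpy as np

      def gerchberg_saxton(source_amp, target_amp, iterations=100, seed=0):
          """Minimal Gerchberg-Saxton loop: find a phase mask such that the far
          field (modelled here as a plain FFT) of source_amp * exp(j*phase)
          approximates target_amp. Propagation and normalisation are simplified."""
          rng = np.random.default_rng(seed)
          phase = rng.uniform(0, 2 * np.pi, source_amp.shape)
          for _ in range(iterations):
              far = np.fft.fft2(source_amp * np.exp(1j * phase))
              far = target_amp * np.exp(1j * np.angle(far))   # impose target amplitude
              near = np.fft.ifft2(far)
              phase = np.angle(near)                          # impose source amplitude
          return phase

      # illuminate with a uniform beam, ask for a bright square in the far field
      n = 64
      source = np.ones((n, n))
      target = np.zeros((n, n))
      target[24:40, 24:40] = 1.0
      doe_phase = gerchberg_saxton(source, target)
      print(doe_phase.shape)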

  18. Real-time radar signal processing using GPGPU (general-purpose graphic processing unit)

    Science.gov (United States)

    Kong, Fanxing; Zhang, Yan Rockee; Cai, Jingxiao; Palmer, Robert D.

    2016-05-01

    This study introduces a practical approach to developing a real-time signal processing chain for a general phased array radar on NVIDIA GPUs (graphics processing units) using CUDA (Compute Unified Device Architecture) libraries such as cuBLAS and cuFFT, which are adopted from open-source libraries and optimized for NVIDIA GPUs. The processed results are rigorously verified against those from CPUs. Performance, benchmarked as computation time for various input data cube sizes, is compared across GPUs and CPUs. Through this analysis, it is demonstrated that GPGPU (general-purpose GPU) real-time processing of array radar data is possible with relatively low-cost commercial GPUs.
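
    To make the processing chain concrete, here is a hedged NumPy sketch of two of its stages, FFT-based pulse compression followed by Doppler filtering across pulses. The waveform parameters and data-cube size are illustrative assumptions; a GPU implementation would map the same FFTs onto cuFFT.

      import numpy as np

      # Reference LFM (chirp) waveform; sample rate, pulse width and bandwidth are assumed
      fs, T, B = 10e6, 20e-6, 5e6
      t = np.arange(int(fs * T)) / fs
      chirp = np.exp(1j * np.pi * (B / T) * t ** 2)

      def pulse_compress(raw, ref):
          """FFT-based matched filtering of each pulse (row) against ref."""
          n = raw.shape[1] + ref.size - 1
          R = np.fft.fft(raw, n, axis=1)
          H = np.conj(np.fft.fft(ref, n))
          return np.fft.ifft(R * H, axis=1)

      def doppler_filter(cpi):
          """Doppler processing: windowed FFT across pulses (slow time)."""
          w = np.hanning(cpi.shape[0])[:, None]
          return np.fft.fftshift(np.fft.fft(cpi * w, axis=0), axes=0)

      # toy coherent processing interval: 64 pulses x 1000 fast-time samples
      rng = np.random.default_rng(0)
      raw = (rng.standard_normal((64, 1000)) + 1j * rng.standard_normal((64, 1000))) / np.sqrt(2)
      raw[:, 300:300 + chirp.size] += chirp * np.exp(1j * 2 * np.pi * 0.1 * np.arange(64))[:, None]
      rd_map = doppler_filter(pulse_compress(raw, chirp))
      print(rd_map.shape)   # (64, 1199) range-Doppler map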

  19. Advances in bistatic radar

    CERN Document Server

    Willis, Nick

    2007-01-01

    Advances in Bistatic Radar updates and extends bistatic and multistatic radar developments since publication of Willis' Bistatic Radar in 1991. New and recently declassified military applications are documented. Civil applications are detailed including commercial and scientific systems. Leading radar engineers provide expertise to each of these applications. Advances in Bistatic Radar consists of two major sections: Bistatic/Multistatic Radar Systems and Bistatic Clutter and Signal Processing. Starting with a history update, the first section documents the early and now declassified military

  20. Automatic Synthetic Aperture Radar based oil spill detection and performance estimation via a semi-automatic operational service benchmark.

    Science.gov (United States)

    Singha, Suman; Vespe, Michele; Trieschmann, Olaf

    2013-08-15

    Today the health of the ocean is in greater danger than ever before, mainly due to man-made pollution. Operational activities show the regular occurrence of accidental and deliberate oil spills in European waters. Since the areas covered by oil spills are usually large, satellite remote sensing, particularly Synthetic Aperture Radar, represents an effective option for operational oil spill detection. This paper describes the development of a fully automated approach for oil spill detection from SAR. A total of 41 feature parameters are extracted from each segmented dark spot for oil spill and 'look-alike' classification and ranked according to their importance. The classification algorithm is based on two-stage processing that combines classification tree analysis and fuzzy logic. An initial evaluation of this methodology on a large dataset has been carried out, and the degree of agreement between the results of the proposed algorithm and a human analyst was estimated at between 85% and 93% for ENVISAT and RADARSAT, respectively.
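
    As a hedged, toy-scale stand-in for the first stage of the two-stage classifier described above, the sketch below trains a shallow decision tree on three synthetic dark-spot features. The real system uses 41 features extracted from SAR imagery and adds a fuzzy-logic stage, neither of which is reproduced here.

      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      # Hypothetical stand-in features for segmented dark spots: area, mean
      # backscatter contrast, shape complexity (the operational system uses 41 features).
      rng = np.random.default_rng(0)
      n = 200
      oil = np.column_stack([rng.normal(5, 1, n), rng.normal(8, 2, n), rng.normal(2.0, 0.5, n)])
      lookalike = np.column_stack([rng.normal(3, 1, n), rng.normal(4, 2, n), rng.normal(3.5, 0.8, n)])
      X = np.vstack([oil, lookalike])
      y = np.array([1] * n + [0] * n)           # 1 = oil spill, 0 = look-alike

      clf = DecisionTreeClassifier(max_depth=3).fit(X, y)
      print(clf.predict([[5.2, 7.5, 1.8]]))     # classify a new dark spot
      print(clf.feature_importances_)           # rough analogue of the feature ranking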

  1. High-Performance Special-Purpose Computers in Science

    OpenAIRE

    1998-01-01

    The next decade will be an exciting time for computational physicists. After 50 years of being forced to use standardized commercial equipment, it will finally become relatively straightforward to adapt one's computing tools to one's own needs. The breakthrough that opens this new era is the now wide-spread availability of programmable chips that allow virtually every computational scientist to design his or her own special-purpose computer.

  2. High-performance computing, high-speed networks, and configurable computing environments: progress toward fully distributed computing.

    Science.gov (United States)

    Johnston, W E; Jacobson, V L; Loken, S C; Robertson, D W; Tierney, B L

    1992-01-01

    The next several years will see the maturing of a collection of technologies that will enable fully and transparently distributed computing environments. Networks will be used to configure independent computing, storage, and I/O elements into "virtual systems" that are optimal for solving a particular problem. This environment will make the most powerful computing systems those that are logically assembled from network-based components and will also make those systems available to a widespread audience. Anticipating that the necessary technology and communications infrastructure will be available in the next 3 to 5 years, we are developing and demonstrating prototype applications that test and exercise the currently available elements of this configurable environment. The Lawrence Berkeley Laboratory (LBL) Information and Computing Sciences and Research Medicine Divisions have collaborated with the Pittsburgh Supercomputer Center to demonstrate one distributed application that illuminates the issues and potential of using networks to configure virtual systems. This application allows the interactive visualization of large three-dimensional (3D) scalar fields (voxel data sets) by using a network-based configuration of heterogeneous supercomputers and workstations. The specific test case is visualization of 3D magnetic resonance imaging (MRI) data. The virtual system architecture consists of a Connection Machine-2 (CM-2) that performs surface reconstruction from the voxel data, a Cray Y-MP that renders the resulting geometric data into an image, and a workstation that provides the display of the image and the user interface for specifying the parameters for the geometry generation and 3D viewing. These three elements are configured into a virtual system by using several different network technologies. This paper reviews the current status of the software, hardware, and communications technologies that are needed to enable this configurable environment. These

  3. Resolution Performance Analysis of Multiple-transmitter Multiple-receiver Multistatic Radar System

    Institute of Scientific and Technical Information of China (English)

    闵涛; 肖顺平

    2013-01-01

    Detection of multiple unresolved targets is a frontier topic and a pressing task in the radar technology field. Considering a multistatic radar system with multiple transmitters and multiple receivers, the ambiguity functions of spatially coherent and incoherent multistatic radar are analyzed. The 3-dB contour area of the ambiguity function is used as a metric to compare the target range resolution performance of different sensor geometries and waveform selections against a one-transmitter, multiple-receiver multistatic radar, and the corresponding simulation results are presented. The results show that spatial coherence information enhances the target range resolution of a multistatic radar system. Although a multiple-transmitter, multiple-receiver multistatic radar increases system complexity, if improperly configured its target resolution performance may not be superior to that of a one-transmitter, multiple-receiver system. The conclusions can be used to analyze the detection performance of multistatic radar against multiple unresolved targets and provide a reference for multistatic radar sensor placement and waveform design.
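
    A hedged sketch of the resolution metric used above: compute a waveform's narrowband ambiguity surface on a delay-Doppler grid and measure the area inside its 3-dB contour. The waveform, grid, and cell sizes are illustrative, and the multistatic geometry and spatial-coherence aspects of the paper are not modelled.

      import numpy as np

      def ambiguity(signal, fs, doppler_bins=101, max_doppler=5e3):
          """Discrete narrowband ambiguity surface |chi(tau, nu)| of a waveform,
          evaluated by brute force on a grid of Doppler shifts."""
          n = signal.size
          t = np.arange(n) / fs
          dopplers = np.linspace(-max_doppler, max_doppler, doppler_bins)
          chi = np.empty((doppler_bins, 2 * n - 1))
          for k, nu in enumerate(dopplers):
              shifted = signal * np.exp(1j * 2 * np.pi * nu * t)
              chi[k] = np.abs(np.correlate(shifted, signal, mode="full"))
          return chi / chi.max()

      fs, T, B = 1e6, 100e-6, 200e3
      t = np.arange(int(fs * T)) / fs
      lfm = np.exp(1j * np.pi * (B / T) * t ** 2)
      chi = ambiguity(lfm, fs)

      delay_step, doppler_step = 1 / fs, 2 * 5e3 / (101 - 1)    # grid spacing used above
      area_3db = np.count_nonzero(chi >= 10 ** (-3 / 20)) * delay_step * doppler_step
      print(area_3db)   # 3-dB contour area in s*Hz; smaller means finer delay-Doppler resolution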

  4. High Performance Computing Facility Operational Assessment, CY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Bland, Arthur S Buddy [ORNL; Boudwin, Kathlyn J. [ORNL; Hack, James J [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL; Hudson, Douglas L [ORNL

    2012-02-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these we report the 300 in this review that are consistent with guidance provided. Scientific achievements by OLCF users cut across all range scales from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to do billion-cell CFD calculations to develop shock wave compression turbo machinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbo machinery to learn what efficiencies the traditional steady flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation

  5. Comparing the performance of SIMD computers by running large air pollution models

    DEFF Research Database (Denmark)

    Brown, J.; Hansen, Per Christian; Wasniewski, J.

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on these computers. Using a realistic large-scale model, we gained detailed insight into the performance of the computers involved when they are used to solve large-scale scientific problems that involve several types of numerical computations. The computers used in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216.

  6. Computational Performance Assessment of k-mer Counting Algorithms.

    Science.gov (United States)

    Pérez, Nelson; Gutierrez, Miguel; Vera, Nelson

    2016-04-01

    This article assesses several tools for k-mer counting, with the purpose of creating a reference framework that helps bioinformatics researchers identify the computational requirements, parallelization, advantages, disadvantages, and bottlenecks of each of the algorithms implemented in the tools. The k-mer counters evaluated in this article were BFCounter, DSK, Jellyfish, KAnalyze, KHMer, KMC2, MSPKmerCounter, Tallymer, and Turtle. Measured parameters were RAM usage, processing time, parallelization, and read and write disk access. A dataset of 36,504,800 reads corresponding to human chromosome 14 was used. The assessment was performed for two k-mer lengths: 31 and 55. The results were as follows: pure Bloom filter-based tools and disk-partitioning techniques showed lower RAM use; the tools with the shortest execution times were those that used disk-partitioning techniques; and the greatest degree of parallelization was achieved by tools that used disk partitioning, lock-free hash tables, or multiple hash tables.
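
    For readers outside bioinformatics, the baseline task that all of the benchmarked tools perform can be written in a few lines of Python. The real counters differ mainly in how they shard this hash table across threads, disk partitions, and Bloom filters in order to cope with billions of reads.

      from collections import Counter

      def count_kmers(reads, k):
          """Baseline in-memory k-mer counter: one hash table keyed by k-mer string."""
          counts = Counter()
          for read in reads:
              for i in range(len(read) - k + 1):
                  counts[read[i:i + k]] += 1
          return counts

      reads = ["ACGTACGTGG", "CGTACG"]
      print(count_kmers(reads, 4).most_common(3))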

  7. High performance computing in power and energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Khaitan, Siddhartha Kumar [Iowa State Univ., Ames, IA (United States); Gupta, Anshul (eds.) [IBM Watson Research Center, Yorktown Heights, NY (United States)

    2013-07-01

    The twin challenge of meeting global energy demands in the face of growing economies and populations while restricting greenhouse gas emissions is one of the most daunting that humanity has ever faced. Smart electrical generation and distribution infrastructure will play a crucial role in meeting these challenges. We will need to develop capabilities to handle the large volumes of data generated by power system components such as PMUs, DFRs and other data acquisition devices, as well as the capacity to process these data at high resolution via multi-scale and multi-period simulations, cascading and security analysis, interaction between hybrid systems (electric, transport, gas, oil, coal, etc.) and so on, to get meaningful information in real time to ensure a secure, reliable and stable power grid. Advanced research on the development and implementation of market-ready, leading-edge, high-speed enabling technologies and algorithms for solving real-time, dynamic, resource-critical problems will be required for dynamic security analysis targeted towards successful implementation of Smart Grid initiatives. This book aims to bring together some of the latest research developments, as well as thoughts on future research directions, for high performance computing applications in electric power system planning, operations, security, markets, and grid integration of alternative energy sources.

  8. Lightweight Provenance Service for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Dong; Chen, Yong; Carns, Philip; Jenkins, John; Ross, Robert

    2017-09-09

    Provenance describes detailed information about the history of a piece of data, containing the relationships among elements such as users, processes, jobs, and workflows that contribute to the existence of data. Provenance is key to supporting many data management functionalities that are increasingly important in operations such as identifying data sources, parameters, or assumptions behind a given result; auditing data usage; or understanding details about how inputs are transformed into outputs. Despite its importance, however, provenance support is largely underdeveloped in highly parallel architectures and systems. One major challenge is the demanding requirements of providing provenance service in situ. The need to remain lightweight and to be always on often conflicts with the need to be transparent and offer an accurate catalog of details regarding the applications and systems. To tackle this challenge, we introduce a lightweight provenance service, called LPS, for high-performance computing (HPC) systems. LPS leverages a kernel instrument mechanism to achieve transparency and introduces representative execution and flexible granularity to capture comprehensive provenance with controllable overhead. Extensive evaluations and use cases have confirmed its efficiency and usability. We believe that LPS can be integrated into current and future HPC systems to support a variety of data management needs.

  9. Radar-to-Radar Interference Suppression for Distributed Radar Sensor Networks

    OpenAIRE

    Wen-Qin Wang; Huaizong Shao

    2014-01-01

    Radar sensor networks, including bi- and multi-static radars, provide several operational advantages, like reduced vulnerability, good system flexibility and an increased radar cross-section. However, radar-to-radar interference suppression is a major problem in distributed radar sensor networks. In this paper, we present a cross-matched filtering-based radar-to-radar interference suppression algorithm. This algorithm first uses an iterative filtering algorithm to suppress the radar-to-radar ...

  10. Satellite radar for monitoring forest resources

    Science.gov (United States)

    Hoffer, Roger M.; Lee, Kyu-Sung

    1990-01-01

    An evaluation is made of the computer analysis results of a study which used Seasat satellite radar data obtained in 1978 and Shuttle Imaging Radar-B data obtained in 1984. The change-detection procedures employed demonstrate that deforestation and reforestation activities can be effectively monitored on the basis of radar data gathered at satellite altitudes. The computer-processing techniques applied to the data encompassed (1) overlay display, (2) ratios, (3) differences, (4) principal-component analysis, and (5) classification; of these, overlay display is noted to quickly and easily yield a qualitative display of the multidate data.
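
    Among the computer-processing techniques listed, the ratio (or, in decibels, difference) operation is the simplest to sketch. The snippet below flags pixels whose backscatter changed by more than a few dB between two dates; the threshold and the simulated scene are purely illustrative, not the study's actual processing.

      import numpy as np

      def ratio_change_map(before_db, after_db, threshold_db=3.0):
          """Multidate radar change detection by backscatter ratio: in dB the ratio
          becomes a difference, and pixels whose backscatter changed by more than
          threshold_db are flagged. The threshold is an illustrative choice."""
          diff = after_db - before_db
          return np.abs(diff) > threshold_db

      rng = np.random.default_rng(0)
      before = rng.normal(-8.0, 1.0, (100, 100))     # e.g. forest backscatter in dB
      after = before.copy()
      after[40:60, 40:60] -= 5.0                     # simulated clear-cut patch
      print(ratio_change_map(before, after).sum())   # roughly 400 changed pixels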

  11. SMAP RADAR Calibration and Validation

    Science.gov (United States)

    West, R. D.; Jaruwatanadilok, S.; Chaubel, M. J.; Spencer, M.; Chan, S. F.; Chen, C. W.; Fore, A.

    2015-12-01

    The Soil Moisture Active Passive (SMAP) mission launched on Jan 31, 2015. The mission employs L-band radar and radiometer measurements to estimate soil moisture with 4% volumetric accuracy at a resolution of 10 km, and freeze-thaw state at a resolution of 1-3 km. Immediately following launch, there was a three month instrument checkout period, followed by six months of level 1 (L1) calibration and validation. In this presentation, we will discuss the calibration and validation activities and results for the L1 radar data. Early SMAP radar data were used to check commanded timing parameters, and to work out issues in the low- and high-resolution radar processors. From April 3-13 the radar collected receive only mode data to conduct a survey of RFI sources. Analysis of the RFI environment led to a preferred operating frequency. The RFI survey data were also used to validate noise subtraction and scaling operations in the radar processors. Normal radar operations resumed on April 13. All radar data were examined closely for image quality and calibration issues which led to improvements in the radar data products for the beta release at the end of July. Radar data were used to determine and correct for small biases in the reported spacecraft attitude. Geo-location was validated against coastline positions and the known positions of corner reflectors. Residual errors at the time of the beta release are about 350 m. Intra-swath biases in the high-resolution backscatter images are reduced to less than 0.3 dB for all polarizations. Radiometric cross-calibration with Aquarius was performed using areas of the Amazon rain forest. Cross-calibration was also examined using ocean data from the low-resolution processor and comparing with the Aquarius wind model function. Using all a-priori calibration constants provided good results with co-polarized measurements matching to better than 1 dB, and cross-polarized measurements matching to about 1 dB in the beta release. During the

  12. Ion Trap Quantum Computers: Performance Limits and Experimental Progress

    Science.gov (United States)

    Hughes, Richard

    1998-03-01

    In a quantum computer information would be represented by the quantum mechanical states of suitable atomic-scale systems. (A single bit of information represented by a two-level quantum system is known as a qubit.) This notion leads to the possibility of computing with quantum mechanical superpositions of numbers ("quantum parallelism"), which for certain problems would make quantum computation very much more efficient than classical computation. The possibility of rapidly factoring the large integers used in public-key cryptography is an important example. (Public key cryptosystems derive their security from the difficulty of factoring, and similar problems, with conventional computers.) Quantum computational hardware development is in its infancy, but an experimental study of quantum computation with laser-cooled trapped calcium ions that is under way at Los Alamos will be described. One of the principal obstacles to practical quantum computation is the inevitable loss of quantum coherence of the complex quantum states involved. The results of a theoretical analysis showing that quantum factoring of small integers should be possible with trapped ions will be presented. The prospects for larger-scale computations will be discussed.

  13. Scientific and high-performance computing at FAIR

    Directory of Open Access Journals (Sweden)

    Kisel Ivan

    2015-01-01

    Full Text Available Future FAIR experiments have to deal with very high input rates and large track multiplicities, and must perform full event reconstruction and selection online on a large dedicated computer farm equipped with heterogeneous many-core CPU/GPU compute nodes. Developing efficient and fast algorithms optimized for parallel computation is a challenge for the groups of experts dealing with HPC. Here we present and discuss the status and perspectives of the data reconstruction and physics analysis software of one of the future FAIR experiments, namely the CBM experiment.

  14. Big Data and High-Performance Computing in Global Seismology

    Science.gov (United States)

    Bozdag, Ebru; Lefebvre, Matthieu; Lei, Wenjie; Peter, Daniel; Smith, James; Komatitsch, Dimitri; Tromp, Jeroen

    2014-05-01

    Much of our knowledge of Earth's interior is based on seismic observations and measurements. Adjoint methods provide an efficient way of incorporating 3D full wave propagation in iterative seismic inversions to enhance tomographic images and thus our understanding of processes taking place inside the Earth. Our aim is to take adjoint tomography, which has been successfully applied to regional and continental scale problems, further to image the entire planet. This is one of the extreme imaging challenges in seismology, mainly due to the intense computational requirements and vast amount of high-quality seismic data that can potentially be assimilated. We have started low-resolution inversions (T > 30 s and T > 60 s for body and surface waves, respectively) with a limited data set (253 carefully selected earthquakes and seismic data from permanent and temporary networks) on Oak Ridge National Laboratory's Cray XK7 "Titan" system. Recent improvements in our 3D global wave propagation solvers, such as a GPU version of the SPECFEM3D_GLOBE package, will enable us perform higher-resolution (T > 9 s) and longer duration (~180 m) simulations to take the advantage of high-frequency body waves and major-arc surface waves, thereby improving imbalanced ray coverage as a result of the uneven global distribution of sources and receivers. Our ultimate goal is to use all earthquakes in the global CMT catalogue within the magnitude range of our interest and data from all available seismic networks. To take the full advantage of computational resources, we need a solid framework to manage big data sets during numerical simulations, pre-processing (i.e., data requests and quality checks, processing data, window selection, etc.) and post-processing (i.e., pre-conditioning and smoothing kernels, etc.). We address the bottlenecks in our global seismic workflow, which are mainly coming from heavy I/O traffic during simulations and the pre- and post-processing stages, by defining new data

  15. A performance model for the communication in fast multipole methods on high-performance computing platforms

    KAUST Repository

    Ibeid, Huda

    2016-03-04

    Exascale systems are predicted to have approximately 1 billion cores, assuming gigahertz cores. Limitations on affordable network topologies for distributed memory systems of such massive scale bring new challenges to the currently dominant parallel programing model. Currently, there are many efforts to evaluate the hardware and software bottlenecks of exascale designs. It is therefore of interest to model application performance and to understand what changes need to be made to ensure extrapolated scalability. The fast multipole method (FMM) was originally developed for accelerating N-body problems in astrophysics and molecular dynamics but has recently been extended to a wider range of problems. Its high arithmetic intensity combined with its linear complexity and asynchronous communication patterns make it a promising algorithm for exascale systems. In this paper, we discuss the challenges for FMM on current parallel computers and future exascale architectures, with a focus on internode communication. We focus on the communication part only; the efficiency of the computational kernels are beyond the scope of the present study. We develop a performance model that considers the communication patterns of the FMM and observe a good match between our model and the actual communication time on four high-performance computing (HPC) systems, when latency, bandwidth, network topology, and multicore penalties are all taken into account. To our knowledge, this is the first formal characterization of internode communication in FMM that validates the model against actual measurements of communication time. The ultimate communication model is predictive in an absolute sense; however, on complex systems, this objective is often out of reach or of a difficulty out of proportion to its benefit when there exists a simpler model that is inexpensive and sufficient to guide coding decisions leading to improved scaling. The current model provides such guidance.
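
    The paper's actual communication model is not reproduced in this record. As a minimal sketch of the kind of latency-bandwidth ("postal" or alpha-beta) accounting that such internode communication models build on, the following Python fragment sums per-level neighbor exchanges plus a log(P) global term; every parameter value and payload size is an assumption, not a figure from the paper.

```python
import math

alpha = 1e-6          # per-message latency (s), assumed
beta = 1e-9           # per-byte transfer time (s), i.e. 1/bandwidth, assumed
P = 1024              # number of ranks
levels = 6            # depth of the local tree

def comm_time(n_messages, bytes_per_message):
    """Time to send n_messages of a given size under the postal model."""
    return n_messages * (alpha + beta * bytes_per_message)

# Near-field neighbor exchanges at each level, payloads shrinking toward the root,
# plus a log(P) global exchange for the top of the tree (all assumptions).
total = 0.0
for level in range(levels):
    neighbors = 26                                  # 3-D neighbor count (assumption)
    msg_bytes = 8 * 4 ** (levels - level)           # multipole payload size (assumption)
    total += comm_time(neighbors, msg_bytes)
total += math.log2(P) * comm_time(1, 8 * 4 ** levels)

print(f"modelled communication time per rank: {total:.3e} s")
```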

  16. High performance computing network for cloud environment using simulators

    CERN Document Server

    Singh, N Ajith

    2012-01-01

    Cloud computing is the next generation of computing. Adopting cloud computing is like signing up for a new kind of website: the GUI that controls the cloud directly controls the hardware resources and your application. The difficult part of cloud computing is deploying it in a real environment. It is hard to know the exact cost and resource requirements until the service is actually bought, and to know whether it will support an existing application from a traditional data center or whether a new application must be designed for the cloud environment. Security, latency and fault tolerance are parameters that need careful attention before deployment; normally they become known only after deploying, but with simulation the experiments can be done beforehand. Through simulation we can understand the real cloud computing environment and, after successful results, start deploying the application in that environment. By using the simulator it...

  17. The Impact of Computers on Student Performance and Teacher Behavior.

    Science.gov (United States)

    Pisapia, John R.; Knutson, Kim; Coukos, Eleni D.

    This paper reports on a 3-year computer initiative implemented by a school district in a metropolitan area. The initiative began in 1996 and continued through 1998. The school district of 44,000 students funded 5 computers and an ink jet color printer in each elementary classroom in 34 schools. The purpose of this study was to determine the impact…

  18. Performability evaluation of the SIFT computer. [Software-Implemented Fault Tolerance computer onboard commercial aircraft during transoceanic flight

    Science.gov (United States)

    Meyer, J. F.; Furchtgott, D. G.; Wu, L. T.

    1980-01-01

    The paper deals with the models, techniques, and evaluation methods that were successfully used to test the performance of the SIFT degradable computing system. The performance of the computer plus its air transport mission environment is modeled as a random variable, taking values in a set of 'accomplishment level'. The levels are defined in terms of four attributes of total system (computer plus environment) behavior, namely safety, no change in mission profile, no operational penalties, and no economic penalties. The base model of the total system is a stochastic process, whose states describe the internal structure of SIFT and the relevant conditions of its computational environment. Base model state trajectories are related to accomplishment levels via a special function, and solution methods are then used to determine the performability of the total system for various parameters of the computer and environment.

  19. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL

    2011-08-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaboration with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are detailed in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) will delineate the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and

  20. A Heterogeneous High-Performance System for Computational and Computer Science

    Science.gov (United States)

    2016-11-15

    This DoD HBC/MI Equipment/Instrumentation grant was awarded in October 2014 for the purchase... Computing (HPC) course taught in the department of computer science, so as to attract more graduate students from many disciplines where their research...

  1. Computer Anxiety and Performance: An Application of a Change Model in a Pedagogical Setting.

    Science.gov (United States)

    Desai, Mayur S.

    2001-01-01

    Discusses the adverse effects of computer anxiety on student performance and reports an application of a change management process to a class on computers in business that attempted to reduce computer anxiety and improve learning and performance through a pedagogical intervention. Considers implications of results that showed lower anxiety but not…

  2. A probabilistic methodology for radar cross section prediction in conceptual aircraft design

    Science.gov (United States)

    Hines, Nathan Robert

    System effectiveness has increasingly become the prime metric for the evaluation of military aircraft. As such, it is the decision maker's/designer's goal to maximize system effectiveness. Industry and government research documents indicate that all future military aircraft will incorporate signature reduction as an attempt to improve system effectiveness and reduce the cost of attrition. Today's operating environments demand low observable aircraft which are able to reliably take out valuable, time critical targets. Thus it is desirable to be able to design vehicles that are balanced for increased effectiveness. Previous studies have shown that shaping of the vehicle is one of the most important contributors to radar cross section, a measure of radar signature, and must be considered from the very beginning of the design process. Radar cross section estimation should be incorporated into conceptual design to develop more capable systems. This research strives to meet these needs by developing a conceptual design tool that predicts radar cross section for parametric geometries. This tool predicts the absolute radar cross section of the vehicle as well as the impact of geometry changes, allowing for the simultaneous tradeoff of the aerodynamic, performance, and cost characteristics of the vehicle with the radar cross section. Furthermore, this tool can be linked to a campaign theater analysis code to demonstrate the changes in system and system of system effectiveness due to changes in aircraft geometry. A general methodology was developed and implemented and sample computer codes applied to prototype the proposed process. Studies utilizing this radar cross section tool were subsequently performed to demonstrate the capabilities of this method and show the impact that various inputs have on the outputs of these models. The F/A-18 aircraft configuration was chosen as a case study vehicle to perform a design space exercise and to investigate the relative impact of
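
    The dissertation's parametric RCS prediction tool is not reproduced here. As a reminder of the kind of high-frequency scaling such tools capture, the sketch below evaluates two textbook radar-cross-section formulas (optical-region sphere and normal-incidence flat plate); the numbers are illustrative only and are not results from the study.

```python
import math

def rcs_sphere(radius_m):
    """Optical-region RCS of a perfectly conducting sphere: sigma = pi * r^2."""
    return math.pi * radius_m ** 2

def rcs_flat_plate(area_m2, wavelength_m):
    """Peak (normal-incidence) RCS of a flat plate: sigma = 4*pi*A^2 / lambda^2."""
    return 4 * math.pi * area_m2 ** 2 / wavelength_m ** 2

def to_dbsm(sigma_m2):
    return 10 * math.log10(sigma_m2)

wavelength = 0.03  # 10 GHz (X-band), illustrative
print(f"0.5 m radius sphere: {to_dbsm(rcs_sphere(0.5)):6.1f} dBsm")
print(f"1 m^2 flat plate:    {to_dbsm(rcs_flat_plate(1.0, wavelength)):6.1f} dBsm")
```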

  3. Concealed target detection using augmented reality with SIRE radar

    Science.gov (United States)

    Saponaro, Philip; Kambhamettu, Chandra; Ranney, Kenneth; Sullivan, Anders

    2013-05-01

    The Synchronous Impulse Reconstruction (SIRE) forward-looking radar, developed by the U.S. Army Research Laboratory (ARL), can detect concealed targets using ultra-wideband synthetic aperture technology. The SIRE radar has been mounted on a Ford Expedition and combined with other sensors, including a pan/tilt/zoom camera, to test its capabilities of concealed target detection in a realistic environment. Augmented Reality (AR) can be used to combine the SIRE radar image with the live camera stream into one view, which provides the user with information that is quicker to assess and easier to understand than each separated. In this paper we present an AR system which utilizes a global positioning system (GPS) and inertial measurement unit (IMU) to overlay a SIRE radar image onto a live video stream. We describe a method for transforming 3D world points in the UTM coordinate system onto the video stream by calibrating for the intrinsic parameters of the camera. This calibration is performed offline to save computation time and achieve real time performance. Since the intrinsic parameters are affected by the zoom of the camera, we calibrate at eleven different zooms and interpolate. We show the results of a real time transformation of the SAR imagery onto the video stream. Finally, we quantify both the 2D error and 3D residue associated with our transformation and show that the amount of error is reasonable for our application.
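
    The overlay step described in the abstract amounts to projecting UTM world points through a calibrated pinhole camera whose intrinsics depend on zoom. The sketch below is not the authors' implementation; the pose, intrinsics, and zoom table are assumptions chosen only to show the projection and the zoom interpolation idea.

```python
import numpy as np

def project_utm_to_pixel(point_utm, cam_pos_utm, R_cam_from_world, fx, fy, cx, cy):
    """Project a 3D UTM point into the image of a calibrated pinhole camera.

    R_cam_from_world rotates world (UTM) axes into camera axes (e.g. from GPS/IMU);
    fx, fy, cx, cy are zoom-dependent intrinsics. All values here are assumptions.
    """
    p_cam = R_cam_from_world @ (np.asarray(point_utm) - np.asarray(cam_pos_utm))
    if p_cam[2] <= 0:
        return None                       # point is behind the camera
    u = fx * p_cam[0] / p_cam[2] + cx
    v = fy * p_cam[1] / p_cam[2] + cy
    return u, v

# Intrinsics calibrated at discrete zoom settings and interpolated in between.
zoom_levels = np.array([1.0, 2.0, 4.0])
fx_calibrated = np.array([800.0, 1600.0, 3200.0])     # illustrative focal lengths (px)
fx_at_zoom = np.interp(3.0, zoom_levels, fx_calibrated)

pixel = project_utm_to_pixel(
    point_utm=[500001.0, 4200000.5, 52.0],
    cam_pos_utm=[500000.0, 4200000.0, 2.0],
    R_cam_from_world=np.eye(3),
    fx=fx_at_zoom, fy=fx_at_zoom, cx=640.0, cy=360.0)
print(pixel)
```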

  4. Automotive Radar Sensors in Silicon Technologies

    CERN Document Server

    Jain, Vipul

    2013-01-01

    This book presents architectures and design techniques for mm-wave automotive radar transceivers. Several fully-integrated transceivers and receivers operating at 22-29 GHz and 77-81 GHz are demonstrated in both CMOS and SiGe BiCMOS technologies. Excellent performance is achieved indicating the suitability of silicon technologies for automotive radar sensors.  This book bridges an existing gap between information available on dependable system/architecture design and circuit design.  It provides the background of the field and detailed description of recent research and development of silicon-based radar sensors.  System-level requirements and circuit topologies for radar transceivers are described in detail. Holistic approaches towards designing radar sensors are validated with several examples of highly-integrated radar ICs in silicon technologies. Circuit techniques to design millimeter-wave circuits in silicon technologies are discussed in depth.  Describes concepts and fundamentals of automotive rada...

  5. Performance Management of High Performance Computing for Medical Image Processing in Amazon Web Services.

    Science.gov (United States)

    Bao, Shunxing; Damon, Stephen M; Landman, Bennett A; Gokhale, Aniruddha

    2016-02-27

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
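
    The paper's theoretical cost/benefit formulae are not reproduced in this record. The sketch below illustrates, with assumed job counts, node prices, and overheads, the kind of local-serial-versus-cloud break-even comparison the abstract describes; none of the numbers or function names come from the paper.

```python
# Hypothetical local-vs-cloud break-even sketch (not the paper's exact formulae).
def local_time_hours(n_jobs, hours_per_job):
    return n_jobs * hours_per_job                 # serial execution on one workstation

def cloud_time_hours(n_jobs, hours_per_job, n_nodes, overhead_hours):
    return (n_jobs * hours_per_job) / n_nodes + overhead_hours

def cloud_cost_usd(n_jobs, hours_per_job, n_nodes, price_per_node_hour, overhead_hours):
    wall = cloud_time_hours(n_jobs, hours_per_job, n_nodes, overhead_hours)
    return n_nodes * wall * price_per_node_hour

n_jobs, hours_per_job = 500, 0.5                  # e.g. a pipeline over 500 subjects
for nodes in (10, 50, 100):
    t = cloud_time_hours(n_jobs, hours_per_job, nodes, overhead_hours=1.0)
    c = cloud_cost_usd(n_jobs, hours_per_job, nodes, price_per_node_hour=0.20,
                       overhead_hours=1.0)
    print(f"{nodes:3d} nodes: {t:6.1f} h wall time, ${c:7.2f}  "
          f"(local serial: {local_time_hours(n_jobs, hours_per_job):.1f} h)")
```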

  6. Performance management of high performance computing for medical image processing in Amazon Web Services

    Science.gov (United States)

    Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha

    2016-03-01

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical- Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for- use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.

  7. Future of phased array radar systems

    Science.gov (United States)

    Bassyouni, Ahmed

    2011-12-01

    This paper spotlights the future progress of phased array radar systems, presenting two innovative examples of directions for development. The first example starts with the classic radar range equation to develop the topology of what is called a "Mobile Adaptive Digital Array Radar" (MADAR) system. The second example discusses the possibility of achieving what is called an "Entangled Photonic Radar" (EPR) system. The EPR quantum range equation is derived and compared to the classic one to assess relative performance. Block diagrams and analysis for both proposed systems are presented.
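
    For reference, the classic (monostatic) radar range equation that the first example starts from can be evaluated as in the sketch below; the parameter values are illustrative only and are not taken from the paper.

```python
import math

def max_detection_range(p_t, gain, wavelength, sigma, p_min):
    """Classic monostatic radar range equation solved for maximum range:
    R_max = [ P_t * G^2 * lambda^2 * sigma / ((4*pi)^3 * P_min) ]^(1/4)
    """
    return (p_t * gain ** 2 * wavelength ** 2 * sigma /
            ((4 * math.pi) ** 3 * p_min)) ** 0.25

# Illustrative numbers only.
r = max_detection_range(p_t=10e3,               # 10 kW peak power
                        gain=10 ** (30 / 10),   # 30 dB antenna gain
                        wavelength=0.03,        # X-band
                        sigma=1.0,              # 1 m^2 target
                        p_min=1e-13)            # minimum detectable signal (W)
print(f"maximum detection range: {r / 1e3:.1f} km")
```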

  8. FY 1992 Blue Book: Grand Challenges: High Performance Computing and Communications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  9. Fostering complex learning-task performance through scripting student use of computer supported representational tools

    NARCIS (Netherlands)

    Slof, Bert; Erkens, Gijs; Kirschner, Paul A.; Janssen, Jeroen; Phielix, Chris

    2010-01-01

    Slof, B., Erkens, G., Kirschner, P. A., Janssen, J., & Phielix, C. (2010). Fostering complex learning-task performance through scripting student use of computer supported representational tools. Computers & Education, 55(4), 1707-1720.

  10. FY 1993 Blue Book: Grand Challenges 1993: High Performance Computing and Communications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  11. Performance Evaluation of a Mobile Wireless Computational Grid ...

    African Journals Online (AJOL)

    PROF. OLIVER OSUAGWA

    2015-12-01

    Department of Mathematics and Computer Science, Benue State University, Makurdi, Nigeria. Abstract: This work developed and simulated a mathematical model for a ... formulated as a non-cooperative game among...

  12. Distributed metadata in a high performance computing environment

    Energy Technology Data Exchange (ETDEWEB)

    Bent, John M.; Faibish, Sorin; Zhang, Zhenhua; Liu, Xuezhao; Tang, Haiying

    2017-07-11

    A computer-executable method, system, and computer program product for managing meta-data in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store, the computer-executable method, system, and computer program product comprising receiving a request for meta-data associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the meta-data is associated with a key-value, determining which of the one or more burst buffers stores the requested metadata, and upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
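
    The patented design itself is not reproduced here. The toy sketch below only illustrates the general idea in the abstract: a metadata key is mapped to one of several burst buffers, each holding a shard of a key-value store, and lookups are routed to the owning shard. The sharding function, class name, and API are assumptions.

```python
import hashlib

class BurstBufferMetadataStore:
    """Toy sharded key-value metadata store; not the patented design."""

    def __init__(self, n_buffers):
        self.shards = [dict() for _ in range(n_buffers)]

    def _owner(self, key):
        digest = hashlib.sha1(key.encode()).hexdigest()
        return int(digest, 16) % len(self.shards)     # which burst buffer owns this key

    def put(self, key, value):
        self.shards[self._owner(key)][key] = value

    def get(self, key):
        owner = self._owner(key)                      # determine the owning burst buffer
        return owner, self.shards[owner].get(key)

store = BurstBufferMetadataStore(n_buffers=4)
store.put("/scratch/run42/block-0007", {"offset": 7 * 2**20, "length": 2**20})
print(store.get("/scratch/run42/block-0007"))
```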

  13. Role of high-performance computing in science education

    Energy Technology Data Exchange (ETDEWEB)

    Sabelli, N.H. (National Center for Supercomputing Applications, Champaign, IL (US))

    1991-01-01

    This article is a report on the continuing activities of a group committed to enhancing the development and use of computational science techniques in education. Interested readers are encouraged to contact members of the Steering Committee or the project coordinator.

  14. Benchmark Numerical Toolkits for High Performance Computing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Computational codes in physics and engineering often use implicit solution algorithms that require linear algebra tools such as Ax=b solvers, eigenvalue,...

  15. High-performance computing at NERSC: Present and future

    Energy Technology Data Exchange (ETDEWEB)

    Koniges, A.E.

    1995-07-01

    The author describes the new T3D parallel computer at NERSC. The adaptive mesh ICF3D code is one of the current applications being ported and developed for use on the T3D. It has been stressed in other papers in these proceedings that the development environment and tools available on the parallel computer are similar to any planned for the future, including networks of workstations.

  16. Application of Computer Graphics to Performance Studies of Missile Warheads

    Directory of Open Access Journals (Sweden)

    K. Rama Rao

    1991-01-01

    Full Text Available Intercept geometry of target aircraft and missiles plays an important role in determining the effectiveness of the warhead. Factors such as fragment spatial distribution profile, damage capabilities, and target and missile characteristics have been considered and visualised through computer graphics, and optimum intercept angles have been arrived at. Computer graphics has proved to be an important tool to enhance perception and conceptual design capabilities in the design environment.

  17. High Performance Computing Innovation Service Portal Study (HPC-ISP)

    Science.gov (United States)

    2009-04-01

    based electronic commerce interface for the goods and services available through the brokerage service. This infrastructure will also support the... electronic commerce backend functionality for third parties that want to sell custom computing services. • Tailored Industry Portals are web portals for...broker shown in Figure 8 is essentially a web server that provides remote access to computing and software resources through an electronic commerce

  18. Performance of parallel computation using CUDA for solving the one-dimensional elasticity equations

    Science.gov (United States)

    Darmawan, J. B. B.; Mungkasi, S.

    2017-01-01

    In this paper, we investigate the performance of parallel computation in solving the one-dimensional elasticity equations. Elasticity equations are widely used in engineering science, and solving them quickly and efficiently is desirable. Therefore, we propose the use of parallel computation. Our parallel computation uses NVIDIA's CUDA. Our results show that parallel computation using CUDA offers a substantial advantage and is powerful when the computation is large in scale.
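
    The paper's CUDA implementation is not reproduced in this record. As a minimal serial sketch, assuming a standard velocity-stress finite-difference form of the 1-D elasticity (elastic wave) equations, the fragment below shows the per-point update that a CUDA kernel would apply to all grid points in parallel; discretization choices and parameter values are assumptions.

```python
import numpy as np

# Velocity-stress form of the 1-D elastic wave equations on a staggered grid.
nx, nt = 400, 800
dx = 1.0
rho, E = 1.0, 1.0                 # density and elastic modulus (assumed units)
c = np.sqrt(E / rho)              # wave speed
dt = 0.5 * dx / c                 # CFL-stable time step

v = np.zeros(nx)                  # particle velocity
s = np.zeros(nx - 1)              # stress on cell interfaces
v[nx // 2] = 1.0                  # initial velocity pulse in the middle

for _ in range(nt):
    s += dt * E * np.diff(v) / dx             # stress update from velocity gradient
    v[1:-1] += dt * np.diff(s) / (rho * dx)   # velocity update from stress gradient

print("pulse has split and propagated; max |v| =", float(np.abs(v).max()))
```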

  19. An Embedded System for applying High Performance Computing in Educational Learning Activity

    OpenAIRE

    Irene Erlyn Wina Rachmawan; Nurul Fahmi; Edi Wahyu Widodo; Samsul Huda; M. Unggul Pamenang; M. Choirur Roziqin; Andri Permana W.; Stritusta Sukaridhoto; Dadet Pramadihanto

    2016-01-01

    HPC (High Performance Computing) has become more popular in the last few years. With the benefit of high computational power, HPC has an impact on industry, scientific research and educational activities. Implementing HPC in a university curriculum can consume a lot of resources, because well-known HPC systems are built from personal computers or servers; using PCs as the practical modules requires considerable resources and space. This paper presents an innovative high performance computing c...

  20. Reducing power consumption while performing collective operations on a plurality of compute nodes

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Peters, Amanda E.; Ratterman, Joseph D.; Smith, Brian E.

    2011-10-18

    Methods, apparatus, and products are disclosed for reducing power consumption while performing collective operations on a plurality of compute nodes that include: receiving, by each compute node, instructions to perform a type of collective operation; selecting, by each compute node from a plurality of collective operations for the collective operation type, a particular collective operation in dependence upon power consumption characteristics for each of the plurality of collective operations; and executing, by each compute node, the selected collective operation.

  1. Fractal characteristics for binary noise radar waveform

    Science.gov (United States)

    Li, Bing C.

    2016-05-01

    Noise radars have many advantages over conventional radars and have received great attention recently. The performance of a noise radar is determined by its waveforms. Investigating characteristics of noise radar waveforms has significant value for evaluating noise radar performance. In this paper, we use binomial distribution theory to analyze general characteristics of binary phase coded (BPC) noise waveforms. Focusing on the aperiodic autocorrelation function, we demonstrate that the probability distributions of sidelobes for a BPC noise waveform depend on the distances of these sidelobes to the mainlobe. The closer a sidelobe is to the mainlobe, the higher the probability for this sidelobe to be a maximum sidelobe. We also develop a Monte Carlo framework to explore the characteristics that are difficult to investigate analytically. Through Monte Carlo experiments, we reveal the fractal relationship between the code length and the maximum sidelobe value for BPC waveforms, and propose using fractal dimension to measure noise waveform performance.
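
    The paper's analysis and fractal-dimension estimator are not reproduced here; the sketch below is only a generic Monte Carlo experiment of the kind described, estimating the mean peak aperiodic-autocorrelation sidelobe of random binary phase codes as a function of code length. Trial counts and code lengths are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def max_sidelobe(code):
    """Peak aperiodic autocorrelation sidelobe magnitude of a +/-1 code."""
    acf = np.correlate(code, code, mode="full")
    n = len(code)
    sidelobes = np.delete(acf, n - 1)        # drop the mainlobe at zero lag
    return np.abs(sidelobes).max()

# Monte Carlo over random binary phase coded (BPC) waveforms: for each code length,
# estimate the mean peak sidelobe level relative to the mainlobe (which equals N).
for n in (64, 256, 1024):
    trials = [max_sidelobe(rng.choice([-1.0, 1.0], size=n)) / n for _ in range(200)]
    print(f"N={n:5d}: mean peak sidelobe = {np.mean(trials):.4f} of mainlobe")
```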

  2. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  3. Performance analysis of three dimensional integral equation computations on a massively parallel computer. M.S. Thesis

    Science.gov (United States)

    Logan, Terry G.

    1994-01-01

    The purpose of this study is to investigate the performance of integral equation computations using the numerical source field-panel method in a massively parallel processing (MPP) environment. A comparative study of the computational performance of the MPP CM-5 computer and a conventional Cray-YMP supercomputer for a three-dimensional flow problem is made. A serial FORTRAN code is converted into a parallel CM-FORTRAN code. Some performance results are obtained on the CM-5 with 32, 62, and 128 nodes, along with those on a Cray-YMP with a single processor. The comparison of the performance indicates that the parallel CM-FORTRAN code nearly matches or outperforms the equivalent serial FORTRAN code for some cases.

  4. High-Performance Computer Modeling of the Cosmos-Iridium Collision

    Science.gov (United States)

    Olivier, S.

    This paper describes the application of a new, integrated modeling and simulation framework, encompassing the space situational awareness (SSA) enterprise, to the recent Cosmos-Iridium collision. This framework is based on a flexible, scalable architecture to enable efficient simulation of the current SSA enterprise, and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel computer systems available, for example, at Lawrence Livermore National Laboratory. We will describe the application of this framework to the recent collision of the Cosmos and Iridium satellites, including (1) detailed hydrodynamic modeling of the satellite collision and resulting debris generation, (2) orbital propagation of the simulated debris and analysis of the increased risk to other satellites (3) calculation of the radar and optical signatures of the simulated debris and modeling of debris detection with space surveillance radar and optical systems (4) determination of simulated debris orbits from modeled space surveillance observations and analysis of the resulting orbital accuracy, (5) comparison of these modeling and simulation results with Space Surveillance Network observations. We will also discuss the use of this integrated modeling and simulation framework to analyze the risks and consequences of future satellite collisions and to assess strategies for mitigating or avoiding future incidents, including the addition of new sensor systems, used in conjunction with the Space Surveillance Network, for improving space situational awareness.

  5. Application of High Performance Computing for Simulations of N-Dodecane Jet Spray with Evaporation

    Science.gov (United States)

    2016-11-01

    ARL-TR-7873 (November 2016), US Army Research Laboratory technical report: Application of High Performance Computing for Simulations of N-Dodecane Jet Spray with Evaporation.

  6. Vertical Pointing Weather Radar for Built-up Urban Areas

    DEFF Research Database (Denmark)

    Rasmussen, Michael R.; Thorndahl, Søren; Schaarup-Jensen, Kjeld

    2008-01-01

    A cost-effective vertical pointing X-band weather radar (VPR) has been tested for measurement of precipitation in urban areas. Stationary tests indicate that the VPR performs well compared to horizontal weather radars, such as the local area weather radars (LAWR). The test illustrated...

  7. Achieving High Performance Distributed System: Using Grid, Cluster and Cloud Computing

    Directory of Open Access Journals (Sweden)

    Sunil Kr Singh

    2015-02-01

    Full Text Available To increase the efficiency of any task, we require a system that provides high performance along with flexibility and cost efficiency for users. Distributed computing, as we are all aware, has become very popular over the past decade. Distributed computing has three major types, namely cluster, grid and cloud. In order to develop a high performance distributed system, we need to utilize all three of these types of computing. In this paper, we first introduce all three types of distributed computing. Subsequently, examining them, we explore trends in computing and in green, sustainable computing to enhance the performance of a distributed system. Finally, presenting the future scope, we conclude the paper by suggesting a path to achieve a green high performance distributed system using cluster, grid and cloud computing.

  8. Temperate Ice Depth-Sounding Radar

    Science.gov (United States)

    Jara-Olivares, V. A.; Player, K.; Rodriguez-Morales, F.; Gogineni, P.

    2008-12-01

    . It also digitizes the output signal from the receiver and stores the data in binary format using a portable computer. The RF-section consists of a high- power transmitter and a low-noise receiver with digitally controlled variable gain. The antenna is time-shared between the transmitter and receiver by means of a transmit/receive (T/R) switch. In regards to the antenna, we have made a survey study of various electrically small antennas (ESA) to choose the most suitable radiating structure for this application. Among the different alternatives that provide a good trade-off between electrical performance and small size, we have adopted an ESA dipole configuration for airborne platforms and a half-wavelength radiator for the surface-based version. The airborne antenna solution is given after studying the geometry of the aerial vehicle and its fuselage contribution to the antenna radiation pattern. Dipoles are made of 11.6 mm diameter cables (AWG 0000) or printed patches embedded into the aircraft fuselage, wings, or both. The system is currently being integrated and tested. TIDSoR is expected to be deployed during the spring 2008 either in Alaska or Greenland for surface based observations. In this paper, we will discuss our design considerations and current progress towards the development of this radar system. [1] Center for Remote Sensing of Ice Sheets (Cresis), Sept 2008, [Online]. Available: http://www.cresis.ku.edu

  9. Understanding and Improving the Performance Consistency of Distributed Computing Systems

    NARCIS (Netherlands)

    Yigitbasi, M.N.

    2012-01-01

    With the increasing adoption of distributed systems in both academia and industry, and with the increasing computational and storage requirements of distributed applications, users inevitably demand more from these systems. Moreover, users also depend on these systems for latency and throughput sens

  10. Performance of Cloud Computing Centers with Multiple Priority Classes

    NARCIS (Netherlands)

    Ellens, W.; Zivkovic, M.; Akkerboom, J.D.; Litjens, R.; Berg, J.L. van den

    2012-01-01

    In this paper we consider the general problem of resource provisioning within cloud computing. We analyze the problem of how to allocate resources to different clients such that the service level agreements (SLAs) for all of these clients are met. A model with multiple service request classes genera

  11. Understanding and Improving the Performance Consistency of Distributed Computing Systems

    NARCIS (Netherlands)

    Yigitbasi, M.N.

    2012-01-01

    With the increasing adoption of distributed systems in both academia and industry, and with the increasing computational and storage requirements of distributed applications, users inevitably demand more from these systems. Moreover, users also depend on these systems for latency and throughput sens

  12. Performance of Cloud Computing Centers with Multiple Priority Classes

    NARCIS (Netherlands)

    Ellens, W.; Zivkovic, Miroslav; Akkerboom, J.; Litjens, R.; van den Berg, Hans Leo

    In this paper we consider the general problem of resource provisioning within cloud computing. We analyze the problem of how to allocate resources to different clients such that the service level agreements (SLAs) for all of these clients are met. A model with multiple service request classes

  13. Radar and ARPA manual

    CERN Document Server

    Bole, A G

    2013-01-01

    Radar and ARPA Manual focuses on the theoretical and practical aspects of electronic navigation. The manual first discusses basic radar principles, including principles of range and bearing measurements and picture orientation and presentation. The text then looks at the operational principles of radar systems. Function of units; aerial, receiver, and display principles; transmitter principles; and sitting of units on board ships are discussed. The book also describes target detection, Automatic Radar Plotting Aids (ARPA), and operational controls of radar systems, and then discusses radar plo

  14. Speeding up ecological and evolutionary computations in R; essentials of high performance computing for biologists.

    Science.gov (United States)

    Visser, Marco D; McMahon, Sean M; Merow, Cory; Dixon, Philip M; Record, Sydne; Jongejans, Eelke

    2015-03-01

    Computation has become a critical component of research in biology. A risk has emerged that computational and programming challenges may limit research scope, depth, and quality. We review various solutions to common computational efficiency problems in ecological and evolutionary research. Our review pulls together material that is currently scattered across many sources and emphasizes those techniques that are especially effective for typical ecological and environmental problems. We demonstrate how straightforward it can be to write efficient code and implement techniques such as profiling or parallel computing. We supply a newly developed R package (aprof) that helps to identify computational bottlenecks in R code and determine whether optimization can be effective. Our review is complemented by a practical set of examples and detailed Supporting Information material (S1-S3 Texts) that demonstrate large improvements in computational speed (ranging from 10.5 times to 14,000 times faster). By improving computational efficiency, biologists can feasibly solve more complex tasks, ask more ambitious questions, and include more sophisticated analyses in their research.

  15. Designing for Performance: A Cognitive Systems Engineering Approach to Modifying an AWACS Human Computer Interface

    Science.gov (United States)

    1993-03-01

    Radar dots are the same color for enemy and friendly. Coloring radar dots: cannot track who is who in a furball (often because of same-color radar dots) ... management. Proceedings of the 11th Biennial DoD Psychology Conference, Colorado Springs, CO. Lipshitz, R. (1989). Decision making as argument-driven...

  16. Prediction of the effects of soil and target properties on the antipersonnel landmine detection performance of ground-penetrating radar: A Colombian case study

    Science.gov (United States)

    Lopera, Olga; Milisavljevic, Nada

    2007-09-01

    The performance of ground-penetrating radar (GPR) is determined fundamentally by the soil electromagnetic (EM) properties and the target characteristics. In this paper, we predict the effects of such properties on the antipersonnel (AP) landmine detection performance of GPR in a Colombian scenario. Firstly, we use available soil geophysical information in existing pedotransfer models to calculate soil EM properties. The latter are included in a two-dimensional (2D), finite-difference time-domain (FDTD) modeling program in conjunction with the characteristics of AP landmines to calculate the buried target reflection. The approach is applied to two soils selected from Colombian mine-affected areas, and several local improvised explosive devices (IEDs) and AP landmines are modeled as targets. The signatures from such targets buried in the selected soils are predicted, considering different conditions. Finally, we show how GPR can contribute to detecting low- and non-metallic targets in these Colombian soils. Such a system could be quite adequate for complementing humanitarian landmine detection by metal detectors.
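
    The specific pedotransfer models used in the paper are not identified in this record. As a hedged illustration of the kind of relation involved, the sketch below applies the widely used Topp et al. (1980) empirical permittivity model and then computes the two-way GPR travel time to a shallowly buried target in a low-loss soil; water contents and burial depth are assumptions.

```python
import math

def topp_permittivity(theta):
    """Topp et al. (1980) relation between volumetric water content theta (m^3/m^3)
    and apparent relative permittivity; one commonly used pedotransfer model."""
    return 3.03 + 9.3 * theta + 146.0 * theta ** 2 - 76.7 * theta ** 3

C0 = 3e8  # speed of light in vacuum (m/s)

def two_way_travel_time(depth_m, theta):
    """Two-way GPR travel time to a target buried at depth_m in a low-loss soil."""
    v = C0 / math.sqrt(topp_permittivity(theta))
    return 2 * depth_m / v

for theta in (0.05, 0.15, 0.30):                  # dry to wet soil (assumed values)
    t = two_way_travel_time(0.10, theta)          # target buried at 10 cm (assumed)
    print(f"theta={theta:.2f}: eps_r={topp_permittivity(theta):5.2f}, "
          f"two-way time={t * 1e9:5.2f} ns")
```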

  17. Illustrating the future prediction of performance based on computer code, physical experiments, and critical performance parameter samples

    Energy Technology Data Exchange (ETDEWEB)

    Hamada, Michael S [Los Alamos National Laboratory; Higdon, David M [Los Alamos National Laboratory

    2009-01-01

    In this paper, we present a generic example to illustrate various points about making future predictions of population performance using a biased performance computer code, physical performance data, and critical performance parameter data sampled from the population at various times. We show how the actual performance data help to correct the biased computer code and the impact of uncertainty especially when the prediction is made far from where the available data are taken. We also demonstrate how a Bayesian approach allows both inferences about the unknown parameters and predictions to be made in a consistent framework.
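
    The paper's generic example and full Bayesian treatment are not reproduced here. The toy sketch below, under strong simplifying assumptions (a constant additive bias with a conjugate normal prior and known noise), shows the basic mechanics of using physical data to correct a biased computer code and then predicting away from the data; every model form and number is an assumption.

```python
import numpy as np

rng = np.random.default_rng(3)

def computer_code(x):
    return 2.0 * x + 1.0                 # biased simulator (toy)

def truth(x):
    return 2.0 * x + 1.5                 # reality = code + constant bias of 0.5 (toy)

x_obs = rng.uniform(0, 5, size=8)
y_obs = truth(x_obs) + rng.normal(0, 0.1, size=8)   # physical experiments

# Conjugate normal update for a constant bias: prior N(0, 1), noise sd 0.1.
residuals = y_obs - computer_code(x_obs)
prior_var, noise_var = 1.0, 0.1 ** 2
post_var = 1.0 / (1.0 / prior_var + len(residuals) / noise_var)
post_mean = post_var * residuals.sum() / noise_var

x_new = 10.0                              # predict far from the available data
pred = computer_code(x_new) + post_mean
pred_sd = np.sqrt(post_var + noise_var)
print(f"bias ~ N({post_mean:.3f}, {post_var:.4f}); prediction at x=10: "
      f"{pred:.2f} +/- {2 * pred_sd:.2f}")
```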

  18. Gender Differences in Attitudes toward Computers and Performance in the Accounting Information Systems Class

    Science.gov (United States)

    Lenard, Mary Jane; Wessels, Susan; Khanlarian, Cindi

    2010-01-01

    Using a model developed by Young (2000), this paper explores the relationship between performance in the Accounting Information Systems course, self-assessed computer skills, and attitudes toward computers. Results show that after taking the AIS course, students experience a change in perception about their use of computers. Females'…

  19. Effects of Computer-Based Test Administrations on Test Anxiety and Performance.

    Science.gov (United States)

    Shermis, Mark D.; Lombard, Danielle

    1998-01-01

    Examines the degree to which computer and test anxiety have a predictive role in performance across three computer-administered placement tests. Subjects (72 undergraduate students) were measured with the Computer Anxiety Rating Scale, the Test Anxiety Inventory, and the Myers-Briggs Type Indicator. Results suggest that much of what is considered…

  20. Home Computer Use and Academic Performance of Nine-Year-Olds

    Science.gov (United States)

    Casey, Alice; Layte, Richard; Lyons, Sean; Silles, Mary

    2012-01-01

    A recent rise in home computer ownership has seen a growing number of children using computers and accessing the internet from a younger age. This paper examines the link between children's home computing and their academic performance in the areas of reading and mathematics. Data from the nine-year-old cohort of the Growing Up in Ireland survey…

  1. An urban energy performance evaluation system and its computer implementation.

    Science.gov (United States)

    Wang, Lei; Yuan, Guan; Long, Ruyin; Chen, Hong

    2017-09-26

    To improve the urban environment and effectively reflect and promote urban energy performance, an urban energy performance evaluation system was constructed, thereby strengthening urban environmental management capabilities. From the perspectives of internalization and externalization, a framework of evaluation indicators and key factors that determine urban energy performance and explore the reasons for differences in performance was proposed according to established theory and previous studies. Using the improved stochastic frontier analysis method, an urban energy performance evaluation and factor analysis model was built that brings performance evaluation and factor analysis into the same stage for study. According to data obtained for the Chinese provincial capitals from 2004 to 2013, the coefficients of the evaluation indicators and key factors were calculated by the urban energy performance evaluation and factor analysis model. These coefficients were then used to compile the program file. The urban energy performance evaluation system developed in this study was designed in three parts: a database, a distributed component server, and a human-machine interface. Its functions were designed as login, addition, edit, input, calculation, analysis, comparison, inquiry, and export. On the basis of these contents, an urban energy performance evaluation system was developed using Microsoft Visual Studio .NET 2015. The system can effectively reflect the status of and any changes in urban energy performance. Beijing was considered as an example to conduct an empirical study, which further verified the applicability and convenience of this evaluation system. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Design of an FMCW radar baseband signal processing system for automotive application.

    Science.gov (United States)

    Lin, Jau-Jr; Li, Yuan-Ping; Hsu, Wei-Chiang; Lee, Ta-Sung

    2016-01-01

    For a typical FMCW automotive radar system, a new design of baseband signal processing architecture and algorithms is proposed to overcome the ghost targets and overlapping problems in the multi-target detection scenario. To satisfy the short measurement time constraint without increasing the RF front-end loading, a three-segment waveform with different slopes is utilized. By introducing a new pairing mechanism and a spatial filter design algorithm, the proposed detection architecture not only provides high accuracy and reliability, but also requires low pairing time and computational loading. This proposed baseband signal processing architecture and algorithms balance the performance and complexity, and are suitable to be implemented in a real automotive radar system. Field measurement results demonstrate that the proposed automotive radar signal processing system can perform well in a realistic application scenario.
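
    The paper's three-segment waveform, pairing mechanism, and spatial filter are not reproduced in this record. The sketch below only illustrates the basic idea that multi-slope FMCW processing builds on: beat frequencies measured on two chirp segments with different slopes form a small linear system from which range and radial velocity can be recovered jointly. Carrier frequency, slopes, and target parameters are assumptions.

```python
import numpy as np

C0 = 3e8
FC = 77e9                       # carrier frequency (assumed)
LAM = C0 / FC

def beat_frequency(R, v, slope):
    """Beat frequency of one chirp segment (one common sign convention):
    range term 2*R*slope/c plus Doppler term 2*v/lambda."""
    return 2.0 * R * slope / C0 + 2.0 * v / LAM

def solve_range_velocity(f1, f2, slope1, slope2):
    """Recover (R, v) from beat frequencies measured on two different slopes."""
    A = np.array([[2.0 * slope1 / C0, 2.0 / LAM],
                  [2.0 * slope2 / C0, 2.0 / LAM]])
    R, v = np.linalg.solve(A, np.array([f1, f2]))
    return R, v

# Simulated single target; the two slopes are illustrative segment parameters.
R_true, v_true = 45.0, -12.0                      # 45 m, closing at 12 m/s
s1, s2 = 150e6 / 1e-3, 75e6 / 1e-3                # 150 MHz/ms and 75 MHz/ms
f1 = beat_frequency(R_true, v_true, s1)
f2 = beat_frequency(R_true, v_true, s2)
print(solve_range_velocity(f1, f2, s1, s2))       # expect roughly (45.0, -12.0)
```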

  3. Sparse Representation Based Range-Doppler Processing for Integrated OFDM Radar-Communication Networks

    Directory of Open Access Journals (Sweden)

    Bo Kong

    2017-01-01

    Full Text Available In an integrated radar-communication network, multiuser access techniques with minimal performance degradation and without range-Doppler ambiguities are required, especially in a dense user environment. In this paper, a multiuser access scheme with a random subcarrier allocation mechanism is proposed for orthogonal frequency division multiplexing (OFDM) based integrated radar-communication networks. The modulation symbol-domain method combined with sparse representation (SR) for range-Doppler estimation is introduced, and a parallel reconstruction algorithm is employed. The radar target detection performance is improved with less spectrum occupation. Additionally, a Doppler frequency detector is exploited to decrease the computational complexity. Numerical simulations show that the proposed method outperforms the traditional modulation symbol-domain method under ideal and realistic nonideal scenarios.
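
    The paper's sparse-representation reconstruction and random subcarrier allocation are not reproduced here. The sketch below only shows the classical symbol-domain range-Doppler processing that the method builds on: divide received modulation symbols by the transmitted ones, then take an IFFT across subcarriers for range and an FFT across symbols for Doppler. Waveform parameters and the target are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

Nc, Ns = 256, 64                 # subcarriers, OFDM symbols (assumed)
df = 150e3                       # subcarrier spacing (assumed)
T = 1.25 / df                    # OFDM symbol duration incl. cyclic prefix (assumed)
C0, FC = 3e8, 24e9

tx = rng.choice([1, -1, 1j, -1j], size=(Nc, Ns))        # QPSK modulation symbols

# Single point target: delay phase across subcarriers, Doppler phase across symbols.
R, v = 60.0, 20.0
tau, fd = 2 * R / C0, 2 * v * FC / C0
k = np.arange(Nc)[:, None]
m = np.arange(Ns)[None, :]
rx = tx * np.exp(-2j * np.pi * k * df * tau) * np.exp(2j * np.pi * m * T * fd)

F = rx / tx                                             # symbol-domain division
rd_map = np.fft.fft(np.fft.ifft(F, axis=0), axis=1)     # range (IFFT) x Doppler (FFT)
r_idx, d_idx = np.unravel_index(np.argmax(np.abs(rd_map)), rd_map.shape)
print("range bin:", r_idx, "-> R ~", r_idx * C0 / (2 * Nc * df), "m")
print("Doppler bin:", d_idx, "-> v ~", d_idx / (Ns * T) * C0 / (2 * FC), "m/s")
```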

  4. Satellite radar altimetry water elevations performance over a 200 m wide river: Evaluation over the Garonne River

    Science.gov (United States)

    Biancamaria, S.; Frappart, F.; Leleu, A.-S.; Marieu, V.; Blumstein, D.; Desjonquères, Jean-Damien; Boy, F.; Sottolichio, A.; Valle-Levinson, A.

    2017-01-01

    For at least 20 years, nadir altimetry satellite missions have been successfully used to first monitor the surface elevation of oceans and, shortly after, of large rivers and lakes. For the last 5-10 years, few studies have demonstrated the possibility to also observe smaller water bodies than previously thought feasible (river smaller than 500 m wide and lake below 10 km2). The present study aims at quantifying the nadir altimetry performance over a medium river (200 m or lower wide) with a pluvio-nival regime in a temperate climate (the Garonne River, France). Three altimetry missions have been considered: ENVISAT (from 2002 to 2010), Jason-2 (from 2008 to 2014) and SARAL (from 2013 to 2014). Compared to nearby in situ gages, ENVISAT and Jason-2 observations over the lower Garonne River mainstream (110 km upstream of the estuary) have the smallest errors, with water elevation anomalies root mean square errors (RMSE) around 50 cm and 20 cm, respectively. The few ENVISAT upstream measurements have RMSE ranging from 80 cm to 160 cm. Over the estuary, ENVISAT and SARAL water elevation anomalies RMSE are around 30 cm and 10 cm, respectively. The most recent altimetry mission, SARAL, does not provide river elevation measurements for most satellite overflights of the river mainstream. The altimeter remains "locked" on the top of surrounding hilly areas and does not observe the steep-sided river valley, which could be 50-100 m lower. This phenomenon is also observed, for fewer dates, on Jason-2 and ENVISAT measurements. In these cases, the measurement is not "erroneous", it just does not correspond to water elevation of the river that is covered by the satellite. ENVISAT is less prone to get 'locked' on the top of the topography due to some differences in the instrument measurement parameters, trading lower accuracy for more useful measurements. Such problems are specific to continental surfaces (or near the coasts), but are not observed over the open oceans, which are

  5. The European computer model for optronic system performance prediction (ECOMOS)

    Science.gov (United States)

    Repasi, Endre; Bijl, Piet; Labarre, Luc; Wittenstein, Wolfgang; Bürsing, Helge

    2017-05-01

    ECOMOS is a multinational effort within the framework of an EDA Project Arrangement. Its aim is to provide a generally accepted and harmonized European computer model for computing nominal Target Acquisition (TA) ranges of optronic imagers operating in the Visible or thermal Infrared (IR). The project involves close co-operation of defence and security industry and public research institutes from France, Germany, Italy, The Netherlands and Sweden. ECOMOS uses and combines well-accepted existing European tools to build up a strong competitive position. This includes two TA models: the analytical TRM4 model and the image-based TOD model. In addition, it uses the atmosphere model MATISSE. In this paper, the central idea of ECOMOS is exposed. The overall software structure and the underlying models are shown and elucidated. The status of the project development is given as well as a short outlook on validation tests and the future potential of simulation for sensor assessment.

  6. High performance computing for classic gravitational N-body systems

    CERN Document Server

    Capuzzo-Dolcetta, Roberto

    2009-01-01

    The role of gravity is crucial in astrophysics. It determines the evolution of any system over an enormous range of time and space scales. Astronomical stellar systems, composed of N interacting bodies, are examples of self-gravitating systems, usually treatable with Newtonian gravity except in particular cases. In this note I briefly discuss some of the open problems in the dynamical study of classic self-gravitating N-body systems over the astronomical range of N. I also point out how modern research in this field necessarily requires heavy use of large-scale computation, due to the simultaneous requirement of high precision and high computational speed.
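
    The computational core that makes such studies expensive is the pairwise force evaluation. The sketch below is a generic direct-summation (O(N^2)) Newtonian acceleration routine in dimensionless G = 1 units, not code from the note; the softening length and particle setup are assumptions.

```python
import numpy as np

def accelerations(pos, mass, eps=1e-3):
    """Direct-summation Newtonian accelerations, O(N^2); G = 1 units.
    eps is a softening length that regularizes close encounters."""
    diff = pos[None, :, :] - pos[:, None, :]          # (N, N, 3) pairwise separations
    dist2 = (diff ** 2).sum(-1) + eps ** 2
    np.fill_diagonal(dist2, np.inf)                   # no self-interaction
    inv_d3 = dist2 ** -1.5
    return (diff * (mass[None, :, None] * inv_d3[:, :, None])).sum(axis=1)

rng = np.random.default_rng(5)
N = 512
pos = rng.standard_normal((N, 3))
mass = np.full(N, 1.0 / N)
acc = accelerations(pos, mass)
print("acceleration on body 0:", acc[0])
```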

  7. Iterative coupling reservoir simulation on high performance computers

    Institute of Scientific and Technical Information of China (English)

    Lu Bo; Wheeler Mary F

    2009-01-01

    In this paper, the iterative coupling approach is proposed for applications to solving multiphase flow equation systems in reservoir simulation, as it provides a more flexible time-stepping strategy than existing approaches. The iterative method decouples the whole equation systems into pressure and saturation/concentration equations, and then solves them in sequence, implicitly and semi-implicitly. At each time step, a series of iterations are computed, which involve solving linearized equations using specific tolerances that are iteration dependent. Following convergence of subproblems, material balance is checked. Convergence of time steps is based on material balance errors. Key components of the iterative method include phase scaling for deriving a pressure equation and use of several advanced numerical techniques. The iterative model is implemented for parallel computing platforms and shows high parallel efficiency and scalability.

  8. Performing a local reduction operation on a parallel computer

    Science.gov (United States)

    Blocksome, Michael A.; Faraj, Daniel A.

    2012-12-11

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.
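
    The data movement in the claim can be mimicked in a single process to show the intended memory layout: the two reduction cores' buffers are interleaved chunk by chunk into shared memory, the network cores' buffers are copied alongside, and everything is reduced. The sketch below is only an illustration of that layout; chunk size, buffer contents and the use of a plain sum are assumptions.

        import numpy as np

        # Single-process re-enactment of the claimed data movement: two "reduction
        # cores" interleave their input buffers chunk by chunk into shared memory,
        # the network write/read cores' buffers are copied alongside, and everything
        # is reduced (here: summed). Buffer lengths and chunk size are arbitrary.

        def interleave(a, b, chunk=4):
            out = np.empty(a.size + b.size, dtype=a.dtype)
            for i in range(0, a.size, chunk):
                out[2 * i:2 * i + chunk] = a[i:i + chunk]               # core 0's chunk
                out[2 * i + chunk:2 * i + 2 * chunk] = b[i:i + chunk]   # core 1's chunk
            return out

        red0 = np.arange(16.0)        # reduction core 0 input buffer
        red1 = np.arange(16.0) * 2    # reduction core 1 input buffer
        net_w = np.ones(16)           # network write core input buffer
        net_r = np.full(16, 3.0)      # network read core input buffer

        shared = interleave(red0, red1)                  # interleaved buffer in "shared memory"
        local_result = shared.sum() + net_w.sum() + net_r.sum()
        assert local_result == red0.sum() + red1.sum() + net_w.sum() + net_r.sum()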

  9. Optimize the Security Performance of the Computing Environment of IHEP

    Institute of Scientific and Technical Information of China (English)

    Rong-sheng XU; Bao-Xu LIU

    2001-01-01

    This paper gives background on crackers, and then enumerates and introduces some attack events that have occurred in the IHEP networks. Finally, a highly efficient defence system that integrates the authors' experience and research results, and that has been put into practice in the IHEP network environment, is described in detail. The paper also gives network and information security advice, and a process for the high energy physics computing environment at the Institute of High Energy Physics that will be implemented in the future.

  10. The Efficiency of Various Computers and Optimizations in Performing Finite Element Computations

    Science.gov (United States)

    Marcus, Martin H.; Broduer, Steve (Technical Monitor)

    2001-01-01

    With the advent of computers with many processors, it becomes unclear how best to exploit this advantage. For example, matrices can be inverted by applying several processors to each vector operation, or one processor can be applied to each matrix. The former approach has diminishing returns beyond a handful of processors, but how many depends on the computer architecture. Applying one processor to each matrix is feasible with enough RAM and scratch disk space, but the speed at which this is done is found to vary by a factor of three depending on how it is done. The cost of the computer must also be taken into account. A computer with many processors and fast interprocessor communication is much more expensive than the same computer and processors with slow interprocessor communication. Consequently, for problems that require several matrices to be inverted, the best speed per dollar is found to come from several small workstations networked together, such as in a Beowulf cluster. Since these machines typically have two processors per node, each matrix is most efficiently inverted with no more than two processors assigned to it.
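
    The "one processor per matrix" strategy discussed above maps naturally onto a process pool: each worker inverts one whole matrix while the others proceed independently. The sketch below is a generic illustration written for this overview; the matrix size, conditioning trick and worker count are arbitrary, not values from the study.

        import numpy as np
        from multiprocessing import Pool

        # One whole matrix per worker: several independent inversions are farmed out
        # to a small process pool instead of parallelising each individual inversion.
        # Matrix size, conditioning and worker count are illustrative.

        def invert(seed):
            rng = np.random.default_rng(seed)
            a = rng.standard_normal((500, 500)) + 500 * np.eye(500)   # well conditioned
            return np.linalg.inv(a)

        if __name__ == "__main__":
            with Pool(processes=2) as pool:            # e.g. two processors per node
                inverses = pool.map(invert, range(8))  # eight matrices, one per task
            print(len(inverses), inverses[0].shape)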

  11. Computer monitoring and optimization of the steam boiler performance

    OpenAIRE

    Sobota Tomasz

    2017-01-01

    The paper presents a method for determining thermo-flow parameters for steam boilers. This method allows calculations of the boiler furnace chamber and of the heat flow rates absorbed by the superheater stages to be performed. These parameters are important for monitoring the performance of the power unit. Knowledge of these parameters allows the degree of furnace chamber slagging to be determined. The calculation can be performed online and used for monitoring the steam boiler. The presented me...

  12. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  13. High-performance computational condensed-matter physics in the cloud

    Science.gov (United States)

    Rehr, J. J.; Svec, L.; Gardner, J. P.; Prange, M. P.

    2009-03-01

    We demonstrate the feasibility of high performance scientific computation in condensed-matter physics using cloud computers as an alternative to traditional computational tools. The availability of these large, virtualized pools of compute resources raises the possibility of a new compute paradigm for scientific research with many advantages. For research groups, cloud computing provides convenient access to reliable, high performance clusters and storage, without the need to purchase and maintain sophisticated hardware. For developers, virtualization allows scientific codes to be pre-installed on machine images, facilitating control over the computational environment. Detailed tests are presented for the parallelized versions of the electronic structure code SIESTA [J. Soler et al., J. Phys.: Condens. Matter 14, 2745 (2002)] and for the x-ray spectroscopy code FEFF [A. Ankudinov et al., Phys. Rev. B 65, 104107 (2002)], including CPU, network, and I/O performance, using the Amazon EC2 Elastic Compute Cloud.

  14. Comprehensive Simulation Lifecycle Management for High Performance Computing Modeling and Simulation Project

    Data.gov (United States)

    National Aeronautics and Space Administration — There are significant logistical barriers to entry-level high performance computing (HPC) modeling and simulation (M&S) users. Performing large-scale, massively...

  15. Passive MIMO Radar Detection

    Science.gov (United States)

    2013-09-01

    Passive radar systems referenced include PaRaDe, developed by the Institute of Electronic Systems at the Warsaw University of Technology [59, 60], and COvert RAdar (CORA), developed by the German

  16. Weather Radar Impact Zones

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — These data represent an inventory of the national impacts of wind turbine interference with NEXRAD radar stations. This inventory was developed by the NOAA Radar...

  17. Full tensor gravity gradiometry data inversion: Performance analysis of parallel computing algorithms

    Science.gov (United States)

    Hou, Zhen-Long; Wei, Xiao-Hui; Huang, Da-Nian; Sun, Xu

    2015-09-01

    We apply reweighted inversion focusing to full tensor gravity gradiometry data using message-passing interface (MPI) and compute unified device architecture (CUDA) parallel computing algorithms, and then combine MPI with CUDA to formulate a hybrid algorithm. Parallel computing performance metrics are introduced to analyze and compare the performance of the algorithms. We summarize the rules for the performance evaluation of parallel algorithms. We use model and real data from the Vinton salt dome to test the algorithms. We find good match between model and real density data, and verify the high efficiency and feasibility of parallel computing algorithms in the inversion of full tensor gravity gradiometry data.
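
    The parallel computing performance metrics referred to above are, in their most common form, speedup S = T1/Tp and efficiency E = S/p. The sketch below simply evaluates them from a table of runtimes; the numbers are placeholders for illustration, not the measured MPI/CUDA results of the paper.

        # Speedup S = T1 / Tp and parallel efficiency E = S / p, evaluated for a small
        # table of runtimes. The timings are placeholders, not the paper's measurements.

        def speedup(t_serial, t_parallel):
            return t_serial / t_parallel

        def efficiency(t_serial, t_parallel, n_workers):
            return speedup(t_serial, t_parallel) / n_workers

        t1 = 1200.0                                # serial runtime in seconds (illustrative)
        timings = {4: 330.0, 8: 180.0, 16: 105.0}  # workers -> parallel runtime
        for p, tp in timings.items():
            print(f"p={p:2d}  S={speedup(t1, tp):5.2f}  E={efficiency(t1, tp, p):4.2f}")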

  18. Wind Turbine Radar Cross Section

    Directory of Open Access Journals (Sweden)

    David Jenn

    2012-01-01

    Full Text Available The radar cross section (RCS of a wind turbine is a figure of merit for assessing its effect on the performance of electronic systems. In this paper, the fundamental equations for estimating the wind turbine clutter signal in radar and communication systems are presented. Methods of RCS prediction are summarized, citing their advantages and disadvantages. Bistatic and monostatic RCS patterns for two wind turbine configurations, a horizontal axis three-blade design and a vertical axis helical design, are shown. The unique electromagnetic scattering features, the effect of materials, and methods of mitigating wind turbine clutter are also discussed.
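
    One of the fundamental relations behind the clutter-signal estimate mentioned above is the standard monostatic radar range equation, which converts a turbine RCS into a received power. The sketch below evaluates that textbook equation; the radar parameters and the 30 dBsm turbine RCS are illustrative assumptions, not values from the paper.

        import math

        # Monostatic radar range equation Pr = Pt * G^2 * lambda^2 * sigma / ((4*pi)^3 * R^4),
        # evaluated for an assumed turbine RCS. All numbers are illustrative.

        def clutter_power_dbm(pt_w, gain_db, freq_hz, sigma_m2, range_m):
            lam = 3e8 / freq_hz                       # wavelength
            g = 10 ** (gain_db / 10)                  # antenna gain (linear)
            pr = pt_w * g**2 * lam**2 * sigma_m2 / ((4 * math.pi) ** 3 * range_m**4)
            return 10 * math.log10(pr * 1e3)          # watts -> dBm

        # e.g. 100 kW peak power, 35 dB gain, S band (3 GHz), 30 dBsm turbine at 20 km
        print(clutter_power_dbm(1e5, 35.0, 3e9, 1000.0, 20e3))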

  19. Broadview Radar Altimetry Toolbox

    Science.gov (United States)

    Mondéjar, Albert; Benveniste, Jérôme; Naeije, Marc; Escolà, Roger; Moyano, Gorka; Roca, Mònica; Terra-Homem, Miguel; Friaças, Ana; Martinho, Fernando; Schrama, Ernst; Ambrózio, Américo; Restano, Marco

    2016-07-01

    The universal altimetry toolbox, BRAT (Broadview Radar Altimetry Toolbox), which can read data from all previous and current altimetry missions, now incorporates the capability to read the upcoming Sentinel-3 L1 and L2 products. ESA endeavoured to develop and supply this capability to support the users of the future Sentinel-3 SAR Altimetry Mission. BRAT is a collection of tools and tutorial documents designed to facilitate the processing of radar altimetry data. The project started in 2005 from the joint efforts of ESA (European Space Agency) and CNES (Centre National d'Études Spatiales), and it is freely available at http://earth.esa.int/brat. The tools enable users to interact with the most common altimetry data formats. The BratGUI is the front-end for the powerful command line tools that are part of the BRAT suite. BRAT can also be used in conjunction with MATLAB/IDL (via reading routines) or from C/C++/Fortran via a programming API, allowing the user to obtain the desired data while bypassing the data-formatting hassle. BRAT can be used simply to visualise data quickly, or to translate the data into other formats such as NetCDF, ASCII text files, KML (Google Earth) and raster images (JPEG, PNG, etc.). Several kinds of computations can be done within BRAT, involving combinations of data fields that the user can save for later reuse or using the already embedded formulas, which include the standard oceanographic altimetry formulas. The Radar Altimeter Tutorial, which contains a strong introduction to altimetry, shows its applications in different fields such as oceanography, the cryosphere, geodesy and hydrology, among others. Also included are "use cases", with step-by-step examples, on how to use the toolbox in the different contexts. The Sentinel-3 SAR Altimetry Toolbox shall benefit from the current BRAT version. While developing the toolbox we will revamp the Graphical User Interface and provide, among other enhancements, support for reading the upcoming S3 datasets and

  20. Issues in undergraduate education in computational science and high performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Marchioro, T.L. II; Martin, D. [Ames Lab., IA (United States)

    1994-12-31

    The ever increasing need for mathematical and computational literacy within society and among members of the work force has generated enormous pressure to revise and improve the teaching of related subjects throughout the curriculum, particularly at the undergraduate level. The Calculus Reform movement is perhaps the best known example of an organized initiative in this regard. The UCES (Undergraduate Computational Engineering and Science) project, an effort funded by the Department of Energy and administered through the Ames Laboratory, is sponsoring an informal and open discussion of the salient issues confronting efforts to improve and expand the teaching of computational science as a problem oriented, interdisciplinary approach to scientific investigation. Although the format is open, the authors hope to consider pertinent questions such as: (1) How can faculty and research scientists obtain the recognition necessary to further excellence in teaching the mathematical and computational sciences? (2) What sort of educational resources--both hardware and software--are needed to teach computational science at the undergraduate level? Are traditional procedural languages sufficient? Are PCs enough? Are massively parallel platforms needed? (3) How can electronic educational materials be distributed in an efficient way? Can they be made interactive in nature? How should such materials be tied to the World Wide Web and the growing "Information Superhighway"?

  1. Detection Performance Assessment of Ground-Based Phased Array Radar for Ballistic Targets

    Institute of Scientific and Technical Information of China (English)

    李星星; 姚汉英; 孙文峰

    2014-01-01

    In order to solve the optimal deployment problem of ground-based phased array radars for detecting ballistic targets, orbit motion and precession models of midcourse ballistic targets were built, and a detection probability model for ballistic targets observed by several ground-based phased array radars was proposed, together with three evaluation indexes of the radars' detection performance: average detection probability, stable tracking time and resource redundancy time. According to the variation of target RCS and detection range with observation time during the midcourse phase, the detection performance evaluation indexes of several radar deployment schemes were analysed through simulation experiments. The conclusions provide an effective reference for deploying ground-based radars in a ballistic missile defense (BMD) system for optimal target detection.

  2. Energy-efficient high performance computing measurement and tuning

    CERN Document Server

    III, James H Laros; Kelly, Sue

    2012-01-01

    In this work, the unique power measurement capabilities of the Cray XT architecture were exploited to gain an understanding of power and energy use, and of the effects of tuning both CPU and network bandwidth. Modifications were made to deterministically halt cores when idle. Additionally, capabilities were added to alter the operating P-state. At the application level, an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale is gained by simultaneously collecting current and voltage measurements on the hosting nodes.

  3. High-performance computational solutions in protein bioinformatics

    CERN Document Server

    Mrozek, Dariusz

    2014-01-01

    Recent developments in computer science enable algorithms previously perceived as too time-consuming to now be efficiently used for applications in bioinformatics and life sciences. This work focuses on proteins and their structures, protein structure similarity searching at main representation levels and various techniques that can be used to accelerate similarity searches. Divided into four parts, the first part provides a formal model of 3D protein structures for functional genomics, comparative bioinformatics and molecular modeling. The second part focuses on the use of multithreading for

  4. Using High Performance Computing to Support Water Resource Planning

    Energy Technology Data Exchange (ETDEWEB)

    Groves, David G. [RAND Corporation, Santa Monica, CA (United States); Lembert, Robert J. [RAND Corporation, Santa Monica, CA (United States); May, Deborah W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Leek, James R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Syme, James [RAND Corporation, Santa Monica, CA (United States)

    2015-10-22

    In recent years, decision support modeling has embraced deliberation with analysis, an iterative process in which decisionmakers come together with experts to evaluate a complex problem and alternative solutions in a scientifically rigorous and transparent manner. Simulation modeling supports decisionmaking throughout this process; visualizations enable decisionmakers to assess how proposed strategies stand up over time in uncertain conditions. But running these simulation models on standard computers can be slow. This, in turn, can slow the entire decisionmaking process, interrupting valuable interaction between decisionmakers and analytics.

  5. Implementing Simulation Design of Experiments and Remote Execution on a High Performance Computing Cluster

    Science.gov (United States)

    2007-09-01

    Implementing Simulation Design of Experiments and Remote Execution on a High Performance Computing Cluster, by Adam J. Peters, September 2007 (thesis). References cited include Kennard, R. W. & Stone, L. A. (1969), Computer Aided Design of Experiments, Technometrics, 11(1), 137-148, and Kleijnen, J. P. (2003), A user's guide to the...

  6. Optimizing Performance of Scientific Visualization Software to Support Frontier-Class Computations

    Science.gov (United States)

    2015-08-01

    The report describes assistance with accessing graphics processing unit (GPU)-enabled and large-memory compute nodes on the HPC utility server systems via Portable Batch System (PBS) batch jobs; the EnSight client runs on the first allocated node. Acronyms used include DR clients (distributed rendering clients), GPU (graphics processing unit) and HPC (high-performance computing).

  7. Improving Student Performance through Computer-Based Assessment: Insights from Recent Research.

    Science.gov (United States)

    Ricketts, C.; Wilks, S. J.

    2002-01-01

    Compared student performance on computer-based assessment to machine-graded multiple choice tests. Found that performance improved dramatically on the computer-based assessment when students were not required to scroll through the question paper. Concluded that students may be disadvantaged by the introduction of online assessment unless care is…

  8. Digital LPI Radar Detector

    OpenAIRE

    Ong, Peng Ghee; Teng, Haw Kiad

    2001-01-01

    Approved for public release; distribution is unlimited. The function of a Low Probability of Intercept (LPI) radar is to prevent its interception by an Electronic Support (ES) receiver. This objective is generally achieved through the use of a radar waveform that is mismatched to those waveforms for which an ES receiver is tuned. This allows the radar to achieve a processing gain, with respect to the ES receiver, that is equal to the time-bandwidth product of the radar waveform. This...
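
    The processing gain mentioned above is simply the waveform's time-bandwidth product, so it can be put into numbers directly. The values in the sketch below (a 1 ms sweep of 50 MHz bandwidth) are illustrative assumptions, not parameters of the detector described in the thesis.

        import math

        # Processing gain of an LPI waveform over an intercept receiver, taken as the
        # time-bandwidth product. The pulse width and bandwidth are assumed values.

        pulse_width_s = 1e-3        # 1 ms sweep (assumed)
        bandwidth_hz = 50e6         # 50 MHz swept bandwidth (assumed)
        gain = pulse_width_s * bandwidth_hz
        print(f"time-bandwidth product = {gain:.0f}  ({10 * math.log10(gain):.1f} dB)")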

  9. Failure detection in high-performance clusters and computers using chaotic map computations

    Science.gov (United States)

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
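
    The detection principle (compare chaotic map trajectories computed redundantly and flag any divergence) can be illustrated with a logistic map. In the sketch below the same trajectory is recomputed with a tiny injected perturbation standing in for a silent hardware fault; the map, seed, perturbation size and tolerance are illustrative choices, not the patent's parameters.

        # The same chaotic (logistic) map is iterated on two compute elements from the
        # same seed; chaotic dynamics amplify any arithmetic perturbation, so a silent
        # fault on one element quickly shows up as a trajectory mismatch.

        def logistic_trajectory(x0, n, r=3.99, fault_at=None):
            x, traj = x0, []
            for i in range(n):
                x = r * x * (1.0 - x)
                if fault_at is not None and i == fault_at:
                    x += 1e-12            # stand-in for a bit-flip-like silent error
                traj.append(x)
            return traj

        reference = logistic_trajectory(0.123456789, 200)
        faulty = logistic_trajectory(0.123456789, 200, fault_at=50)
        detected = next(i for i, (a, b) in enumerate(zip(reference, faulty))
                        if abs(a - b) > 1e-6)
        print("divergence (fault) detected at iteration", detected)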

  10. High performance computing software package for multitemporal Remote-Sensing computations

    Directory of Open Access Journals (Sweden)

    Asaad Chahboun

    2010-10-01

    With the huge volume of satellite data now stored, multitemporal remote sensing studies are nowadays one of the most challenging fields of computer science. Multicore hardware support and multithreading can play an important role in speeding up algorithm computations. In the present paper, a software package, called the Multitemporal Software Package for Satellite Remote sensing data (MSP.SRS), has been developed for the multitemporal treatment of satellite remote sensing images in a standard format. For portability, the interface was developed using the Qt application framework and the core was developed by integrating C++ classes. MSP.SRS can run under different operating systems (i.e., Linux, Mac OS X, Windows, Embedded Linux, Windows CE, etc.). Final benchmark results, using multiple remote sensing biophysical indices, show a speed-up of up to 6X on a quad-core i7 personal computer.

  11. Echo Modeling and Detection Performance of Wideband VHF Radar

    Institute of Scientific and Technical Information of China (English)

    尤君; 万显荣; 龚子平; 程丰; 柯亨玉

    2013-01-01

    Very High Frequency (VHF) radar provides an important means of stealth target detection, since the target resonance effect increases the Radar Cross Section (RCS) significantly. However, traditional narrowband VHF radar has the disadvantages of difficult-to-determine optimum operating frequencies, unstable performance at low altitude, vulnerability to interference, and so on. Wideband technology provides a way to overcome these disadvantages. The target echo of wideband VHF radar differs from that of wideband radar in the optical region; a Boeing 747-200 is therefore selected as the specific object for studying the wideband VHF radar target echo model and its detection performance. First, simulated wideband echo data are obtained; a multiple-scattering-centre model is then established from the simulated data; finally, the detection performance of wideband VHF radar is discussed. Results show that, in the resonance region, targets have the property of multiple scattering centres; the main peak intensity of a scattering centre equals the mean value of the echo intensity of the narrowband signals within the working band; and wideband VHF radar does not have an obvious detection advantage compared with traditional narrowband VHF radar. This investigation establishes a theoretical foundation for the design of novel wideband VHF radar systems capable of detecting stealth targets.

  12. Network radar countermeasure systems integrating radar and radar countermeasures

    CERN Document Server

    Jiang, Qiuxi

    2016-01-01

    This is the very first book to present the network radar countermeasure system. It explains in detail the systematic concept of combining radar and radar countermeasures from the perspectives of the information acquisition of target location, the optimization of reconnaissance and detection, the integrated attack of signals and facilities, and the technological and legal developments concerning the networked system. It achieves the integration of active and passive operation, and of detection and jamming. The book explains how the system locates targets, completes target identification, tracks targets and compiles the data.

  13. A secure communications infrastructure for high-performance distributed computing

    Energy Technology Data Exchange (ETDEWEB)

    Foster, I.; Koenig, G.; Tuecke, S. [and others

    1997-08-01

    Applications that use high-speed networks to connect geographically distributed supercomputers, databases, and scientific instruments may operate over open networks and access valuable resources. Hence, they can require mechanisms for ensuring integrity and confidentiality of communications and for authenticating both users and resources. Security solutions developed for traditional client-server applications do not provide direct support for the program structures, programming tools, and performance requirements encountered in these applications. The authors address these requirements via a security-enhanced version of the Nexus communication library, which they use to provide secure versions of parallel libraries and languages, including the Message Passing Interface. These tools permit a fine degree of control over what, where, and when security mechanisms are applied. In particular, a single application can mix secure and nonsecure communication, allowing the programmer to make fine-grained security/performance tradeoffs. The authors present performance results that quantify the performance of their infrastructure.

  14. Radar: Human Safety Net

    Science.gov (United States)

    Ritz, John M.

    2016-01-01

    Radar is a technology that can be used to detect distant objects not visible to the human eye. A predecessor of radar, called the telemobiloscope, was first used to detect ships in the fog in 1904 off the German coast. Many scientists have worked on the development and refinement of radar (Hertz with electromagnetic waves; Popov with determining…

  15. Radar and wind turbines; Radar en windturbines

    Energy Technology Data Exchange (ETDEWEB)

    Van Doorn, H.

    2010-03-15

    In recent years the development of wind parks has been hampered by, among other things, their possible effect on the radar used for the observation of air traffic. A new assessment model for wind turbines is being developed for the land-based military radar systems under the auspices of the National Security steering group. Air Traffic Control the Netherlands (LVNL) will consider to what extent the civil radars can join this approach.

  16. Separate DOD and DOA Estimation for Bistatic MIMO Radar

    Directory of Open Access Journals (Sweden)

    Lin Li

    2016-01-01

    Full Text Available A novel MUSIC-type algorithm is derived in this paper for the direction of departure (DOD and direction of arrival (DOA estimation in a bistatic MIMO radar. Through rearranging the received signal matrix, we illustrate that the DOD and the DOA can be separately estimated. Compared with conventional MUSIC-type algorithms, the proposed separate MUSIC algorithm can avoid the interference between DOD and DOA estimations effectively. Therefore, it is expected to give a better angle estimation performance and have a much lower computational complexity. Meanwhile, we demonstrate that our method is also effective for coherent targets in MIMO radar. Simulation results verify the efficiency of the proposed method, particularly when the signal-to-noise ratio (SNR is low and/or the number of snapshots is small.
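
    For readers unfamiliar with the subspace machinery such MUSIC-type estimators build on, the sketch below implements a plain one-dimensional MUSIC pseudospectrum for a uniform linear array and recovers two simulated arrival angles. It is a generic textbook illustration written for this overview; the bistatic MIMO signal-matrix rearrangement that makes the DOD and DOA estimates separable in the paper is not reproduced.

        import numpy as np

        def music_spectrum(X, n_sources, d=0.5, grid=np.linspace(-90, 90, 361)):
            """Classic 1-D MUSIC pseudospectrum for an M-element uniform linear array."""
            M = X.shape[0]
            R = X @ X.conj().T / X.shape[1]          # sample covariance matrix
            _, vecs = np.linalg.eigh(R)              # eigenvalues in ascending order
            En = vecs[:, :M - n_sources]             # noise-subspace eigenvectors
            p = []
            for theta in grid:
                a = np.exp(-2j * np.pi * d * np.arange(M) * np.sin(np.radians(theta)))
                p.append(1.0 / np.real(a.conj() @ En @ En.conj().T @ a))
            return grid, np.asarray(p)

        # Two uncorrelated sources at -20 and 35 degrees, 8-element ULA, 200 snapshots
        rng = np.random.default_rng(1)
        M, N, doas = 8, 200, np.array([-20.0, 35.0])
        A = np.exp(-2j * np.pi * 0.5 * np.outer(np.arange(M), np.sin(np.radians(doas))))
        S = (rng.standard_normal((2, N)) + 1j * rng.standard_normal((2, N))) / np.sqrt(2)
        X = A @ S + 0.1 * (rng.standard_normal((M, N)) + 1j * rng.standard_normal((M, N)))

        grid, p = music_spectrum(X, n_sources=2)
        peaks = [i for i in range(1, len(p) - 1) if p[i] > p[i - 1] and p[i] > p[i + 1]]
        peaks.sort(key=lambda i: p[i], reverse=True)
        print(sorted(grid[i] for i in peaks[:2]))    # expected near -20 and 35 degrees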

  17. Logarithmic Laplacian Prior Based Bayesian Inverse Synthetic Aperture Radar Imaging.

    Science.gov (United States)

    Zhang, Shuanghui; Liu, Yongxiang; Li, Xiang; Bi, Guoan

    2016-04-28

    This paper presents a novel Inverse Synthetic Aperture Radar (ISAR) imaging algorithm based on a new sparse prior, known as the logarithmic Laplacian prior. The newly proposed logarithmic Laplacian prior has a narrower main lobe with higher tail values than the Laplacian prior, which helps to achieve improved performance in sparse representation. The logarithmic Laplacian prior is used for ISAR imaging within the Bayesian framework to achieve a better focused radar image. In the proposed ISAR imaging method, the phase errors are jointly estimated based on the minimum entropy criterion to accomplish autofocusing. The maximum a posteriori (MAP) estimation and the maximum likelihood estimation (MLE) are utilized to estimate the model parameters so as to avoid a manual tuning process. Additionally, the fast Fourier transform (FFT) and the Hadamard product are used to reduce the required computation. Experimental results based on both simulated and measured data validate that the proposed algorithm outperforms traditional sparse ISAR imaging algorithms in terms of resolution improvement and noise suppression.

  18. Bistatic synthetic aperture radar imaging for arbitrary flight trajectories.

    Science.gov (United States)

    Yarman, Can Evren; Yazici, Birsen; Cheney, Margaret

    2008-01-01

    In this paper, we present an analytic, filtered backprojection (FBP) type inversion method for bistatic synthetic aperture radar (BISAR). We consider a BISAR system where a scene of interest is illuminated by electromagnetic waves that are transmitted, at known times, from positions along an arbitrary, but known, flight trajectory and the scattered waves are measured from positions along a different flight trajectory which is also arbitrary, but known. We assume a single-scattering model for the radar data, and we assume that the ground topography is known but not necessarily flat. We use microlocal analysis to develop the FBP-type reconstruction method. We analyze the computational complexity of the numerical implementation of the method and present numerical simulations to demonstrate its performance.
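
    As a rough picture of what such an inversion does, the sketch below implements a bare, unfiltered delay-and-sum backprojection for a bistatic geometry: every image pixel accumulates the data sample whose fast-time delay matches the transmitter-to-pixel plus pixel-to-receiver path. This is only the backprojection kernel that FBP-type methods weight and filter; the microlocal filter derived in the paper is not reproduced, and the geometry and sampling below are illustrative assumptions.

        import numpy as np

        C = 3e8  # propagation speed (m/s)

        def backproject(data, t_fast, tx_pos, rx_pos, grid_x, grid_y, z=0.0):
            """Unfiltered delay-and-sum backprojection; data[k, n] is the echo at
            slow-time index k and fast-time sample n, tx_pos/rx_pos are (K, 3) arrays."""
            image = np.zeros((grid_y.size, grid_x.size))
            gx, gy = np.meshgrid(grid_x, grid_y)
            for k in range(data.shape[0]):
                r_tx = np.sqrt((gx - tx_pos[k, 0])**2 + (gy - tx_pos[k, 1])**2 + (z - tx_pos[k, 2])**2)
                r_rx = np.sqrt((gx - rx_pos[k, 0])**2 + (gy - rx_pos[k, 1])**2 + (z - rx_pos[k, 2])**2)
                delay = (r_tx + r_rx) / C                      # bistatic two-way delay per pixel
                idx = np.clip(np.searchsorted(t_fast, delay), 0, t_fast.size - 1)
                image += data[k, idx]                          # accumulate matching samples
            return image

        # Minimal interface check: zero data, two pulses of an arbitrary bistatic geometry
        t_fast = np.linspace(0.0, 2e-4, 3000)
        data = np.zeros((2, t_fast.size))
        tx = np.array([[0.0, -5e3, 7e3], [50.0, -5e3, 7e3]])
        rx = np.array([[0.0, 5e3, 9e3], [60.0, 5e3, 9e3]])
        img = backproject(data, t_fast, tx, rx,
                          np.linspace(-500, 500, 64), np.linspace(-500, 500, 64))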

  19. Computational model of sustained acceleration effects on human cognitive performance.

    Science.gov (United States)

    McKinlly, Richard A; Gallimore, Jennie J

    2013-08-01

    Extreme acceleration maneuvers encountered in modern agile fighter aircraft can wreak havoc on human physiology, thereby significantly influencing cognitive task performance. As oxygen content declines under acceleration stress, the activity of higher-order cortical tissue is reduced to ensure sufficient metabolic resources are available for critical life-sustaining autonomic functions. Consequently, cognitive abilities reliant on these affected areas suffer significant performance degradations. The goal was to develop and validate a model capable of predicting human cognitive performance under acceleration stress. Development began with creation of a proportional control cardiovascular model that produced predictions of several hemodynamic parameters, including eye-level blood pressure and regional cerebral oxygen saturation (rSo2). An algorithm was derived to relate changes in rSo2 within specific brain structures to performance on cognitive tasks that require engagement of different brain areas. Data from the "precision timing" experiment were then used to validate the model predicting cognitive performance as a function of G(z) profile. The following results are reported as value ranges. They showed high agreement between the measured and predicted values for the rSo2 model (correlation coefficient: 0.7483-0.8687; linear best-fit slope: 0.5760-0.9484; mean percent error: 0.75-3.33) and the cognitive performance models (motion inference task--correlation coefficient: 0.7103-0.9451; linear best-fit slope: 0.7416-0.9144; mean percent error: 6.35-38.21; precision timing task--correlation coefficient: 0.6856-0.9726; linear best-fit slope: 0.5795-1.027; mean percent error: 6.30-17.28). The evidence suggests that the model is capable of accurately predicting cognitive performance on simple tasks under high acceleration stress.

  20. The RISC (Reduced Instruction Set Computer) Architecture and Computer Performance Evaluation.

    Science.gov (United States)

    1986-03-01

    Approved for public release; distribution is unlimited.

  1. Computer analysis of sodium cold trap design and performance. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    McPheeters, C.C.; Raue, D.J.

    1983-11-01

    Normal steam-side corrosion of steam-generator tubes in Liquid Metal Fast Breeder Reactors (LMFBRs) results in liberation of hydrogen, and most of this hydrogen diffuses through the tubes into the heat-transfer sodium and must be removed by the purification system. Cold traps are normally used to purify sodium, and they operate by cooling the sodium to temperatures near the melting point, where soluble impurities including hydrogen and oxygen precipitate as NaH and Na2O, respectively. A computer model was developed to simulate the processes that occur in sodium cold traps. The Model for Analyzing Sodium Cold Traps (MASCOT) simulates any desired configuration of mesh arrangements and dimensions and calculates pressure drops and flow distributions, temperature profiles, impurity concentration profiles, and impurity mass distributions.

  2. Causal Analysis for Performance Modeling of Computer Programs

    Directory of Open Access Journals (Sweden)

    Jan Lemeire

    2007-01-01

    Causal modeling and the accompanying learning algorithms provide useful extensions for in-depth statistical investigation and automation of performance modeling. We enlarged the scope of existing causal structure learning algorithms by using the form-free, information-theoretic concept of mutual information and by introducing a complexity criterion for selecting direct relations among equivalent relations. The underlying probability distribution of the experimental data is estimated by kernel density estimation. We then report on the benefits of a dependency analysis and the decompositional capacities of causal models. Useful qualitative models, providing insight into the role of every performance factor, were inferred from experimental data. This paper reports on the results for an LU decomposition algorithm and on the study of the parameter sensitivity of the Kakadu implementation of the JPEG-2000 standard. Next, the analysis was used to search for generic performance characteristics of the applications.
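
    The form-free dependency measure referred to above, mutual information estimated from data via kernel density estimates, is easy to sketch. The example below scores the dependency between a synthetic "block size" factor and a synthetic runtime; the data, the use of SciPy's Gaussian KDE and the Monte Carlo form of the estimate are illustrative choices, not the paper's implementation.

        import numpy as np
        from scipy.stats import gaussian_kde

        # Mutual information between two performance factors, estimated from samples as
        # I(X;Y) ~ mean_i log[ p(x_i, y_i) / (p(x_i) p(y_i)) ] with Gaussian kernel
        # density estimates. The synthetic "block size vs runtime" data are illustrative.

        rng = np.random.default_rng(0)
        block = rng.uniform(16, 512, 400)                   # tunable performance factor
        runtime = 1e4 / block + rng.normal(0.0, 2.0, 400)   # dependent metric plus noise

        kde_xy = gaussian_kde(np.vstack([block, runtime]))
        kde_x, kde_y = gaussian_kde(block), gaussian_kde(runtime)
        mi = np.mean(np.log(kde_xy(np.vstack([block, runtime]))
                            / (kde_x(block) * kde_y(runtime))))
        print(f"estimated mutual information: {mi:.2f} nats")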

  3. Scalable File Systems for High Performance Computing Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Brandt, S A

    2007-10-03

    Simulations of mode I interlaminar fracture toughness tests of a carbon-reinforced composite material (BMS 8-212) were conducted with LSDYNA. The fracture toughness tests were performed by U.C. Berkeley. The simulations were performed to investigate the validity and practicality of employing decohesive elements to represent the interlaminar bond failures that are prevalent in carbon-fiber composite structure penetration events. The simulations employed a decohesive element formulation that was verified on a simple two-element model before being employed in the full model simulations. Care was required during the simulations to ensure that the explicit time integration of LSDYNA duplicated the near steady-state testing conditions. In general, this study validated the use of decohesive elements to represent the interlaminar bond failures seen in carbon-fiber composite structures, but the practicality of employing the elements to represent the bond failures seen during penetration events was not established.

  4. Analysing the performance of personal computers based on Intel microprocessors for sequence aligning bioinformatics applications.

    Science.gov (United States)

    Nair, Pradeep S; John, Eugene B

    2007-01-01

    Aligning specific sequences against a very large number of other sequences is a central aspect of bioinformatics. With the widespread availability of personal computers in biology laboratories, sequence alignment is now often performed locally. This makes it necessary to analyse the performance of personal computers for sequence aligning bioinformatics benchmarks. In this paper, we analyse the performance of a personal computer for the popular BLAST and FASTA sequence alignment suites. Results indicate that these benchmarks have a large number of recurring operations and use memory operations extensively. It seems that the performance can be improved with a bigger L1-cache.

  5. Kharkiv Meteor Radar System (the XX Age)

    Science.gov (United States)

    Kolomiyets, S. V.

    2012-09-01

    Kharkiv meteor radar research is of historic value (Kolomiyets and Sidorov 2007). Kharkiv radar observations of meteors were internationally recognized as the best in the world, as noted at the IAU General Assembly in 1958. In the 1970s the Kharkiv Meteor Automated Radar System (MARS) was recommended at the international level as a successful prototype for wide distribution. This radar system remains one of the most sensitive meteor radar instruments in the world for astronomical observations. In 2004 the Kharkiv meteor radar system was included in the list of objects composing the national property of Ukraine. The Kharkiv meteor radar system has acquired the status of an important historical astronomical instrument in world history. The Meteor Centre for researching meteors in Kharkiv is an analogue of an observatory and performs the same functions as a generator and accumulator of special knowledge and skills (a world-famous studio). Kharkiv and the location of the instrument became landmark points on the globe, as the place where world-class meteor radar studies were carried out. They are inscribed in the history of meteor astronomy in large letters and should be immortalized at a world-wide level.

  6. Using Mental Computation Training to Improve Complex Mathematical Performance

    Science.gov (United States)

    Liu, Allison S.; Kallai, Arava Y.; Schunn, Christian D.; Fiez, Julie A.

    2015-01-01

    Mathematical fluency is important for academic and mathematical success. Fluency training programs have typically focused on fostering retrieval, which leads to math performance that does not reliably transfer to non-trained problems. More recent studies have focused on training number understanding and representational precision, but few have…

  7. HIGH-PERFORMANCE COMPUTING FOR THE STUDY OF EARTH AND ENVIRONMENTAL SCIENCE MATERIALS USING SYNCHROTRON X-RAY COMPUTED MICROTOMOGRAPHY.

    Energy Technology Data Exchange (ETDEWEB)

    FENG,H.; JONES,K.W.; MCGUIGAN,M.; SMITH,G.J.; SPILETIC,J.

    2001-10-12

    Synchrotron x-ray computed microtomography (CMT) is a non-destructive method for examination of rock, soil, and other types of samples studied in the earth and environmental sciences. The high x-ray intensities of the synchrotron source make possible the acquisition of tomographic volumes at a high rate, which requires the application of high-performance computing techniques for data reconstruction to produce the three-dimensional volumes, for their visualization, and for data analysis. These problems are exacerbated by the need to share information between collaborators at widely separated locations over both local and wide-area networks. A summary of the CMT technique and examples of applications are given here together with a discussion of the applications of high-performance computing methods to improve the experimental techniques and analysis of the data.

  8. Airborne ground penetrating radar: practical field experiments

    CSIR Research Space (South Africa)

    Van Schoor, Michael

    2013-10-01

    Full Text Available The performance of ground penetrating radar (GPR) under conditions where the ground coupling of the antenna is potentially compromised is investigated. Of particular interest is the effect of increasing the distance between the antennae...

  9. Pulse Doppler radar

    CERN Document Server

    Alabaster, Clive

    2012-01-01

    This book is a practitioner's guide to all aspects of pulse Doppler radar. It concentrates on airborne military radar systems since they are the most used, most complex, and most interesting of the pulse Doppler radars; however, ground-based and non-military systems are also included. It covers the fundamental science, signal processing, hardware issues, systems design and case studies of typical systems. It will be a useful resource for engineers of all types (hardware, software and systems), academics, post-graduate students, scientists in radar and radar electronic warfare sectors and milit

  10. Understanding radar systems

    CERN Document Server

    Kingsley, Simon

    1999-01-01

    What is radar? What systems are currently in use? How do they work? This book provides engineers and scientists with answers to these critical questions, focusing on actual radar systems in use today. It is a perfect resource for those just entering the field, or as a quick refresher for experienced practitioners. The book leads readers through the specialized language and calculations that comprise the complex world of radar engineering as seen in dozens of state-of-the-art radar systems. An easy to read, wide ranging guide to the world of modern radar systems.

  11. Computer versus paper--does it make any difference in test performance?

    Science.gov (United States)

    Karay, Yassin; Schauber, Stefan K; Stosch, Christoph; Schüttpelz-Brauns, Katrin

    2015-01-01

    CONSTRUCT: In this study, we examine the differences in test performance between the paper-based and the computer-based version of the Berlin formative Progress Test. In this context it is the first study that allows controlling for students' prior performance. Computer-based tests make possible a more efficient examination procedure for test administration and review. Although university staff will benefit largely from computer-based tests, the question arises if computer-based tests influence students' test performance. A total of 266 German students from the 9th and 10th semester of medicine (comparable with the 4th-year North American medical school schedule) participated in the study (paper = 132, computer = 134). The allocation of the test format was conducted as a randomized matched-pair design in which students were first sorted according to their prior test results. The organizational procedure, the examination conditions, the room, and seating arrangements, as well as the order of questions and answers, were identical in both groups. The sociodemographic variables and pretest scores of both groups were comparable. The test results from the paper and computer versions did not differ. The groups remained within the allotted time, but students using the computer version (particularly the high performers) needed significantly less time to complete the test. In addition, we found significant differences in guessing behavior. Low performers using the computer version guess significantly more than low-performing students in the paper-pencil version. Participants in computer-based tests are not at a disadvantage in terms of their test results. The computer-based test required less processing time. The reason for the longer processing time when using the paper-pencil version might be due to the time needed to write the answer down, controlling for transferring the answer correctly. It is still not known why students using the computer version (particularly low-performing

  12. Allocating Tactical High-Performance Computer (HPC) Resources to Offloaded Computation in Battlefield Scenarios

    Science.gov (United States)

    2013-12-01

    devices. Offloading solutions such as Cuckoo (12), MAUI (13), COMET (14), and ThinkAir (15) offload applications via Wi-Fi or 3G networks to servers or... References cited include the Soldier Smartphone Program (Information Week, 2010) and Kemp, R.; Palmer, N.; Kielmann, T.; Bal, H., Cuckoo: A Computation Offloading Framework for...

  13. 14th annual Results and Review Workshop on High Performance Computing in Science and Engineering

    CERN Document Server

    Nagel, Wolfgang E; Resch, Michael M; Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2011; High Performance Computing in Science and Engineering '11

    2012-01-01

    This book presents the state-of-the-art in simulation on supercomputers. Leading researchers present results achieved on systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2011. The reports cover all fields of computational science and engineering, ranging from CFD to computational physics and chemistry, to computer science, with a special emphasis on industrially relevant applications. Presenting results for both vector systems and microprocessor-based systems, the book allows readers to compare the performance levels and usability of various architectures. As HLRS

  14. L-band Synthetic Aperture Radar imagery performs better than optical datasets at retrieving woody fractional cover in deciduous, dry savannahs

    Science.gov (United States)

    Naidoo, Laven; Mathieu, Renaud; Main, Russell; Wessels, Konrad; Asner, Gregory P.

    2016-10-01

    Woody canopy cover (CC) is the simplest two-dimensional metric for assessing the presence of the woody component in savannahs, but detailed validated maps are not currently available for southern African savannahs. A number of international EO programs (including in savannah landscapes) advocate and use optical LandSAT imagery for regional to country-wide mapping of woody canopy cover. However, previous research has shown that L-band Synthetic Aperture Radar (SAR) performs well at retrieving woody canopy cover in southern African savannahs. This study's objective was to evaluate, compare and use in combination L-band ALOS PALSAR and LandSAT-5 TM data, in a Random Forest modelling environment, to assess the benefits of using LandSAT compared with ALOS PALSAR. Additional objectives included testing LandSAT-5 image seasonality, spectral vegetation indices and image textures for improved CC modelling. Results showed that LandSAT-5 imagery acquired in the summer and autumn seasons yielded the highest single-season modelling accuracies (R2 between 0.47 and 0.65, depending on the year), while the combination of multi-seasonal images yielded higher accuracies (R2 between 0.57 and 0.72). The derivation of spectral vegetation indices and image textures, and their combination with optical reflectance bands, provided minimal improvement, with no optical-only result exceeding the winter SAR L-band backscatter-only results (R2 of ∼0.8). The integration of seasonally appropriate LandSAT-5 image reflectance and L-band HH and HV backscatter data does provide a significant improvement for CC modelling at the higher end of model performance (R2 between 0.83 and 0.88), but we recommend L-band-only CC modelling for South African regions.
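
    The Random Forest modelling environment mentioned above can be sketched in a few lines: per-plot predictors (stand-ins here for L-band HH/HV backscatter and seasonal optical reflectance) are regressed against woody canopy cover and scored with R2 on held-out plots. The data below are synthetic and the variable names are assumptions made for illustration; they are not the study's plots or accuracies.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.metrics import r2_score
        from sklearn.model_selection import train_test_split

        # Random Forest regression of woody canopy cover on per-plot predictors. The
        # predictors below are synthetic stand-ins for L-band HH/HV backscatter and
        # seasonal optical reflectance; values and names are illustrative only.

        rng = np.random.default_rng(42)
        n = 500
        hh = rng.normal(-8.0, 2.0, n)        # "L-band HH backscatter" (dB)
        hv = rng.normal(-14.0, 2.0, n)       # "L-band HV backscatter" (dB)
        red = rng.normal(0.12, 0.03, n)      # "summer red reflectance"
        nir = rng.normal(0.30, 0.05, n)      # "summer NIR reflectance"
        cc = np.clip(3.0 * (hv + 20) + 10 * (nir - red) + rng.normal(0, 4, n), 0, 100)

        X = np.column_stack([hh, hv, red, nir])
        X_tr, X_te, y_tr, y_te = train_test_split(X, cc, test_size=0.3, random_state=0)
        rf = RandomForestRegressor(n_estimators=500, random_state=0).fit(X_tr, y_tr)
        print("R2 on held-out plots:", round(r2_score(y_te, rf.predict(X_te)), 2))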

  15. Computer-mediated communication: task performance and satisfaction.

    Science.gov (United States)

    Simon, Andrew F

    2006-06-01

    The author assessed satisfaction and performance on 3 tasks (idea generation, intellective, judgment) among 75 dyads (N = 150) working through 1 of 3 modes of communication (instant messaging, videoconferencing, face to face). The author based predictions on the Media Naturalness Theory (N. Kock, 2001, 2002) and on findings from past researchers (e.g., D. M. DeRosa, C. Smith, & D. A. Hantula, in press) of the interaction between tasks and media. The present author did not identify task performance differences, although satisfaction with the medium was lower among those dyads communicating through an instant-messaging system than among those interacting face to face or through videoconferencing. The findings support the Media Naturalness Theory. The author discussed them in relation to the participants' frequent use of instant messaging and their familiarity with new communication media.

  16. Measuring Human Performance within Computer Security Incident Response Teams

    Energy Technology Data Exchange (ETDEWEB)

    McClain, Jonathan T. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Silva, Austin Ray [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Avina, Glory Emmanuel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Forsythe, James C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Human performance has become a pertinent issue within cyber security. However, this research has been stymied by the limited availability of expert cyber security professionals. This is partly attributable to the ongoing workload faced by cyber security professionals, which is compounded by the limited number of qualified personnel and turnover of personnel across organizations. Additionally, it is difficult to conduct research, and particularly, openly published research, due to the sensitivity inherent to cyber operations at most organizations. As an alternative, the current research has focused on data collection during cyber security training exercises. These events draw individuals with a range of knowledge and experience extending from seasoned professionals to recent college graduates to college students. The current paper describes research involving data collection at two separate cyber security exercises. This data collection involved multiple measures which included behavioral performance based on human-machine transactions and questionnaire-based assessments of cyber security experience.

  17. A Debugging Standard for High-Performance Computing

    Directory of Open Access Journals (Sweden)

    Joan M. Francioni

    2000-01-01

    Full Text Available Throughout 1998, the High Performance Debugging Forum worked on defining a base level standard for high performance debuggers. The standard had to meet the sometimes conflicting constraints of being useful to users, realistically implementable by developers, and architecturally independent across multiple platforms. To meet criteria for timeliness, the standard had to be defined in one year and in such a way that it could be implemented within an additional year. The Forum was successful, and in November 1998 released Version 1 of the HPD Standard. Implementations of the standard are currently underway. This paper presents an overview of Version 1 of the standard and an analysis of the process by which the standard was developed. The status of implementation efforts and plans for follow-on efforts are discussed as well.

  18. Computational Fluid Dynamics Analysis of Butterfly Valve Performance Factors

    OpenAIRE

    Del Toro, Adam

    2012-01-01

    Butterfly valves are commonly used in industrial applications to control the internal flow of both compressible and incompressible fluids. A butterfly valve typically consists of a metal disc formed around a central shaft, which acts as its axis of rotation. As the valve's opening angle is increased from 0 degrees (fully closed) to 90 degrees (fully open), fluid is able to more readily flow past the valve. Characterizing a valve's performance factors, such as pressure drop, hydrodynamic torqu...

  19. Reliable High Performance Peta- and Exa-Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bronevetsky, G

    2012-04-02

    As supercomputers become larger and more powerful, they are growing increasingly complex. This is reflected both in the exponentially increasing numbers of components in HPC systems (LLNL is currently installing the 1.6 million core Sequoia system) as well as in the wide variety of software and hardware components that a typical system includes. At this scale it becomes infeasible to make each component sufficiently reliable to prevent regular faults somewhere in the system or to account for all possible cross-component interactions. The resulting faults and instability cause HPC applications to crash, perform sub-optimally or even produce erroneous results. As supercomputers continue to approach Exascale performance and full system reliability becomes prohibitively expensive, we will require novel techniques to bridge the gap between the lower reliability provided by hardware systems and users' unchanging need for consistent performance and reliable results. Previous research on HPC system reliability has developed various techniques for tolerating and detecting various types of faults. However, these techniques have seen very limited real applicability because of our poor understanding of how real systems are affected by complex faults such as soft fault-induced bit flips or performance degradations. Prior work on such techniques has had very limited practical utility because it has generally focused on analyzing the behavior of entire software/hardware systems both during normal operation and in the face of faults. Because such behaviors are extremely complex, such studies have only produced coarse behavioral models of limited sets of software/hardware system stacks. Since this provides little insight into the many different system stacks and applications used in practice, this work has had little real-world impact. My project addresses this problem by developing a modular methodology to analyze the behavior of applications and systems during both normal and faulty

  20. Influences of weather phenomena on automotive laser radar systems

    Science.gov (United States)

    Rasshofer, R. H.; Spies, M.; Spies, H.

    2011-07-01

    Laser radar (lidar) sensors provide outstanding angular resolution along with highly accurate range measurements, and thus they have been proposed as part of a high performance perception system for advanced driver assistance functions. Based on optical signal transmission and reception, laser radar systems are influenced by weather phenomena. This work provides an overview of the different physical principles responsible for laser radar signal disturbance and theoretical investigations for estimating their influence. Finally, the transmission models are applied to signal generation in a newly developed laser radar target simulator, providing, to our knowledge, the world's first HIL test capability for automotive laser radar systems.