WorldWideScience

Sample records for performance computing electronic

  1. Effectiveness of an Electronic Performance Support System on Computer Ethics and Ethical Decision-Making Education

    Science.gov (United States)

    Kert, Serhat Bahadir; Uz, Cigdem; Gecu, Zeynep

    2014-01-01

    This study examined the effectiveness of an electronic performance support system (EPSS) on computer ethics education and the ethical decision-making processes. There were five different phases to this ten month study: (1) Writing computer ethics scenarios, (2) Designing a decision-making framework (3) Developing EPSS software (4) Using EPSS in a…

  2. Evaluating Electronic Customer Relationship Management Performance: Case Studies from Persian Automotive and Computer Industry

    OpenAIRE

    Safari, Narges; Safari, Fariba; Olesen, Karin; Shahmehr, Fatemeh

    2016-01-01

    This research paper investigates the influence of industry on electronic customer relationship management (e-CRM) performance. A case study approach with two cases was applied to evaluate the influence of e-CRM on customer behavioral and attitudinal loyalty along with the customer pyramid. The cases covered two industries, computer and automotive. For investigating customer behavioral loyalty and the customer pyramid, the companies' databases were computed, while for examining custome...

  3. Computational Nanotechnology Molecular Electronics, Materials and Machines

    Science.gov (United States)

    Srivastava, Deepak; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    This presentation covers research being performed on computational nanotechnology, carbon nanotubes and fullerenes at the NASA Ames Research Center. Topics covered include: nanomechanics of nanomaterials, nanotubes and composite materials, molecular electronics with nanotube junctions, kinky chemistry, and nanotechnology for solid-state quantum computers using fullerenes.

  4. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens

    International Nuclear Information System (INIS)

    Oelerich, Jan Oliver; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D.; Volz, Kerstin

    2017-01-01

    Highlights: • We present STEMsalabim, a modern implementation of the multislice algorithm for simulation of STEM images. • Our package is highly parallelizable on high-performance computing clusters, combining shared and distributed memory architectures. • With STEMsalabim, computationally and memory expensive STEM image simulations can be carried out within reasonable time.

    Abstract: We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space.

  5. STEMsalabim: A high-performance computing cluster friendly code for scanning transmission electron microscopy image simulations of thin specimens

    Energy Technology Data Exchange (ETDEWEB)

    Oelerich, Jan Oliver, E-mail: jan.oliver.oelerich@physik.uni-marburg.de; Duschek, Lennart; Belz, Jürgen; Beyer, Andreas; Baranovskii, Sergei D.; Volz, Kerstin

    2017-06-15

    Highlights: • We present STEMsalabim, a modern implementation of the multislice algorithm for simulation of STEM images. • Our package is highly parallelizable on high-performance computing clusters, combining shared and distributed memory architectures. • With STEMsalabim, computationally and memory expensive STEM image simulations can be carried out within reasonable time.

    Abstract: We present a new multislice code for the computer simulation of scanning transmission electron microscope (STEM) images based on the frozen lattice approximation. Unlike existing software packages, the code is optimized to perform well on highly parallelized computing clusters, combining distributed and shared memory architectures. This enables efficient calculation of large lateral scanning areas of the specimen within the frozen lattice approximation and fine-grained sweeps of parameter space.
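
    The abstracts above describe a hybrid parallelization (distributed memory across nodes, shared memory within a node) but give no implementation detail, so the following is only a minimal, hypothetical sketch of how a multislice STEM scan might be split that way, using mpi4py plus a thread pool in Python. The function names, grid size and partitioning scheme are illustrative assumptions, not STEMsalabim's actual API, and a production code would use compiled threads (e.g., OpenMP) rather than Python threads for the shared-memory level.

        # Hypothetical sketch: distribute STEM probe positions over MPI ranks,
        # then fan each rank's share out to shared-memory worker threads.
        # Not STEMsalabim code; it only illustrates the hybrid parallel pattern.
        from concurrent.futures import ThreadPoolExecutor

        import numpy as np
        from mpi4py import MPI


        def simulate_probe_position(pos):
            # Placeholder for a full multislice propagation at one scan position.
            x, y = pos
            return float(np.hypot(x, y))  # dummy "intensity"


        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        # Full scan grid (assumed 64 x 64 probe positions).
        positions = [(x, y) for x in range(64) for y in range(64)]
        my_positions = positions[rank::size]              # distributed-memory split

        with ThreadPoolExecutor(max_workers=4) as pool:   # shared-memory split
            my_intensities = list(pool.map(simulate_probe_position, my_positions))

        # Gather the partial results on rank 0 and assemble the STEM image.
        gathered = comm.gather((my_positions, my_intensities), root=0)
        if rank == 0:
            image = np.zeros((64, 64))
            for pos_list, val_list in gathered:
                for (x, y), val in zip(pos_list, val_list):
                    image[x, y] = val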

  6. Dynamism in Electronic Performance Support Systems.

    Science.gov (United States)

    Laffey, James

    1995-01-01

    Describes a model for dynamic electronic performance support systems based on NNAble, a system developed by the training group at Apple Computer. Principles for designing dynamic performance support are discussed, including a systems approach, performer-centered design, awareness of situated cognition, organizational memory, and technology use.…

  7. GPU-accelerated computation of electron transfer.

    Science.gov (United States)

    Höfinger, Siegfried; Acocella, Angela; Pop, Sergiu C; Narumi, Tetsu; Yasuoka, Kenji; Beu, Titus; Zerbetto, Francesco

    2012-11-05

    Electron transfer is a fundamental process that can be studied with the help of computer simulation. The underlying quantum mechanical description renders the problem a computationally intensive application. In this study, we probe the graphics processing unit (GPU) for suitability to this type of problem. Time-critical components are identified via profiling of an existing implementation and several different variants are tested involving the GPU at increasing levels of abstraction. A publicly available library supporting basic linear algebra operations on the GPU turns out to accelerate the computation approximately 50-fold with minor dependence on actual problem size. The performance gain does not compromise numerical accuracy and is of significant value for practical purposes. Copyright © 2012 Wiley Periodicals, Inc.
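
    A rough illustration of the acceleration strategy described above, routing a time-critical dense linear algebra operation through a GPU BLAS library, is sketched below. CuPy is used only as a convenient stand-in for a GPU-backed BLAS (it is not the library used in the study), and the matrix size is an arbitrary assumption.

        # Hedged sketch: offload a dense matrix product (a typical BLAS-3 hot spot)
        # to the GPU and check it against the CPU result.
        import numpy as np

        n = 2048
        a = np.random.rand(n, n)
        b = np.random.rand(n, n)

        cpu_result = a @ b                                # CPU BLAS via NumPy

        try:
            import cupy as cp
            a_gpu, b_gpu = cp.asarray(a), cp.asarray(b)   # host -> device transfer
            gpu_result = cp.asnumpy(a_gpu @ b_gpu)        # device BLAS, device -> host
            print("max abs deviation:", np.abs(cpu_result - gpu_result).max())
        except ImportError:
            print("CuPy not available; skipping the GPU path")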

  8. Introduction to electronic analogue computers

    CERN Document Server

    Wass, C A A

    1965-01-01

    Introduction to Electronic Analogue Computers, Second Revised Edition is based on the ideas and experience of a group of workers at the Royal Aircraft Establishment, Farnborough, Hants. This edition is almost entirely the work of Mr. K. C. Garner, of the College of Aeronautics, Cranfield. As various advances have been made in the technology involving electronic analogue computers, this book presents discussions on the said progress, including some acquaintance with the capabilities of electronic circuits and equipment. This text also provides a mathematical background including simple differen

  9. High-performance scientific computing in the cloud

    Science.gov (United States)

    Jorissen, Kevin; Vila, Fernando; Rehr, John

    2011-03-01

    Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.

  10. Method for linking a media work to perform an action, involves linking an electronic media work with a reference electronic media work identifier associated with a reference electronic media work using an approximate neighbor search

    DEFF Research Database (Denmark)

    2016-01-01

    A computer-implemented method including the steps of: receiving, by a computer system including at least one computer, a media work uploaded from a first electronic device; receiving, by the computer system from a second electronic device, a tag associated with the media work having a media work identifier; storing, by the computer system, the media work identifier and the associated tag; obtaining, by the computer system from a third electronic device, a query related to the associated tag; correlating, by the computer system, the query with associated information related to an action to be performed; and providing, from the computer system to the third electronic device, the associated information to be used in performing the action.

  11. Two-parametric model of electron beam in computational dosimetry for radiation processing

    International Nuclear Information System (INIS)

    Lazurik, V.M.; Lazurik, V.T.; Popov, G.; Zimek, Z.

    2016-01-01

    Computer simulation of the irradiation of various materials with an electron beam (EB) can be applied to correct and control the performance of radiation processing installations. Electron beam energy measurement methods are described in the international standards, and the results of such measurements can be extended by implementing computational dosimetry. The authors have developed a computational method for determination of EB energy based on two-parametric fitting of a semi-empirical model for the depth dose distribution initiated by a mono-energetic electron beam. The analysis of a number of experiments shows that the described method can effectively account for random displacements arising from the use of an aluminum wedge with a continuous strip of dosimetric film and minimize the uncertainty of the electron energy evaluated from the experimental data. The two-parametric fitting method is proposed for determination of the electron beam model parameters. These model parameters are as follows: E0, the energy of a mono-energetic and mono-directional electron source, and X0, the thickness of the aluminum layer located in front of the irradiated object. This yields baseline data on the characteristics of the electron beam, which can later be applied for computer modeling of the irradiation process. Model parameters defined in the international standards (such as Ep, the most probable energy, and Rp, the practical range) can be linked with the characteristics of the two-parametric model (E0, X0), which allows the electron irradiation process to be simulated. The data obtained from the semi-empirical model were checked against a set of experimental results. The proposed two-parametric model for electron beam energy evaluation and the estimation of accuracy for computational dosimetry methods based on the developed model are discussed.

    Highlights: • Experimental and computational methods of electron energy evaluation. • Development
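
    The abstract does not give the semi-empirical depth-dose formula itself, so the sketch below illustrates only the two-parametric fitting step: a placeholder depth-dose model D(x; E0, X0) is fitted to wedge measurements with a least-squares routine. Both the model function and the data are invented for illustration.

        # Hedged sketch of two-parametric fitting of a depth-dose curve.
        # depth_dose() is a placeholder, NOT the paper's semi-empirical model;
        # only the fitting of the two parameters (E0, X0) is illustrated.
        import numpy as np
        from scipy.optimize import curve_fit


        def depth_dose(x, e0, x0):
            # Placeholder shape: dose rises and then falls with depth, shifted by
            # the assumed aluminum pre-layer thickness x0 and scaled by energy e0.
            z = np.clip(x - x0, 0.0, None)
            return e0 * z * np.exp(-(z / (0.4 * e0)) ** 2)


        # Depths and doses along the dosimetric film (made-up illustrative data).
        depths = np.linspace(0.0, 5.0, 25)
        doses = depth_dose(depths, 10.0, 0.3) * (1 + 0.02 * np.random.randn(depths.size))

        popt, _ = curve_fit(depth_dose, depths, doses, p0=(8.0, 0.1))
        e0_fit, x0_fit = popt
        print(f"fitted E0 = {e0_fit:.2f}, X0 = {x0_fit:.3f}")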

  12. Brain inspired high performance electronics on flexible silicon

    KAUST Repository

    Sevilla, Galo T.

    2014-06-01

    The brain's stunning speed, energy efficiency and massive parallelism make it the role model for upcoming high performance computation systems. Although human brain components are a million times slower than state-of-the-art silicon industry components [1], they can perform 10^16 operations per second while consuming less power than an electric light bulb. To perform the same amount of computation with today's most advanced computers, the output of an entire power station would be needed. In that sense, to obtain brain-like computation, ultra-fast devices with ultra-low power consumption will have to be integrated in extremely reduced areas, achievable only if the brain's folded structure is mimicked. Therefore, to allow brain-inspired computation, a flexible and transparent platform will be needed to achieve foldable structures and their integration on asymmetric surfaces. In this work, we show a new method to fabricate 3D and planar FET architectures in a flexible and semitransparent silicon fabric without compromising performance and while maintaining the cost/yield advantage offered by silicon-based electronics.

  13. Rational design of metal-organic electronic devices: A computational perspective

    Science.gov (United States)

    Chilukuri, Bhaskar

    Organic and organometallic electronic materials continue to attract considerable attention among researchers due to their cost effectiveness, high flexibility, low temperature processing conditions and the continuous emergence of new semiconducting materials with tailored electronic properties. In addition, organic semiconductors can be used in a variety of important technological devices such as solar cells, field-effect transistors (FETs), flash memory, radio frequency identification (RFID) tags, light emitting diodes (LEDs), etc. However, organic materials have thus far not achieved the reliability and carrier mobility obtainable with inorganic silicon-based devices. Hence, there is a need to find alternative electronic materials other than organic semiconductors to overcome the problems of inferior stability and performance. In this dissertation, I research the development of new transition metal based electronic materials which, due to the presence of metal-metal, metal-pi, and pi-pi interactions, may give rise to superior electronic and chemical properties versus their organic counterparts. Specifically, I performed computational modeling studies on platinum based charge transfer complexes and d10 cyclo-[M(mu-L)]3 trimers (M = Ag, Au and L = monoanionic bidentate bridging (C/N~C/N) ligand). The research is aimed at guiding experimental chemists to make rational choices of metals, ligands, and substituents in synthesizing novel organometallic electronic materials. Furthermore, the calculations presented here propose novel ways to tune the geometric, electronic, spectroscopic, and conduction properties of semiconducting materials. In addition to novel material development, electronic device performance can be improved by making a judicious choice of device components. I have studied the interfaces of a p-type metal-organic semiconductor, viz. the cyclo-[Au(mu-Pz)]3 trimer, with metal electrodes at atomic and surface levels. This work was aimed to guide the device

  14. Computer electronics made simple computerbooks

    CERN Document Server

    Bourdillon, J F B

    1975-01-01

    Computer Electronics: Made Simple Computerbooks presents the basics of computer electronics and explains how a microprocessor works. Various types of PROMs, static RAMs, dynamic RAMs, floppy disks, and hard disks are considered, along with microprocessor support devices made by Intel, Motorola and Zilog. Bit slice logic and some AMD bit slice products are also described. Comprised of 14 chapters, this book begins with an introduction to the fundamentals of hardware design, followed by a discussion on the basic building blocks of hardware (NAND, NOR, AND, OR, NOT, XOR); tools and equipment that

  15. Computation of electron cloud diagnostics and mitigation in the main injector

    International Nuclear Information System (INIS)

    Veitzer, S A; Cary, J R; Stoltz, P H; LeBrun, P; Spentzouris, P; Amundson, J F

    2009-01-01

    High-performance computations on Blue Gene/P at Argonne's Leadership Computing Facility have been used to determine phase shifts induced in injected RF diagnostics as a function of electron cloud density in the Main Injector. Inversion of the relationship between electron cloud parameters and induced phase shifts allows us to predict electron cloud density and evolution over many bunch periods. Long time-scale simulations using Blue Gene have allowed us to measure cloud evolution patterns under the influence of beam propagation with realistic physical parameterizations, such as elliptical beam pipe geometry, self-consistent electromagnetic fields, space charge, secondary electron emission, and the application of arbitrary external magnetic fields. Simultaneously, we are able to simulate the use of injected microwave diagnostic signals to measure electron cloud density, and the effectiveness of various mitigation techniques such as surface coating and the application of confining magnetic fields. These simulations provide a baseline for both RF electron cloud diagnostic design and accelerator fabrication in order to measure electron clouds and mitigate the adverse effects of such clouds on beam propagation.
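
    The inversion mentioned above, from a measured phase shift of the injected RF signal back to an electron cloud density, can be illustrated with the textbook cold-plasma, free-space relation delta_phi ≈ omega_p^2 L / (2 c omega), with omega_p^2 = n_e e^2 / (eps0 m_e). The sketch below applies that relation with assumed frequency, length and phase-shift values; the simulations described in the abstract account for the waveguide geometry and self-consistent fields, which this one-line inversion ignores.

        # Hedged sketch: invert a measured microwave phase shift into an average
        # electron cloud density using the cold-plasma, free-space approximation.
        # All input values are assumptions for illustration only.
        import math

        e = 1.602176634e-19      # elementary charge, C
        m_e = 9.1093837015e-31   # electron mass, kg
        eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
        c = 2.99792458e8         # speed of light, m/s

        f_rf = 2.0e9             # injected diagnostic frequency, Hz (assumed)
        length = 50.0            # interaction length, m (assumed)
        delta_phi = 0.05         # measured phase shift, rad (assumed)

        omega = 2 * math.pi * f_rf
        n_e = 2 * eps0 * m_e * c * omega * delta_phi / (e ** 2 * length)
        print(f"inferred average electron density: {n_e:.2e} m^-3")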

  16. The performance of low-cost commercial cloud computing as an alternative in computational chemistry.

    Science.gov (United States)

    Thackston, Russell; Fortenberry, Ryan C

    2015-05-05

    The growth of commercial cloud computing (CCC) as a viable means of computational infrastructure is largely unexplored for the purposes of quantum chemistry. In this work, the PSI4 suite of computational chemistry programs is installed on five different types of Amazon Web Services CCC platforms. The performance for a set of electronically excited state single-point energies is compared between these CCC platforms and typical, "in-house" physical machines. Further considerations are made for the number of cores or virtual CPUs (vCPUs, for the CCC platforms), but no considerations are made for full parallelization of the program (even though parallelization of the BLAS library is implemented), complete high-performance computing cluster utilization, or steal time. Even with this most pessimistic view of the computations, CCC resources are shown to be more cost effective for significant numbers of typical quantum chemistry computations. Large numbers of large computations are still best utilized by more traditional means, but smaller-scale research may be more effectively undertaken through CCC services. © 2015 Wiley Periodicals, Inc.
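
    The cost-effectiveness argument above can be made concrete with a back-of-the-envelope comparison; the sketch below uses entirely hypothetical prices, job counts and run times (none appear in the abstract) simply to show the form of the calculation.

        # Hedged sketch: cost of a batch of single-point energy jobs on commercial
        # cloud instances vs. an owned workstation. All numbers are hypothetical
        # placeholders, not values from the study.
        jobs = 500                     # small quantum chemistry jobs (assumed)
        hours_per_job = 2.0            # wall time per job (assumed)

        cloud_rate = 0.40              # $/instance-hour, assumed on-demand price
        cloud_cost = jobs * hours_per_job * cloud_rate

        workstation_price = 6000.0     # $ purchase price (assumed)
        lifetime_hours = 3 * 365 * 24  # 3-year useful life, always available
        workstation_cost = workstation_price * (jobs * hours_per_job) / lifetime_hours

        print(f"cloud:       ${cloud_cost:,.0f}")
        print(f"workstation: ${workstation_cost:,.0f} (hardware amortization only)")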

  17. Effects of electronic outlining on students’ argumentative writing performance

    NARCIS (Netherlands)

    De Smet, Milou; Broekkamp, Hein; Brand-Gruwel, Saskia; Kirschner, Paul A.

    2011-01-01

    De Smet, M. J. R., Broekkamp, H., Brand-Gruwel, S., & Kirschner, P. A. (2011). Effects of electronic outlining on students’ argumentative writing performance. Journal of Computer Assisted Learning, 27(6), 557-574. doi: 10.1111/j.1365-2729.2011.00418.x

  18. The 3rd International Workshop on Computational Electronics

    Science.gov (United States)

    Goodnick, Stephen M.

    1994-09-01

    The Third International Workshop on Computational Electronics (IWCE) was held at the Benson Hotel in downtown Portland, Oregon, on May 18, 19, and 20, 1994. The workshop was devoted to a broad range of topics in computational electronics related to the simulation of electronic transport in semiconductors and semiconductor devices, particularly those which use large computational resources. The workshop was supported by the National Science Foundation (NSF), the Office of Naval Research and the Army Research Office, as well as local support from the Oregon Joint Graduate Schools of Engineering and the Oregon Center for Advanced Technology Education. There were over 100 participants in the Portland workshop, of which more than one quarter represented research groups outside of the United States from Austria, Canada, France, Germany, Italy, Japan, Switzerland, and the United Kingdom. A total of 81 papers were presented at the workshop: 9 invited talks, 26 oral presentations and 46 poster presentations. The emphasis of the contributions reflected the interdisciplinary nature of computational electronics, with researchers from the Chemistry, Computer Science, Mathematics, Engineering, and Physics communities participating in the workshop.

  19. An Analog Computer for Electronic Engineering Education

    Science.gov (United States)

    Fitch, A. L.; Iu, H. H. C.; Lu, D. D. C.

    2011-01-01

    This paper describes a compact analog computer and proposes its use in electronic engineering teaching laboratories to develop student understanding of applications in analog electronics, electronic components, engineering mathematics, control engineering, safe laboratory and workshop practices, circuit construction, testing, and maintenance. The…

  20. Electronic Computer Originated Mail Service

    Science.gov (United States)

    Seto, Takao

    Electronic mail originated by computer is a new communication medium, a product of combining traditional mail with electrical communication. An experimental service of this type of mail started on June 10, 1985 at the Ministry of Posts and Telecommunications. Its place among various communication media, its comparison with facsimile-type electronic mail, and the status of electronic mail in foreign countries are described. The service is then outlined, centering on the system organization and the services offered. Additional services to be introduced in the near future are also mentioned.

  1. Human Computer Music Performance

    OpenAIRE

    Dannenberg, Roger B.

    2012-01-01

    Human Computer Music Performance (HCMP) is the study of music performance by live human performers and real-time computer-based performers. One goal of HCMP is to create a highly autonomous artificial performer that can fill the role of a human, especially in a popular music setting. This will require advances in automated music listening and understanding, new representations for music, techniques for music synchronization, real-time human-computer communication, music generation, sound synt...

  2. Efficient and Flexible Computation of Many-Electron Wave Function Overlaps.

    Science.gov (United States)

    Plasser, Felix; Ruckenbauer, Matthias; Mai, Sebastian; Oppel, Markus; Marquetand, Philipp; González, Leticia

    2016-03-08

    A new algorithm for the computation of the overlap between many-electron wave functions is described. This algorithm allows for the extensive use of recurring intermediates and thus provides high computational efficiency. Because of the general formalism employed, overlaps can be computed for varying wave function types, molecular orbitals, basis sets, and molecular geometries. This paves the way for efficiently computing nonadiabatic interaction terms for dynamics simulations. In addition, other application areas can be envisaged, such as the comparison of wave functions constructed at different levels of theory. Aside from explaining the algorithm and evaluating the performance, a detailed analysis of the numerical stability of wave function overlaps is carried out, and strategies for overcoming potential severe pitfalls due to displaced atoms and truncated wave functions are presented.
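
    For the simplest case of two single-determinant wave functions built from orthonormal molecular orbitals, the overlap reduces to the determinant of the occupied-occupied MO overlap matrix; the sketch below shows only that textbook special case. The paper's algorithm treats general CI-type wave functions and reuses intermediates across many determinant pairs, which is not reproduced here.

        # Hedged sketch: overlap of two single Slater determinants,
        #   <Phi_A|Phi_B> = det(S_occ),  S_occ[i, j] = <phi_i^A | phi_j^B>.
        # Textbook special case only, not the general algorithm of the paper.
        import numpy as np

        n_ao, n_occ = 10, 4
        rng = np.random.default_rng(0)

        s_ao = np.eye(n_ao)                   # AO overlap (orthonormal AOs assumed)
        c_a, _ = np.linalg.qr(rng.standard_normal((n_ao, n_occ)))  # occupied MOs, geometry A
        c_b, _ = np.linalg.qr(rng.standard_normal((n_ao, n_occ)))  # occupied MOs, geometry B

        s_occ = c_a.T @ s_ao @ c_b            # occupied-occupied MO overlap block
        overlap = np.linalg.det(s_occ)
        print("determinant overlap:", overlap)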

  3. Comparing two iteration algorithms of Broyden electron density mixing through an atomic electronic structure computation

    International Nuclear Information System (INIS)

    Zhang Man-Hong

    2016-01-01

    By performing the electronic structure computation of a Si atom, we compare two iteration algorithms for Broyden electron density mixing in the literature. One was proposed by Johnson and implemented in the well-known VASP code. The other was given by Eyert. We solve the Kohn-Sham equation by using a conventional outward/inward integration of the differential equation and then connect the two parts of the solution at the classical turning points, which differs from the matrix eigenvalue solution method used in the VASP code. Compared to Johnson's algorithm, the one proposed by Eyert needs fewer total iterations. (paper)
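
    Density mixing of the kind compared above stabilizes and accelerates the self-consistent field loop; as a minimal point of reference, the sketch below uses plain linear mixing with a fixed factor. Broyden schemes such as Johnson's and Eyert's replace that fixed factor with an approximate Jacobian built from the history of previous iterations, which is not implemented here.

        # Hedged sketch: the role of density mixing in a self-consistent loop,
        # using simple linear mixing as a stand-in for the Broyden schemes
        # compared in the paper. solve_kohn_sham() is a placeholder.
        import numpy as np


        def solve_kohn_sham(rho_in):
            # Stands in for solving the Kohn-Sham equation for a given input
            # density and returning the resulting output density.
            return np.tanh(rho_in) + 0.1


        rho = np.zeros(100)        # initial guess for the density on a radial grid
        alpha = 0.3                # fixed mixing factor (Broyden would adapt this)

        for iteration in range(200):
            rho_out = solve_kohn_sham(rho)
            residual = np.linalg.norm(rho_out - rho)
            rho = rho + alpha * (rho_out - rho)        # linear mixing step
            if residual < 1e-8:
                print(f"converged after {iteration + 1} iterations")
                break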

  4. A computer simulation of auger electron spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Ragheb, M S; Bakr, M H.S. [Dept. of Accelerators and Ion Sources, Division of Basic Nuclear Sciences, NRC, Atomic Energy Authority, (Egypt)

    1997-12-31

    A simulation study of Auger electron spectroscopy was performed to reveal how far the dependencies between the different parameters governing the experimental behavior affect the peaks. The experimental procedure, followed by the AC modulation technique, was reproduced by means of a computer program. It generates the assumed output Auger electron peaks, exposes them to a retarding AC modulated field and collects the resulting modulated signals. The program performs the lock-in treatment in order to demodulate the signals, revealing the Auger peaks. It analyzes the spectrum obtained, giving the peak positions and energies. Comparison between the simulation results and the experimental data showed good agreement. The peaks of the spectrum obtained depend upon the amplitude, frequency and resolution of the applied modulated signal. The peak shape is affected by the rise time, the slope and the starting potential of the retarding field. 4 figs.

  5. Have Computer, Will Not Travel: Meeting Electronically.

    Science.gov (United States)

    Kurland, Norman D.

    1983-01-01

    Beginning with two different scenarios depicting a face-to-face conference on the one hand and, on the other, a computer or electronic conference, the author argues the advantages of electronic conferencing and describes some of its uses. (JBM)

  6. Maximal thickness of the normal human pericardium assessed by electron-beam computed tomography

    International Nuclear Information System (INIS)

    Delille, J.P.; Hernigou, A.; Sene, V.; Chatellier, G.; Boudeville, J.C.; Challande, P.; Plainfosse, M.C.

    1999-01-01

    The purpose of this study was to determine the maximal value of normal pericardial thickness with an electron-beam computed tomography unit allowing fast scan times of 100 ms to reduce cardiac motion artifacts. Electron-beam computed tomography was performed in 260 patients with hypercholesterolemia and/or hypertension, as these pathologies have no effect on pericardial thickness. The pixel size was 0.5 mm. Measurements could be performed in front of the right ventricle, the right atrioventricular groove, the right atrium, the left ventricle, and the interventricular groove. Maximal thickness of normal pericardium was defined at the 95th percentile. Inter-observer and intra-observer reproducibility studies were assessed from additional CT scans by the Bland and Altman method [24]. The maximal thickness of the normal pericardium was 2 mm for 95 % of cases. For the reproducibility studies, there was no significant relationship between the inter-observer and intra-observer measurements, but all pericardial thickness measurements were ≤ 1.6 mm. Using electron-beam computed tomography, which assists in decreasing substantially cardiac motion artifacts, the threshold of detection of thickened pericardium is statistically established as being 2 mm for 95 % of the patients with hypercholesterolemia and/or hypertension. However, the spatial resolution available prevents a reproducible measure of the real thickness of thin pericardium. (orig.)
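
    The two statistical steps used above, taking the 95th percentile of the cohort as the upper limit of normal thickness and checking observer agreement with the Bland and Altman method, can be written down compactly; the arrays in the sketch below are invented placeholders used only to show the calculations, not the study's data.

        # Hedged sketch of the two analyses mentioned in the abstract:
        # (1) 95th percentile of pericardial thickness over the cohort,
        # (2) Bland-Altman bias and limits of agreement between two observers.
        import numpy as np

        rng = np.random.default_rng(1)
        thickness = rng.gamma(shape=6.0, scale=0.2, size=260)     # mm, placeholder

        upper_normal = np.percentile(thickness, 95)
        print(f"95th percentile thickness: {upper_normal:.1f} mm")

        obs1 = thickness[:40] + rng.normal(0.0, 0.1, 40)          # observer 1 re-reads
        obs2 = thickness[:40] + rng.normal(0.0, 0.1, 40)          # observer 2 re-reads
        diff = obs1 - obs2
        bias = diff.mean()
        loa = 1.96 * diff.std(ddof=1)
        print(f"Bland-Altman bias {bias:.2f} mm, limits of agreement ±{loa:.2f} mm")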

  7. Capabilities and Advantages of Cloud Computing in the Implementation of Electronic Health Record.

    Science.gov (United States)

    Ahmadi, Maryam; Aslani, Nasim

    2018-01-01

    With regard to the high cost of the Electronic Health Record (EHR), in recent years the use of new technologies, in particular cloud computing, has increased. The purpose of this study was to systematically review the studies conducted in the field of cloud computing. The present study was a systematic review conducted in 2017. Searches were performed in the Scopus, Web of Science, IEEE, PubMed and Google Scholar databases using combinations of keywords. From the 431 articles initially retrieved, 27 articles were selected for review after applying the inclusion and exclusion criteria. Data gathering was done with a self-made checklist and analyzed by the content analysis method. The findings of this study showed that cloud computing is a very widespread technology. It covers domains such as cost, security and privacy, scalability, mutual performance and interoperability, implementation platform and independence of cloud computing, ability to search and explore, reducing errors and improving quality, structure, flexibility and sharing ability, all of which make it effective for the electronic health record. According to the findings of the present study, the capabilities of cloud computing are useful in implementing EHRs in a variety of contexts. It also provides wide opportunities for managers, analysts and providers of health information systems. Considering the advantages and domains of cloud computing in the establishment of EHRs, it is recommended to use this technology.

  8. Noninvasive coronary angioscopy using electron beam computed tomography and multidetector computed tomography

    NARCIS (Netherlands)

    van Ooijen, PMA; Nieman, K; de Feyter, PJ; Oudkerk, M

    2002-01-01

    With the advent of noninvasive coronary imaging techniques like multidetector computed tomography and electron beam computed tomography, new representation methods such as intracoronary visualization have been introduced. We explore the possibilities of these novel visualization techniques and

  9. Computational Benchmarking for Ultrafast Electron Dynamics: Wave Function Methods vs Density Functional Theory.

    Science.gov (United States)

    Oliveira, Micael J T; Mignolet, Benoit; Kus, Tomasz; Papadopoulos, Theodoros A; Remacle, F; Verstraete, Matthieu J

    2015-05-12

    Attosecond electron dynamics in small- and medium-sized molecules, induced by an ultrashort strong optical pulse, is studied computationally for a frozen nuclear geometry. The importance of exchange and correlation effects on the nonequilibrium electron dynamics induced by the interaction of the molecule with the strong optical pulse is analyzed by comparing the solution of the time-dependent Schrödinger equation based on the correlated field-free stationary electronic states computed with the equation-of-motion coupled cluster singles and doubles and the complete active space multi-configurational self-consistent field methodologies on one hand, and various functionals in real-time time-dependent density functional theory (TDDFT) on the other. We aim to evaluate the performance of the latter approach, which is very widely used for nonlinear absorption processes and whose computational cost has a more favorable scaling with the system size. We focus on LiH as a toy model for a nontrivial molecule and show that our conclusions carry over to larger molecules, exemplified by ABCU (C10H19N). The molecules are probed with IR and UV pulses whose intensities are not strong enough to significantly ionize the system. By comparing the evolution of the time-dependent field-free electronic dipole moment, as well as its Fourier power spectrum, we show that TD-DFT performs qualitatively well in most cases. Contrary to previous studies, we find almost no changes in the TD-DFT excitation energies when excited states are populated. Transitions between states of different symmetries are induced using pulses polarized in different directions. We observe that the performance of TD-DFT does not depend on the symmetry of the states involved in the transition.
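
    The observable compared between the wave-function and TDDFT propagations above is the time-dependent dipole moment and its Fourier power spectrum; a generic version of that post-processing step is sketched below. The dipole trace is synthetic and the time step is an assumption, so the numbers have no connection to the benchmark itself.

        # Hedged sketch: Fourier power spectrum of a time-dependent dipole moment,
        # the quantity used to compare the propagation methods. The dipole signal
        # here is synthetic, not the output of either electronic-structure method.
        import numpy as np

        dt = 0.05                               # time step in fs (assumed)
        t = np.arange(2000) * dt
        dipole = 0.3 * np.sin(2 * np.pi * 0.8 * t) + 0.1 * np.sin(2 * np.pi * 2.1 * t)

        window = np.hanning(t.size)             # reduce spectral leakage
        spectrum = np.abs(np.fft.rfft(dipole * window)) ** 2
        freqs = np.fft.rfftfreq(t.size, d=dt)   # in 1/fs; multiply by ~4.136 for eV

        peak = freqs[np.argmax(spectrum[1:]) + 1]
        print(f"dominant excitation frequency: {peak:.2f} 1/fs")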

  10. A computer-controlled conformal radiotherapy system. IV: Electronic chart

    International Nuclear Information System (INIS)

    Fraass, Benedick A.; McShan, Daniel L.; Matrone, Gwynne M.; Weaver, Tamar A.; Lewis, James D.; Kessler, Marc L.

    1995-01-01

    Purpose: The design and implementation of a system for electronically tracking relevant plan, prescription, and treatment data for computer-controlled conformal radiation therapy is described. Methods and Materials: The electronic charting system is implemented on a computer cluster coupled by high-speed networks to computer-controlled therapy machines. A methodical approach to the specification and design of an integrated solution has been used in developing the system. The electronic chart system is designed to allow identification and access of patient-specific data including treatment-planning data, treatment prescription information, and charting of doses. An in-house developed database system is used to provide an integrated approach to the database requirements of the design. A hierarchy of databases is used for both centralization and distribution of the treatment data for specific treatment machines. Results: The basic electronic database system has been implemented and has been in use since July 1993. The system has been used to download and manage treatment data on all patients treated on our first fully computer-controlled treatment machine. To date, electronic dose charting functions have not been fully implemented clinically, requiring the continued use of paper charting for dose tracking. Conclusions: The routine clinical application of complex computer-controlled conformal treatment procedures requires the management of large quantities of information for describing and tracking treatments. An integrated and comprehensive approach to this problem has led to a full electronic chart for conformal radiation therapy treatments

  11. Computer-Related Task Performance

    DEFF Research Database (Denmark)

    Longstreet, Phil; Xiao, Xiao; Sarker, Saonee

    2016-01-01

    The existing information system (IS) literature has acknowledged computer self-efficacy (CSE) as an important factor contributing to enhancements in computer-related task performance. However, the empirical results of CSE on performance have not always been consistent, and increasing an individual's CSE is often a cumbersome process. Thus, we introduce the theoretical concept of self-prophecy (SP) and examine how this social influence strategy can be used to improve computer-related task performance. Two experiments are conducted to examine the influence of SP on task performance. Results show that SP and CSE interact to influence performance. Implications are then discussed in terms of organizations' ability to increase performance.

  12. Computer conferencing: the "nurse" in the "Electronic School District".

    Science.gov (United States)

    Billings, D M; Phillips, A

    1991-01-01

    As computer-based instructional technologies become increasingly available, they offer new mechanisms for health educators to provide health instruction. This article describes a pilot project in which nurses established a computer conference to provide health instruction to high school students participating in an electronic link of high schools. The article discusses computer conferencing, the "Electronic School District," the design of the nursing conference, and the role of the nurse in distributed health education.

  13. Computational methods of electron/photon transport

    International Nuclear Information System (INIS)

    Mack, J.M.

    1983-01-01

    A review of computational methods simulating the non-plasma transport of electrons and their attendant cascades is presented. Remarks are mainly restricted to linearized formalisms at electron energies above 1 keV. The effectiveness of various methods is discussed including moments, point-kernel, invariant imbedding, discrete-ordinates, and Monte Carlo. Future research directions and the potential impact on various aspects of science and engineering are indicated

  14. Working and learning with electronic performance support systems: An effectiveness study

    NARCIS (Netherlands)

    Bastiaens, T.J.; Nijhof, W.J.; Streumer, Jan; Abma, Harmen J.

    1997-01-01

    In this study the effectiveness of electronic performance support systems (EPSS) is reported. Some of the expected advantages of EPSS, such as an increase in productivity and improved learning are evaluated with insurance agents using laptop computers. Theoretical statements, research design and

  15. National electronic medical records integration on cloud computing system.

    Science.gov (United States)

    Mirza, Hebah; El-Masri, Samir

    2013-01-01

    Few healthcare providers have an advanced level of Electronic Medical Record (EMR) adoption. Others have a low level, and most have no EMR at all. Cloud computing is a new emerging technology that has been used in other industries with great success. Despite the great features of cloud computing, it has not yet been widely utilized in the healthcare industry. This study presents an innovative healthcare cloud computing system for integrating the Electronic Health Record (EHR). The proposed cloud system applies cloud computing technology to the EHR system, to present a comprehensive integrated EHR environment.

  16. Maximal thickness of the normal human pericardium assessed by electron-beam computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Delille, J.P.; Hernigou, A.; Sene, V.; Chatellier, G.; Boudeville, J.C.; Challande, P.; Plainfosse, M.C. [Service de Radiologie Centrale, Hopital Broussais, Paris (France)

    1999-08-01

    The purpose of this study was to determine the maximal value of normal pericardial thickness with an electron-beam computed tomography unit allowing fast scan times of 100 ms to reduce cardiac motion artifacts. Electron-beam computed tomography was performed in 260 patients with hypercholesterolemia and/or hypertension, as these pathologies have no effect on pericardial thickness. The pixel size was 0.5 mm. Measurements could be performed in front of the right ventricle, the right atrioventricular groove, the right atrium, the left ventricle, and the interventricular groove. Maximal thickness of normal pericardium was defined at the 95th percentile. Inter-observer and intra-observer reproducibility studies were assessed from additional CT scans by the Bland and Altman method [24]. The maximal thickness of the normal pericardium was 2 mm for 95 % of cases. For the reproducibility studies, there was no significant relationship between the inter-observer and intra-observer measurements, but all pericardial thickness measurements were ≤ 1.6 mm. Using electron-beam computed tomography, which assists in decreasing substantially cardiac motion artifacts, the threshold of detection of thickened pericardium is statistically established as being 2 mm for 95 % of the patients with hypercholesterolemia and/or hypertension. However, the spatial resolution available prevents a reproducible measure of the real thickness of thin pericardium. (orig.) With 6 figs., 1 tab., 31 refs.

  17. Quantum computers based on electron spins controlled by ultrafast off-resonant single optical pulses.

    Science.gov (United States)

    Clark, Susan M; Fu, Kai-Mei C; Ladd, Thaddeus D; Yamamoto, Yoshihisa

    2007-07-27

    We describe a fast quantum computer based on optically controlled electron spins in charged quantum dots that are coupled to microcavities. This scheme uses broadband optical pulses to rotate electron spins and provide the clock signal to the system. Nonlocal two-qubit gates are performed by phase shifts induced by electron spins on laser pulses propagating along a shared waveguide. Numerical simulations of this scheme demonstrate high-fidelity single-qubit and two-qubit gates with operation times comparable to the inverse Zeeman frequency.

  18. A computer code package for electron transport Monte Carlo simulation

    International Nuclear Information System (INIS)

    Popescu, Lucretiu M.

    1999-01-01

    A computer code package was developed for solving various electron transport problems by Monte Carlo simulation. It is based on the condensed history Monte Carlo algorithm. In order to get reliable results over wide ranges of electron energies and target atomic numbers, specific techniques of electron transport were implemented, such as: Moliere multiscatter angular distributions, the Blunck-Leisegang multiscatter energy distribution, and sampling of electron-electron and Bremsstrahlung individual interactions. Path-length and lateral displacement correction algorithms and the module for computing collision, radiative and total restricted stopping powers and ranges of electrons are also included. Comparisons of simulation results with experimental measurements are finally presented. (author)
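
    One building block common to condensed-history codes like the one above is sampling the distance to the next interaction from an exponential distribution before applying a multiple-scattering step; that single ingredient is sketched below with an assumed constant mean free path. The package's actual Moliere and Blunck-Leisegang samplers are far more involved and are not reproduced here.

        # Hedged sketch of one ingredient of electron-transport Monte Carlo:
        # sampling the free path length from p(s) = exp(-s/lambda) / lambda.
        # A constant mean free path is assumed; real codes recompute it from the
        # electron energy and material cross sections at every step.
        import numpy as np

        rng = np.random.default_rng(42)
        mean_free_path = 0.02            # cm, placeholder value

        n_samples = 100_000
        u = rng.random(n_samples)
        step_lengths = -mean_free_path * np.log(u)   # inverse-CDF sampling

        print(f"sampled mean step: {step_lengths.mean():.4f} cm "
              f"(expected {mean_free_path} cm)")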

  19. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware, in the form of Field Programmable Gate Arrays (FPGAs), in High Performance Computing (HPC) applications. It presents the latest developments in this field from the applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g., computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation. Seven architecture chapters which...

  20. Computer Series, 98. Electronics for Scientists: A Computer-Intensive Approach.

    Science.gov (United States)

    Scheeline, Alexander; Mork, Brian J.

    1988-01-01

    Reports the design for a principles-before-details presentation of electronics for an instrumental analysis class. Uses computers for data collection and simulations. Requires one semester with two 2.5-hour periods and two lectures per week. Includes lab and lecture syllabi. (MVL)

  1. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  2. Decal electronics for printed high performance cmos electronic systems

    KAUST Repository

    Hussain, Muhammad Mustafa

    2017-11-23

    High performance complementary metal oxide semiconductor (CMOS) electronics are critical for any full-fledged electronic system. However, state-of-the-art CMOS electronics are rigid and bulky, making them unusable for flexible electronic applications. While there exist bulk material reduction methods to flex them, such thinned CMOS electronics are fragile and vulnerable to handling for high throughput manufacturing. Here, we show a fusion of a CMOS technology compatible fabrication process for flexible CMOS electronics, with inkjet and conductive cellulose based interconnects, followed by additive manufacturing (i.e. 3D printing based packaging) and finally roll-to-roll printing of packaged decal electronics (thin film transistor based circuit components and sensors), focusing on printed high performance flexible electronic systems. This work provides the most pragmatic route for packaged flexible electronic systems for wide ranging applications.

  3. ELECTRONIC EVIDENCE IN THE JUDICIAL PROCEEDINGS AND COMPUTER FORENSIC ANALYSIS

    Directory of Open Access Journals (Sweden)

    Marija Boban

    2017-01-01

    Today's perspective of the information society is characterized by the terminology of modern dictionaries of globalization, including terms such as convergence, digitization (of media, technology and/or telecommunications) and mobility of people and technology. Each term signals progress and development, a positive sign of the rise of the information society. On the other hand, in a virtual environment, traditional evidence in judicial proceedings, i.e. documents on a paper substrate, is becoming electronic evidence, and the management processes and criteria for admissibility are changing relative to traditional evidence. The rapid growth of computer data has created new opportunities and new forms of computing and cyber crime, but also new ways of proof in court cases that were unavailable just a few decades ago. The authors of this paper describe new trends in the development of the information society and the emergence of electronic evidence, with emphasis on the impact of the development of computer crime on electronic evidence; the concept, legal regulation and probative value of electronic evidence, and in particular of electronic documents; and the issue of expert examination of electronic evidence and electronic documents in court proceedings.

  4. Decal electronics for printed high performance cmos electronic systems

    KAUST Repository

    Hussain, Muhammad Mustafa; Sevilla, Galo Torres; Cordero, Marlon Diaz; Kutbee, Arwa T.

    2017-01-01

    High performance complementary metal oxide semiconductor (CMOS) electronics are critical for any full-fledged electronic system. However, state-of-the-art CMOS electronics are rigid and bulky making them unusable for flexible electronic applications

  5. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  6. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance.The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  7. Calculation and construction of electron-diffraction photographs using computer

    International Nuclear Information System (INIS)

    Khayurov, S.S.; Notkin, A.B.

    1981-01-01

    A method of computer construction and indexing of theoretical electronograms for monophase structures with an arbitrary type of crystal lattice and for polyphase ones with known orientational correlations between phases is presented. An electron-diffraction photograph obtained from a foil area of the two-phase VT22 alloy in β phase orientation is presented in comparison with theoretical electron-diffraction photographs built up by computer, with the [100] β phase zone axis and with three variants of α phase orientation relative to the β phase. It is shown that three orientations of the α phase are simultaneously present on the experimental electron-diffraction photograph, and their reflections can be indexed correctly.

  8. Neuro-Inspired Computing with Stochastic Electronics

    KAUST Repository

    Naous, Rawan

    2016-01-06

    The extensive scaling and integration within electronic systems have set the standards for what is referred to as stochastic electronics. The individual components increasingly deviate from their reliable behavior and produce non-deterministic outputs. This stochastic operation closely mimics the biological medium within the brain. Hence, building on the inherent variability, particularly within novel non-volatile memory technologies, paves the way for unconventional neuromorphic designs. Neuro-inspired networks with brain-like structures of neurons and synapses allow for computations and levels of learning for diverse recognition tasks and applications.

  9. Computation of the average energy for LXY electrons

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau, A.

    1996-01-01

    The application of an atomic rearrangement model in which we only consider the three shells K, L and M, to compute the counting efficiency for electron capture nuclides, requires a fine averaged energy value for LMN electrons. In this report, we illustrate the procedure with two examples, 125I and 109Cd. (Author) 4 refs

  10. Electronic digital computers their use in science and engineering

    CERN Document Server

    Alt, Franz L

    1958-01-01

    Electronic Digital Computers: Their Use in Science and Engineering describes the principles underlying computer design and operation. This book describes the various applications of computers, the stages involved in using them, and their limitations. The machine is composed of the hardware which is run by a program. This text describes the use of magnetic drum for storage of data and some computing. The functions and components of the computer include automatic control, memory, input of instructions by using punched cards, and output from resulting information. Computers operate by using numbe

  11. Computer Conferencing and Electronic Messaging. Conference Proceedings (Guelph, Ontario, Canada, January 22-23, 1985).

    Science.gov (United States)

    Guelph Univ. (Ontario).

    This 21-paper collection examines various issues in electronic networking and conferencing with computers, including design issues, conferencing in education, electronic messaging, computer conferencing applications, social issues of computer conferencing, and distributed computer conferencing. In addition to a keynote address, "Computer…

  12. Computer simulation of high resolution transmission electron micrographs: theory and analysis

    International Nuclear Information System (INIS)

    Kilaas, R.

    1985-03-01

    Computer simulation of electron micrographs is an invaluable aid in their proper interpretation and in defining optimum conditions for obtaining images experimentally. Since modern instruments are capable of atomic resolution, simulation techniques employing high precision are required. This thesis makes contributions to four specific areas of this field. First, the validity of a new method for simulating high resolution electron microscope images has been critically examined. Second, three different methods for computing scattering amplitudes in High Resolution Transmission Electron Microscopy (HRTEM) have been investigated as to their ability to include upper Laue layer (ULL) interaction. Third, a new method for computing scattering amplitudes in high resolution transmission electron microscopy has been examined. Fourth, the effect of a surface layer of amorphous silicon dioxide on images of crystalline silicon has been investigated for a range of crystal thicknesses varying from zero to 2 1/2 times that of the surface layer

  13. Electron correlation in molecules: concurrent computation Many-Body Perturbation Theory (ccMBPT) calculations using macrotasking on the NEC SX-3/44 computer

    International Nuclear Information System (INIS)

    Moncrieff, D.; Wilson, S.

    1992-06-01

    The ab initio determination of the electronic structure of molecules is a many-fermion problem involving the approximate description of the motion of the electrons in the field of fixed nuclei. It is an area of research which demands considerable computational resources but has enormous potential in fields as diverse as interstellar chemistry and drug design, catalysis and solid state chemistry, molecular biology and environmental chemistry. Electronic structure calculations almost invariably divide into two main stages: the approximate solution of an independent electron model, in which each electron moves in the average field created by the other electrons in the system, and then the more computationally demanding determination of a series of corrections to this model, the electron correlation effects. The many-body perturbation theory expansion affords a systematic description of correlation effects, which leads directly to algorithms suitable for concurrent computation. We term this concurrent computation Many-Body Perturbation Theory (ccMBPT). The use of a dynamic load balancing technique on the NEC SX-3/44 computer in electron correlation calculations is investigated for the calculation of the most demanding energy component in the most accurate of contemporary ab initio studies. An application to the ground state of the nitrogen molecule is described. We also briefly discuss the extent to which the calculation of the dominant corrections to such studies can be rendered computationally tractable by exploiting both the vector processing and parallel processor capabilities of the NEC SX-3/44 computer. (author)

  14. A Heterogeneous High-Performance System for Computational and Computer Science

    Science.gov (United States)

    2016-11-15

    expand the research infrastructure at the institution but also to enhance the high-performance computing training provided to both undergraduate and... cloud computing, supercomputing, and the availability of cheap memory and storage led to enormous amounts of data to be sifted through in forensic... High-Performance Computing (HPC) tools that can be integrated with existing curricula and support our research to modernize and dramatically advance

  15. [Computer-aided Diagnosis and New Electronic Stethoscope].

    Science.gov (United States)

    Huang, Mei; Liu, Hongying; Pi, Xitian; Ao, Yilu; Wang, Zi

    2017-05-30

    Auscultation is an important method for the early diagnosis of cardiovascular and respiratory system diseases. This paper presents a new electronic auscultation system for computer-aided diagnosis. An electronic stethoscope based on a condenser microphone and the relevant intelligent analysis software have been developed. Combining Bluetooth, OLED and SD card storage technologies, the system implements many functions, such as real-time heart and lung sound auscultation in three modes, recording and playback, auscultation volume control, and wireless transmission. The intelligent analysis software runs on a PC, is written in the C# programming language and adopts SQL Server as the back-end database. It plays back the auscultation sound and displays its waveform. By calculating the heart rate and extracting the characteristic parameters T1, T2, T12 and T11, it can analyze whether the heart sound is normal and then generate a diagnosis report. Finally, the auscultation sound and diagnosis report can be sent to the mailboxes of other doctors, enabling remote diagnosis. The whole system is full-featured and highly portable, offers a good user experience, and is beneficial for promoting the use of the electronic stethoscope in hospitals; it can also be applied to auscultation teaching and other settings.

  16. Computer performance evaluation of FACOM 230-75 computer system, (2)

    International Nuclear Information System (INIS)

    Fujii, Minoru; Asai, Kiyoshi

    1980-08-01

    In this report, computer performance evaluations for the FACOM 230-75 computers at JAERI are described. The evaluations cover the following items: (1) cost/benefit analysis of timesharing terminals, (2) analysis of the response time of timesharing terminals, (3) analysis of throughput time for batch job processing, (4) estimation of current potential demands for computer time, (5) determination of the appropriate number of card readers and line printers. These evaluations are done mainly from the standpoint of reducing the cost of the computing facilities. The techniques adopted are very practical ones. This report will be useful for those concerned with the management of computing installations. (author)

  17. A High Performance COTS Based Computer Architecture

    Science.gov (United States)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off-The-Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so large that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the behavior of COTS components. In the frame of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high-performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; we then briefly describe existing fault-mitigation architectures and present our solution for fault mitigation based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities carried out during the HiP CBC project.

  18. Electron Gun for Computer-controlled Welding of Small Components

    Czech Academy of Sciences Publication Activity Database

    Dupák, Jan; Vlček, Ivan; Zobač, Martin

    2001-01-01

    Roč. 62, 2-3 (2001), s. 159-164 ISSN 0042-207X R&D Projects: GA AV ČR IBS2065015 Institutional research plan: CEZ:AV0Z2065902 Keywords : Electron beam-welding machine * Electron gun * Computer-controlled beam Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 0.541, year: 2001

  19. Improving CASINO performance for models with large number of electrons

    International Nuclear Information System (INIS)

    Anton, L.; Alfe, D.; Hood, R.Q.; Tanqueray, D.

    2009-01-01

    Quantum Monte Carlo calculations have at their core algorithms based on statistical ensembles of multidimensional random walkers, which are straightforward to use on parallel computers. Nevertheless, some computations have reached the limit of the available memory for models with more than 1000 electrons because of the need to store a large amount of electronic-orbital data. Besides that, for systems with a large number of electrons, it is interesting to study whether the evolution of one configuration of random walkers can be done faster in parallel. We present a comparative study of two ways to solve these problems: (1) orbital data distributed with MPI or Unix inter-process communication tools, (2) second-level parallelism for the configuration computation
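
    A minimal sketch of the first strategy, distributing the orbital data across processes with MPI, is given below using mpi4py; the array shapes and variable names are assumptions for illustration and are not CASINO data structures.

        # Sketch of strategy (1): spread a large block of orbital data over MPI
        # ranks so no single process holds all of it. Shapes and names are
        # illustrative assumptions only.
        import numpy as np
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n_orbitals, n_grid = 1000, 20000                   # assumed dimensions
        my_orbitals = list(range(rank, n_orbitals, size))  # round-robin ownership
        orbital_block = np.random.rand(len(my_orbitals), n_grid)

        total_bytes = comm.allreduce(orbital_block.nbytes, op=MPI.SUM)
        if rank == 0:
            print(f"orbital data spread over {size} ranks,"
                  f" {total_bytes / 1e9:.2f} GB in total")

    Such a script would be launched with, for example, mpirun -n 4 python distribute_orbitals.py, each rank then holding roughly a quarter of the orbital data.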

  20. Reconciliation of the cloud computing model with US federal electronic health record regulations.

    Science.gov (United States)

    Schweitzer, Eugene J

    2012-01-01

    Cloud computing refers to subscription-based, fee-for-service utilization of computer hardware and software over the Internet. The model is gaining acceptance for business information technology (IT) applications because it allows capacity and functionality to increase on the fly without major investment in infrastructure, personnel or licensing fees. Large IT investments can be converted to a series of smaller operating expenses. Cloud architectures could potentially be superior to traditional electronic health record (EHR) designs in terms of economy, efficiency and utility. A central issue for EHR developers in the US is that these systems are constrained by federal regulatory legislation and oversight. These laws focus on security and privacy, which are well-recognized challenges for cloud computing systems in general. EHRs built with the cloud computing model can achieve acceptable privacy and security through business associate contracts with cloud providers that specify compliance requirements, performance metrics and liability sharing.

  1. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  2. Studies of electron collisions with polyatomic molecules using distributed-memory parallel computers

    International Nuclear Information System (INIS)

    Winstead, C.; Hipes, P.G.; Lima, M.A.P.; McKoy, V.

    1991-01-01

    Elastic electron scattering cross sections from 5 to 30 eV are reported for the molecules C2H4, C2H6, C3H8, Si2H6, and GeH4, obtained using an implementation of the Schwinger multichannel method for distributed-memory parallel computer architectures. These results, obtained within the static-exchange approximation, are in generally good agreement with the available experimental data. These calculations demonstrate the potential of highly parallel computation in the study of collisions between low-energy electrons and polyatomic gases. The computational methodology discussed is also directly applicable to the calculation of elastic cross sections at higher levels of approximation (target polarization) and of electronic excitation cross sections

  3. Simulation of the behaviour of electron-optical systems using a parallel computer

    International Nuclear Information System (INIS)

    Balladore, J.L.; Hawkes, P.W.

    1990-01-01

    The advantage of using a multiprocessor computer for the calculation of electron-optical properties is investigated. A considerable reduction of computing time is obtained by reorganising the finite-element field computation. (orig.)

  4. DOE research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  5. Collaborative Computational Project for Electron cryo-Microscopy

    Energy Technology Data Exchange (ETDEWEB)

    Wood, Chris; Burnley, Tom [Science and Technology Facilities Council, Research Complex at Harwell, Didcot OX11 0FA (United Kingdom); Patwardhan, Ardan [European Molecular Biology Laboratory, Wellcome Trust Genome Campus, Hinxton, Cambridge CB10 1SD (United Kingdom); Scheres, Sjors [MRC Laboratory of Molecular Biology, Francis Crick Avenue, Cambridge Biomedical Campus, Cambridge CB2 0QH (United Kingdom); Topf, Maya [University of London, Malet Street, London WC1E 7HX (United Kingdom); Roseman, Alan [University of Manchester, Oxford Road, Manchester M13 9PT (United Kingdom); Winn, Martyn, E-mail: martyn.winn@stfc.ac.uk [Science and Technology Facilities Council, Daresbury Laboratory, Warrington WA4 4AD (United Kingdom); Science and Technology Facilities Council, Research Complex at Harwell, Didcot OX11 0FA (United Kingdom)

    2015-01-01

    The Collaborative Computational Project for Electron cryo-Microscopy (CCP-EM) is a new initiative for the structural biology community, following the success of CCP4 for macromolecular crystallography. Progress in supporting the users and developers of cryoEM software is reported. The Collaborative Computational Project for Electron cryo-Microscopy (CCP-EM) has recently been established. The aims of the project are threefold: to build a coherent cryoEM community which will provide support for individual scientists and will act as a focal point for liaising with other communities, to support practising scientists in their use of cryoEM software and finally to support software developers in producing and disseminating robust and user-friendly programs. The project is closely modelled on CCP4 for macromolecular crystallography, and areas of common interest such as model fitting, underlying software libraries and tools for building program packages are being exploited. Nevertheless, cryoEM includes a number of techniques covering a large range of resolutions and a distinct project is required. In this article, progress so far is reported and future plans are discussed.

  6. Collaborative Computational Project for Electron cryo-Microscopy

    International Nuclear Information System (INIS)

    Wood, Chris; Burnley, Tom; Patwardhan, Ardan; Scheres, Sjors; Topf, Maya; Roseman, Alan; Winn, Martyn

    2015-01-01

    The Collaborative Computational Project for Electron cryo-Microscopy (CCP-EM) is a new initiative for the structural biology community, following the success of CCP4 for macromolecular crystallography. Progress in supporting the users and developers of cryoEM software is reported. The Collaborative Computational Project for Electron cryo-Microscopy (CCP-EM) has recently been established. The aims of the project are threefold: to build a coherent cryoEM community which will provide support for individual scientists and will act as a focal point for liaising with other communities, to support practising scientists in their use of cryoEM software and finally to support software developers in producing and disseminating robust and user-friendly programs. The project is closely modelled on CCP4 for macromolecular crystallography, and areas of common interest such as model fitting, underlying software libraries and tools for building program packages are being exploited. Nevertheless, cryoEM includes a number of techniques covering a large range of resolutions and a distinct project is required. In this article, progress so far is reported and future plans are discussed

  7. High-performance computing — an overview

    Science.gov (United States)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.
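
    As a small, generic illustration of the data-parallel style listed in this overview (not code from the article), the sketch below applies the same reduction kernel to disjoint chunks of an array using a pool of worker processes.

        # Data parallelism in miniature: the same kernel applied to disjoint
        # chunks of an array by a small pool of worker processes.
        from multiprocessing import Pool
        import numpy as np

        def chunk_sum_of_squares(chunk):
            # Each worker reduces its own chunk independently.
            return float(np.dot(chunk, chunk))

        if __name__ == "__main__":
            data = np.random.rand(1_000_000)
            chunks = np.array_split(data, 8)
            with Pool(processes=4) as pool:
                partial = pool.map(chunk_sum_of_squares, chunks)
            print("vector 2-norm:", np.sqrt(sum(partial)))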

  8. High performance computing in Windows Azure cloud

    OpenAIRE

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility and lower costs of maintenance have essentially contributed to the growing popularity of cloud computing in all spheres of life, especially in business. In fact cloud computing offers even more than this. With usage of virtual computing clusters a runtime environment for high performance computing can be efficiently implemented also in a cloud. There are many advantages but also some disadvantages of cloud computing, some ...

  9. Training Corrective Maintenance Performance on Electronic Equipment with CAI Terminals: I. A Feasibility Study.

    Science.gov (United States)

    Rigney, Joseph W.

    A report is given of a feasibility study in which several possible relationships between student, computer terminal, and electronic equipment were considered. The simplest of these configurations was set up and examined in terms of its feasibility for teaching the performance of fault localization on a Navy transceiver. An instructional program…

  10. Can We Build a Truly High Performance Computer Which is Flexible and Transparent?

    KAUST Repository

    Rojas, Jhonathan Prieto

    2013-09-10

    State-of-the-art computers need high-performance transistors which consume ultra-low power, resulting in longer battery lifetime. Billions of transistors are integrated neatly using a matured silicon fabrication process to maintain the performance-per-cost advantage. In that context, low-cost mono-crystalline bulk silicon (100) based high-performance transistors are considered the heart of today's computers. One limitation is silicon's rigidity and brittleness. Here we show a generic batch process to convert high-performance silicon electronics into flexible and semi-transparent ones while retaining performance, process compatibility, integration density and cost. We demonstrate high-k/metal gate stack based p-type metal oxide semiconductor field effect transistors on a 4 inch silicon fabric released from bulk silicon (100) wafers, with a sub-threshold swing of 80 mV/decade and an on/off ratio of nearly 10^4 within 10% device uniformity, with a minimum bending radius of 5 mm and an average transmittance of ~7% in the visible spectrum.

  11. Reconciliation of the cloud computing model with US federal electronic health record regulations

    Science.gov (United States)

    2011-01-01

    Cloud computing refers to subscription-based, fee-for-service utilization of computer hardware and software over the Internet. The model is gaining acceptance for business information technology (IT) applications because it allows capacity and functionality to increase on the fly without major investment in infrastructure, personnel or licensing fees. Large IT investments can be converted to a series of smaller operating expenses. Cloud architectures could potentially be superior to traditional electronic health record (EHR) designs in terms of economy, efficiency and utility. A central issue for EHR developers in the US is that these systems are constrained by federal regulatory legislation and oversight. These laws focus on security and privacy, which are well-recognized challenges for cloud computing systems in general. EHRs built with the cloud computing model can achieve acceptable privacy and security through business associate contracts with cloud providers that specify compliance requirements, performance metrics and liability sharing. PMID:21727204

  12. Quantum Accelerators for High-performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system in managing these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.
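
    The hybrid execution model described here can be caricatured as a host-side loop in which a classical optimizer repeatedly invokes an offloaded, parameterized kernel. In the sketch below the "quantum" kernel is a one-qubit NumPy simulation, purely an assumption for illustration; it is not the framework or accelerator interface discussed in the record.

        # Caricature of a hybrid quantum-classical loop: a classical optimizer
        # repeatedly "offloads" a parameterized kernel and consumes its result.
        # The one-qubit NumPy simulation below is a stand-in for a real quantum
        # processing unit and its control stack.
        import numpy as np
        from scipy.optimize import minimize_scalar

        def quantum_kernel(theta):
            # Simulated offload: prepare RY(theta)|0> and return <Z>.
            state = np.array([np.cos(theta / 2.0), np.sin(theta / 2.0)])
            return float(state[0] ** 2 - state[1] ** 2)

        result = minimize_scalar(quantum_kernel, bounds=(0.0, np.pi),
                                 method="bounded")
        print(f"optimal theta = {result.x:.3f} rad, <Z> = {result.fun:.3f}")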

  13. International Conference on Emerging Research in Electronics, Computer Science and Technology

    CERN Document Server

    Sheshadri, Holalu; Padma, M

    2014-01-01

    PES College of Engineering is organizing an International Conference on Emerging Research in Electronics, Computer Science and Technology (ICERECT-12) in Mandya, merging the event with the Golden Jubilee of the Institute. The Proceedings of the Conference present high-quality, peer-reviewed articles from the fields of Electronics, Computer Science and Technology. The book is a compilation of research papers on cutting-edge technologies and is targeted at the scientific community actively involved in research activities.

  14. Computer technique for evaluating collimator performance

    International Nuclear Information System (INIS)

    Rollo, F.D.

    1975-01-01

    A computer program has been developed to theoretically evaluate the overall performance of collimators used with radioisotope scanners and γ cameras. The first step of the program involves the determination of the line spread function (LSF) and geometrical efficiency from the fundamental parameters of the collimator being evaluated. The working equations can be applied to any plane of interest. The resulting LSF is applied to subroutine computer programs which compute the corresponding modulation transfer function and contrast efficiency functions. The latter function is then combined with appropriate geometrical efficiency data to determine the performance index function. The overall computer program allows one to predict, from the physical parameters of the collimator alone, how well the collimator will reproduce variously sized spherical voids of activity in the image plane. The collimator performance program can be used to compare the performance of various collimator types, to study the effects of source depth on collimator performance, and to assist in the design of collimators. The theory of the collimator performance equation is discussed, a comparison between the experimental and theoretical LSF values is made, and examples of the application of the technique are presented
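
    The LSF-to-MTF step described above is, in essence, a normalized Fourier transform of the line spread function. A minimal sketch follows, assuming a Gaussian LSF for illustration; the actual program derives the LSF from the fundamental collimator parameters.

        # From line spread function (LSF) to modulation transfer function (MTF).
        # The Gaussian LSF below is an illustrative assumption; the program in
        # the record derives the LSF from the collimator geometry itself.
        import numpy as np

        x = np.linspace(-50.0, 50.0, 2048)           # mm
        dx = x[1] - x[0]
        fwhm = 8.0                                   # assumed resolution, mm
        sigma = fwhm / 2.355
        lsf = np.exp(-x**2 / (2.0 * sigma**2))
        lsf /= lsf.sum() * dx                        # normalize to unit area

        mtf = np.abs(np.fft.rfft(lsf)) * dx          # magnitude of the transform
        mtf /= mtf[0]                                # so that MTF(0) = 1
        freqs = np.fft.rfftfreq(x.size, d=dx)        # spatial frequency, cycles/mm

        for f, m in zip(freqs[:5], mtf[:5]):
            print(f"{f:.3f} cycles/mm -> MTF = {m:.3f}")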

  15. 77 FR 50726 - Software Requirement Specifications for Digital Computer Software and Complex Electronics Used in...

    Science.gov (United States)

    2012-08-22

    ... Computer Software and Complex Electronics Used in Safety Systems of Nuclear Power Plants AGENCY: Nuclear...-1209, ``Software Requirement Specifications for Digital Computer Software and Complex Electronics used... Electronics Engineers (ANSI/IEEE) Standard 830-1998, ``IEEE Recommended Practice for Software Requirements...

  16. High-performance computing in seismology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  17. Evaluation of Performance and Opportunities for Improvements in Automotive Power Electronics Systems: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Moreno, Gilberto; Bennion, Kevin; King, Charles; Narumanchi, Sreekant

    2016-06-14

    Thermal management strategies for automotive power electronic systems have evolved over time to reduce system cost and to improve reliability and thermal performance. In this study, we characterized the power electronic thermal management systems of two electric-drive vehicles--the 2012 Nissan LEAF and 2014 Honda Accord Hybrid. Tests were conducted to measure the insulated-gate bipolar transistor-to-coolant thermal resistances for both steady-state and transient conditions at various coolant flow rates. Water-ethylene glycol at a temperature of 65 degrees C was used as the coolant for these experiments. Computational fluid dynamics and finite element analysis models of each vehicle's power electronics thermal management system were then created and validated using experimentally obtained results. Results indicate that the Accord module provides lower steady-state thermal resistance as compared with the LEAF module. However, the LEAF design may provide improved performance in transient conditions and may have cost benefits.
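
    The steady-state junction-to-coolant thermal resistance reported in such tests is simply the measured temperature rise divided by the dissipated power. A small worked example with assumed numbers (not the measured LEAF or Accord values) follows.

        # Steady-state thermal resistance: R_th = (T_junction - T_coolant) / P_loss.
        # All numbers below are assumed for illustration, not measured values.
        p_loss = 150.0          # W dissipated in the IGBT
        t_junction = 98.0       # degrees C at the device
        t_coolant = 65.0        # degrees C, water-ethylene glycol inlet

        r_th = (t_junction - t_coolant) / p_loss
        print(f"junction-to-coolant thermal resistance: {r_th:.3f} K/W")
        # With these numbers: (98 - 65) / 150 = 0.22 K/W.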

  18. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book allows one to compare the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  19. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    Science.gov (United States)

    Faraj, Ahmad [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
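
    The logical-ring allreduce described in this record can be mimicked in a few lines: each ring member contributes one value, partial sums travel around the ring, and every member ends up holding the global result. The sketch below simulates the ring with ordinary Python lists and is an illustration of the idea, not the disclosed apparatus.

        # Simulated ring allreduce: after size-1 steps of passing and combining
        # partial sums around a ring, every member holds the global sum.
        contributions = [3, 1, 4, 1, 5, 9]          # one value per ring member
        size = len(contributions)

        partial = list(contributions)               # running partial sums
        for step in range(size - 1):
            # Each member receives its left neighbour's original contribution,
            # shifted one position further around the ring every step.
            incoming = [contributions[(i - step - 1) % size] for i in range(size)]
            partial = [p + c for p, c in zip(partial, incoming)]

        assert all(p == sum(contributions) for p in partial)
        print("allreduce result on every member:", partial[0])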

  20. US QCD computational performance studies with PERI

    International Nuclear Information System (INIS)

    Zhang, Y; Fowler, R; Huck, K; Malony, A; Porterfield, A; Reed, D; Shende, S; Taylor, V; Wu, X

    2007-01-01

    We report on some of the interactions between two SciDAC projects: The National Computational Infrastructure for Lattice Gauge Theory (USQCD), and the Performance Engineering Research Institute (PERI). Many modern scientific programs consistently report the need for faster computational resources to maintain global competitiveness. However, as the size and complexity of emerging high end computing (HEC) systems continue to rise, achieving good performance on such systems is becoming ever more challenging. In order to take full advantage of the resources, it is crucial to understand the characteristics of relevant scientific applications and the systems these applications are running on. Using tools developed under PERI and by other performance measurement researchers, we studied the performance of two applications, MILC and Chroma, on several high performance computing systems at DOE laboratories. In the case of Chroma, we discuss how the use of C++ and modern software engineering and programming methods are driving the evolution of performance tools

  1. Analysis of electronic circuits using digital computers

    International Nuclear Information System (INIS)

    Tapu, C.

    1968-01-01

    Various programmes have been proposed for studying electronic circuits with the help of computers. It is shown here how it possible to use the programme ECAP, developed by I.B.M., for studying the behaviour of an operational amplifier from different point of view: direct current, alternating current and transient state analysis, optimisation of the gain in open loop, study of the reliability. (author) [fr

  2. A Crafts-Oriented Approach to Computing in High School: Introducing Computational Concepts, Practices, and Perspectives with Electronic Textiles

    Science.gov (United States)

    Kafai, Yasmin B.; Lee, Eunkyoung; Searle, Kristin; Fields, Deborah; Kaplan, Eliot; Lui, Debora

    2014-01-01

    In this article, we examine the use of electronic textiles (e-textiles) for introducing key computational concepts and practices while broadening perceptions about computing. The starting point of our work was the design and implementation of a curriculum module using the LilyPad Arduino in a pre-AP high school computer science class. To…

  3. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  4. The ELPA library: scalable parallel eigenvalue solutions for electronic structure theory and computational science.

    Science.gov (United States)

    Marek, A; Blum, V; Johanni, R; Havu, V; Lang, B; Auckenthaler, T; Heinecke, A; Bungartz, H-J; Lederer, H

    2014-05-28

    sizes arising in the field of electronic structure theory is demonstrated for current high-performance computer architectures such as Cray or Intel/Infiniband. For a matrix of dimension 260,000, scalability up to 295,000 CPU cores has been shown on BlueGene/P.

  5. Performance Tuning of Fock Matrix and Two-Electron Integral Calculations for NWChem on Leading HPC Platforms

    Energy Technology Data Exchange (ETDEWEB)

    Shan, Hongzhan; Austin, Brian M.; De Jong, Wibe A.; Oliker, Leonid; Wright, Nicholas J.; Apra, Edoardo

    2014-10-01

    Attaining performance in the evaluation of two-electron repulsion integrals and constructing the Fock matrix is of considerable importance to the computational chemistry community. Because of the numerical complexity of these methods, improving their performance across a variety of leading supercomputing platforms is an increasing challenge, given the significant diversity in high-performance computing architectures. In this paper, we present our successful tuning methodology for these important numerical methods on the Cray XE6, the Cray XC30, the IBM BG/Q, and the Intel Xeon Phi. Our optimization schemes leverage key architectural features, including vectorization and simultaneous multithreading, and result in speedups of up to 2.5x compared with the original implementation.
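
    As background on the kernel being tuned, a closed-shell Fock matrix can be assembled from the core Hamiltonian, the density matrix and the two-electron repulsion integrals roughly as in the NumPy sketch below. The random tensors are placeholders, and the real NWChem code computes the integrals on the fly and exploits their symmetry rather than storing them densely.

        # Naive closed-shell Fock build: F = H_core + 2J - K, with
        # J_pq = sum_rs (pq|rs) D_rs and K_pq = sum_rs (pr|qs) D_rs.
        # Random symmetric placeholders stand in for real integrals and density.
        import numpy as np

        n = 20                                       # number of basis functions
        rng = np.random.default_rng(0)
        h_core = rng.standard_normal((n, n))
        h_core = 0.5 * (h_core + h_core.T)
        density = rng.standard_normal((n, n))
        density = 0.5 * (density + density.T)
        eri = rng.standard_normal((n, n, n, n))      # (pq|rs) placeholders

        coulomb = np.einsum("pqrs,rs->pq", eri, density)
        exchange = np.einsum("prqs,rs->pq", eri, density)
        fock = h_core + 2.0 * coulomb - exchange
        print("Fock matrix shape:", fock.shape)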

  6. Information Technology in project-organized electronic and computer technology engineering education

    DEFF Research Database (Denmark)

    Nielsen, Kirsten Mølgaard; Nielsen, Jens Frederik Dalsgaard

    1999-01-01

    This paper describes the integration of IT in the education of electronic and computer technology engineers at the Institute of Electronic Systems, Aalborg University, Denmark. At the Institute, Information Technology is an important tool in many aspects of the education as well as for communication...

  7. Decal Electronics: Printable Packaged with 3D Printing High-Performance Flexible CMOS Electronic Systems

    KAUST Repository

    Sevilla, Galo T.; Cordero, Marlon D.; Nassar, Joanna M.; Hanna, Amir; Kutbee, Arwa T.; Carreno, Armando Arpys Arevalo; Hussain, Muhammad Mustafa

    2016-01-01

    High-performance complementary metal oxide semiconductor electronics are flexed, packaged using 3D printing as decal electronics, and then printed in roll-to-roll fashion for highly manufacturable printed flexible high-performance electronic systems.

  8. Decal Electronics: Printable Packaged with 3D Printing High-Performance Flexible CMOS Electronic Systems

    KAUST Repository

    Sevilla, Galo T.

    2016-10-14

    High-performance complementary metal oxide semiconductor electronics are flexed, packaged using 3D printing as decal electronics, and then printed in roll-to-roll fashion for highly manufacturable printed flexible high-performance electronic systems.

  9. Dose field simulation for products irradiated by electron beams: formulation of the problem and its step by step solution with EGS4 computer code

    International Nuclear Information System (INIS)

    Rakhno, I.L.; Roginets, L.P.

    1999-01-01

    When performing radiation treatment of products using an electron beam, much time and money must be spent on numerous measurements to make an optimal choice of treatment mode. Direct simulation of the radiation treatment by means of the EGS4 computer code fails to describe such measurement results correctly. In this paper a multi-step radiation treatment planning procedure is suggested, which consists of fitting the EGS4 simulation results to reference measurement results and then using the fitted electron-beam parameters, among others, in subsequent computer simulations. It is shown that the fitting procedure should be performed separately for each material or product type. The suggested procedure allows measurements to be replaced by computer simulations and therefore significantly reduces the time and money required for such measurements. (author)
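
    The fitting step in such a procedure amounts to adjusting a few beam parameters until the simulated dose profile reproduces a reference measurement. A schematic least-squares version is sketched below; dose_model() is a cheap stand-in for an actual EGS4 run, and the parameter names are assumptions for illustration.

        # Schematic parameter fit: adjust beam parameters so that a simulated
        # depth-dose curve matches reference measurements (least squares).
        # dose_model() is a cheap surrogate for an actual EGS4 run.
        import numpy as np
        from scipy.optimize import curve_fit

        def dose_model(depth_cm, beam_energy_mev, spot_sigma_cm):
            # Crude surrogate: dose falls off with depth on a scale set by the
            # energy, and is rescaled by the spot size. Purely illustrative.
            r = beam_energy_mev / 2.0                # rough practical range, cm
            return np.exp(-(depth_cm / r) ** 2) * (1.0 + 0.1 * spot_sigma_cm)

        depths = np.linspace(0.0, 4.0, 15)
        measured = dose_model(depths, 9.5, 0.8) + 0.01 * np.random.randn(depths.size)

        popt, _ = curve_fit(dose_model, depths, measured, p0=[10.0, 1.0])
        print("fitted beam energy (MeV):", round(popt[0], 2),
              "fitted spot sigma (cm):", round(popt[1], 2))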

  10. Interacting electrons theory and computational approaches

    CERN Document Server

    Martin, Richard M; Ceperley, David M

    2016-01-01

    Recent progress in the theory and computation of electronic structure is bringing an unprecedented level of capability for research. Many-body methods are becoming essential tools vital for quantitative calculations and understanding materials phenomena in physics, chemistry, materials science and other fields. This book provides a unified exposition of the most-used tools: many-body perturbation theory, dynamical mean field theory and quantum Monte Carlo simulations. Each topic is introduced with a less technical overview for a broad readership, followed by in-depth descriptions and mathematical formulation. Practical guidelines, illustrations and exercises are chosen to enable readers to appreciate the complementary approaches, their relationships, and the advantages and disadvantages of each method. This book is designed for graduate students and researchers who want to use and understand these advanced computational tools, get a broad overview, and acquire a basis for participating in new developments.

  11. Incorporating electronic-based and computer-based strategies: graduate nursing courses in administration.

    Science.gov (United States)

    Graveley, E; Fullerton, J T

    1998-04-01

    The use of electronic technology allows faculty to improve their course offerings. Four graduate courses in nursing administration were contemporized to incorporate fundamental computer-based skills that would be expected of graduates in the work setting. Principles of adult learning offered a philosophical foundation that guided course development and revision. Course delivery strategies included computer-assisted instructional modules, e-mail interactive discussion groups, and use of the electronic classroom. Classroom seminar discussions and two-way interactive video conferencing focused on group resolution of problems derived from employment settings and assigned readings. Using these electronic technologies, a variety of courses can be revised to accommodate the learners' needs.

  12. Fault tolerant embedded computers and power electronics for nuclear robotics

    International Nuclear Information System (INIS)

    Giraud, A.; Robiolle, M.

    1995-01-01

    For the requirements of the nuclear industries, it is necessary to use embedded radiation-tolerant electronics with a high level of safety. In this paper, we first describe a computer architecture called MICADO designed for the French nuclear industry. We then present ongoing projects in our industry. Special attention is given to power electronics for remote-operated and legged robots. (authors). 7 refs., 2 figs.

  13. Fault tolerant embedded computers and power electronics for nuclear robotics

    Energy Technology Data Exchange (ETDEWEB)

    Giraud, A.; Robiolle, M.

    1995-12-31

    For the requirements of the nuclear industries, it is necessary to use embedded radiation-tolerant electronics with a high level of safety. In this paper, we first describe a computer architecture called MICADO designed for the French nuclear industry. We then present ongoing projects in our industry. Special attention is given to power electronics for remote-operated and legged robots. (authors). 7 refs., 2 figs.

  14. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high-performance computing are analyzed by bibliometric approaches. This study aims at providing computational physicists utilizing high-performance computing, as well as policy planners, with useful bibliometric results for an assessment of research activities. In order to achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles from Elsevier's Scopus database covering the period 2004-2013. We ranked authors in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.
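
    The co-authorship network construction used in such bibliometric studies can be sketched with networkx: every pair of authors on the same paper receives an edge, and edge weights count joint papers. The paper records below are made up for illustration.

        # Build a weighted co-authorship graph from per-paper author lists.
        # The paper records below are fabricated placeholders for illustration.
        from itertools import combinations
        import networkx as nx

        papers = [
            ["Kim", "Lee", "Park"],
            ["Kim", "Lee"],
            ["Park", "Cho"],
        ]

        g = nx.Graph()
        for authors in papers:
            for a, b in combinations(sorted(set(authors)), 2):
                w = g.get_edge_data(a, b, default={"weight": 0})["weight"]
                g.add_edge(a, b, weight=w + 1)

        # Simple activity indicators: degree and weighted degree per author.
        for author in g.nodes:
            print(author, g.degree(author), g.degree(author, weight="weight"))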

  15. Computational Nanotechnology of Molecular Materials, Electronics, and Actuators with Carbon Nanotubes and Fullerenes

    Science.gov (United States)

    Srivastava, Deepak; Menon, Madhu; Cho, Kyeongjae; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The role of computational nanotechnology in developing the next generation of multifunctional materials, molecular-scale electronic and computing devices, sensors, actuators, and machines is described through a brief review of enabling computational techniques and a few recent examples derived from computer simulations of carbon nanotube based molecular nanotechnology.

  16. Solar Anomalous and Magnetospheric Particle Explorer attitude control electronics box design and performance

    Science.gov (United States)

    Chamberlin, K.; Clagett, C.; Correll, T.; Gruner, T.; Quinn, T.; Shiflett, L.; Schnurr, R.; Wennersten, M.; Frederick, M.; Fox, S. M.

    1993-01-01

    The Attitude Control Electronics (ACE) Box is the center of the Attitude Control Subsystem (ACS) for the Solar Anomalous and Magnetospheric Particle Explorer (SAMPEX) satellite. This unit is the single-point interface for all of the ACS-related sensors and actuators. Commands and telemetry between the SAMPEX flight computer and the ACE Box are routed via a MIL-STD-1773 bus interface, through the use of an 80C85 processor. The ACE Box consists of the following electronic elements: power supply, momentum wheel driver, electromagnet driver, coarse sun sensor interface, digital sun sensor interface, magnetometer interface, and satellite computer interface. In addition, the ACE Box contains an independent Safehold electronics package capable of keeping the satellite pitch axis pointing towards the sun. The ACE Box has dimensions of 24 x 31 x 8 cm, a mass of 4.3 kg, and an average power consumption of 10.5 W. This set of electronics was completely designed, developed, integrated, and tested by personnel at NASA GSFC. SAMPEX was launched on July 3, 1992, and the initial attitude acquisition was successfully accomplished via the analog Safehold electronics in the ACE Box. This acquisition scenario removed the excess body rates via magnetic control and precessed the satellite pitch axis to within 10 deg of the sun line. The performance of the SAMPEX ACS in general and the ACE Box in particular has been quite satisfactory.

  17. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    Science.gov (United States)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.

  18. Computational Methodologies for Developing Structure–Morphology–Performance Relationships in Organic Solar Cells: A Protocol Review

    KAUST Repository

    Do, Khanh

    2016-09-08

    We outline a step-by-step protocol that incorporates a number of theoretical and computational methodologies to evaluate the structural and electronic properties of pi-conjugated semiconducting materials in the condensed phase. Our focus is on methodologies appropriate for the characterization, at the molecular level, of the morphology in blend systems consisting of an electron donor and an electron acceptor, which is of importance for understanding the performance of bulk-heterojunction organic solar cells. The protocol is formulated as an introductory manual for investigators who aim to study bulk-heterojunction morphology in molecular detail, thereby facilitating the development of structure-morphology-property relationships when used in tandem with experimental results.

  19. High-performance computing for airborne applications

    International Nuclear Information System (INIS)

    Quinn, Heather M.; Manuzatto, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-01-01

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  20. Current-voltage curves for molecular junctions computed using all-electron basis sets

    International Nuclear Information System (INIS)

    Bauschlicher, Charles W.; Lawson, John W.

    2006-01-01

    We present current-voltage (I-V) curves computed using all-electron basis sets on the conducting molecule. The all-electron results are very similar to previous results obtained using effective core potentials (ECP). A hybrid integration scheme is used that keeps the all-electron calculations cost competitive with respect to the ECP calculations. By neglecting the coupling of states to the contacts below a fixed energy cutoff, the density matrix for the core electrons can be evaluated analytically. The full density matrix is formed by adding this core contribution to the valence part that is evaluated numerically. Expanding the definition of the core in the all-electron calculations significantly reduces the computational effort and, up to biases of about 2 V, the results are very similar to those obtained using more rigorous approaches. The convergence of the I-V curves and transmission coefficients with respect to basis set is discussed. The addition of diffuse functions is critical in approaching basis set completeness
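
    The core/valence split described above can be pictured, at the level of bookkeeping only, as building the density matrix from two groups of occupied orbitals, one handled analytically and one evaluated numerically. The toy NumPy sketch below shows that bookkeeping with random orthonormal orbitals; it does not reproduce the contact-coupling (transport) integration actually used in such calculations.

        # Toy bookkeeping for a density matrix split into core and valence parts:
        # D = C_core C_core^T + C_val C_val^T over occupied orbitals.
        # Orbital coefficients here are random orthonormal placeholders.
        import numpy as np

        n_basis, n_core, n_valence = 12, 3, 4
        rng = np.random.default_rng(1)
        c_occ, _ = np.linalg.qr(rng.standard_normal((n_basis, n_core + n_valence)))

        c_core, c_val = c_occ[:, :n_core], c_occ[:, n_core:]
        d_core = c_core @ c_core.T        # contribution treated analytically
        d_valence = c_val @ c_val.T       # part evaluated numerically in practice
        density = d_core + d_valence

        print("trace of D (number of occupied orbitals):",
              round(float(np.trace(density)), 3))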

  1. Psychiatrists' Comfort Using Computers and Other Electronic Devices in Clinical Practice.

    Science.gov (United States)

    Duffy, Farifteh F; Fochtmann, Laura J; Clarke, Diana E; Barber, Keila; Hong, Seung-Hee; Yager, Joel; Mościcki, Eve K; Plovnick, Robert M

    2016-09-01

    This report highlights findings from the Study of Psychiatrists' Use of Informational Resources in Clinical Practice, a cross-sectional Web- and paper-based survey that examined psychiatrists' comfort using computers and other electronic devices in clinical practice. One-thousand psychiatrists were randomly selected from the American Medical Association Physician Masterfile and asked to complete the survey between May and August, 2012. A total of 152 eligible psychiatrists completed the questionnaire (response rate 22.2 %). The majority of psychiatrists reported comfort using computers for educational and personal purposes. However, 26 % of psychiatrists reported not using or not being comfortable using computers for clinical functions. Psychiatrists under age 50 were more likely to report comfort using computers for all purposes than their older counterparts. Clinical tasks for which computers were reportedly used comfortably, specifically by psychiatrists younger than 50, included documenting clinical encounters, prescribing, ordering laboratory tests, accessing read-only patient information (e.g., test results), conducting internet searches for general clinical information, accessing online patient educational materials, and communicating with patients or other clinicians. Psychiatrists generally reported comfort using computers for personal and educational purposes. However, use of computers in clinical care was less common, particularly among psychiatrists 50 and older. Information and educational resources need to be available in a variety of accessible, user-friendly, computer and non-computer-based formats, to support use across all ages. Moreover, ongoing training and technical assistance with use of electronic and mobile device technologies in clinical practice is needed. Research on barriers to clinical use of computers is warranted.

  2. Psychiatrists’ Comfort Using Computers and Other Electronic Devices in Clinical Practice

    Science.gov (United States)

    Fochtmann, Laura J.; Clarke, Diana E.; Barber, Keila; Hong, Seung-Hee; Yager, Joel; Mościcki, Eve K.; Plovnick, Robert M.

    2015-01-01

    This report highlights findings from the Study of Psychiatrists’ Use of Informational Resources in Clinical Practice, a cross-sectional Web- and paper-based survey that examined psychiatrists’ comfort using computers and other electronic devices in clinical practice. One-thousand psychiatrists were randomly selected from the American Medical Association Physician Masterfile and asked to complete the survey between May and August, 2012. A total of 152 eligible psychiatrists completed the questionnaire (response rate 22.2 %). The majority of psychiatrists reported comfort using computers for educational and personal purposes. However, 26 % of psychiatrists reported not using or not being comfortable using computers for clinical functions. Psychiatrists under age 50 were more likely to report comfort using computers for all purposes than their older counterparts. Clinical tasks for which computers were reportedly used comfortably, specifically by psychiatrists younger than 50, included documenting clinical encounters, prescribing, ordering laboratory tests, accessing read-only patient information (e.g., test results), conducting internet searches for general clinical information, accessing online patient educational materials, and communicating with patients or other clinicians. Psychiatrists generally reported comfort using computers for personal and educational purposes. However, use of computers in clinical care was less common, particularly among psychiatrists 50 and older. Information and educational resources need to be available in a variety of accessible, user-friendly, computer and non-computer-based formats, to support use across all ages. Moreover, ongoing training and technical assistance with use of electronic and mobile device technologies in clinical practice is needed. Research on barriers to clinical use of computers is warranted. PMID:26667248

  3. Electronics and computer

    International Nuclear Information System (INIS)

    Asano, Yuzo

    1980-01-01

    The requirements for the data collection and handling system of TRISTAN are discussed. In April 1979, the first general meeting was held at KEK to organize the workshop on future electronics for large-scale, high-energy experiments. Three sub-groups were formed: Group 1 for the study of fast logic, Group 2 for the pre-processing and temporary storage of data, and Group 3 for the data acquisition system. The general trends for the future system are the reduction of data size and the reduction of trigger rate. The important points for processing the fast data are fast block transfer, parallel processing and pre-processing. The U.S. Fast System Design Group has proposed some features for the future system, called Fastbus. The Time Projection Chamber proposed for a PEP facility gives a typical example of future detectors for colliding-beam machines. It is a large drift chamber in a solenoidal magnetic field, and its method of data processing is interesting. By extrapolating from past experience, the requirements for the host computer of the data acquisition system can be estimated. (Kato, T.)

  4. High Performance Computing in Science and Engineering '14

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2015-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS). The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and   engineers. The book comes with a wealth of color illustrations and tables of results.  

  5. Analysis of parallel computing performance of the code MCNP

    International Nuclear Information System (INIS)

    Wang Lei; Wang Kan; Yu Ganglin

    2006-01-01

    Parallel computing can effectively reduce the running time of the code MCNP. With MPI message-passing software, MCNP5 can perform parallel computing on a PC cluster running the Windows operating system. The parallel computing performance of MCNP is influenced by factors such as the type, the complexity level and the parameter configuration of the computing problem. This paper analyzes the parallel computing performance of MCNP with regard to these factors and gives measures to improve MCNP parallel computing performance. (authors)

  6. 77 FR 27078 - Certain Electronic Devices, Including Mobile Phones and Tablet Computers, and Components Thereof...

    Science.gov (United States)

    2012-05-08

    ... Phones and Tablet Computers, and Components Thereof; Notice of Receipt of Complaint; Solicitation of... entitled Certain Electronic Devices, Including Mobile Phones and Tablet Computers, and Components Thereof... the United States after importation of certain electronic devices, including mobile phones and tablet...

  7. High Performance Electronics on Flexible Silicon

    KAUST Repository

    Sevilla, Galo T.

    2016-09-01

    Over the last few years, flexible electronic systems have gained increased attention from researchers around the world because of their potential to create new applications such as flexible displays, flexible energy harvesters, artificial skin, and health monitoring systems that cannot be integrated with conventional wafer-based complementary metal oxide semiconductor processes. Most of the current efforts to create flexible high-performance devices are based on the use of organic semiconductors. However, inherent material limitations make them unsuitable for big data processing and high-speed communications. The objective of my doctoral dissertation is to develop integration processes that allow the transformation of rigid high-performance electronics into flexible ones while maintaining their performance and cost. In this work, two different techniques to transform inorganic complementary metal-oxide-semiconductor electronics into flexible ones have been developed using industry-compatible processes. Furthermore, these techniques were used to realize flexible discrete devices and circuits, which include metal-oxide-semiconductor field-effect transistors, the first demonstration of flexible fin field-effect transistors, and metal-oxide-semiconductor-based circuits. Finally, this thesis presents a new technique to package, integrate, and interconnect flexible high-performance electronics using low-cost additive manufacturing techniques such as 3D printing and inkjet printing. This thesis contains in-depth studies on the electrical, mechanical, and thermal properties of the fabricated devices.

  8. A Conceptual Framework for the Electronic Performance Support Systems within IBM Lotus Notes 6 (LN6) Example

    Directory of Open Access Journals (Sweden)

    Servet BAYRAM

    2005-10-01

    Full Text Available. A Conceptual Framework for the Electronic Performance Support Systems within the IBM Lotus Notes 6 (LN6) Example. Assoc. Prof. Dr. Servet BAYRAM, Computer Education & Instructional Technologies, Marmara University, TURKEY, sbayram@marmara.edu.tr. ABSTRACT: An Electronic Performance Support System (EPSS) contains multimedia or computer-based instruction components that improve human performance by providing process simplification, performance information and decision support. EPSS has become a hot topic for organizational development, human resources, performance technology, training, and educational development professionals. A conceptual framework of EPSS is constructed under five interrelated and interdependent domains for educational implications. The domains of the framework are online collaboration, cost-effectiveness, motivation, service management, and performance empowering. IBM Lotus Notes 6 (LN6) is used as an example application tool to illustrate the power of this framework. The framework describes a set of relevant events, based upon deductive analyses, for improving our understanding of EPSS and its implications for education and training. The article also points out that there are some similarities between the specific features of EPSS and LN6 within this conceptual framework. It can provide some guidelines and benefits to researchers, educators, and designers as well.

  9. Computer Music

    Science.gov (United States)

    Cook, Perry R.

    This chapter covers algorithms, technologies, computer languages, and systems for computer music. Computer music involves the application of computers and other digital/electronic technologies to music composition, performance, theory, history, and the study of perception. The field combines digital signal processing, computational algorithms, computer languages, hardware and software systems, acoustics, psychoacoustics (low-level perception of sounds from the raw acoustic signal), and music cognition (higher-level perception of musical style, form, emotion, etc.).

  10. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Science.gov (United States)

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  11. Trapped electron decay by the thermally-assisted tunnelling to electron acceptors in glassy matrices. A computer simulation study

    International Nuclear Information System (INIS)

    Feret, B.; Bartczak, W.M.; Kroh, J.

    1991-01-01

    The Redi-Hopfield quantum mechanical model of thermally-assisted electron transfer has been applied to simulate the decay of trapped electrons by tunnelling to electron acceptor molecules added to the glassy matrix. It was assumed that the electron energy levels in donors and acceptors are statistically distributed and that the electron excess energy after transfer is dissipated in the medium by the electron-phonon coupling. The electron decay curves were obtained by the method of computer simulation. It was found that for a given medium there exists a certain preferred value of the electronic excess energy which can be effectively converted into matrix vibrations. If the mismatch of the electron states on the donor and acceptor coincides with this ''resonance'' energy, the overall kinetics of electron transfer is accelerated. (author)
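
    As a purely illustrative aside (not part of the record above), the decay-curve simulation idea can be sketched in a few lines: sample random donor-acceptor separations, assign each trapped electron an exponentially distance-dependent tunnelling rate, and average the survival probabilities over the population. All distributions and parameters below are invented; the Redi-Hopfield thermally-assisted machinery itself is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(7)
n_electrons = 5000

# Distance (in units of the attenuation length) from each trapped electron
# to its nearest acceptor, drawn from an arbitrary illustrative distribution.
r = rng.uniform(2.0, 10.0, n_electrons)
k = 1e6 * np.exp(-2.0 * r)            # tunnelling rate ~ exp(-2r/a), with a = 1

times = np.logspace(-3, 6, 50)        # observation times (arbitrary units)
surviving = [np.exp(-k * t).mean() for t in times]

# Print a coarse decay curve of the surviving trapped-electron fraction.
for t, s in list(zip(times, surviving))[::10]:
    print(f"t = {t:9.3f}   surviving fraction = {s:.3f}")
```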

  12. Regional Platform on Personal Computer Electronic Waste in Latin ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    Regional Platform on Personal Computer Electronic Waste in Latin America and the Caribbean. Donation of ... This project aims to identify environmentally responsible and sustainable solutions to the problem of e-waste.

  13. Management and Valorization of Electronic and Computer Wastes in ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    This project will examine the issue of electronic and computer waste and its management, and endeavor to identify feasible and sustainable strategies for ...

  14. A computational study of the electronic properties of one-dimensional armchair phosphorene nanotubes

    Energy Technology Data Exchange (ETDEWEB)

    Yu, Sheng; Zhu, Hao; Eshun, Kwesi; Arab, Abbas; Badwan, Ahmad; Li, Qiliang [Department of Electrical and Computer Engineering, George Mason University, Fairfax, Virginia 22033 (United States)

    2015-10-28

    We have performed a comprehensive first-principles computational study of the electronic properties of one-dimensional phosphorene nanotubes (PNTs) and of the strain effect on the mechanical and electrical properties of PNTs, including the elastic modulus, energy bandstructure, and carrier effective mass. The study has demonstrated that armchair PNTs have semiconducting properties along the axial direction and that the carrier mobility can be significantly improved by compressive strain. The hole mobility increases from 40.7 cm{sup 2}/V s to 197.0 cm{sup 2}/V s as the compressive strain increases to −5% at room temperature. The investigation of size effects on armchair PNTs indicated that the conductance increases significantly with increasing diameter. Overall, this study indicates that PNTs have very attractive electronic properties for future applications in nanomaterials and devices.

  15. Performing stencil computations

    Energy Technology Data Exchange (ETDEWEB)

    Donofrio, David

    2018-01-16

    A method and apparatus for performing stencil computations efficiently are disclosed. In one embodiment, a processor receives an offset, and in response, retrieves a value from a memory via a single instruction, where the retrieving comprises: identifying, based on the offset, one of a plurality of registers of the processor; loading an address stored in the identified register; and retrieving from the memory the value at the address.
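
    The patent above concerns hardware support for the offset-based memory accesses that stencil codes perform. For orientation only, here is a minimal software sketch of a stencil computation (a 1-D three-point Jacobi update); the kernel and all names are chosen purely for illustration and are not taken from the patent.

```python
import numpy as np

def jacobi_3point(u, iterations=100):
    """Apply a simple 1-D three-point stencil repeatedly.

    Each interior point is replaced by the average of its two
    neighbours; this neighbour-offset access pattern is the kind of
    addressing that stencil-specific hardware aims to accelerate.
    """
    u = u.copy()
    for _ in range(iterations):
        # offsets -1 and +1 relative to each interior point
        u[1:-1] = 0.5 * (u[:-2] + u[2:])
    return u

if __name__ == "__main__":
    grid = np.zeros(32)
    grid[0], grid[-1] = 1.0, 0.0     # fixed boundary values
    print(jacobi_3point(grid)[:5])
```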

  16. High performance computing in linear control

    International Nuclear Information System (INIS)

    Datta, B.N.

    1993-01-01

    Remarkable progress has been made in both the theory and the applications of all important areas of control. The theory is rich and very sophisticated. Some beautiful applications of control theory are presently being made in aerospace, biomedical engineering, industrial engineering, robotics, economics, power systems, etc. Unfortunately, the same assessment of progress does not hold in general for computations in control theory. Control theory is lagging behind other areas of science and engineering in this respect. Nowadays there is a revolution going on in the world of high performance scientific computing. Many powerful computers with vector and parallel processing have been built and become available in recent years. These supercomputers offer very high speed in computations. Highly efficient software, based on powerful algorithms, has been developed for use on these advanced computers and has also contributed to increased performance. While workers in many areas of science and engineering have taken great advantage of these hardware and software developments, control scientists and engineers, unfortunately, have not been able to take much advantage of these developments.

  17. Cloud Computing for Complex Performance Codes.

    Energy Technology Data Exchange (ETDEWEB)

    Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Klein, Brandon Thorin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Miner, John Gifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 was to demonstrate that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.

  18. Search of computers for discovery of electronic evidence

    Directory of Open Access Journals (Sweden)

    Pisarić Milana M.

    2015-01-01

    Full Text Available In order to address the specific nature of criminal activities committed using computer networks and systems, it is understandable that states seek to adapt or complement existing criminal law with purposeful provisions. To create an appropriate legal framework for suppressing cybercrime, it is not enough for substantive criminal law to define certain behavior as criminal offenses against the confidentiality, integrity and availability of computer data, computer systems and networks; it is also essential that the provisions of criminal procedure law give competent authorities adequate powers for detecting the sources of illegal activities and for collecting data on the committed criminal offense and the offender that can be used as evidence in criminal proceedings, taking into account the specificities of cybercrime and the environment within which the illegal activity is undertaken. Accordingly, the provisions of criminal procedure law should be designed so that they can overcome the particular challenges of discovering and proving high technology crime, and the provisions governing the search of computers for the discovery of electronic evidence are of special importance.

  19. Calculations of the self-amplified spontaneous emission performance of a free-electron laser

    International Nuclear Information System (INIS)

    Dejus, R. J.

    1999-01-01

    The linear integral equation based computer code (RON: Roger Oleg Nikolai), which was recently developed at Argonne National Laboratory, was used to calculate the self-amplified spontaneous emission (SASE) performance of the free-electron laser (FEL) being built at Argonne. Signal growth calculations under different conditions are used for estimating tolerances of actual design parameters. The radiation characteristics are discussed, and calculations using an ideal undulator magnetic field and a real measured magnetic field will be compared and discussed

  20. High Performance Computing Multicast

    Science.gov (United States)

    2012-02-01

    The record excerpt contains only fragments of the report's reference list and acronym glossary: “A History of the Virtual Synchrony Replication Model,” in Replication: Theory and Practice, Charron-Bost, B., Pedone, F., and Schiper, A. (Eds.); HPC, High Performance Computing; IP / IPv4, Internet Protocol (version 4.0); IPMC, Internet Protocol MultiCast; LAN, Local Area Network; MCMD, Dr. Multicast; MPI, ...

  1. New Developments in Modeling MHD Systems on High Performance Computing Architectures

    Science.gov (United States)

    Germaschewski, K.; Raeder, J.; Larson, D. J.; Bhattacharjee, A.

    2009-04-01

    Modeling the wide range of time and length scales present even in fluid models of plasmas like MHD and X-MHD (extended MHD including two-fluid effects like the Hall term, electron inertia, and the electron pressure gradient) is challenging even on state-of-the-art supercomputers. In recent years, HPC capacity has continued to grow exponentially, but at the expense of making the computer systems more and more difficult to program in order to get maximum performance. In this paper, we present a new approach to managing the complexity caused by the need to write efficient codes: separating the numerical description of the problem, in our case a discretized right hand side (r.h.s.), from the actual implementation of efficiently evaluating it. An automatic code generator is used to describe the r.h.s. in a quasi-symbolic form while leaving the translation into efficient and parallelized code to a computer program itself. We implemented this approach for OpenGGCM (Open General Geospace Circulation Model), a model of the Earth's magnetosphere, which was accelerated by a factor of three on regular x86 architecture and a factor of 25 on the Cell BE architecture (commonly known for its deployment in Sony's PlayStation 3).
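
    A toy sketch of the separation described above, with SymPy standing in for the authors' automatic code generator: the right hand side is written once in quasi-symbolic form and then translated into a vectorized NumPy routine. The example equation and all symbol names are invented for illustration.

```python
import numpy as np
import sympy as sp

# Quasi-symbolic description of a right-hand side: a toy advection-diffusion
# term du/dt = -c*du/dx + nu*d2u/dx2, written with placeholder symbols for
# the finite-difference neighbours of each grid point.
u_m, u_c, u_p, dx, c, nu = sp.symbols("u_m u_c u_p dx c nu")
rhs_expr = -c * (u_p - u_m) / (2 * dx) + nu * (u_p - 2 * u_c + u_m) / dx**2

# "Code generation" step: translate the symbolic form into a fast,
# vectorised NumPy routine without hand-writing the loop body.
rhs_fn = sp.lambdify((u_m, u_c, u_p, dx, c, nu), rhs_expr, modules="numpy")

u = np.sin(np.linspace(0, 2 * np.pi, 64))
dudt = rhs_fn(u[:-2], u[1:-1], u[2:], 2 * np.pi / 63, 1.0, 0.01)
print(dudt[:4])
```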

  2. An Adaptive Middleware for Improved Computational Performance

    DEFF Research Database (Denmark)

    Bonnichsen, Lars Frydendal

    The performance improvements in computer systems over the past 60 years have been fueled by an exponential increase in energy efficiency. In recent years, the phenomenon known as the end of Dennard's scaling has slowed energy efficiency improvements, but improving computer energy efficiency is more important now than ever. Traditionally, most improvements in computer energy efficiency have come from improvements in lithography (the ability to produce smaller transistors) and computer architecture (the ability to apply those transistors efficiently). Since the end of scaling, we have seen ... In this work, we improve computational performance by exploiting modern hardware features, such as dynamic voltage-frequency scaling and transactional memory. Adapting software is an iterative process, requiring that we continually revisit it to meet new requirements or realities; a time consuming process ...

  3. Optical Performance of Carbon-Nanotube Electron Sources

    International Nuclear Information System (INIS)

    Jonge, Niels de; Allioux, Myriam; Oostveen, Jim T.; Teo, Kenneth B. K.; Milne, William I.

    2005-01-01

    The figure of merit for the electron optical performance of carbon-nanotube (CNT) electron sources is presented. This figure is given by the relation between the reduced brightness and the energy spread in the region of stable emission. It is shown experimentally that a CNT electron source exhibits a highly stable emission process that follows the Fowler-Nordheim theory for field emission, fixing the relationship among the energy spread, the current, and the radius. The performance of the CNT emitter under realistic operating conditions is compared with state-of-the-art electron point sources. It is demonstrated that the reduced brightness is a function of the tunneling parameter, a measure of the energy spread at low temperatures, only, independent of the geometry of the emitter

  4. Clinical application of electron beam computed tomography in intravenous three-dimensional coronary angiography

    International Nuclear Information System (INIS)

    Luo Chufan; Du Zhimin; Hu Chengheng; Li Yi; Zeng Wutao; Ma Hong; Li Xiangmin; Zhou Xuhui

    2002-01-01

    Objective: To investigate the clinical application of intravenous three-dimensional coronary angiography using electron beam computed tomography (EBCT) as compared with selective coronary angiography. Methods: Intravenous EBCT and selective coronary angiography were performed during the same period in 38 patients. The value of EBCT angiography for diagnosing coronary artery disease was evaluated. Results: The number of coronary arteries adequately evaluated by EBCT angiography was 134 out of 152 vessels (88.2%), including 100% of the left main coronary arteries, 94.7% of the left anterior descending arteries, 81.6% of the left circumflex arteries and 76.3% of the right coronary arteries. Significantly more left main and left anterior descending coronary arteries were adequately visualized than left circumflex and right coronary arteries (P < 0.05). The sensitivity, specificity, accuracy, and positive and negative predictive values of EBCT angiography for diagnosing coronary artery disease were 88.0%, 84.6%, 86.8%, 91.7% and 78.6%, respectively. Of the 38 arteries with ≥ 50% stenosis, EBCT underestimated 8, for a sensitivity of 78.9%. Of the 96 arteries without significant stenosis, EBCT overestimated 7 stenoses, for a specificity of 92.7%. Conclusion: Intravenous electron beam computed tomographic coronary angiography is a promising noninvasive method for diagnosing coronary artery disease.

  5. Electronic Devices, Methods, and Computer Program Products for Selecting an Antenna Element Based on a Wireless Communication Performance Criterion

    DEFF Research Database (Denmark)

    2014-01-01

    A method of operating an electronic device includes providing a plurality of antenna elements, evaluating a wireless communication performance criterion to obtain a performance evaluation, and assigning a first one of the plurality of antenna elements to a main wireless signal reception...... and transmission path and a second one of the plurality of antenna elements to a diversity wireless signal reception path based on the performance evaluation....

  6. Department of Energy research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-08-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models, the execution of which is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex, and consequently it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  7. Quantum computation in semiconductor quantum dots of electron-spin asymmetric anisotropic exchange

    International Nuclear Information System (INIS)

    Hao Xiang; Zhu Shiqun

    2007-01-01

    The universal quantum computation is obtained when there exists asymmetric anisotropic exchange between electron spins in coupled semiconductor quantum dots. The asymmetric Heisenberg model can be transformed into the isotropic model through the control of two local unitary rotations for the realization of essential quantum gates. The rotations on each qubit are symmetrical and depend on the strength and orientation of asymmetric exchange. The implementation of the axially symmetric local magnetic fields can assist the construction of quantum logic gates in anisotropic coupled quantum dots. This proposal can efficiently use each physical electron spin as a logical qubit in the universal quantum computation

  8. Current algorithms for computed electron beam dose planning

    International Nuclear Information System (INIS)

    Brahme, A.

    1985-01-01

    Two- and sometimes three-dimensional computer algorithms for electron beam irradiation are capable of taking all irregularities of the body cross-section and the properties of the various tissues into account. This is achieved by dividing the incoming broad beams into a number of narrow pencil beams, the penetration of which can be described by essentially one-dimensional formalisms. The constituent pencil beams are most often described by Gaussian, experimentally or theoretically derived distributions. The accuracy of different dose planning algorithms is discussed in some detail based on their ability to take the different physical interaction processes of high energy electrons into account. It is shown that those programs that take the deviations from the simple Gaussian model into account give the best agreement with experimental results. With such programs a dosimetric relative accuracy of about 5% is generally achieved except in the most complex inhomogeneity configurations. Finally, the present limitations and possible future developments of electron dose planning are discussed. (orig.)
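
    For illustration only, the core of a Gaussian pencil-beam algorithm can be sketched as a superposition of Gaussian kernels. The sketch below builds a lateral broad-beam profile from equally spaced pencil beams; all parameters are invented, and none of the tissue or inhomogeneity corrections discussed in the record are included.

```python
import numpy as np

def broad_beam_profile(x, pencil_positions, sigma):
    """Sum Gaussian pencil-beam kernels to build a broad-beam lateral
    dose profile, the basic idea behind Gaussian pencil-beam algorithms.

    x                -- lateral positions where dose is evaluated (cm)
    pencil_positions -- centres of the constituent pencil beams (cm)
    sigma            -- lateral spread of one pencil beam at this depth (cm)
    """
    profile = np.zeros_like(x)
    for x0 in pencil_positions:
        profile += np.exp(-0.5 * ((x - x0) / sigma) ** 2)
    return profile / profile.max()      # normalise to the central axis

x = np.linspace(-8, 8, 201)
pencils = np.linspace(-3, 3, 61)        # a 6 cm wide field
print(broad_beam_profile(x, pencils, sigma=0.7)[100])   # value at x = 0
```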

  9. Optical interconnection networks for high-performance computing systems

    International Nuclear Information System (INIS)

    Biberman, Aleksandr; Bergman, Keren

    2012-01-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. (review article)

  10. Debugging a high performance computing program

    Science.gov (United States)

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
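
    A minimal sketch of the grouping step the patent describes, assuming one already has, for each thread, the list of calling-instruction addresses; the thread IDs and addresses below are made up.

```python
from collections import defaultdict

def group_threads_by_call_stack(call_addresses):
    """Group thread IDs that share the same tuple of calling-instruction
    addresses, so outlier (potentially defective) threads stand out.

    call_addresses maps thread id -> tuple of return addresses.
    """
    groups = defaultdict(list)
    for tid, stack in call_addresses.items():
        groups[stack].append(tid)
    return dict(groups)

# Hypothetical snapshot: threads 0-2 are at the same point, thread 3 is not.
snapshot = {
    0: (0x400A10, 0x400B24),
    1: (0x400A10, 0x400B24),
    2: (0x400A10, 0x400B24),
    3: (0x400A10, 0x400C90),   # the odd one out, worth inspecting
}
for stack, tids in group_threads_by_call_stack(snapshot).items():
    print([hex(a) for a in stack], "->", tids)
```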

  11. COMPUTATIONAL ELECTROCHEMISTRY: AQUEOUS ONE-ELECTRON OXIDATION POTENTIALS FOR SUBSTITUTED ANILINES

    Science.gov (United States)

    Semiempirical molecular orbital theory and density functional theory are used to compute one-electron oxidation potentials for aniline and a set of 21 mono- and di-substituted anilines in aqueous solution. Linear relationships between theoretical predictions and experiment are co...

  12. Management and Valorization of Electronic and Computer Wastes in ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    So far, little is known about the extent of the problem and there is little research available to serve as a basis for persuading decision-makers to address it. This project will examine the issue of electronic and computer waste and its management, and endeavor to identify feasible and sustainable strategies for valorizing such ...

  13. Examining Big Brother's Purpose for Using Electronic Performance Monitoring

    Science.gov (United States)

    Bartels, Lynn K.; Nordstrom, Cynthia R.

    2012-01-01

    We examined whether the reason offered for electronic performance monitoring (EPM) influenced participants' performance, stress, motivation, and satisfaction. Participants performed a data-entry task in one of five experimental conditions. In one condition, participants were not electronically monitored. In the remaining conditions, participants…

  14. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight about the performance of the three computers when used to solve large-scale scientific problems...

  15. Misleading Performance Claims in Parallel Computations

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.

    2009-05-29

    In a previous humorous note entitled 'Twelve Ways to Fool the Masses,' I outlined twelve common ways in which performance figures for technical computer systems can be distorted. In this paper and accompanying conference talk, I give a reprise of these twelve 'methods' and give some actual examples that have appeared in peer-reviewed literature in years past. I then propose guidelines for reporting performance, the adoption of which would raise the level of professionalism and reduce the level of confusion, not only in the world of device simulation but also in the larger arena of technical computing.

  16. Teaching strategies applied to teaching computer networks in Engineering in Telecommunications and Electronics

    Directory of Open Access Journals (Sweden)

    Elio Manuel Castañeda-González

    2016-07-01

    Full Text Available Because of the large impact that computer networks have today, their study in related fields such as Telecommunications and Electronics Engineering initially holds great appeal for the student. However, as the content deepens and lacks a strong practical component, this interest can decrease considerably. This paper proposes the use of teaching strategies and analogies, media and interactive applications that enhance the teaching of the computer networks discipline and encourage its study. It starts from an analysis of how the discipline is currently taught, followed by a description of each of these strategies and its respective contribution to student learning.

  17. High-performance electronics for time-of-flight PET systems

    International Nuclear Information System (INIS)

    Choong, W-S; Peng, Q; Vu, C Q; Turko, B T; Moses, W W

    2013-01-01

    We have designed and built a high-performance readout electronics system for time-of-flight positron emission tomography (TOF PET) cameras. The electronics architecture is based on the electronics for a commercial whole-body PET camera (Siemens/CPS Cardinal electronics), modified to improve the timing performance. The fundamental contributions in the electronics that can limit the timing resolution include the constant fraction discriminator (CFD), which converts the analog electrical signal from the photo-detector to a digital signal whose leading edge is time-correlated with the input signal, and the time-to-digital converter (TDC), which provides a time stamp for the CFD output. Coincident events are identified by digitally comparing the values of the time stamps. In the Cardinal electronics, the front-end processing is performed by an Analog subsection board, which has two application-specific integrated circuits (ASICs), each servicing a PET block detector module. The ASIC has a built-in CFD and TDC. We found that a significant degradation in the timing resolution comes from the ASIC's CFD and TDC. Therefore, we have designed and built an improved Analog subsection board that replaces the ASIC's CFD and TDC with a high-performance CFD (made with discrete components) and TDC (using the CERN high-performance TDC ASIC). The improved Analog subsection board is used in a custom single-ring LSO-based TOF PET camera. The electronics system achieves a timing resolution of 60 ps FWHM. Prototype TOF detector modules are read out with the electronics system and give coincidence timing resolutions of 259 ps FWHM and 156 ps FWHM for detector modules coupled to LSO and LaBr3 crystals, respectively.

  18. Software Systems for High-performance Quantum Computing

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; Britt, Keith A [ORNL

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present quantum programming and execution models, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  19. HPCToolkit: performance tools for scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Tallent, N; Mellor-Crummey, J; Adhianto, L; Fagan, M; Krentel, M [Department of Computer Science, Rice University, Houston, TX 77005 (United States)

    2008-07-15

    As part of the U.S. Department of Energy's Scientific Discovery through Advanced Computing (SciDAC) program, science teams are tackling problems that require simulation and modeling on petascale computers. As part of activities associated with the SciDAC Center for Scalable Application Development Software (CScADS) and the Performance Engineering Research Institute (PERI), Rice University is building software tools for performance analysis of scientific applications on the leadership-class platforms. In this poster abstract, we briefly describe the HPCToolkit performance tools and how they can be used to pinpoint bottlenecks in SPMD and multi-threaded parallel codes. We demonstrate HPCToolkit's utility by applying it to two SciDAC applications: the S3D code for simulation of turbulent combustion and the MFDn code for ab initio calculations of microscopic structure of nuclei.

  20. HPCToolkit: performance tools for scientific computing

    International Nuclear Information System (INIS)

    Tallent, N; Mellor-Crummey, J; Adhianto, L; Fagan, M; Krentel, M

    2008-01-01

    As part of the U.S. Department of Energy's Scientific Discovery through Advanced Computing (SciDAC) program, science teams are tackling problems that require simulation and modeling on petascale computers. As part of activities associated with the SciDAC Center for Scalable Application Development Software (CScADS) and the Performance Engineering Research Institute (PERI), Rice University is building software tools for performance analysis of scientific applications on the leadership-class platforms. In this poster abstract, we briefly describe the HPCToolkit performance tools and how they can be used to pinpoint bottlenecks in SPMD and multi-threaded parallel codes. We demonstrate HPCToolkit's utility by applying it to two SciDAC applications: the S3D code for simulation of turbulent combustion and the MFDn code for ab initio calculations of microscopic structure of nuclei

  1. Computer task performance by subjects with Duchenne muscular dystrophy.

    Science.gov (United States)

    Malheiros, Silvia Regina Pinheiro; da Silva, Talita Dias; Favero, Francis Meire; de Abreu, Luiz Carlos; Fregni, Felipe; Ribeiro, Denise Cardoso; de Mello Monteiro, Carlos Bandeira

    2016-01-01

    Two specific objectives were established to quantify computer task performance among people with Duchenne muscular dystrophy (DMD). First, we compared simple computational task performance between subjects with DMD and age-matched typically developing (TD) subjects. Second, we examined correlations between the ability of subjects with DMD to learn the computational task and their motor functionality, age, and initial task performance. The study included 84 individuals (42 with DMD, mean age of 18±5.5 years, and 42 age-matched controls). They executed a computer maze task; all participants performed the acquisition (20 attempts) and retention (five attempts) phases, repeating the same maze. A different maze was used to verify transfer performance (five attempts). The Motor Function Measure Scale was applied, and the results were compared with maze task performance. In the acquisition phase, a significant decrease was found in movement time (MT) between the first and last acquisition block, but only for the DMD group. For the DMD group, MT during transfer was shorter than during the first acquisition block, indicating improvement from the first acquisition block to transfer. In addition, the TD group showed shorter MT than the DMD group across the study. DMD participants improved their performance after practicing a computational task; however, the difference in MT was present in all attempts among DMD and control subjects. Computational task improvement was positively influenced by the initial performance of individuals with DMD. In turn, the initial performance was influenced by their distal functionality but not their age or overall functionality.

  2. Human performance models for computer-aided engineering

    Science.gov (United States)

    Elkind, Jerome I. (Editor); Card, Stuart K. (Editor); Hochberg, Julian (Editor); Huey, Beverly Messick (Editor)

    1989-01-01

    This report discusses a topic important to the field of computational human factors: models of human performance and their use in computer-based engineering facilities for the design of complex systems. It focuses on a particular human factors design problem -- the design of cockpit systems for advanced helicopters -- and on a particular aspect of human performance -- vision and related cognitive functions. By focusing in this way, the authors were able to address the selected topics in some depth and develop findings and recommendations that they believe have application to many other aspects of human performance and to other design domains.

  3. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  4. The Need for Optical Means as an Alternative for Electronic Computing

    Science.gov (United States)

    Adbeldayem, Hossin; Frazier, Donald; Witherow, William; Paley, Steve; Penn, Benjamin; Bank, Curtis; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    An increasing demand for faster computers is growing rapidly in response to the fast-growing Internet, space communication, and robotics industries. Unfortunately, Very Large Scale Integration technology is approaching its fundamental limits, beyond which devices will be unreliable. Optical interconnections and optical integrated circuits are strongly believed to provide the way out of the extreme limitations imposed by conventional electronics on the growth of speed and complexity of present-day computation. This paper demonstrates two ultra-fast, all-optical logic gates and a high-density storage medium, which are essential components in building the future optical computer.

  5. Simulation of electronic structure Hamiltonians in a superconducting quantum computer architecture

    Energy Technology Data Exchange (ETDEWEB)

    Kaicher, Michael; Wilhelm, Frank K. [Theoretical Physics, Saarland University, 66123 Saarbruecken (Germany); Love, Peter J. [Department of Physics, Haverford College, Haverford, Pennsylvania 19041 (United States)

    2015-07-01

    Quantum chemistry has become one of the most promising applications within the field of quantum computation. Simulating the electronic structure Hamiltonian (ESH) in the Bravyi-Kitaev (BK)-Basis to compute the ground state energies of atoms/molecules reduces the number of qubit operations needed to simulate a single fermionic operation to O(log(n)) as compared to O(n) in the Jordan-Wigner-Transformation. In this work we will present the details of the BK-Transformation, show an example of implementation in a superconducting quantum computer architecture and compare it to the most recent quantum chemistry algorithms suggesting a constant overhead.

  6. Performative Computation-aided Design Optimization

    Directory of Open Access Journals (Sweden)

    Ming Tang

    2012-12-01

    Full Text Available This article discusses a collaborative research and teaching project between the University of Cincinnati, Perkins+Will’s Tech Lab, and the University of North Carolina Greensboro. The primary investigation focuses on the simulation, optimization, and generation of architectural designs using performance-based computational design approaches. The projects examine various design methods, including relationships between building form, performance and the use of proprietary software tools for parametric design.

  7. Using the Electronic Industry Code of Conduct to Evaluate Green Supply Chain Management: An Empirical Study of Taiwan’s Computer Industry

    Directory of Open Access Journals (Sweden)

    Ching-Ching Liu

    2015-03-01

    Full Text Available Electronics companies throughout Asia recognize the benefits of Green Supply Chain Management (GSCM) for gaining competitive advantage. A large majority of electronics companies in Taiwan have recently adopted the Electronic Industry Citizenship Coalition (EICC) Code of Conduct for defining and managing their social and environmental responsibilities throughout their supply chains. We surveyed 106 Tier 1 suppliers to the Taiwanese computer industry to determine their environmental performance using the EICC Code of Conduct (EICC Code) and performed Analysis of Variance (ANOVA) on the 63/106 questionnaire responses collected. We tested the results to determine whether differences in product type, geographic area, and supplier size correlate with different levels of environmental performance. To our knowledge, this is the first study to analyze questionnaire data on supplier adoption to optimize the implementation of GSCM. The results suggest that characteristic classification of suppliers could be employed to enhance the efficiency of GSCM.
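
    As a hedged illustration of the statistical test used in the study (not the authors' actual data or code), a one-way ANOVA across hypothetical supplier groups can be run with SciPy as follows.

```python
from scipy import stats

# Illustrative only: three made-up groups of supplier environmental-performance
# scores (e.g. split by supplier size), compared with one-way ANOVA.
small  = [62, 58, 71, 65, 60]
medium = [70, 74, 68, 72, 69]
large  = [80, 77, 83, 79, 81]

f_stat, p_value = stats.f_oneway(small, medium, large)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```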

  8. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1991-03-15

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour.

  9. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour

  10. Software Applications on the Peregrine System | High-Performance Computing

    Science.gov (United States)

    Excerpt from the Peregrine application software list (name, category, description; several entries are truncated in the record): General Algebraic Modeling System (GAMS): statistics and analysis; high-level modeling system for mathematical programming. Gurobi Optimizer: statistics and analysis; solver for mathematical programming. LAMMPS: chemistry and ...; ... reactivities, and vibrational, electronic and NMR spectra. R Statistical Computing Environment: statistics and ...

  11. Concepts and techniques: Active electronics and computers in safety-critical accelerator operation

    International Nuclear Information System (INIS)

    Frankel, R.S.

    1995-01-01

    The Relativistic Heavy Ion Collider (RHIC) under construction at Brookhaven National Laboratory, requires an extensive Access Control System to protect personnel from Radiation, Oxygen Deficiency and Electrical hazards. In addition, the complicated nature of operation of the Collider as part of a complex of other Accelerators necessitates the use of active electronic measurement circuitry to ensure compliance with established Operational Safety Limits. Solutions were devised which permit the use of modern computer and interconnections technology for Safety-Critical applications, while preserving and enhancing, tried and proven protection methods. In addition a set of Guidelines, regarding required performance for Accelerator Safety Systems and a Handbook of design criteria and rules were developed to assist future system designers and to provide a framework for internal review and regulation

  12. Concepts and techniques: Active electronics and computers in safety-critical accelerator operation

    Energy Technology Data Exchange (ETDEWEB)

    Frankel, R.S.

    1995-12-31

    The Relativistic Heavy Ion Collider (RHIC) under construction at Brookhaven National Laboratory, requires an extensive Access Control System to protect personnel from Radiation, Oxygen Deficiency and Electrical hazards. In addition, the complicated nature of operation of the Collider as part of a complex of other Accelerators necessitates the use of active electronic measurement circuitry to ensure compliance with established Operational Safety Limits. Solutions were devised which permit the use of modern computer and interconnections technology for Safety-Critical applications, while preserving and enhancing, tried and proven protection methods. In addition a set of Guidelines, regarding required performance for Accelerator Safety Systems and a Handbook of design criteria and rules were developed to assist future system designers and to provide a framework for internal review and regulation.

  13. High performance parallel computers for science

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1989-01-01

    This paper reports that Fermilab's Advanced Computer Program (ACP) has been developing cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor, has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 Mflops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter and intra crate communication. The crates are connected in a hypercube. Site oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction

  14. An approach to first principles electronic structure calculation by symbolic-numeric computation

    Directory of Open Access Journals (Sweden)

    Akihito Kikuchi

    2013-04-01

    Full Text Available There is a wide variety of electronic structure calculations that cooperate with symbolic computation. The main purpose of the latter is to play an auxiliary (but not unimportant) role for the former. In the field of quantum physics [1-9], researchers sometimes have to handle complicated mathematical expressions whose derivation seems almost beyond human power. Thus one resorts to the intensive use of computers, namely, symbolic computation [10-16]. Examples of this can be seen in various topics: atomic energy levels, molecular dynamics, molecular energy and spectra, collision and scattering, lattice spin models and so on [16]. How to obtain molecular integrals analytically, or how to manipulate complex formulas in many-body interactions, is one such problem. In the former, when one uses a special atomic basis for a specific purpose, it may sometimes be very difficult to express the integrals as combinations of already known analytic functions. In the latter, one must rearrange a number of creation and annihilation operators in a suitable order and calculate the analytical expectation value. It is usual that a quantitative and massive computation follows a symbolic one; for the convenience of the numerical computation, it is necessary to reduce a complicated analytic expression into a tractable and computable form. This is the main motive for introducing symbolic computation as a forerunner of the numerical one, and their collaboration has achieved considerable success. The present work should be classified as one such trial. Meanwhile, the use of symbolic computation in the present work is not limited to an indirect and auxiliary role for the numerical computation. The present work is also applicable to a direct and quantitative estimation of the electronic structure, skipping conventional computational methods.

  15. Electronic circuit design with HEP computational tools

    International Nuclear Information System (INIS)

    Vaz, Mario

    1996-01-01

    CPSPICE is an electronic circuit statistical simulation program developed to run in a parallel environment under the UNIX operating system and the TCP/IP communications protocol, using CPS - Cooperative Processes Software, the SPICE program and the CERNLIB software package. It is part of a set of tools being developed, intended to help electronic engineers design, model and simulate complex systems and circuits for High Energy Physics detectors, based on statistical methods, using the same software and methodology used by HEP physicists for data analysis. CPSPICE simulates electronic circuits by the Monte Carlo method, through several different processes running SPICE simultaneously on UNIX parallel computers or workstation farms. Data transfer between CPS processes for a modified version of SPICE2G6 is done through RAM memory, but can also be done through hard disk files if no source files are available for the simulator, and for larger simulation output files. Simulation results are written to an HBOOK file as an NTUPLE, to be examined by HBOOK in batch mode or graphically, and analyzed by the statistical procedures available. The HBOOK file can be stored on hard disk for small amounts of data, or in an Exabyte tape file for large amounts of data. HEP tools also help with circuit or component modeling, like the MINUIT program from CERNLIB, which implements the Nelder and Mead simplex and gradient (with or without derivatives) algorithms and can be used for design optimization. This paper presents the CPSPICE program implementation. The scheme adopted is suitable for parallelizing other electronic circuit simulators. (author)
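
    CPSPICE itself drives SPICE jobs in parallel through CPS; the sketch below only illustrates the underlying Monte Carlo idea, i.e. sample component values from their tolerance distributions, evaluate the circuit response for each sample, and examine the resulting statistics, using a trivial analytic RC filter in place of a real SPICE deck. All values are invented.

```python
import numpy as np

rng = np.random.default_rng(seed=1)
n_runs = 10_000

# Nominal 1 kOhm / 100 nF RC low-pass filter with 5 % and 10 % Gaussian tolerances.
R = rng.normal(1e3, 0.05 * 1e3, n_runs)
C = rng.normal(100e-9, 0.10 * 100e-9, n_runs)

f_cutoff = 1.0 / (2 * np.pi * R * C)          # -3 dB frequency of each sample
nominal = 1.0 / (2 * np.pi * 1e3 * 100e-9)    # about 1591.5 Hz

print(f"mean = {f_cutoff.mean():.1f} Hz, std = {f_cutoff.std():.1f} Hz")
print("fraction outside +/-15 % of nominal:",
      np.mean(np.abs(f_cutoff - nominal) > 0.15 * nominal))
```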

  16. COMPUTATIONAL EFFICIENCY OF A MODIFIED SCATTERING KERNEL FOR FULL-COUPLED PHOTON-ELECTRON TRANSPORT PARALLEL COMPUTING WITH UNSTRUCTURED TETRAHEDRAL MESHES

    Directory of Open Access Journals (Sweden)

    JONG WOON KIM

    2014-04-01

    In this paper, we introduce a modified scattering kernel approach to avoid the unnecessarily repeated calculations involved with the scattering source calculation, and used it with parallel computing to effectively reduce the computation time. Its computational efficiency was tested for three-dimensional full-coupled photon-electron transport problems using our computer program which solves the multi-group discrete ordinates transport equation by using the discontinuous finite element method with unstructured tetrahedral meshes for complicated geometrical problems. The numerical tests show that we can improve speed up to 17∼42 times for the elapsed time per iteration using the modified scattering kernel, not only in the single CPU calculation but also in the parallel computing with several CPUs.

  17. Quantum Accelerators for High-Performance Computing Systems

    OpenAIRE

    Britt, Keith A.; Mohiyaddin, Fahd A.; Humble, Travis S.

    2017-01-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenges the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantu...

  18. Stretchable, Twisted Conductive Microtubules for Wearable Computing, Robotics, Electronics, and Healthcare.

    Science.gov (United States)

    Do, Thanh Nho; Visell, Yon

    2017-05-11

    Stretchable and flexible multifunctional electronic components, including sensors and actuators, have received increasing attention in robotics, electronics, wearable, and healthcare applications. Despite advances, it has remained challenging to design analogs of many electronic components to be highly stretchable, to be efficient to fabricate, and to provide control over electronic performance. Here, we describe highly elastic sensors and interconnects formed from thin, twisted conductive microtubules. These devices consist of twisted assemblies of thin, highly stretchable (>400%) elastomer tubules filled with liquid conductor (eutectic gallium indium, EGaIn), and fabricated using a simple roller coating process. As we demonstrate, these devices can operate as multimodal sensors for strain, rotation, contact force, or contact location. We also show that, through twisting, it is possible to control their mechanical performance and electronic sensitivity. In extensive experiments, we have evaluated the capabilities of these devices, and have prototyped an array of applications in several domains of stretchable and wearable electronics. These devices provide a novel, low cost solution for high performance stretchable electronics with broad applications in industry, healthcare, and consumer electronics, to emerging product categories of high potential economic and societal significance.

  19. Nanoscale RRAM-based synaptic electronics: toward a neuromorphic computing device.

    Science.gov (United States)

    Park, Sangsu; Noh, Jinwoo; Choo, Myung-Lae; Sheri, Ahmad Muqeem; Chang, Man; Kim, Young-Bae; Kim, Chang Jung; Jeon, Moongu; Lee, Byung-Geun; Lee, Byoung Hun; Hwang, Hyunsang

    2013-09-27

    Efforts to develop scalable learning algorithms for implementation of networks of spiking neurons in silicon have been hindered by the considerable footprints of learning circuits, which grow as the number of synapses increases. Recent developments in nanotechnologies provide an extremely compact device with low-power consumption. In particular, nanoscale resistive switching devices (resistive random-access memory (RRAM)) are regarded as a promising solution for implementation of biological synapses due to their nanoscale dimensions, capacity to store multiple bits and the low energy required to operate distinct states. In this paper, we report the fabrication, modeling and implementation of nanoscale RRAM with multi-level storage capability for an electronic synapse device. In addition, we first experimentally demonstrate the learning capabilities and predictable performance by a neuromorphic circuit composed of a nanoscale 1 kbit RRAM cross-point array of synapses and complementary metal-oxide-semiconductor neuron circuits. These developments open up possibilities for the development of ubiquitous ultra-dense, ultra-low-power cognitive computers.

  20. Nanoscale RRAM-based synaptic electronics: toward a neuromorphic computing device

    International Nuclear Information System (INIS)

    Park, Sangsu; Noh, Jinwoo; Choo, Myung-lae; Sheri, Ahmad Muqeem; Jeon, Moongu; Lee, Byung-Geun; Lee, Byoung Hun; Chang, Man; Kim, Young-Bae; Kim, Chang Jung; Hwang, Hyunsang

    2013-01-01

    Efforts to develop scalable learning algorithms for implementation of networks of spiking neurons in silicon have been hindered by the considerable footprints of learning circuits, which grow as the number of synapses increases. Recent developments in nanotechnologies provide an extremely compact device with low-power consumption. In particular, nanoscale resistive switching devices (resistive random-access memory (RRAM)) are regarded as a promising solution for implementation of biological synapses due to their nanoscale dimensions, capacity to store multiple bits and the low energy required to operate distinct states. In this paper, we report the fabrication, modeling and implementation of nanoscale RRAM with multi-level storage capability for an electronic synapse device. In addition, we first experimentally demonstrate the learning capabilities and predictable performance by a neuromorphic circuit composed of a nanoscale 1 kbit RRAM cross-point array of synapses and complementary metal–oxide–semiconductor neuron circuits. These developments open up possibilities for the development of ubiquitous ultra-dense, ultra-low-power cognitive computers. (paper)

  1. A model and computer code for the Monte Carlo simulation of relativistic electron and positron penetration through matter

    International Nuclear Information System (INIS)

    Ismail, M.; Liljequist, D.

    1986-10-01

    In the present model, the treatment of elastic scattering is based on the similarity of multiple scattering processes with equal transport mean free path /LAMBDA/sub(tr). Elastic scattering events are separated by an artificially enlarged mean free path. In such events, scattering is optionally performed either by means of a single, energy-dependent scattering angle, or by means of a scattering angle distribution of the same form as the screened Rutherford cross section, but with an artificial screening factor. The physically correct /LAMBDA/sub(tr) value is obtained by appropriate choice of the scattering angle or screening factor, respectively. We find good agreement with experimental transmission and with energy loss distributions. The Rutherford-like model gives good agreement with experimental angular distributions even for the penetration of very thin layers. The treatment of electron energy loss is based on the partial CSDA method: energy losses W > WMINSE are treated as discrete electron-electron or positron-electron scattering events. Similarly, bremsstrahlung photon energies W > WMINR are treated as discrete events. The sensitivity of the model to the parameters WMINSE and WMINR is studied. WMINR can, in practice, be made negligibly small, and WMINSE can, without any excessive computer time, be made small enough to give results in good agreement with experiment and with computations based on the Landau theory of straggling. Using this model, we study some of the characteristic features of relativistic electron transmission, energy loss distributions, straggling, angular distributions and trajectories. (authors)
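
    A highly simplified sketch of the partial CSDA idea described above: energy losses below a cutoff are deposited continuously along each step, while losses above the cutoff are sampled as discrete events. The stopping power, event rate and sampling distribution below are stand-ins chosen for illustration, not the physics of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def transport_electron(E0, w_min, step=0.01):
    """Track one electron's energy with a mixed scheme: a continuous
    (CSDA-like) loss per step for sub-threshold transfers plus randomly
    sampled discrete losses above the threshold w_min.  All units and
    rates are arbitrary toy values.
    """
    E, path = E0, 0.0
    restricted_stopping_power = 2.0          # continuous loss per unit length
    discrete_rate = 5.0                      # discrete events per unit length
    while E > w_min:
        path += step
        E -= restricted_stopping_power * step          # continuous part
        if rng.random() < discrete_rate * step:        # discrete part
            E -= rng.uniform(w_min, max(w_min, E))     # hard collision loss
    return path

ranges = [transport_electron(E0=10.0, w_min=0.1) for _ in range(1000)]
print(f"mean path length: {np.mean(ranges):.2f} (arbitrary units)")
```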

  2. Radiation defects in Te-implanted germanium. Electron microscopy and computer simulation studies

    International Nuclear Information System (INIS)

    Kalitzova, M.G.; Karpuzov, D.S.; Pashov, N.K.

    1985-01-01

    Direct observation of radiation damage induced by heavy ion implantation in crystalline germanium by means of high-resolution electron microscopy is reported. The dark-field lattice imaging mode is used, under conditions suitable for object-like imaging. Conventional TEM is used for estimating the efficiency of creating visibly damaged regions. Heavy ion damage clusters with three types of inner structure are observed: with near-perfect crystalline cores, and with metastable and stable amorphous cores. The MARLOWE computer code is used to simulate the atomic collision cascades and to obtain the lateral spread distributions of point defects created. A comparison of high-resolution electron microscopy (HREM) with computer simulation results shows encouraging agreement for the average cluster dimensions and for the lateral spread of vacancies and interstitials. (author)

  3. Educational Systems Design Implications of Electronic Publishing.

    Science.gov (United States)

    Romiszowski, Alexander J.

    1994-01-01

    Discussion of electronic publishing focuses on the four main purposes of media in general: communication, entertainment, motivation, and education. Highlights include electronic journals and books; hypertext; user control; computer graphics and animation; electronic games; virtual reality; multimedia; electronic performance support;…

  4. Multicore Challenges and Benefits for High Performance Scientific Computing

    Directory of Open Access Journals (Sweden)

    Ida M.B. Nielsen

    2008-01-01

    Full Text Available Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
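
    The message-passing half of the hybrid model needs a distributed-memory launcher (e.g. MPI) and is not shown here; as a small illustration of the shared-memory half, the sketch below splits a matrix multiply across worker threads, each thread owning a block of rows. All sizes and names are arbitrary.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def blocked_matmul(A, B, n_threads=4):
    """Multiply A @ B by assigning horizontal blocks of A to worker
    threads -- the shared-memory half of a hybrid message-passing/
    multi-threading scheme, where each rank would additionally own a
    distributed block of the global matrices.
    """
    n = A.shape[0]
    bounds = np.linspace(0, n, n_threads + 1, dtype=int)
    C = np.empty((n, B.shape[1]))

    def work(i):
        lo, hi = bounds[i], bounds[i + 1]
        C[lo:hi] = A[lo:hi] @ B        # each thread writes its own rows

    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        list(pool.map(work, range(n_threads)))
    return C

A, B = np.random.rand(512, 512), np.random.rand(512, 512)
print(np.allclose(blocked_matmul(A, B), A @ B))
```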

  5. All-optical reservoir computing.

    Science.gov (United States)

    Duport, François; Schneider, Bendix; Smerieri, Anteo; Haelterman, Marc; Massar, Serge

    2012-09-24

    Reservoir Computing is a novel computing paradigm that uses a nonlinear recurrent dynamical system to carry out information processing. Recent electronic and optoelectronic Reservoir Computers based on an architecture with a single nonlinear node and a delay loop have shown performance on standardized tasks comparable to state-of-the-art digital implementations. Here we report an all-optical implementation of a Reservoir Computer, made of off-the-shelf components for optical telecommunications. It uses the saturation of a semiconductor optical amplifier as nonlinearity. The present work shows that, within the Reservoir Computing paradigm, all-optical computing with state-of-the-art performance is possible.
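
    To make the single-nonlinear-node-plus-delay-loop architecture concrete, here is a heavily simplified software sketch of such a reservoir: a tanh nonlinearity stands in for the saturation of the semiconductor optical amplifier, the coupling between virtual nodes is idealized, and the task (recalling the input three steps back) and all parameter values are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 50, 2000                      # virtual nodes, time steps
mask = rng.uniform(-1, 1, N)         # input mask applied along the delay line
u = rng.uniform(0, 0.5, T)           # scalar input sequence
target = np.roll(u, 3)               # toy task: recall the input 3 steps back

x = np.zeros(N)                      # state of the virtual nodes
states = np.zeros((T, N))
for t in range(T):
    prev = x.copy()
    for i in range(N):               # nodes processed sequentially in the loop
        x[i] = np.tanh(0.8 * prev[i] + 0.2 * prev[i - 1] + 0.5 * mask[i] * u[t])
    states[t] = x

# Linear readout trained by ridge regression on the collected states.
X = np.hstack([states, np.ones((T, 1))])
W = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N + 1), X.T @ target)
pred = X @ W
print("training NMSE:", np.mean((pred - target) ** 2) / np.var(target))
```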

  6. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    Science.gov (United States)

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research, and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as the CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the
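
    The single-number FLOPS figure questioned above is typically estimated from a dense kernel; a minimal sketch of such a measurement (which indeed says nothing about memory, disk or network behaviour) might look like this:

```python
# Rough GFLOP/s estimate from a dense matrix multiply -- the kind of
# single-number benchmark the abstract argues is insufficient on its own.
import time
import numpy as np

n = 2000
A, B = np.random.rand(n, n), np.random.rand(n, n)

t0 = time.perf_counter()
C = A @ B
elapsed = time.perf_counter() - t0

flops = 2.0 * n**3                   # multiply-adds in an n x n matmul
print(f"~{flops / elapsed / 1e9:.1f} GFLOP/s (ignores memory, I/O, network)")
```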

  7. High-Performance Java Codes for Computational Fluid Dynamics

    Science.gov (United States)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  8. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  9. Gersch-Rodriguez-Smith computation of deep inelastic electron scattering on 4He

    International Nuclear Information System (INIS)

    Viviani, M.; Kievsky, A.; Rinat, A.S.

    2003-01-01

    We compute cross sections for inclusive scattering of high-energy electrons on 4He, based on the two lowest orders of the Gersch-Rodriguez-Smith series. The required one- and two-particle density matrices are obtained from nonrelativistic 4He wave functions using realistic models for the nucleon-nucleon and three-nucleon interaction. The computed results for E=3.6 GeV agree well with the NE3 SLAC-Virginia data.

  10. Embedded High Performance Scalable Computing Systems

    National Research Council Canada - National Science Library

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, A Lockheed Martin Company and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  11. Convective Performance of Nanofluids in Commercial Electronics Cooling Systems

    International Nuclear Information System (INIS)

    Roberts, N.A.; Walker, D.G.

    2010-01-01

    Nanofluids are stable engineered colloidal suspensions of a small fraction of nanoparticles in a base fluid. Nanofluids have shown great promise as heat transfer fluids over typically used base fluids and fluids with micron sized particles. Suspensions with micron sized particles are known to settle rapidly and cause clogging and damage to the surfaces of pumping and flow equipment. These problems are dramatically reduced in nanofluids. In the current work we investigate the performance of different volume loadings of water-based alumina nanofluids in a commercially available electronics cooling system. The commercially available system is a water block used for liquid cooling of a computational processing unit. The size of the nanoparticles in the study is 20-30 nm. Results show an enhancement in convective heat transfer due to the addition of nanoparticles in the commercial cooling system with volume loadings of nanoparticles up to 1.5% by volume. The enhancement in the convective performance observed is similar to what has been reported in well controlled and understood systems and is commensurate with bulk models. The current nanoparticle suspensions showed visible signs of settling which varied from hours to weeks depending on the size of the particles used.
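
    For reference, one of the classical bulk effective-medium models often used as a baseline for dilute nanofluid thermal conductivity is the Maxwell (Maxwell-Garnett) relation below; the abstract does not state which bulk model was used, so this is only a representative example.

```latex
% Maxwell (Maxwell-Garnett) effective-medium model for a dilute suspension;
% phi is the particle volume fraction, k_p and k_f the particle and base-fluid
% thermal conductivities.
\frac{k_\mathrm{eff}}{k_f}
  = \frac{k_p + 2k_f + 2\varphi\,(k_p - k_f)}
         {k_p + 2k_f - \varphi\,(k_p - k_f)}
```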

  12. ABINIT: Plane-Wave-Based Density-Functional Theory on High Performance Computers

    Science.gov (United States)

    Torrent, Marc

    2014-03-01

    For several years, a continuous effort has been made to adapt electronic structure codes based on Density-Functional Theory to future computing architectures. Among these codes, ABINIT is based on a plane-wave description of the wave functions, which allows systems of any kind to be treated. Porting such a code to petascale architectures poses difficulties related to the many-body nature of the DFT equations. To improve the performance of ABINIT - especially for standard LDA/GGA ground-state and response-function calculations - several strategies have been followed: A full multi-level MPI parallelisation scheme has been implemented, exploiting all possible levels and distributing both computation and memory. It increases the number of distributed processes and could not have been achieved without a strong restructuring of the code. The core algorithm used to solve the eigenproblem ("Locally Optimal Blocked Conjugate Gradient"), a Blocked-Davidson-like algorithm, is based on a distribution of processes combining plane waves and bands. In addition to the distributed-memory parallelization, a full hybrid scheme has been implemented, using standard shared-memory directives (OpenMP/OpenACC) or porting some compute-intensive code sections to Graphics Processing Units (GPU). As no simple performance model exists, the complexity of use has increased; the code efficiency strongly depends on the distribution of processes among the numerous levels. ABINIT is able to predict the performance of several process distributions and automatically choose the most favourable one. On the other hand, a substantial effort has been made to analyse the performance of the code on petascale architectures, showing which sections of code have to be improved; they are all related to matrix algebra (diagonalisation, orthogonalisation). The different strategies employed to improve the code scalability will be described. They are based on an exploration of new diagonalization

  13. Computational Fluid Dynamics (CFD) Computations With Zonal Navier-Stokes Flow Solver (ZNSFLOW) Common High Performance Computing Scalable Software Initiative (CHSSI) Software

    National Research Council Canada - National Science Library

    Edge, Harris

    1999-01-01

    ...), computational fluid dynamics (CFD) 6 project. Under the project, a proven zonal Navier-Stokes solver was rewritten for scalable parallel performance on both shared memory and distributed memory high performance computers...

  14. High performance flexible electronics for biomedical devices.

    Science.gov (United States)

    Salvatore, Giovanni A; Munzenrieder, Niko; Zysset, Christoph; Kinkeldei, Thomas; Petti, Luisa; Troster, Gerhard

    2014-01-01

    Plastic electronics is soft, deformable and lightweight and it is suitable for the realization of devices which can form an intimate interface with the body, be implanted or integrated into textile for wearable and biomedical applications. Here, we present flexible electronics based on amorphous oxide semiconductors (a-IGZO) whose performance can achieve MHz frequency even when bent around hair. We developed an assembly technique to integrate complex electronic functionalities into textile while preserving the softness of the garment. All this and further developments can open up new opportunities in health monitoring, biotechnology and telemedicine.

  15. Enabling high performance computational science through combinatorial algorithms

    International Nuclear Information System (INIS)

    Boman, Erik G; Bozdag, Doruk; Catalyurek, Umit V; Devine, Karen D; Gebremedhin, Assefaw H; Hovland, Paul D; Pothen, Alex; Strout, Michelle Mills

    2007-01-01

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation
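
    As a point of reference for the parallel graph coloring work mentioned above, the serial baseline is the first-fit greedy algorithm; the sketch below is a generic illustration, not CSCAPES code.

```python
# Minimal first-fit greedy coloring, the serial baseline that distance-1
# parallel coloring algorithms build on.
def greedy_coloring(adj):
    """adj: dict mapping vertex -> iterable of neighbours."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:          # smallest color not used by a neighbour
            c += 1
        color[v] = c
    return color

if __name__ == "__main__":
    square = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
    print(greedy_coloring(square))   # a 4-cycle needs only 2 colors
```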

  16. Enabling high performance computational science through combinatorial algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Boman, Erik G [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Bozdag, Doruk [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Catalyurek, Umit V [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Devine, Karen D [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Gebremedhin, Assefaw H [Computer Science and Center for Computational Science, Old Dominion University (United States); Hovland, Paul D [Mathematics and Computer Science Division, Argonne National Laboratory (United States); Pothen, Alex [Computer Science and Center for Computational Science, Old Dominion University (United States); Strout, Michelle Mills [Computer Science, Colorado State University (United States)

    2007-07-15

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation.

  17. Electronic structure of BN-aromatics: Choice of reliable computational tools

    Science.gov (United States)

    Mazière, Audrey; Chrostowska, Anna; Darrigan, Clovis; Dargelos, Alain; Graciaa, Alain; Chermette, Henry

    2017-10-01

    The importance of having reliable calculation tools to interpret and predict the electronic properties of BN-aromatics is directly linked to the growing interest in these very promising new systems in the field of materials science, biomedical research, or energy sustainability. Ionization energy (IE) is one of the most important parameters to approach the electronic structure of molecules. It can be theoretically estimated, but in order to evaluate their persistence and propose the most reliable tools for the evaluation of different electronic properties of existent or only imagined BN-containing compounds, we took as reference experimental values of ionization energies provided by ultra-violet photoelectron spectroscopy (UV-PES) in gas phase—the only technique giving access to the energy levels of filled molecular orbitals. Thus, a set of 21 aromatic molecules containing B-N bonds and B-N-B patterns has been assembled for a comparison between experimental IEs obtained by UV-PES and various theoretical approaches for their estimation. Time-Dependent Density Functional Theory (TD-DFT) methods using B3LYP and long-range corrected CAM-B3LYP functionals are used, combined with the ΔSCF approach, and compared with electron propagator theory such as outer valence Green's function (OVGF, P3) and symmetry adapted cluster-configuration interaction ab initio methods. Direct Kohn-Sham estimation and "corrected" Kohn-Sham estimation are also given. The deviation between experimental and theoretical values is computed for each molecule, and a statistical study is performed over the average and the root mean square for the whole set and sub-sets of molecules. It is shown that (i) ΔSCF+TDDFT(CAM-B3LYP), OVGF, and P3 provide the best agreement with UV-PES values, (ii) a CAM-B3LYP range-separated hybrid functional is significantly better than B3LYP for the purpose, especially for extended conjugated systems, and (iii) the "corrected" Kohn-Sham result is a
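
    As a concrete illustration of the ΔSCF route to vertical ionization energies, the following hypothetical sketch uses PySCF (not the software used in the paper); the molecule, basis set and functional strings are placeholders.

```python
# Hypothetical Delta-SCF vertical IE estimate with PySCF; geometry, basis and
# functional below are placeholders, not values from the paper.
from pyscf import gto, dft

HARTREE_TO_EV = 27.2114

def delta_scf_ie(atom, basis="def2-svp", xc="camb3lyp"):
    # Neutral, closed-shell ground state.
    mol0 = gto.M(atom=atom, basis=basis, charge=0, spin=0)
    mf0 = dft.RKS(mol0)
    mf0.xc = xc
    e_neutral = mf0.kernel()

    # Radical cation at the same geometry (hence a *vertical* IE).
    mol1 = gto.M(atom=atom, basis=basis, charge=1, spin=1)
    mf1 = dft.UKS(mol1)
    mf1.xc = xc
    e_cation = mf1.kernel()

    return (e_cation - e_neutral) * HARTREE_TO_EV

if __name__ == "__main__":
    # A BN-aromatic geometry would go here; water keeps the demo tiny.
    print("vertical IE (eV):",
          delta_scf_ie("O 0 0 0; H 0 0 0.96; H 0 0.93 -0.24"))
```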

  18. Neuromorphic computing enabled by physics of electron spins: Prospects and perspectives

    Science.gov (United States)

    Sengupta, Abhronil; Roy, Kaushik

    2018-03-01

    “Spintronics” refers to the understanding of the physics of electron spin-related phenomena. While most of the significant advancements in this field have been driven primarily by memory, recent research has demonstrated that various facets of the underlying physics of spin transport and manipulation can directly mimic the functionalities of the computational primitives in neuromorphic computation, i.e., the neurons and synapses. Given the potential of these spintronic devices to implement bio-mimetic computations at very low terminal voltages, several spin-device structures have been proposed as the core building blocks of neuromorphic circuits and systems to implement brain-inspired computing. Such an approach is expected to play a key role in circumventing the problems of ever-increasing power dissipation and hardware requirements for implementing neuro-inspired algorithms in conventional digital CMOS technology. Perspectives on spin-enabled neuromorphic computing, its status, challenges, and future prospects are outlined in this review article.

  19. Function Follows Performance in Evolutionary Computational Processing

    DEFF Research Database (Denmark)

    Pasold, Anke; Foged, Isak Worre

    2011-01-01

    As the title ‘Function Follows Performance in Evolutionary Computational Processing’ suggests, this paper explores the potentials of employing multiple design and evaluation criteria within one processing model in order to account for a number of performative parameters desired within varied...

  20. CUDA/GPU Technology : Parallel Programming For High Performance Scientific Computing

    OpenAIRE

    YUHENDRA; KUZE, Hiroaki; JOSAPHAT, Tetuko Sri Sumantyo

    2009-01-01

    [ABSTRACT] Graphics processing units (GPUs), originally designed for computer video cards, have emerged as the most powerful chip in a high-performance workstation. In terms of high-performance computation capabilities, graphics processing units (GPUs) deliver much more powerful performance than conventional CPUs by means of parallel processing. In 2007, the birth of the Compute Unified Device Architecture (CUDA) and CUDA-enabled GPUs by NVIDIA Corporation brought a revolution in general-purpose GPU a...

  1. Computer Self-Efficacy, Computer Anxiety, Performance and Personal Outcomes of Turkish Physical Education Teachers

    Science.gov (United States)

    Aktag, Isil

    2015-01-01

    The purpose of this study is to determine the computer self-efficacy, performance outcome, personal outcome, and affect and anxiety level of physical education teachers. Influence of teaching experience, computer usage and participation of seminars or in-service programs on computer self-efficacy level were determined. The subjects of this study…

  2. Performance of the Antares large area cold cathode electron gun

    International Nuclear Information System (INIS)

    Rosocha, L.A.; Mansfield, C.R.

    1983-01-01

    The performance of the electron gun which supplies ionization for the Antares high-power electron-beam-sustained CO2 laser power amplifier is described. This electron gun is a coaxial cylindrical cold cathode vacuum triode having a total electron aperture area of approximately 9 m². Electrons are extracted from the gun in pulses of 3-6 μs duration, average current densities of 40-60 mA/cm², and electron energies of 450-500 keV. The main areas of discussion in this paper are the performance in terms of grid control, current density balance, and current runaway due to breakdown limitations. Comparison of the experimental results with the predictions of a theoretical model for the electron gun will also be presented

  3. Performance of the Antares large area cold cathode electron gun

    International Nuclear Information System (INIS)

    Rosocha, L.A.; Mansfield, C.R.

    1983-01-01

    The performance of the electron gun which supplies ionization for the Antares high-power electron-beam-sustained CO2-laser power amplifier is described. This electron gun is a coaxial cylindrical cold cathode vacuum triode having a total electron aperture area of approximately 9 m². Electrons are extracted from the gun in pulses of 3 to 6 μs duration, average current densities of 40 to 60 mA/cm², and electron energies of 450 to 500 keV. The main areas of discussion in this paper are the performance in terms of grid control, current-density balance, and current runaway due to breakdown limitations. A comparison of the experimental results with the predictions of a theoretical model for the electron gun is also presented

  4. A high performance scientific cloud computing environment for materials simulations

    Science.gov (United States)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.
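
    The automatic creation of virtual clusters described above can be illustrated, in a very reduced form, with the standard EC2 API; the sketch below uses boto3 with placeholder image, key-pair and instance-type values and is not the SCC toolset itself.

```python
# Hypothetical sketch of launching a small virtual cluster on EC2 with boto3;
# the AMI ID, instance type and key name are placeholders.
import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",    # placeholder scientific VM image
    InstanceType="c5.xlarge",
    MinCount=4, MaxCount=4,             # four nodes for a small MPI cluster
    KeyName="my-keypair",               # placeholder SSH key pair
)
for inst in instances:
    inst.wait_until_running()
    inst.reload()
    print(inst.id, inst.private_ip_address)
```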

  5. Performance of the electron energy-loss spectrometer

    International Nuclear Information System (INIS)

    Tanaka, H.; Huebner, R.H.

    1977-01-01

    Performance characteristics of the electron energy-loss spectrometer incorporating a new high-resolution hemispherical monochromator are reported. The apparatus achieved an energy-resolution of 25 meV in the elastic scattering mode, and angular distributions of elastically scattered electrons were in excellent agreement with previous workers. Preliminary energy-loss spectra for several atmospheric gases demonstrate the excellent versatility and stable operation of the improved system. 12 references

  6. Efficient Computation of Coherent Synchrotron Radiation Taking into Account 6D Phase Space Distribution of Emitting Electrons

    International Nuclear Information System (INIS)

    Chubar, O.; Couprie, M.-E.

    2007-01-01

    A CPU-efficient method for calculating the frequency-domain electric field of Coherent Synchrotron Radiation (CSR), taking into account the 6D phase space distribution of electrons in a bunch, is proposed. As an application example, calculation results for the CSR emitted by an electron bunch with small longitudinal and large transverse sizes are presented. Such a situation can be realized in storage rings or ERLs by transverse deflection of the electron bunches in special crab-type RF cavities, i.e. using the technique proposed for the generation of femtosecond X-ray pulses (A. Zholents et al., 1999). The computation, performed for the parameters of the SOLEIL storage ring, shows that if the transverse size of the electron bunch is larger than the diffraction limit for single-electron SR at a given wavelength, this affects the angular distribution of the CSR at that wavelength and reduces the coherent flux. Nevertheless, for transverse bunch dimensions up to several millimeters and a longitudinal bunch size smaller than a hundred micrometers, the resulting CSR flux in the far-infrared spectral range is still many orders of magnitude higher than the flux of incoherent SR, and can therefore be considered for practical use
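
    The enhancement that makes CSR dominate incoherent SR is usually expressed through the bunch form factor; the schematic relation below is the textbook version, not the specific formulation used by the authors.

```latex
% Spectral flux from N electrons versus a single electron: the coherent part
% is weighted by the bunch form factor F, the Fourier transform of the
% normalized electron density rho(r) along the observation direction n.
\left(\frac{dW}{d\omega\, d\Omega}\right)_{N}
  \simeq \Bigl[\,N + N(N-1)\,\bigl|F(\omega,\hat{\mathbf{n}})\bigr|^{2}\Bigr]
  \left(\frac{dW}{d\omega\, d\Omega}\right)_{1},
\qquad
F(\omega,\hat{\mathbf{n}}) = \int \rho(\mathbf{r})\,
  e^{\,i\,\omega\,\hat{\mathbf{n}}\cdot\mathbf{r}/c}\, d^{3}r
```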

  7. Optical Thermal Characterization Enables High-Performance Electronics Applications

    Energy Technology Data Exchange (ETDEWEB)

    2016-02-01

    NREL developed a modeling and experimental strategy to characterize thermal performance of materials. The technique provides critical data on thermal properties with relevance for electronics packaging applications. Thermal contact resistance and bulk thermal conductivity were characterized for new high-performance materials such as thermoplastics, boron-nitride nanosheets, copper nanowires, and atomically bonded layers. The technique is an important tool for developing designs and materials that enable power electronics packaging with small footprint, high power density, and low cost for numerous applications.

  8. VLSI electronics microstructure science

    CERN Document Server

    1981-01-01

    VLSI Electronics: Microstructure Science, Volume 3 evaluates trends for the future of very large scale integration (VLSI) electronics and the scientific base that supports its development.This book discusses the impact of VLSI on computer architectures; VLSI design and design aid requirements; and design, fabrication, and performance of CCD imagers. The approaches, potential, and progress of ultra-high-speed GaAs VLSI; computer modeling of MOSFETs; and numerical physics of micron-length and submicron-length semiconductor devices are also elaborated. This text likewise covers the optical linewi

  9. Soft Electronics Enabled Ergonomic Human-Computer Interaction for Swallowing Training

    Science.gov (United States)

    Lee, Yongkuk; Nicholls, Benjamin; Sup Lee, Dong; Chen, Yanfei; Chun, Youngjae; Siang Ang, Chee; Yeo, Woon-Hong

    2017-04-01

    We introduce a skin-friendly electronic system that enables human-computer interaction (HCI) for swallowing training in dysphagia rehabilitation. For an ergonomic HCI, we utilize a soft, highly compliant (“skin-like”) electrode, which addresses critical issues of an existing rigid and planar electrode combined with a problematic conductive electrolyte and adhesive pad. The skin-like electrode offers a highly conformal, user-comfortable interaction with the skin for long-term wearable, high-fidelity recording of swallowing electromyograms on the chin. Mechanics modeling and experimental quantification captures the ultra-elastic mechanical characteristics of an open mesh microstructured sensor, conjugated with an elastomeric membrane. Systematic in vivo studies investigate the functionality of the soft electronics for HCI-enabled swallowing training, which includes the application of a biofeedback system to detect swallowing behavior. The collection of results demonstrates clinical feasibility of the ergonomic electronics in HCI-driven rehabilitation for patients with swallowing disorders.

  10. Electronic Banking And Bank Performance In Nigeria | Abaenewe ...

    African Journals Online (AJOL)

    This study investigated the profitability performance of Nigerian banks following the full adoption of electronic banking system. The study became necessary as a result of increased penetration of electronic banking which has redefined the banking operations in Nigeria and around the world. Judgmental sampling method ...

  11. High Performance Computing Modernization Program Kerberos Throughput Test Report

    Science.gov (United States)

    2017-10-26

    Naval Research Laboratory, Washington, DC 20375-5320, report NRL/MR/5524--17-9751: High Performance Computing Modernization Program Kerberos Throughput Test Report, by Daniel G. Gdula et al. Only standard report-documentation-page fields are available in place of an abstract.

  12. Computer experiments on the imaging of point defects with the conventional transmission electron microscope

    Energy Technology Data Exchange (ETDEWEB)

    Krakow, W [Xerox Corp., Rochester, N.Y. (USA)

    1978-02-01

    To aid in the interpretation of high resolution electron micrographs of defect structures in crystals, computer-simulated dark-field electron micrographs have been obtained for a variety of point defects in metals. Interpretation of these images in terms of atomic positions and atom correlations becomes straightforward, and it is a simple matter to distinguish between real structural information and image artifacts produced by the phase contrast mechanism in the electron optical imaging process.

  13. Molecular Computational Investigation of Electron Transfer Kinetics across Cytochrome-Iron Oxide Interfaces

    International Nuclear Information System (INIS)

    Kerisit, Sebastien N.; Rosso, Kevin M.; Dupuis, Michel; Valiev, Marat

    2007-01-01

    The interface between electron transfer proteins such as cytochromes and solid phase mineral oxides is central to the activity of dissimilatory-metal reducing bacteria. A combination of potential-based molecular dynamics simulations and ab initio electronic structure calculations are used in the framework of Marcus' electron transfer theory to compute elementary electron transfer rates from a well-defined cytochrome model, namely the small tetraheme cytochrome (STC) from Shewanella oneidensis, to surfaces of the iron oxide mineral hematite (a-Fe2O3). Room temperature molecular dynamics simulations show that an isolated STC molecule favors surface attachment via direct contact of hemes I and IV at the poles of the elongated axis, with electron transfer distances as small as 9 Angstroms. The cytochrome remains attached to the mineral surface in the presence of water and shows limited surface diffusion at the interface. Ab initio electronic coupling matrix element (VAB) calculations of configurations excised from the molecular dynamics simulations reveal VAB values ranging from 1 to 20 cm-1, consistent with nonadiabaticity. Using these results, together with experimental data on the redox potential of hematite and hemes in relevant cytochromes and calculations of the reorganization energy from cluster models, we estimate the rate of electron transfer across this model interface to range from 1 to 1000 s-1 for the most exothermic driving force considered in this work, and from 0.01 to 20 s-1 for the most endothermic. This fairly large range of electron transfer rates highlights the sensitivity of the rate upon the electronic coupling matrix element, which is in turn dependent on the fluctuations of the heme configuration at the interface. We characterize this dependence using an idealized bis-imidazole heme to compute from first principles the VAB variation due to porphyrin ring orientation, electron transfer distance, and mineral surface termination. The electronic
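
    The rate estimate combines the coupling, reorganization energy and driving force through the standard nonadiabatic Marcus expression; the sketch below evaluates it for coupling values spanning the 1-20 cm⁻¹ range quoted above, with placeholder reorganization energy and driving force.

```python
# Nonadiabatic Marcus-theory rate; lambda and dG below are illustrative
# placeholders, not the values computed in the paper.
import numpy as np
from scipy import constants as c

def marcus_rate(V_ab_cm1, lam_eV, dG_eV, T=300.0):
    V = V_ab_cm1 * 100.0 * c.h * c.c          # coupling, cm^-1 -> J
    lam = lam_eV * c.e                        # reorganization energy, J
    dG = dG_eV * c.e                          # driving force, J
    kBT = c.k * T
    prefac = 2.0 * np.pi / c.hbar * V**2 / np.sqrt(4.0 * np.pi * lam * kBT)
    return prefac * np.exp(-(dG + lam) ** 2 / (4.0 * lam * kBT))

if __name__ == "__main__":
    for V in (1.0, 20.0):                     # cm^-1, range from the abstract
        print(f"V_AB = {V:4.1f} cm^-1 ->",
              f"k_ET ~ {marcus_rate(V, lam_eV=1.0, dG_eV=-0.1):.2e} s^-1")
```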

  14. HIGH PERFORMANCE PHOTOGRAMMETRIC PROCESSING ON COMPUTER CLUSTERS

    Directory of Open Access Journals (Sweden)

    V. N. Adrov

    2012-07-01

    Full Text Available Most CPU-consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent bits as input and produce independent bits as output. The independence of bits comes from the nature of such algorithms, since images, stereopairs or small image blocks can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human intervention. Photogrammetric workstations can perform tie-point measurements, DTM calculations, orthophoto construction, mosaicing and many other service operations in parallel using distributed calculations. Distributed calculations save time, reducing computations that would take several days to several hours. Modern trends in computer technology show an increase in the number of CPU cores in workstations, increasing speeds of local networks and, as a result, dropping prices of supercomputers and computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in a DPW is usually targeted at interactive work with a limited number of CPU cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be the limited LAN throughput and storage performance, since huge amounts of large raster images must be processed.

  15. GPU-based high-performance computing for radiation therapy

    International Nuclear Information System (INIS)

    Jia, Xun; Jiang, Steve B; Ziegenhein, Peter

    2014-01-01

    Recent developments in radiotherapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid developments. A tremendous amount of study has been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we will first give a brief introduction to the GPU hardware structure and programming model. We will then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms will also be presented. (topical review)

  16. DURIP: High Performance Computing in Biomathematics Applications

    Science.gov (United States)

    2017-05-10

    The goal of this award was to enhance the capabilities of the Department of Applied Mathematics and Statistics (AMS) at the University of California, Santa Cruz (UCSC) to conduct research and research-related education in areas of high performance computing in biomathematics applications.

  17. AHPCRC - Army High Performance Computing Research Center

    Science.gov (United States)

    2010-01-01

    Of particular interest is the ability of a distributed jamming network (DJN) to jam signals in all or part of a sensor or communications network; other topics include reasoning and assistive technologies. The remainder of the record consists of personnel directory entries for the Army High Performance Computing Research Center (www.ahpcrc.org), including Friedrich (Fritz) Prinz (Finmeccanica Professor of Engineering, Robert Bosch Chair, Department of Engineering) and Barbara Bryan (AHPCRC Research and Outreach Manager, HPTi).

  18. Visualization and Data Analysis for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  19. High-performance green flexible electronics based on biodegradable cellulose nanofibril paper.

    Science.gov (United States)

    Jung, Yei Hwan; Chang, Tzu-Hsuan; Zhang, Huilong; Yao, Chunhua; Zheng, Qifeng; Yang, Vina W; Mi, Hongyi; Kim, Munho; Cho, Sang June; Park, Dong-Wook; Jiang, Hao; Lee, Juhwan; Qiu, Yijie; Zhou, Weidong; Cai, Zhiyong; Gong, Shaoqin; Ma, Zhenqiang

    2015-05-26

    Today's consumer electronics, such as cell phones, tablets and other portable electronic devices, are typically made of non-renewable, non-biodegradable, and sometimes potentially toxic (for example, gallium arsenide) materials. These consumer electronics are frequently upgraded or discarded, leading to serious environmental contamination. Thus, electronic systems consisting of renewable and biodegradable materials and minimal amount of potentially toxic materials are desirable. Here we report high-performance flexible microwave and digital electronics that consume the smallest amount of potentially toxic materials on biobased, biodegradable and flexible cellulose nanofibril papers. Furthermore, we demonstrate gallium arsenide microwave devices, the consumer wireless workhorse, in a transferrable thin-film form. Successful fabrication of key electrical components on the flexible cellulose nanofibril paper with comparable performance to their rigid counterparts and clear demonstration of fungal biodegradation of the cellulose-nanofibril-based electronics suggest that it is feasible to fabricate high-performance flexible electronics using ecofriendly materials.

  20. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    Science.gov (United States)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two part model framework characterizes both the demand using a probability distribution for each type of service request as well as enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
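
    A minimal discrete-event sketch of the kind of demand-versus-capacity question described above can be written with SimPy; the arrival rate, service rate and virtual-machine count below are placeholders, and this is not the authors' two-part model.

```python
# Requests contend for a pool of virtualized servers; rates are illustrative.
import random
import simpy

RNG = random.Random(42)
ARRIVAL_RATE, SERVICE_RATE, SERVERS = 8.0, 1.0, 10   # req/s, 1/s, VMs
response_times = []

def request(env, pool):
    t0 = env.now
    with pool.request() as req:
        yield req                                    # wait for a free VM
        yield env.timeout(RNG.expovariate(SERVICE_RATE))
    response_times.append(env.now - t0)

def generator(env, pool):
    while True:
        yield env.timeout(RNG.expovariate(ARRIVAL_RATE))
        env.process(request(env, pool))

env = simpy.Environment()
pool = simpy.Resource(env, capacity=SERVERS)
env.process(generator(env, pool))
env.run(until=500)
print(f"served {len(response_times)} requests, "
      f"mean response {sum(response_times) / len(response_times):.2f} s")
```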

  1. Improvement of tokamak performance by injection of electrons

    International Nuclear Information System (INIS)

    Ono, Masayuki.

    1992-12-01

    Concepts for improving tokamak performance by utilizing injection of hot electrons are discussed. The motivation of this paper is to introduce the research work being performed in this area and to refer interested readers to the literature for more detail. The electron-injection-based concepts presented here have been developed in the CDX, CCT, and CDX-U tokamak facilities. The following three promising application areas of electron injection are described: 1. non-inductive current drive, 2. plasma preionization for tokamak start-up assist, and 3. charging-up of tokamak flux surfaces for improved plasma confinement. The main motivation for the dc-helicity injection current drive is its efficiency, which, in theory, is independent of plasma density. This property makes it attractive for driving currents in high-density reactor plasmas

  2. Excess electrons in methanol clusters: Beyond the one-electron picture

    Science.gov (United States)

    Pohl, Gábor; Mones, Letif; Turi, László

    2016-10-01

    We performed a series of comparative quantum chemical calculations on negatively charged methanol clusters of various sizes, (CH3OH)n⁻. The clusters are examined in their optimized geometries (n = 2-4), and in geometries taken from mixed quantum-classical molecular dynamics simulations at finite temperature (n = 2-128). These latter structures model potential electron binding sites in methanol clusters and in bulk methanol. In particular, we compute the vertical detachment energy (VDE) of an excess electron from methanol cluster anions of increasing size using quantum chemical computations at various levels of theory including a one-electron pseudopotential model, several density functional theory (DFT) based methods, MP2 and coupled-cluster CCSD(T) calculations. The results suggest that at least four methanol molecules are needed to bind an excess electron on a hydrogen bonded methanol chain in a dipole bound state. Larger methanol clusters are able to form stronger interactions with an excess electron. The two simulated excess electron binding motifs in methanol clusters, interior and surface states, correlate well with distinct, experimentally found VDE tendencies with size. Interior states in a solvent cavity are stabilized significantly more strongly than electron states on cluster surfaces. Although we find that all the examined quantum chemistry methods more or less overestimate the strength of the experimental excess electron stabilization, MP2, LC-BLYP, and BHandHLYP methods with diffuse basis sets provide a significantly better estimate of the VDE than traditional DFT methods (BLYP, B3LYP, X3LYP, PBE0). A comparison to the better performing many electron methods indicates that the examined one-electron pseudopotential can be reasonably used in simulations for systems of larger size.

  3. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in.Describing a complicated system abstractly with mathematical equations requires a careful choice of assumpti
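
    A typical first example of such an analytical model is the M/M/1 queue; the closed-form results below are standard and are shown only to indicate the flavour of the equations the book develops.

```latex
% M/M/1 single-server queue: Poisson arrivals at rate \lambda, exponential
% service at rate \mu; stable only for \lambda < \mu.
\rho = \frac{\lambda}{\mu}, \qquad
\bar{N} = \frac{\rho}{1-\rho}, \qquad
\bar{R} = \frac{1}{\mu - \lambda}
```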

  4. Computer performance optimization systems, applications, processes

    CERN Document Server

    Osterhage, Wolfgang W

    2013-01-01

    Computing power performance was important at times when hardware was still expensive, because hardware had to be put to the best use. Later on this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, application level and business processes level. On each, optimizations can be achieved and cost-cutting p

  5. Stretchable, Twisted Conductive Microtubules for Wearable Computing, Robotics, Electronics, and Healthcare

    OpenAIRE

    Thanh Nho Do; Yon Visell

    2017-01-01

    Stretchable and flexible multifunctional electronic components, including sensors and actuators, have received increasing attention in robotics, electronics, wearable, and healthcare applications. Despite advances, it has remained challenging to design analogs of many electronic components to be highly stretchable, to be efficient to fabricate, and to provide control over electronic performance. Here, we describe highly elastic sensors and interconnects formed from thin, twisted conductive mi...

  6. Computer versus paper--does it make any difference in test performance?

    Science.gov (United States)

    Karay, Yassin; Schauber, Stefan K; Stosch, Christoph; Schüttpelz-Brauns, Katrin

    2015-01-01

    CONSTRUCT: In this study, we examine the differences in test performance between the paper-based and the computer-based version of the Berlin formative Progress Test. In this context it is the first study that allows controlling for students' prior performance. Computer-based tests make possible a more efficient examination procedure for test administration and review. Although university staff will benefit largely from computer-based tests, the question arises if computer-based tests influence students' test performance. A total of 266 German students from the 9th and 10th semester of medicine (comparable with the 4th-year North American medical school schedule) participated in the study (paper = 132, computer = 134). The allocation of the test format was conducted as a randomized matched-pair design in which students were first sorted according to their prior test results. The organizational procedure, the examination conditions, the room, and seating arrangements, as well as the order of questions and answers, were identical in both groups. The sociodemographic variables and pretest scores of both groups were comparable. The test results from the paper and computer versions did not differ. The groups remained within the allotted time, but students using the computer version (particularly the high performers) needed significantly less time to complete the test. In addition, we found significant differences in guessing behavior. Low performers using the computer version guess significantly more than low-performing students in the paper-pencil version. Participants in computer-based tests are not at a disadvantage in terms of their test results. The computer-based test required less processing time. The reason for the longer processing time when using the paper-pencil version might be due to the time needed to write the answer down, controlling for transferring the answer correctly. It is still not known why students using the computer version (particularly low-performing

  7. Carbon nanotube computer.

    Science.gov (United States)

    Shulaker, Max M; Hills, Gage; Patil, Nishant; Wei, Hai; Chen, Hong-Yu; Wong, H-S Philip; Mitra, Subhasish

    2013-09-26

    The miniaturization of electronic devices has been the principal driving force behind the semiconductor industry, and has brought about major improvements in computational power and energy efficiency. Although advances with silicon-based electronics continue to be made, alternative technologies are being explored. Digital circuits based on transistors fabricated from carbon nanotubes (CNTs) have the potential to outperform silicon by improving the energy-delay product, a metric of energy efficiency, by more than an order of magnitude. Hence, CNTs are an exciting complement to existing semiconductor technologies. Owing to substantial fundamental imperfections inherent in CNTs, however, only very basic circuit blocks have been demonstrated. Here we show how these imperfections can be overcome, and demonstrate the first computer built entirely using CNT-based transistors. The CNT computer runs an operating system that is capable of multitasking: as a demonstration, we perform counting and integer-sorting simultaneously. In addition, we implement 20 different instructions from the commercial MIPS instruction set to demonstrate the generality of our CNT computer. This experimental demonstration is the most complex carbon-based electronic system yet realized. It is a considerable advance because CNTs are prominent among a variety of emerging technologies that are being considered for the next generation of highly energy-efficient electronic systems.

  8. A novel high performance, ultra thin heat sink for electronics

    International Nuclear Information System (INIS)

    Escher, W.; Michel, B.; Poulikakos, D.

    2010-01-01

    We present an ultra thin heat sink for electronics, combining optimized impinging slot-jets, micro-channels and manifolds for efficient cooling. We first introduce a three-dimensional numerical model of the heat transfer structure, to investigate its hydrodynamic and thermal performance and its sensitivity to geometric parameters. In a second step we propose a three-dimensional hydrodynamic numerical model representing the complete system. Based on this model we design a novel manifold providing uniform fluid distribution. In order to save computational time a simpler semi-empirical model is proposed and validated. The semi-empirical model allows a robust optimization of the heat sink geometric parameters. The design is optimized for a 2 x 2 cm² chip and provides a total thermal resistance of 0.087 cm² K/W for flow rates … for a temperature difference between fluid inlet and chip of 65 K.

  9. Reconstruction and identification of electrons in the Atlas experiment. Setup of a Tier 2 of the computing grid

    International Nuclear Information System (INIS)

    Derue, F.

    2008-03-01

    The origin of the mass of elementary particles is linked to the electroweak symmetry breaking mechanism. Its study will be one of the main efforts of the Atlas experiment at the Large Hadron Collider of CERN, starting in 2008. In most cases, studies will be limited by our knowledge of the detector performance, such as the precision of the energy reconstruction or the efficiency of particle identification. This manuscript presents work dedicated to the reconstruction of electrons in the Atlas experiment, using simulated data and data taken during the combined test beam of 2004. The analysis of Atlas data requires a huge amount of computing and storage resources, which led to the development of a worldwide computing grid. (author)

  10. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both private and public sectors will require robust technologies to protect its computing infrastructure. The research outcomes from this project try to address all these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also perform testing using HA2lloc in a simulation environment and find that the approach is capable of preventing common memory vulnerabilities.

  11. A Comparative Study of Electronic Performance Support Systems

    Science.gov (United States)

    Nguyen, Frank; Klein, James D.; Sullivan, Howard

    2005-01-01

    Electronic performance support systems (EPSS) deliver relevant support information to users while they are performing tasks. The present study examined the effect of different types of EPSS on user performance, attitudes, system use and time on task. Employees at a manufacturing company were asked to complete a procedural software task and…

  12. Computational hydrodynamics and optical performance of inductively-coupled plasma adaptive lenses

    Energy Technology Data Exchange (ETDEWEB)

    Mortazavi, M.; Urzay, J., E-mail: jurzay@stanford.edu; Mani, A. [Center for Turbulence Research, Stanford University, Stanford, California 94305-3024 (United States)

    2015-06-15

    This study addresses the optical performance of a plasma adaptive lens for aero-optical applications by using both axisymmetric and three-dimensional numerical simulations. Plasma adaptive lenses are based on the effects of free electrons on the phase velocity of incident light, which, in theory, can be used as a phase-conjugation mechanism. A closed cylindrical chamber filled with Argon plasma is used as a model lens into which a beam of light is launched. The plasma is sustained by applying a radio-frequency electric current through a coil that envelops the chamber. Four different operating conditions, ranging from low to high powers and induction frequencies, are employed in the simulations. The numerical simulations reveal complex hydrodynamic phenomena related to buoyant and electromagnetic laminar transport, which generate, respectively, large recirculating cells and wall-normal compression stresses in the form of local stagnation-point flows. In the axisymmetric simulations, the plasma motion is coupled with near-wall axial striations in the electron-density field, some of which propagate in the form of low-frequency traveling disturbances adjacent to vortical quadrupoles that are reminiscent of Taylor-Görtler flow structures in centrifugally unstable flows. Although the refractive-index fields obtained from axisymmetric simulations lead to smooth beam wavefronts, they are found to be unstable to azimuthal disturbances in three of the four three-dimensional cases considered. The azimuthal striations are optically detrimental, since they produce high-order angular aberrations that account for most of the beam wavefront error. A fourth case is computed at high input power and high induction frequency, which displays the best optical properties among all the three-dimensional simulations considered. In particular, the increase in induction frequency prevents local thermalization and leads to an axisymmetric distribution of electrons even after introduction of
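
    The phase-conjugation mechanism referred to above rests on the dependence of the plasma refractive index on electron density; the standard relations are:

```latex
% Refractive index of a plasma at optical frequency omega, and the resulting
% phase shift along a ray, governed by the line-integrated electron density.
n(\omega) = \sqrt{1 - \frac{\omega_p^{2}}{\omega^{2}}},
\qquad
\omega_p^{2} = \frac{n_e\, e^{2}}{\varepsilon_0\, m_e},
\qquad
\Delta\phi \approx -\frac{e^{2}}{2\,\varepsilon_0\, m_e\, c\,\omega}
  \int n_e\, dl \quad (\omega \gg \omega_p)
```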

  13. Transformational silicon electronics

    KAUST Repository

    Rojas, Jhonathan Prieto; Sevilla, Galo T.; Ghoneim, Mohamed T.; Inayat, Salman Bin; Ahmed, Sally; Hussain, Aftab M.; Hussain, Muhammad Mustafa

    2014-01-01

    In today's traditional electronics such as in computers or in mobile phones, billions of high-performance, ultra-low-power devices are neatly integrated in extremely compact areas on rigid and brittle but low-cost bulk monocrystalline silicon (100

  14. 77 FR 34063 - Certain Electronic Devices, Including Mobile Phones and Tablet Computers, and Components Thereof...

    Science.gov (United States)

    2012-06-08

    ... Phones and Tablet Computers, and Components Thereof Institution of Investigation AGENCY: U.S... the United States after importation of certain electronic devices, including mobile phones and tablet... mobile phones and tablet computers, and components thereof that infringe one or more of claims 1-3 and 5...

  15. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  16. Disposable Electronic Cigarettes and Electronic Hookahs: Evaluation of Performance

    Science.gov (United States)

    Williams, Monique; Ghai, Sanjay

    2015-01-01

    Introduction: The purpose of this study was to characterize the performance of disposable button-activated and disposable airflow-activated electronic cigarettes (EC) and electronic hookahs (EH). Methods: The airflow rate required to produce aerosol, pressure drop, and the aerosol absorbance at 420 nm were measured during smoke-outs of 9 disposable products. Three units of each product were tested in these experiments. Results: The airflow rates required to produce aerosol and the aerosol absorbances were lower for button-activated models (3 mL/s; 0.41–0.55 absorbance) than for airflow-activated models (7–17 mL/s; 0.48–0.84 absorbance). Pressure drop was also lower across button-activated products (range = 6–12 mm H2O) than airflow-activated products (range = 15–67 mm H2O). For 25 of 27 units tested, airflow did not have to be increased during smoke-out to maintain aerosol production, unlike earlier generation models. Two brands had uniform performance characteristics for all parameters, while 3 had at least 1 product that did not function normally. While button-activated models lasted 200 puffs or less and EH airflow-activated models often lasted 400 puffs, none of the models produced as many puffs as advertised. Puff number was limited by battery life, which was shorter in button-activated models. Conclusion: The performance of disposable products was differentiated mainly by the way the aerosol was produced (button vs airflow-activated) rather than by product type (EC vs EH). Users needed to take harder drags on airflow-activated models. Performance varied within models, and battery life limited the number of puffs. Data suggest quality control in manufacturing varies among brands. PMID:25104117

  17. Separation of electron ion ring components (computational simulation and experimental results)

    International Nuclear Information System (INIS)

    Aleksandrov, V.S.; Dolbilov, G.V.; Kazarinov, N.Yu.; Mironov, V.I.; Novikov, V.G.; Perel'shtejn, Eh.A.; Sarantsev, V.P.; Shevtsov, V.F.

    1978-01-01

    The attainable polarization of electron-ion rings during acceleration, and the separation of the ring components at the final stage of acceleration, are studied. The results of computational simulation using the macroparticle method and of experiments on ring acceleration and separation are given. A comparison of the calculated results with experiment is presented

  18. Program package for the computation of lenses and deflectors

    International Nuclear Information System (INIS)

    Lencova, B.; Wisselink, G.

    1990-01-01

    In this paper a set of computer programs for the design of electrostatic and magnetic electron lenses and for the design of multipoles for electron microscopy and lithography is described. The two-dimensional field computation is performed by the finite-element method. In order to meet the high demands on accuracy, the programs include the use of a variable step in the fine mesh made with an automeshing procedure, improved methods for coefficient evaluation, a fast solution procedure for the linear equations, and modified algorithms for computation of multipoles and electrostatic lenses. They allow for a fast and accurate computation of electron optical elements. For the input and modification of data, and for presentation of results, graphical menu driven programs written for personal computers are used. For the computation of electron optical properties axial fields are used. (orig.)
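
    To illustrate the kind of two-dimensional field computation such programs perform, the sketch below relaxes the axisymmetric Laplace equation for an electrostatic lens potential on a regular grid. It is only a toy finite-difference illustration (the programs described above use the finite-element method with automeshing); the grid spacings, electrode mask, and sweep count are assumptions.

```python
import numpy as np

def relax_lens_potential(v, fixed, dr, dz, sweeps=300):
    """Point-relaxation sketch for Laplace's equation in (r, z) for an
    axisymmetric electrostatic lens: V_zz + V_rr + V_r / r = 0.
    v[i, j] is the potential at radius i*dr and axial position j*dz;
    `fixed` marks electrode nodes whose potential is prescribed."""
    nr, nz = v.shape
    for _ in range(sweeps):
        for i in range(1, nr - 1):
            r = i * dr
            for j in range(1, nz - 1):
                if fixed[i, j]:
                    continue
                v[i, j] = (
                    (v[i + 1, j] + v[i - 1, j]) / dr**2
                    + (v[i, j + 1] + v[i, j - 1]) / dz**2
                    + (v[i + 1, j] - v[i - 1, j]) / (2 * r * dr)
                ) / (2 / dr**2 + 2 / dz**2)
        v[0, :] = v[1, :]  # symmetry condition on the optical axis (dV/dr = 0)
    return v

# Illustrative setup: grounded end planes and a 1 kV ring electrode on the outer wall.
v = np.zeros((40, 80))
fixed = np.zeros_like(v, dtype=bool)
fixed[:, 0] = fixed[:, -1] = True
fixed[-1, 30:50] = True
v[-1, 30:50] = 1000.0
v = relax_lens_potential(v, fixed, dr=0.1, dz=0.1)
```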

  19. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  20. High Performance Computing Software Applications for Space Situational Awareness

    Science.gov (United States)

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative de-convolution (PCID) image enhancement software tool. Specifically, we have demonstrated an order-of-magnitude speed-up in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.

  1. A methodology for performing computer security reviews

    International Nuclear Information System (INIS)

    Hunteman, W.J.

    1991-01-01

    DOE Order 5637.1, ''Classified Computer Security,'' requires regular reviews of the computer security activities for an ADP system and for a site. Based on experiences gained in the Los Alamos computer security program through interactions with DOE facilities, we have developed a methodology to aid a site or security officer in performing a comprehensive computer security review. The methodology is designed to aid a reviewer in defining goals of the review (e.g., preparation for inspection), determining security requirements based on DOE policies, determining threats/vulnerabilities based on DOE and local threat guidance, and identifying critical system components to be reviewed. Application of the methodology will result in review procedures and checklists oriented to the review goals, the target system, and DOE policy requirements. The review methodology can be used to prepare for an audit or inspection and as a periodic self-check tool to determine the status of the computer security program for a site or specific ADP system. 1 tab

  2. A methodology for performing computer security reviews

    International Nuclear Information System (INIS)

    Hunteman, W.J.

    1991-01-01

    This paper reports on DOE Order 5637.1, ''Classified Computer Security,'' which requires regular reviews of the computer security activities for an ADP system and for a site. Based on experiences gained in the Los Alamos computer security program through interactions with DOE facilities, the authors have developed a methodology to aid a site or security officer in performing a comprehensive computer security review. The methodology is designed to aid a reviewer in defining goals of the review (e.g., preparation for inspection), determining security requirements based on DOE policies, determining threats/vulnerabilities based on DOE and local threat guidance, and identifying critical system components to be reviewed. Application of the methodology will result in review procedures and checklists oriented to the review goals, the target system, and DOE policy requirements. The review methodology can be used to prepare for an audit or inspection and as a periodic self-check tool to determine the status of the computer security program for a site or specific ADP system

  3. Monitoring SLAC High Performance UNIX Computing Systems

    International Nuclear Information System (INIS)

    Lettsome, Annette K.

    2005-01-01

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface

  4. Parallel, distributed and GPU computing technologies in single-particle electron microscopy.

    Science.gov (United States)

    Schmeisser, Martin; Heisen, Burkhard C; Luettich, Mario; Busche, Boris; Hauer, Florian; Koske, Tobias; Knauber, Karl-Heinz; Stark, Holger

    2009-07-01

    Most known methods for the determination of the structure of macromolecular complexes are limited or at least restricted at some point by their computational demands. Recent developments in information technology such as multicore, parallel and GPU processing can be used to overcome these limitations. In particular, graphics processing units (GPUs), which were originally developed for rendering real-time effects in computer games, are now ubiquitous and provide unprecedented computational power for scientific applications. Each parallel-processing paradigm alone can improve overall performance; the increased computational performance obtained by combining all paradigms, unleashing the full power of today's technology, makes certain applications feasible that were previously virtually impossible. In this article, state-of-the-art paradigms are introduced, the tools and infrastructure needed to apply these paradigms are presented and a state-of-the-art infrastructure and solution strategy for moving scientific applications to the next generation of computer hardware is outlined.
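
    As a minimal illustration of the parallel-processing idea (not the authors' GPU code), the sketch below distributes a per-particle image operation over CPU cores with Python's multiprocessing; the filter, image size, and stack size are arbitrary placeholders.

```python
import numpy as np
from multiprocessing import Pool

def bandpass_filter(image):
    """Toy per-particle operation: crude low-pass filtering via FFT masking."""
    f = np.fft.fftshift(np.fft.fft2(image))
    ny, nx = image.shape
    yy, xx = np.mgrid[-ny // 2:ny // 2, -nx // 2:nx // 2]
    f[np.hypot(yy, xx) > min(ny, nx) // 4] = 0.0  # zero out high spatial frequencies
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

if __name__ == "__main__":
    # A stack of hypothetical particle images (random data stands in for micrograph windows).
    stack = [np.random.rand(128, 128) for _ in range(64)]
    with Pool() as pool:  # one worker process per available core
        filtered = pool.map(bandpass_filter, stack)
    print(len(filtered), filtered[0].shape)
```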

  5. Theoretical investigations of molecular wires: Electronic spectra and electron transport

    Science.gov (United States)

    Palma, Julio Leopoldo

    The results of theoretical and computational research are presented for two promising molecular wires, the Nanostar dendrimer, and a series of substituted azobenzene derivatives connected to aluminum electrodes. The electronic absorption spectra of the Nanostar (a phenylene-ethynylene dendrimer attached to an ethynylperylene chromophore) were calculated using a sequential Molecular Dynamics/Quantum Mechanics (MD/QM) method to perform an analysis of the temperature dependence of the electronic absorption process. We modeled the Nanostar as a series of connected units, and performed MD simulations for each chromophore at 10 K and 300 K to study how the temperature affected the structures and, consequently, the spectra. The absorption spectra of the Nanostar were computed using an ensemble of 8000 structures for each chromophore. Quantum Mechanical (QM) ZINDO/S calculations were performed for each conformation in the ensemble, including 16 excited states, for a total of 128,000 excitation energies. The spectral intensity was then scaled linearly with the number of conjugated units. Our calculations for both the individual chromophores and the Nanostar, are in good agreement with experiments. We explain in detail the effects of temperature and the consequences for the absorption process. The second part of this thesis presents a study of the effects of chemical substituents on the electron transport properties of the azobenzene molecule, which has been proposed recently as a component of a light-driven molecular switch. This molecule has two stable conformations (cis and trans) in its electronic ground state, with considerable differences in their conductance. The electron transport properties were calculated using first-principles methods combining non-equilibrium Green's function (NEGF) techniques with density functional theory (DFT). For the azobenzene studies, we included electron-donating groups and electron-withdrawing groups in meta- and ortho-positions with

  6. Micromagnetics on high-performance workstation and mobile computational platforms

    Science.gov (United States)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms is presented, including multi-core Intel central processing unit, Nvidia desktop graphics processing units, and Nvidia Jetson TK1 Platform. FastMag finite element method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of the embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built as low-power computing clusters.

  7. Computational Fluid Dynamics and Building Energy Performance Simulation

    DEFF Research Database (Denmark)

    Nielsen, Peter V.; Tryggvason, Tryggvi

    An interconnection between a building energy performance simulation program and a Computational Fluid Dynamics program (CFD) for room air distribution will be introduced for improvement of the predictions of both the energy consumption and the indoor environment. The building energy performance...

  8. Parallel, distributed and GPU computing technologies in single-particle electron microscopy

    International Nuclear Information System (INIS)

    Schmeisser, Martin; Heisen, Burkhard C.; Luettich, Mario; Busche, Boris; Hauer, Florian; Koske, Tobias; Knauber, Karl-Heinz; Stark, Holger

    2009-01-01

    An introduction to the current paradigm shift towards concurrency in software. Most known methods for the determination of the structure of macromolecular complexes are limited or at least restricted at some point by their computational demands. Recent developments in information technology such as multicore, parallel and GPU processing can be used to overcome these limitations. In particular, graphics processing units (GPUs), which were originally developed for rendering real-time effects in computer games, are now ubiquitous and provide unprecedented computational power for scientific applications. Each parallel-processing paradigm alone can improve overall performance; the increased computational performance obtained by combining all paradigms, unleashing the full power of today’s technology, makes certain applications feasible that were previously virtually impossible. In this article, state-of-the-art paradigms are introduced, the tools and infrastructure needed to apply these paradigms are presented and a state-of-the-art infrastructure and solution strategy for moving scientific applications to the next generation of computer hardware is outlined

  9. Electron beam treatment planning: A review of dose computation methods

    International Nuclear Information System (INIS)

    Mohan, R.; Riley, R.; Laughlin, J.S.

    1983-01-01

    Various methods of dose computations are reviewed. The equivalent path length methods used to account for body curvature and internal structure are not adequate because they ignore the lateral diffusion of electrons. The Monte Carlo method for the broad field three-dimensional situation in treatment planning is impractical because of the enormous computer time required. The pencil beam technique may represent a suitable compromise. The behavior of a pencil beam may be described by the multiple scattering theory or, alternatively, generated using the Monte Carlo method. Although nearly two orders of magnitude slower than the equivalent path length technique, the pencil beam method improves accuracy sufficiently to justify its use. It applies very well when accounting for the effect of surface irregularities; the formulation for handling inhomogeneous internal structure is yet to be developed
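
    A rough sketch of the pencil-beam idea, under assumed (non-clinical) parameters: each pencil beam is modeled as a Gaussian whose lateral spread grows with depth, and a broad field is obtained by superposing laterally offset pencil beams. The depth-dose and spread functions below are purely illustrative.

```python
import numpy as np

def pencil_beam_dose(x, z, sigma_of_z, depth_dose_of_z):
    """Dose from a single central-axis pencil beam: Gaussian lateral spread
    whose width grows with depth, scaled by a depth-dose factor."""
    sigma = sigma_of_z(z)
    return depth_dose_of_z(z) * np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

def broad_field_dose(x, z, field_half_width, n_pencils=201, **kw):
    """Broad field = superposition of laterally offset pencil beams."""
    offsets = np.linspace(-field_half_width, field_half_width, n_pencils)
    return sum(pencil_beam_dose(x - x0, z, **kw) for x0 in offsets) * (offsets[1] - offsets[0])

sigma_of_z = lambda z: 0.1 + 0.05 * z                        # cm, illustrative spread growth
depth_dose_of_z = lambda z: np.exp(-((z - 2.0) / 1.5) ** 2)  # illustrative depth-dose shape
profile = broad_field_dose(np.linspace(-5.0, 5.0, 101), z=3.0, field_half_width=3.0,
                           sigma_of_z=sigma_of_z, depth_dose_of_z=depth_dose_of_z)
print(profile.max())
```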

  10. Neuroanatomical correlates of brain-computer interface performance.

    Science.gov (United States)

    Kasahara, Kazumi; DaSalla, Charles Sayo; Honda, Manabu; Hanakawa, Takashi

    2015-04-15

    Brain-computer interfaces (BCIs) offer a potential means to replace or restore lost motor function. However, BCI performance varies considerably between users, the reasons for which are poorly understood. Here we investigated the relationship between sensorimotor rhythm (SMR)-based BCI performance and brain structure. Participants were instructed to control a computer cursor using right- and left-hand motor imagery, which primarily modulated their left- and right-hemispheric SMR powers, respectively. Although most participants were able to control the BCI with success rates significantly above chance level even at the first encounter, they also showed substantial inter-individual variability in BCI success rate. Participants also underwent T1-weighted three-dimensional structural magnetic resonance imaging (MRI). The MRI data were subjected to voxel-based morphometry using BCI success rate as an independent variable. We found that BCI performance correlated with gray matter volume of the supplementary motor area, supplementary somatosensory area, and dorsal premotor cortex. We suggest that SMR-based BCI performance is associated with development of non-primary somatosensory and motor areas. Advancing our understanding of BCI performance in relation to its neuroanatomical correlates may lead to better customization of BCIs based on individual brain structure. Copyright © 2015 Elsevier Inc. All rights reserved.

  11. A Novel Method for the Discrimination of Semen Arecae and Its Processed Products by Using Computer Vision, Electronic Nose, and Electronic Tongue

    Directory of Open Access Journals (Sweden)

    Min Xu

    2015-01-01

    Full Text Available Areca nut, commonly known locally as Semen Arecae (SA) in China, has been used as an important Chinese herbal medicine for thousands of years. The raw SA (RAW) is commonly processed by stir-baking to yellow (SBY), stir-baking to dark brown (SBD), and stir-baking to carbon dark (SBC) for different clinical uses. In our present investigation, intelligent sensory technologies consisting of computer vision (CV), electronic nose (E-nose), and electronic tongue (E-tongue) were employed in order to develop a novel and accurate method for discrimination of SA and its processed products. Firstly, the color parameters and electronic sensory responses of E-nose and E-tongue of the samples were determined, respectively. Then, indicative components including 5-hydroxymethyl furfural (5-HMF) and arecoline (ARE) were determined by HPLC. Finally, principal component analysis (PCA) and discriminant factor analysis (DFA) were performed. The results demonstrated that these three instruments can effectively discriminate SA and its processed products. 5-HMF and ARE can reflect the stir-baking degree of SA. Interestingly, the two components showed close correlations to the color parameters and sensory responses of E-nose and E-tongue. In conclusion, this novel method based on CV, E-nose, and E-tongue can be successfully used to discriminate SA and its processed products.

  12. Leveraging multi-layer imager detector design to improve low-dose performance for megavoltage cone-beam computed tomography

    Science.gov (United States)

    Hu, Yue-Houng; Rottmann, Joerg; Fueglistaller, Rony; Myronakis, Marios; Wang, Adam; Huber, Pascal; Shedlock, Daniel; Morf, Daniel; Baturin, Paul; Star-Lack, Josh; Berbeco, Ross

    2018-02-01

    While megavoltage cone-beam computed tomography (CBCT) using an electronic portal imaging device (EPID) provides many advantages over kilovoltage (kV) CBCT, clinical adoption is limited by its high doses. Multi-layer imager (MLI) EPIDs increase DQE(0) while maintaining high resolution. However, even well-designed, high-performance MLIs suffer from increased electronic noise from each readout, degrading low-dose image quality. To improve low-dose performance, shift-and-bin addition (ShiBA) imaging is proposed, leveraging the unique architecture of the MLI. ShiBA combines hardware readout-binning and super-resolution concepts, reducing electronic noise while maintaining native image sampling. The imaging performance of full-resolution (FR); standard, aligned binned (BIN); and ShiBA images in terms of noise power spectrum (NPS), electronic NPS, modulation transfer function (MTF), and the ideal observer signal-to-noise ratio (SNR)—the detectability index (d′)—are compared. The FR 4-layer readout of the prototype MLI exhibits an electronic NPS magnitude 6 times higher than a state-of-the-art single layer (SLI) EPID. Although the MLI is built on the same readout platform as the SLI, with each layer exhibiting equivalent electronic noise, the multi-stage readout of the MLI results in electronic noise 50% higher than simple summation. Electronic noise is mitigated in both BIN and ShiBA imaging, reducing its total by ~12 times. ShiBA further reduces the NPS, effectively upsampling the image, resulting in a multiplication by a sinc² function. Normalized NPS show that neither ShiBA nor BIN otherwise affects image noise. The LSF shows that ShiBA removes the pixelation artifact of BIN images and mitigates the effect of detector shift, but does not quantifiably improve the MTF. ShiBA provides a pre-sampled representation of the images, mitigating phase dependence. Hardware binning strategies lower the quantum noise floor, with 2 × 2 implementation reducing the
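
    The electronic-noise advantage of binning before readout can be pictured with a toy model (this is not the ShiBA algorithm itself; the layer count, signal level, and read-noise value are assumptions): when 2 × 2 charge binning happens ahead of the read amplifier, each output sample incurs read noise once instead of four times.

```python
import numpy as np

rng = np.random.default_rng(0)
signal = rng.poisson(5.0, size=(4, 512, 512)).astype(float)  # 4 layers, quantum-limited input
read_noise = 2.0                                             # e- rms per readout sample (assumed)

# Full-resolution readout: every pixel of every layer adds its own read noise.
full = signal + rng.normal(0, read_noise, signal.shape)
full_sum = full.sum(axis=0)

# Binned readout: 2x2 charge binning occurs before the read amplifier, so each
# binned sample incurs read noise only once.
binned_charge = signal.reshape(4, 256, 2, 256, 2).sum(axis=(2, 4))
binned = binned_charge + rng.normal(0, read_noise, binned_charge.shape)
binned_sum = binned.sum(axis=0)

print("electronic-noise variance per output sample:",
      4 * 4 * read_noise**2, "(full readout, 2x2 block of 4 layers) vs",
      4 * read_noise**2, "(binned readout, 4 layers)")
```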

  13. Development of Computer-Based Training to Supplement Lessons in Fundamentals of Electronics

    Directory of Open Access Journals (Sweden)

    Ian P. Benitez

    2016-05-01

    Full Text Available Teaching Fundamentals of Electronics allows students to familiarize themselves with basic electronics concepts, acquire skills in the use of the multi-meter test instrument, and develop mastery in testing basic electronic components. Actual teaching and observations during practical activities on component pin identification and testing showed that the lack of skills of new students in testing components can lead to incorrect fault diagnosis and wrong pin connection during in-circuit replacement of defective parts. With the aim of reinforcing students with a concrete understanding of the concepts of components applied in actual test and measurement, a Computer-Based Training was developed. The proponent developed the learning modules (courseware) utilizing concept mapping and storyboarding instructional design. Developing courseware that is as simulated, activity-based, and interactive as possible was the primary goal, so as to resemble the real-world process. A local area network (LAN)-based learning management system was also developed for use in administering the learning modules. A paired-sample t-test based on the pretest and post-test results was used to determine whether the students achieved learning after taking the courseware. The result revealed significant achievement by the students after studying the learning module. The e-learning content was validated by the instructors in terms of contents, activities, assessment, and format, with a grand weighted mean of 4.35, interpreted as Sufficient. Based on the evaluation result, supplementing with the proposed computer-based training can enhance the teaching-learning process in electronics fundamentals.

  14. Performance of a GaAs electron source

    International Nuclear Information System (INIS)

    Calabrese, R.; Ciullo, G.; Della Mea, G.; Egeni, G.P.; Guidi, V.; Lamanna, G.; Lenisa, P.; Maciga, B.; Rigato, V.; Rudello, V.; Tecchio, L.; Yang, B.; Zandolin, S.

    1994-01-01

    We discuss the performance improvement of a GaAs electron source. High quantum yield (14%) and constant current extraction (1 mA for more than four weeks) are achieved after a small initial decay. These parameters meet the requirements for application of the GaAs photocathode as a source for electron cooling devices. We also present the preliminary results of a surface analysis experiment, carried out by means of the RBS technique to check the hypothesis of cesium evaporation from the surface when the photocathode is in operation. (orig.)

  15. The contribution of high-performance computing and modelling for industrial development

    CSIR Research Space (South Africa)

    Sithole, Happy

    2017-10-01

    Full Text Available High-Performance Computing and Modelling for Industrial Development, Dr Happy Sithole and Dr Onno Ubbink. Strategic context: high-performance computing (HPC) combined with machine learning and artificial intelligence presents opportunities to non...

  16. Device controllers using an industrial personal computer of the PF 2.5-GeV Electron Linac at KEK

    International Nuclear Information System (INIS)

    Otake, Yuji; Yokota, Mitsuhiro; Kakihara, Kazuhisa; Ogawa, Yujiro; Ohsawa, Satoshi; Shidara, Tetsuo; Nakahara, Kazuo

    1992-01-01

    Device controllers for electron guns and slits using an industrial personal computer have been designed and installed in the Photon Factory 2.5-GeV Electron Linac at KEK. The design concept of the controllers is to realize a reliable system and good productivity of hardware and software by using an industrial personal computer and a programmable sequence controller. The device controllers have been working reliably for several years. (author)

  17. Different Effect of the Additional Electron-Withdrawing Cyano Group in Different Conjugation Bridge: The Adjusted Molecular Energy Levels and Largely Improved Photovoltaic Performance.

    Science.gov (United States)

    Li, Huiyang; Fang, Manman; Hou, Yingqin; Tang, Runli; Yang, Yizhou; Zhong, Cheng; Li, Qianqian; Li, Zhen

    2016-05-18

    Four organic sensitizers (LI-68-LI-71) bearing various conjugated bridges were designed and synthesized, in which the only difference between LI-68 and LI-69 (or LI-70 and LI-71) was the absence/presence of the CN group as the auxiliary electron acceptor. Interestingly, compared to the reference dye LI-68, LI-69 bearing the additional CN group exhibited worse performance, with decreased Jsc and Voc values. However, once one thiophene moiety near the anchor group was replaced by electron-rich pyrrole, the resultant LI-71 exhibited a photoelectric conversion efficiency increase of about threefold, from 2.75% (LI-69) to 7.95% (LI-71), displaying the synergistic effect of the two moieties (CN and pyrrole). Computational analysis disclosed that pyrrole as the auxiliary electron donor (D') in the conjugated bridge can compensate for the lower negative charge in the electron acceptor caused by the CN group acting as an electron trap, leading to more efficient electron injection and better photovoltaic performance.

  18. Visual Analysis of Cloud Computing Performance Using Behavioral Lines.

    Science.gov (United States)

    Muelder, Chris; Zhu, Biao; Chen, Wei; Zhang, Hongxin; Ma, Kwan-Liu

    2016-02-29

    Cloud computing is an essential technology to Big Data analytics and services. A cloud computing system is often comprised of a large number of parallel computing and storage devices. Monitoring the usage and performance of such a system is important for efficient operations, maintenance, and security. Tracing every application on a large cloud system is untenable due to scale and privacy issues. But profile data can be collected relatively efficiently by regularly sampling the state of the system, including properties such as CPU load, memory usage, network usage, and others, creating a set of multivariate time series for each system. Adequate tools for studying such large-scale, multidimensional data are lacking. In this paper, we present a visual based analysis approach to understanding and analyzing the performance and behavior of cloud computing systems. Our design is based on similarity measures and a layout method to portray the behavior of each compute node over time. When visualizing a large number of behavioral lines together, distinct patterns often appear suggesting particular types of performance bottleneck. The resulting system provides multiple linked views, which allow the user to interactively explore the data by examining the data or a selected subset at different levels of detail. Our case studies, which use datasets collected from two different cloud systems, show that this visual based approach is effective in identifying trends and anomalies of the systems.
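
    A generic sketch of the similarity-measure step (the paper's exact measure and layout method are not reproduced here): z-score each sampled metric across the dataset and compare nodes by the distance between their multivariate time series. The array shapes and the distance-to-similarity mapping below are assumptions.

```python
import numpy as np

def node_similarity(profiles):
    """Pairwise similarity between compute nodes. `profiles` has shape
    (nodes, time, metrics), e.g. CPU load, memory, and network usage sampled
    over time. Each metric is z-scored over the whole dataset, then nodes are
    compared by normalized Euclidean distance mapped to a similarity in (0, 1]."""
    p = np.asarray(profiles, dtype=float)
    mu = p.mean(axis=(0, 1), keepdims=True)
    sd = p.std(axis=(0, 1), keepdims=True) + 1e-12
    z = (p - mu) / sd
    n = z.shape[0]
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            dist[i, j] = np.linalg.norm(z[i] - z[j]) / np.sqrt(z[i].size)
    return 1.0 / (1.0 + dist)  # higher value = more similar behavior

sim = node_similarity(np.random.rand(8, 100, 4))  # 8 nodes, 100 samples, 4 metrics
print(sim.shape)
```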

  19. Comparing the performance of SIMD computers by running large air pollution models

    DEFF Research Database (Denmark)

    Brown, J.; Hansen, Per Christian; Wasniewski, J.

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on these computers. Using a realistic large-scale model, we gained detailed insight about the performance of the computers involved when used to solve large-scale scientific...... problems that involve several types of numerical computations. The computers used in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216...

  20. Performance evaluation of scientific programs on advanced architecture computers

    International Nuclear Information System (INIS)

    Walker, D.W.; Messina, P.; Baille, C.F.

    1988-01-01

    Recently a number of advanced architecture machines have become commercially available. These new machines promise better cost-performance than traditional computers, and some of them have the potential of competing with current supercomputers, such as the Cray X/MP, in terms of maximum performance. This paper describes an on-going project to evaluate a broad range of advanced architecture computers using a number of complete scientific application programs. The computers to be evaluated include distributed-memory machines such as the NCUBE, INTEL and Caltech/JPL hypercubes and the MEIKO computing surface; shared-memory, bus-architecture machines such as the Sequent Balance and the Alliant; very long instruction word machines such as the Multiflow Trace 7/200 computer; traditional supercomputers such as the Cray X/MP and Cray-2; and SIMD machines such as the Connection Machine. Currently 11 application codes from a number of scientific disciplines have been selected, although it is not intended to run all codes on all machines. Results are presented for two of the codes (QCD and missile tracking), and future work is proposed

  1. The utilization of electronic computers for bone density measurements with iodine 125 profile scanner

    International Nuclear Information System (INIS)

    Reiners, C.

    1974-01-01

    The utilization of electronic computers in the determination of the mineral content of bone with the 125 I profile scanner offers many advantages. The computer considerably reduces the labor-intensive work of routine evaluation. It enables direct calculation of the attenuation coefficients. This means greater accuracy and correctness of the results compared with the former 'graphical' method, as approximations are eliminated and reference errors are avoided. (orig./LH) [de

  2. Enabling High-Performance Computing as a Service

    KAUST Repository

    AbdelBaky, Moustafa; Parashar, Manish; Kim, Hyunjoo; Jordan, Kirk E.; Sachdeva, Vipin; Sexton, James; Jamjoom, Hani; Shae, Zon-Yin; Pencheva, Gergina; Tavakoli, Reza; Wheeler, Mary F.

    2012-01-01

    With the right software infrastructure, clouds can provide scientists with as a service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a

  3. Computation of quantum electron transport with local current conservation using quantum trajectories

    International Nuclear Information System (INIS)

    Alarcón, A; Oriols, X

    2009-01-01

    A recent proposal for modeling time-dependent quantum electron transport with Coulomb and exchange correlations using quantum (Bohm) trajectories (Oriols 2007 Phys. Rev. Lett. 98 066803) is extended towards the computation of the total (particle plus displacement) current in mesoscopic devices. In particular, two different methods for the practical computation of the total current are compared. The first method computes the particle and the displacement currents from the rate of Bohm particles crossing a particular surface and the time-dependent variations of the electric field there. The second method uses the Ramo–Shockley theorem to compute the total current on that surface from the knowledge of the Bohm particle dynamics in a 3D volume and the time-dependent variations of the electric field on the boundaries of that volume. From a computational point of view, it is shown that both methods achieve local current conservation, but the second is preferred because it is free from 'spurious' peaks. A numerical example, a Bohm trajectory crossing a double-barrier tunneling structure, is presented, supporting the conclusions
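
    The second method rests on the Ramo–Shockley theorem, which gives the current induced on an electrode as I = Σᵢ qᵢ vᵢ · E_w(rᵢ), where E_w is that electrode's weighting field (the field obtained with the electrode at 1 V and all others grounded). The sketch below evaluates this sum for point charges; the one-dimensional parallel-plate weighting field and the numerical values are illustrative, not the paper's 3D device.

```python
import numpy as np

def ramo_shockley_current(charges, positions, velocities, weighting_field):
    """Instantaneous current induced on an electrode by moving charges:
    I = sum_i q_i * v_i . E_w(r_i), with E_w the electrode's weighting field."""
    total = 0.0
    for q, r, v in zip(charges, positions, velocities):
        total += q * np.dot(v, weighting_field(r))
    return total

# Illustrative 1D parallel-plate device of length L: the weighting field is uniform, 1/L.
L = 100e-9                                  # m (assumed device length)
E_w = lambda r: np.array([1.0 / L, 0.0, 0.0])
q_e = -1.602e-19                            # C
I = ramo_shockley_current([q_e],
                          [np.array([20e-9, 0.0, 0.0])],   # position (m)
                          [np.array([1e5, 0.0, 0.0])],      # velocity (m/s)
                          E_w)
print(I, "A")
```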

  4. Computational methods for constructing protein structure models from 3D electron microscopy maps.

    Science.gov (United States)

    Esquivel-Rodríguez, Juan; Kihara, Daisuke

    2013-10-01

    Protein structure determination by cryo-electron microscopy (EM) has made significant progress in the past decades. Resolutions of EM maps have been improving as evidenced by recently reported structures that are solved at high resolutions close to 3Å. Computational methods play a key role in interpreting EM data. Among many computational procedures applied to an EM map to obtain protein structure information, in this article we focus on reviewing computational methods that model protein three-dimensional (3D) structures from a 3D EM density map that is constructed from two-dimensional (2D) maps. The computational methods we discuss range from de novo methods, which identify structural elements in an EM map, to structure fitting methods, where known high resolution structures are fit into a low-resolution EM map. A list of available computational tools is also provided. Copyright © 2013 Elsevier Inc. All rights reserved.

  5. Design of Carborane Molecular Architectures via Electronic Structure Computations

    International Nuclear Information System (INIS)

    Oliva, J.M.; Serrano-Andres, L.; Klein, D.J.; Schleyer, P.V.R.; Mich, J.

    2009-01-01

    Quantum-mechanical electronic structure computations were employed to explore initial steps towards a comprehensive design of polycarborane architectures through assembly of molecular units. Aspects considered were (i) the striking modification of geometrical parameters through substitution, (ii) endohedral carboranes and proposed ejection mechanisms for ion/atom/energy storage/transport, (iii) the excited-state character of single and dimeric molecular units, and (iv) higher architectural constructs. A goal of this work is to find optimal architectures in which atom/ion/energy/spin transport within carborane superclusters is feasible, in order to modernize and improve future photoenergy processes.

  6. Computational Study on Atomic Structures, Electronic Properties, and Chemical Reactions at Surfaces and Interfaces and in Biomaterials

    Science.gov (United States)

    Takano, Yu; Kobayashi, Nobuhiko; Morikawa, Yoshitada

    2018-06-01

    Through computer simulations using atomistic models, it is becoming possible to calculate the atomic structures of localized defects or dopants in semiconductors, chemically active sites in heterogeneous catalysts, nanoscale structures, and active sites in biological systems precisely. Furthermore, it is also possible to clarify physical and chemical properties possessed by these nanoscale structures such as electronic states, electronic and atomic transport properties, optical properties, and chemical reactivity. It is sometimes quite difficult to clarify these nanoscale structure-function relations experimentally and, therefore, accurate computational studies are indispensable in materials science. In this paper, we review recent studies on the relation between local structures and functions for inorganic, organic, and biological systems by using atomistic computer simulations.

  7. Performance of a carbon nanotube field emission electron gun

    Science.gov (United States)

    Getty, Stephanie A.; King, Todd T.; Bis, Rachael A.; Jones, Hollis H.; Herrero, Federico; Lynch, Bernard A.; Roman, Patrick; Mahaffy, Paul

    2007-04-01

    A cold cathode field emission electron gun (e-gun) based on a patterned carbon nanotube (CNT) film has been fabricated for use in a miniaturized reflectron time-of-flight mass spectrometer (RTOF MS), with future applications in other charged particle spectrometers, and performance of the CNT e-gun has been evaluated. A thermionic electron gun has also been fabricated and evaluated in parallel and its performance is used as a benchmark in the evaluation of our CNT e-gun. Implications for future improvements and integration into the RTOF MS are discussed.

  8. Computer Cataloging of Electronic Journals in Unstable Aggregator Databases: The Hong Kong Baptist University Library Experience.

    Science.gov (United States)

    Li, Yiu-On; Leung, Shirley W.

    2001-01-01

    Discussion of aggregator databases focuses on a project at the Hong Kong Baptist University library to integrate full-text electronic journal titles from three unstable aggregator databases into its online public access catalog (OPAC). Explains the development of the electronic journal computer program (EJCOP) to generate MARC records for…

  9. Computed radiography systems performance evaluation

    International Nuclear Information System (INIS)

    Xavier, Clarice C.; Nersissian, Denise Y.; Furquim, Tania A.C.

    2009-01-01

    The performance of a computed radiography system was evaluated according to AAPM Report No. 93. Evaluation tests proposed by the publication were performed, and the following nonconformities were found: imaging plate (IP) dark noise, which compromises the clinical image acquired using the IP; an uncalibrated exposure indicator, which can cause underexposure of the IP; nonlinearity of the system response, which causes overexposure; a resolution limit below that declared by the manufacturer and uncalibrated erasure thoroughness, impairing the visualization of structures; a Moire pattern visible in the grid response; and IP throughput above that specified by the manufacturer. These nonconformities indicate that a lack of calibration in digital imaging systems can cause an increase in dose in order for image problems to be solved. (author)

  10. New Computational Approach to Electron Transport in Irregular Graphene Nanostructures

    Science.gov (United States)

    Mason, Douglas; Heller, Eric; Prendergast, David; Neaton, Jeffrey

    2009-03-01

    For novel graphene devices of nanoscale-to-macroscopic scale, many aspects of their transport properties are not easily understood due to difficulties in fabricating devices with regular edges. Here we develop a framework to efficiently calculate and potentially screen electronic transport properties of arbitrary nanoscale graphene device structures. A generalization of the established recursive Green's function method is presented, providing access to arbitrary device and lead geometries with substantial computer-time savings. Using single-orbital nearest-neighbor tight-binding models and the Green's function-Landauer scattering formalism, we will explore the transmission function of irregular two-dimensional graphene-based nanostructures with arbitrary lead orientation. Prepared by LBNL under contract DE-AC02-05CH11231 and supported by the U.S. Dept. of Energy Computer Science Graduate Fellowship under grant DE-FG02-97ER25308.
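
    As a compact illustration of the Green's function–Landauer formalism mentioned above (using direct matrix inversion rather than the recursive algorithm generalized in that work), the sketch below computes the transmission of a single-orbital nearest-neighbor tight-binding chain coupled to semi-infinite 1D leads; the chain length and hopping value are arbitrary placeholders.

```python
import numpy as np

def transmission(H, t_lead, E, eta=1e-9):
    """Landauer transmission through a tight-binding device H whose first and
    last sites couple to semi-infinite 1D leads with hopping t_lead."""
    z = E + 1j * eta
    # Retarded surface Green's function of a semi-infinite 1D chain (onsite 0, hopping t_lead).
    g_s = (z - np.sqrt(z**2 - 4 * t_lead**2)) / (2 * t_lead**2)
    if g_s.imag > 0:                      # pick the retarded branch (Im g < 0)
        g_s = (z + np.sqrt(z**2 - 4 * t_lead**2)) / (2 * t_lead**2)
    sigma = t_lead**2 * g_s
    n = H.shape[0]
    SL = np.zeros((n, n), complex); SL[0, 0] = sigma
    SR = np.zeros((n, n), complex); SR[-1, -1] = sigma
    G = np.linalg.inv(z * np.eye(n) - H - SL - SR)       # retarded device Green's function
    GL = 1j * (SL - SL.conj().T)                          # lead broadening matrices
    GR = 1j * (SR - SR.conj().T)
    return np.trace(GL @ G @ GR @ G.conj().T).real        # T(E) = Tr[Gamma_L G Gamma_R G^dagger]

t = -1.0                                              # hopping (illustrative single-orbital model)
H = np.diag([t] * 4, 1) + np.diag([t] * 4, -1)        # 5-site chain
print(transmission(H, t, E=0.5))                      # ~1 inside the band for a perfect chain
```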

  11. High performance computing system in the framework of the Higgs boson studies

    CERN Document Server

    Belyaev, Nikita; The ATLAS collaboration

    2017-01-01

    Higgs boson physics is one of the most important and promising fields of study in modern High Energy Physics. To perform precision measurements of the Higgs boson properties, fast and efficient instruments for Monte Carlo event simulation are required. Due to the increasing amount of data and the growing complexity of the simulation software tools, the computing resources currently available for Monte Carlo simulation on the LHC GRID are not sufficient. One possibility to address this shortfall of computing resources is the usage of institutes' computer clusters, commercial computing resources and supercomputers. In this paper, a brief description of Higgs boson physics and of the Monte Carlo generation and event simulation techniques is presented. A description of modern high performance computing systems and tests of their performance are also discussed. These studies have been performed on the Worldwide LHC Computing Grid and Kurchatov Institute Data Processing Center, including Tier...

  12. Advanced Certification Program for Computer Graphic Specialists. Final Performance Report.

    Science.gov (United States)

    Parkland Coll., Champaign, IL.

    A pioneer program in computer graphics was implemented at Parkland College (Illinois) to meet the demand for specialized technicians to visualize data generated on high performance computers. In summer 1989, 23 students were accepted into the pilot program. Courses included C programming, calculus and analytic geometry, computer graphics, and…

  13. How Do We Really Compute with Units?

    Science.gov (United States)

    Fiedler, B. H.

    2010-01-01

    The methods that we teach students for computing with units of measurement are often not consistent with the practice of professionals. For professionals, the vast majority of computations with quantities of measure are performed within programs on electronic computers, for which an accounting for the units occurs only once, in the design of the…
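
    A small sketch of that practice (illustrative, not taken from the article): the physics routine works purely in SI, and unit conversions are written exactly once, in thin wrappers at the program boundary. The formula and conversion factors shown are just an example.

```python
# Accounting for units once, at the design stage: all internal computation is in SI;
# conversions happen only at the program's edges.

FT_PER_M = 3.280839895

def braking_distance_m(speed_m_per_s, deceleration_m_per_s2):
    """Internal physics in SI only: d = v^2 / (2 a)."""
    return speed_m_per_s**2 / (2.0 * deceleration_m_per_s2)

def braking_distance_ft(speed_mph, deceleration_g):
    """Boundary wrapper: converts user-facing units to SI once, and back once."""
    speed_m_per_s = speed_mph * 0.44704          # mph -> m/s
    deceleration = deceleration_g * 9.80665      # g -> m/s^2
    return braking_distance_m(speed_m_per_s, deceleration) * FT_PER_M

print(braking_distance_ft(60.0, 0.8))            # ~150 ft, illustrative numbers
```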

  14. Improving engineers' performance with computers

    International Nuclear Information System (INIS)

    Purvis, E.E. III

    1984-01-01

    The problem addressed is how to improve the performance of engineers in the design, operation, and maintenance of nuclear power plants. The application of computer science to this problem offers a challenge in maximizing the use of developments outside the nuclear industry and setting priorities to address the most fruitful areas first. Areas of potential benefits include data base management through design, analysis, procurement, construction, operation maintenance, cost, schedule and interface control and planning, and quality engineering on specifications, inspection, and training

  15. A high performance scientific cloud computing environment for materials simulations

    OpenAIRE

    Jorissen, Kevin; Vila, Fernando D.; Rehr, John J.

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including...

  16. Performance monitoring for brain-computer-interface actions.

    Science.gov (United States)

    Schurger, Aaron; Gale, Steven; Gozel, Olivia; Blanke, Olaf

    2017-02-01

    When presented with a difficult perceptual decision, human observers are able to make metacognitive judgements of subjective certainty. Such judgements can be made independently of and prior to any overt response to a sensory stimulus, presumably via internal monitoring. Retrospective judgements about one's own task performance, on the other hand, require first that the subject perform a task and thus could potentially be made based on motor processes, proprioceptive, and other sensory feedback rather than internal monitoring. With this dichotomy in mind, we set out to study performance monitoring using a brain-computer interface (BCI), with which subjects could voluntarily perform an action - moving a cursor on a computer screen - without any movement of the body, and thus without somatosensory feedback. Real-time visual feedback was available to subjects during training, but not during the experiment where the true final position of the cursor was only revealed after the subject had estimated where s/he thought it had ended up after 6s of BCI-based cursor control. During the first half of the experiment subjects based their assessments primarily on the prior probability of the end position of the cursor on previous trials. However, during the second half of the experiment subjects' judgements moved significantly closer to the true end position of the cursor, and away from the prior. This suggests that subjects can monitor task performance when the task is performed without overt movement of the body. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Performance of the Electronic Readout of the ATLAS Liquid Argon Calorimeters

    CERN Document Server

    Abreu, H; Aleksa, M; Aperio Bella, L; Archambault, JP; Arfaoui, S; Arnaez, O; Auge, E; Aurousseau, M; Bahinipati, S; Ban, J; Banfi, D; Barajas, A; Barillari, T; Bazan, A; Bellachia, F; Beloborodova, O; Benchekroun, D; Benslama, K; Berger, N; Berghaus, F; Bernat, P; Bernier, R; Besson, N; Binet, S; Blanchard, JB; Blondel, A; Bobrovnikov, V; Bohner, O; Boonekamp, M; Bordoni, S; Bouchel, M; Bourdarios, C; Bozzone, A; Braun, HM; Breton, D; Brettel, H; Brooijmans, G; Caputo, R; Carli, T; Carminati, L; Caughron, S; Cavalleri, P; Cavalli, D; Chareyre, E; Chase, RL; Chekulaev, SV; Chen, H; Cheplakov, A; Chiche, R; Citterio, M; Cojocaru, C; Colas, J; Collard, C; Collot, J; Consonni, M; Cooke, M; Copic, K; Costa, GC; Courneyea, L; Cuisy, D; Cwienk, WD; Damazio, D; Dannheim, D; De Cecco, S; De La Broise, X; De La Taille, C; de Vivie, JB; Debennerot, B; Delagnes, E; Delmastro, M; Derue, F; Dhaliwal, S; Di Ciaccio, L; Doan, O; Dudziak, F; Duflot, L; Dumont-Dayot, N; Dzahini, D; Elles, S; Ertel, E; Escalier, M; Etienvre, AI; Falleau, I; Fanti, M; Farooque, T; Favre, P; Fayard, Louis; Fent, J; Ferencei, J; Fischer, A; Fournier, D; Fournier, L; Fras, M; Froeschl, R; Gadfort, T; Gallin-Martel, ML; Gibson, A; Gillberg, D; Gingrich, DM; Göpfert, T; Goodson, J; Gouighri, M; Goy, C; Grassi, V; Gray, J; Guillemin, T; Guo, B; Habring, J; Handel, C; Heelan, L; Heintz, H; Helary, L; Henrot-Versille, S; Hervas, L; Hobbs, J; Hoffman, J; Hostachy, JY; Hoummada, A; Hrivnac, J; Hrynova, T; Hubaut, F; Huber, J; Iconomidou-Fayard, L; Iengo, P; Imbert, P; Ishmukhametov, R; Jantsch, A; Javadov, N; Jezequel, S; Jimenez Belenguer, M; Ju, XY; Kado, M; Kalinowski, A; Kar, D; Karev, A; Katsanos, I; Kazarinov, M; Kerschen, N; Kierstead, J; Kim, MS; Kiryunin, A; Kladiva, E; Knecht, N; Kobel, M; Koletsou, I; König, S; Krieger, P; Kukhtin, V; Kuna, M; Kurchaninov, L; Labbe, J; Lacour, D; Ladygin, E; Lafaye, R; Laforge, B; Lamarra, D; Lampl, W; Lanni, F; Laplace, S; Laskus, H; Le Coguie, A; Le Dortz, O; Le Maner, C; Lechowski, M; Lee, SC; Lefebvre, M; Leonhardt, K; Lethiec, L; Leveque, J; Liang, Z; Liu, C; Liu, T; Liu, Y; Loch, P; Lu, J; Ma, H; Mader, W; Majewski, S; Makovec, N; Makowiecki, D; Mandelli, L; Mangeard, PS; Mansoulie, B; Marchand, JF; Marchiori, G; Martin, D; Martin-Chassard, G; Martin dit Latour, B; Marzin, A; Maslennikov, A; Massol, N; Matricon, P; Maximov, D; Mazzanti, M; McCarthy, T; McPherson, R; Menke, S; Meyer, JP; Ming, Y; Monnier, E; Mooshofer, P; Neganov, A; Niedercorn, F; Nikolic-Audit, I; Nugent, IM; Oakham, G; Oberlack, H; Ocariz, J; Odier, J; Oram, CJ; Orlov, I; Orr, R; Parsons, JA; Peleganchuk, S; Penson, A; Perini, L; Perrodo, P; Perrot, G; Perus, A; Petit, E; Pisarev, I; Plamondon, M; Poffenberger, P; Poggioli, L; Pospelov, G; Pralavorio, P; Prast, J; Prudent, X; Przysiezniak, H; Puzo, P; Quentin, M; Radeka, V; Rajagopalan, S; Rauter, E; Reimann, O; Rescia, S; Resende, B; Richer, JP; Ridel, M; Rios, R; Roos, L; Rosenbaum, G; Rosenzweig, H; Rossetto, O; Roudil, W; Rousseau, D; Ruan, X; Rudert, A; Rusakovich, N; Rusquart, P; Rutherfoord, J; Sauvage, G; Savine, A; Schaarschmidt, J; Schacht, P; Schaffer, A; Schram, M; Schwemling, P; Seguin Moreau, N; Seifert, F; Serin, L; Seuster, R; Shalyugin, A; Shupe, M; Simion, S; Sinervo, P; Sippach, W; Skovpen, K; Sliwa, R; Soukharev, A; Spano, F; Stavina, P; Straessner, A; Strizenec, P; Stroynowski, R; Talyshev, A; Tapprogge, S; Tarrade, F; Tartarelli, GF; Teuscher, R; Tikhonov, Yu; Tocut, V; Tompkins, D; Thompson, P; Tisserant, S; Todorov, T; Tomasz, F; 
Trincaz-Duvoid, S; Trinh, Thi N; Trochet, S; Trocme, B; Tschann-Grimm, K; Tsionou, D; Ueno, R; Unal, G; Urbaniec, D; Usov, Y; Voss, K; Veillet, JJ; Vincter, M; Vogt, S; Weng, Z; Whalen, K; Wicek, F; Wilkens, H; Wingerter-Seez, I; Wulf, E; Yang, Z; Ye, J; Yuan, L; Yurkewicz, A; Zarzhitsky, P; Zerwas, D; Zhang, H; Zhang, L; Zhou, N; Zimmer, J; Zitoun, R; Zivkovic, L

    2010-01-01

    The ATLAS detector has been designed for operation at the Large Hadron Collider at CERN. ATLAS includes electromagnetic and hadronic liquid argon calorimeters, with almost 200,000 channels of data that must be sampled at the LHC bunch crossing frequency of 40 MHz. The calorimeter electronics calibration and readout are performed by custom electronics developed specifically for these purposes. This paper describes the system performance of the ATLAS liquid argon calibration and readout electronics, including noise, energy and time resolution, and long term stability, with data taken mainly from full-system calibration runs performed after installation of the system in the ATLAS detector hall at CERN.

  18. Static Memory Deduplication for Performance Optimization in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Gangyong Jia

    2017-04-01

    Full Text Available In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand of memory capacity and subsequent increase in the energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of the response time is negligible.

  19. Static Memory Deduplication for Performance Optimization in Cloud Computing.

    Science.gov (United States)

    Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan

    2017-04-27

    In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand of memory capacity and subsequent increase in the energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of the response time is negligible.
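
    The page-detection step can be pictured with the toy sketch below (an illustration of offline page-level deduplication in general, not the SMD implementation): fixed-size pages of a code segment are hashed, identical pages are grouped, and the potential memory saving is counted. The page size and the all-zero test segment are assumptions.

```python
import hashlib

PAGE_SIZE = 4096  # bytes (assumed page size)

def find_shareable_pages(segment: bytes):
    """Hash each fixed-size page of a (code) segment and group identical pages,
    so duplicate pages could be mapped copy-on-write to a single physical frame."""
    groups = {}
    for off in range(0, len(segment) - len(segment) % PAGE_SIZE, PAGE_SIZE):
        page = segment[off:off + PAGE_SIZE]
        groups.setdefault(hashlib.sha256(page).hexdigest(), []).append(off)
    duplicates = {h: offs for h, offs in groups.items() if len(offs) > 1}
    saved = sum(len(offs) - 1 for offs in duplicates.values()) * PAGE_SIZE
    return duplicates, saved

dup, saved_bytes = find_shareable_pages(bytes(64 * PAGE_SIZE))  # toy all-zero segment
print(len(dup), "duplicate group(s),", saved_bytes, "bytes shareable")
```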

  20. Computer Simulation Performed for Columbia Project Cooling System

    Science.gov (United States)

    Ahmad, Jasim

    2005-01-01

    This demo shows a high-fidelity simulation of the air flow in the main computer room housing the Columbia (10,240 Intel Itanium processors) system. The simulation assessed the performance of the cooling system, identified deficiencies, and recommended modifications to eliminate them. It used two in-house software packages on NAS supercomputers: Chimera Grid Tools to generate a geometric model of the computer room and the OVERFLOW-2 code for fluid and thermal simulation. This state-of-the-art technology can be easily extended to provide a general capability for air flow analyses in any modern computer room.

  1. Survey of computer codes applicable to waste facility performance evaluations

    International Nuclear Information System (INIS)

    Alsharif, M.; Pung, D.L.; Rivera, A.L.; Dole, L.R.

    1988-01-01

    This study is an effort to review existing information that is useful for developing an integrated model for predicting the performance of a radioactive waste facility. A summary description of 162 computer codes is given. The identified computer programs address the performance of waste packages, waste transport and equilibrium geochemistry, hydrological processes in unsaturated and saturated zones, and general waste facility performance assessment. Some programs also deal with thermal analysis, structural analysis, and special purposes. A number of these computer programs are being used by the US Department of Energy, the US Nuclear Regulatory Commission, and their contractors to analyze various aspects of waste package performance. Fifty-five of these codes were identified as being potentially useful in the analysis of low-level radioactive waste facilities located above the water table. The code summaries include authors, identification data, model types, and pertinent references. 14 refs., 5 tabs

  2. Performance of photocathode rf gun electron accelerators

    International Nuclear Information System (INIS)

    Ben-Zvi, I.

    1993-01-01

    In photo-injector (PI) electron guns, electrons are emitted from a photocathode by a short laser pulse and then accelerated by intense rf fields in a resonant cavity. The best-known advantage of this technique is the high peak current with good emittance (high brightness). This is important for short-wavelength Free-Electron Lasers and linear colliders. PIs are in operation in many electron accelerator facilities and a large number of new guns are under construction. Some applications have emerged, providing, for example, very high pulse charges. PIs have been operated over a wide range of frequencies, from 144 to 3000 MHz (a 17 GHz gun is being developed). An exciting new possibility is the development of superconducting PIs. A significant body of experimental and theoretical work now exists, indicating the criticality of the accelerator elements that follow the gun for the preservation of the PI's performance, as well as possible avenues of improvement in brightness. Considerable research is being done on the laser and photocathode material of the PI, and improvement is expected in this area

  3. New method of computing the contributions of graphs without lepton loops to the electron anomalous magnetic moment in QED

    Science.gov (United States)

    Volkov, Sergey

    2017-11-01

    This paper presents a new method of numerical computation of the mass-independent QED contributions to the electron anomalous magnetic moment which arise from Feynman graphs without closed electron loops. The method is based on a forestlike subtraction formula that removes all ultraviolet and infrared divergences in each Feynman graph before integration in Feynman-parametric space. The integration is performed by an importance sampling Monte-Carlo algorithm with the probability density function that is constructed for each Feynman graph individually. The method is fully automated at any order of the perturbation series. The results of applying the method to 2-loop, 3-loop, 4-loop Feynman graphs, and to some individual 5-loop graphs are presented, as well as the comparison of this method with other ones with respect to Monte Carlo convergence speed.
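
    The role of a tailored probability density can be seen in a one-dimensional toy version of importance sampling (the paper constructs its densities per Feynman graph in many dimensions; the integrand and density below are illustrative): sampling from a density that mimics the integrable end-point behavior of the integrand yields a much lower-variance estimate than uniform sampling.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy Feynman-parameter-like integrand on (0, 1): integrable singularity at x = 0.
f = lambda x: np.exp(-x) / np.sqrt(x)

# Importance sampling: draw x from p(x) = 1 / (2 sqrt(x)), i.e. x = u^2 for uniform u,
# which mimics the end-point behavior, then average f(x) / p(x).
n = 100_000
u = rng.random(n)
x = u**2
p = 1.0 / (2.0 * np.sqrt(x))
estimate = np.mean(f(x) / p)        # equals mean of 2*exp(-x): very low variance

# Plain uniform sampling for comparison (much larger variance near x -> 0).
x_uni = rng.random(n)
naive = np.mean(f(x_uni))

print(estimate, naive)              # both approximate the same integral (~1.49)
```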

  4. High performance computing in science and engineering Garching/Munich 2016

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, Siegfried; Bode, Arndt; Bruechle, Helmut; Brehm, Matthias (eds.)

    2016-11-01

    Computer simulations are the well-established third pillar of the natural sciences, along with theory and experimentation. High performance computing in particular is growing fast and constantly demands more and more powerful machines. To keep pace with this development, in spring 2015 the Leibniz Supercomputing Centre installed the high performance computing system SuperMUC Phase 2, only three years after the inauguration of its sibling SuperMUC Phase 1, thereby more than doubling the compute capabilities. This book covers the time frame from June 2014 until June 2016. Readers will find many examples of outstanding research in the more than 130 projects covered in this book, each of which used at least 4 million core-hours on SuperMUC. The largest scientific communities using SuperMUC in the last two years were computational fluid dynamics, chemistry and materials sciences, astrophysics, and life sciences.

  5. Enabling High-Performance Computing as a Service

    KAUST Repository

    AbdelBaky, Moustafa

    2012-10-01

    With the right software infrastructure, clouds can provide scientists with as a service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a representative HPC application. © 2012 IEEE.

  6. High-performance computational fluid dynamics: a custom-code approach

    International Nuclear Information System (INIS)

    Fannon, James; Náraigh, Lennon Ó; Loiseau, Jean-Christophe; Valluri, Prashant; Bethune, Iain

    2016-01-01

    We introduce a modified and simplified version of the pre-existing fully parallelized three-dimensional Navier–Stokes flow solver known as TPLS. We demonstrate how the simplified version can be used as a pedagogical tool for the study of computational fluid dynamics (CFDs) and parallel computing. TPLS is at its heart a two-phase flow solver, and uses calls to a range of external libraries to accelerate its performance. However, in the present context we narrow the focus of the study to basic hydrodynamics and parallel computing techniques, and the code is therefore simplified and modified to simulate pressure-driven single-phase flow in a channel, using only relatively simple Fortran 90 code with MPI parallelization, but no calls to any other external libraries. The modified code is analysed in order to both validate its accuracy and investigate its scalability up to 1000 CPU cores. Simulations are performed for several benchmark cases in pressure-driven channel flow, including a turbulent simulation, wherein the turbulence is incorporated via the large-eddy simulation technique. The work may be of use to advanced undergraduate and graduate students as an introductory study in CFDs, while also providing insight for those interested in more general aspects of high-performance computing. (paper)

  7. High-performance computational fluid dynamics: a custom-code approach

    Science.gov (United States)

    Fannon, James; Loiseau, Jean-Christophe; Valluri, Prashant; Bethune, Iain; Náraigh, Lennon Ó.

    2016-07-01

    We introduce a modified and simplified version of the pre-existing fully parallelized three-dimensional Navier-Stokes flow solver known as TPLS. We demonstrate how the simplified version can be used as a pedagogical tool for the study of computational fluid dynamics (CFDs) and parallel computing. TPLS is at its heart a two-phase flow solver, and uses calls to a range of external libraries to accelerate its performance. However, in the present context we narrow the focus of the study to basic hydrodynamics and parallel computing techniques, and the code is therefore simplified and modified to simulate pressure-driven single-phase flow in a channel, using only relatively simple Fortran 90 code with MPI parallelization, but no calls to any other external libraries. The modified code is analysed in order to both validate its accuracy and investigate its scalability up to 1000 CPU cores. Simulations are performed for several benchmark cases in pressure-driven channel flow, including a turbulent simulation, wherein the turbulence is incorporated via the large-eddy simulation technique. The work may be of use to advanced undergraduate and graduate students as an introductory study in CFDs, while also providing insight for those interested in more general aspects of high-performance computing.
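
    As a much-reduced illustration of the flow configuration the simplified solver targets, the sketch below (Python/NumPy, serial, and entirely unrelated to the TPLS code base) solves steady pressure-driven laminar flow between two plates with a second-order finite difference and checks it against the analytic Poiseuille profile. The grid size, viscosity and pressure gradient are arbitrary illustrative values.

        import numpy as np

        # Steady plane Poiseuille flow: mu * d2u/dy2 = dp/dx, with u(0) = u(h) = 0.
        n, h = 101, 1.0            # grid points and channel height (illustrative)
        mu, dpdx = 1.0e-3, -1.0    # viscosity and imposed pressure gradient (illustrative)
        y = np.linspace(0.0, h, n)
        dy = y[1] - y[0]

        # Tridiagonal second-difference operator for the interior points.
        A = np.zeros((n - 2, n - 2))
        np.fill_diagonal(A, -2.0)
        np.fill_diagonal(A[1:], 1.0)
        np.fill_diagonal(A[:, 1:], 1.0)
        rhs = np.full(n - 2, dpdx / mu * dy**2)

        u = np.zeros(n)
        u[1:-1] = np.linalg.solve(A, rhs)

        u_exact = dpdx / (2.0 * mu) * y * (y - h)   # analytic parabolic profile
        print("max abs error:", np.abs(u - u_exact).max())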

  8. High Performance Computing in Science and Engineering '08 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2009-01-01

    The discussions and plans on all scientific, advisory, and political levels to realize an even larger “European Supercomputer” in Germany, where the hardware costs alone will be hundreds of millions of Euro – much more than in the past – are getting closer to realization. As part of the strategy, the three national supercomputing centres HLRS (Stuttgart), NIC/JSC (Jülich) and LRZ (Munich) have formed the Gauss Centre for Supercomputing (GCS) as a new virtual organization enabled by an agreement between the Federal Ministry of Education and Research (BMBF) and the state ministries for research of Baden-Württemberg, Bayern, and Nordrhein-Westfalen. Already today, the GCS provides the most powerful high-performance computing infrastructure in Europe. Through GCS, HLRS participates in the European project PRACE (Partnership for Advanced Computing in Europe) and extends its reach to all European member countries. These activities align well with the activities of HLRS in the European HPC infrastructur...

  9. A FORTRAN program for an IBM PC compatible computer for calculating kinematical electron diffraction patterns

    International Nuclear Information System (INIS)

    Skjerpe, P.

    1989-01-01

    This report describes a computer program which is useful in transmission electron microscopy. The program is written in FORTRAN and calculates kinematical electron diffraction patterns in any zone axis from a given crystal structure. Quite large unit cells, containing up to 2250 atoms, can be handled by the program. The program runs on both the Hercules graphics card and the standard IBM CGA card

  10. Enhanced ECR ion source performance with an electron gun

    International Nuclear Information System (INIS)

    Xie, Z.; Lyneis, C.M.; Lam, R.S.; Lundgren, S.A.

    1991-01-01

    An electron gun for the advanced electron cyclotron resonance (AECR) source has been developed to increase the production of high charge state ions. The AECR source, which operates at 14 GHz, is being developed for the 88-in. cyclotron at Lawrence Berkeley Laboratory. The electron gun injects 10 to 150 eV electrons into the plasma chamber of the AECR. With the electron gun the AECR has produced at 10 kV extraction voltage 131 e μA of O7+, 13 e μA of O8+, 17 e μA of Ar14+, 2.2 e μA of Kr25+, 1 e μA of Xe31+, and 0.2 e μA of Bi38+. The AECR was also tested as a single stage source with a coating of SiO2 on the plasma chamber walls. This significantly improved its performance compared to no coating, but direct injection of electrons with the electron gun produced the best results

  11. Reproducibility of coronary calcification detection with electron-beam computed tomography

    International Nuclear Information System (INIS)

    Hernigou, A.; Challande, P.; Boudeville, J.C.; Sene, V.; Grataloup, C.; Plainfosse, M.

    1996-01-01

    If coronary calcification scores obtained with electron-beam computed tomography (EBT) are proved to be correlated with coronary atherosclerosis, the reproducibility of the technique has to be assessed before it is used for patient follow-up. A total of 150 patients, selected as a result of a cholesterol screening programme, were studied by EBT. Twelve contiguous 3-mm-thick transverse slices beginning on the proximal coronary arteries were obtained through the base of the heart. The amount of calcium was evaluated as the calcified area weighted by a coefficient depending on the density peak level. The value was expressed on a logarithmic scale. Intra-observer, inter-observer and inter-examination reproducibilities were calculated. They were 1.9, 1.3 and 7.2%, respectively. These results were good enough to allow the use of EBT for longitudinal studies. The influence of acquisition and calculation conditions on score computation was also analysed. (orig.)
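
    The scoring rule sketched in the abstract (calcified area weighted by a coefficient that depends on the density peak) is close in spirit to the familiar Agatston score. The short Python sketch below shows how such an area-times-weight score could be accumulated over detected lesions; the density thresholds and the final logarithmic transform are illustrative assumptions, not the exact coefficients used in this study.

        import math

        def density_weight(peak_hu):
            # Illustrative Agatston-style weighting by peak CT density (assumed thresholds).
            if peak_hu >= 400: return 4
            if peak_hu >= 300: return 3
            if peak_hu >= 200: return 2
            if peak_hu >= 130: return 1
            return 0

        def calcium_score(lesions):
            # Each lesion is (area in mm^2, peak density in Hounsfield units).
            raw = sum(area * density_weight(peak) for area, peak in lesions)
            # The study reports values on a logarithmic scale; log10(1 + score) is used
            # here purely as an assumed transform.
            return raw, math.log10(1.0 + raw)

        lesions = [(4.5, 310), (2.0, 150), (7.2, 480)]   # hypothetical measurements
        print(calcium_score(lesions))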

  12. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1988-08-01

    Fermilab's Advanced Computer Program (ACP) has been developing highly cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 MFlops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter and intra crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction. 10 refs., 7 figs

  13. Modeling the high-energy electronic state manifold of adenine: Calibration for nonlinear electronic spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Nenov, Artur, E-mail: Artur.Nenov@unibo.it; Giussani, Angelo; Segarra-Martí, Javier; Jaiswal, Vishal K. [Dipartimento di Chimica “G. Ciamician,” Università di Bologna, Via Selmi 2, IT-40126 Bologna (Italy); Rivalta, Ivan [Université de Lyon, CNRS, Institut de Chimie de Lyon, École Normale Supérieure de Lyon, 46 Allée d’Italie, F-69364 Lyon Cedex 07 (France); Cerullo, Giulio [Dipartimento di Fisica, Politecnico di Milano, IFN-CNR, Piazza Leonardo Da Vinci 32, IT-20133 Milano (Italy); Mukamel, Shaul [Department of Chemistry, University of California, Irvine, California 92697-2025 (United States); Garavelli, Marco, E-mail: marco.garavelli@unibo.it, E-mail: marco.garavelli@ens-lyon.fr [Dipartimento di Chimica “G. Ciamician,” Università di Bologna, Via Selmi 2, IT-40126 Bologna (Italy); Université de Lyon, CNRS, Institut de Chimie de Lyon, École Normale Supérieure de Lyon, 46 Allée d’Italie, F-69364 Lyon Cedex 07 (France)

    2015-06-07

    Pump-probe electronic spectroscopy using femtosecond laser pulses has evolved into a standard tool for tracking ultrafast excited state dynamics. Its two-dimensional (2D) counterpart is becoming an increasingly available and promising technique for resolving many of the limitations of pump-probe caused by spectral congestion. The ability to simulate pump-probe and 2D spectra from ab initio computations would allow one to link mechanistic observables like molecular motions and the making/breaking of chemical bonds to experimental observables like excited state lifetimes and quantum yields. From a theoretical standpoint, the characterization of the electronic transitions in the visible (Vis)/ultraviolet (UV), which are excited via the interaction of a molecular system with the incoming pump/probe pulses, translates into the determination of a computationally challenging number of excited states (going over 100) even for small/medium sized systems. A protocol is therefore required to evaluate the fluctuations of spectral properties like transition energies and dipole moments as a function of the computational parameters and to estimate the effect of these fluctuations on the transient spectral appearance. In the present contribution such a protocol is presented within the framework of complete and restricted active space self-consistent field theory and its second-order perturbation theory extensions. The electronic excited states of adenine have been carefully characterized through a previously presented computational recipe [Nenov et al., Comput. Theor. Chem. 1040–1041, 295-303 (2014)]. A wise reduction of the level of theory has then been performed in order to obtain a computationally less demanding approach that is still able to reproduce the characteristic features of the reference data. Foreseeing the potentiality of 2D electronic spectroscopy to track polynucleotide ground and excited state dynamics, and in particular its expected ability to provide

  14. Electronic structure of Mo and W investigated with positron annihilation

    Energy Technology Data Exchange (ETDEWEB)

    Dutschke, Markus [Theoretical Physics III, Center for Electronic Correlations and Magnetism, Institute of Physics, University of Augsburg (Germany); Sekania, Michael [Theoretical Physics III, Center for Electronic Correlations and Magnetism, Institute of Physics, University of Augsburg (Germany); Andronikashvili Institute of Physics, Tbilisi (Georgia); Benea, Diana [Faculty of Physics, Babes-Bolyai University, Cluj-Napoca (Romania); Department of Chemistry, Ludwig Maximilian University of Munich (Germany); Ceeh, Hubert; Weber, Joseph A.; Hugenschmidt, Christoph [FRM II, Technische Universitaet Muenchen, Garching (Germany); Chioncel, Liviu [Theoretical Physics III, Center for Electronic Correlations and Magnetism, Institute of Physics, University of Augsburg (Germany); Augsburg Center for Innovative Technologies, University of Augsburg (Germany)

    2016-07-01

    We perform electronic structure calculations to analyze the momentum distribution of the transition metals molybdenum and tungsten. We study the influence of positron-electron and electron-electron interactions on the shape of the two-dimensional angular correlation of positron annihilation radiation (2D-ACAR) spectra. Our analysis is performed within the framework of combined Density Functional Theory (DFT) and Dynamical Mean-Field Theory (DMFT). Computed spectra are compared with recent experimental investigations.

  15. Electronic Publishing.

    Science.gov (United States)

    Lancaster, F. W.

    1989-01-01

    Describes various stages involved in the applications of electronic media to the publishing industry. Highlights include computer typesetting, or photocomposition; machine-readable databases; the distribution of publications in electronic form; computer conferencing and electronic mail; collaborative authorship; hypertext; hypermedia publications;…

  16. A computer code package for Monte Carlo photon-electron transport simulation Comparisons with experimental benchmarks

    International Nuclear Information System (INIS)

    Popescu, Lucretiu M.

    2000-01-01

    A computer code package (PTSIM) for particle transport Monte Carlo simulation was developed using object oriented techniques of design and programming. A flexible system for simulation of coupled photon and electron transport, facilitating development of efficient simulation applications, was obtained. For photons: Compton and photo-electric effects, pair production and Rayleigh interactions are simulated, while for electrons, a class II condensed history scheme was considered, in which catastrophic interactions (Moeller electron-electron interaction, bremsstrahlung, etc.) are treated in detail and all other interactions with reduced individual effect on electron history are grouped together using continuous slowing down approximation and energy straggling theories. Electron angular straggling is simulated using Moliere theory or a mixed model in which scatters at large angles are treated as distinct events. Comparisons with experimental benchmarks for electron transmission and bremsstrahlung emission energy and angular spectra, and for dose calculations are presented
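
    On the photon side, the basic sampling loop of such a Monte Carlo transport code can be illustrated in a few lines. The sketch below (Python, unrelated to PTSIM itself) samples a photon's free path from the exponential attenuation law and selects the interaction type in proportion to the partial cross sections; the numerical cross-section values are placeholders for a single material and energy.

        import numpy as np

        rng = np.random.default_rng(1)

        # Placeholder macroscopic cross sections (1/cm) for one material at one energy.
        sigma = {"photoelectric": 0.02, "compton": 0.15, "rayleigh": 0.01}
        sigma_tot = sum(sigma.values())

        def one_photon_step():
            # Free path sampled from the exponential distribution with mean 1/sigma_tot.
            s = -np.log(rng.random()) / sigma_tot
            # Interaction type chosen with probability proportional to its cross section.
            r = rng.random() * sigma_tot
            acc = 0.0
            for kind, sig in sigma.items():
                acc += sig
                if r <= acc:
                    return s, kind
            return s, kind  # numerical safety net

        steps = [one_photon_step() for _ in range(5)]
        print(steps)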

  17. Computer-aided performance monitoring program at Diablo Canyon

    International Nuclear Information System (INIS)

    Nelson, T.; Glynn, R. III; Kessler, T.C.

    1992-01-01

    This paper describes the thermal performance monitoring program at Pacific Gas & Electric Company's (PG&E's) Diablo Canyon Nuclear Power Plant. The plant performance monitoring program at Diablo Canyon uses the THERMAC performance monitoring and analysis computer software provided by Expert-EASE Systems. THERMAC is used to collect performance data from the plant process computers, condition that data to adjust for measurement errors and missing data points, evaluate cycle and component-level performance, archive the data for trend analysis and generate performance reports. The current status of the program is that, after a fair amount of "tuning" of the basic "thermal kit" models provided with the initial THERMAC installation, we have successfully baselined both units to cycle isolation test data from previous reload cycles. Over the course of the past few months, we have accumulated enough data to generate meaningful performance trends and, as a result, have been able to use THERMAC to track a condenser fouling problem that was costing enough megawatts to attract corporate-level attention. Trends from THERMAC clearly related the megawatt loss to a steadily degrading condenser cleanliness factor and verified the subsequent gain in megawatts after the condenser was cleaned. In the future, we expect to rebaseline THERMAC to a beginning of cycle (BOC) data set and to use the program to help track feedwater nozzle fouling

  18. High performance computing and communications: FY 1996 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-05-16

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

  19. A massively-parallel electronic-structure calculations based on real-space density functional theory

    International Nuclear Information System (INIS)

    Iwata, Jun-Ichi; Takahashi, Daisuke; Oshiyama, Atsushi; Boku, Taisuke; Shiraishi, Kenji; Okada, Susumu; Yabana, Kazuhiro

    2010-01-01

    Based on the real-space finite-difference method, we have developed a first-principles density functional program that efficiently performs large-scale calculations on massively-parallel computers. In addition to efficient parallel implementation, we also implemented several computational improvements, substantially reducing the computational costs of O(N^3) operations such as the Gram-Schmidt procedure and subspace diagonalization. Using the program on a massively-parallel computer cluster with a theoretical peak performance of several TFLOPS, we perform electronic-structure calculations for a system consisting of over 10,000 Si atoms, and obtain a self-consistent electronic structure in a few hundred hours. We analyze in detail the costs of the program in terms of computation and of inter-node communications to clarify the efficiency, the applicability, and the possibility for further improvements.
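
    One of the O(N^3) steps mentioned, Gram-Schmidt orthonormalization of the trial orbitals, is easy to state compactly. The NumPy sketch below is a serial, dense illustration of the (modified) Gram-Schmidt procedure only; it says nothing about the parallel, real-space implementation described in the paper.

        import numpy as np

        def gram_schmidt(V):
            # Orthonormalize the columns of V (each column a discretized orbital).
            Q = np.array(V, dtype=float, copy=True)
            for j in range(Q.shape[1]):
                for k in range(j):                       # remove components along earlier orbitals
                    Q[:, j] -= (Q[:, k] @ Q[:, j]) * Q[:, k]
                Q[:, j] /= np.linalg.norm(Q[:, j])
            return Q

        rng = np.random.default_rng(2)
        V = rng.standard_normal((200, 8))                # 8 random trial vectors on a 200-point grid
        Q = gram_schmidt(V)
        print(np.allclose(Q.T @ Q, np.eye(8)))           # True: columns are orthonormal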

  20. Research on high performance mirrors for free electron lasers

    International Nuclear Information System (INIS)

    Kitatani, Fumito

    1996-01-01

    For the stable operation of a free electron laser, high performance optical elements are required because of its characteristics. In particular, in short wavelength free electron lasers, whose gain is low, optical elements with very high reflectivity are required. Also, since high energy noise light exists in a free electron laser, the optical elements must have high optical breaking strength. At present, research on improving the performance of dielectric multi-layer film elements for short wavelengths is being carried out at the Power Reactor and Nuclear Fuel Development Corporation. To manufacture such high performance elements, it is necessary to develop new materials for vapor deposition, new vapor deposition processes, and techniques for accurate substrate polishing and inspection. Diamond-like carbon (DLC) film is a material that satisfies these requirements, and its properties are explained. Regarding the manufacture of DLC films for short wavelength optics, the test equipment for forming the DLC films, the film formation tests, the change of film quality with gas conditions, discharge conditions and substrate materials, and the measurement of the optical breaking strength are reported. (K.I.)

  1. RISC Processors and High Performance Computing

    Science.gov (United States)

    Bailey, David H.; Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    This tutorial will discuss the top five RISC microprocessors and the parallel systems in which they are used. It will provide a unique cross-machine comparison not available elsewhere. The effective performance of these processors will be compared by citing standard benchmarks in the context of real applications. The latest NAS Parallel Benchmarks, both absolute performance and performance per dollar, will be listed. The next generation of the NPB will be described. The tutorial will conclude with a discussion of future directions in the field. Technology Transfer Considerations: All of these computer systems are commercially available internationally. Information about these processors is available in the public domain, mostly from the vendors themselves. The NAS Parallel Benchmarks and their results have been previously approved numerous times for public release, beginning back in 1991.

  2. Architecture and VHDL behavioural validation of a parallel processor dedicated to computer vision

    International Nuclear Information System (INIS)

    Collette, Thierry

    1992-01-01

    Speeding up image processing is mainly achieved using parallel computers; SIMD processors (single instruction stream, multiple data stream) have been developed and have proven highly efficient for low-level image processing operations. Nevertheless, their performance drops for most intermediate or high level operations, mainly when random data reorganisations in processor memories are involved. The aim of this thesis was to extend the SIMD computer capabilities to allow it to perform more efficiently at the intermediate level of image processing. The study of some representative algorithms of this class points out the limits of this computer. Nevertheless, these limits can be removed by architectural modifications. This leads us to propose SYMPATIX, a new SIMD parallel computer. To validate its new concept, a behavioural model written in VHDL - Hardware Description Language - has been elaborated. With this model, the performance of the new computer has been estimated by running image processing algorithm simulations. The VHDL modeling approach allows the top-down electronic design of the system to be performed, giving an easy coupling between system architectural modifications and their electronic cost. The obtained results show SYMPATIX to be an efficient computer for low and intermediate level image processing. It can be connected to a high level computer, opening up the development of new computer vision applications. This thesis also presents a top-down design method, based on VHDL, intended for electronic system architects. (author) [fr

  3. High-performance computing in accelerating structure design and analysis

    International Nuclear Information System (INIS)

    Li Zenghai; Folwell, Nathan; Ge Lixin; Guetz, Adam; Ivanov, Valentin; Kowalski, Marc; Lee, Lie-Quan; Ng, Cho-Kuen; Schussman, Greg; Stingelin, Lukas; Uplenchwar, Ravindra; Wolf, Michael; Xiao, Liling; Ko, Kwok

    2006-01-01

    Future high-energy accelerators such as the Next Linear Collider (NLC) will accelerate multi-bunch beams of high current and low emittance to obtain high luminosity, which put stringent requirements on the accelerating structures for efficiency and beam stability. While numerical modeling has been quite standard in accelerator R and D, designing the NLC accelerating structure required a new simulation capability because of the geometric complexity and level of accuracy involved. Under the US DOE Advanced Computing initiatives (first the Grand Challenge and now SciDAC), SLAC has developed a suite of electromagnetic codes based on unstructured grids and utilizing high-performance computing to provide an advanced tool for modeling structures at accuracies and scales previously not possible. This paper will discuss the code development and computational science research (e.g. domain decomposition, scalable eigensolvers, adaptive mesh refinement) that have enabled the large-scale simulations needed for meeting the computational challenges posed by the NLC as well as projects such as the PEP-II and RIA. Numerical results will be presented to show how high-performance computing has made a qualitative improvement in accelerator structure modeling for these accelerators, either at the component level (single cell optimization), or on the scale of an entire structure (beam heating and long-range wakefields)

  4. Plant Layout Analysis by Computer Simulation for Electronic Manufacturing Service Plant

    OpenAIRE

    Visuwan D.; Phruksaphanrat B

    2014-01-01

    In this research, computer simulation is used for Electronic Manufacturing Service (EMS) plant layout analysis. The current layout of this manufacturing plant is a process layout, which is not suitable given the high-volume, high-variety nature of an EMS environment. Moreover, quick response and high flexibility are also needed. Therefore, a cellular manufacturing layout was designed for the selected group of products. Systematic layout planning (SLP) was used to analyze and de...

  5. Clinical application of electron beam computed tomography in diagnosis of truncus arteriosus

    International Nuclear Information System (INIS)

    Zhang Gejun; Dai Ruping; Cao Cheng; Qi Xiaoou; Bai Hua; Ma Zhanhong; Chen Yao; Mu Feng; Ren Li

    2005-01-01

    Objective: To evaluate the value of electron beam computed tomography (EBCT) in the diagnosis of truncus arteriosus (TA). Methods: Ten cases of TA, with ages ranging from 2 months to 24 years, were studied. All cases were examined and diagnosed with an Imatron C-150 scanner using contrast media. The results of EBCT were analyzed and compared with the results of echocardiography (in 10 cases), cardiovascular angiography (in 3 cases) and surgical findings (in 1 case). Results: EBCT yielded qualitative diagnosis and classification in all 10 cases. Echocardiography yielded qualitative diagnosis in 9 cases; however, its classification agreed with EBCT in only 5 cases. More concomitant abnormalities of TA were found with EBCT than with echocardiography. Cardiovascular angiography was performed in 3 cases, yielding inaccurate classification in 2 cases. One case of TA was operated on based solely on the results of echocardiography, EBCT and catheterization. Conclusion: As a noninvasive method, EBCT can yield qualitative diagnosis of TA as well as classification. The results of EBCT examination combined with echocardiography and catheterization can guide the operations. (authors)

  6. Electron capture detector based on a non-radioactive electron source: operating parameters vs. analytical performance

    Directory of Open Access Journals (Sweden)

    E. Bunert

    2017-12-01

    Full Text Available Gas chromatographs with electron capture detectors are widely used for the analysis of electron-affine substances such as pesticides or chlorofluorocarbons. With detection limits in the low pptv range, electron capture detectors are the most sensitive detectors available for such compounds. Based on their operating principle, they require free electrons at atmospheric pressure, which are usually generated by a β− decay. However, the use of radioactive materials leads to regulatory restrictions regarding purchase, operation, and disposal. Here, we present a novel electron capture detector based on a non-radioactive electron source that shows similar detection limits compared to radioactive detectors but that is not subject to these limitations and offers further advantages such as adjustable electron densities and energies. In this work we show first experimental results using 1,1,2-trichloroethane and sevoflurane, and investigate the effect of several operating parameters on the analytical performance of this new non-radioactive electron capture detector (ECD).

  7. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2015-01-01

    A continuation of Contemporary High Performance Computing: From Petascale toward Exascale, this second volume continues the discussion of HPC flagship systems, major application workloads, facilities, and sponsors. The book includes figures and pictures that capture the state of existing systems: pictures of buildings, systems in production, floorplans, and many block diagrams and charts to illustrate system design and performance.

  8. Simulation of electron beam formation and transport in a gas-filled electron-optical system with a plasma emitter

    Energy Technology Data Exchange (ETDEWEB)

    Grishkov, A. A. [Russian Academy of Sciences, Institute of High Current Electronics, Siberian Branch (Russian Federation); Kornilov, S. Yu., E-mail: kornilovsy@gmail.com; Rempe, N. G. [Tomsk State University of Control Systems and Radioelectronics (Russian Federation); Shidlovskiy, S. V. [Tomsk State University (Russian Federation); Shklyaev, V. A. [Russian Academy of Sciences, Institute of High Current Electronics, Siberian Branch (Russian Federation)

    2016-07-15

    The results of computer simulations of the electron-optical system of an electron gun with a plasma emitter are presented. The simulations are performed using the KOBRA3-INP, XOOPIC, and ANSYS codes. The results describe the electron beam formation and transport. The electron trajectories are analyzed. The mechanisms of gas influence on the energy inhomogeneity of the beam and its current in the regions of beam primary formation, acceleration, and transport are described. Recommendations for optimizing the electron-optical system with a plasma emitter are presented.

  9. Parallel computers and three-dimensional computational electromagnetics

    International Nuclear Information System (INIS)

    Madsen, N.K.

    1994-01-01

    The authors have continued to enhance their ability to use new massively parallel processing computers to solve time-domain electromagnetic problems. New vectorization techniques have improved the performance of their code DSI3D by factors of 5 to 15, depending on the computer used. New radiation boundary conditions and far-field transformations now allow the computation of radar cross-section values for complex objects. A new parallel-data extraction code has been developed that allows the extraction of data subsets from large problems, which have been run on parallel computers, for subsequent post-processing on workstations with enhanced graphics capabilities. A new charged-particle-pushing version of DSI3D is under development. Finally, DSI3D has become a focal point for several new Cooperative Research and Development Agreement activities with industrial companies such as Lockheed Advanced Development Company, Varian, Hughes Electron Dynamics Division, General Atomic, and Cray

  10. Revisioning Theoretical Framework of Electronic Performance Support Systems (EPSS) within the Software Application Examples

    Directory of Open Access Journals (Sweden)

    Dr. Servet BAYRAM,

    2004-04-01

    Full Text Available Revisioning Theoretical Framework of Electronic Performance Support Systems (EPSS) within the Software Application Examples. Assoc. Prof. Dr. Servet BAYRAM, Computer Education & Instructional Technologies, Marmara University, TURKEY. ABSTRACT: EPSS provides electronic support to learners in achieving a performance objective; a feature which makes it universally and consistently available on demand any time, any place, regardless of situation, without unnecessary intermediaries involved in the process. The aim of this review is to develop a set of theoretical constructs that provide descriptive power for the explanation of EPSS and its roots and features within the software application examples (i.e., Microsoft SharePoint Server v2.0 Beta 2, IBM Lotus Notes 6 & Domino 6, Oracle 9i Collaboration Suite, and Mac OS X v10.2). From the educational and training point of view, the paper visualizes a pentagon model for the interrelated domains of the theoretical framework of EPSS. These domains are: learning theories, information processing theories, developmental theories, instructional theories, and acceptance theories. This descriptive framework explains which outcomes occur under given theoretical conditions for a given EPSS model within the software examples. It summarizes some of the theoretical concepts supporting the EPSS-related features and explains how such concepts share features with the example software programs in education and job training.

  11. Computing the Density Matrix in Electronic Structure Theory on Graphics Processing Units.

    Science.gov (United States)

    Cawkwell, M J; Sanville, E J; Mniszewski, S M; Niklasson, Anders M N

    2012-11-13

    The self-consistent solution of a Schrödinger-like equation for the density matrix is a critical and computationally demanding step in quantum-based models of interatomic bonding. This step was tackled historically via the diagonalization of the Hamiltonian. We have investigated the performance and accuracy of the second-order spectral projection (SP2) algorithm for the computation of the density matrix via a recursive expansion of the Fermi operator in a series of generalized matrix-matrix multiplications. We demonstrate that owing to its simplicity, the SP2 algorithm [Niklasson, A. M. N. Phys. Rev. B 2002, 66, 155115] is exceptionally well suited to implementation on graphics processing units (GPUs). The performance in double and single precision arithmetic of hybrid GPU/central processing unit (CPU) and full GPU implementations of the SP2 algorithm exceeds that of a CPU-only implementation of the SP2 algorithm and traditional matrix diagonalization when the dimensions of the matrices exceed about 2000 × 2000. Padding schemes for arrays allocated in the GPU memory that optimize the performance of the CUBLAS implementations of the level 3 BLAS DGEMM and SGEMM subroutines for generalized matrix-matrix multiplications are described in detail. The analysis of the relative performance of the hybrid CPU/GPU and full GPU implementations indicates that the transfer of arrays between the GPU and CPU constitutes only a small fraction of the total computation time. The errors measured in the self-consistent density matrices computed using the SP2 algorithm are generally smaller than those measured in matrices computed via diagonalization. Furthermore, the errors in the density matrices computed using the SP2 algorithm do not exhibit any dependence on system size, whereas the errors increase linearly with the number of orbitals when diagonalization is employed.
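
    The recursion at the heart of the paper can be written down compactly. The NumPy sketch below is a dense, CPU-only illustration of the SP2 iteration, assuming a symmetric Hamiltonian and a known number of occupied orbitals; it is not the GPU/CUBLAS implementation discussed in the abstract.

        import numpy as np

        def sp2_density_matrix(H, nocc, tol=1e-10, max_iter=100):
            # Second-order spectral projection (SP2): expand the Fermi operator at zero
            # temperature as a sequence of matrix-matrix multiplications.
            eps = np.linalg.eigvalsh(H)                  # spectral bounds (estimates suffice in practice)
            emin, emax = eps[0], eps[-1]
            X = (emax * np.eye(H.shape[0]) - H) / (emax - emin)   # map spectrum into [0, 1]
            for _ in range(max_iter):
                X2 = X @ X
                tr_X, tr_X2 = np.trace(X), np.trace(X2)
                if abs(tr_X - nocc) < tol and abs(tr_X - tr_X2) < tol:
                    break                                # correct trace and idempotent: converged
                # Choose the branch that moves the occupation toward nocc.
                X = X2 if abs(tr_X2 - nocc) <= abs(2.0 * tr_X - tr_X2 - nocc) else 2.0 * X - X2
            return X

        rng = np.random.default_rng(3)
        A = rng.standard_normal((50, 50))
        H = 0.5 * (A + A.T)                              # a random symmetric test Hamiltonian
        P = sp2_density_matrix(H, nocc=20)
        print(np.trace(P), np.allclose(P @ P, P, atol=1e-6))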

  12. Brain inspired high performance electronics on flexible silicon

    KAUST Repository

    Sevilla, Galo T.; Rojas, Jhonathan Prieto; Hussain, Muhammad Mustafa

    2014-01-01

    Brain's stunning speed, energy efficiency and massive parallelism makes it the role model for upcoming high performance computation systems. Although human brain components are a million times slower than state of the art silicon industry components

  13. 5th International Conference on High Performance Scientific Computing

    CERN Document Server

    Hoang, Xuan; Rannacher, Rolf; Schlöder, Johannes

    2014-01-01

    This proceedings volume gathers a selection of papers presented at the Fifth International Conference on High Performance Scientific Computing, which took place in Hanoi on March 5-9, 2012. The conference was organized by the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) of Heidelberg University, Ho Chi Minh City University of Technology, and the Vietnam Institute for Advanced Study in Mathematics. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and practical applications. Subjects covered include mathematical modeling; numerical simulation; methods for optimization and control; parallel computing; software development; and applications of scientific computing in physics, mechanics and biomechanics, material science, hydrology, chemistry, biology, biotechnology, medicine, sports, psychology, transport, logistics, com...

  14. 3rd International Conference on High Performance Scientific Computing

    CERN Document Server

    Kostina, Ekaterina; Phu, Hoang; Rannacher, Rolf

    2008-01-01

    This proceedings volume contains a selection of papers presented at the Third International Conference on High Performance Scientific Computing held at the Hanoi Institute of Mathematics, Vietnamese Academy of Science and Technology (VAST), March 6-10, 2006. The conference has been organized by the Hanoi Institute of Mathematics, Interdisciplinary Center for Scientific Computing (IWR), Heidelberg, and its International PhD Program ``Complex Processes: Modeling, Simulation and Optimization'', and Ho Chi Minh City University of Technology. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and applications in practice. Subjects covered are mathematical modelling, numerical simulation, methods for optimization and control, parallel computing, software development, applications of scientific computing in physics, chemistry, biology and mechanics, environmental and hydrology problems, transport, logistics and site loca...

  15. 6th International Conference on High Performance Scientific Computing

    CERN Document Server

    Phu, Hoang; Rannacher, Rolf; Schlöder, Johannes

    2017-01-01

    This proceedings volume highlights a selection of papers presented at the Sixth International Conference on High Performance Scientific Computing, which took place in Hanoi, Vietnam on March 16-20, 2015. The conference was jointly organized by the Heidelberg Institute of Theoretical Studies (HITS), the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) at Heidelberg University, and the Vietnam Institute for Advanced Study in Mathematics, Ministry of Education. The contributions cover a broad, interdisciplinary spectrum of scientific computing and showcase recent advances in theory, methods, and practical applications. Subjects covered include numerical simulation, methods for optimization and control, parallel computing, and software development, as well as the applications of scientific computing in physics, mechanics, biomechanics and robotics, material science, hydrology, biotechnology, medicine, transport, scheduling, and in...

  16. Benchmarking high performance computing architectures with CMS’ skeleton framework

    Science.gov (United States)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel's Thread Building Block library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures; machines such as Cori Phase 1&2, Theta, Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  17. Computer predictions on Rh-based double perovskites with unusual electronic and magnetic properties

    Science.gov (United States)

    Halder, Anita; Nafday, Dhani; Sanyal, Prabuddha; Saha-Dasgupta, Tanusri

    2018-03-01

    In the search for new magnetic materials, we make computer predictions of the structural, electronic and magnetic properties of yet-to-be synthesized Rh-based double perovskite compounds, Sr(Ca)2BRhO6 (B=Cr, Mn, Fe). We use a combination of an evolutionary algorithm, density functional theory, and statistical-mechanical tools for this purpose. We find that the unusual valence of Rh5+ may be stabilized in these compounds through formation of an oxygen ligand hole. Interestingly, while the Cr-Rh and Mn-Rh compounds are predicted to be ferromagnetic half-metals, the Fe-Rh compounds are found to be rare examples of antiferromagnetic and metallic transition-metal oxides with three-dimensional electronic structure. The computed magnetic transition temperatures of the predicted compounds, obtained from a finite temperature Monte Carlo study of the first principles-derived model Hamiltonian, are found to be reasonably high. The prediction of favorable growth conditions for the compounds, reported in our study and obtained through extensive thermodynamic analysis, should be useful for the future synthesis of this interesting class of materials with intriguing properties.
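
    The last step mentioned, a finite-temperature Monte Carlo treatment of a first-principles-derived spin model, follows the standard Metropolis recipe. The Python sketch below uses a simple 2D Ising model purely as a stand-in for the actual model Hamiltonian and exchange couplings extracted in the study; scanning the temperature and watching the magnetization collapse gives a rough estimate of the transition temperature.

        import numpy as np

        rng = np.random.default_rng(4)

        def metropolis_ising(L=16, T=2.0, J=1.0, sweeps=500):
            # 2D Ising model on an L x L lattice with periodic boundaries (illustrative stand-in).
            spins = rng.choice([-1, 1], size=(L, L))
            for _ in range(sweeps):
                for _ in range(L * L):
                    i, j = rng.integers(L), rng.integers(L)
                    nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                          + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
                    dE = 2.0 * J * spins[i, j] * nn      # energy cost of flipping spin (i, j)
                    if dE <= 0 or rng.random() < np.exp(-dE / T):
                        spins[i, j] *= -1
            # (in practice: discard equilibration sweeps, then average the magnetization)
            return abs(spins.mean())

        for T in (1.5, 2.27, 3.0):                       # below, near and above the known 2D Ising Tc
            print(T, metropolis_ising(T=T))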

  18. The path toward HEP High Performance Computing

    International Nuclear Information System (INIS)

    Apostolakis, John; Brun, René; Gheata, Andrei; Wenzel, Sandro; Carminati, Federico

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of the peak one. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in terms of memory consumption compared to the single threaded version, together with sub-optimal handling of event processing tails. Besides this, the low level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine grain parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit

  19. XVI International symposium on nuclear electronics and VI International school on automation and computing in nuclear physics and astrophysics

    International Nuclear Information System (INIS)

    Churin, I.N.

    1995-01-01

    Reports and papers of the 16th International Symposium on nuclear electronics and the 6th International school on automation and computing in nuclear physics and astrophysics are presented. The latest achievements in the field of development of fast-response electronic circuits designed for detection and spectrometric facilities are studied. Particular attention is paid to the systems for acquisition, processing and storage of experimental data. The modern equipment designed for data communication in computer networks is studied

  20. EDF: Computing electron number probability distribution functions in real space from molecular wave functions

    Science.gov (United States)

    Francisco, E.; Pendás, A. Martín; Blanco, M. A.

    2008-04-01

    Given an N-electron molecule and an exhaustive partition of the real space (R) into m arbitrary regions Ω1, Ω2, …, Ωm (⋃i=1…m Ωi = R), the edf program computes all the probabilities P(n1, n2, …, nm) of having exactly n1 electrons in Ω1, n2 electrons in Ω2, …, and nm electrons (n1+n2+⋯+nm = N) in Ωm. Each Ωi may correspond to a single basin (atomic domain) or several such basins (functional group). In the latter case, each atomic domain must belong to a single Ωi. The program can manage both single- and multi-determinant wave functions which are read in from an aimpac-like wave function description (.wfn) file (T.A. Keith et al., The AIMPAC95 programs, http://www.chemistry.mcmaster.ca/aimpac, 1995). For multi-determinantal wave functions a generalization of the original .wfn file has been introduced. The new format is completely backwards compatible, adding to the previous structure a description of the configuration interaction (CI) coefficients and the determinants of correlated wave functions. Besides the .wfn file, edf only needs the overlap integrals over all the atomic domains between the molecular orbitals (MO). After the P(n1, n2, …, nm) probabilities are computed, edf obtains from them several magnitudes relevant to chemical bonding theory, such as average electronic populations and localization/delocalization indices. Regarding spin, edf may be used in two ways: with or without a splitting of the P(n1, n2, …, nm) probabilities into α and β spin components. Program summary: Program title: edf. Catalogue identifier: AEAJ_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEAJ_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 5387. No. of bytes in distributed program, including test data, etc.: 52 381. Distribution format: tar.gz. Programming language: Fortran 77. Computer
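
    For the special case of a single-determinant wave function and a single region (with the rest of space as its complement), the electron-count distribution reduces to that of independent Bernoulli trials whose probabilities are the eigenvalues of the orbital overlap matrix integrated over the region. The NumPy sketch below illustrates only this simplified case, one spin channel at a time, with a hypothetical diagonal overlap matrix; it is not the edf program and does not cover the multi-determinant machinery described above.

        import numpy as np

        def domain_probabilities(S_omega):
            # S_omega: overlap matrix of the occupied spin-orbitals integrated over the region.
            # Returns p[n] = probability of finding exactly n electrons of that spin in the region.
            lam = np.linalg.eigvalsh(S_omega)            # eigenvalues act as Bernoulli probabilities
            p = np.array([1.0])
            for l in lam:                                # multiply the generating polynomials (1 - l) + l*x
                p = np.convolve(p, [1.0 - l, l])
            return p

        # Hypothetical 3-orbital example: one nearly localized orbital and two shared ones.
        S = np.diag([0.95, 0.5, 0.2])
        print(domain_probabilities(S))                   # p[0], p[1], p[2], p[3]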

  1. High Performance Computing Operations Review Report

    Energy Technology Data Exchange (ETDEWEB)

    Cupps, Kimberly C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-19

    The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.

  2. Cloud Computing for Maintenance Performance Improvement

    OpenAIRE

    Kour, Ravdeep; Karim, Ramin; Parida, Aditya

    2013-01-01

    Cloud Computing is an emerging research area. It can be utilised to establish effective and efficient information logistics. This paper uses cloud-based technology for the establishment of information logistics for a railway system, which requires information based on data from different data sources (e.g. railway maintenance, railway operation, and railway business data). In order to improve the performance of the maintenance process, relevant data from various sources need to be acquired, f...

  3. High performance computing and communications: FY 1997 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-12-01

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  4. LabVIEW Serial Driver Software for an Electronic Load

    Science.gov (United States)

    Scullin, Vincent; Garcia, Christopher

    2003-01-01

    A LabVIEW-language computer program enables monitoring and control of a Transistor Devices, Inc., Dynaload WCL232 (or equivalent) electronic load via an RS-232 serial communication link between the electronic load and a remote personal computer. (The electronic load can operate at constant voltage, current, power consumption, or resistance.) The program generates a graphical user interface (GUI) at the computer that looks and acts like the front panel of the electronic load. Once the electronic load has been placed in remote-control mode, this program first queries the electronic load for the present values of all its operational and limit settings, and then drops into a cycle in which it reports the instantaneous voltage, current, and power values in displays that resemble those on the electronic load while monitoring the GUI images of pushbuttons for control actions by the user. By means of the pushbutton images and associated prompts, the user can perform such operations as changing limit values, the operating mode, or the set point. The benefit of this software is that it relieves the user of the need to learn one method for operating the electronic load locally and another method for operating it remotely via a personal computer.
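
    The same remote-control pattern (query the settings once, then poll the readings in a loop) can be sketched outside LabVIEW, for example with pyserial in Python. The port name and the command strings below are hypothetical placeholders; the actual Dynaload WCL232 command set must be taken from the instrument manual.

        import serial  # pyserial

        def query(port, command):
            # Send an ASCII command and read one reply line (assumed line-oriented protocol).
            port.write((command + "\r\n").encode("ascii"))
            return port.readline().decode("ascii").strip()

        with serial.Serial("COM1", baudrate=9600, timeout=1.0) as load:
            # 1) read back the present operating and limit settings once
            settings = {name: query(load, "GET " + name) for name in ("MODE", "SETPOINT", "LIMIT")}
            print(settings)
            # 2) then poll the instantaneous readings, as the GUI displays do
            for _ in range(10):
                volts, amps, watts = (query(load, "READ " + q) for q in ("V", "I", "P"))
                print(volts, amps, watts)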

  5. Simple, parallel, high-performance virtual machines for extreme computations

    International Nuclear Information System (INIS)

    Chokoufe Nejad, Bijan; Ohl, Thorsten; Reuter, Jurgen

    2014-11-01

    We introduce a high-performance virtual machine (VM) written in a numerically fast language like Fortran or C to evaluate very large expressions. We discuss the general concept of how to perform computations in terms of a VM and present specifically a VM that is able to compute tree-level cross sections for any number of external legs, given the corresponding byte code from the optimal matrix element generator, O'Mega. Furthermore, this approach allows the parallel computation of a single phase space point to be formulated in a simple and obvious way. We analyze the scaling behaviour with multiple threads as well as the benefits and drawbacks that are introduced with this method. Our implementation of a VM can run faster than the corresponding native, compiled code for certain processes and compilers, especially for very high multiplicities, and in general has runtimes of the same order of magnitude. By avoiding the tedious compile and link steps, which may fail for source code files of gigabyte sizes, new processes or complex higher order corrections that are currently out of reach could be evaluated with a VM given enough computing power.
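
    The underlying idea, evaluating a large expression by interpreting byte code on a stack machine rather than compiling and linking generated source, is easy to demonstrate in miniature. The Python sketch below uses an invented instruction set and has nothing to do with the actual O'Mega byte code format; it only shows the dispatch-loop structure.

        def run(bytecode, inputs):
            # Minimal stack-based virtual machine for arithmetic expressions.
            stack = []
            for op, arg in bytecode:
                if op == "PUSH_CONST":
                    stack.append(arg)
                elif op == "PUSH_INPUT":
                    stack.append(inputs[arg])
                elif op == "ADD":
                    b, a = stack.pop(), stack.pop()
                    stack.append(a + b)
                elif op == "MUL":
                    b, a = stack.pop(), stack.pop()
                    stack.append(a * b)
                else:
                    raise ValueError("unknown opcode " + op)
            return stack.pop()

        # Evaluate 2*x + y*y for x = 3, y = 4 (expected result: 22).
        program = [("PUSH_CONST", 2.0), ("PUSH_INPUT", "x"), ("MUL", None),
                   ("PUSH_INPUT", "y"), ("PUSH_INPUT", "y"), ("MUL", None), ("ADD", None)]
        print(run(program, {"x": 3.0, "y": 4.0}))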

  6. High performance computing in science and engineering '09: transactions of the High Performance Computing Center, Stuttgart (HLRS) 2009

    National Research Council Canada - National Science Library

    Nagel, Wolfgang E; Kröner, Dietmar; Resch, Michael

    2010-01-01

    ...), NIC/JSC (Jülich), and LRZ (Munich). As part of that strategic initiative, in May 2009 NIC/JSC already installed the first phase of the GCS HPC Tier-0 resources, an IBM Blue Gene/P with roughly 300,000 cores, this time in Jülich. With that, the GCS provides the most powerful high-performance computing infrastructure in Europe alread...

  7. Performance comparison between Java and JNI for optimal implementation of computational micro-kernels

    OpenAIRE

    Halli , Nassim; Charles , Henri-Pierre; Méhaut , Jean-François

    2015-01-01

    International audience; General purpose CPUs used in high performance computing (HPC) support a vector instruction set and an out-of-order engine dedicated to increasing the instruction-level parallelism. Hence, related optimizations are currently critical to improving the performance of applications requiring numerical computation. Moreover, the use of a Java run-time environment such as the HotSpot Java Virtual Machine (JVM) in high performance computing is a promising alternative. It benefits ...

  8. Computational Performance Analysis of Nonlinear Dynamic Systems using Semi-infinite Programming

    Directory of Open Access Journals (Sweden)

    Tor A. Johansen

    2001-01-01

    Full Text Available For nonlinear systems that satisfy certain regularity conditions it is shown that upper and lower bounds on the performance (cost) function can be computed using linear or quadratic programming. The performance conditions derived from Hamilton-Jacobi inequalities are formulated as linear inequalities, defined pointwise by discretizing the state-space, when assuming a linearly parameterized class of functions representing the candidate performance bounds. Uncertainty with respect to some system parameters can be incorporated by also gridding the parameter set. In addition to performance analysis, the method can also be used to compute Lyapunov functions that guarantee uniform exponential stability.
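
    The construction described above, a linearly parameterized candidate bound turned into one linear inequality per grid point, can be reproduced on a toy problem with an off-the-shelf LP solver. The sketch below (Python with scipy.optimize.linprog) uses the scalar system xdot = -x with running cost x^2, whose true cost from x0 is x0^2/2; the candidate bound, grid and basis functions are illustrative choices, not taken from the paper.

        import numpy as np
        from scipy.optimize import linprog

        # Candidate upper bound V(x) = th1*x^2 + th2*x^4.  The Hamilton-Jacobi-type condition
        # dV/dx * f(x) + l(x) <= 0 with f(x) = -x and l(x) = x^2 becomes, at each grid point x_k,
        # the linear inequality -2*x_k^2*th1 - 4*x_k^4*th2 <= -x_k^2 in the parameters (th1, th2).
        x_grid = np.linspace(0.1, 2.0, 40)     # discretized state space (illustrative)
        x0 = 1.0

        A_ub = np.column_stack([-2.0 * x_grid**2, -4.0 * x_grid**4])
        b_ub = -x_grid**2
        c = np.array([x0**2, x0**4])           # minimize the bound V(x0)

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
        print(res.x, res.fun)                  # roughly th1 = 0.5, th2 = 0, bound = 0.5 = true cost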

  9. A High Performance VLSI Computer Architecture For Computer Graphics

    Science.gov (United States)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

    A VLSI computer architecture, consisting of multiple processors, is presented in this paper to satisfy the demands of modern computer graphics, e.g. high resolution, realistic animation, real-time display, etc. All processors share a global memory which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e. object domain and space domain, to fully utilize the data-independency characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With the current high density interconnection (MI) technology, it is feasible to implement a 64-processor system to achieve 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  10. FY 1992 Blue Book: Grand Challenges: High Performance Computing and Communications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  11. Performance Improvement and Feature Enhancement of WriteOn

    OpenAIRE

    Chandrasekar, Samantha

    2008-01-01

    A Tablet PC is a portable computing device which combines a regular notebook computer with a digitizing screen that interacts with a complementary electronic pen stylus. The pen allows the user to input data by writing on or by tapping the screen. As with a regular notebook computer, the user can also perform tasks using the mouse and keyboard. A Tablet PC gives the users all the features of a regular notebook computer along with the support to recognize, process, and store electronic/digital in...

  12. A Perspective on Computational Human Performance Models as Design Tools

    Science.gov (United States)

    Jones, Patricia M.

    2010-01-01

    The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.

  13. Computational simulation of electron and ion beams interaction with solid high-molecular dielectrics and inorganic glasses

    International Nuclear Information System (INIS)

    Milyavskiy, V.V.

    1998-01-01

    Numerical investigation of the interaction of electron beams (with energies in the range 100 keV--20 MeV) and ion beams (with energies in the range 1 keV--50 MeV) with solid high-molecular dielectrics and inorganic glasses is performed. The interaction of electron beams with glass optical covers is especially interesting in connection with the radiation protection of solar power elements on satellites and space stations. For computational simulation of these processes, a mathematical model was developed that describes the propagation of particle beams through the sample thickness, the accumulation and relaxation of volume charge, shock-wave processes, and the evolution of the electric field in the sample. The energy deposition by the electron beam in a target in the presence of a nonuniform electric field was calculated with a semiempirical procedure previously proposed by the author. Propagation of low-energy ions through the sample thickness was simulated using the Pearson IV distribution; damage, ionization, and range distributions were taken into account. Propagation of high-energy ions was calculated in the continuous-slowing-down approximation. Hydrodynamic processes were described with the equations of continuum mechanics in the elastic-plastic approximation together with a wide-range equation of state.

  14. High performance computing system in the framework of the Higgs boson studies

    CERN Document Server

    Belyaev, Nikita; The ATLAS collaboration; Velikhov, Vasily; Konoplich, Rostislav

    2017-01-01

    Higgs boson physics is one of the most important and promising fields of study in modern high energy physics. It is important to note that GRID computing resources are becoming severely limited due to the increasing amount of statistics required for physics analyses and the unprecedented LHC performance. One possibility for addressing the shortfall of computing resources is the use of institute computer clusters, commercial computing resources and supercomputers. To perform precision measurements of the Higgs boson properties under these conditions, effective tools for simulating kinematic distributions of signal events are also required. In this talk we give a brief description of the modern distribution reconstruction method called Morphing and perform a few efficiency tests to demonstrate its potential. These studies have been performed on the WLCG and the Kurchatov Institute’s Data Processing Center, including its Tier-1 GRID site and supercomputer. We also analyze the CPU efficienc...

  15. Recent development in computational actinide chemistry

    International Nuclear Information System (INIS)

    Li Jun

    2008-01-01

    Ever since the Manhattan project in World War II, actinide chemistry has been essential for nuclear science and technology. Yet scientists still seek the ability to interpret and predict chemical and physical properties of actinide compounds and materials using first-principle theory and computational modeling. Actinide compounds are challenging to computational chemistry because of their complicated electron correlation effects and relativistic effects, including spin-orbit coupling effects. There have been significant developments in theoretical studies on actinide compounds in the past several years. The theoretical capabilities coupled with new experimental characterization techniques now offer a powerful combination for unraveling the complexities of actinide chemistry. In this talk, we will provide an overview of our own research in this field, with particular emphasis on applications of relativistic density functional and ab initio quantum chemical methods to the geometries, electronic structures, spectroscopy and excited-state properties of small actinide molecules such as CUO and UO2 and some large actinide compounds relevant to separation and environment science. The performance of various density functional approaches and wavefunction theory-based electron correlation methods will be compared. The results of computational modeling on the vibrational, electronic, and NMR spectra of actinide compounds will be briefly discussed as well [1-4]. We will show that progress in relativistic quantum chemistry, computer hardware and computational chemistry software has enabled computational actinide chemistry to emerge as a powerful and predictive tool for research in actinide chemistry. (authors)

  16. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL

    2011-08-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaboration with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are detailed in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) will delineate the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and

  17. Component-based software for high-performance scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  18. Component-based software for high-performance scientific computing

    International Nuclear Information System (INIS)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly

  19. Self-amplified spontaneous emission free electron laser devices and nonideal electron beam transport

    Directory of Open Access Journals (Sweden)

    L. L. Lazzarino

    2014-11-01

    Full Text Available We have developed, at the SPARC test facility, a procedure for real-time performance control of a self-amplified spontaneous emission free electron laser (FEL) device. We describe an actual FEL, including electron and optical beam transport, through a set of analytical formulas, allowing a fast and reliable on-line “simulation” of the experiment. The system is designed in such a way that the characteristics of the transport elements and the laser intensity are measured and adjusted, via a real time computation, during the experimental run, to obtain on-line feedback on the laser performance. The details of the procedure and the relevant experimental results are discussed.

  20. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    International Nuclear Information System (INIS)

    Khaleel, Mohammad A.

    2009-01-01

    This report is an account of the deliberations and conclusions of the workshop on 'Forefront Questions in Nuclear Science and the Role of High Performance Computing' held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to (1) identify forefront scientific challenges in nuclear physics and then determine which-if any-of these could be aided by high performance computing at the extreme scale; (2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; (3) provide nuclear physicists the opportunity to influence the development of high performance computing; and (4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  1. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.

    2009-10-01

    This report is an account of the deliberations and conclusions of the workshop on "Forefront Questions in Nuclear Science and the Role of High Performance Computing" held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to 1) identify forefront scientific challenges in nuclear physics and then determine which-if any-of these could be aided by high performance computing at the extreme scale; 2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; 3) provide nuclear physicists the opportunity to influence the development of high performance computing; and 4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  2. Equation-of-motion O(N) electronic structure studies of very large systems (N ∼ 10^7)

    International Nuclear Information System (INIS)

    Michalewicz, M.T.

    1999-01-01

    Extremely fast parallel implementation of the equation-of-motion method for electronic structure computations is presented. The method can be applied to non-periodic, disordered nanocrystalline samples, transition metal oxides and other systems. The equation-of-motion method exhibits linear scaling, O(N), runs with a speed of up to 43 GFLOPS on a NEC SX-4 vector-parallel supercomputer with 32 processors and computes electronic densities of states (DOS) for multi-million atom samples in mere minutes. The largest test computation performed was for the electronic DOS for a TiO2 sample consisting of 7,623,000 atoms. Mathematically, this is equivalent to obtaining the spectrum of an n × n Hermitian operator (Hamiltonian) where n = 38,115,000. We briefly discuss the practical implications of being able to perform electronic structure computations of this great speed and scale. Copyright (1999) CSIRO Australia
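
    The spectral idea behind such equation-of-motion DOS calculations can be demonstrated on a toy scale: propagate a random vector under i dψ/dt = Hψ and Fourier-transform the autocorrelation ⟨ψ(0)|ψ(t)⟩. The sketch below does this for a small tight-binding chain with SciPy; the Hamiltonian, time step and windowing are illustrative assumptions, and the production code uses an O(N) propagation scheme on a vector-parallel machine rather than expm_multiply.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import expm_multiply

# Toy Hermitian Hamiltonian: 1D tight-binding chain (a stand-in for the multi-million atom sample).
n = 200
H = sp.diags([np.ones(n - 1), np.ones(n - 1)], [-1, 1], format="csc")

rng = np.random.default_rng(0)
psi0 = rng.normal(size=n) + 1j * rng.normal(size=n)
psi0 /= np.linalg.norm(psi0)

# Equation-of-motion step: psi(t+dt) = exp(-i H dt) psi(t); record C(t) = <psi(0)|psi(t)>.
dt, nsteps = 0.05, 800
corr = np.empty(nsteps, dtype=complex)
psi = psi0.copy()
for k in range(nsteps):
    corr[k] = np.vdot(psi0, psi)
    psi = expm_multiply(-1j * dt * H, psi)

# Unnormalized single-vector DOS estimate: Fourier transform of the damped autocorrelation.
t = dt * np.arange(nsteps)
window = np.exp(-(t / (0.4 * t[-1])) ** 2)          # suppress the finite-time cutoff
energies = np.linspace(-3.0, 3.0, 241)
dos = np.array([np.real(np.sum(corr * window * np.exp(1j * e * t))) * dt for e in energies])
```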

  3. The new landscape of parallel computer architecture

    International Nuclear Information System (INIS)

    Shalf, John

    2007-01-01

    The past few years has seen a sea change in computer architecture that will impact every facet of our society as every electronic device from cell phone to supercomputer will need to confront parallelism of unprecedented scale. Whereas the conventional multicore approach (2, 4, and even 8 cores) adopted by the computing industry will eventually hit a performance plateau, the highest performance per watt and per chip area is achieved using manycore technology (hundreds or even thousands of cores). However, fully unleashing the potential of the manycore approach to ensure future advances in sustained computational performance will require fundamental advances in computer architecture and programming models that are nothing short of reinventing computing. In this paper we examine the reasons behind the movement to exponentially increasing parallelism, and its ramifications for system design, applications and programming models

  4. The new landscape of parallel computer architecture

    Energy Technology Data Exchange (ETDEWEB)

    Shalf, John [NERSC Division, Lawrence Berkeley National Laboratory 1 Cyclotron Road, Berkeley California, 94720 (United States)

    2007-07-15

    The past few years has seen a sea change in computer architecture that will impact every facet of our society as every electronic device from cell phone to supercomputer will need to confront parallelism of unprecedented scale. Whereas the conventional multicore approach (2, 4, and even 8 cores) adopted by the computing industry will eventually hit a performance plateau, the highest performance per watt and per chip area is achieved using manycore technology (hundreds or even thousands of cores). However, fully unleashing the potential of the manycore approach to ensure future advances in sustained computational performance will require fundamental advances in computer architecture and programming models that are nothing short of reinventing computing. In this paper we examine the reasons behind the movement to exponentially increasing parallelism, and its ramifications for system design, applications and programming models.

  5. 78 FR 63492 - Certain Electronic Devices, Including Mobile Phones and Tablet Computers, and Components Thereof...

    Science.gov (United States)

    2013-10-24

    ... INTERNATIONAL TRADE COMMISSION [Investigation No. 337-TA-847] Certain Electronic Devices, Including Mobile Phones and Tablet Computers, and Components Thereof; Notice of Request for Statements on the Public Interest AGENCY: U.S. International Trade Commission. ACTION: Notice. SUMMARY: Notice is...

  6. PET/CT Biograph™ Sensation 16. Performance improvement using faster electronics

    International Nuclear Information System (INIS)

    Martinez, M.J.; Schwaiger, M.; Ziegler, S.I.; Bercier, Y.

    2006-01-01

    Aim: the new PET/CT Biograph Sensation 16 (BS16) tomographs have faster detector electronics which allow a reduced timing coincidence window and an increased lower energy threshold (from 350 to 400 keV). This paper evaluates the performance of the BS16 PET scanner before and after the Pico-3D electronics upgrade. Methods: four NEMA NU 2-2001 protocols, (i) spatial resolution, (ii) scatter fraction, count losses and randoms measurement, (iii) sensitivity, and (iv) image quality, have been performed. Results: a considerable change in both PET count-rate performance and image quality is observed after the electronics upgrade. The new scatter fraction obtained using Pico-3D electronics showed a 14% decrease compared to that obtained with the previous electronics. At the typical patient background activity (5.3 kBq/ml), the new scatter fraction was approximately 0.42. The noise equivalent count-rate (R_NEC) performance was also improved. The value at which the R_NEC curve peaked increased from 3.7 × 10^4 s^-1 at 14 kBq/ml to 6.4 × 10^4 s^-1 at 21 kBq/ml (2R-NEC rate). Likewise, the peak true count-rate value increased from 1.9 × 10^5 s^-1 at 22 kBq/ml to 3.4 × 10^5 s^-1 at 33 kBq/ml. An average increase of 45% in contrast was observed for hot spheres when using AW-OSEM (4ix8s) as the reconstruction algorithm. For cold spheres, the average increase was 12%. Conclusion: the performance of the PET scanners in the BS16 tomographs is improved by the optimization of the signal processing. The narrower energy and timing coincidence windows lead to a considerable increase of the signal-to-noise ratio. The existing combination of fast detectors and adapted electronics in the BS16 tomographs allows imaging protocols with reduced acquisition time, providing higher patient throughput. (orig.)
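
    For orientation, the noise-equivalent count rate quoted above is commonly computed from the trues (T), scatter (S) and randoms (R) rates; a minimal sketch using the 2R convention (suggested by the abstract's "2R-NEC" label) is given below. The exact formula used in the paper and the example rates are assumptions, not values from the study.

```python
def nec_rate(trues, scatter, randoms, k=2.0):
    """Noise-equivalent count rate, NEC = T^2 / (T + S + k*R); k = 2 for the '2R' convention."""
    return trues**2 / (trues + scatter + k * randoms)

# Hypothetical rates (counts per second), for illustration only.
print(nec_rate(trues=3.0e5, scatter=2.2e5, randoms=1.5e5))
```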

  7. Reducing power consumption while performing collective operations on a plurality of compute nodes

    Science.gov (United States)

    Archer, Charles J [Rochester, MN; Blocksome, Michael A [Rochester, MN; Peters, Amanda E [Rochester, MN; Ratterman, Joseph D [Rochester, MN; Smith, Brian E [Rochester, MN

    2011-10-18

    Methods, apparatus, and products are disclosed for reducing power consumption while performing collective operations on a plurality of compute nodes that include: receiving, by each compute node, instructions to perform a type of collective operation; selecting, by each compute node from a plurality of collective operations for the collective operation type, a particular collective operation in dependence upon power consumption characteristics for each of the plurality of collective operations; and executing, by each compute node, the selected collective operation.
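
    The patent-style description above amounts to a per-node lookup that picks, for a requested collective type, the implementation with the most favorable power profile. The sketch below is a loose, hypothetical illustration of that selection logic; the operation names and power figures are invented, and the real mechanism on such compute nodes is certainly more involved.

```python
# Hypothetical power-consumption characteristics (watts) per collective implementation.
POWER_PROFILE = {
    "allreduce": {"tree_allreduce": 3.2, "ring_allreduce": 2.6, "naive_allreduce": 4.1},
    "broadcast": {"tree_broadcast": 1.9, "flat_broadcast": 2.8},
}

def select_collective(op_type: str) -> str:
    """Each node picks the implementation of the requested collective with the lowest power cost."""
    candidates = POWER_PROFILE[op_type]
    return min(candidates, key=candidates.get)

print(select_collective("allreduce"))   # -> 'ring_allreduce' under the assumed profile
```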

  8. Evaluation of high-performance computing software

    Energy Technology Data Exchange (ETDEWEB)

    Browne, S.; Dongarra, J. [Univ. of Tennessee, Knoxville, TN (United States); Rowan, T. [Oak Ridge National Lab., TN (United States)

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations that are difficult to find elsewhere.

  9. An Application-Based Performance Evaluation of NASAs Nebula Cloud Computing Platform

    Science.gov (United States)

    Saini, Subhash; Heistand, Steve; Jin, Haoqiang; Chang, Johnny; Hood, Robert T.; Mehrotra, Piyush; Biswas, Rupak

    2012-01-01

    The high performance computing (HPC) community has shown tremendous interest in exploring cloud computing because of its high potential. In this paper, we examine the feasibility, performance, and scalability of production quality scientific and engineering applications of interest to NASA on NASA's cloud computing platform, called Nebula, hosted at Ames Research Center. This work presents a comprehensive evaluation of Nebula using NUTTCP, HPCC, NPB, I/O, and MPI function benchmarks as well as four applications representative of the NASA HPC workload. Specifically, we compare Nebula performance on some of these benchmarks and applications to that of NASA's Pleiades supercomputer, a traditional HPC system. We also investigate the impact of virtIO and jumbo frames on interconnect performance. Overall results indicate that on Nebula (i) virtIO and jumbo frames improve network bandwidth by a factor of 5x, (ii) there is a significant virtualization layer overhead of about 10% to 25%, (iii) write performance is lower by a factor of 25x, (iv) latency for short MPI messages is very high, and (v) overall performance is 15% to 48% lower than that on Pleiades for NASA HPC applications. We also comment on the usability of the cloud platform.

  10. Inclusive vision for high performance computing at the CSIR

    CSIR Research Space (South Africa)

    Gazendam, A

    2006-02-01

    Full Text Available and computationally intensive applications. A number of different technologies and standards were identified as core to the open and distributed high-performance infrastructure envisaged...

  11. High performance computer code for molecular dynamics simulations

    International Nuclear Information System (INIS)

    Levay, I.; Toekesi, K.

    2007-01-01

    Complete text of publication follows. Molecular Dynamics (MD) simulation is a widely used technique for modeling complicated physical phenomena. Since 2005 we have been developing an MD simulation code for PC computers. The code is written in the C++ object-oriented programming language. The aim of our work is twofold: a) to develop a fast computer code for studying the random walk of guest atoms in a Be crystal, and b) three-dimensional (3D) visualization of the particles' motion. We mimic both the motion of the guest atoms in the crystal (diffusion-type motion) and the motion of atoms in the crystal lattice (crystal deformation). Nowadays it is common to use graphics devices for computationally intensive problems. There are several ways to exploit this processing performance, and programming these devices has never been easier. The CUDA (Compute Unified Device Architecture) introduced by nVidia Corporation in 2007 is very useful for processor-hungry applications. A unified-architecture GPU includes 96-128 or more stream processors, so the raw calculation performance is 576 GFLOPS, roughly ten times faster than the fastest dual-core CPU. Our improved MD simulation software uses this technology, and the critical calculation code segment runs about 10 times faster. Although the GPU is a very powerful tool, it has a strongly parallel structure, which means we have to create an algorithm that works on many processors without deadlock. Our code currently uses 256 threads and the shared and constant on-chip memory instead of global memory, which is roughly 100 times slower. It is possible to implement the entire algorithm on the GPU, so we do not need to download and upload the data in every iteration. For maximal throughput, every thread runs the same instructions
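
    The record describes a C++/CUDA code that is not reproduced here; as a language-neutral stand-in, the sketch below shows the core of a serial MD loop (Lennard-Jones forces plus a velocity-Verlet step) in NumPy. The potential, the parameters and the O(N^2) pair loop are illustrative assumptions, and this is exactly the kind of kernel that such codes offload to the GPU.

```python
import numpy as np

def lj_forces(pos, box, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces with the minimum-image convention (O(N^2) toy version)."""
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)              # minimum image in a periodic box
        r2 = np.sum(d * d, axis=1)
        s6 = (sigma**2 / r2) ** 3
        fmag = 24.0 * eps * (2.0 * s6**2 - s6) / r2
        fij = fmag[:, None] * d
        f[i] -= fij.sum(axis=0)                   # Newton's third law
        f[i + 1:] += fij
    return f

def velocity_verlet(pos, vel, box, dt=0.005, steps=100, mass=1.0):
    """Advance positions and velocities with the velocity-Verlet integrator."""
    vel = np.array(vel, dtype=float)
    f = lj_forces(pos, box)
    for _ in range(steps):
        vel += 0.5 * dt * f / mass
        pos = (pos + dt * vel) % box              # periodic boundary conditions
        f = lj_forces(pos, box)
        vel += 0.5 * dt * f / mass
    return pos, vel

# Example: 4x4x4 cubic lattice of particles in a box of side 8 sigma.
grid = np.arange(4) * 2.0
pos0 = np.array(np.meshgrid(grid, grid, grid)).T.reshape(-1, 3)
pos1, vel1 = velocity_verlet(pos0, np.zeros_like(pos0), box=8.0, steps=10)
```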

  12. Computed tomography as a source of electron density information for radiation treatment planning

    International Nuclear Information System (INIS)

    Skrzynski, Witold; Slusarczyk-Kacprzyk, Wioletta; Bulski, Wojciech; Zielinska-Dabrowska, Sylwia; Wachowicz, Marta; Kukolowicz, Pawel F.

    2010-01-01

    Purpose: to evaluate the performance of computed tomography (CT) systems of various designs as a source of electron density (ρ_el) data for treatment planning of radiation therapy. Material and methods: dependence of CT numbers on relative electron density of tissue-equivalent materials (HU-ρ_el relationship) was measured for several general-purpose CT systems (single-slice, multislice, wide-bore multislice), for radiotherapy simulators with a single-slice CT and kV CBCT (cone-beam CT) options, as well as for linear accelerators with kV and MV CBCT systems. Electron density phantoms of four sizes were used. Measurement data were compared with the standard HU-ρ_el relationships predefined in two commercial treatment-planning systems (TPS). Results: the HU-ρ_el relationships obtained with all of the general-purpose CT scanners operating at voltages close to 120 kV were very similar to each other and close to those predefined in TPS. Some dependency of HU values on tube voltage was observed for bone-equivalent materials. For a given tube voltage, differences in results obtained for different phantoms were larger than those obtained for different CT scanners. For radiotherapy simulators and for kV CBCT systems, the information on ρ_el was much less precise because of poor uniformity of images. For MV CBCT, the results were significantly different than for kV systems due to the differing energy spectrum of the beam. Conclusion: the HU-ρ_el relationships predefined in TPS can be used for general-purpose CT systems operating at voltages close to 120 kV. For nontypical imaging systems (e.g., CBCT), the relationship can be significantly different and, therefore, it should always be measured and carefully analyzed before using CT data for treatment planning. (orig.)
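
    As a rough illustration of how such a measured HU-ρ_el relationship is applied in treatment planning, the sketch below converts a CT image from Hounsfield units to relative electron density by piecewise-linear interpolation through calibration points; the calibration values are invented placeholders, not the phantom data reported in the paper.

```python
import numpy as np

# Hypothetical calibration: (HU, relative electron density) pairs from a phantom scan.
hu_points = np.array([-1000.0, -500.0, 0.0, 300.0, 1200.0])
rho_points = np.array([0.00, 0.50, 1.00, 1.15, 1.70])

def hu_to_electron_density(hu_image):
    """Piecewise-linear HU -> relative electron density lookup (clamped at the table ends)."""
    return np.interp(hu_image, hu_points, rho_points)

ct_slice = np.array([[-1000.0, -120.0], [40.0, 900.0]])   # toy 2x2 CT slice in HU
print(hu_to_electron_density(ct_slice))
```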

  13. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    Science.gov (United States)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and Data-intensive Science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress so far to harmonise the underlying data collections for future interdisciplinary research across these large volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  14. High performance parallel computing of flows in complex geometries: II. Applications

    International Nuclear Information System (INIS)

    Gourdain, N; Gicquel, L; Staffelbach, G; Vermorel, O; Duchaine, F; Boussuge, J-F; Poinsot, T

    2009-01-01

    Present regulations on pollutant emissions and noise, together with economic constraints, require new approaches and designs in the fields of energy supply and transportation. It is now well established that the next breakthrough will come from a better understanding of unsteady flow effects and from considering the entire system rather than only isolated components. However, these aspects are still not well captured by numerical approaches, nor well understood, at any design stage. The main challenge is essentially the computational requirements implied by simulating such complex systems on supercomputers. This paper shows how these challenges can be addressed by using parallel computing platforms for distinct elements of more complex systems as encountered in aeronautical applications. Based on numerical simulations performed with modern aerodynamic and reactive flow solvers, this work underlines the value of high-performance computing for solving flows in complex industrial configurations such as aircraft, combustion chambers and turbomachines. Performance indicators related to parallel computing efficiency are presented, showing that establishing fair criteria is a difficult task for complex industrial applications. Examples of numerical simulations performed in industrial systems are also described, with particular attention to computational time and to the potential design improvements obtained with high-fidelity and multi-physics computing methods. These simulations use either unsteady Reynolds-averaged Navier-Stokes methods or large eddy simulation and deal with turbulent unsteady flows, such as coupled flow phenomena (thermo-acoustic instabilities, buffet, etc.). Some examples of the difficulties with grid generation and data analysis are also presented when dealing with these complex industrial applications.

  15. High performance computing and communications: Advancing the frontiers of information technology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

  16. High-performance computing on GPUs for resistivity logging of oil and gas wells

    Science.gov (United States)

    Glinskikh, V.; Dudaev, A.; Nechaev, O.; Surodina, I.

    2017-10-01

    We developed and implemented in software an algorithm for high-performance simulation of electrical logs from oil and gas wells using heterogeneous computing. The numerical solution of the 2D forward problem is based on the finite-element method and the Cholesky decomposition for solving the resulting system of linear algebraic equations (SLAE). Software implementations of the algorithm were made using the NVIDIA CUDA technology and computing libraries, allowing us to perform the decomposition of the SLAE and find its solution on the central processing unit (CPU) and the graphics processing unit (GPU). The calculation time is analyzed as a function of the matrix size and the number of its non-zero elements. We estimated the computing speed on CPU and GPU, including high-performance heterogeneous CPU-GPU computing. Using the developed algorithm, we simulated resistivity data in realistic models.
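
    To make the SLAE step concrete, here is a minimal dense sketch of the same idea in SciPy: factor a symmetric positive-definite matrix once with a Cholesky decomposition and reuse it to solve for the right-hand side. The matrix here is a random stand-in, not the paper's FEM stiffness matrix, and the production code runs a sparse factorization on the GPU via CUDA libraries.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Hypothetical small SPD system standing in for the 2D FEM stiffness matrix.
n = 200
rng = np.random.default_rng(1)
B = rng.normal(size=(n, n))
A = B @ B.T + n * np.eye(n)        # symmetric positive definite by construction
b = rng.normal(size=n)

c, low = cho_factor(A)             # Cholesky decomposition A = L L^T
x = cho_solve((c, low), b)         # forward/back substitution
assert np.allclose(A @ x, b)
```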

  17. Assessment of coronary artery bypass graft patency by multidetector computed tomography and electron-beam tomography

    NARCIS (Netherlands)

    Piers, LH; Dorgelo, J; Tio, RA; Jessurun, GAJ; Oudkerk, M; Zijlstra, F

    This case report describes the use of retrospectively ECG-gated 16-slice multidetector computed tomography (MDCT) and electron-beam tomography (EBT) for assessing bypass graft patency in two patients with recurrent angina after coronary artery bypass graft surgery. The results of each tomographic

  18. Instructional Approach to Molecular Electronic Structure Theory

    Science.gov (United States)

    Dykstra, Clifford E.; Schaefer, Henry F.

    1977-01-01

    Describes a graduate quantum mechanics project in which students write a computer program that performs ab initio calculations on the electronic structure of a simple molecule. Theoretical potential energy curves are produced. (MLH)

  19. Comprehensive evaluation of anomalous pulmonary venous connection by electron beam computed tomography as compared with ultrasound

    International Nuclear Information System (INIS)

    Zhang Shaoxiong; Dai Ruping; Bai Hua; He Sha; Jing Baolian

    1999-01-01

    Objective: To investigate the clinical value of electron beam computed tomography (EBCT) in diagnosis of anomalous pulmonary venous connection. Methods: Retrospective analysis on 14 cases with anomalous pulmonary venous connection was performed using EBCT volume scan. The slice thickness and scan time were 3 mm and 100 ms respectively. Non-ionic contrast medium was applied. Three dimensional reconstruction of EBCT images were carried out on all cases. Meanwhile, ultrasound echocardiography was performed on all patients. Conventional cardiovascular angiography was performed on 8 patients and 2 cases received operation. Results: Ten patients with total anomalous pulmonary venous connection, including 6 cases of supra-cardiac type and 4 cases of cardiac type, were proved by EBCT examination. Among them, 3 cases of abnormal pulmonary venous drainage were not revealed by conventional cardiovascular angiography. Among four patients with partial pulmonary venous connection, including cardiac type in 2 cases, supra-cardiac type and infra-cardiac type in 1 case respectively, only one of them was demonstrated by echocardiography. Conclusion: EBCT has significant value in diagnosis of anomalous pulmonary venous connection which may not be detectable with echocardiography or even cardiovascular angiography

  20. Performance Evaluation of a Mobile Wireless Computational Grid ...

    African Journals Online (AJOL)

    This work developed and simulated a mathematical model for a mobile wireless computational Grid architecture using queueing network theory. This was done in order to evaluate the performance of the load-balancing three-tier hierarchical configuration. The throughput and resource utilization metrics were measured and the ...

  1. Dynamic Performance Optimization for Cloud Computing Using M/M/m Queueing System

    Directory of Open Access Journals (Sweden)

    Lizheng Guo

    2014-01-01

    Full Text Available The successful development of cloud computing has attracted more and more people and enterprises to use it. On one hand, using cloud computing reduces costs; on the other hand, it improves efficiency. As users are largely concerned about Quality of Service (QoS), performance optimization of cloud computing has become critical to its successful application. In order to optimize the performance of multiple requesters and services in cloud computing, we use queueing theory to derive the equations for each parameter of the services in the data center. Then, through analysis of the performance parameters of the queueing system, we propose a synthesis optimization mode, function, and strategy. Lastly, we set up a simulation based on the synthesis optimization mode; we also compare and analyze the simulation results against classical optimization methods (shortest service time first and first-in, first-out), which shows that the proposed model can optimize the average wait time, average queue length, and the number of customers.
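
    For readers who want the standard quantities behind such an M/M/m analysis, the sketch below evaluates the Erlang-C waiting probability together with the average queue length and waiting time for an m-server queue; the arrival and service rates are placeholder numbers, and the paper's own synthesis optimization strategy is not reproduced here.

```python
from math import factorial

def mmm_metrics(lam, mu, m):
    """Steady-state M/M/m metrics: P(wait), mean queue length Lq, mean waiting time Wq."""
    a = lam / mu                      # offered load in Erlangs
    rho = a / m                       # server utilization; must satisfy rho < 1
    assert rho < 1.0, "queue is unstable"
    p_busy = a**m / (factorial(m) * (1 - rho))
    norm = sum(a**k / factorial(k) for k in range(m)) + p_busy
    p_wait = p_busy / norm            # Erlang-C probability that a request must wait
    wq = p_wait / (m * mu - lam)      # mean time spent waiting in the queue
    lq = lam * wq                     # mean number of waiting requests (Little's law)
    return p_wait, lq, wq

print(mmm_metrics(lam=8.0, mu=1.0, m=10))   # hypothetical data-center rates
```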

  2. Electronic structure of elements and compounds and electronic phases of solids

    International Nuclear Information System (INIS)

    Nadykto, B.A.

    2000-01-01

    The paper reviews the technique and the computed energies for various electronic states of many-electron multiply charged ions, molecular ions, and electronic phases of solids. The model used allows computation of the state energy of free many-electron multiply charged ions with a relative accuracy of ∼10^-4, suitable for the analysis of spectroscopic data

  3. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D.; Schiess, Adrian B.; Howell, Jamie; Baca, Michael J.; Partridge, L. Donald; Finnegan, Patrick Sean; Wolfley, Steven L.; Dagel, Daryl James; Spahn, Olga Blum; Harper, Jason C.; Pohl, Kenneth Roy; Mickel, Patrick R.; Lohn, Andrew; Marinella, Matthew

    2013-10-01

    The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing > 10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.
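
    The quoted 10^12 figure can be checked directly from the numbers in the abstract; the short computation below reproduces it to within an order of magnitude (the supercomputer numbers are the abstract's round figures, not precise measurements).

```python
brain = 1e16 / (20.0 * 1200.0)             # operations/s per watt per cm^3 for the brain
super_ = 1e15 / (3e6 * 1500.0 * 1e6)       # supercomputer; 1500 m^3 = 1.5e9 cm^3
print(f"brain advantage: {brain / super_:.1e}")   # ~1.9e12, i.e. the ~10^12 quoted above
```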

  4. High-performance computing for structural mechanics and earthquake/tsunami engineering

    CERN Document Server

    Hori, Muneo; Ohsaki, Makoto

    2016-01-01

    Huge earthquakes and tsunamis have caused serious damage to important structures such as civil infrastructure elements, buildings and power plants around the globe. To quantitatively evaluate such damage processes and to design effective prevention and mitigation measures, the latest high-performance computational mechanics technologies, which include terascale to petascale computers, can offer powerful tools. The phenomena covered in this book include seismic wave propagation in the crust and soil, seismic response of infrastructure elements such as tunnels considering soil-structure interactions, seismic response of high-rise buildings, seismic response of nuclear power plants, tsunami run-up over coastal towns and tsunami inundation considering fluid-structure interactions. The book provides all necessary information for addressing these phenomena, ranging from the fundamentals of high-performance computing for finite element methods, key algorithms of accurate dynamic structural analysis, fluid flows ...

  5. High performance computing and communications: FY 1995 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-04-01

    The High Performance Computing and Communications (HPCC) Program was formally established following passage of the High Performance Computing Act of 1991 signed on December 9, 1991. Ten federal agencies in collaboration with scientists and managers from US industry, universities, and laboratories have developed the HPCC Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1994 and FY 1995. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency. Although the Department of Education is an official HPCC agency, its current funding and reporting of crosscut activities goes through the Committee on Education and Health Resources, not the HPCC Program. For this reason the Implementation Plan covers nine HPCC agencies.

  6. An Introduction to Parallel Cluster Computing Using PVM for Computer Modeling and Simulation of Engineering Problems

    International Nuclear Information System (INIS)

    Spencer, VN

    2001-01-01

    An investigation has been conducted regarding the ability of clustered personal computers to improve the performance of executing software simulations for solving engineering problems. The power and utility of personal computers continues to grow exponentially through advances in computing capabilities such as newer microprocessors, advances in microchip technologies, electronic packaging, and cost effective gigabyte-size hard drive capacity. Many engineering problems require significant computing power. Therefore, the computation has to be done by high-performance computer systems that cost millions of dollars and need gigabytes of memory to complete the task. Alternately, it is feasible to provide adequate computing in the form of clustered personal computers. This method cuts the cost and size by linking (clustering) personal computers together across a network. Clusters also have the advantage that they can be used as stand-alone computers when they are not operating as a parallel computer. Parallel computing software to exploit clusters is available for computer operating systems like Unix, Windows NT, or Linux. This project concentrates on the use of Windows NT, and the Parallel Virtual Machine (PVM) system to solve an engineering dynamics problem in Fortran

  7. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure.

    Science.gov (United States)

    Wang, Henry; Ma, Yunzhi; Pratx, Guillem; Xing, Lei

    2011-09-07

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47× speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed.
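
    The master/worker pattern described above can be mimicked locally; the sketch below is a loose analogue in which a Python master process distributes batches of Monte Carlo histories to worker processes and aggregates the partial tallies. The "physics" is a one-line placeholder, and the real system uses EGS5 binaries and MPI on cloud nodes rather than multiprocessing.

```python
import numpy as np
from multiprocessing import Pool

def run_histories(args):
    """Toy 'worker node': simulate a batch of particle histories (stand-in for EGS5)."""
    n_histories, seed = args
    rng = np.random.default_rng(seed)
    # Hypothetical physics: exponential path lengths; score the mean penetration depth.
    depths = rng.exponential(scale=1.0, size=n_histories)
    return depths.sum(), n_histories

if __name__ == "__main__":
    n_workers, total = 8, 1_000_000
    batches = [(total // n_workers, seed) for seed in range(n_workers)]
    with Pool(n_workers) as pool:                  # master distributes work
        partials = pool.map(run_histories, batches)
    s = sum(p[0] for p in partials)
    n = sum(p[1] for p in partials)
    print("mean depth:", s / n)                    # master aggregates the results
```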

  8. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure

    International Nuclear Information System (INIS)

    Wang, Henry; Ma Yunzhi; Pratx, Guillem; Xing Lei

    2011-01-01

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47x speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. (note)

  9. Toward real-time Monte Carlo simulation using a commercial cloud computing infrastructure

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Henry [Department of Electrical Engineering, Stanford University, Stanford, CA 94305 (United States); Ma Yunzhi; Pratx, Guillem; Xing Lei, E-mail: hwang41@stanford.edu [Department of Radiation Oncology, Stanford University School of Medicine, Stanford, CA 94305-5847 (United States)

    2011-09-07

    Monte Carlo (MC) methods are the gold standard for modeling photon and electron transport in a heterogeneous medium; however, their computational cost prohibits their routine use in the clinic. Cloud computing, wherein computing resources are allocated on-demand from a third party, is a new approach for high performance computing and is implemented to perform ultra-fast MC calculation in radiation therapy. We deployed the EGS5 MC package in a commercial cloud environment. Launched from a single local computer with Internet access, a Python script allocates a remote virtual cluster. A handshaking protocol designates master and worker nodes. The EGS5 binaries and the simulation data are initially loaded onto the master node. The simulation is then distributed among independent worker nodes via the message passing interface, and the results aggregated on the local computer for display and data analysis. The described approach is evaluated for pencil beams and broad beams of high-energy electrons and photons. The output of cloud-based MC simulation is identical to that produced by single-threaded implementation. For 1 million electrons, a simulation that takes 2.58 h on a local computer can be executed in 3.3 min on the cloud with 100 nodes, a 47x speed-up. Simulation time scales inversely with the number of parallel nodes. The parallelization overhead is also negligible for large simulations. Cloud computing represents one of the most important recent advances in supercomputing technology and provides a promising platform for substantially improved MC simulation. In addition to the significant speed up, cloud computing builds a layer of abstraction for high performance parallel computing, which may change the way dose calculations are performed and radiation treatment plans are completed. (note)

  10. Variations of dose distribution in high energy electron beams as a function of geometrical parameters of irradiation. Application to computer calculation

    International Nuclear Information System (INIS)

    Villeret, O.

    1985-04-01

    An algorithm is developed for computer treatment planning of electron therapy. The method uses experimental absorbed-dose distribution data in the irradiated medium for electron beams in the 8-20 MeV range delivered by the Sagittaire linear accelerator (central-axis depth dose and beam profiles) measured under various geometrical conditions. Experimental verification of the computer program showed agreement within 2% between dose measurements and computer calculations [fr]
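
    A common way such measured data are used is to factor the dose into a central-axis depth-dose curve multiplied by an off-axis profile and interpolate both tables; the sketch below illustrates this with invented measurement points. It is an assumed, simplified model, not the algorithm actually implemented in the thesis.

```python
import numpy as np

# Hypothetical measured tables for one beam energy and field size.
depth_cm = np.array([0, 1, 2, 3, 4, 5, 6])
pdd = np.array([0.85, 1.00, 0.95, 0.80, 0.55, 0.25, 0.05])       # central-axis depth dose
offaxis_cm = np.array([0, 2, 4, 5, 6])
profile = np.array([1.00, 0.98, 0.90, 0.50, 0.05])               # off-axis ratio

def dose(depth, off_axis, d_max_dose=1.0):
    """Relative dose at (depth, off-axis distance) as PDD(depth) * profile(|off_axis|)."""
    return d_max_dose * np.interp(depth, depth_cm, pdd) * np.interp(abs(off_axis), offaxis_cm, profile)

print(dose(depth=3.0, off_axis=1.0))
```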

  11. Play for Performance: Using Computer Games to Improve Motivation and Test-Taking Performance

    Science.gov (United States)

    Dennis, Alan R.; Bhagwatwar, Akshay; Minas, Randall K.

    2013-01-01

    The importance of testing, especially certification and high-stakes testing, has increased substantially over the past decade. Building on the "serious gaming" literature and the psychology "priming" literature, we developed a computer game designed to improve test-taking performance using psychological priming. The game primed…

  12. 8th International Workshop on Parallel Tools for High Performance Computing

    CERN Document Server

    Gracia, José; Knüpfer, Andreas; Resch, Michael; Nagel, Wolfgang

    2015-01-01

    Numerical simulation and modelling using High Performance Computing has evolved into an established technique in academic and industrial research. At the same time, the High Performance Computing infrastructure is becoming ever more complex. For instance, most of the current top systems around the world use thousands of nodes in which classical CPUs are combined with accelerator cards in order to enhance their compute power and energy efficiency. This complexity can only be mastered with adequate development and optimization tools. Key topics addressed by these tools include parallelization on heterogeneous systems, performance optimization for CPUs and accelerators, debugging of increasingly complex scientific applications, and optimization of energy usage in the spirit of green IT. This book represents the proceedings of the 8th International Parallel Tools Workshop, held October 1-2, 2014 in Stuttgart, Germany – which is a forum to discuss the latest advancements in the parallel tools.

  13. Recent advances in PC-Linux systems for electronic structure computations by optimized compilers and numerical libraries.

    Science.gov (United States)

    Yu, Jen-Shiang K; Yu, Chin-Hui

    2002-01-01

    One of the most frequently used packages for electronic structure research, GAUSSIAN 98, is compiled on Linux systems with various hardware configurations, including AMD Athlon (with the "Thunderbird" core), AthlonMP, and AthlonXP (with the "Palomino" core) systems as well as the Intel Pentium 4 (with the "Willamette" core) machines. The default PGI FORTRAN compiler (pgf77) and the Intel FORTRAN compiler (ifc) are respectively employed with different architectural optimization options to compile GAUSSIAN 98 and test the performance improvement. In addition to the BLAS library included in revision A.11 of this package, the Automatically Tuned Linear Algebra Software (ATLAS) library is linked against the binary executables to improve the performance. Various Hartree-Fock, density-functional theories, and the MP2 calculations are done for benchmarking purposes. It is found that the combination of ifc with ATLAS library gives the best performance for GAUSSIAN 98 on all of these PC-Linux computers, including AMD and Intel CPUs. Even on AMD systems, the Intel FORTRAN compiler invariably produces binaries with better performance than pgf77. The enhancement provided by the ATLAS library is more significant for post-Hartree-Fock calculations. The performance on one single CPU is potentially as good as that on an Alpha 21264A workstation or an SGI supercomputer. The floating-point marks by SpecFP2000 have similar trends to the results of GAUSSIAN 98 package.

  14. Computer fan performance enhancement via acoustic perturbations

    Energy Technology Data Exchange (ETDEWEB)

    Greenblatt, David, E-mail: davidg@technion.ac.il [Faculty of Mechanical Engineering, Technion - Israel Institute of Technology, Haifa (Israel); Avraham, Tzahi; Golan, Maayan [Faculty of Mechanical Engineering, Technion - Israel Institute of Technology, Haifa (Israel)

    2012-04-15

    Highlights: ► Computer fan effectiveness was increased by introducing acoustic perturbations. ► Acoustic perturbations controlled blade boundary layer separation. ► Optimum frequencies corresponded with airfoils studies. ► Exploitation of flow instabilities was responsible for performance improvements. ► Peak pressure and peak flowrate were increased by 40% and 15% respectively. - Abstract: A novel technique for increasing computer fan effectiveness, based on introducing acoustic perturbations onto the fan blades to control boundary layer separation, was assessed. Experiments were conducted in a specially designed facility that simultaneously allowed characterization of fan performance and introduction of the perturbations. A parametric study was conducted to determine the optimum control parameters, namely those that deliver the largest increase in fan pressure for a given flowrate. The optimum reduced frequencies corresponded with those identified on stationary airfoils and it was thus concluded that the exploitation of Kelvin-Helmholtz instabilities, commonly observed on airfoils, was responsible for the fan blade performance improvements. The optimum control inputs, such as acoustic frequency and sound pressure level, showed some variation with different fan flowrates. With the near-optimum control conditions identified, the full operational envelope of the fan, when subjected to acoustic perturbations, was assessed. The peak pressure and peak flowrate were increased by up to 40% and 15% respectively. The peak fan efficiency increased with acoustic perturbations but the overall system efficiency was reduced when the speaker input power was accounted for.

  15. Computer fan performance enhancement via acoustic perturbations

    International Nuclear Information System (INIS)

    Greenblatt, David; Avraham, Tzahi; Golan, Maayan

    2012-01-01

    Highlights: ► Computer fan effectiveness was increased by introducing acoustic perturbations. ► Acoustic perturbations controlled blade boundary layer separation. ► Optimum frequencies corresponded with airfoil studies. ► Exploitation of flow instabilities was responsible for performance improvements. ► Peak pressure and peak flowrate were increased by 40% and 15% respectively. - Abstract: A novel technique for increasing computer fan effectiveness, based on introducing acoustic perturbations onto the fan blades to control boundary layer separation, was assessed. Experiments were conducted in a specially designed facility that simultaneously allowed characterization of fan performance and introduction of the perturbations. A parametric study was conducted to determine the optimum control parameters, namely those that deliver the largest increase in fan pressure for a given flowrate. The optimum reduced frequencies corresponded with those identified on stationary airfoils and it was thus concluded that the exploitation of Kelvin–Helmholtz instabilities, commonly observed on airfoils, was responsible for the fan blade performance improvements. The optimum control inputs, such as acoustic frequency and sound pressure level, showed some variation with different fan flowrates. With the near-optimum control conditions identified, the full operational envelope of the fan, when subjected to acoustic perturbations, was assessed. The peak pressure and peak flowrate were increased by up to 40% and 15% respectively. The peak fan efficiency increased with acoustic perturbations but the overall system efficiency was reduced when the speaker input power was accounted for.
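    The abstract above refers to optimum reduced frequencies matching those found in airfoil separation-control studies. A common convention for that non-dimensional quantity is F+ = f·c/U, with f the forcing frequency, c a characteristic length such as the blade chord (or the distance from the actuation point to the trailing edge), and U the local flow velocity. The sketch below simply evaluates this definition; the numbers and the choice of characteristic length are illustrative assumptions, not values from the study.

```python
def reduced_frequency(f_hz, chord_m, velocity_ms):
    """Non-dimensional forcing frequency F+ = f * c / U (a common airfoil convention)."""
    return f_hz * chord_m / velocity_ms

# Hypothetical candidate acoustic frequencies and flow conditions, for illustration only.
for f in (200.0, 500.0, 1000.0):
    fplus = reduced_frequency(f, chord_m=0.02, velocity_ms=10.0)
    print(f"{f:6.0f} Hz -> F+ = {fplus:.2f}")
```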

  16. Performance of particle in cell methods on highly concurrent computational architectures

    International Nuclear Information System (INIS)

    Adams, M.F.; Ethier, S.; Wichmann, N.

    2009-01-01

    Particle in cell (PIC) methods are effective in computing the Vlasov-Poisson system of equations used in simulations of magnetic fusion plasmas. PIC methods use grid-based computations, for solving Poisson's equation or more generally Maxwell's equations, as well as Monte-Carlo type methods to sample the Vlasov equation. The presence of two types of discretizations, deterministic field solves and Monte-Carlo methods for the Vlasov equation, poses challenges in understanding and optimizing performance on today's large-scale computers, which require high levels of concurrency. These challenges arise from the need to optimize two very different types of processes and the interactions between them. Modern cache-based high-end computers have very deep memory hierarchies and high degrees of concurrency which must be utilized effectively to achieve good performance. The effective use of these machines requires maximizing concurrency by eliminating serial or redundant work and minimizing global communication. A related issue is minimizing the memory traffic between levels of the memory hierarchy because performance is often limited by the bandwidths and latencies of the memory system. This paper discusses some of the performance issues, particularly in regard to parallelism, of PIC methods. The gyrokinetic toroidal code (GTC) is used for these studies and a new radial grid decomposition is presented and evaluated. Scaling of the code is demonstrated on ITER-sized plasmas with up to 16K Cray XT3/4 cores.
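    The two kinds of discretization described above can be illustrated with a deliberately minimal one-dimensional electrostatic PIC step: particle charge is deposited on a grid (the Monte-Carlo side), after which a periodic Poisson solve yields the field (the deterministic side). This is only a didactic sketch under simplified assumptions; GTC itself is a gyrokinetic toroidal code and is far more elaborate.

```python
import numpy as np

# Minimal 1D electrostatic PIC step: deposit charge, then solve Poisson's equation.
ng, npart, L = 64, 10000, 1.0            # grid points, particles, domain length
dx = L / ng
rng = np.random.default_rng(0)
x = rng.uniform(0.0, L, npart)           # particle positions
w = L / npart                            # particle weight (uniform background)

# Monte-Carlo side: deposit particle charge onto the grid with linear weighting.
rho = np.zeros(ng)
idx = (x / dx).astype(int) % ng
frac = x / dx - np.floor(x / dx)
np.add.at(rho, idx, (1.0 - frac) * w / dx)
np.add.at(rho, (idx + 1) % ng, frac * w / dx)

# Deterministic side: solve the periodic Poisson equation d2(phi)/dx2 = -rho by FFT.
k = 2.0 * np.pi * np.fft.fftfreq(ng, d=dx)
rho_hat = np.fft.fft(rho - rho.mean())   # remove the neutralizing background
phi_hat = np.zeros_like(rho_hat)
phi_hat[1:] = rho_hat[1:] / k[1:] ** 2
phi = np.real(np.fft.ifft(phi_hat))
E = -np.gradient(phi, dx)                # field to be gathered back to push particles
print(f"total charge = {rho.sum() * dx:.3f}, max |E| = {np.abs(E).max():.3e}")
```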

  17. Performance of particle in cell methods on highly concurrent computational architectures

    International Nuclear Information System (INIS)

    Adams, M F; Ethier, S; Wichmann, N

    2007-01-01

    Particle in cell (PIC) methods are effective in computing the Vlasov-Poisson system of equations used in simulations of magnetic fusion plasmas. PIC methods use grid-based computations, for solving Poisson's equation or more generally Maxwell's equations, as well as Monte-Carlo type methods to sample the Vlasov equation. The presence of two types of discretizations, deterministic field solves and Monte-Carlo methods for the Vlasov equation, poses challenges in understanding and optimizing performance on today's large-scale computers, which require high levels of concurrency. These challenges arise from the need to optimize two very different types of processes and the interactions between them. Modern cache-based high-end computers have very deep memory hierarchies and high degrees of concurrency which must be utilized effectively to achieve good performance. The effective use of these machines requires maximizing concurrency by eliminating serial or redundant work and minimizing global communication. A related issue is minimizing the memory traffic between levels of the memory hierarchy because performance is often limited by the bandwidths and latencies of the memory system. This paper discusses some of the performance issues, particularly in regard to parallelism, of PIC methods. The gyrokinetic toroidal code (GTC) is used for these studies and a new radial grid decomposition is presented and evaluated. Scaling of the code is demonstrated on ITER-sized plasmas with up to 16K Cray XT3/4 cores.

  18. Scalability of DL_POLY on High Performance Computing Platform

    CSIR Research Space (South Africa)

    Mabakane, Mabule S

    2017-12-01

    Full Text Available SACJ 29(3) December... when using many processors within the compute nodes of the supercomputer. The type of the processors of compute nodes and their memory also play an important role in the overall performance of the parallel application running on a supercomputer. DL...

  19. Evaluation of Electron Beam Welding Performance of AA6061-T6 Plate-type Fuel Assembly

    International Nuclear Information System (INIS)

    Kim, Soo-Sung; Seo, Kyoung-Seok; Lee, Don-Bae; Park, Jong-Man; Lee, Yoon-Sang; Lee, Chong-Tak

    2014-01-01

    As one of the most commonly used heat-treatable aluminum alloys, AA6061-T6 is available in a wide range of structural product forms. Typically, it is used in structural members, auto-body sheet and many other applications. Generally, this alloy is easily welded by conventional GTAW (Gas Tungsten Arc Welding), LBW (Laser Beam Welding) and EBW (Electron Beam Welding). However, certain characteristics, such as solidification cracking, porosity and HAZ (Heat-Affected Zone) degradation, must be considered during welding. Because of their high energy density and low heat input, the LBW and EBW processes in particular have the advantage of minimizing the fusion zone and HAZ and producing deeper penetration than arc welding processes. In the present study, aiming at plate-type nuclear fuel fabrication and assembly, a fundamental electron beam welding experiment using AA6061-T6 aluminum alloy specimens was conducted. Furthermore, to establish a suitable welding process and satisfy the weld quality requirements, an EBW apparatus using an electron welding gun and a vacuum chamber was developed, and preliminary investigations for optimizing the welding parameters of AA6061-T6 aluminum plate specimens were also performed. The EB weld quality of AA6061-T6 aluminum alloy for the plate-type fuel assembly has also been studied through the weld penetrations of the side plate to the end fitting and fixing bar, and through weld inspections using computed tomography.

  20. Performance Measurements in a High Throughput Computing Environment

    CERN Document Server

    AUTHOR|(CDS)2145966; Gribaudo, Marco

    The IT infrastructures of companies and research centres are implementing new technologies to satisfy the increasing need for computing resources for big data analysis. In this context, resource profiling plays a crucial role in identifying areas where the utilisation efficiency needs to be improved. In order to deal with the profiling and optimisation of computing resources, two complementary approaches can be adopted: the measurement-based approach and the model-based approach. The measurement-based approach gathers and analyses performance metrics by executing benchmark applications on computing resources. The model-based approach, by contrast, involves the design and implementation of a model as an abstraction of the real system, selecting only those aspects relevant to the study. This Thesis originates from a project carried out by the author within the CERN IT department. CERN is an international scientific laboratory that conducts fundamental research in the domain of elementary particle physics. The p...
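    As a toy illustration of the measurement-based approach mentioned above, the sketch below runs a stand-in workload several times and records wall-clock statistics and peak memory. It is a generic example assuming a Unix-like system (for the resource module), not code from the thesis, and the workload is an arbitrary placeholder.

```python
import time
import resource       # Unix-only; provides peak resident set size
import statistics

def profile(benchmark, repeats=5):
    """Measurement-based approach: run a benchmark repeatedly and collect simple metrics."""
    samples = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        benchmark()
        samples.append(time.perf_counter() - t0)
    peak_rss_kb = resource.getrusage(resource.RUSAGE_SELF).ru_maxrss
    return {"mean_s": statistics.mean(samples),
            "stdev_s": statistics.stdev(samples),
            "peak_rss_kb": peak_rss_kb}

def cpu_bound_kernel():
    # Stand-in workload; a real study would run representative applications.
    sum(i * i for i in range(10**6))

print(profile(cpu_bound_kernel))
```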

  1. Analytical Performance Verification of FCS-MPC Applied to Power Electronic Converters

    DEFF Research Database (Denmark)

    Novak, Mateja; Dragicevic, Tomislav; Blaabjerg, Frede

    2017-01-01

    Since the introduction of finite control set model predictive control (FCS-MPC) in power electronics, the algorithm has been missing an important aspect that would speed up its adoption in industry: a simple method to verify the algorithm's performance. This paper proposes to use a statistical model checking (SMC) method for performance evaluation of the algorithm applied to power electronic converters. SMC is simple to implement, intuitive, and it requires only an operational model of the system that can be simulated and checked against properties. Device under test for control algorithm...
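    The core of SMC is easy to convey: simulate the operational model many times and estimate the probability that a property holds, with the number of runs chosen from a statistical bound. The sketch below uses a Hoeffding-style sample count and a toy stand-in for the converter model; the model, the property and all numbers are illustrative assumptions, not the verification setup of the paper.

```python
import math
import random

def smc_estimate(simulate_once, property_holds, eps=0.02, delta=0.01, seed=1):
    """Statistical model checking by Monte Carlo: estimate P(property) to within
    eps with confidence 1 - delta, using a Hoeffding-style sample count."""
    n = math.ceil(math.log(2.0 / delta) / (2.0 * eps ** 2))
    rng = random.Random(seed)
    hits = sum(property_holds(simulate_once(rng)) for _ in range(n))
    return hits / n, n

# Toy operational model standing in for a converter simulation (illustrative only):
def simulate_once(rng):
    return [rng.gauss(1.0, 0.05) for _ in range(100)]    # e.g. per-step tracking error

def property_holds(trace):
    return max(abs(v - 1.0) for v in trace) < 0.2         # error bounded over the run

estimate, runs = smc_estimate(simulate_once, property_holds)
print(f"P(property) ~ {estimate:.3f} from {runs} simulation runs")
```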

  2. An electron beam linear scanning mode for industrial limited-angle nano-computed tomography

    Science.gov (United States)

    Wang, Chengxiang; Zeng, Li; Yu, Wei; Zhang, Lingli; Guo, Yumeng; Gong, Changcheng

    2018-01-01

    Nano-computed tomography (nano-CT), which uses X-rays to probe the inner structure of small objects, is a high-spatial-resolution, non-destructive research technique that has been widely applied in biomedical research, electronic technology, geology, materials science, etc. A traditional nano-CT scanning mode requires very high mechanical precision and stability of the object manipulator for high-resolution imaging, which are difficult to achieve when the scanned object is continuously rotated. To reduce the scanning time and attain stable, high-resolution imaging in industrial non-destructive testing, we study an electron beam linear scanning mode of the nano-CT system that avoids the mechanical vibration and object movement caused by continuous rotation of the object. Furthermore, to further reduce the scanning time and to study how small the scanning range can be made while retaining acceptable spatial resolution, an alternating iterative algorithm based on ℓ0 minimization is applied to the limited-angle nano-CT reconstruction problem with the electron beam linear scanning mode. The experimental results confirm the feasibility of the electron beam linear scanning mode of the nano-CT system.

  3. High-Performance Compute Infrastructure in Astronomy: 2020 Is Only Months Away

    Science.gov (United States)

    Berriman, B.; Deelman, E.; Juve, G.; Rynge, M.; Vöckler, J. S.

    2012-09-01

    By 2020, astronomy will be awash with as much as 60 PB of public data. Full scientific exploitation of such massive volumes of data will require high-performance computing on server farms co-located with the data. Development of this computing model will be a community-wide enterprise that has profound cultural and technical implications. Astronomers must be prepared to develop environment-agnostic applications that support parallel processing. The community must investigate the applicability and cost-benefit of emerging technologies such as cloud computing to astronomy, and must engage the Computer Science community to develop science-driven cyberinfrastructure such as workflow schedulers and optimizers. We report here the results of collaborations between a science center, IPAC, and a Computer Science research institute, ISI. These collaborations may be considered pathfinders in developing a high-performance compute infrastructure in astronomy. These collaborations investigated two exemplar large-scale science-driver workflow applications: 1) Calculation of an infrared atlas of the Galactic Plane at 18 different wavelengths by placing data from multiple surveys on a common plate scale and co-registering all the pixels; 2) Calculation of an atlas of periodicities present in the public Kepler data sets, which currently contain 380,000 light curves. These products have been generated with two workflow applications, written in C for performance and designed to support parallel processing on multiple environments and platforms, but with different compute resource needs: the Montage image mosaic engine is I/O-bound, and the NASA Star and Exoplanet Database periodogram code is CPU-bound. Our presentation will report cost and performance metrics and lessons-learned for continuing development. Applicability of Cloud Computing: Commercial Cloud providers generally charge for all operations, including processing, transfer of input and output data, and for storage of data

  4. Computer programs for unit-cell determination in electron diffraction experiments

    International Nuclear Information System (INIS)

    Li, X.Z.

    2005-01-01

    A set of computer programs for unit-cell determination from an electron diffraction tilt series and for pattern indexing has been developed on the basis of several well-established algorithms. In this approach, a reduced direct primitive cell is first determined from the experimental data; at the same time, the measurement errors of the tilt angles are checked and minimized. The derived primitive cell is then checked for possible higher lattice symmetry and transformed into a proper conventional cell. Finally, a least-squares refinement procedure is adopted to generate optimum lattice parameters on the basis of the lengths of the basic reflections in each diffraction pattern and the indices of these reflections. Examples are given to show the usage of the programs.
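    The final least-squares step can be illustrated for the simplest (cubic) case, where every indexed reflection satisfies 1/d² = (h² + k² + l²)/a² and the refined parameter follows from a one-parameter fit. The sketch below is only a schematic analogue of that refinement, using hypothetical aluminium-like spacings; the actual programs work with general reduced cells and also minimize tilt-angle errors.

```python
import numpy as np

# Least-squares refinement of a cubic lattice parameter from indexed reflections.
hkl = np.array([[1, 1, 1], [2, 0, 0], [2, 2, 0], [3, 1, 1]])
d_obs = np.array([2.338, 2.024, 1.431, 1.221])    # hypothetical spacings in angstroms

x = (hkl ** 2).sum(axis=1).astype(float)          # h^2 + k^2 + l^2
y = 1.0 / d_obs ** 2                              # 1/d^2 = (h^2 + k^2 + l^2) / a^2
slope = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]
a_refined = 1.0 / np.sqrt(slope)
print(f"refined cubic lattice parameter a = {a_refined:.4f} angstrom")
```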

  5. NINJA: Java for High Performance Numerical Computing

    Directory of Open Access Journals (Sweden)

    José E. Moreira

    2002-01-01

    Full Text Available When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java now can be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.

  6. Performance of the EUSO-Balloon electronics

    International Nuclear Information System (INIS)

    Barrillon, P.; Dagoret, S.; Miyamoto, H.; Moretto, C.; Bacholle, S.; Blaksley, C; Gorodetzky, P.; Jung, A.; Prévôt, G.; Prat, P.; Bayer, J.; Blin, S.; Taille, C. De La; Cafagna, F.; Fornaro, C.; Karczmarczyk, J.; Tanco, G. Medina; Osteria, G.; Perfetto, F.; Park, I.

    2016-01-01

    On the 24th of August 2014, the EUSO-Balloon instrument went on a night flight of several hours, 40 km above the Timmins (Canada) balloon launching site, the culmination of hard work started three years earlier by an important part of the JEM-EUSO collaboration. This instrument consists of a telescope made of two lenses and a complex electronic chain divided into two main sub-systems: the PDM (Photo Detector Module) and the DP (Data Processor). Each of them is made of several innovative elements developed and tested in a short time. This paper presents their performance before and during the flight.

  7. The correlation between a passion for computer games and the school performance of younger schoolchildren.

    Directory of Open Access Journals (Sweden)

    Maliy D.V.

    2015-07-01

    Full Text Available Today computer games occupy a significant place in children’s lives and fundamentally affect the process of the formation and development of their personalities. A number of present-day researchers assert that computer games have a developmental effect on players. Others share the point of view that computer games have negative effects on the cognitive and emotional spheres of a child and claim that children with low self-esteem who neglect their schoolwork and have difficulties in communication are particularly passionate about computer games. This article reviews theoretical and experimental pedagogical and psychological studies of the nature of the correlation between a passion for computer games and the school performance of younger schoolchildren. Our analysis of foreign and Russian psychology studies regarding the problem of playing activities mediated by information and computer technologies allowed us to single out the main criteria for children’s passion for computer games and school performance. This article presents the results of a pilot study of the nature of the correlation between a passion for computer games and the school performance of younger schoolchildren. The research involved 32 pupils (12 girls and 20 boys) aged 10-11 years in the 4th grade. The general hypothesis was that there are divergent correlations between the passion of younger schoolchildren for computer games and their school performance. A questionnaire survey administered to the pupils allowed us to obtain information about the amount of time they devoted to computer games, their preferences for computer-game genres, and the extent of their passion for games. To determine the level of school performance we analyzed class registers. To establish the correlation between a passion for computer games and the school performance of younger schoolchildren, as well as to determine the effect of a passion for computer games on the personal qualities of the children

  8. Development of an electron gun for high power CW electron linac (1). Beam experiment for basic performance of electron gun

    International Nuclear Information System (INIS)

    Yamazaki, Yoshio; Nomura, Masahiro; Komata, Tomoki

    1999-05-01

    The Beam Group of the Oarai Engineering Center in the Japan Nuclear Cycle Development Institute (JNC) has recently completed the high-power CW electron linac. Full-scale beam experiments were then started, after government permission for the radiation equipment had been given last January. Measurements of the basic performance of the mesh-grid type electron gun have been carried out in order to deliver a stable beam at 300 mA peak current downstream of the accelerator. These experiments revealed increased beam loss in the electron gun for some values of the voltage supplied to the mesh-grid, in spite of the same beam current from the gun. Consequently, we could find the best combination of mesh-grid voltage and heater current to supply a stable beam at 300 mA peak current for accelerator studies. (author)

  9. FY 1993 Blue Book: Grand Challenges 1993: High Performance Computing and Communications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  10. High-performance floating-point image computing workstation for medical applications

    Science.gov (United States)

    Mills, Karl S.; Wong, Gilman K.; Kim, Yongmin

    1990-07-01

    The medical imaging field relies increasingly on imaging and graphics techniques in diverse applications with needs similar to (or more stringent than) those of the military, industrial and scientific communities. However, most image processing and graphics systems available for use in medical imaging today are either expensive, specialized, or in most cases both. High performance imaging and graphics workstations which can provide real-time results for a number of applications, while maintaining affordability and flexibility, can facilitate the application of digital image computing techniques in many different areas. This paper describes the hardware and software architecture of a medium-cost floating-point image processing and display subsystem for the NeXT computer, and its applications as a medical imaging workstation. Medical imaging applications of the workstation include use in a Picture Archiving and Communications System (PACS), as a multimodal image processing and 3-D graphics workstation for a broad range of imaging modalities, and as an electronic alternator utilizing its multiple monitor display capability and large and fast frame buffer. The subsystem provides a 2048 x 2048 x 32-bit frame buffer (16 Mbytes of image storage) and supports both 8-bit gray scale and 32-bit true color images. When used to display 8-bit gray scale images, up to four different 256-color palettes may be used for each of four 2K x 2K x 8-bit image frames. Three of these image frames can be used simultaneously to provide pixel-selectable region-of-interest display. A 1280 x 1024 pixel screen with 1:1 aspect ratio can be windowed into the frame buffer for display of any portion of the processed image or images. In addition, the system provides hardware support for integer zoom and an 82-color cursor. This subsystem is implemented on an add-in board occupying a single slot in the NeXT computer. Up to three boards may be added to the NeXT for multiple display capability (e

  11. Use of electronic music as an occupational therapy modality in spinal cord injury rehabilitation: an occupational performance model.

    Science.gov (United States)

    Lee, B; Nantais, T

    1996-05-01

    This article describes an electronic music program that allows clients with spinal cord injury (SCI) to form musical bands and play songs while performing therapeutic exercise in an occupational therapy program. Clients create the music by activating upper extremity exercise devices that are connected to a synthesizer through a computer. The bands choose the songs they would like to play and meet twice a week for 1 hr to practice. The 8-week program often concludes with a public performance. The music program is intended to motivate client participation in physical rehabilitation while promoting self-esteem, emotional expression, and peer support. It is based on the model of occupational performance and the theory of purposeful activity. To date, 33 persons have taken part. Client, therapist, and public response has been positive because this program highlights the abilities of persons with SCI, thereby encouraging their reintegration into the community.

  12. Performativity, Fabrication and Trust: Exploring Computer-Mediated Moderation

    Science.gov (United States)

    Clapham, Andrew

    2013-01-01

    Based on research conducted in an English secondary school, this paper explores computer-mediated moderation as a performative tool. The Module Assessment Meeting (MAM) was the moderation approach under investigation. I mobilise ethnographic data generated by a key informant, and triangulated with that from other actors in the setting, in order to…

  13. Routing performance analysis and optimization within a massively parallel computer

    Science.gov (United States)

    Archer, Charles Jens; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen

    2013-04-16

    An apparatus, program product and method optimize the operation of a massively parallel computer system by, in part, receiving actual performance data concerning an application executed by the plurality of interconnected nodes, and analyzing the actual performance data to identify an actual performance pattern. A desired performance pattern may be determined for the application, and an algorithm may be selected from among a plurality of algorithms stored within a memory, the algorithm being configured to achieve the desired performance pattern based on the actual performance data.

  14. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Full Text Available Green computing is all about using computers in a smarter and eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers and peripherals, as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives and traditionally are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in any way they can. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  15. Unified, Cross-Platform, Open-Source Library Package for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Kozacik, Stephen [EM Photonics, Inc., Newark, DE (United States)

    2017-05-15

    Compute power is continually increasing, but this increased performance is largely found in sophisticated computing devices and supercomputer resources that are difficult to use, resulting in under-utilization. We developed a unified set of programming tools that will allow users to take full advantage of the new technology by allowing them to work at a level abstracted away from the platform specifics, encouraging the use of modern computing systems, including government-funded supercomputer facilities.

  16. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bremer, Peer-Timo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mohr, Bernd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schulz, Martin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pasccci, Valerio [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gamblin, Todd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brunst, Holger [Dresden Univ. of Technology (Germany)

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  17. SCEAPI: A unified Restful Web API for High-Performance Computing

    Science.gov (United States)

    Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi

    2017-10-01

    The development of scientific computing is increasingly moving to collaborative web and mobile applications. All these applications need high-quality programming interfaces for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment that integrates computing resources from 16 HPC centers across China. Then we present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources with the HTTP or HTTPS protocols. We discuss SCEAPI from several aspects including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI, including authentication, file transfer, and job management for creating, submitting and monitoring jobs, and show how SCEAPI can be used in an easy-to-use way. Finally, we discuss how to exploit more HPC resources quickly for the ATLAS experiment by implementing a custom ARC compute element based on SCEAPI, and our work shows that SCEAPI is an easy-to-use and effective solution for extending opportunistic HPC resources.
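    To make the job-management workflow concrete, the sketch below shows what authenticating, uploading a job script and polling a job's state might look like against a RESTful HPC web service over HTTPS. The base URL, endpoint paths, JSON field names and token scheme are illustrative assumptions for a generic service of this kind, not the documented SCEAPI interface.

```python
import requests

# Hypothetical base URL of a RESTful HPC web API (placeholder, not SCEAPI's real endpoint).
BASE = "https://api.example-hpc.cn/v1"

def submit_job(token, script_path):
    """Upload a job script and create a job; returns the service-assigned job id."""
    headers = {"Authorization": f"Bearer {token}"}
    with open(script_path, "rb") as f:
        # Upload the script as a file resource before referencing it in the job request.
        requests.post(f"{BASE}/files/job.sh", headers=headers, data=f).raise_for_status()
    resp = requests.post(f"{BASE}/jobs",
                         headers=headers,
                         json={"script": "job.sh", "cores": 64, "walltime": "01:00:00"})
    resp.raise_for_status()
    return resp.json()["job_id"]

def job_state(token, job_id):
    """Poll the current state of a previously submitted job."""
    resp = requests.get(f"{BASE}/jobs/{job_id}",
                        headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    return resp.json()["state"]
```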

  18. Performance of a direct detection camera for off-axis electron holography

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Shery L.Y., E-mail: shery.chang@asu.edu [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons, and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); LeRoy Eyring Center for Solid State Science, Arizona State University, Tempe, AZ 85287 (United States); Dwyer, Christian [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons, and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany); Department of Physics, Arizona State University, Tempe, AZ 85287 (United States); Barthel, Juri; Boothroyd, Chris B.; Dunin-Borkowski, Rafal E. [Ernst Ruska-Centre for Microscopy and Spectroscopy with Electrons, and Peter Grünberg Institute, Forschungszentrum Jülich, D-52425 Jülich (Germany)

    2016-02-15

    The performance of a direct detection camera (DDC) is evaluated in the context of off-axis electron holographic experiments in a transmission electron microscope. Its performance is also compared directly with that of a conventional charge-coupled device (CCD) camera. The DDC evaluated here can be operated either by the detection of individual electron events (counting mode) or by the effective integration of many such events during a given exposure time (linear mode). It is demonstrated that the improved modulation transfer functions and detective quantum efficiencies of both modes of the DDC give rise to significant benefits over the conventional CCD cameras, specifically, a significant improvement in the visibility of the holographic fringes and a reduction of the statistical error in the phase of the reconstructed electron wave function. The DDC's linear mode, which can handle higher dose rates, allows optimisation of the dose rate to achieve the best phase resolution for a wide variety of experimental conditions. For suitable conditions, the counting mode can potentially utilise a significantly lower dose to achieve a phase resolution that is comparable to that achieved using the linear mode. The use of multiple holograms and correlation techniques to increase the total dose in counting mode is also demonstrated. - Highlights: • Performance of a direct detection camera for off-axis electron holography has been evaluated. • Better holographic fringe visibility and phase resolution are achieved using DDC. • Both counting and linear modes offered by DDC are advantageous for different dose regimes.

  19. Performance of a direct detection camera for off-axis electron holography

    International Nuclear Information System (INIS)

    Chang, Shery L.Y.; Dwyer, Christian; Barthel, Juri; Boothroyd, Chris B.; Dunin-Borkowski, Rafal E.

    2016-01-01

    The performance of a direct detection camera (DDC) is evaluated in the context of off-axis electron holographic experiments in a transmission electron microscope. Its performance is also compared directly with that of a conventional charge-coupled device (CCD) camera. The DDC evaluated here can be operated either by the detection of individual electron events (counting mode) or by the effective integration of many such events during a given exposure time (linear mode). It is demonstrated that the improved modulation transfer functions and detective quantum efficiencies of both modes of the DDC give rise to significant benefits over the conventional CCD cameras, specifically, a significant improvement in the visibility of the holographic fringes and a reduction of the statistical error in the phase of the reconstructed electron wave function. The DDC's linear mode, which can handle higher dose rates, allows optimisation of the dose rate to achieve the best phase resolution for a wide variety of experimental conditions. For suitable conditions, the counting mode can potentially utilise a significantly lower dose to achieve a phase resolution that is comparable to that achieved using the linear mode. The use of multiple holograms and correlation techniques to increase the total dose in counting mode is also demonstrated. - Highlights: • Performance of a direct detection camera for off-axis electron holography has been evaluated. • Better holographic fringe visibility and phase resolution are achieved using DDC. • Both counting and linear modes offered by DDC are advantageous for different dose regimes.
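    The link between fringe visibility, dose and phase resolution stated above can be made quantitative with the standard shot-noise estimate for off-axis holography, in which the phase error scales roughly as sqrt(2)/(V·sqrt(N)) for fringe visibility V and N detected electrons per reconstructed pixel. The sketch below evaluates that textbook estimate with made-up numbers; it is not a calculation from the paper.

```python
import math

def phase_error_rad(visibility, electrons_per_pixel):
    """Shot-noise-limited phase error of an off-axis hologram,
    delta_phi ~ sqrt(2) / (V * sqrt(N)) (a standard estimate)."""
    return math.sqrt(2.0) / (visibility * math.sqrt(electrons_per_pixel))

# Same dose, different fringe visibility: higher visibility (as reported for the DDC)
# translates directly into better phase resolution. Numbers are illustrative only.
for v in (0.10, 0.25, 0.40):
    print(f"V = {v:.2f}: delta_phi = {phase_error_rad(v, 1.0e4):.4f} rad")
```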

  20. Thinking processes used by high-performing students in a computer programming task

    Directory of Open Access Journals (Sweden)

    Marietjie Havenga

    2011-07-01

    Full Text Available Computer programmers must be able to understand programming source code and write programs that execute complex tasks to solve real-world problems. This article is a transdisciplinary study at the intersection of computer programming, education and psychology. It outlines the role of mental processes in the process of programming and indicates how successful thinking processes can support computer science students in writing correct and well-defined programs. A mixed methods approach was used to better understand the thinking activities and programming processes of participating students. Data collection involved both computer programs and students’ reflective thinking processes recorded in their journals. This enabled analysis of psychological dimensions of participants’ thinking processes and their problem-solving activities as they considered a programming problem. Findings indicate that the cognitive, reflective and psychological processes used by high-performing programmers contributed to their success in solving a complex programming problem. Based on the thinking processes of high performers, we propose a model of integrated thinking processes, which can support computer programming students. Keywords: Computer programming, education, mixed methods research, thinking processes. Disciplines: Computer programming, education, psychology

  1. Rotary engine performance computer program (RCEMAP and RCEMAPPC): User's guide

    Science.gov (United States)

    Bartrand, Timothy A.; Willis, Edward A.

    1993-01-01

    This report is a user's guide for a computer code that simulates the performance of several rotary combustion engine configurations. It is intended to assist prospective users in getting started with RCEMAP and/or RCEMAPPC. RCEMAP (Rotary Combustion Engine performance MAP generating code) is the mainframe version, while RCEMAPPC is a simplified subset designed for the personal computer, or PC, environment. Both versions are based on an open, zero-dimensional combustion system model for the prediction of instantaneous pressures, temperature, chemical composition and other in-chamber thermodynamic properties. Both versions predict overall engine performance and thermal characteristics, including bmep, bsfc, exhaust gas temperature, average material temperatures, and turbocharger operating conditions. Required inputs include engine geometry, materials, constants for use in the combustion heat release model, and turbomachinery maps. Illustrative examples and sample input files for both versions are included.
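    Two of the overall performance quantities mentioned above, bmep and bsfc, follow from textbook definitions, which the sketch below simply encodes. The relations are standard (bmep = P·n_R/(V_d·N) and bsfc = fuel mass flow divided by brake power); the revolutions-per-cycle factor n_R depends on the engine type, and all numerical values are placeholders, not RCEMAP inputs or outputs.

```python
# Textbook engine performance definitions of the kind RCEMAP reports.
def bmep_kpa(brake_power_kw, rev_per_s, displacement_l, rev_per_cycle=1.0):
    """Brake mean effective pressure: P * n_R / (V_d * N), in kPa."""
    return brake_power_kw * rev_per_cycle / (displacement_l * 1e-3 * rev_per_s)

def bsfc_g_per_kwh(fuel_flow_kg_per_h, brake_power_kw):
    """Brake specific fuel consumption in g/kWh."""
    return 1000.0 * fuel_flow_kg_per_h / brake_power_kw

# Placeholder operating point for illustration only.
print(f"bmep = {bmep_kpa(75.0, rev_per_s=100.0, displacement_l=0.65):.0f} kPa")
print(f"bsfc = {bsfc_g_per_kwh(20.0, brake_power_kw=75.0):.0f} g/kWh")
```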

  2. Computer-assisted expert case definition in electronic health records.

    Science.gov (United States)

    Walker, Alexander M; Zhou, Xiaofeng; Ananthakrishnan, Ashwin N; Weiss, Lisa S; Shen, Rongjun; Sobel, Rachel E; Bate, Andrew; Reynolds, Robert F

    2016-02-01

    To describe how computer-assisted presentation of case data can lead experts to infer machine-implementable rules for case definition in electronic health records. As an illustration the technique has been applied to obtain a definition of acute liver dysfunction (ALD) in persons with inflammatory bowel disease (IBD). The technique consists of repeatedly sampling new batches of case candidates from an enriched pool of persons meeting presumed minimal inclusion criteria, classifying the candidates by a machine-implementable candidate rule and by a human expert, and then updating the rule so that it captures new distinctions introduced by the expert. Iteration continues until an update results in an acceptably small number of changes to form a final case definition. The technique was applied to structured data and terms derived by natural language processing from text records in 29,336 adults with IBD. Over three rounds the technique led to rules with increasing predictive value, as the experts identified exceptions, and increasing sensitivity, as the experts identified missing inclusion criteria. In the final rule inclusion and exclusion terms were often keyed to an ALD onset date. When compared against clinical review in an independent test round, the derived final case definition had a sensitivity of 92% and a positive predictive value of 79%. An iterative technique of machine-supported expert review can yield a case definition that accommodates available data, incorporates pre-existing medical knowledge, is transparent and is open to continuous improvement. The expert updates to rules may be informative in themselves. In this limited setting, the final case definition for ALD performed better than previous, published attempts using expert definitions. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
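    The final case definition described above amounts to simple set algebra over the candidate rows: keep what the inclusion rule flags, drop what the exclusion rule catches, then restore what the reinclusion rule recovers. The sketch below expresses that logic generically; the record identifiers and the three rule functions are hypothetical placeholders, not the published ALD dictionaries.

```python
def apply_case_definition(records, included, excluded, reincluded):
    """Final rule: (inclusion - exclusion) union reinclusion."""
    inc = {r for r in records if included(r)}
    exc = {r for r in inc if excluded(r)}
    rei = {r for r in exc if reincluded(r)}
    return (inc - exc) | rei

# Hypothetical records and rules, for illustration only.
records = range(10)
cases = apply_case_definition(
    records,
    included=lambda r: r % 2 == 0,      # e.g. mentions an ALD-related term
    excluded=lambda r: r in {2, 4},     # e.g. term negated or in an unrelated context
    reincluded=lambda r: r == 4,        # e.g. exception identified by the expert
)
print(sorted(cases))                    # -> [0, 4, 6, 8]
```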

  3. Computer simulation of steady-state performance of air-to-air heat pumps

    Energy Technology Data Exchange (ETDEWEB)

    Ellison, R D; Creswick, F A

    1978-03-01

    A computer model by which the performance of air-to-air heat pumps can be simulated is described. The intended use of the model is to evaluate analytically the improvements in performance that can be effected by various component improvements. The model is based on a trio of independent simulation programs originated at the Massachusetts Institute of Technology Heat Transfer Laboratory. The three programs have been combined so that user intervention and decision making between major steps of the simulation are unnecessary. The program was further modified by substituting a new compressor model and adding a capillary tube model, both of which are described. Performance predicted by the computer model is shown to be in reasonable agreement with performance data observed in our laboratory. Planned modifications by which the utility of the computer model can be enhanced in the future are described. User instructions and a FORTRAN listing of the program are included.

  4. Unravelling the structure of matter on high-performance computers

    International Nuclear Information System (INIS)

    Kieu, T.D.; McKellar, B.H.J.

    1992-11-01

    The various phenomena and the different forms of matter in nature are believed to be the manifestation of only a handful of fundamental building blocks - the elementary particles - which interact through the four fundamental forces. In the study of the structure of matter at this level one has to consider forces which are not sufficiently weak to be treated as small perturbations to the system, an example of which is the strong force that binds the nucleons together. High-performance computers, both vector and parallel machines, have facilitated the necessary non-perturbative treatments. The principles and the techniques of computer simulations applied to Quantum Chromodynamics are explained; examples include the strong interactions, the calculation of the mass of nucleons and their decay rates. Some commercial and special-purpose high-performance machines for such calculations are also mentioned. 3 refs., 2 tabs

  5. Performance and Economic Analysis of Distributed Power Electronics in Photovoltaic Systems

    Energy Technology Data Exchange (ETDEWEB)

    Deline, C.; Marion, B.; Granata, J.; Gonzalez, S.

    2011-01-01

    Distributed electronics like micro-inverters and DC-DC converters can help recover mismatch and shading losses in photovoltaic (PV) systems. Under partially shaded conditions, the use of distributed electronics can recover between 15-40% of annual performance loss or more, depending on the system configuration and type of device used. Additional value-added features may also increase the benefit of using per-panel distributed electronics, including increased safety, reduced system design constraints and added monitoring and diagnostics. The economics of these devices will also become more favorable as production volume increases, and integration within the solar panel's junction box reduces part count and installation time. Some potential liabilities of per-panel devices include increased PV system cost, additional points of failure, and an insertion loss that may or may not offset performance gains under particular mismatch conditions.

  6. Electron Identification Performance and First Measurement of $W \to e + \nu$

    CERN Document Server

    Ueno, Rynichi

    2010-01-01

    The identification of electrons is important for the ATLAS experiment because electrons are present in many interactions of interest produced at the Large Hadron Collider. A deep knowledge of the detector, the electron identification algorithms, and the calibration techniques is crucial in order to accomplish this task. This thesis work presents a Monte Carlo study using electrons from the W → e + ν process to evaluate the performance of the ATLAS electromagnetic calorimeter. A significant number of electrons was produced in the early ATLAS collision runs at centre-of-mass energies of 900 GeV and 7 TeV between November 2009 and April 2010, and their properties are presented. Finally, a first measurement of the W → e + ν process with the ATLAS experiment was successfully accomplished with the first 1.0 nb⁻¹ of data at the 7 TeV collision energy, and the properties of the W candidates are also detailed.

  7. Five- and six-electron harmonium atoms: Highly accurate electronic properties and their application to benchmarking of approximate 1-matrix functionals

    Science.gov (United States)

    Cioslowski, Jerzy; Strasburger, Krzysztof

    2018-04-01

    Electronic properties of several states of the five- and six-electron harmonium atoms are obtained from large-scale calculations employing explicitly correlated basis functions. The high accuracy of the computed energies (including their components), natural spinorbitals, and their occupation numbers makes them suitable for testing, calibration, and benchmarking of approximate formalisms of quantum chemistry and solid state physics. In the case of the five-electron species, the availability of the new data for a wide range of confinement strengths ω allows for confirmation and generalization of the previously reached conclusions concerning the performance of the presently known approximations for the electron-electron repulsion energy in terms of the 1-matrix that are at the heart of density matrix functional theory (DMFT). On the other hand, the properties of the three low-lying states of the six-electron harmonium atom, computed at ω = 500 and ω = 1000, uncover deficiencies of the 1-matrix functionals not revealed by previous studies. In general, the previously published assessment of the present implementations of DMFT being of poor accuracy is found to hold. Extending the present work to harmonically confined systems with even more electrons is most likely counterproductive as the steep increase in computational cost required to maintain sufficient accuracy of the calculated properties is not expected to be matched by the benefits of additional information gathered from the resulting benchmarks.

  8. Pictorial review: Electron beam computed tomography and multislice spiral computed tomography for cardiac imaging

    International Nuclear Information System (INIS)

    Lembcke, Alexander; Hein, Patrick A.; Dohmen, Pascal M.; Klessen, Christian; Wiese, Till H.; Hoffmann, Udo; Hamm, Bernd; Enzweiler, Christian N.H.

    2006-01-01

    Electron beam computed tomography (EBCT) revolutionized cardiac imaging by combining a constant high temporal resolution with prospective ECG triggering. For years, EBCT was the primary technique for some non-invasive diagnostic cardiac procedures such as calcium scoring and non-invasive angiography of the coronary arteries. Multislice spiral computed tomography (MSCT) on the other hand significantly advanced cardiac imaging through high volume coverage, improved spatial resolution and retrospective ECG gating. This pictorial review will illustrate the basic differences between both modalities with special emphasis to their image quality. Several experimental and clinical examples demonstrate the strengths and limitations of both imaging modalities in an intraindividual comparison for a broad range of diagnostic applications such as coronary artery calcium scoring, coronary angiography including stent visualization as well as functional assessment of the cardiac ventricles and valves. In general, our examples indicate that EBCT suffers from a number of shortcomings such as limited spatial resolution and a low contrast-to-noise ratio. Thus, EBCT should now only be used in selected cases where a constant high temporal resolution is a crucial issue, such as dynamic (cine) imaging. Due to isotropic submillimeter spatial resolution and retrospective data selection MSCT seems to be the non-invasive method of choice for cardiac imaging in general, and for assessment of the coronary arteries in particular. However, technical developments are still needed to further improve the temporal resolution in MSCT and to reduce the substantial radiation exposure

  9. Characterization and switching performance of electron-beam controlled discharges

    International Nuclear Information System (INIS)

    Lowry, J.F.; Kline, L.E.; Heberlein, J.V.R.

    1986-01-01

    The electron-beam sustained discharge switch is an attractive concept for repetitive pulsed power switching because it has a demonstrated capability to interrupt direct current and because it is inherently scalable. The authors report on experiments with this type of switch in a 4-kV dc circuit. A wire-ion-plasma (WIP) electron-beam (e-beam) gun is used to irradiate and sustain a switch discharge with a 100-cm² cross-sectional area in 1 atm of N₂ or CH₄. Interruption of 8-10-μs pulses of up to 1.9 kA, and of 100-μs pulses of 150 A has been demonstrated in methane, and interruption against higher recovery voltages (11 kV) has been performed at 1.2 kA by adding series inductance to the circuit. These values represent power supply limitations rather than limitations of the switch itself. A comparison of the measured discharge characteristics with theoretical predictions shows that the measured switch conductivities are higher than the predicted values for given e-beam current values. A qualitative explanation for this observation is offered by considering the effects of electron reflection from the discharge anode and of nonlinear paths for the beam electrons across the discharge gap. The authors conclude that the switching performance of the e-beam controlled discharge switch corresponds to its design parameters, and that for a given switch size a lower voltage drop during the on time can be expected compared with the voltage drop predicted by previously published theory

  10. Modeling and performance analysis for composite network–compute service provisioning in software-defined cloud environments

    Directory of Open Access Journals (Sweden)

    Qiang Duan

    2015-08-01

    Full Text Available The crucial role of networking in Cloud computing calls for a holistic vision of both networking and computing systems that leads to composite network–compute service provisioning. Software-Defined Network (SDN) is a fundamental advancement in networking that enables network programmability. SDN and software-defined compute/storage systems form a Software-Defined Cloud Environment (SDCE) that may greatly facilitate composite network–compute service provisioning to Cloud users. Therefore, networking and computing systems need to be modeled and analyzed as composite service provisioning systems in order to obtain thorough understanding about service performance in SDCEs. In this paper, a novel approach for modeling composite network–compute service capabilities and a technique for evaluating composite network–compute service performance are developed. The analytic method proposed in this paper is general and agnostic to service implementation technologies; thus is applicable to a wide variety of network–compute services in SDCEs. The results obtained in this paper provide useful guidelines for federated control and management of networking and computing resources to achieve Cloud service performance guarantees.

  11. Multi-Language Programming Environments for High Performance Java Computing

    OpenAIRE

    Vladimir Getov; Paul Gray; Sava Mintchev; Vaidy Sunderam

    1999-01-01

    Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community’s reliance on established scientific packages. As a consequence, programmers of high‐performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java‐to‐C Interface (JCI) tool which provides ...

  12. Design and performance of TPC readout electronics for the NA49 experiment

    Energy Technology Data Exchange (ETDEWEB)

    Bieser, F. [Lawrence Berkeley Lab., CA (United States); Cooper, G. [Lawrence Berkeley Lab., CA (United States); Cwienk, W. [Max-Planck-Institut fuer Physik, Muenchen (Germany); Eckardt, V. [Max-Planck-Institut fuer Physik, Muenchen (Germany); Fessler, H. [Max-Planck-Institut fuer Physik, Muenchen (Germany); Fischer, H.G. [European Lab. for Particle Physics (CERN), Geneva (Switzerland); Gabler, F. [Frankfurt Univ. (Germany). Fachbereich 13 - Physik; Gornicki, E. [Institute of Nuclear Physics, Cracow (Poland); Hearn, W.E. [Lawrence Berkeley Lab., CA (United States); Heupke, W. [Frankfurt Univ. (Germany). Fachbereich 13 - Physik; Irmscher, D. [Lawrence Berkeley Lab., CA (United States); Jacobs, P. [Lawrence Berkeley Lab., CA (United States); Kleinfelder, S. [Lawrence Berkeley Lab., CA (United States); Lindenstruth, V. [Lawrence Berkeley Lab., CA (United States); Machowski, B. [Institute of Nuclear Physics, Cracow (Poland); Marks, K. [Lawrence Berkeley Lab., CA (United States); Milgrome, O. [Lawrence Berkeley Lab., CA (United States); Mock, A. [Max-Planck-Institut fuer Physik, Muenchen (Germany); Noggle, T. [Lawrence Berkeley Lab., CA (United States); Pimpl, W. [Max-Planck-Institut fuer Physik, Muenchen (Germany); Poskanzer, A.M. [Lawrence Berkeley Lab., CA (United States); Rauch, W. [Max-Planck-Institut fuer Physik, Muenchen (Germany); Renfordt, R. [European Lab. for Particle Physics (CERN), Geneva (Switzerland)]|[Frankfurt Univ. (Germany). Fachbereich 13 -Physik; Ritter, H.G. [Lawrence Berkeley Lab., CA (United States)]|[European Lab. for Particle Physics (CERN), Geneva (Switzerland); Roehrich, D. [Frankfurt Univ. (Germany). Fachbereich 13 - Physik; Rudolph, H. [Lawrence Berkeley Lab., CA (United States); Rueschmann, G.W. [Frankfurt Univ. (Germany). Fachbereich 13 - Physik; Schaefer, E. [Max-Planck-Institut fuer Physik, Muenchen (Germany); Seyboth, P. [Max-Planck-Institut fuer Physik, Muenchen (Germany); Seyerlein, J.

    1997-02-01

    Highly integrated readout electronics were developed and produced for the 182000 channels of the four TPCs of the NA49 heavy-ion fixed target experiment at the CERN SPS. The large number of channels, the high packing density and required cost minimization led to the choice of a custom electronics system. The requirements, the design and the performance of the electronics components are described. (orig.).

  13. Performance measurements in 3D ideal magnetohydrodynamic stability computations

    International Nuclear Information System (INIS)

    Anderson, D.V.; Cooper, W.A.; Gruber, R.; Schwenn, U.

    1989-10-01

    The 3D ideal magnetohydrodynamic stability code TERPSICHORE has been designed to take advantage of the vector and microtasking capabilities of the latest CRAY computers. To keep the number of operations small, the most efficient algorithms have been applied in each computational step. The program investigates the stability properties of fusion-reactor-relevant plasma configurations confined by magnetic fields. For a typical 3D HELIAS configuration that has been considered, we obtain an overall performance in excess of 1 Gflops on an eight-processor CRAY-YMP machine. (author) 3 figs., 1 tab., 11 refs

  14. Nuclear forces and high-performance computing: The perfect match

    International Nuclear Information System (INIS)

    Luu, T; Walker-Loud, A

    2009-01-01

    High-performance computing is now enabling the calculation of certain hadronic interaction parameters directly from Quantum Chromodynamics, the quantum field theory that governs the behavior of quarks and gluons and is ultimately responsible for the nuclear strong force. In this paper we briefly describe the state of the field and show how other aspects of hadronic interactions will be ascertained in the near future. We give estimates of computational requirements needed to obtain these goals, and outline a procedure for incorporating these results into the broader nuclear physics community.

  15. HIGH-PERFORMANCE COMPUTING FOR THE STUDY OF EARTH AND ENVIRONMENTAL SCIENCE MATERIALS USING SYNCHROTRON X-RAY COMPUTED MICROTOMOGRAPHY

    International Nuclear Information System (INIS)

    FENG, H.; JONES, K.W.; MCGUIGAN, M.; SMITH, G.J.; SPILETIC, J.

    2001-01-01

    Synchrotron x-ray computed microtomography (CMT) is a non-destructive method for examination of rock, soil, and other types of samples studied in the earth and environmental sciences. The high x-ray intensities of the synchrotron source make possible the acquisition of tomographic volumes at a high rate that requires the application of high-performance computing techniques for data reconstruction to produce the three-dimensional volumes, for their visualization, and for data analysis. These problems are exacerbated by the need to share information between collaborators at widely separated locations over both local and wide-area networks. A summary of the CMT technique and examples of applications are given here together with a discussion of the applications of high-performance computing methods to improve the experimental techniques and analysis of the data

  16. HIGH-PERFORMANCE COMPUTING FOR THE STUDY OF EARTH AND ENVIRONMENTAL SCIENCE MATERIALS USING SYNCHROTRON X-RAY COMPUTED MICROTOMOGRAPHY.

    Energy Technology Data Exchange (ETDEWEB)

    FENG,H.; JONES,K.W.; MCGUIGAN,M.; SMITH,G.J.; SPILETIC,J.

    2001-10-12

    Synchrotron x-ray computed microtomography (CMT) is a non-destructive method for examination of rock, soil, and other types of samples studied in the earth and environmental sciences. The high x-ray intensities of the synchrotron source make possible the acquisition of tomographic volumes at a high rate that requires the application of high-performance computing techniques for data reconstruction to produce the three-dimensional volumes, for their visualization, and for data analysis. These problems are exacerbated by the need to share information between collaborators at widely separated locations over both local and wide-area networks. A summary of the CMT technique and examples of applications are given here together with a discussion of the applications of high-performance computing methods to improve the experimental techniques and analysis of the data.

  17. High Performance Computing - Power Application Programming Interface Specification.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  18. Computational Methodologies for Developing Structure–Morphology–Performance Relationships in Organic Solar Cells: A Protocol Review

    KAUST Repository

    Do, Khanh; Ravva, Mahesh Kumar; Wang, Tonghui; Bredas, Jean-Luc

    2016-01-01

    We outline a step-by-step protocol that incorporates a number of theoretical and computational methodologies to evaluate the structural and electronic properties of pi-conjugated semiconducting materials in the condensed phase. Our focus

  19. High-performance blob-based iterative three-dimensional reconstruction in electron tomography using multi-GPUs

    Directory of Open Access Journals (Sweden)

    Wan Xiaohua

    2012-06-01

    Full Text Available Abstract Background Three-dimensional (3D) reconstruction in electron tomography (ET) has emerged as a leading technique to elucidate the molecular structures of complex biological specimens. Blob-based iterative methods are advantageous reconstruction methods for 3D reconstruction in ET, but demand huge computational costs. Multiple graphic processing units (multi-GPUs) offer an affordable platform to meet these demands. However, a synchronous communication scheme between multi-GPUs leads to idle GPU time, and a weighted matrix involved in iterative methods cannot be loaded into GPUs especially for large images due to the limited available memory of GPUs. Results In this paper we propose a multilevel parallel strategy combined with an asynchronous communication scheme and a blob-ELLR data structure to efficiently perform blob-based iterative reconstructions on multi-GPUs. The asynchronous communication scheme is used to minimize the idle GPU time so as to asynchronously overlap communications with computations. The blob-ELLR data structure only needs nearly 1/16 of the storage space in comparison with the ELLPACK-R (ELLR) data structure and yields significant acceleration. Conclusions Experimental results indicate that the multilevel parallel scheme combined with the asynchronous communication scheme and the blob-ELLR data structure allows efficient implementations of 3D reconstruction in ET on multi-GPUs.
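    The record above compares the blob-ELLR structure against the standard ELLPACK-R (ELLR) layout. As a point of reference only, the sketch below shows a minimal NumPy implementation of the plain ELLR layout (per-row lengths plus padded value/column arrays) and a matrix-vector product over it; the blob-specific compression that gives the reported 1/16 storage saving is not reproduced here, and the function names are illustrative.

```python
# Minimal sketch of the ELLPACK-R (ELLR) sparse-matrix layout referenced above.
# The blob-ELLR structure of the paper exploits blob symmetry to shrink this
# layout further; that optimisation is not reproduced here.
import numpy as np

def to_ellpack_r(dense):
    """Pack a dense matrix into ELLR arrays (values, column indices, row lengths)."""
    n_rows, _ = dense.shape
    row_len = np.count_nonzero(dense, axis=1)          # the per-row length array of ELLR
    max_len = row_len.max()
    values = np.zeros((n_rows, max_len))
    cols = np.zeros((n_rows, max_len), dtype=np.int64)
    for i in range(n_rows):
        nz = np.nonzero(dense[i])[0]
        values[i, :len(nz)] = dense[i, nz]
        cols[i, :len(nz)] = nz
    return values, cols, row_len

def ellpack_r_matvec(values, cols, row_len, x):
    """y = A @ x using the ELLR layout; each row stops at its own length."""
    y = np.zeros(values.shape[0])
    for i in range(values.shape[0]):
        for k in range(row_len[i]):
            y[i] += values[i, k] * x[cols[i, k]]
    return y

A = np.array([[4.0, 0.0, 1.0], [0.0, 3.0, 0.0], [2.0, 0.0, 5.0]])
v, c, rl = to_ellpack_r(A)
assert np.allclose(ellpack_r_matvec(v, c, rl, np.ones(3)), A @ np.ones(3))
```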

  20. Validation of an Improved Computer-Assisted Technique for Mining Free-Text Electronic Medical Records.

    Science.gov (United States)

    Duz, Marco; Marshall, John F; Parkin, Tim

    2017-06-29

    The use of electronic medical records (EMRs) offers opportunity for clinical epidemiological research. With large EMR databases, automated analysis processes are necessary but require thorough validation before they can be routinely used. The aim of this study was to validate a computer-assisted technique using commercially available content analysis software (SimStat-WordStat v.6 (SS/WS), Provalis Research) for mining free-text EMRs. The dataset used for the validation process included life-long EMRs from 335 patients (17,563 rows of data), selected at random from a larger dataset (141,543 patients, ~2.6 million rows of data) and obtained from 10 equine veterinary practices in the United Kingdom. The ability of the computer-assisted technique to detect rows of data (cases) of colic, renal failure, right dorsal colitis, and non-steroidal anti-inflammatory drug (NSAID) use in the population was compared with manual classification. The first step of the computer-assisted analysis process was the definition of inclusion dictionaries to identify cases, including terms identifying a condition of interest. Words in inclusion dictionaries were selected from the list of all words in the dataset obtained in SS/WS. The second step consisted of defining an exclusion dictionary, including combinations of words to remove cases erroneously classified by the inclusion dictionary alone. The third step was the definition of a reinclusion dictionary to reinclude cases that had been erroneously classified by the exclusion dictionary. Finally, cases obtained by the exclusion dictionary were removed from cases obtained by the inclusion dictionary, and cases from the reinclusion dictionary were subsequently reincluded using Rv3.0.2 (R Foundation for Statistical Computing, Vienna, Austria). Manual analysis was performed as a separate process by a single experienced clinician reading through the dataset once and classifying each row of data based on the interpretation of the free
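    The validation record above describes a three-dictionary workflow (inclusion, exclusion, reinclusion) whose final case selection was performed as set operations in R. The following Python sketch illustrates the same set logic on a toy dataset; the example rows and dictionary terms are invented for illustration and are not taken from the study.

```python
# Hedged sketch of the dictionary-based case selection described above:
# final cases = (inclusion hits - exclusion hits) + re-inclusion hits.
# The study performed this step in R; the terms below are illustrative only.
rows = {
    1: "horse presented with colic, resolved with analgesia",
    2: "no evidence of colic on examination",
    3: "history of colic, ruled out after work-up, NSAID given",
    4: "recurrent colic despite previous negative work-up",
}

inclusion = {"colic"}
exclusion = {"no evidence of colic", "ruled out"}
reinclusion = {"recurrent colic"}

def hits(dictionary, text):
    return any(term in text for term in dictionary)

included = {i for i, t in rows.items() if hits(inclusion, t)}
excluded = {i for i, t in rows.items() if hits(exclusion, t)}
reincluded = {i for i, t in rows.items() if hits(reinclusion, t)}

cases = (included - excluded) | reincluded
print(sorted(cases))   # -> [1, 4]
```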

  1. Tutorial on Computing: Technological Advances, Social Implications, Ethical and Legal Issues

    OpenAIRE

    Debnath, Narayan

    2012-01-01

    Computing and information technology have made significant advances. The use of computing and technology is a major aspect of our lives, and this use will only continue to increase in our lifetime. Electronic digital computers and high performance communication networks are central to contemporary information technology. The computing applications in a wide range of areas including business, communications, medical research, transportation, entertainments, and education are transforming lo...

  2. Automated processing of dynamic properties of intraventricular pressure by computer program and electronic circuit.

    Science.gov (United States)

    Adler, D; Mahler, Y

    1980-04-01

    A procedure for automatic detection and digital processing of the maximum first derivative of the intraventricular pressure (dp/dtmax), time to dp/dtmax(t - dp/dt) and beat-to-beat intervals have been developed. The procedure integrates simple electronic circuits with a short program using a simple algorithm for the detection of the points of interest. The tasks of differentiating the pressure signal and detecting the onset of contraction were done by electronics, while the tasks of finding the values of dp/dtmax, t - dp/dt, beat-to-beat intervals and all computations needed were done by software. Software/hardware 'trade off' considerations and the accuracy and reliability of the system are discussed.
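    As a rough illustration of the software side described above, the sketch below computes dp/dtmax, the time to dp/dtmax and the beat-to-beat interval from a sampled pressure trace, assuming the contraction-onset times are already supplied (by the analogue electronics in the original system). The sampling rate, synthetic waveform and function name are assumptions made for the example.

```python
# Hedged sketch of the software tasks described above: given a sampled
# intraventricular pressure trace and the contraction-onset sample indices,
# compute dp/dt_max, the time to dp/dt_max, and the beat-to-beat interval.
import numpy as np

def beat_parameters(pressure, onsets, fs):
    """pressure: 1-D array (mmHg); onsets: onset sample indices; fs: sampling rate (Hz)."""
    dpdt = np.gradient(pressure) * fs                # first derivative in mmHg/s
    results = []
    for start, end in zip(onsets[:-1], onsets[1:]):
        k = start + int(np.argmax(dpdt[start:end]))  # sample of peak dp/dt within the beat
        results.append({
            "dpdt_max": dpdt[k],                     # peak rate of pressure rise
            "t_dpdt_max": (k - start) / fs,          # time from onset to dp/dt_max (s)
            "beat_interval": (end - start) / fs,     # beat-to-beat interval (s)
        })
    return results

# Synthetic single-beat example at 1 kHz sampling.
fs = 1000
t = np.arange(0, 2.0, 1 / fs)
pressure = 10 + 90 * np.clip(np.sin(2 * np.pi * t), 0, None) ** 2
print(beat_parameters(pressure, onsets=[0, 1000], fs=fs))
```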

  3. Simulated performance of the in-beam conversion-electron spectrometer, SPICE

    Energy Technology Data Exchange (ETDEWEB)

    Ketelhut, S., E-mail: ketelhut@triumf.ca [TRIUMF, 4004 Wesbrook Mall, Vancouver, British Columbia, Canada V6T 2A3 (Canada); Evitts, L.J.; Garnsworthy, A.B.; Bolton, C.; Ball, G.C.; Churchman, R. [TRIUMF, 4004 Wesbrook Mall, Vancouver, British Columbia, Canada V6T 2A3 (Canada); Dunlop, R. [Department of Physics, University of Guelph, Guelph, Ontario, Canada N1G 2W1 (Canada); Hackman, G.; Henderson, R.; Moukaddam, M. [TRIUMF, 4004 Wesbrook Mall, Vancouver, British Columbia, Canada V6T 2A3 (Canada); Rand, E.T.; Svensson, C.E. [Department of Physics, University of Guelph, Guelph, Ontario, Canada N1G 2W1 (Canada); Witmer, J. [TRIUMF, 4004 Wesbrook Mall, Vancouver, British Columbia, Canada V6T 2A3 (Canada)

    2014-07-01

    The SPICE spectrometer is a new in-beam electron spectrometer designed to operate in conjunction with the TIGRESS HPGe Clover array at TRIUMF-ISAC. The spectrometer consists of a large area, annular, segmented lithium-drifted silicon electron detector shielded from the target by a photon shield. A permanent magnetic lens directs electrons around the photon shield to the detector. Experiments will be performed utilising Coulomb excitation, inelastic-scattering, transfer and fusion–evaporation reactions using stable and radioactive ion beams with suitable heavy-ion detection. Good detection efficiency can be achieved in a large energy range up to 3500 keV electron energy using several magnetic lens designs which are quickly interchangeable. COMSOL and Geant4 simulations have been used to maximise the detection efficiency. In addition, the simulations have guided the design of components to minimise the contributions from various sources of backgrounds.

  4. Computational Analysis on Performance of Thermal Energy Storage (TES) Diffuser

    Science.gov (United States)

    Adib, M. A. H. M.; Adnan, F.; Ismail, A. R.; Kardigama, K.; Salaam, H. A.; Ahmad, Z.; Johari, N. H.; Anuar, Z.; Azmi, N. S. N.

    2012-09-01

    Application of a thermal energy storage (TES) system reduces cost and energy consumption. The performance of the overall operation is affected by the diffuser design. In this study, computational analysis is used to determine the thermocline thickness. Three-dimensional simulations with different tank height-to-diameter ratios (HD), diffuser openings and different numbers of diffuser holes are investigated. Simulations of medium-HD tanks with a double-ring octagonal diffuser show good thermocline behavior and a clear distinction between warm and cold water. The results show that the best thermocline-thickness performance during 50% charging time occurs in a medium tank with a height-to-diameter ratio of 4.0 and a double-ring octagonal diffuser with 48 holes (9 mm opening ~ 60%), which is acceptable compared with diffusers with 6 mm (~40%) and 12 mm (~80%) openings. The conclusion is that the computational analysis method is very useful in studying the performance of thermal energy storage (TES).

  5. Some algorithms for the solution of the symmetric eigenvalue problem on a multiprocessor electronic computer

    International Nuclear Information System (INIS)

    Molchanov, I.N.; Khimich, A.N.

    1984-01-01

    This article shows how a reflection method can be used to find the eigenvalues of a matrix by transforming the matrix to tridiagonal form. The method of conjugate gradients is used to find the smallest eigenvalue and the corresponding eigenvector of symmetric positive-definite band matrices. Topics considered include the computational scheme of the reflection method, the organization of parallel calculations by the reflection method, the computational scheme of the conjugate gradient method, the organization of parallel calculations by the conjugate gradient method, and the effectiveness of parallel algorithms. It is concluded that the overall effectiveness of multiprocessor electronic computers can be increased either by letting newly available processors work on a new problem in multiprocessor mode, or by improving the coefficient of uniform partitioning of the original information.
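    For readers unfamiliar with the reflection method mentioned above, the following serial NumPy sketch reduces a small symmetric matrix to tridiagonal form by Householder reflections while preserving its eigenvalues; the parallel organisation that is the subject of the article is not addressed, and the implementation details are illustrative.

```python
# Serial sketch of the reflection (Householder) method mentioned above:
# a symmetric matrix is reduced to tridiagonal form by successive reflections,
# after which its eigenvalues can be found cheaply.
import numpy as np

def householder_tridiagonalize(A):
    A = A.astype(float).copy()
    n = A.shape[0]
    for k in range(n - 2):
        x = A[k + 1:, k]
        v = x.copy()
        v[0] += np.sign(x[0] if x[0] != 0 else 1.0) * np.linalg.norm(x)
        if np.linalg.norm(v) == 0:
            continue
        v /= np.linalg.norm(v)
        H = np.eye(n)
        H[k + 1:, k + 1:] -= 2.0 * np.outer(v, v)    # reflection in the trailing block
        A = H @ A @ H                                # similarity transform keeps eigenvalues
    return A

A = np.array([[4.0, 1.0, 2.0], [1.0, 3.0, 0.5], [2.0, 0.5, 5.0]])
T = householder_tridiagonalize(A)
print(np.round(T, 6))                                # tridiagonal form of A
print(np.allclose(np.linalg.eigvalsh(T), np.linalg.eigvalsh(A)))   # eigenvalues unchanged
```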

  6. Web-Based Job Submission Interface for the GAMESS Computational Chemistry Program

    Science.gov (United States)

    Perri, M. J.; Weber, S. H.

    2014-01-01

    A Web site is described that facilitates use of the free computational chemistry software: General Atomic and Molecular Electronic Structure System (GAMESS). Its goal is to provide an opportunity for undergraduate students to perform computational chemistry experiments without the need to purchase expensive software.

  7. Synaptic electronics: materials, devices and applications.

    Science.gov (United States)

    Kuzum, Duygu; Yu, Shimeng; Wong, H-S Philip

    2013-09-27

    In this paper, the recent progress of synaptic electronics is reviewed. The basics of biological synaptic plasticity and learning are described. The material properties and electrical switching characteristics of a variety of synaptic devices are discussed, with a focus on the use of synaptic devices for neuromorphic or brain-inspired computing. Performance metrics desirable for large-scale implementations of synaptic devices are illustrated. A review of recent work on targeted computing applications with synaptic devices is presented.

  8. Synaptic electronics: materials, devices and applications

    International Nuclear Information System (INIS)

    Kuzum, Duygu; Yu, Shimeng; Philip Wong, H-S

    2013-01-01

    In this paper, the recent progress of synaptic electronics is reviewed. The basics of biological synaptic plasticity and learning are described. The material properties and electrical switching characteristics of a variety of synaptic devices are discussed, with a focus on the use of synaptic devices for neuromorphic or brain-inspired computing. Performance metrics desirable for large-scale implementations of synaptic devices are illustrated. A review of recent work on targeted computing applications with synaptic devices is presented. (topical review)

  9. Full surface examination of small spheres with a computer controlled scanning electron microscope

    International Nuclear Information System (INIS)

    Ward, C.M.; Willenborg, D.L.; Montgomery, K.L.

    1979-01-01

    This report discusses a computer-automated stage and Scanning Electron Microscopy (SEM) system for detecting defects in glass spheres for inertial confinement laser fusion experiments. This system detects submicron defects and permits inclusion of acceptable spheres in targets after examination. The stage used to examine and manipulate the spheres through 4π steradians is described. Primary image recording is made on a raster-scanning video disc. The need for SEM stability and methods of achieving it are discussed.

  10. High performance stream computing for particle beam transport simulations

    International Nuclear Information System (INIS)

    Appleby, R; Bailey, D; Higham, J; Salt, M

    2008-01-01

    Understanding modern particle accelerators requires simulating charged particle transport through the machine elements. These simulations can be very time consuming due to the large number of particles and the need to consider many turns of a circular machine. Stream computing offers an attractive way to dramatically improve the performance of such simulations by calculating the simultaneous transport of many particles using dedicated hardware. Modern Graphics Processing Units (GPUs) are powerful and affordable stream computing devices. The results of simulations of particle transport through the booster-to-storage-ring transfer line of the DIAMOND synchrotron light source using an NVidia GeForce 7900 GPU are compared to the standard transport code MAD. It is found that particle transport calculations are suitable for stream processing and large performance increases are possible. The accuracy and potential speed gains are compared and the prospects for future work in the area are discussed
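    The data-parallel pattern behind this kind of stream computation can be illustrated with a simple linear-optics example: the same element maps are applied to a large array of particle coordinates in one operation. The drift and thin-quadrupole parameters below are invented and are not those of the DIAMOND booster-to-storage-ring line.

```python
# Hedged sketch of the data-parallel pattern described above: the same linear
# transport maps (a drift and a thin quadrupole, one transverse plane) are
# applied simultaneously to many particles, which is what makes the problem
# attractive for stream/GPU processing. All parameters are illustrative.
import numpy as np

def drift(L):
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

# State is (x, x') per particle, stored as a 2 x N array.
rng = np.random.default_rng(0)
particles = rng.normal(scale=[[1e-3], [1e-4]], size=(2, 100_000))

beamline = [drift(2.0), thin_quad(5.0), drift(2.0), thin_quad(-5.0), drift(2.0)]
for M in beamline:
    particles = M @ particles          # all particles advanced in one matrix product

print(particles.std(axis=1))           # beam sizes at the end of the line
```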

  11. Computational modelling of expressive music performance in hexaphonic guitar

    OpenAIRE

    Siquier, Marc

    2017-01-01

    Computational modelling of expressive music performance has been widely studied in the past. While previous work in this area has been mainly focused on classical piano music, there has been very little work on guitar music, and such work has focused on monophonic guitar playing. In this work, we present a machine learning approach to automatically generate expressive performances from non expressive music scores for polyphonic guitar. We treated guitar as an hexaphonic instrument, obtaining ...

  12. A performance model for the communication in fast multipole methods on high-performance computing platforms

    KAUST Repository

    Ibeid, Huda; Yokota, Rio; Keyes, David E.

    2016-01-01

    model and the actual communication time on four high-performance computing (HPC) systems, when latency, bandwidth, network topology, and multicore penalties are all taken into account. To our knowledge, this is the first formal characterization

  13. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    Science.gov (United States)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that supports the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enable accurate climate projections. NCI makes available 10+ PB major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), a HPC class 3000 core OpenStack cloud system and several highly connected large scale and high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  14. Transformational silicon electronics

    KAUST Repository

    Rojas, Jhonathan Prieto

    2014-02-25

    In today's traditional electronics such as in computers or in mobile phones, billions of high-performance, ultra-low-power devices are neatly integrated in extremely compact areas on rigid and brittle but low-cost bulk monocrystalline silicon (100) wafers. Ninety percent of global electronics are made up of silicon. Therefore, we have developed a generic low-cost regenerative batch fabrication process to transform such wafers full of devices into thin (5 μm), mechanically flexible, optically semitransparent silicon fabric with devices, then recycling the remaining wafer to generate multiple silicon fabric with chips and devices, ensuring low-cost and optimal utilization of the whole substrate. We show monocrystalline, amorphous, and polycrystalline silicon and silicon dioxide fabric, all from low-cost bulk silicon (100) wafers with the semiconductor industry's most advanced high-κ/metal gate stack based high-performance, ultra-low-power capacitors, field effect transistors, energy harvesters, and storage to emphasize the effectiveness and versatility of this process to transform traditional electronics into flexible and semitransparent ones for multipurpose applications. © 2014 American Chemical Society.

  15. Evaluation of runaway-electron effects on plasma-facing components for NET

    Science.gov (United States)

    Bolt, H.; Calén, H.

    1991-03-01

    Runaway electrons which are generated during disruptions can cause serious damage to plasma facing components in a next generation device like NET. A study was performed to quantify the response of NET plasma facing components to runaway-electron impact. For the determination of the energy deposition in the component materials, Monte Carlo computations were performed. Since the subsurface metal structures can be strongly heated under runaway-electron impact, damage threshold values for the thermal excursions were derived from the computed results. These damage thresholds are strongly dependent on the materials selection and the component design. For a carbon-molybdenum divertor with 10 and 20 mm carbon armour thickness and 1 degree electron incidence, the damage thresholds are 100 MJ/m2 and 220 MJ/m2. The thresholds for a carbon-copper divertor under the same conditions are about 50% lower. On the first wall, damage is anticipated for energy depositions above 180 MJ/m2.

  16. Quantitative Test of the Evolution of Geant4 Electron Backscattering Simulation

    CERN Document Server

    Basaglia, Tullio; Hoff, Gabriela; Kim, Chan Hyeong; Kim, Sung Hun; Pia, Maria Grazia; Saracco, Paolo

    2016-01-01

    Evolutions of Geant4 code have affected the simulation of electron backscattering with respect to previously published results. Their effects are quantified by analyzing the compatibility of the simulated electron backscattering fraction with a large collection of experimental data for a wide set of physics configuration options available in Geant4. Special emphasis is placed on two electron scattering implementations first released in Geant4 version 10.2: the Goudsmit-Saunderson multiple scattering model and a single Coulomb scattering model based on Mott cross section calculation. The new Goudsmit-Saunderson multiple scattering model appears to perform equally or less accurately than the model implemented in previous Geant4 versions, depending on the electron energy. The new Coulomb scattering model was flawed from a physics point of view, but computationally fast in Geant4 version 10.2; the physics correction released in Geant4 version 10.2p01 severely degrades its computational performance. Evolutions in ...

  17. Building fast, reliable, and adaptive software for computational science

    International Nuclear Information System (INIS)

    Rendell, A P; Antony, J; Armstrong, W; Janes, P; Yang, R

    2008-01-01

    Building fast, reliable, and adaptive software is a constant challenge for computational science, especially given recent developments in computer architecture. This paper outlines some of our efforts to address these three issues in the context of computational chemistry. First, a simple linear performance model that can be used to model and predict the performance of Hartree-Fock calculations is discussed. Second, the use of interval arithmetic to assess the numerical reliability of the sort of integrals used in electronic structure methods is presented. Third, the use of dynamic code modification as part of a framework to support adaptive software is outlined.
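    To make the interval-arithmetic point above concrete, here is a minimal sketch that carries lower and upper bounds through additions and multiplications so that the result brackets the true value; a production interval library would also use directed rounding, which is omitted here.

```python
# Minimal sketch of the interval-arithmetic idea mentioned above: propagate a
# lower and upper bound through every operation so the final interval bounds
# the true result. Directed rounding is omitted for brevity.
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = (self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi)
        return Interval(min(products), max(products))

# Propagate two uncertain quantities through a product and a sum:
a = Interval(0.999999, 1.000001)
b = Interval(-0.000002, 0.000002)
print(a * b + a)        # the width of the result bounds the accumulated error
```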

  18. The path toward HEP High Performance Computing

    CERN Document Server

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of the peak one. Although several successful attempts have been made to port selected codes on GPUs, no major HEP code suite has a 'High Performance' implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try making the best usage of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on th...

  19. Summary Report of Working Group 2: Computation

    International Nuclear Information System (INIS)

    Stoltz, P. H.; Tsung, R. S.

    2009-01-01

    The working group on computation addressed three physics areas: (i) plasma-based accelerators (laser-driven and beam-driven), (ii) high gradient structure-based accelerators, and (iii) electron beam sources and transport [1]. Highlights of the talks in these areas included new models of breakdown on the microscopic scale, new three-dimensional multipacting calculations with both finite difference and finite element codes, and detailed comparisons of new electron gun models with standard models such as PARMELA. The group also addressed two areas of advances in computation: (i) new algorithms, including simulation in a Lorentz-boosted frame that can reduce computation time by orders of magnitude, and (ii) new hardware architectures, like graphics processing units and Cell processors, that promise dramatic increases in computing power. Highlights of the talks in these areas included results from the first large-scale parallel finite element particle-in-cell (PIC) code, many order-of-magnitude speedups, and details of porting the VPIC code to the Roadrunner supercomputer. The working group featured two plenary talks, one by Brian Albright of Los Alamos National Laboratory on the performance of the VPIC code on the Roadrunner supercomputer, and one by David Bruhwiler of Tech-X Corporation on recent advances in computation for advanced accelerators. Highlights of the talk by Albright included the first one-trillion-particle simulations, a sustained performance of 0.3 petaflops, and an eight times speedup of science calculations, including back-scatter in laser-plasma interaction. Highlights of the talk by Bruhwiler included simulations of 10 GeV laser wakefield accelerator stages including external injection, and new developments in electromagnetic simulations of electron guns using finite difference and finite element approaches.

  20. Performance evaluation for compressible flow calculations on five parallel computers of different architectures

    International Nuclear Information System (INIS)

    Kimura, Toshiya.

    1997-03-01

    A two-dimensional explicit Euler solver has been implemented for five MIMD parallel computers of different machine architectures at the Center for Promotion of Computational Science and Engineering of the Japan Atomic Energy Research Institute. These parallel computers are the Fujitsu VPP300, NEC SX-4, CRAY T94, IBM SP2, and Hitachi SR2201. The code was parallelized by several parallelization methods, and a typical compressible flow problem has been calculated for different grid sizes, changing the number of processors. Their effective performances for parallel calculations, such as calculation speed, speed-up ratio and parallel efficiency, have been investigated and evaluated. The communication time among processors has also been measured and evaluated. As a result, the differences in performance and characteristics between vector-parallel and scalar-parallel computers can be pointed out, providing basic data for the efficient use of parallel computers and for large-scale CFD simulations on parallel computers. (author)
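    For reference, the speed-up ratio and parallel efficiency quoted above are simple functions of the measured wall-clock times; the short sketch below computes them from illustrative (made-up) timings.

```python
# Quick illustration of the parallel metrics mentioned above:
# speed-up S(p) = T(1)/T(p) and parallel efficiency E(p) = S(p)/p.
# The timing numbers are invented for illustration.
timings = {1: 980.0, 2: 510.0, 4: 270.0, 8: 150.0, 16: 95.0}   # seconds per run

t1 = timings[1]
for p, tp in sorted(timings.items()):
    speedup = t1 / tp
    efficiency = speedup / p
    print(f"p={p:3d}  speed-up={speedup:5.2f}  efficiency={efficiency:5.2f}")
```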

  1. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony

    1966-01-01

    Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses the automatic electronic digital computers, symbolic language, Reverse Polish Notation, and Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten

  2. Desiderata for computable representations of electronic health records-driven phenotype algorithms.

    Science.gov (United States)

    Mo, Huan; Thompson, William K; Rasmussen, Luke V; Pacheco, Jennifer A; Jiang, Guoqian; Kiefer, Richard; Zhu, Qian; Xu, Jie; Montague, Enid; Carrell, David S; Lingren, Todd; Mentch, Frank D; Ni, Yizhao; Wehbe, Firas H; Peissig, Peggy L; Tromp, Gerard; Larson, Eric B; Chute, Christopher G; Pathak, Jyotishman; Denny, Joshua C; Speltz, Peter; Kho, Abel N; Jarvik, Gail P; Bejan, Cosmin A; Williams, Marc S; Borthwick, Kenneth; Kitchner, Terrie E; Roden, Dan M; Harris, Paul A

    2015-11-01

    Electronic health records (EHRs) are increasingly used for clinical and translational research through the creation of phenotype algorithms. Currently, phenotype algorithms are most commonly represented as noncomputable descriptive documents and knowledge artifacts that detail the protocols for querying diagnoses, symptoms, procedures, medications, and/or text-driven medical concepts, and are primarily meant for human comprehension. We present desiderata for developing a computable phenotype representation model (PheRM). A team of clinicians and informaticians reviewed common features for multisite phenotype algorithms published in PheKB.org and existing phenotype representation platforms. We also evaluated well-known diagnostic criteria and clinical decision-making guidelines to encompass a broader category of algorithms. We propose 10 desired characteristics for a flexible, computable PheRM: (1) structure clinical data into queryable forms; (2) recommend use of a common data model, but also support customization for the variability and availability of EHR data among sites; (3) support both human-readable and computable representations of phenotype algorithms; (4) implement set operations and relational algebra for modeling phenotype algorithms; (5) represent phenotype criteria with structured rules; (6) support defining temporal relations between events; (7) use standardized terminologies and ontologies, and facilitate reuse of value sets; (8) define representations for text searching and natural language processing; (9) provide interfaces for external software algorithms; and (10) maintain backward compatibility. A computable PheRM is needed for true phenotype portability and reliability across different EHR products and healthcare systems. These desiderata are a guide to inform the establishment and evolution of EHR phenotype algorithm authoring platforms and languages. © The Author 2015. Published by Oxford University Press on behalf of the American Medical
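    Desiderata (1), (4) and (5) above call for queryable clinical data and phenotype algorithms expressed as set operations over structured rules. The toy Python sketch below illustrates that idea; the tables, codes and threshold are invented and do not represent any published phenotype definition.

```python
# Hedged sketch of desiderata (1), (4) and (5) above: clinical data held in
# queryable tables, and a phenotype expressed as set operations over
# structured rules. Table contents and codes are illustrative only.
diagnoses   = {("p1", "E11.9"), ("p2", "E11.9"), ("p3", "I10")}   # (patient, ICD-10)
medications = {("p1", "metformin"), ("p3", "metformin")}
labs        = {("p2", "hba1c", 8.1), ("p1", "hba1c", 6.2)}        # (patient, test, value)

def with_code(table, code):
    return {row[0] for row in table if row[1] == code}

def lab_above(table, test, threshold):
    return {p for p, t, v in table if t == test and v > threshold}

# Phenotype rule: diagnosis code OR (metformin AND elevated HbA1c)
phenotype = with_code(diagnoses, "E11.9") | (with_code(medications, "metformin")
                                             & lab_above(labs, "hba1c", 6.5))
print(sorted(phenotype))    # -> ['p1', 'p2']
```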

  3. The Bravyi-Kitaev transformation for quantum computation of electronic structure

    Science.gov (United States)

    Seeley, Jacob T.; Richard, Martin J.; Love, Peter J.

    2012-12-01

    Quantum simulation is an important application of future quantum computers with applications in quantum chemistry, condensed matter, and beyond. Quantum simulation of fermionic systems presents a specific challenge. The Jordan-Wigner transformation allows for representation of a fermionic operator by O(n) qubit operations. Here, we develop an alternative method of simulating fermions with qubits, first proposed by Bravyi and Kitaev [Ann. Phys. 298, 210 (2002), 10.1006/aphy.2002.6254; e-print arXiv:quant-ph/0003137v2], that reduces the simulation cost to O(log n) qubit operations for one fermionic operation. We apply this new Bravyi-Kitaev transformation to the task of simulating quantum chemical Hamiltonians, and give a detailed example for the simplest possible case of molecular hydrogen in a minimal basis. We show that the quantum circuit for simulating a single Trotter time step of the Bravyi-Kitaev derived Hamiltonian for H2 requires fewer gate applications than the equivalent circuit derived from the Jordan-Wigner transformation. Since the scaling of the Bravyi-Kitaev method is asymptotically better than the Jordan-Wigner method, this result for molecular hydrogen in a minimal basis demonstrates the superior efficiency of the Bravyi-Kitaev method for all quantum computations of electronic structure.
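    As background for the comparison above, the Jordan-Wigner mapping turns a fermionic creation operator on mode j into a Pauli string with O(n) single-qubit factors. The sketch below builds that string explicitly for a three-mode example and checks the fermionic anticommutation relation; the Bravyi-Kitaev encoding itself, which achieves O(log n) factors, is not reproduced here, and the phase convention used is one of several in the literature.

```python
# Hedged sketch of the Jordan-Wigner mapping discussed above: the fermionic
# creation operator on mode j becomes the Pauli string
# Z_0 ... Z_{j-1} (X_j - iY_j)/2, i.e. O(n) single-qubit factors.
import numpy as np

I = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def kron_all(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def jw_creation(j, n):
    """Matrix of the creation operator on mode j (n modes) under Jordan-Wigner."""
    ops = [Z] * j + [(X - 1j * Y) / 2] + [I] * (n - j - 1)
    return kron_all(ops)

n = 3
a0, a1 = jw_creation(0, n), jw_creation(1, n)
# The fermionic anticommutation relation between creation operators is reproduced:
print(np.allclose(a0 @ a1 + a1 @ a0, 0))   # True
```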

  4. Using high performance interconnects in a distributed computing and mass storage environment

    International Nuclear Information System (INIS)

    Ernst, M.

    1994-01-01

    Detector Collaborations of the HERA Experiments typically involve more than 500 physicists from a few dozen institutes. These physicists require access to large amounts of data in a fully transparent manner. Important issues include Distributed Mass Storage Management Systems in a Distributed and Heterogeneous Computing Environment. At the very center of a distributed system, including tens of CPUs and network attached mass storage peripherals, are the communication links. Today scientists are witnessing an integration of computing and communication technology with the 'network' becoming the computer. This contribution reports on a centrally operated computing facility for the HERA Experiments at DESY, including Symmetric Multiprocessor Machines (84 Processors), presently more than 400 GByte of magnetic disk and 40 TB of automated tape storage, tied together by a HIPPI 'network'. Focussing on the High Performance Interconnect technology, details will be provided about the HIPPI based 'Backplane' configured around a 20 Gigabit/s Multi Media Router and the performance and efficiency of the related computer interfaces.

  5. Computational Analysis on Performance of Thermal Energy Storage (TES) Diffuser

    International Nuclear Information System (INIS)

    Adib, M A H M; Ismail, A R; Kardigama, K; Salaam, H A; Ahmad, Z; Johari, N H; Anuar, Z; Azmi, N S N; Adnan, F

    2012-01-01

    Application of a thermal energy storage (TES) system reduces cost and energy consumption. The performance of the overall operation is affected by the diffuser design. In this study, computational analysis is used to determine the thermocline thickness. Three-dimensional simulations with different tank height-to-diameter ratios (HD), diffuser openings and different numbers of diffuser holes are investigated. Simulations of medium-HD tanks with a double-ring octagonal diffuser show good thermocline behavior and a clear distinction between warm and cold water. The results show that the best thermocline-thickness performance during 50% charging time occurs in a medium tank with a height-to-diameter ratio of 4.0 and a double-ring octagonal diffuser with 48 holes (9 mm opening ∼ 60%), which is acceptable compared with diffusers with 6 mm (∼40%) and 12 mm (∼80%) openings. The conclusion is that the computational analysis method is very useful in studying the performance of thermal energy storage (TES).

  6. Performances of screen-printing silver thick films: Rheology, morphology, mechanical and electronic properties

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Jung-Shiun; Liang, Jau-En; Yi, Han-Liou [Department of Chemical Engineering, National Chung Cheng University, Chia Yi 621, Taiwan, ROC (China); Chen, Shu-Hua [China Steel Corporation, Kaohsiung City 806, Taiwan, ROC (China); Hua, Chi-Chung, E-mail: chmcch@ccu.edu.tw [Department of Chemical Engineering, National Chung Cheng University, Chia Yi 621, Taiwan, ROC (China)

    2016-06-15

    Numerous recent applications with inorganic solar cells and energy storage electrodes make use of silver pastes through processes like screen-printing to fabricate fine conductive lines for electron conducting purpose. To date, however, there have been few studies that systematically revealed the properties of the silver paste in relation to the mechanical and electronic performances of screen-printing thick films. In this work, the rheological properties of a series of model silver pastes made of silver powders of varying size (0.9, 1.3, and 1.5 μm) and shape (irregular and spherical) were explored, and the results were systematically correlated with the morphological feature (scanning electron microscopy, SEM) and mechanical (peeling test) and electronic (transmission line method, TLM) performances of screen-printing dried or sintered thick films. We provided evidence of generally intimate correlations between the powder dispersion state in silver pastes—which is shown to be well captured by the rheological protocols employed herein—and the performances of screen-printing thick films. Overall, this study suggests the powder dispersion state and the associated phase behavior of a paste sample can significantly impact not only the morphological and electronic but also mechanical performances of screen-printing thick films, and, in future perspectives, a proper combination of silver powders of different sizes and even shapes could help reconcile quality and stability of an optimum silver paste. - Highlights: • Powder dispersion correlates well with screen-printing thick film performances. • Rheological fingerprints can be utilized to fathom the powder dispersion state. • Good polymer-powder interactions in the paste ensure good powder dispersion. • Time-dependent gel-like viscoelastic features are found with optimum silver pastes. • The size and shape of functional powder affect the dispersion and film performances.

  7. Design and performance of a Tesla transformer type relativistic electron beam generator

    International Nuclear Information System (INIS)

    Jain, K.K.; Chennareddy, D.; John, P.I.; Saxena, Y.C.

    1986-01-01

    A relativistic electron beam generator driven by an air core Tesla transformer is described. The Tesla transformer circuit analysis is outlined and computational results are presented for the case when the coaxial water line has finite resistance. The transformer has a coupling coefficient of 0.56 and a step-up ratio of 25. The Tesla transformer can provide 800 kV at the peak of the second half cycle of the secondary output voltage and has been tested up to 600 kV. A 100-200 keV, 15-20 kA electron beam having 150 ns pulse width has been obtained. The beam generator described is being used for the beam injection into a toroidal device BETA. (author). 20 refs. 9 figures

  8. Photovoltaic Shading Testbed for Module-Level Power Electronics: 2016 Performance Data Update

    Energy Technology Data Exchange (ETDEWEB)

    Deline, Chris [National Renewable Energy Lab. (NREL), Golden, CO (United States); Meydbray, Jenya [PV Evolution Labs (PVEL), Davis, CA (United States); Donovan, Matt [PV Evolution Labs (PVEL), Davis, CA (United States)

    2016-09-01

    The 2012 NREL report 'Photovoltaic Shading Testbed for Module-Level Power Electronics' provides a standard methodology for estimating the performance benefit of distributed power electronics under partial shading conditions. Since the release of the report, experiments have been conducted for a number of products and for different system configurations. Drawing from these experiences, updates to the test and analysis methods are recommended. Proposed changes in data processing have the benefit of reducing the sensitivity to measurement errors and weather variability, as well as bringing the updated performance score in line with measured and simulated values of the shade recovery benefit of distributed PV power electronics. Also, due to the emergence of new technologies including sub-module embedded power electronics, the shading method has been extended to include power electronics that operate at a finer granularity than the module level. An update to the method is proposed to account for these emerging technologies that respond to shading differently than module-level devices. The partial shading test remains a repeatable test procedure that attempts to simulate shading situations as would be experienced by typical residential or commercial rooftop photovoltaic (PV) systems. Performance data for multiple products tested using this method are discussed, based on equipment from Enphase, Solar Edge, Maxim Integrated and SMA. In general, the annual recovery of shading losses from the module-level electronics evaluated is 25-35%, with the major difference between different trials being related to the number of parallel strings in the test installation rather than differences between the equipment tested. Appendix D data has been added in this update.

  9. Ultra-fast computation of electronic spectra for large systems by tight-binding based simplified Tamm-Dancoff approximation (sTDA-xTB)

    International Nuclear Information System (INIS)

    Grimme, Stefan; Bannwarth, Christoph

    2016-01-01

    The computational bottleneck of the extremely fast simplified Tamm-Dancoff approximated (sTDA) time-dependent density functional theory procedure [S. Grimme, J. Chem. Phys. 138, 244104 (2013)] for the computation of electronic spectra for large systems is the determination of the ground state Kohn-Sham orbitals and eigenvalues. This limits such treatments to single structures with a few hundred atoms and hence, e.g., sampling along molecular dynamics trajectories for flexible systems or the calculation of chromophore aggregates is often not possible. The aim of this work is to solve this problem by a specifically designed semi-empirical tight binding (TB) procedure similar to the well established self-consistent-charge density functional TB scheme. The new special purpose method provides orbitals and orbital energies of hybrid density functional character for a subsequent and basically unmodified sTDA procedure. Compared to many previous semi-empirical excited state methods, an advantage of the ansatz is that a general eigenvalue problem in a non-orthogonal, extended atomic orbital basis is solved and therefore correct occupied/virtual orbital energy splittings as well as Rydberg levels are obtained. A key idea for the success of the new model is that the determination of atomic charges (describing an effective electron-electron interaction) and the one-particle spectrum is decoupled and treated by two differently parametrized Hamiltonians/basis sets. The three-diagonalization-step composite procedure can routinely compute broad range electronic spectra (0-8 eV) within minutes of computation time for systems composed of 500-1000 atoms with an accuracy typical of standard time-dependent density functional theory (0.3-0.5 eV average error). An easily extendable parametrization based on coupled-cluster and density functional computed reference data for the elements H–Zn including transition metals is described. The accuracy of the method termed sTDA-xTB is first
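    One detail noted above is that the tight-binding step solves a general eigenvalue problem in a non-orthogonal atomic-orbital basis. The toy example below shows what that means in practice using SciPy's generalized symmetric eigensolver; the 2x2 Hamiltonian and overlap matrices are invented and are not sTDA-xTB parameters.

```python
# Small illustration of the statement above that a general eigenvalue problem
# H C = S C E is solved in a non-orthogonal atomic-orbital basis: scipy solves
# it directly when given the overlap matrix S. The matrices are toy values.
import numpy as np
from scipy.linalg import eigh

H = np.array([[-1.0, -0.6],
              [-0.6, -1.0]])       # model Hamiltonian in the AO basis
S = np.array([[ 1.0,  0.4],
              [ 0.4,  1.0]])       # AO overlap matrix (non-orthogonal basis)

energies, coeffs = eigh(H, S)      # generalized problem H C = S C diag(E)
print(energies)                    # orbital energies; the splitting reflects the overlap
```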

  10. Ultra-fast computation of electronic spectra for large systems by tight-binding based simplified Tamm-Dancoff approximation (sTDA-xTB)

    Energy Technology Data Exchange (ETDEWEB)

    Grimme, Stefan, E-mail: grimme@thch.uni-bonn.de; Bannwarth, Christoph [Mulliken Center for Theoretical Chemistry, Institut für Physikalische und Theoretische Chemie, Rheinische Friedrich-Wilhelms Universität Bonn, Beringstraße 4, 53115 Bonn (Germany)

    2016-08-07

    The computational bottleneck of the extremely fast simplified Tamm-Dancoff approximated (sTDA) time-dependent density functional theory procedure [S. Grimme, J. Chem. Phys. 138, 244104 (2013)] for the computation of electronic spectra for large systems is the determination of the ground state Kohn-Sham orbitals and eigenvalues. This limits such treatments to single structures with a few hundred atoms and hence, e.g., sampling along molecular dynamics trajectories for flexible systems or the calculation of chromophore aggregates is often not possible. The aim of this work is to solve this problem by a specifically designed semi-empirical tight binding (TB) procedure similar to the well established self-consistent-charge density functional TB scheme. The new special purpose method provides orbitals and orbital energies of hybrid density functional character for a subsequent and basically unmodified sTDA procedure. Compared to many previous semi-empirical excited state methods, an advantage of the ansatz is that a general eigenvalue problem in a non-orthogonal, extended atomic orbital basis is solved and therefore correct occupied/virtual orbital energy splittings as well as Rydberg levels are obtained. A key idea for the success of the new model is that the determination of atomic charges (describing an effective electron-electron interaction) and the one-particle spectrum is decoupled and treated by two differently parametrized Hamiltonians/basis sets. The three-diagonalization-step composite procedure can routinely compute broad range electronic spectra (0-8 eV) within minutes of computation time for systems composed of 500-1000 atoms with an accuracy typical of standard time-dependent density functional theory (0.3-0.5 eV average error). An easily extendable parametrization based on coupled-cluster and density functional computed reference data for the elements H–Zn including transition metals is described. The accuracy of the method termed sTDA-xTB is first

  11. Electronic Mail for Personal Computers: Development Issues.

    Science.gov (United States)

    Tomer, Christinger

    1994-01-01

    Examines competing, commercially developed electronic mail programs and how these technologies will affect the functionality and quality of electronic mail. How new standards for client-server mail systems are likely to enhance messaging capabilities and the use of electronic mail for information retrieval are considered. (Contains eight…

  12. The accuracy of molecular bond lengths computed by multireference electronic structure methods

    International Nuclear Information System (INIS)

    Shepard, Ron; Kedziora, Gary S.; Lischka, Hans; Shavitt, Isaiah; Mueller, Thomas; Szalay, Peter G.; Kallay, Mihaly; Seth, Michael

    2008-01-01

    We compare experimental Re values with computed Re values for 20 molecules using three multireference electronic structure methods, MCSCF, MR-SDCI, and MR-AQCC. Three correlation-consistent orbital basis sets are used, along with complete basis set extrapolations, for all of the molecules. These data complement those computed previously with single-reference methods. Several trends are observed. The SCF Re values tend to be shorter than the experimental values, and the MCSCF values tend to be longer than the experimental values. We attribute these trends to the ionic contamination of the SCF wave function and to the corresponding systematic distortion of the potential energy curve. For the individual bonds, the MR-SDCI Re values tend to be shorter than the MR-AQCC values, which in turn tend to be shorter than the MCSCF values. Compared to the previous single-reference results, the MCSCF values are roughly comparable to the MP4 and CCSD methods, which are more accurate than might be expected due to the fact that these MCSCF wave functions include no extra-valence electron correlation effects. This suggests that static valence correlation effects, such as near-degeneracies and the ability to dissociate correctly to neutral fragments, play an important role in determining the shape of the potential energy surface, even near equilibrium structures. The MR-SDCI and MR-AQCC methods predict Re values with an accuracy comparable to, or better than, the best single-reference methods (MP4, CCSD, and CCSD(T)), despite the fact that triple and higher excitations into the extra-valence orbital space are included in the single-reference methods but are absent in the multireference wave functions. The computed Re values using the multireference methods tend to be smooth and monotonic with basis set improvement. The molecular structures are optimized using analytic energy gradients, and the timings for these calculations show the practical advantage of using variational wave

  13. The accuracy of molecular bond lengths computed by multireference electronic structure methods

    Energy Technology Data Exchange (ETDEWEB)

    Shepard, Ron [Chemical Sciences and Engineering Division, Argonne National Laboratory, Argonne, IL 60439 (United States)], E-mail: shepard@tcg.anl.gov; Kedziora, Gary S. [High Performance Technologies Inc., 2435 5th Street, WPAFB, OH 45433 (United States); Lischka, Hans [Institute for Theoretical Chemistry, University of Vienna, Waehringerstrasse 17, A-1090 Vienna (Austria); Shavitt, Isaiah [Department of Chemistry, University of Illinois, 600 S. Mathews Avenue, Urbana, IL 61801 (United States); Mueller, Thomas [Juelich Supercomputer Centre, Research Centre Juelich, D-52425 Juelich (Germany); Szalay, Peter G. [Laboratory for Theoretical Chemistry, Institute of Chemistry, Eoetvoes Lorand University, P.O. Box 32, H-1518 Budapest (Hungary); Kallay, Mihaly [Department of Physical Chemistry and Materials Science, Budapest University of Technology and Economics, P.O. Box 91, H-1521 Budapest (Hungary); Seth, Michael [Department of Chemistry, University of Calgary, 2500 University Drive, N.W., Calgary, Alberta, T2N 1N4 (Canada)

    2008-06-16

    We compare experimental R{sub e} values with computed R{sub e} values for 20 molecules using three multireference electronic structure methods, MCSCF, MR-SDCI, and MR-AQCC. Three correlation-consistent orbital basis sets are used, along with complete basis set extrapolations, for all of the molecules. These data complement those computed previously with single-reference methods. Several trends are observed. The SCF R{sub e} values tend to be shorter than the experimental values, and the MCSCF values tend to be longer than the experimental values. We attribute these trends to the ionic contamination of the SCF wave function and to the corresponding systematic distortion of the potential energy curve. For the individual bonds, the MR-SDCI R{sub e} values tend to be shorter than the MR-AQCC values, which in turn tend to be shorter than the MCSCF values. Compared to the previous single-reference results, the MCSCF values are roughly comparable to the MP4 and CCSD methods, which are more accurate than might be expected due to the fact that these MCSCF wave functions include no extra-valence electron correlation effects. This suggests that static valence correlation effects, such as near-degeneracies and the ability to dissociate correctly to neutral fragments, play an important role in determining the shape of the potential energy surface, even near equilibrium structures. The MR-SDCI and MR-AQCC methods predict R{sub e} values with an accuracy comparable to, or better than, the best single-reference methods (MP4, CCSD, and CCSD(T)), despite the fact that triple and higher excitations into the extra-valence orbital space are included in the single-reference methods but are absent in the multireference wave functions. The computed R{sub e} values using the multireference methods tend to be smooth and monotonic with basis set improvement. The molecular structures are optimized using analytic energy gradients, and the timings for these calculations show the practical

  14. Comparison of optimal performance at 300 keV of three direct electron detectors for use in low dose electron microscopy

    Energy Technology Data Exchange (ETDEWEB)

    McMullan, G., E-mail: gm2@mrc-lmb.cam.ac.uk [MRC Laboratory of Molecular Biology, Francis Crick Avenue, Cambridge CB2 0QH (United Kingdom); Faruqi, A.R. [MRC Laboratory of Molecular Biology, Francis Crick Avenue, Cambridge CB2 0QH (United Kingdom); Clare, D. [Crystallography and Institute of Structural and Molecular Biology, Birkbeck College, University of London, Malet Street, London WC1E 7HX (United Kingdom); Henderson, R. [MRC Laboratory of Molecular Biology, Francis Crick Avenue, Cambridge CB2 0QH (United Kingdom)

    2014-12-15

    Low dose electron imaging applications such as electron cryo-microscopy are now benefitting from the improved performance and flexibility of recently introduced electron imaging detectors in which electrons are directly incident on backthinned CMOS sensors. There are currently three commercially available detectors of this type: the Direct Electron DE-20, the FEI Falcon II and the Gatan K2 Summit. These have different characteristics and so it is important to compare their imaging properties carefully with a view to optimise how each is used. Results at 300 keV for both the modulation transfer function (MTF) and the detective quantum efficiency (DQE) are presented. Of these, the DQE is the most important in the study of radiation sensitive samples where detector performance is crucial. We find that all three detectors have a better DQE than film. The K2 Summit has the best DQE at low spatial frequencies but with increasing spatial frequency its DQE falls below that of the Falcon II. - Highlights: • Three direct electron detectors offer better DQE than film at 300 keV. • Recorded 300 keV electron events on the detectors have very similar Landau distributions. • The Gatan K2 Summit detector has the highest DQE at low spatial frequency. • The FEI Falcon II detector has the highest DQE beyond one half the Nyquist frequency. • The Direct Electron DE-20 detector has the fastest data acquisition rate.

  15. Comparison of optimal performance at 300 keV of three direct electron detectors for use in low dose electron microscopy

    International Nuclear Information System (INIS)

    McMullan, G.; Faruqi, A.R.; Clare, D.; Henderson, R.

    2014-01-01

    Low dose electron imaging applications such as electron cryo-microscopy are now benefitting from the improved performance and flexibility of recently introduced electron imaging detectors in which electrons are directly incident on backthinned CMOS sensors. There are currently three commercially available detectors of this type: the Direct Electron DE-20, the FEI Falcon II and the Gatan K2 Summit. These have different characteristics and so it is important to compare their imaging properties carefully with a view to optimise how each is used. Results at 300 keV for both the modulation transfer function (MTF) and the detective quantum efficiency (DQE) are presented. Of these, the DQE is the most important in the study of radiation sensitive samples where detector performance is crucial. We find that all three detectors have a better DQE than film. The K2 Summit has the best DQE at low spatial frequencies but with increasing spatial frequency its DQE falls below that of the Falcon II. - Highlights: • Three direct electron detectors offer better DQE than film at 300 keV. • Recorded 300 keV electron events on the detectors have very similar Landau distributions. • The Gatan K2 Summit detector has the highest DQE at low spatial frequency. • The FEI Falcon II detector has the highest DQE beyond one half the Nyquist frequency. • The Direct Electron DE-20 detector has the fastest data acquisition rate

  16. x-y-recording in transmission electron microscopy. A versatile and inexpensive interface to personal computers with application to stereology.

    Science.gov (United States)

    Rickmann, M; Siklós, L; Joó, F; Wolff, J R

    1990-09-01

    An interface for IBM XT/AT-compatible computers is described which has been designed to read the actual specimen stage position of electron microscopes. The complete system consists of (i) optical incremental encoders attached to the x- and y-stage drivers of the microscope, (ii) two keypads for operator input, (iii) an interface card fitted to the bus of the personal computer, (iv) a standard configuration IBM XT (or compatible) personal computer optionally equipped with a (v) HP Graphic Language controllable colour plotter. The small size of the encoders and their connection to the stage drivers by simple ribbed belts allows an easy adaptation of the system to most electron microscopes. Operation of the interface card itself is supported by any high-level language available for personal computers. By the modular concept of these languages, the system can be customized to various applications, and no computer expertise is needed for actual operation. The present configuration offers an inexpensive attachment, which covers a wide range of applications from a simple notebook to high-resolution (200-nm) mapping of tissue. Since section coordinates can be processed in real-time, stereological estimations can be derived directly "on microscope". This is exemplified by an application in which particle numbers were determined by the disector method.

  17. Computational micromechanics analysis of electron hopping and interfacial damage induced piezoresistive response in carbon nanotube-polymer nanocomposites

    International Nuclear Information System (INIS)

    Chaurasia, A K; Seidel, G D; Ren, X

    2014-01-01

    Carbon nanotube (CNT)-polymer nanocomposites have been observed to exhibit an effective macroscale piezoresistive response, i.e., change in macroscale resistivity when subjected to applied deformation. The macroscale piezoresistive response of CNT-polymer nanocomposites leads to deformation/strain sensing capabilities. It is believed that the nanoscale phenomenon of electron hopping is the major driving force behind the observed macroscale piezoresistivity of such nanocomposites. Additionally, CNT-polymer nanocomposites provide damage sensing capabilities because of local changes in electron hopping pathways at the nanoscale caused by the initiation/evolution of damage. The primary focus of the current work is to explore the effect of interfacial separation and damage at the nanoscale CNT-polymer interface on the effective macroscale piezoresistive response. Interfacial separation and damage are allowed to evolve at the CNT-polymer interface through coupled electromechanical cohesive zones, within a finite element based computational micromechanics framework, resulting in electron hopping based current density across the separated CNT-polymer interface. The macroscale effective material properties and gauge factors are evaluated using micromechanics techniques based on electrostatic energy equivalence. The impact of the electron hopping mechanism, nanoscale interface separation and damage evolution on the effective nanocomposite electrostatic and piezoresistive response is studied in comparison with the perfectly bonded interface. The effective electrostatic/piezoresistive response for the perfectly bonded interface is obtained based on a computational micromechanics model developed in the authors’ earlier work. It is observed that the macroscale effective gauge factors are highly sensitive to strain induced formation/disruption of electron hopping pathways, interface separation and the initiation/evolution of interfacial damage. (paper)

  18. The high performance cluster computing system for BES offline data analysis

    International Nuclear Information System (INIS)

    Sun Yongzhao; Xu Dong; Zhang Shaoqiang; Yang Ting

    2004-01-01

    A high performance cluster computing system (EPCfarm) is introduced, which is used for BES offline data analysis. The setup and characteristics of the hardware and software of EPCfarm are described. The PBS queue management package and the performance of EPCfarm are also presented. (authors)

  19. Scalability of DL_POLY on High Performance Computing Platform

    Directory of Open Access Journals (Sweden)

    Mabule Samuel Mabakane

    2017-12-01

    Full Text Available This paper presents a case study on the scalability of several versions of the molecular dynamics code (DL_POLY) performed on South Africa's Centre for High Performance Computing e1350 IBM Linux cluster, Sun system and Lengau supercomputers. Within this study, different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY using weak and strong scaling. It was found that the speed-up results for the small systems were better than for the large systems on both the Ethernet and Infiniband networks. However, simulations of large systems in DL_POLY performed well using the Infiniband network on the Lengau cluster compared with the e1350 and Sun supercomputers.
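
    The speed-up and parallel-efficiency figures discussed in this study can be reproduced directly from wall-clock timings. The sketch below is generic; the core counts and timings are hypothetical placeholders, not data from the paper.

```python
# Strong-scaling speed-up and parallel efficiency from wall-clock timings.
# The core counts and timing values below are hypothetical placeholders.

def scaling_metrics(cores, wall_times):
    """Return (cores, speed-up, efficiency) relative to the smallest core count."""
    base_cores, base_time = cores[0], wall_times[0]
    results = []
    for n, t in zip(cores, wall_times):
        speedup = base_time / t
        efficiency = speedup / (n / base_cores)
        results.append((n, speedup, efficiency))
    return results

cores = [24, 48, 96, 192]
wall_times = [1000.0, 520.0, 290.0, 180.0]   # seconds, hypothetical

for n, s, e in scaling_metrics(cores, wall_times):
    print(f"{n:4d} cores: speed-up {s:5.2f}, efficiency {e:5.1%}")
```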

  20. Simulation of 10 A electron-beam formation and collection for a high current electron-beam ion source

    Science.gov (United States)

    Kponou, A.; Beebe, E.; Pikin, A.; Kuznetsov, G.; Batazova, M.; Tiunov, M.

    1998-02-01

    Presented is a report on the development of an electron-beam ion source (EBIS) for the Relativistic Heavy Ion Collider at Brookhaven National Laboratory (BNL), which requires operation with a 10 A electron beam. This is approximately an order of magnitude higher current than in any existing EBIS device. A test stand where EBIS components will be tested is presently being designed and constructed; it will be reported in a separate paper at this conference. The design of the 10 A electron gun, drift tubes, and electron collector requires extensive computer simulations. Calculations have been performed at Novosibirsk and BNL using two different programs, SAM and EGUN. Results of these simulations will be presented.

  1. Evaluating computer program performance on the CRAY-1

    International Nuclear Information System (INIS)

    Rudsinski, L.; Pieper, G.W.

    1979-01-01

    The Advanced Scientific Computers Project of Argonne's Applied Mathematics Division has two objectives: to evaluate supercomputers and to determine their effect on Argonne's computing workload. Initial efforts have focused on the CRAY-1, which is the only advanced computer currently available. Users from seven Argonne divisions executed test programs on the CRAY and made performance comparisons with the IBM 370/195 at Argonne. This report describes these experiences and discusses various techniques for improving run times on the CRAY. Direct translations of code from scalar to vector processor reduced running times as much as two-fold, and this reduction will become more pronounced as the CRAY compiler is developed. Further improvement (two- to ten-fold) was realized by making minor code changes to facilitate compiler recognition of the parallel and vector structure within the programs. Finally, extensive rewriting of the FORTRAN code structure reduced execution times dramatically, in three cases by a factor of more than 20; and even greater reduction should be possible by changing algorithms within a production code. It is concluded that the CRAY-1 would be of great benefit to Argonne researchers. Existing codes could be modified with relative ease to run significantly faster than on the 370/195. More important, the CRAY would permit scientists to investigate complex problems currently deemed infeasible on traditional scalar machines. Finally, an interface between the CRAY-1 and IBM computers such as the 370/195, scheduled by Cray Research for the first quarter of 1979, would considerably facilitate the task of integrating the CRAY into Argonne's Central Computing Facility. 13 tables

  2. High performance simulation for the Silva project using the tera computer

    International Nuclear Information System (INIS)

    Bergeaud, V.; La Hargue, J.P.; Mougery, F.; Boulet, M.; Scheurer, B.; Le Fur, J.F.; Comte, M.; Benisti, D.; Lamare, J. de; Petit, A.

    2003-01-01

    In the context of the SILVA Project (Atomic Vapor Laser Isotope Separation), numerical simulation of the plant scale propagation of laser beams through uranium vapour was a great challenge. The PRODIGE code has been developed to achieve this goal. Here we focus on the task of achieving high performance simulation on the TERA computer. We describe the main issues for optimizing the parallelization of the PRODIGE code on TERA. Thus, we discuss advantages and drawbacks of the implemented diagonal parallelization scheme. As a consequence, it has been found fruitful to tune the code in three respects: memory allocation, MPI communications and interconnection network bandwidth usage. We stress the value of MPI/IO in this context and the benefit obtained for production computations on TERA. Finally, we illustrate our developments. We present some performance measurements reflecting the good parallelization properties of PRODIGE on the TERA computer. The code is currently used for demonstrating the feasibility of the laser propagation at a plant enrichment level and for preparing the 2003 Menphis experiment. We conclude by emphasizing the contribution of high performance TERA simulation to the project. (authors)

  3. High performance simulation for the Silva project using the tera computer

    Energy Technology Data Exchange (ETDEWEB)

    Bergeaud, V.; La Hargue, J.P.; Mougery, F. [CS Communication and Systemes, 92 - Clamart (France); Boulet, M.; Scheurer, B. [CEA Bruyeres-le-Chatel, 91 - Bruyeres-le-Chatel (France); Le Fur, J.F.; Comte, M.; Benisti, D.; Lamare, J. de; Petit, A. [CEA Saclay, 91 - Gif sur Yvette (France)

    2003-07-01

    In the context of the SILVA Project (Atomic Vapor Laser Isotope Separation), numerical simulation of the plant scale propagation of laser beams through uranium vapour was a great challenge. The PRODIGE code has been developed to achieve this goal. Here we focus on the task of achieving high performance simulation on the TERA computer. We describe the main issues for optimizing the parallelization of the PRODIGE code on TERA. Thus, we discuss advantages and drawbacks of the implemented diagonal parallelization scheme. As a consequence, it has been found fruitful to tune the code in three respects: memory allocation, MPI communications and interconnection network bandwidth usage. We stress the value of MPI/IO in this context and the benefit obtained for production computations on TERA. Finally, we illustrate our developments. We present some performance measurements reflecting the good parallelization properties of PRODIGE on the TERA computer. The code is currently used for demonstrating the feasibility of the laser propagation at a plant enrichment level and for preparing the 2003 Menphis experiment. We conclude by emphasizing the contribution of high performance TERA simulation to the project. (authors)
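
    The MPI/IO usage credited above with part of the performance gain can be illustrated with a generic collective-write sketch in mpi4py. This is only an illustration of the technique, not the PRODIGE code; the file name and array size are hypothetical.

```python
# Generic collective MPI-I/O write with mpi4py: each rank writes its own slab
# of data to a non-overlapping offset of a shared file in one collective call.
# Minimal sketch under stated assumptions, not the PRODIGE implementation.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

local = np.full(1024, rank, dtype=np.float64)        # this rank's slab of data
fh = MPI.File.Open(comm, "field.dat", MPI.MODE_WRONLY | MPI.MODE_CREATE)
fh.Write_at_all(rank * local.nbytes, local)          # collective, non-overlapping offsets
fh.Close()
```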

  4. A Strategy for Automatic Performance Tuning of Stencil Computations on GPUs

    Directory of Open Access Journals (Sweden)

    Joseph D. Garvey

    2018-01-01

    Full Text Available We propose and evaluate a novel strategy for tuning the performance of a class of stencil computations on Graphics Processing Units. The strategy uses a machine learning model to predict the optimal way to load data from memory followed by a heuristic that divides other optimizations into groups and exhaustively explores one group at a time. We use a set of 104 synthetic OpenCL stencil benchmarks that are representative of many real stencil computations. We first demonstrate the need for auto-tuning by showing that the optimization space is sufficiently complex that simple approaches to determining a high-performing configuration fail. We then demonstrate the effectiveness of our approach on NVIDIA and AMD GPUs. Relative to a random sampling of the space, we find configurations that are 12%/32% faster on the NVIDIA/AMD platform in 71% and 4% less time, respectively. Relative to an expert search, we achieve 5% and 9% better performance on the two platforms in 89% and 76% less time. We also evaluate our strategy for different stencil computational intensities, varying array sizes and shapes, and in combination with expert search.
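
    The group-wise exhaustive search described above can be sketched as follows; the optimization groups, parameter names and benchmark stub are hypothetical, and the machine-learning stage that predicts the memory-loading strategy is omitted.

```python
# Sketch of a group-wise exhaustive auto-tuning heuristic: the optimizations
# are split into groups, and each group is swept exhaustively while the others
# are held at their current best values. Parameter names and the benchmark
# stub are hypothetical placeholders.
import itertools

def tune(groups, benchmark):
    """groups: dict mapping group name -> {parameter: [candidate values]}."""
    # Start from the first candidate of every parameter.
    best = {p: vals[0] for g in groups.values() for p, vals in g.items()}
    best_time = benchmark(best)
    for group in groups.values():
        names = list(group)
        for combo in itertools.product(*(group[p] for p in names)):
            trial = dict(best, **dict(zip(names, combo)))
            t = benchmark(trial)
            if t < best_time:
                best, best_time = trial, t
    return best, best_time

groups = {
    "memory":   {"use_local_mem": [False, True], "vector_width": [1, 2, 4]},
    "blocking": {"tile_x": [8, 16, 32], "tile_y": [4, 8]},
}
# benchmark(config) would compile and time the stencil kernel; here a stub
# that just returns a deterministic pseudo-timing for each configuration.
best, t = tune(groups, lambda cfg: hash(frozenset(cfg.items())) % 100 + 1)
print(best, t)
```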

  5. Optimization of electron beam crosslinking of wire and cable insulation

    International Nuclear Information System (INIS)

    Zimek, Zbigniew; Przybytniak, Grażyna; Nowicki, Andrzej

    2012-01-01

    Computer simulations based on the Monte Carlo (MC) method and the ModeCEB software were carried out in connection with an electron beam (EB) radiation set-up for crosslinking of electric wire and cable insulation. The theoretical predictions for the absorbed dose distribution induced by the scanned EB in irradiated electric insulation were compared to the experimental results of irradiation carried out in an experimental set-up based on the ILU 6 electron accelerator with electron energy 0.5–2.0 MeV. The computer simulation of the dose distributions in a two-sided irradiation system with a scanned electron beam in multilayer circular objects was performed for various process parameters, namely electric wire and cable geometry (thickness of insulation layers and copper wire diameter), type of polymer insulation, electron energy, energy spread and geometry of the electron beam, and electric wire and cable layout in the irradiation zone. The geometry of the electron beam distribution in the irradiation zone was measured using CTA and PVC foil dosimeters over the available electron energy range. The temperature rise of the irradiated electric wire and the irradiation homogeneity were evaluated for different experimental conditions to optimize the technological process parameters. The results of the computer simulation are consistent with the experimental dose distribution data evaluated by gel-fraction measurements. Such conformity indicates that ModeCEB computer simulation is reliable and sufficient for optimization of the absorbed dose distribution in multi-layer circular objects irradiated with scanned electron beams. - Highlights: ► We model the wire and cable irradiation process by Monte Carlo simulations. ► We optimize the irradiation configuration for various process parameters. ► Temperature rise and irradiation homogeneity were evaluated. ► Calculation (dose) and experimental (gel-fraction) results were compared. ► Computer simulation was found reliable and sufficient for process optimization.

  6. Electronic Nose Testing Procedure for the Definition of Minimum Performance Requirements for Environmental Odor Monitoring

    Directory of Open Access Journals (Sweden)

    Lidia Eusebio

    2016-09-01

    Full Text Available Despite initial enthusiasm towards electronic noses and their possible application in different fields, and many promising results, several critical issues emerge from most published research studies, and, as a matter of fact, the diffusion of electronic noses in real-life applications is still very limited. In general, a first step towards large-scale diffusion of an analysis method is standardization. The aim of this paper is to describe the experimental procedure adopted in order to evaluate electronic nose performance, with the final purpose of establishing minimum performance requirements, which is considered a first crucial step towards standardization for the specific case of electronic nose application to environmental odor monitoring at receptors. Based on the experimental results of the performance testing of a commercialized electronic nose type with respect to three criteria (i.e., response invariability to variable atmospheric conditions, instrumental detection limit, and odor classification accuracy), it was possible to hypothesize a logic that could be adopted for the definition of minimum performance requirements, according to the idea that these are technologically achievable.

  7. Extreme Scale Computing to Secure the Nation

    Energy Technology Data Exchange (ETDEWEB)

    Brown, D L; McGraw, J R; Johnson, J R; Frincke, D

    2009-11-10

    Since the dawn of modern electronic computing in the mid 1940's, U.S. national security programs have been dominant users of every new generation of high-performance computer. Indeed, the first general-purpose electronic computer, ENIAC (the Electronic Numerical Integrator and Computer), was used to calculate the expected explosive yield of early thermonuclear weapons designs. Even the U.S. numerical weather prediction program, another early application for high-performance computing, was initially funded jointly by sponsors that included the U.S. Air Force and Navy, agencies interested in accurate weather predictions to support U.S. military operations. For the decades of the cold war, national security requirements continued to drive the development of high performance computing (HPC), including advancement of the computing hardware and development of sophisticated simulation codes to support weapons and military aircraft design, numerical weather prediction as well as data-intensive applications such as cryptography and cybersecurity. U.S. national security concerns continue to drive the development of high-performance computers and software in the U.S. and, in fact, events following the end of the cold war have driven an increase in the growth rate of computer performance at the high-end of the market. This mainly derives from our nation's observance of a moratorium on underground nuclear testing beginning in 1992, followed by our voluntary adherence to the Comprehensive Test Ban Treaty (CTBT) beginning in 1995. The CTBT prohibits further underground nuclear tests, which in the past had been a key component of the nation's science-based program for assuring the reliability, performance and safety of U.S. nuclear weapons. In response to this change, the U.S. Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship (SBSS) program in response to the Fiscal Year 1994 National Defense Authorization Act, which requires, 'in the

  8. Modified Monte Carlo method for study of electron transport in degenerate electron gas in the presence of electron-electron interactions, application to graphene

    Science.gov (United States)

    Borowik, Piotr; Thobel, Jean-Luc; Adamowicz, Leszek

    2017-07-01

    Standard computational methods used to incorporate the Pauli exclusion principle into Monte Carlo (MC) simulations of electron transport in semiconductors may give unphysical results in the low-field regime, where the obtained electron distribution function takes values exceeding unity. Modified algorithms have already been proposed that correctly account for electron scattering on phonons or impurities. The present paper extends this approach and proposes an improved simulation scheme that allows the Pauli exclusion principle for electron-electron (e-e) scattering to be included in MC simulations. Simulations with significantly reduced computational cost recreate correct values of the electron distribution function. The proposed algorithm is applied to study transport properties of degenerate electrons in graphene with e-e interactions. This required adapting the treatment of e-e scattering to the case of a linear band dispersion relation. Hence, this part of the simulation algorithm is described in detail.
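
    The essential modification for degenerate statistics is a Pauli-blocking rejection step: a proposed transition to a final state is accepted only with probability 1 - f(k'), where f is the current estimate of the distribution function. The sketch below illustrates that generic idea only, not the authors' complete e-e scattering algorithm.

```python
# Illustration of Pauli-blocking rejection in an ensemble Monte Carlo step:
# a proposed final state k' is accepted with probability 1 - f(k'), where f is
# the current estimate of the electron distribution function. Generic idea
# only, not the authors' full algorithm for e-e scattering.
import random

def accept_scattering(f_final):
    """Pauli-blocking test: f_final is the occupation of the proposed final state."""
    f_final = min(max(f_final, 0.0), 1.0)   # keep the occupation in [0, 1]
    return random.random() < (1.0 - f_final)

# Example: a nearly full final state is almost always rejected.
for f in (0.05, 0.5, 0.95):
    accepted = sum(accept_scattering(f) for _ in range(10000))
    print(f"f = {f:4.2f}: accepted {accepted / 10000:.2%} of proposed transitions")
```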

  9. Canadian conference on electrical and computer engineering proceedings. Congres canadien en genie electrique et informatique

    Energy Technology Data Exchange (ETDEWEB)

    Bhargava, V K [ed.

    1993-01-01

    A conference was held on the subject of electrical and computer engineering. Papers were presented on the subjects of artificial intelligence, video, signal processing, radar, power electronics, neural networks, control, computer systems, transportation electronics, software tools, error control coding, electrothermal phenomena, performance evaluation of computer systems, wireless communication, satellite communication, very large scale integration, parallel processing, pattern recognition, telephony, graphs and algorithms, multimedia, broadcast systems, remote sensing, computer networks, modulation and coding, robotics, computer architecture, spread spectrum, image processing, microwave circuits, biomedical engineering, specification and verification, image restoration, communications networks, computer-aided design, drives, energy systems, expert systems, and optics. Separate abstracts have been prepared for 56 papers from the conference.

  10. FY 1995 Blue Book: High Performance Computing and Communications: Technology for the National Information Infrastructure

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — The Federal High Performance Computing and Communications HPCC Program was created to accelerate the development of future generations of high performance computers...

  11. "In situ" electronic testing method of a neutron detector performance

    International Nuclear Information System (INIS)

    Gonzalez, J.M.; Levai, F.

    1987-01-01

    The method allows detection of any important change in the electrical characteristics of a neutron sensor channel. It checks the response signal produced by an electronic detector circuit when a pulse generator is connected as the input signal to the high voltage supply. The electronic circuit compares the previously measured detector capacitance value against a reference value, which is adjusted in a window-type comparator circuit to detect any significant degradation of the capacitance value in a detector-cable system. The "in situ" electronic testing method of neutron detector performance has been verified in a laboratory environment to be a potential method for detecting any significant change in the capacitance value of a nuclear sensor and its connecting cable, also checking for detector disconnections, cable disconnections, length changes of the connecting cable, short or open circuits in the sensor channel, and any electrical fault in the detector-connector-cable system. The experiments were carried out by simulating several electrical changes in a nuclear sensor-cable system taken from a linear D.C. channel that measures reactor power during reactor operation; this work was done at the Training Reactor Electronic Laboratory. The results and conclusions obtained at the laboratory were confirmed satisfactorily on the electronic instrumentation of the Budapest Technical University Training Reactor, Hungary

  12. FPGA Compute Acceleration for High-Throughput Data Processing in High-Energy Physics Experiments

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    The upgrades of the four large experiments of the LHC at CERN in the coming years will result in a huge increase of data bandwidth for each experiment, which needs to be processed very efficiently. For example, the LHCb experiment will upgrade its detector in 2019/2020 to a 'triggerless' readout scheme, where all of the readout electronics and several sub-detector parts will be replaced. The new readout electronics will be able to read out the detector at 40 MHz. This increases the data bandwidth from the detector down to the event filter farm to 40 Tbit/s, which must be processed to select the interesting proton-proton collisions for later storage. The architecture of a computing farm that can process this amount of data as efficiently as possible is a challenging design task, and several compute accelerator technologies are being considered. In the high performance computing sector, more and more FPGA compute accelerators are being used to improve the compute performance and reduce the...

  13. A COTS-based single board radiation-hardened computer for space applications

    International Nuclear Information System (INIS)

    Stewart, S.; Hillman, R.; Layton, P.; Krawzsenek, D.

    1999-01-01

    There is great community interest in the ability to use COTS (Commercial-Off-The-Shelf) technology in radiation environments. Space Electronics, Inc. has developed a high performance COTS-based radiation hardened computer. COTS approaches were selected for both hardware and software. Through parts testing, selection and packaging, all requirements have been met without parts or process development. Reliability, total ionizing dose and single event performance are attractive. The characteristics, performance and radiation resistance of the single board computer will be presented. (authors)

  14. Scintillator performance considerations for dedicated breast computed tomography

    Science.gov (United States)

    Vedantham, Srinivasan; Shi, Linxi; Karellas, Andrew

    2017-09-01

    Dedicated breast computed tomography (BCT) is an emerging clinical modality that can eliminate tissue superposition and has the potential for improved sensitivity and specificity for breast cancer detection and diagnosis. It is performed without physical compression of the breast. Most of the dedicated BCT systems use large-area detectors operating in cone-beam geometry and are referred to as cone-beam breast CT (CBBCT) systems. The large-area detectors in CBBCT systems are energy-integrating, indirect-type detectors employing a scintillator that converts x-ray photons to light, followed by detection of optical photons. A key consideration that determines the image quality achieved by such CBBCT systems is the choice of scintillator and its performance characteristics. In this work, a framework for analyzing the impact of the scintillator on CBBCT performance and its use for task-specific optimization of CBBCT imaging performance is described.

  15. In-cylinder diesel spray combustion simulations using parallel computation: A performance benchmarking study

    International Nuclear Information System (INIS)

    Pang, Kar Mun; Ng, Hoon Kiat; Gan, Suyin

    2012-01-01

    Highlights: ► A performance benchmarking exercise is conducted for diesel combustion simulations. ► The reduced chemical mechanism shows its advantages over base and skeletal models. ► High efficiency and great reduction of CPU runtime are achieved through 4-node solver. ► Increasing ISAT memory from 0.1 to 2 GB reduces the CPU runtime by almost 35%. ► Combustion and soot processes are predicted well with minimal computational cost. - Abstract: In the present study, in-cylinder diesel combustion simulation was performed with parallel processing on an Intel Xeon Quad-Core platform to allow both fluid dynamics and chemical kinetics of the surrogate diesel fuel model to be solved simultaneously on multiple processors. Here, Cartesian Z-Coordinate was selected as the most appropriate partitioning algorithm since it computationally bisects the domain such that the dynamic load associated with fuel particle tracking was evenly distributed during parallel computations. Other variables examined included number of compute nodes, chemistry sizes and in situ adaptive tabulation (ISAT) parameters. Based on the performance benchmarking test conducted, a parallel configuration of 4 compute nodes was found to reduce the computational runtime most efficiently, whereby a parallel efficiency of up to 75.4% was achieved. The simulation results also indicated that accuracy level was insensitive to the number of partitions or the partitioning algorithms. The effect of reducing the number of species on computational runtime was observed to be more significant than reducing the number of reactions. Besides, the study showed that an increase in the ISAT maximum storage of up to 2 GB reduced the computational runtime by 50%. Also, the ISAT error tolerance of 10⁻³ was chosen to strike a balance between results accuracy and computational runtime. The optimised parameters in parallel processing and ISAT, as well as the use of the in-house reduced chemistry model allowed accurate

  16. Phosphorescent rhenium emitters based on two electron-withdrawing diamine ligands: Structure, characterization and electroluminescent performance

    Energy Technology Data Exchange (ETDEWEB)

    Rui, Mei, E-mail: meirui2015@163.com [College of Science, Hebei North University, Zhangjiakou 075000, Hebei (China); Yuhong, Wang [College of Science, Hebei North University, Zhangjiakou 075000, Hebei (China); Yinting, Wang; Na, Zhang [Communication Training Base of The Headquarters of The General Staff, Zhangjiakou 075100, Hebei (China)

    2014-09-15

    In this paper, two diamine ligands bearing an electron-withdrawing oxadiazole group and their corresponding Re(I) complexes were synthesized. Their geometric structures, electronic transitions, photophysical properties, thermal stability and electrochemical properties were discussed in detail. Experimental data suggested that both complexes were promising yellow emitters with suitable energy levels and good thermal stability for electroluminescent application. The correlation between emission performance and the electron-withdrawing group was analyzed; the electron-withdrawing group was found to favor improved emission performance. Their electroluminescence performance was also explored. Yellow electroluminescence was observed with a maximum brightness of 1743 cd/m². - Highlights: • Oxadiazole-derived diamine ligands and their Re(I) complexes were synthesized. • Their characteristics and properties were analyzed and compared in detail. • The electron-withdrawing group was proved to be beneficial for PL improvement. • Electroluminescence was obtained with a maximum brightness of 1743 cd/m².

  17. Room design for high-performance electron microscopy

    International Nuclear Information System (INIS)

    Muller, David A.; Kirkland, Earl J.; Thomas, Malcolm G.; Grazul, John L.; Fitting, Lena; Weyland, Matthew

    2006-01-01

    Aberration correctors correct aberrations, not instabilities. Rather, as spatial resolution improves, a microscope's sensitivity to room environment becomes more noticeable, not less. Room design is now an essential part of the microscope installation process. Previously ignorable annoyances like computer fans, desk lamps and that chiller in the service corridor may now become the limiting factors in the microscope's performance. We discuss methods to quantitatively characterize the instrument's response to magnetic, mechanical, acoustical and thermal disturbances and thus predict the limits that the environment places on imaging and spectroscopy

  18. Electron holography at atomic dimensions -- Present state

    International Nuclear Information System (INIS)

    Lehmann, M.; Lichte, H.

    1999-01-01

    An electron microscope is a wave optical instrument in which the object information is carried by an electron wave. However, an important piece of information, the phase of the electron wave, is lost, because only intensities can be recorded in a conventional electron micrograph. Off-axis electron holography solves this phase problem by encoding amplitude and phase information in an interference pattern, the so-called hologram. After reconstruction, a rather unrestricted wave optical analysis can be performed on a computer. The possibilities as well as the current limitations of off-axis electron holography at atomic dimensions are discussed, and they are illustrated by two applications to structure characterization of ε-NbN and YBCO-1237. Finally, an electron microscope equipped with a Cs-corrector, a monochromator, and a Moellenstedt biprism is outlined for subangstrom holography
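
    Numerically, off-axis hologram reconstruction is usually performed by isolating one sideband of the hologram's Fourier transform, recentering it and transforming back to obtain the complex image wave. The following is a minimal sketch of that standard step, not the specific processing used in the work above; the carrier position and mask radius are assumed inputs.

```python
# Minimal sketch of off-axis hologram reconstruction: Fourier-transform the
# hologram, isolate one sideband around the carrier frequency, recenter it and
# inverse-transform to recover the complex image wave (amplitude and phase).
# Carrier offset and mask radius (in pixels) are assumed inputs.
import numpy as np

def reconstruct(hologram, carrier_px, mask_radius_px):
    F = np.fft.fftshift(np.fft.fft2(hologram))
    ny, nx = hologram.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    cy, cx = ny // 2 + carrier_px[0], nx // 2 + carrier_px[1]
    mask = (yy - cy) ** 2 + (xx - cx) ** 2 <= mask_radius_px ** 2
    sideband = np.where(mask, F, 0.0)
    # Recenter the sideband on zero frequency before the inverse transform.
    sideband = np.roll(sideband, (ny // 2 - cy, nx // 2 - cx), axis=(0, 1))
    wave = np.fft.ifft2(np.fft.ifftshift(sideband))
    return np.abs(wave), np.angle(wave)

amp, phase = reconstruct(np.random.rand(256, 256), carrier_px=(40, 0), mask_radius_px=15)
```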

  19. Heat exchanger performance analysis programs for the personal computer

    International Nuclear Information System (INIS)

    Putman, R.E.

    1992-01-01

    Numerous utility industry heat exchange calculations are repetitive and thus lend themselves to being performed on a Personal Computer. These programs may be regarded as engineering tools which, when put together, can form a Toolbox. However, the practicing Results Engineer in the utility industry desires programs that are not only robust and easy to use but can also be run on both desktop and laptop PCs. The latter also offer the opportunity to take the computer into the plant or control room and use it there to process test or operating data right on the spot. Most programs evolve through the needs which arise in the course of day-to-day work. This paper describes several of the more useful programs of this type and outlines some of the guidelines to be followed when designing personal computer programs for use by the practicing Results Engineer

  20. Overview of Parallel Platforms for Common High Performance Computing

    Directory of Open Access Journals (Sweden)

    T. Fryza

    2012-04-01

    Full Text Available The paper deals with various parallel platforms used for high performance computing in the signal processing domain. More precisely, methods exploiting multicore central processing units, such as the message passing interface and OpenMP, are taken into account. The properties of the programming methods are experimentally demonstrated in the application of a fast Fourier transform and a discrete cosine transform, and they are compared with the possibilities of MATLAB's built-in functions and of Texas Instruments digital signal processors with very long instruction word architectures. New FFT and DCT implementations were proposed and tested. The implementations were compared with CPU-based computing methods and with the possibilities of the Texas Instruments digital signal processing library on C6747 floating-point DSPs. An optimal combination of computing methods in the signal processing domain, together with the implementation of new, fast routines, is proposed as well.
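
    A single-process baseline of the kind such benchmarks start from can be timed in a few lines; the sketch below only times NumPy/SciPy FFT and DCT calls on the host CPU and does not reproduce the MPI, OpenMP or DSP implementations compared in the paper.

```python
# Simple host-CPU timing of FFT and DCT evaluations, as a single-process
# baseline in the spirit of the benchmarks described above.
import time
import numpy as np
from scipy.fft import fft, dct

def time_transform(func, signal, repeats=200):
    start = time.perf_counter()
    for _ in range(repeats):
        func(signal)
    return (time.perf_counter() - start) / repeats

x = np.random.rand(2 ** 16)
print(f"FFT (N=65536): {time_transform(fft, x) * 1e3:.3f} ms per call")
print(f"DCT (N=65536): {time_transform(dct, x) * 1e3:.3f} ms per call")
```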

  1. RAPPORT: running scientific high-performance computing applications on the cloud.

    Science.gov (United States)

    Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt

    2013-01-28

    Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.

  2. Efficient method for computing the electronic transport properties of a multiterminal system

    Science.gov (United States)

    Lima, Leandro R. F.; Dusko, Amintor; Lewenkopf, Caio

    2018-04-01

    We present a multiprobe recursive Green's function method to compute the transport properties of mesoscopic systems using the Landauer-Büttiker approach. By introducing an adaptive partition scheme, we map the multiprobe problem into the standard two-probe recursive Green's function method. We apply the method to compute the longitudinal and Hall resistances of a disordered graphene sample, a system of current interest. We show that the performance and accuracy of our method compares very well with other state-of-the-art schemes.
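
    For reference, the underlying two-probe Landauer-Büttiker quantity is the transmission T(E) = Tr[Γ_L G^r Γ_R G^a]. The sketch below evaluates it for a 1D tight-binding chain with wide-band-limit leads by full matrix inversion; it is only the baseline formula and does not reproduce the adaptive multiprobe recursive partitioning introduced in the paper.

```python
# Two-probe Landauer-Büttiker transmission T(E) = Tr[Gamma_L G^r Gamma_R G^a]
# for a 1D tight-binding chain with wide-band-limit lead self-energies,
# computed by full matrix inversion (not the recursive multiprobe scheme).
import numpy as np

def transmission(energy, n_sites=20, t=1.0, gamma=0.5):
    H = np.diag(-t * np.ones(n_sites - 1), 1)
    H = H + H.T
    sigma = np.zeros((n_sites, n_sites), dtype=complex)
    sigma[0, 0] = -0.5j * gamma          # left lead self-energy (wide-band limit)
    sigma[-1, -1] = -0.5j * gamma        # right lead self-energy
    G = np.linalg.inv(energy * np.eye(n_sites) - H - sigma)   # retarded Green's function
    Gamma_L = np.zeros_like(sigma); Gamma_L[0, 0] = gamma
    Gamma_R = np.zeros_like(sigma); Gamma_R[-1, -1] = gamma
    return np.trace(Gamma_L @ G @ Gamma_R @ G.conj().T).real

print(transmission(0.3))
```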

  3. Electron-Muon Ranger: performance in the MICE Muon Beam

    CERN Document Server

    Adams, D.; Vankova-Kirilova, G.; Bertoni, R.; Bonesini, M.; Chignoli, F.; Mazza, R.; Palladino, V.; de Bari, A.; Cecchet, G.; Capponi, M.; Iaciofano, A.; Orestano, D.; Pastore, F.; Tortora, L.; Kuno, Y.; Sakamoto, H.; Ishimoto, S.; Filthaut, F.; Hansen, O.M.; Ramberger, S.; Vretenar, M.; Asfandiyarov, R.; Bene, P.; Blondel, A.; Cadoux, F.; Debieux, S.; Drielsma, F.; Graulich, J.S.; Husi, C.; Karadzhov, Y.; Masciocchi, F.; Nicola, L.; Messomo, E.Noah; Rothenfusser, K.; Sandstrom, R.; Wisting, H.; Charnley, G.; Collomb, N.; Gallagher, A.; Grant, A.; Griffiths, S.; Hartnett, T.; Martlew, B.; Moss, A.; Muir, A.; Mullacrane, I.; Oates, A.; Owens, P.; Stokes, G.; Warburton, P.; White, C.; Adams, D.; Barclay, P.; Bayliss, V.; Bradshaw, T.W.; Courthold, M.; Francis, V.; Fry, L.; Hayler, T.; Hills, M.; Lintern, A.; Macwaters, C.; Nichols, A.; Preece, R.; Ricciardi, S.; Rogers, C.; Stanley, T.; Tarrant, J.; Watson, S.; Wilson, A.; Bayes, R.; Nugent, J.C.; Soler, F.J.P.; Cooke, P.; Gamet, R.; Alekou, A.; Apollonio, M.; Barber, G.; Colling, D.; Dobbs, A.; Dornan, P.; Hunt, C.; Lagrange, J-B.; Long, K.; Martyniak, J.; Middleton, S.; Pasternak, J.; Santos, E.; Savidge, T.; Uchida, M.A.; Blackmore, V.J.; Carlisle, T.; Cobb, J.H.; Lau, W.; Rayner, M.A.; Tunnell, C.D.; Booth, C.N.; Hodgson, P.; Langlands, J.; Nicholson, R.; Overton, E.; Robinson, M.; Smith, P.J.; Dick, A.; Ronald, K.; Speirs, D.; Whyte, C.G.; Young, A.; Boyd, S.; Franchini, P.; Greis, J.; Pidcott, C.; Taylor, I.; Gardener, R.; Kyberd, P.; Littlefield, M.; Nebrensky, J.J.; Bross, A.D.; Fitzpatrick, T.; Leonova, M.; Moretti, A.; Neuffer, D.; Popovic, M.; Rubinov, P.; Rucinski, R.; Roberts, T.J.; Bowring, D.; DeMello, A.; Gourlay, S.; Li, D.; Prestemon, S.; Virostek, S.; Zisman, M.; Hanlet, P.; Kafka, G.; Kaplan, D.M.; Rajaram, D.; Snopok, P.; Torun, Y.; Blot, S.; Kim, Y.K.; Bravar, U.; Onel, Y.; Cremaldi, L.M.; Hart, T.L.; Luo, T.; Sanders, D.A.; Summers, D.J.; Cline, D.; Yang, X.; Coney, L.; Hanson, G.G.; Heidt, C.

    2015-12-16

    The Muon Ionization Cooling Experiment (MICE) will perform a detailed study of ionization cooling to evaluate the feasibility of the technique. To carry out this program, MICE requires an efficient particle-identification (PID) system to identify muons. The Electron-Muon Ranger (EMR) is a fully-active tracking-calorimeter that forms part of the PID system and tags muons that traverse the cooling channel without decaying. The detector is capable of identifying electrons with an efficiency of 98.6%, providing a purity for the MICE beam that exceeds 99.8%. The EMR also proved to be a powerful tool for the reconstruction of muon momenta in the range 100-280 MeV/$c$.

  4. Electron-muon ranger: performance in the MICE muon beam

    International Nuclear Information System (INIS)

    Adams, D.; Barclay, P.; Bayliss, V.; Bradshaw, T.W.; Alekou, A.; Apollonio, M.; Barber, G.; Asfandiyarov, R.; Bene, P.; Blondel, A.; De Bari, A.; Bayes, R.; Bertoni, R.; Bonesini, M.; Blackmore, V.J.; Blot, S.; Bogomilov, M.; Booth, C.N.; Bowring, D.; Boyd, S.

    2015-01-01

    The Muon Ionization Cooling Experiment (MICE) will perform a detailed study of ionization cooling to evaluate the feasibility of the technique. To carry out this program, MICE requires an efficient particle-identification (PID) system to identify muons. The Electron-Muon Ranger (EMR) is a fully-active tracking-calorimeter that forms part of the PID system and tags muons that traverse the cooling channel without decaying. The detector is capable of identifying electrons with an efficiency of 98.6%, providing a purity for the MICE beam that exceeds 99.8%. The EMR also proved to be a powerful tool for the reconstruction of muon momenta in the range 100–280 MeV/c

  5. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Barker, Ashley D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bland, Arthur S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Gary, Jeff D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Hack, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; McNally, Stephen T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Rogers, James H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Smith, Brian E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Straatsma, T. P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Sukumar, Sreenivas Rangan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Thach, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Tichenor, Suzy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Wells, Jack C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility

    2016-03-01

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution Cybershake map for Southern

  6. Electronic theodolite intersection systems

    OpenAIRE

    Bingley, R. M.

    1990-01-01

    The development of electronic surveying instruments, such as electronic theodolites, and concurrent advances in computer technology, has revolutionised engineering surveying; one of the more recent examples being the introduction of Electronic Theodolite Intersection Systems (ETISs). An ETIS consists of two or more electronic theodolites and a computer, with peripheral hardware and suitable software. The theoretical principles on which they are based have been known for a long time, but ...
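
    The underlying intersection computation is classical triangulation: with a known baseline between two theodolite stations and the horizontal angles measured at each station, the target position follows from the sine rule. A minimal sketch with hypothetical readings:

```python
# Two-theodolite intersection: station A at the origin, station B on the
# x-axis at the baseline distance; alpha and beta are the horizontal angles
# measured from the baseline at A and B. Readings below are hypothetical.
import math

def intersect(baseline, alpha_deg, beta_deg, v_deg=0.0):
    """Return target (x, y, z) coordinates."""
    a, b = math.radians(alpha_deg), math.radians(beta_deg)
    dist_a = baseline * math.sin(b) / math.sin(a + b)   # horizontal range from A (sine rule)
    x = dist_a * math.cos(a)
    y = dist_a * math.sin(a)
    z = dist_a * math.tan(math.radians(v_deg))          # height from the vertical angle at A
    return x, y, z

print(intersect(baseline=5.000, alpha_deg=62.35, beta_deg=48.90, v_deg=3.10))
```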

  7. DEISA2: supporting and developing a European high-performance computing ecosystem

    International Nuclear Information System (INIS)

    Lederer, H

    2008-01-01

    The DEISA Consortium has deployed and operated the Distributed European Infrastructure for Supercomputing Applications. Through the EU FP7 DEISA2 project (funded for three years as of May 2008), the consortium is continuing to support and enhance the distributed high-performance computing infrastructure and its activities and services relevant for applications enabling, operation, and technologies, as these are indispensable for the effective support of computational sciences for high-performance computing (HPC). The service-provisioning model will be extended from one that supports single projects to one supporting virtual European communities. Collaborative activities will also be carried out with new European and other international initiatives. Of strategic importance is cooperation with the PRACE project, which is preparing for the installation of a limited number of leadership-class Tier-0 supercomputers in Europe. The key role and aim of DEISA will be to deliver a turnkey operational solution for a persistent European HPC ecosystem that will integrate national Tier-1 centers and the new Tier-0 centers

  8. Effectiveness and cost-effectiveness of computer and other electronic aids for smoking cessation: a systematic review and network meta-analysis.

    Science.gov (United States)

    Chen, Y-F; Madan, J; Welton, N; Yahaya, I; Aveyard, P; Bauld, L; Wang, D; Fry-Smith, A; Munafò, M R

    2012-01-01

    of ongoing studies including National Institute for Health Research (NIHR) Clinical Research Network Portfolio Database, Current Controlled Trials and ClinicalTrials.gov were also searched, and further information was sought from contacts with experts. Randomised controlled trials (RCTs) and quasi-RCTs evaluating smoking cessation programmes that utilise computer, internet, mobile telephone or other electronic aids in adult smokers were included in the effectiveness review. Relevant studies of other design were included in the cost-effectiveness review and supplementary review. Pair-wise meta-analyses using both random- and fixed-effects models were carried out. Bayesian mixed-treatment comparisons (MTCs) were also performed. A de novo decision-analytical model was constructed for estimating the cost-effectiveness of interventions. Expected value of perfect information (EVPI) was calculated. Narrative synthesis of key themes and issues that may influence the acceptability and usability of electronic aids was provided in the supplementary review. This effectiveness review included 60 RCTs/quasi-RCTs reported in 77 publications. Pooled estimate for prolonged abstinence [relative risk (RR) = 1.32, 95% confidence interval (CI) 1.21 to 1.45] and point prevalence abstinence (RR = 1.14, 95% CI 1.07 to 1.22) suggested that computer and other electronic aids increase the likelihood of cessation compared with no intervention or generic self-help materials. There was no significant difference in effect sizes between aid to cessation studies (which provide support to smokers who are ready to quit) and cessation induction studies (which attempt to encourage a cessation attempt in smokers who are not yet ready to quit). Results from MTC also showed small but significant intervention effect (time to relapse, mean hazard ratio 0.87, 95% credible interval 0.83 to 0.92). Cost-threshold analyses indicated some form of electronic intervention is likely to be cost-effective when added to
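
    For orientation, the simplest of the pooling methods reported above is fixed-effect inverse-variance pooling of log relative risks. The sketch below uses hypothetical trial values and does not reproduce the random-effects or Bayesian mixed-treatment-comparison models of the review.

```python
# Fixed-effect (inverse-variance) pooling of trial relative risks on the log
# scale. The trial relative risks and confidence intervals are hypothetical
# placeholders, not data from the review.
import math

def pool_rr(trials):
    """trials: list of (rr, ci_low, ci_high) per study, with 95% confidence intervals."""
    weights, weighted_logs = [], []
    for rr, lo, hi in trials:
        se = (math.log(hi) - math.log(lo)) / (2 * 1.96)   # standard error of log(RR)
        w = 1.0 / se ** 2
        weights.append(w)
        weighted_logs.append(w * math.log(rr))
    pooled_log = sum(weighted_logs) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    ci = (math.exp(pooled_log - 1.96 * pooled_se), math.exp(pooled_log + 1.96 * pooled_se))
    return math.exp(pooled_log), ci

print(pool_rr([(1.25, 1.05, 1.49), (1.40, 1.10, 1.78), (1.10, 0.92, 1.32)]))
```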

  9. High performance computations using dynamical nucleation theory

    International Nuclear Information System (INIS)

    Windus, T L; Crosby, L D; Kathmann, S M

    2008-01-01

    Chemists continue to explore the use of very large computations to perform simulations that describe the molecular level physics of critical challenges in science. In this paper, we describe the Dynamical Nucleation Theory Monte Carlo (DNTMC) model - a model for determining molecular scale nucleation rate constants - and its parallel capabilities. The potential for bottlenecks and the challenges to running on future petascale or larger resources are delineated. A 'master-slave' solution is proposed to scale to the petascale and will be developed in the NWChem software. In addition, mathematical and data analysis challenges are described
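
    The proposed 'master-slave' decomposition exploits the fact that the Monte Carlo work units are independent, so a master process can distribute them and aggregate the results. The following is a toy multiprocessing sketch of that pattern, not the NWChem implementation.

```python
# Toy master-worker ("master-slave") pattern: a master hands out independent
# Monte Carlo work units to a pool of workers and aggregates the results.
# Generic multiprocessing sketch, not the NWChem DNTMC code.
import random
from multiprocessing import Pool

def work_unit(seed, n_samples=100_000):
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n_samples))
    return hits, n_samples

if __name__ == "__main__":
    with Pool(processes=4) as pool:                 # 4 workers, hypothetical
        results = pool.map(work_unit, range(16))    # 16 independent work units
    hits = sum(h for h, _ in results)
    total = sum(n for _, n in results)
    print("pi estimate from aggregated work units:", 4 * hits / total)
```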

  10. Monte Carlo computation of Bremsstrahlung intensity and energy spectrum from a 15 MV linear electron accelerator tungsten target to optimise LINAC head shielding

    International Nuclear Information System (INIS)

    Biju, K.; Sharma, Amiya; Yadav, R.K.; Kannan, R.; Bhatt, B.C.

    2003-01-01

    Knowledge of the exact photon intensity and energy distributions from the target of an electron accelerator is necessary when designing the shielding for the accelerator head from a radiation safety point of view. The computations were carried out for the intensity and energy distribution of the photon spectrum from a 0.4 cm thick tungsten target in different angular directions for 15 MeV electrons using the validated Monte Carlo code MCNP4A. Similar results were computed for 30 MeV electrons and found to agree with the data available in the literature. These graphs and the TVT values in lead help to suggest an optimum shielding thickness for a 15 MV linac head. (author)
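
    Once the tenth-value thickness (TVT) is known, the required shield thickness follows directly as d = TVT × log10(required attenuation factor). The numbers in the sketch below are hypothetical placeholders, not values from the paper.

```python
# Shield thickness from the tenth-value thickness (TVT):
#   d = TVT * log10(unshielded dose / dose limit).
# The dose rates and TVT value below are hypothetical placeholders.
import math

def shield_thickness(dose_unshielded, dose_limit, tvt_cm):
    attenuation = dose_unshielded / dose_limit
    return tvt_cm * math.log10(attenuation)

# Example: reduce an assumed 2.0e3 mSv/yr at the point of interest to 1.0 mSv/yr
# with an assumed TVT of 5.5 cm of lead for the leakage spectrum.
print(f"{shield_thickness(2.0e3, 1.0, 5.5):.1f} cm of lead")
```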

  11. High-performance secure multi-party computation for data mining applications

    DEFF Research Database (Denmark)

    Bogdanov, Dan; Niitsoo, Margus; Toft, Tomas

    2012-01-01

    Secure multi-party computation (MPC) is a technique well suited for privacy-preserving data mining. Even with the recent progress in two-party computation techniques such as fully homomorphic encryption, general MPC remains relevant as it has shown promising performance metrics in real...... operations such as multiplication and comparison. Secondly, the confidential processing of financial data requires the use of more complex primitives, including a secure division operation. This paper describes new protocols in the Sharemind model for secure multiplication, share conversion, equality, bit...
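
    As background, Sharemind-style protocols represent each value as additive shares over a ring, so that addition can be done locally while multiplication, comparison and division need interactive protocols. The sketch below shows only the additive-sharing representation with hypothetical values; the interactive protocols described above are not reproduced.

```python
# Three-party additive secret sharing over the ring Z_{2^32}: a value is split
# into three random shares, reconstruction sums them, and addition of two
# secrets is done locally on the shares. Illustrative only; the interactive
# multiplication/comparison/division protocols are far more involved.
import secrets

MOD = 2 ** 32

def share(value):
    s1, s2 = secrets.randbelow(MOD), secrets.randbelow(MOD)
    s3 = (value - s1 - s2) % MOD
    return [s1, s2, s3]

def reconstruct(shares):
    return sum(shares) % MOD

a, b = share(1234), share(5678)
c = [(x + y) % MOD for x, y in zip(a, b)]   # each party adds its own shares
print(reconstruct(c))                        # -> 6912
```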

  12. An electronic edition of eighteenth-century drama: the materiality of editing in performance

    OpenAIRE

    Pinto, Isabel

    2016-01-01

    In the domain of electronic edition, drama’s specificity has been considered in terms of metadata improvements and possibilities. At the same time, an increasing closeness between art history research and performance art has demonstrated its methodological value to assess the complex nature of the archive. My post-doctoral research follows the lead and goes as far as proposing that performance art can be an adequate methodology when preparing the electronic edition of eighteenth-century drama...

  13. 14th annual Results and Review Workshop on High Performance Computing in Science and Engineering

    CERN Document Server

    Nagel, Wolfgang E; Resch, Michael M; Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2011; High Performance Computing in Science and Engineering '11

    2012-01-01

    This book presents the state-of-the-art in simulation on supercomputers. Leading researchers present results achieved on systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2011. The reports cover all fields of computational science and engineering, ranging from CFD to computational physics and chemistry, to computer science, with a special emphasis on industrially relevant applications. Presenting results for both vector systems and microprocessor-based systems, the book allows readers to compare the performance levels and usability of various architectures. As HLRS

  14. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools

  15. Data mining technique for a secure electronic payment transaction using MJk-RSA in mobile computing

    Science.gov (United States)

    G. V., Ramesh Babu; Narayana, G.; Sulaiman, A.; Padmavathamma, M.

    2012-04-01

    Due to the evolution of electronic learning (E-Learning), one can easily get desired information on a computer or mobile system connected through the Internet. Currently, E-Learning materials are easily accessible on desktop computer systems, but in future most of the information will also be available on small digital devices such as mobile phones, PDAs, etc. Most E-Learning materials are paid for, and the customer has to pay the entire amount through a credit/debit card system. Therefore, it is very important to study the security of the credit/debit card numbers. The present paper is an attempt in this direction, and a security technique is presented to secure the credit/debit card numbers supplied over the Internet to access E-Learning materials or for any other kind of purchase over the Internet. A well-known method, the Data Cube Technique, is used to design the security model of the credit/debit card system. The major objective of this paper is to design a practical electronic payment protocol offering the safest and most secure mode of transaction. This technique may reduce fake transactions, which are above 20% at the global level.

  16. Computer control of the high-voltage power supply for the DIII-D electron cyclotron heating system

    International Nuclear Information System (INIS)

    Clow, D.D.; Kellman, D.H.

    1992-01-01

    This paper reports on the DIII-D Electron Cyclotron Heating (ECH) high voltage power supply which is controlled by a computer. Operational control is input via keyboard and mouse, and the computer/power supply interface is accomplished with a Computer Assisted Monitoring and Control (CAMAC) system. User-friendly tools allow the design and layout of simulated control panels on the computer screen. Panel controls and indicators can be changed, added or deleted, and simple editing of user-specific processes can quickly modify control and fault logic. Databases can be defined, and control panel functions are easily referred to various data channels. User-specific processes are written and linked using Fortran, to manage control and data acquisition through CAMAC. The resulting control system has significant advantages over the hardware it emulates: changes in logic, layout, and function are quickly and easily incorporated; data storage, retrieval, and processing are flexible and simply accomplished; physical components subject to wear and degradation are minimized. In addition, the system can be expanded to multiplex control of several power supplies, each with its own database, through a single computer console

  17. Construction of Blaze at the University of Illinois at Chicago: A Shared, High-Performance, Visual Computer for Next-Generation Cyberinfrastructure-Accelerated Scientific, Engineering, Medical and Public Policy Research

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Maxine D. [Acting Director, EVL; Leigh, Jason [PI

    2014-02-17

    The Blaze high-performance visual computing system serves the high-performance computing research and education needs of the University of Illinois at Chicago (UIC). Blaze consists of a state-of-the-art, networked computer cluster and an ultra-high-resolution visualization system called CAVE2(TM) that is currently not available anywhere else in Illinois. This system is connected via a high-speed 100-Gigabit network to the State of Illinois' I-WIRE optical network, as well as to national and international high speed networks, such as the Internet2 and the Global Lambda Integrated Facility. This enables Blaze to serve as an on-ramp to national cyberinfrastructure, such as the National Science Foundation’s Blue Waters petascale computer at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign and the Department of Energy’s Argonne Leadership Computing Facility (ALCF) at Argonne National Laboratory. DOE award # DE-SC005067, leveraged with NSF award #CNS-0959053 for “Development of the Next-Generation CAVE Virtual Environment (NG-CAVE),” enabled us to create a first-of-its-kind high-performance visual computing system. The UIC Electronic Visualization Laboratory (EVL) worked with two U.S. companies to advance their commercial products and maintain U.S. leadership in the global information technology economy. New applications are being enabled with the CAVE2/Blaze visual computing system that is advancing scientific research and education in the U.S. and globally, and helping to train the next-generation workforce.

  18. A far-infrared Michelson interferometer for tokamak electron density measurements using computer-generated reference fringes

    International Nuclear Information System (INIS)

    Krug, P.A.; Stimson, P.A.; Falconer, I.S.

    1986-01-01

    A simple far-infrared interferometer which uses the 394 μm laser line from optically pumped formic acid vapour to measure tokamak electron density is described. This interferometer is unusual in requiring only one detector and a single probing beam, since reference fringes during the plasma shot are obtained by computer interpolation between the fringes observed immediately before and after the shot. Electron density has been measured with a phase resolution corresponding to a ±1/20 wavelength fringe shift, which is equivalent to a central density resolution of ±0.1 x 10^19 m^-3 for an assumed parabolic density distribution in a plasma 0.2 m in diameter, and with a time resolution of 0.2 ms. (author)
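
    As a hedged aside (not part of the record): in single-wavelength plasma interferometry of this kind, the measured phase shift is usually related to the line-integrated electron density by the standard dispersion relation below, so a fringe-shift resolution can be converted into a density resolution under the stated parabolic-profile assumption.

        \Delta\phi \;\simeq\; r_e\,\lambda \int n_e\,dl,
        \qquad r_e = \frac{e^2}{4\pi\varepsilon_0 m_e c^2} \approx 2.82\times10^{-15}\ \mathrm{m}

    With lambda = 394 μm and a resolution of 1/20 of a fringe (Δφ = 2π/20), this gives ∫ n_e dl ≈ Δφ/(r_e λ) ≈ 3 x 10^17 m^-2, which can then be mapped to a central density for an assumed profile shape; exact numerical agreement with the figure quoted in the abstract is not claimed here.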

  19. High Performance Spaceflight Computing (HPSC)

    Data.gov (United States)

    National Aeronautics and Space Administration — Space-based computing has not kept up with the needs of current and future NASA missions. We are developing a next-generation flight computing system that addresses...

  20. A computational perspective of vibrational and electronic analysis of potential photosensitizer 2-chlorothioxanthone

    Science.gov (United States)

    Ali, Narmeen; Mansha, Asim; Asim, Sadia; Zahoor, Ameer Fawad; Ghafoor, Sidra; Akbar, Muhammad Usman

    2018-03-01

    This paper deals with a combined theoretical and experimental study of the geometric, electronic and vibrational properties of the 2-chlorothioxanthone (CTX) molecule, which is a potential photosensitizer. The FT-IR spectrum of CTX in the solid phase was recorded in the 4000-400 cm^-1 region. The UV-Vis absorption spectrum was also recorded in the laboratory as well as computed at the DFT/B3LYP level in five different phases, viz. gas, water, DMSO, acetone and ethanol. Quantum-mechanical theoretical IR and Raman spectra were also calculated for the title compound employing HF and DFT functionals with the 3-21G+, 6-31G+, 6-311G+ and 6-311G++ basis sets, respectively, and each vibrational frequency was assigned on the basis of the potential energy distribution (PED). A comparison has been made between the theoretical and experimental vibrational spectra as well as the UV-Vis absorption spectra; the DFT-computed infrared and Raman spectra agree with the experimental spectra and support a reliable vibrational assignment based on PED. The calculated electronic properties, the results of natural bond orbital (NBO) analysis, the charge distribution, the dipole moment and the energies are reported in the paper. Bimolecular quenching of the triplet state of CTX in the presence of triethylamine, 2-propanol triethylamine and diazabicyclooctane (DABCO) reflects the interactions between them. The bimolecular quenching rate constant is fastest for the interaction of ³CTX with DABCO, reflecting their stronger interaction.
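
    The record does not say which quantum chemistry package was used; as an illustrative sketch only, the following Python/Psi4 snippet shows a generic B3LYP geometry optimization followed by a harmonic frequency calculation of the kind described, using water as a hypothetical stand-in geometry (the actual CTX structure, solvent models and PED analysis are not reproduced).

        import psi4

        psi4.set_memory("2 GB")
        psi4.set_output_file("b3lyp_freq.out", False)

        # Hypothetical placeholder geometry (water); a real study would supply
        # the 2-chlorothioxanthone coordinates here instead.
        mol = psi4.geometry("""
        0 1
        O  0.000  0.000  0.117
        H  0.000  0.757 -0.467
        H  0.000 -0.757 -0.467
        """)

        psi4.set_options({"basis": "6-31+G*"})

        # Optimize the geometry, then compute harmonic vibrational frequencies
        # at the DFT/B3LYP level, analogous in spirit to the workflow above.
        psi4.optimize("b3lyp")
        energy, wfn = psi4.frequency("b3lyp", return_wfn=True)
        print(wfn.frequencies().to_array())  # harmonic frequencies in cm^-1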

  1. Impedance computations and beam-based measurements: A problem of discrepancy

    Science.gov (United States)

    Smaluk, Victor

    2018-04-01

    High intensity of particle beams is crucial for high-performance operation of modern electron-positron storage rings, both colliders and light sources. The beam intensity is limited by the interaction of the beam with self-induced electromagnetic fields (wake fields) proportional to the vacuum chamber impedance. For a new accelerator project, the total broadband impedance is computed by element-wise wake-field simulations using computer codes. For a machine in operation, the impedance can be measured experimentally using beam-based techniques. In this article, a comparative analysis of impedance computations and beam-based measurements is presented for 15 electron-positron storage rings. The measured data and the predictions based on the computed impedance budgets show a significant discrepancy. Three possible reasons for the discrepancy are discussed: interference of the wake fields excited by a beam in adjacent components of the vacuum chamber, effect of computation mesh size, and effect of insufficient bandwidth of the computed impedance.
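
    The record itself gives no formulas; for orientation only, the broadband longitudinal impedance referred to here is commonly defined as the Fourier transform of the wake function (sign and normalization conventions vary between texts):

        Z_\parallel(\omega) \;=\; \frac{1}{c}\int_{-\infty}^{\infty} W_\parallel(s)\, e^{-i\omega s/c}\, ds

    Beam-based measurements typically yield an effective impedance weighted by the bunch spectrum, so the finite bandwidth of the computed impedance matters when the two are compared, consistent with the third reason for discrepancy discussed in the abstract.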

  2. An empirical model to describe performance degradation for warranty abuse detection in portable electronics

    International Nuclear Information System (INIS)

    Oh, Hyunseok; Choi, Seunghyuk; Kim, Keunsu; Youn, Byeng D.; Pecht, Michael

    2015-01-01

    Portable electronics makers have introduced liquid damage indicators (LDIs) into their products to detect warranty abuse caused by water damage. However, under certain conditions these indicators can detect liquid damage inconsistently. This study is motivated by the fact that the reliability of LDIs in portable electronics is therefore in question. In this paper, first, a life-test scheme is devised for LDIs in conjunction with a robust color classification rule. Second, a degradation model is proposed that considers two physical mechanisms for LDIs: (1) phase change from vapor to water and (2) water transport in the porous paper. Finally, the degradation model is validated with additional tests using actual smartphone sets subjected to thermal cycling from −15 °C to 25 °C at a relative humidity of 95%. By employing the innovative life-testing scheme and the novel performance degradation model, it is expected that the performance of LDIs for a particular application can be assessed quickly and accurately. - Highlights: • Devise an efficient scheme of life testing for a warranty abuse detector in portable electronics. • Develop a performance degradation model for the warranty abuse detector used in portable electronics. • Validate the performance degradation model with life tests of actual smartphone sets. • Help make a decision on warranty service in portable electronics manufacturers

  3. Plasma Wind Tunnel Testing of Electron Transpiration Cooling Concept

    Science.gov (United States)

    2017-02-28

    Colorado State University. Acronyms defined in the report include ETC (Electron Transpiration Cooling), LHTS (Local Heat Transfer Simulation), LTE (Local Thermodynamic Equilibrium) and RCC (Reinforced ...). The work covers ceramic electric material testing in a plasma environment (not performed), and measurements and analysis of the Electron Transpiration Cooling concept (Sec. 4.2). The VKI 1D boundary layer code is used for computation of enthalpy and boundary layer parameters: a) iterate on the 'virtually measured' heat flux; b) once enthalpy ...

  4. Characteristics and performances of electronic personal dosemeters

    International Nuclear Information System (INIS)

    Aubert, B.

    2002-01-01

    For the past two years, regulations have required that the radiation doses actually received during an operation be measured and analysed. This set of continuous measurements with immediate readout is referred to as operational dosimetry, and it is carried out by means of personal electronic dosemeters. This study reviews the legislation relating to this type of dosimetry and the requirements of the medical environment, and presents an assessment of the characteristics and performance of the devices available on the French market at the beginning of 2002, based on the information provided by the various manufacturers. (author)

  5. Electronic Nose and Electronic Tongue

    Science.gov (United States)

    Bhattacharyya, Nabarun; Bandhopadhyay, Rajib

    Human beings have five senses, namely vision, hearing, touch, smell and taste. Sensors for vision, hearing and touch have been under development for many years. The need for sensors capable of mimicking the senses of smell and taste has been felt only recently, in the food industry, in environmental monitoring and in several industrial applications. In the ever-widening horizon of frontier research in the field of electronics and advanced computing, the emergence of the electronic nose (E-Nose) and electronic tongue (E-Tongue) has been drawing the attention of scientists and technologists for more than a decade. By intelligent integration of a multitude of technologies such as chemometrics, microelectronics and advanced soft computing, human olfaction has been successfully mimicked by new techniques collectively called machine olfaction (Pearce et al. 2002). The very essence of such research and development efforts has centered on the development of customized electronic nose and electronic tongue solutions specific to individual applications. In fact, research trends to date clearly point to the fact that a machine olfaction system as versatile, universal and broadband as the human nose and tongue may not be feasible in the decades to come, but application-specific solutions can certainly be demonstrated and commercialized by modulating sensor design and fine-tuning the soft computing solutions. This chapter deals with the theory, development and applications of E-Nose and E-Tongue technology. A succinct account of future trends of R&D efforts in this field, with the objective of establishing a correlation between machine olfaction and human perception, is also included.
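
    As an illustrative sketch only (not taken from the chapter), the following Python example shows a minimal chemometrics-style pattern-recognition pipeline of the kind used in E-Nose work: synthetic stand-in responses of an assumed 8-element gas-sensor array are compressed with PCA and classified with a simple nearest-neighbour stage.

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.pipeline import make_pipeline
        from sklearn.model_selection import cross_val_score

        # Synthetic data: each row is one "sniff", i.e. the steady-state responses
        # of an 8-element sensor array; three odour classes are simulated.
        rng = np.random.default_rng(1)
        X = np.vstack([rng.normal(loc=c, scale=0.3, size=(30, 8))
                       for c in (0.5, 1.0, 1.5)])
        y = np.repeat([0, 1, 2], 30)

        # Chemometrics-style pipeline: dimensionality reduction followed by a
        # simple pattern-recognition (classification) stage.
        model = make_pipeline(PCA(n_components=3), KNeighborsClassifier(n_neighbors=5))
        print("cross-validated accuracy:", cross_val_score(model, X, y, cv=5).mean())

    In practice the sensor set, preprocessing and classifier would all be tuned to the specific application, which is exactly the application-specific customization the chapter emphasizes.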

  6. Performing three-dimensional neutral particle transport calculations on tera scale computers

    International Nuclear Information System (INIS)

    Woodward, C.S.; Brown, P.N.; Chang, B.; Dorr, M.R.; Hanebutte, U.R.

    1999-01-01

    A scalable, parallel code system to perform neutral particle transport calculations in three dimensions is presented. To utilize the hyper-cluster architecture of emerging tera scale computers, the parallel code successfully combines MPI message passing with complementary parallel programming paradigms. The code's capabilities are demonstrated by a shielding calculation containing over 14 billion unknowns. This calculation was accomplished on the IBM SP "ASCI Blue-Pacific" computer located at Lawrence Livermore National Laboratory (LLNL).

  7. Facilitating the design and operation of computer-controlled radiochemistry synthesizers with an "Electronic Toolbox"

    International Nuclear Information System (INIS)

    Feliu, A.L.

    1991-01-01

    Positron emission tomography (PET) is a non-invasive diagnostic imaging technique requiring rapid and reliable radiopharmaceutical production. Automated systems offer a host of potential advantages over manually or remotely operated apparatus, including reduced personnel requirements, lower radiation exposure to personnel, reliable yields, and reproducible product purity. However, the burden of routine radiopharmaceutical production most often remains a labor-intensive responsibility of highly trained radiochemists. In order to ease the transition between manual, remote-controlled, and computer-controlled radiochemical synthesis, an electronic toolbox with a graphical user interface was developed as a generic process control system compatible with a variety of common radiochemical operations. This work is specifically aimed at making automated techniques more accessible by emphasizing the similarities between manual and automated chemistry and by minimizing the computer programming effort required. This paper discusses the structural elements of the electronic toolbox approach to radiochemistry process control and its ramifications for the designers and end-users of automated synthesizers.

  8. Running Interactive Jobs on Peregrine | High-Performance Computing | NREL

    Science.gov (United States)

    This page provides instructions and examples for running interactive jobs on Peregrine. An interactive job provides a shell prompt on a compute node, which allows users to execute commands and scripts, start GUIs, etc., as they would on the login nodes, with the work performed on the compute nodes rather than on the login nodes; the commands execute on the allocated compute node instead of on the login node. The -V option ...

  9. Electron cloud and ion effects

    CERN Document Server

    Arduini, Gianluigi

    2002-01-01

    The significant progress in the understanding and control of machine impedances has made it possible to obtain beams of increasing brilliance. Dense positively charged beams generate electron clouds via gas ionization, photoemission and multipacting. The electron cloud in turn interacts with the beam and the surrounding environment, giving rise to fast coupled-bunch and single-bunch instabilities, emittance blow-up, additional loads on the vacuum and cryogenic systems, and perturbations of beam diagnostics and feedbacks, and it constitutes a serious limitation to machine performance. In a similar way, high-brilliance electron beams are mainly affected by positively charged ions produced by residual gas ionization. Recent observations of electron cloud build-up and its effects in present accelerators are reviewed and compared with theory and with the results of state-of-the-art computer simulations. Two-stream instabilities induced by the interaction between electron beams and ions are discussed. The implications for future accelerators ...

  10. Experimental study of matrix carbon field-emission cathodes and computer aided design of electron guns for microwave power devices, exploring these cathodes

    International Nuclear Information System (INIS)

    Grigoriev, Y.A.; Petrosyan, A.I.; Penzyakov, V.V.; Pimenov, V.G.; Rogovin, V.I.; Shesterkin, V.I.; Kudryashov, V.P.; Semyonov, V.C.

    1997-01-01

    The experimental study of matrix carbon field-emission cathodes (MCFECs), which has led to stable operation of the cathodes with emission currents up to 100 mA, is described. A method of computer-aided design of TWT electron guns (EGs) with MCFECs, based on the results of the MCFEC emission experiments, is presented. The experimental MCFEC emission characteristics are used to define the field gain coefficient K and the cathode effective emission area S_eff. The EG program computes the electric field on the MCFEC surface, multiplies it by the K value and uses the Fowler-Nordheim law together with the S_eff value to calculate the MCFEC current; the electron trajectories are computed as well. copyright 1997 American Vacuum Society
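
    For reference (a hedged sketch, not quoted from the paper), the elementary Fowler-Nordheim expression that such a current calculation is typically based on, with the macroscopic field E enhanced by the gain coefficient K and the current obtained from the effective emission area, is

        J \;=\; \frac{a\,(K E)^2}{\phi}\,\exp\!\left(-\,\frac{b\,\phi^{3/2}}{K E}\right),
        \qquad I \;=\; J\,S_{\mathrm{eff}},

    with a ≈ 1.54 x 10^-6 A eV V^-2, b ≈ 6.83 x 10^9 eV^-3/2 V m^-1, the field in V/m and the work function φ in eV; image-charge (Nordheim) corrections are neglected in this elementary form.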

  11. Computational algorithms for analysis of data from thin-film thermoresistors on a radio-electronic printed circuit board

    International Nuclear Information System (INIS)

    Korneeva, Anna; Shaydurov, Vladimir

    2016-01-01

    In this paper, the analysis of data from thin-film thermoresistors coated onto a radio-electronic printed circuit board is considered, in order to determine possible zones of overheating. The mathematical model consists of an underdetermined system of linear algebraic equations with an infinite set of solutions. To compute a more physically realistic solution, two additional conditions are used: the smoothness of the solution and the positivity of the temperature increase during overheating. Computational experiments demonstrate that an overheating zone is located correctly, with tolerable accuracy in the temperature within it.
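
    As an illustrative sketch under stated assumptions (a hypothetical sensitivity matrix and an arbitrarily chosen regularization weight, not the authors' actual algorithm), the two additional conditions can be imposed in Python/SciPy by augmenting the underdetermined system with a smoothness operator and solving a bounded least-squares problem:

        import numpy as np
        from scipy.optimize import lsq_linear

        rng = np.random.default_rng(0)
        m, n = 8, 40                                  # fewer sensors than surface cells
        A = np.abs(rng.normal(size=(m, n)))           # hypothetical thermoresistor sensitivities
        t_true = np.zeros(n); t_true[18:23] = 5.0     # a localized overheating zone
        b = A @ t_true                                # simulated sensor readings

        lam = 0.1
        L = np.diff(np.eye(n), 2, axis=0)             # second-difference (smoothness) operator
        A_aug = np.vstack([A, lam * L])
        b_aug = np.concatenate([b, np.zeros(L.shape[0])])

        # Non-negativity encodes "the temperature increase during overheating is positive";
        # the smoothness term selects a smooth solution from the infinite solution set.
        res = lsq_linear(A_aug, b_aug, bounds=(0.0, np.inf))
        print(np.round(res.x, 2))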

  12. Topic 14+16: High-performance and scientific applications and extreme-scale computing (Introduction)

    KAUST Repository

    Downes, Turlough P.

    2013-01-01

    As our understanding of the world around us increases, it becomes more challenging to make use of what we already know, and to increase our understanding still further. Computational modeling and simulation have become critical tools in addressing this challenge. The requirements of high-resolution, accurate modeling have outstripped the ability of desktop computers, and even small clusters, to provide the necessary compute power. Many applications in the scientific and engineering domains now need very large amounts of compute time, while other applications, particularly in the life sciences, frequently have large data I/O requirements. There is thus a growing need for a range of high-performance applications which can utilize parallel compute systems effectively, which have efficient data-handling strategies, and which have the capacity to utilize current and future systems. The High Performance and Scientific Applications topic aims to highlight recent progress in the use of advanced computing and algorithms to address the varied, complex and increasing challenges of modern research throughout both the "hard" and "soft" sciences. This necessitates being able to use large numbers of compute nodes, many of which are equipped with accelerators, and to deal with demanding I/O requirements. © 2013 Springer-Verlag.

  13. High perveance electron gun for the electron cooling system

    International Nuclear Information System (INIS)

    Korotaev, Yu.; Meshkov, I.; Petrov, A.; Sidorin, A.; Smirnov, A.; Syresin, E.; Titkova, I.

    2000-01-01

    The cooling time in the electron cooling system is inversely proportional to the beam current. To obtain a high electron beam current, the control electrode of the gun is given a positive potential, and an electrostatic trap for secondary electrons then appears inside the electron gun. This leads to a decrease in the gun perveance. To avoid this problem, an adiabatic high-perveance electron gun with a clearing control electrode was designed at JINR (J. Bosser, Y. Korotaev, I. Meshkov, E. Syresin et al., Nucl. Instr. and Meth. A 391 (1996) 103; Yu. Korotaev, I. Meshkov, A. Sidorin, A. Smirnov, E. Syresin, The generation of electron beams with perveance of 3-6 μA/V^3/2, Proceedings of SCHEF'99). The clearing control electrode has a transverse electric field, which clears secondary electrons. Computer simulations of the potential map were made with the RELAX3D computer code (C.J. Kost, F.W. Jones, RELAX3D User's Guide and References Manual)
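
    For orientation (not part of the record), the perveance of a space-charge-limited gun is defined as

        P \;=\; \frac{I}{U^{3/2}},

    so, for example, a perveance of 6 μA/V^3/2 at a gun voltage of 3 kV would correspond to a beam current of roughly I = 6 x 10^-6 x (3000)^(3/2) ≈ 1 A.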

  14. High perveance electron gun for the electron cooling system

    CERN Document Server

    Korotaev, Yu V; Petrov, A; Sidorin, A; Smirnov, A; Syresin, E M; Titkova, I

    2000-01-01

    The cooling time in the electron cooling system is inversely proportional to the beam current. To obtain high current of the electron beam the control electrode of the gun is provided with a positive potential and an electrostatic trap for secondary electrons appears inside the electron gun. This leads to a decrease in the gun perveance. To avoid this problem, the adiabatic high perveance electron gun with the clearing control electrode is designed in JINR (J. Bosser, Y. Korotaev, I. Meshkov, E. Syresin et al., Nucl. Instr. and Meth. A 391 (1996) 103. Yu. Korotaev, I. Meshkov, A. Sidorin, A. Smirnov, E. Syresin, The generation of electron beams with perveance of 3-6 μA/V^3/2, Proceedings of SCHEF'99). The clearing control electrode has a transverse electric field, which clears secondary electrons. Computer simulations of the potential map were made with RELAX3D computer code (C.J. Kost, F.W. Jones, RELAX3D User's Guide and References Manual).

  15. Performance characteristics of a Kodak computed radiography system.

    Science.gov (United States)

    Bradford, C D; Peppler, W W; Dobbins, J T

    1999-01-01

    The performance characteristics of a photostimulable phosphor based computed radiographic (CR) system were studied. The modulation transfer function (MTF), noise power spectra (NPS), and detective quantum efficiency (DQE) of the Kodak Digital Science computed radiography (CR) system (Eastman Kodak Co.-model 400) were measured and compared to previously published results of a Fuji based CR system (Philips Medical Systems-PCR model 7000). To maximize comparability, the same measurement techniques and analysis methods were used. The DQE at four exposure levels (30, 3, 0.3, 0.03 mR) and two plate types (standard and high resolution) were calculated from the NPS and MTF measurements. The NPS was determined from two-dimensional Fourier analysis of uniformly exposed plates. The presampling MTF was determined from the Fourier transform (FT) of the system's finely sampled line spread function (LSF) as produced by a narrow slit. A comparison of the slit type ("beveled edge" versus "straight edge") and its effect on the resulting MTF measurements was also performed. The results show that both systems are comparable in resolution performance. The noise power studies indicated a higher level of noise for the Kodak images (approximately 20% at the low exposure levels and 40%-70% at higher exposure levels). Within the clinically relevant exposure range (0.3-3 mR), the resulting DQE for the Kodak plates ranged between 20%-50% lower than for the corresponding Fuji plates. Measurements of the presampling MTF with the two slit types have shown that a correction factor can be applied to compensate for transmission through the relief edges.
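
    For reference (standard definitions, not quoted from the paper; normalization conventions vary between groups), the detective quantum efficiency measured here is the frequency-dependent ratio of output to input signal-to-noise ratios, usually evaluated in the operational form

        \mathrm{DQE}(f) \;=\; \frac{\mathrm{SNR}_{\mathrm{out}}^2(f)}{\mathrm{SNR}_{\mathrm{in}}^2(f)}
        \;=\; \frac{\bar{S}^2\,\mathrm{MTF}^2(f)}{\bar{q}\,\mathrm{NPS}(f)},

    where q̄ is the incident photon fluence (the ideal-detector SNR² per unit area) and S̄ the mean large-area signal, with the NPS expressed in the same signal units; this is how the measured MTF and NPS combine into the DQE values compared above.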

  16. Systems, methods and computer-readable media to model kinetic performance of rechargeable electrochemical devices

    Science.gov (United States)

    Gering, Kevin L.

    2013-01-01

    A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics. The computing system also analyzes the cell information of the electrochemical cell with a Butler-Volmer (BV) expression modified to determine exchange current density of the electrochemical cell by including kinetic performance information related to pulse-time dependence, electrode surface availability, or a combination thereof. A set of sigmoid-based expressions may be included with the modified-BV expression to determine kinetic performance as a function of pulse time. The determined exchange current density may be used with the modified-BV expression, with or without the sigmoid expressions, to analyze other characteristics of the electrochemical cell. Model parameters can be defined in terms of cell aging, making the overall kinetics model amenable to predictive estimates of cell kinetic performance along the aging timeline.
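
    For reference, the unmodified Butler-Volmer relation that the modified-BV expression described above extends (the pulse-time-dependent and sigmoid-based modifications are not reproduced here) is

        i \;=\; i_0\left[\exp\!\left(\frac{\alpha_a F\,\eta}{R T}\right) - \exp\!\left(-\,\frac{\alpha_c F\,\eta}{R T}\right)\right],

    where i_0 is the exchange current density, η the overpotential, α_a and α_c the anodic and cathodic transfer coefficients, F the Faraday constant, R the gas constant and T the temperature.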

  17. Simulation of 10 A electron-beam formation and collection for a high current electron-beam ion source

    International Nuclear Information System (INIS)

    Kponou, A.; Beebe, E.; Pikin, A.; Kuznetsov, G.; Batazova, M.; Tiunov, M.

    1998-01-01

    Presented is a report on the development of an electron-beam ion source (EBIS) for the Relativistic Heavy Ion Collider at Brookhaven National Laboratory (BNL), which requires operation with a 10 A electron beam. This is approximately an order of magnitude higher current than in any existing EBIS device. A test stand where EBIS components will be tested is presently being designed and constructed; it will be reported in a separate paper at this conference. The design of the 10 A electron gun, drift tubes, and electron collector requires extensive computer simulations. Calculations have been performed at Novosibirsk and BNL using two different programs, SAM and EGUN. Results of these simulations will be presented. copyright 1998 American Institute of Physics

  18. Spatial Processing of Urban Acoustic Wave Fields from High-Performance Computations

    National Research Council Canada - National Science Library

    Ketcham, Stephen A; Wilson, D. K; Cudney, Harley H; Parker, Michael W

    2007-01-01

    .... The objective of this work is to develop spatial processing techniques for acoustic wave propagation data from three-dimensional high-performance computations to quantify scattering due to urban...

  19. Modern Electronic Devices: An Increasingly Common Cause of Skin Disorders in Consumers.

    Science.gov (United States)

    Corazza, Monica; Minghetti, Sara; Bertoldi, Alberto Maria; Martina, Emanuela; Virgili, Annarosa; Borghi, Alessandro

    2016-01-01

    The modern conveniences and enjoyment brought about by electronic devices bring with them some health concerns. In particular, personal electronic devices are responsible for a rising number of cases of several skin disorders, including pressure- and friction-induced lesions, contact dermatitis, and other forms of physical dermatitis. The universal use of such devices, whether for work or recreational purposes, will probably increase the occurrence of polymorphous skin manifestations over time. It is important for clinicians to consider electronics as potential sources of dermatological ailments, for proper patient management. We performed a literature review on skin disorders associated with the personal use of modern technology, including personal computers and laptops, personal computer accessories, mobile phones, tablets, video games, and consoles.

  20. Computer-aided Detection of Lung Cancer on Chest Radiographs: Effect on Observer Performance

    NARCIS (Netherlands)

    de Hoop, Bartjan; de Boo, Diederik W.; Gietema, Hester A.; van Hoorn, Frans; Mearadji, Banafsche; Schijf, Laura; van Ginneken, Bram; Prokop, Mathias; Schaefer-Prokop, Cornelia

    2010-01-01

    Purpose: To assess how computer-aided detection (CAD) affects reader performance in detecting early lung cancer on chest radiographs. Materials and Methods: In this ethics committee-approved study, 46 individuals with 49 computed tomographically (CT)-detected and histologically proved lung cancers