WorldWideScience

Sample records for performance computing technology

  1. Teaching Machines, Programming, Computers, and Instructional Technology: The Roots of Performance Technology.

    Science.gov (United States)

    Deutsch, William

    1992-01-01

    Reviews the history of the development of the field of performance technology. Highlights include early teaching machines, instructional technology, learning theory, programmed instruction, the systems approach, needs assessment, branching versus linear program formats, programming languages, and computer-assisted instruction. (LRW)

  2. FY 1995 Blue Book: High Performance Computing and Communications: Technology for the National Information Infrastructure

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — The Federal High Performance Computing and Communications HPCC Program was created to accelerate the development of future generations of high performance computers...

  3. On the impact of quantum computing technology on future developments in high-performance scientific computing

    OpenAIRE

    Möller, Matthias; Vuik, Cornelis

    2017-01-01

    Quantum computing technologies have become a hot topic in academia and industry receiving much attention and financial support from all sides. Building a quantum computer that can be used practically is in itself an outstanding challenge that has become the ‘new race to the moon’. Next to researchers and vendors of future computing technologies, national authorities are showing strong interest in maturing this technology due to its known potential to break many of today’s encryption technique...

  4. On the impact of quantum computing technology on future developments in high-performance scientific computing

    NARCIS (Netherlands)

    Möller, M.; Vuik, C.

    2017-01-01

    Quantum computing technologies have become a hot topic in academia and industry receiving much attention and financial support from all sides. Building a quantum computer that can be used practically is in itself an outstanding challenge that has become the ‘new race to the moon’. Next to

  5. FY 1997 Blue Book: High Performance Computing and Communications: Advancing the Frontiers of Information Technology

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — The Federal High Performance Computing and Communications HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of...

  6. High performance computing and communications: Advancing the frontiers of information technology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

  7. International Conference on Modern Mathematical Methods and High Performance Computing in Science and Technology

    CERN Document Server

    Srivastava, HM; Venturino, Ezio; Resch, Michael; Gupta, Vijay

    2016-01-01

    The book discusses important results in modern mathematical models and high performance computing, such as applied operations research, simulation of operations, statistical modeling and applications, invisibility regions and regular meta-materials, unmanned vehicles, modern radar techniques/SAR imaging, satellite remote sensing, coding, and robotic systems. Furthermore, it is valuable as a reference work and as a basis for further study and research. All contributing authors are respected academicians, scientists and researchers from around the globe. All the papers were presented at the international conference on Modern Mathematical Methods and High Performance Computing in Science & Technology (M3HPCST 2015), held at Raj Kumar Goel Institute of Technology, Ghaziabad, India, from 27–29 December 2015, and peer-reviewed by international experts. The conference provided an exceptional platform for leading researchers, academicians, developers, engineers and technocrats from a broad range of disciplines ...

  8. Information Technology Service Management with Cloud Computing Approach to Improve Administration System and Online Learning Performance

    Directory of Open Access Journals (Sweden)

    Wilianto Wilianto

    2015-10-01

    This work discusses the development of information technology service management using a cloud computing approach to improve the performance of the administration system and online learning at STMIK IBBI Medan, Indonesia. The network topology is modeled and simulated for system administration and online learning. The same network topology is developed in cloud computing using the Amazon AWS architecture. The model is designed and modeled using Riverbed Academic Edition Modeler to obtain values of the parameters: delay, load, CPU utilization, and throughput. The simulation results are as follows. For network topology 1, without cloud computing, the average delay is 54 ms, load 110 000 bits/s, CPU utilization 1.1%, and throughput 440 bits/s. With cloud computing, the average delay is 45 ms, load 2 800 bits/s, CPU utilization 0.03%, and throughput 540 bits/s. For network topology 2, without cloud computing, the average delay is 39 ms, load 3 500 bits/s, CPU utilization 0.02%, and database server throughput 1 400 bits/s. With cloud computing, the average delay is 26 ms, load 5 400 bits/s, CPU utilization 0.0001% for the email server, 0.001% for the FTP server, and 0.0002% for the HTTP server, with throughput of 85 bits/s for the email server, 100 bits/s for the FTP server, and 95 bits/s for the HTTP server. Thus, the delay, the load, and the CPU utilization decrease, while the throughput increases. Information technology service management with the cloud computing approach therefore performs better.
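
    The abstract reports its outcomes only as raw parameter values. As a quick worked check (a minimal C sketch; the figures below are copied from the topology 1 numbers above), the relative changes are:

        #include <stdio.h>

        /* Relative changes for topology 1, before vs. after the cloud
           computing approach; figures taken from the abstract above. */
        int main(void) {
            const double delay_before = 54.0, delay_after = 45.0;  /* ms     */
            const double thr_before = 440.0, thr_after = 540.0;    /* bits/s */

            printf("delay:      %+.1f%%\n",
                   100.0 * (delay_after - delay_before) / delay_before);
            printf("throughput: %+.1f%%\n",
                   100.0 * (thr_after - thr_before) / thr_before);
            return 0;   /* prints roughly -16.7% and +22.7% */
        }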

  9. Additive Manufacturing and High-Performance Computing: a Disruptive Latent Technology

    Science.gov (United States)

    Goodwin, Bruce

    2015-03-01

    This presentation will discuss the relationship between recent advances in Additive Manufacturing (AM) technology, High-Performance Computing (HPC) simulation and design capabilities, and related advances in Uncertainty Quantification (UQ), and then examine their impacts upon national and international security. The presentation surveys how AM accelerates the fabrication process, while HPC combined with UQ provides a fast track for the engineering design cycle. The combination of AM and HPC/UQ almost eliminates the engineering design and prototype iterative cycle, thereby dramatically reducing cost of production and time-to-market. These methods thereby present significant benefits for US national interests, both civilian and military, in an age of austerity. Finally, considering cyber security issues and the advent of the "cloud," these disruptive, currently latent technologies may well enable proliferation and so challenge both nuclear and non-nuclear aspects of international security.

  10. Performance Refactoring of Instrumentation, Measurement, and Analysis Technologies for Petascale Computing. The PRIMA Project

    Energy Technology Data Exchange (ETDEWEB)

    Malony, Allen D. [Univ. of Oregon, Eugene, OR (United States). Dept. of Computer and Information Science; Wolf, Felix G. [Wilhelm-Johnen-Strasse, Julich (Germany). Forschungszentrum Julich GmbH

    2014-01-31

    The growing number of cores provided by today's high-end computing systems presents substantial challenges to application developers in their pursuit of parallel efficiency. To find the most effective optimization strategy, application developers need insight into the runtime behavior of their code. The University of Oregon (UO) and the Juelich Supercomputing Centre of Forschungszentrum Juelich (FZJ) develop the performance analysis tools TAU and Scalasca, respectively, which allow high-performance computing (HPC) users to collect and analyze relevant performance data – even at very large scales. TAU and Scalasca are considered among the most advanced parallel performance systems available, and are used extensively across HPC centers in the U.S., Germany, and around the world. The TAU and Scalasca groups share a heritage of parallel performance tool research and partnership throughout the past fifteen years. Indeed, the close interactions of the two groups resulted in a cross-fertilization of tool ideas and technologies that pushed TAU and Scalasca to what they are today. It also produced two performance systems with an increasing degree of functional overlap. While each tool has its specific analysis focus, the tools were implementing measurement infrastructures that were substantially similar. Because each tool provides complementary performance analysis, sharing of measurement results is valuable to provide the user with more facets to understand performance behavior. However, each measurement system was producing performance data in different formats, requiring data interoperability tools to be created. A common measurement and instrumentation system was needed to more closely integrate TAU and Scalasca and to avoid the duplication of development and maintenance effort. The PRIMA (Performance Refactoring of Instrumentation, Measurement, and Analysis) project was proposed over three years ago as a joint international effort between UO and FZJ to accomplish

  11. FY 2000 Blue Book: High Performance Computing and Communications: Information Technology Frontiers for a New Millennium

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — As we near the dawn of a new millennium, advances made possible by computing, information, and communications research and development (R and D), once barely...

  12. Brief: Managing computing technology

    International Nuclear Information System (INIS)

    Startzman, R.A.

    1994-01-01

    While computing is applied widely in the production segment of the petroleum industry, its effective application is the primary goal of computing management. Computing technology has changed significantly since the 1950s, when computers first began to influence petroleum technology. The ability to accomplish traditional tasks faster and more economically probably is the most important effect that computing has had on the industry. While speed and lower cost are important, are they enough? Can computing change the basic functions of the industry? When new computing technology is introduced improperly, it can clash with traditional petroleum technology. This paper examines the role of management in merging these technologies.

  13. CUDA/GPU Technology: Parallel Programming For High Performance Scientific Computing

    OpenAIRE

    YUHENDRA; KUZE, Hiroaki; JOSAPHAT, Tetuko Sri Sumantyo

    2009-01-01

    Graphics processing units (GPUs), originally designed for computer video cards, have emerged as the most powerful chip in a high-performance workstation. In high-performance computation, GPUs deliver much more powerful performance than conventional CPUs by means of parallel processing. In 2007, the birth of the Compute Unified Device Architecture (CUDA) and CUDA-enabled GPUs by NVIDIA Corporation brought a revolution in the general purpose GPU a...
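
    The truncated abstract attributes the GPU's advantage to parallel processing. A CUDA kernel is beyond the scope of this listing, but a hypothetical C sketch using OpenMP (a stand-in, not CUDA itself) shows the same data-parallel pattern, one independent loop iteration per processing element, that CUDA maps onto thousands of GPU threads:

        #include <stdio.h>

        #define N 1000000

        /* SAXPY (y = a*x + y) as a data-parallel loop; build with -fopenmp.
           On a GPU, CUDA would assign one thread per index i instead. */
        int main(void) {
            static float x[N], y[N];
            const float a = 2.0f;

            for (int i = 0; i < N; i++) { x[i] = 1.0f; y[i] = 2.0f; }

            #pragma omp parallel for
            for (int i = 0; i < N; i++)
                y[i] = a * x[i] + y[i];

            printf("y[0] = %.1f\n", y[0]);   /* 4.0 */
            return 0;
        }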

  14. The effect of using in computer skills on teachers’ perceived self-efficacy beliefs towards technology integration, attitudes and performance

    Directory of Open Access Journals (Sweden)

    Badrie Mohammad Nour ELDaou

    2016-10-01

    The current study analyzes the relationship between teachers' perceived self-efficacy and attitudes towards integrating technology into classroom teaching, self-evaluation reports, and computer performance results. Pre-post measurement of the Computer Technology Integration Survey (CTIS) (Wang et al., 2004) was used to determine the confidence level of 60 science teachers and 12 mixed-major teachers enrolled at the Lebanese University, Faculty of Education, in the academic year 2011-2012. Pre-post measurement of teachers' attitudes towards using technology was examined using an open and a closed questionnaire. Teachers' performance was measured by means of the results of their ActivInspire projects using active boards after their third practice of training in computer skills and the ActivInspire program. To accumulate data on teachers' self-reports, this study uses Robert Reasoner's five components: feeling of security, feeling of belonging, feeling of identity, feeling of goal, and self-actualization, which teachers used to rate themselves (Reasoner, 1983). The study acknowledged probable impacts of computer training skills on teachers' self-evaluation reports, effectiveness of computer technology skills, and evaluations of self-efficacy attitudes toward technology integration. Pearson correlation revealed a strong relationship, r = 0.99, between perceived self-efficacy towards technology incorporation and teachers' self-evaluation reports. Also, the findings of this research revealed that 82.7% of teachers earned high computer technology scores on their ActivInspire projects and 33.3% received excellent grades on the computer performance test. Recommendations and potential research were discussed.

  15. Summary of researches being performed in the Institute of Mathematics and Computer Science on computer science and information technologies

    Directory of Open Access Journals (Sweden)

    Artiom Alhazov

    2008-07-01

    The evolution of the notion of informatization (which assumes the automation of the majority of human activities by applying computers, computer networks, and information technologies) towards the notion of the Global Information Society (GIS) challenges the determination of new paradigms of society: automation and intellectualization of production, a new level of education and teaching, the formation of new styles of work, active participation in decision making, etc. Assuring the transition to GIS for any society, including that of the Republic of Moldova, requires both special training and the broad application of progressive technologies and information systems. Methodological aspects concerning the impact of GIS creation on the citizen, the economic unit, and the national economy as a whole demand profound study; without a systematic approach to these aspects, the creation of GIS would confront great difficulties. The collective of researchers of the Institute of Mathematics and Computer Science (IMCS) of the Academy of Sciences of Moldova working in the field of computer science constitutes a center of advanced research and is active in those directions of computer science research which facilitate the technologies and applications without which the development of GIS cannot be assured.

  16. The Effect of Using in Computer Skills on Teachers’ Perceived Self-Efficacy Beliefs Towards Technology Integration, Attitudes and Performance

    Directory of Open Access Journals (Sweden)

    Badrie Mohammad Nour EL-Daou

    2016-07-01

    The current study analyzes the relationship between teachers' perceived self-efficacy and attitudes towards integrating technology into classroom teaching, self-evaluation reports, and computer performance results. Pre-post measurement of the Computer Technology Integration Survey (CTIS) (Wang et al., 2004) was used to determine the confidence level of 60 science teachers and 12 mixed-major teachers enrolled at the Lebanese University, Faculty of Education, in the academic year 2011-2012. Pre-post measurement of teachers' attitudes towards using technology was examined using an open and a closed questionnaire. Teachers' performance was measured by means of the results of their ActivInspire projects using active boards after their third practice of training in computer skills and the ActivInspire program. To accumulate data on teachers' self-reports, this study uses Robert Reasoner's five components: feeling of security, feeling of belonging, feeling of identity, feeling of goal, and self-actualization, which teachers used to rate themselves (Reasoner, 1983). The study acknowledged probable impacts of computer training skills on teachers' self-evaluation reports, effectiveness of computer technology skills, and evaluations of self-efficacy attitudes toward technology integration. Pearson correlation revealed a strong relationship, r = 0.99, between perceived self-efficacy towards technology incorporation and teachers' self-evaluation reports. Also, the findings of this research revealed that 82.7% of teachers earned high computer technology scores on their ActivInspire projects and 33.3% received excellent grades on the computer performance test. Recommendations and potential research were discussed.
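
    For reference, the Pearson coefficient reported in this record and the one above (r = 0.99) is the standard sample correlation; with self-efficacy scores x_i and self-evaluation scores y_i for the n teachers, it is

        r = \frac{\sum_{i=1}^{n}(x_i - \bar{x})(y_i - \bar{y})}
                 {\sqrt{\sum_{i=1}^{n}(x_i - \bar{x})^2}\,\sqrt{\sum_{i=1}^{n}(y_i - \bar{y})^2}},

    so a value of 0.99 indicates an almost perfectly linear relationship between the two sets of scores.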

  17. Computer programs for capital cost estimation, lifetime economic performance simulation, and computation of cost indexes for laser fusion and other advanced technology facilities

    International Nuclear Information System (INIS)

    Pendergrass, J.H.

    1978-01-01

    Three FORTRAN programs, CAPITAL, VENTURE, and INDEXER, have been developed to automate computations used in assessing the economic viability of proposed or conceptual laser fusion and other advanced-technology facilities, as well as conventional projects. The types of calculations performed by these programs are, respectively, capital cost estimation, lifetime economic performance simulation, and computation of cost indexes. The codes permit these three topics to be addressed with considerable sophistication commensurate with user requirements and available data
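
    The abstract names the three computations without giving their form. Purely as a hedged illustration of the lifetime-economic-performance arithmetic such codes automate (all figures and the discount rate below are invented for the example), a minimal C sketch discounts a facility's annual costs to present value:

        #include <stdio.h>
        #include <math.h>

        /* Present value of capital plus discounted annual costs.
           All inputs are hypothetical; build with -lm. */
        int main(void) {
            const double capital = 500e6;  /* up-front capital cost, $  */
            const double annual  = 20e6;   /* annual operating cost, $  */
            const double rate    = 0.07;   /* assumed discount rate     */
            const int    years   = 30;     /* assumed facility lifetime */

            double pv = capital;
            for (int t = 1; t <= years; t++)
                pv += annual / pow(1.0 + rate, t);

            printf("present-value lifetime cost: $%.3e\n", pv);
            return 0;
        }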

  18. Earth Science Technology Office's Computational Technologies Project

    Science.gov (United States)

    Fischer, James (Technical Monitor); Merkey, Phillip

    2005-01-01

    This grant supported the effort to characterize the problem domain of the Earth Science Technology Office's Computational Technologies Project, to engage the Beowulf Cluster Computing Community as well as the High Performance Computing Research Community so that we can predict the applicability of said technologies to the scientific community represented by the CT project and formulate long term strategies to provide the computational resources necessary to attain the anticipated scientific objectives of the CT project. Specifically, the goal of the evaluation effort is to use the information gathered over the course of the Round-3 investigations to quantify the trends in scientific expectations, the algorithmic requirements and capabilities of high-performance computers to satisfy this anticipated need.

  19. Air Force Science & Technology Issues & Opportunities Regarding High Performance Embedded Computing

    Science.gov (United States)

    2009-09-23

    Slide-deck excerpt (only fragments of the briefing text are recoverable): applications with a price-performance advantage include large-scale simulations of neuromorphic computing models and GOTCHA wide-area persistent radar video SAR; a neuromorphic example demonstrates robust recognition of occluded text, alongside Gotcha SAR PCID imagery; a notional architecture pairs EDRAM with FPGAs at 16 cores per chip, 50 chips per stack, and 10 x 10 stacks per board.

  1. Computer Technology for Industry

    Science.gov (United States)

    1979-01-01

    In this age of the computer, more and more business firms are automating their operations for increased efficiency in a great variety of jobs, from simple accounting to managing inventories, from precise machining to analyzing complex structures. In the interest of national productivity, NASA is providing assistance both to longtime computer users and newcomers to automated operations. Through a special technology utilization service, NASA saves industry time and money by making available already developed computer programs which have secondary utility. A computer program is essentially a set of instructions which tells the computer how to produce desired information or effect by drawing upon its stored input. Developing a new program from scratch can be costly and time-consuming. Very often, however, a program developed for one purpose can readily be adapted to a totally different application. To help industry take advantage of existing computer technology, NASA operates the Computer Software Management and Information Center (COSMIC®), located at the University of Georgia. COSMIC maintains a large library of computer programs developed for NASA, the Department of Defense, the Department of Energy and other technology-generating agencies of the government. The Center gets a continual flow of software packages, screens them for adaptability to private sector usage, stores them and informs potential customers of their availability.

  2. Technologies and tools for high-performance distributed computing. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Karonis, Nicholas T.

    2000-05-01

    In this project we studied the practical use of the MPI message-passing interface in advanced distributed computing environments. We built on the existing software infrastructure provided by the Globus Toolkit™, the MPICH portable implementation of MPI, and the MPICH-G integration of MPICH with Globus. As a result of this project we have replaced MPICH-G with its successor MPICH-G2, which is also an integration of MPICH with Globus. MPICH-G2 delivers significant improvements in message passing performance when compared to its predecessor MPICH-G, and was based on superior software design principles, resulting in a software base in which it was much easier to make the functional extensions and improvements we did. Using Globus services we replaced the default implementation of MPI's collective operations in MPICH-G2 with more efficient multilevel topology-aware collective operations which, in turn, led to the development of a new timing methodology for broadcasts [8]. MPICH-G2 was extended to include client/server functionality from the MPI-2 standard [23] to facilitate remote visualization applications and, through the use of MPI idioms, MPICH-G2 provided application-level control of quality-of-service parameters as well as application-level discovery of underlying Grid-topology information. Finally, MPICH-G2 was successfully used in a number of applications including an award-winning record-setting computation in numerical relativity. In the sections that follow we describe in detail the accomplishments of this project, we present experimental results quantifying the performance improvements, and conclude with a discussion of our applications experiences. This project resulted in a significant increase in the utility of MPICH-G2.
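
    The abstract refers to MPI's collective operations, which MPICH-G2 reimplemented in multilevel topology-aware form. For orientation, a minimal broadcast in plain standard MPI (not MPICH-G2-specific) looks like this:

        #include <mpi.h>
        #include <stdio.h>

        /* Broadcast one integer from rank 0 to all ranks.
           Build with mpicc; run with e.g. mpiexec -n 4 ./bcast */
        int main(int argc, char **argv) {
            int rank, value = 0;

            MPI_Init(&argc, &argv);
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            if (rank == 0) value = 42;   /* root supplies the data */
            MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);

            printf("rank %d received %d\n", rank, value);
            MPI_Finalize();
            return 0;
        }

    A topology-aware implementation performs the same call as wide-area transfers between sites first and intra-cluster fan-out second, which is what motivated the multilevel collectives described above.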

  3. Computer architecture technology trends

    CERN Document Server

    1991-01-01

    Please note this is a Short Discount publication. This year's edition of Computer Architecture Technology Trends analyses the trends which are taking place in the architecture of computing systems today. Due to the sheer number of different applications to which computers are being applied, there seems no end to the different adoptions which proliferate. There are, however, some underlying trends which appear. Decision makers should be aware of these trends when specifying architectures, particularly for future applications. This report is fully revised and updated and provides insight in

  4. Human Computer Music Performance

    OpenAIRE

    Dannenberg, Roger B.

    2012-01-01

    Human Computer Music Performance (HCMP) is the study of music performance by live human performers and real-time computer-based performers. One goal of HCMP is to create a highly autonomous artificial performer that can fill the role of a human, especially in a popular music setting. This will require advances in automated music listening and understanding, new representations for music, techniques for music synchronization, real-time human-computer communication, music generation, sound synt...

  5. A Longitudinal Examination of the Effects of Computer Self-efficacy Growth on Performance during Technology Training

    Directory of Open Access Journals (Sweden)

    James P. Downey

    2015-02-01

    Technology training in the classroom is critical in preparing students for upper level classes as well as professional careers, especially in fields such as technology. One of the key enablers to this process is computer self-efficacy (CSE), which has an extensive stream of empirical research. Despite this, one of the missing pieces is how CSE actually changes during training, and how such change is related to antecedents and performance outcomes. Measuring change requires repeated data gathering and the use of latent growth modeling, a relatively new statistical technique. This study examines CSE (specifically general CSE, or GCSE) growth over time during training, and how this growth is influenced by anxiety and gender and influences performance, using a semester-long lab course covering three applications. The use of GCSE growth more accurately models how students actually learn in a technology classroom. It provides novel clarity in the interaction of gender, anxiety, GCSE, specific CSEs, and performance during training. The study finds that the relationship between anxiety and self-efficacy decreases over time during training, becoming non-significant; it clarifies the significant role gender plays in influencing GCSE at the start of and during training. It finds GCSE influences application performance only through specific CSEs.
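
    The abstract leans on latent growth modeling without stating its form. In the common linear specification (a sketch of the standard model, not necessarily the exact one estimated here), the GCSE measurement of person i at occasion t is

        y_{it} = \eta_{0i} + \eta_{1i}\lambda_t + \varepsilon_{it},

    where \eta_{0i} is the latent intercept (initial GCSE), \eta_{1i} the latent growth rate, \lambda_t the time score of occasion t, and \varepsilon_{it} a residual; antecedents such as gender and anxiety enter as predictors of \eta_{0i} and \eta_{1i}.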

  6. Trusted Computing Technologies, Intel Trusted Execution Technology.

    Energy Technology Data Exchange (ETDEWEB)

    Guise, Max Joseph; Wendt, Jeremy Daniel

    2011-01-01

    We describe the current state-of-the-art in Trusted Computing Technologies - focusing mainly on Intel's Trusted Execution Technology (TXT). This document is based on existing documentation and tests of two existing TXT-based systems: Intel's Trusted Boot and Invisible Things Lab's Qubes OS. We describe what features are lacking in current implementations, describe what a mature system could provide, and present a list of developments to watch. Critical systems perform operation-critical computations on high importance data. In such systems, the inputs, computation steps, and outputs may be highly sensitive. Sensitive components must be protected from both unauthorized release, and unauthorized alteration: Unauthorized users should not access the sensitive input and sensitive output data, nor be able to alter them; the computation contains intermediate data with the same requirements, and executes algorithms that the unauthorized should not be able to know or alter. Due to various system requirements, such critical systems are frequently built from commercial hardware, employ commercial software, and require network access. These hardware, software, and network system components increase the risk that sensitive input data, computation, and output data may be compromised.

  7. CLOUD COMPUTING TECHNOLOGY TRENDS

    Directory of Open Access Journals (Sweden)

    Cristian IVANUS

    2014-05-01

    Cloud computing has been a tremendous innovation, through which applications became available online, accessible through an Internet connection and using any computing device (computer, smartphone or tablet). According to one of the most recent studies, conducted in 2012 by Everest Group and Cloud Connect, 57% of companies said they already use SaaS (Software as a Service) applications, and 38% reported using standard PaaS (Platform as a Service) tools. However, in most cases, users of these solutions highlighted that one of the main obstacles in the development of this technology is that, in the cloud, the application is not available without an Internet connection. The new challenge for cloud systems has now become the offline, specifically accessing SaaS applications without being connected to the Internet. This topic is directly related to user productivity within companies, as productivity growth is one of the key promises of the transformation to cloud computing applications. The aim of this paper is the presentation of some important aspects related to the offline cloud system and regulatory trends in the European Union (EU).

  8. A Longitudinal Examination of the Effects of Computer Self-Efficacy Growth on Performance during Technology Training

    Science.gov (United States)

    Downey, James P.; Kher, Hemant V.

    2015-01-01

    Technology training in the classroom is critical in preparing students for upper level classes as well as professional careers, especially in fields such as technology. One of the key enablers to this process is computer self-efficacy (CSE), which has an extensive stream of empirical research. Despite this, one of the missing pieces is how CSE…

  9. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  10. Center for Advanced Computational Technology

    Science.gov (United States)

    Noor, Ahmed K.

    2000-01-01

    The Center for Advanced Computational Technology (ACT) was established to serve as a focal point for diverse research activities pertaining to application of advanced computational technology to future aerospace systems. These activities include the use of numerical simulations, artificial intelligence methods, multimedia and synthetic environments, and computational intelligence, in the modeling, analysis, sensitivity studies, optimization, design and operation of future aerospace systems. The Center is located at NASA Langley and is an integral part of the School of Engineering and Applied Science of the University of Virginia. The Center has four specific objectives: 1) conduct innovative research on applications of advanced computational technology to aerospace systems; 2) act as pathfinder by demonstrating to the research community what can be done (high-potential, high-risk research); 3) help in identifying future directions of research in support of the aeronautical and space missions of the twenty-first century; and 4) help in the rapid transfer of research results to industry and in broadening awareness among researchers and engineers of the state-of-the-art in applications of advanced computational technology to the analysis, design prototyping and operations of aerospace and other high-performance engineering systems. In addition to research, Center activities include helping in the planning and coordination of the activities of a multi-center team of NASA and JPL researchers who are developing an intelligent synthesis environment for future aerospace systems; organizing workshops and national symposia; as well as writing state-of-the-art monographs and NASA special publications on timely topics.

  11. Computational technologies a first course

    CERN Document Server

    Borisov, Victor S.; Grigoriev, Aleksander V.; Kolesov, Alexandr E.; Popov, Petr A.; Sirditov, Ivan K.; Vabishchevich, Petr N.; Vasilieva, Maria V.; Zakharov, Petr E.

    2015-01-01

    In this book we describe the basic elements of present computational technologies that use the algorithmic languages C/C++. The emphasis is on GNU compilers and libraries, FOSS for the solution of computational mathematics problems and visualization of the obtained data. Many examples illustrate the basic features of computational technologies.
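
    In the spirit of the book's C/C++ and GNU-toolchain orientation, a small illustrative example of the kind it covers (ours, not taken from the book): trapezoidal integration of f(x) = x^2 over [0, 1], built with gcc:

        #include <stdio.h>

        /* Trapezoidal rule; the exact answer is 1/3.
           Build: gcc -O2 trap.c -o trap */
        static double f(double x) { return x * x; }

        int main(void) {
            const int n = 1000;            /* number of subintervals */
            const double a = 0.0, b = 1.0;
            const double h = (b - a) / n;
            double sum = 0.5 * (f(a) + f(b));

            for (int i = 1; i < n; i++)
                sum += f(a + i * h);

            printf("integral = %.6f\n", sum * h);   /* ~0.333333 */
            return 0;
        }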

  12. High Performance Computing Multicast

    Science.gov (United States)

    2012-02-01

    Report front-matter excerpt (only fragments are recoverable): a citation to “A History of the Virtual Synchrony Replication Model,” in Replication: Theory and Practice, Charron-Bost, B., Pedone, F., and Schiper, A. (Eds.), and an acronym glossary including IP/IPv4 (Internet Protocol, version 4.0), IPMC (Internet Protocol Multicast), LAN (Local Area Network), MCMD (Dr. Multicast), and MPI.

  13. Computer Technology Directory.

    Science.gov (United States)

    Exceptional Parent, 1990

    1990-01-01

    This directory lists approximately 300 commercial vendors that offer computer hardware, software, and communication aids for children with disabilities. The company listings indicate computer compatibility and specific disabilities served by their products. (JDD)

  14. Understanding computer and information technology

    International Nuclear Information System (INIS)

    Choi, Yun Cheol; Han, Tack Don; Im, Sun Beom

    2009-01-01

    This book consists of four parts. The first part describes IT technology and the information society: understanding of computer systems, the constitution of software and information systems, and the application of software. The second part covers computer networks, information and communication, applications, and Internet services. The third part contains applications and multimedia, applications of mobile computing, ubiquitous computing and ubiquitous environments, and computers and digital life. The last part explains information security and the ethics of the information-oriented society, the information industry and IT ventures, digital contents technology and industry, and the future and development of the information-oriented society.

  15. Performing stencil computations

    Energy Technology Data Exchange (ETDEWEB)

    Donofrio, David

    2018-01-16

    A method and apparatus for performing stencil computations efficiently are disclosed. In one embodiment, a processor receives an offset, and in response, retrieves a value from a memory via a single instruction, where the retrieving comprises: identifying, based on the offset, one of a plurality of registers of the processor; loading an address stored in the identified register; and retrieving from the memory the value at the address.
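
    The patent abstract describes the register-based addressing mechanism rather than the stencil itself. For context, a stencil computation in its simplest form updates each point from neighbors at fixed offsets, as in this hypothetical one-dimensional three-point average:

        #include <stdio.h>

        #define N 8

        /* Three-point stencil: each interior point becomes the average
           of itself and its neighbors at the fixed offsets -1 and +1. */
        int main(void) {
            const double in[N] = {0, 1, 2, 3, 4, 5, 6, 7};
            double out[N] = {0};

            for (int i = 1; i < N - 1; i++)
                out[i] = (in[i - 1] + in[i] + in[i + 1]) / 3.0;

            for (int i = 0; i < N; i++)
                printf("%.2f ", out[i]);
            printf("\n");
            return 0;
        }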

  16. Computers: Educational Technology Paradox?

    Science.gov (United States)

    Hashim, Hajah Rugayah Hj.; Mustapha, Wan Narita

    2005-01-01

    As we move further into the new millennium, the need to involve and adapt learners with new technology has been the main aim of many institutions of higher learning in Malaysia. The involvement of the government in huge technology-based projects like the Multimedia Super Corridor Highway (MSC) and one of its flagships, the Smart Schools have…

  17. Computer Technology for Industry

    Science.gov (United States)

    1982-01-01

    Shell Oil Company used a COSMIC program called VISCEL to ensure the accuracy of the company's new computer code for analyzing polymers and chemical compounds. Shell reported that there were no other programs available that could provide the necessary calculations. Shell produces chemicals for plastic products used in the manufacture of automobiles, housewares, appliances, film, textiles, electronic equipment and furniture.

  18. Ubiquitous Computing Technologies in Education

    Science.gov (United States)

    Hwang, Gwo-Jen; Wu, Ting-Ting; Chen, Yen-Jung

    2007-01-01

    The prosperous development of wireless communication and sensor technologies has attracted the attention of researchers from both computer and education fields. Various investigations have been made for applying the new technologies to education purposes, such that more active and adaptive learning activities can be conducted in the real world.…

  19. CPE--A New Perspective: The Impact of the Technology Revolution. Proceedings of the Computer Performance Evaluation Users Group Meeting (19th, San Francisco, California, October 25-28, 1983). Final Report. Reports on Computer Science and Technology.

    Science.gov (United States)

    Mobray, Deborah, Ed.

    Papers on local area networks (LANs), modelling techniques, software improvement, capacity planning, software engineering, microcomputers and end user computing, cost accounting and chargeback, configuration and performance management, and benchmarking presented at this conference include: (1) "Theoretical Performance Analysis of Virtual…

  20. Information technology and computational physics

    CERN Document Server

    Kóczy, László; Mesiar, Radko; Kacprzyk, Janusz

    2017-01-01

    A broad spectrum of modern Information Technology (IT) tools, techniques, main developments and still open challenges is presented. Emphasis is on new research directions in various fields of science and technology that are related to data analysis, data mining, knowledge discovery, information retrieval, clustering and classification, decision making and decision support, control, computational mathematics and physics, to name a few. Applications in many relevant fields are presented, notably in telecommunication, social networks, recommender systems, fault detection, robotics, image analysis and recognition, electronics, etc. The methods used by the authors range from high level formal mathematical tools and techniques, through algorithmic and computational tools, to modern metaheuristics.

  1. Optical Computers and Space Technology

    Science.gov (United States)

    Abdeldayem, Hossin A.; Frazier, Donald O.; Penn, Benjamin; Paley, Mark S.; Witherow, William K.; Banks, Curtis; Hicks, Rosilen; Shields, Angela

    1995-01-01

    The rapidly increasing demand for greater speed and efficiency on the information superhighway requires significant improvements over conventional electronic logic circuits. Optical interconnections and optical integrated circuits are strong candidates to provide the way out of the extreme limitations imposed on the growth of speed and complexity of today's computations by conventional electronic logic circuits. The new optical technology has increased the demand for high quality optical materials. NASA's recent involvement in processing optical materials in space has demonstrated that a new and unique class of high quality optical materials is processible in a microgravity environment. Microgravity processing can induce improved order in these materials and could have a significant impact on the development of optical computers. We will discuss NASA's role in processing these materials and report on some of the associated nonlinear optical properties which are quite useful for optical computer technology.

  2. Computer technology and computer programming research and strategies

    CERN Document Server

    Antonakos, James L

    2011-01-01

    Covering a broad range of new topics in computer technology and programming, this volume discusses encryption techniques, SQL generation, Web 2.0 technologies, and visual sensor networks. It also examines reconfigurable computing, video streaming, animation techniques, and more. Readers will learn about an educational tool and game to help students learn computer programming. The book also explores a new medical technology paradigm centered on wireless technology and cloud computing designed to overcome the problems of increasing health technology costs.

  3. Cooperation, Technology, and Performance: A Case Study.

    Science.gov (United States)

    Cavanagh, Thomas; Dickenson, Sabrina; Brandt, Suzanne

    1999-01-01

    Describes the CTP (Cooperation, Technology, and Performance) model and explains how it is used by the Department of Veterans Affairs-Veteran's Benefit Administration (VBA) for training. Discusses task analysis; computer-based training; cooperative-based learning environments; technology-based learning; performance-assessment methods; courseware…

  4. Infinite possibilities: Computational structures technology

    Science.gov (United States)

    Beam, Sherilee F.

    1994-12-01

    Computational Fluid Dynamics (or CFD) methods are very familiar to the research community. Even the general public has had some exposure to CFD images, primarily through the news media. However, very little attention has been paid to CST--Computational Structures Technology. Yet, no important design can be completed without it. During the first half of this century, researchers only dreamed of designing and building structures on a computer. Today their dreams have become practical realities as computational methods are used in all phases of design, fabrication and testing of engineering systems. Increasingly complex structures can now be built in even shorter periods of time. Over the past four decades, computer technology has been developing, and early finite element methods have grown from small in-house programs to numerous commercial software programs. When coupled with advanced computing systems, they help engineers make dramatic leaps in designing and testing concepts. The goals of CST include: predicting how a structure will behave under actual operating conditions; designing and complementing other experiments conducted on a structure; investigating microstructural damage or chaotic, unpredictable behavior; helping material developers in improving material systems; and being a useful tool in design systems optimization and sensitivity techniques. Applying CST to a structure problem requires five steps: (1) observe the specific problem; (2) develop a computational model for numerical simulation; (3) develop and assemble software and hardware for running the codes; (4) post-process and interpret the results; and (5) use the model to analyze and design the actual structure. Researchers in both industry and academia continue to make significant contributions to advance this technology with improvements in software, collaborative computing environments and supercomputing systems. As these environments and systems evolve, computational structures technology will

  5. A Technique for Continuous Evaluation of Student Performance in Two Different Domains: Structural Engineering and Computer Information Technology

    Science.gov (United States)

    Desai, Niranjan; Stefanek, George

    2017-01-01

    Student access to the Internet has made it much easier for students to find solutions to traditional homework problems online and thereby has made this traditional assessment method of monitoring student progress and gauging the assimilation of knowledge in engineering and technology courses less reliable. This paper presents an in-class,…

  6. Computer technology forecasting at the National Laboratories

    International Nuclear Information System (INIS)

    Peskin, A.M.

    1980-01-01

    The DOE Office of ADP Management organized a group of scientists and computer professionals, mostly from their own national laboratories, to prepare an annually updated technology forecast to accompany the Department's five-year ADP Plan. The activities of the task force were originally reported in an informal presentation made at the ACM Conference in 1978. This presentation represents an update of that report. It also deals with the process of applying the results obtained at a particular computing center, Brookhaven National Laboratory. Computer technology forecasting is a difficult and hazardous endeavor, but it can reap considerable advantage. The forecast performed on an industry-wide basis can be applied to the particular needs of a given installation, and thus give installation managers considerable guidance in planning. A beneficial side effect of this process is that it forces installation managers, who might otherwise tend to preoccupy themselves with immediate problems, to focus on longer term goals and means to their ends

  7. Current Capabilities at SNL for the Integration of Small Modular Reactors onto Smart Microgrids Using Sandia's Smart Microgrid Technology High Performance Computing and Advanced Manufacturing.

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez, Salvador B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-05-01

    Smart grids are a crucial component for enabling the nation’s future energy needs, as part of a modernization effort led by the Department of Energy. Smart grids and smart microgrids are being considered in niche applications, and as part of a comprehensive energy strategy to help manage the nation’s growing energy demands, for critical infrastructures, military installations, small rural communities, and large populations with limited water supplies. As part of a far-reaching strategic initiative, Sandia National Laboratories (SNL) presents herein a unique, three-pronged approach to integrate small modular reactors (SMRs) into microgrids, with the goal of providing economically-competitive, reliable, and secure energy to meet the nation’s needs. SNL’s triad methodology involves an innovative blend of smart microgrid technology, high performance computing (HPC), and advanced manufacturing (AM). In this report, Sandia’s current capabilities in those areas are summarized, as well as paths forward that will enable DOE to achieve its energy goals. In the area of smart grid/microgrid technology, Sandia’s current computational capabilities can model the entire grid, including temporal aspects and cyber security issues. Our tools include system development, integration, testing and evaluation, monitoring, and sustainment.

  8. AHPCRC - Army High Performance Computing Research Center

    Science.gov (United States)

    2010-01-01

    Report excerpt (only fragments are recoverable): of particular interest is the ability of a distributed jamming network (DJN) to jam signals in all or part of a sensor or communications network. The remaining recovered text is front-matter: a faculty listing (Friedrich (Fritz) Prinz, Finmeccanica Professor of Engineering and Robert Bosch Chair, Department of Engineering) and center contact information (Barbara Bryan, AHPCRC Research and Outreach Manager, HPTi; www.ahpcrc.org).

  9. Evolution of Cloud Computing and Enabling Technologies

    OpenAIRE

    Rabi Prasad Padhy; Manas Ranjan Patra

    2012-01-01

    We present an overview of the history of forecasting software over the past 25 years, concentrating especially on the interaction between computing and enabling technologies, from mainframe computing to cloud computing, the latest of these. To deliver the vision of the various computing models, this paper briefly explains the architecture, characteristics, advantages, applications and issues of computing models such as PC computing and Internet computing, and related technologie...

  10. Computer-Related Task Performance

    DEFF Research Database (Denmark)

    Longstreet, Phil; Xiao, Xiao; Sarker, Saonee

    2016-01-01

    The existing information system (IS) literature has acknowledged computer self-efficacy (CSE) as an important factor contributing to enhancements in computer-related task performance. However, the empirical results of CSE on performance have not always been consistent, and increasing an individual's CSE is often a cumbersome process. Thus, we introduce the theoretical concept of self-prophecy (SP) and examine how this social influence strategy can be used to improve computer-related task performance. Two experiments are conducted to examine the influence of SP on task performance. Results show that SP and CSE interact to influence performance. Implications are then discussed in terms of organizations' ability to increase performance.

  11. Computer based training: Technology and trends

    International Nuclear Information System (INIS)

    O'Neal, A.F.

    1986-01-01

    Computer Based Training (CBT) offers great potential for revolutionizing the training environment. Tremendous advances in computer cost performance, instructional design science, and authoring systems have combined to put CBT within the reach of all. The ability of today's CBT systems to implement powerful training strategies, simulate complex processes and systems, and individualize and control the training process make it certain that CBT will now, at long last, live up to its potential. This paper reviews the major technologies and trends involved and offers some suggestions for getting started in CBT

  12. Inclusive vision for high performance computing at the CSIR

    CSIR Research Space (South Africa)

    Gazendam, A

    2006-02-01

    ...and computationally intensive applications. A number of different technologies and standards were identified as core to the open and distributed high-performance infrastructure envisaged...

  13. Philosophy of computing and information technology

    OpenAIRE

    Brey, Philip A.E.; Soraker, Johnny; Meijers, A.

    2009-01-01

    Philosophy has been described as having taken a “computational turn,” referring to the ways in which computers and information technology throw new light upon traditional philosophical issues, provide new tools and concepts for philosophical reasoning, and pose theoretical and practical questions that cannot readily be approached within traditional philosophical frameworks. As such, computer technology is arguably the technology that has had the most profound impact on philosophy. Philosopher...

  14. Employee Resistance to Computer Technology.

    Science.gov (United States)

    Ewert, Alan

    1984-01-01

    The introduction of computers to the workplace may cause employee stress. Aggressive, protective, and avoidance behaviors are forms of staff resistance. The development of good training programs will enhance productivity. Suggestions for evaluating computer systems are offered. (DF)

  15. Technology Performance Level Assessment Methodology.

    Energy Technology Data Exchange (ETDEWEB)

    Roberts, Jesse D.; Bull, Diana L; Malins, Robert Joseph; Costello, Ronan Patrick; Aurelien Babarit; Kim Nielsen; Claudio Bittencourt Ferreira; Ben Kennedy; Kathryn Dykes; Jochem Weber

    2017-04-01

    Technology performance level (TPL) assessments can be applied at all technology development stages and associated technology readiness levels (TRLs). Even, and particularly, at low TRLs, the TPL assessment is very effective, as it holistically considers a wide range of wave energy converter (WEC) attributes that determine the techno-economic performance potential of the WEC farm when fully developed for commercial operation. The TPL assessment also highlights potential showstoppers at the earliest possible stage of the WEC technology development. Hence, the TPL assessment identifies the technology-independent “performance requirements.” In order to achieve a successful solution, the entirety of the performance requirements within the TPL must be considered because, in the end, all the stakeholder needs must be achieved. The basis for performing a TPL assessment comes from the information provided in a dedicated format, the Technical Submission Form (TSF). The TSF requests information from the WEC developer that is required to answer the questions posed in the TPL assessment document.

  16. Cloud Computing for Maintenance Performance Improvement

    OpenAIRE

    Kour, Ravdeep; Karim, Ramin; Parida, Aditya

    2013-01-01

    Cloud Computing is an emerging research area. It can be utilised for acquiring effective and efficient information logistics. This paper uses cloud-based technology for the establishment of information logistics for a railway system, which requires information based on data from different data sources (e.g. railway maintenance, railway operation, and railway business data). In order to improve the performance of the maintenance process, relevant data from various sources need to be acquired, f...

  17. Improving engineers' performance with computers

    International Nuclear Information System (INIS)

    Purvis, E.E. III

    1984-01-01

    The problem addressed is how to improve the performance of engineers in the design, operation, and maintenance of nuclear power plants. The application of computer science to this problem offers a challenge in maximizing the use of developments outside the nuclear industry and setting priorities to address the most fruitful areas first. Areas of potential benefit include database management through design, analysis, procurement, construction, operation, and maintenance; cost, schedule, and interface control and planning; and quality engineering on specifications, inspection, and training.

  18. Technological Capability and Firm Performance

    Directory of Open Access Journals (Sweden)

    Fernanda Maciel Reichert

    2014-08-01

    This research aims to investigate the relationship between investments in technological capability and economic performance in Brazilian firms. Based on economic development theory and on the history of developed countries, this relationship is assumed to be positive. Through key indicators, 133 Brazilian firms have been analyzed. Given the economic circumstances of an emerging economy, in which the majority of businesses are based primarily in low- and medium-low-technology industries, it is not possible to affirm the existence of a positive relationship between technological capability and firm performance; other elements allow firms to achieve such results. Firms in lower-technology-intensity industries performed above average on the economic performance indicators while, conversely, investing below average in technological capability. These findings do not diminish the merit of the firms' and the country's success. They in fact confirm a historical tradition of a country that concentrates its efforts on basic industries.

  19. Future Computing Technology (2/3)

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Computing of the future will be affected by a number of fundamental technologies in development today, many of which are already on the way to becoming commercialized. In this series of lectures, we will discuss hardware and software development that will become mainstream in the timeframe of a few years and how they will shape or change the computing landscape - commercial and personal alike. Topics range from processor and memory aspects, programming models and the limits of artificial intelligence, up to end-user interaction with wearables or e-textiles. We discuss the impact of these technologies on the art of programming, the data centres of the future and daily life. On the second day of the Future Computing Technology series, we will talk about ubiquitous computing. From smart watches through mobile devices to virtual reality, computing devices surround us, and innovative new technologies are introduced every day. We will briefly explore how this propagation might continue, how computers can take ove...

  20. The evolution of computer technology

    CERN Document Server

    Kamar, Haq

    2018-01-01

    Today it seems that computers occupy every single space in life. This book traces the evolution of computers from their humble beginnings as simple calculators up to modern-day jack-of-all-trades devices like the iPhone. Readers will learn how computers evolved from humongous military-issue refrigerators to the spiffy, delicate, and intriguing devices that many modern people feel they can't live without anymore. Readers will also discover the historical significance of computers, and their pivotal roles in World War II, the Space Race, and the emergence of modern Western powers.

  1. Advanced fuel technology and performance

    International Nuclear Information System (INIS)

    1985-10-01

    The purpose of the Advisory Group Meeting on Advanced Fuel Technology and Performance was to review the experience of advanced fuel fabrication technology, its performance, peculiarities of the back-end of the nuclear fuel cycle with regard to all types of reactors and to outline the future trends. As a result of the meeting recommendations were made for the future conduct of work on advanced fuel technology and performance. A separate abstract was prepared for each of the 20 papers in this issue

  2. Is Computer Science Compatible with Technological Literacy?

    Science.gov (United States)

    Buckler, Chris; Koperski, Kevin; Loveland, Thomas R.

    2018-01-01

    Although technology education has evolved over time, and pressure has increased to infuse more engineering principles and increase links to STEM (science, technology, engineering, and mathematics) initiatives, there has never been an official alignment between technology and engineering education and computer science. There is movement at the federal level…

  3. Education & Technology: Reflections on Computing in Classrooms.

    Science.gov (United States)

    Fisher, Charles, Ed.; Dwyer, David C., Ed.; Yocam, Keith, Ed.

    This volume examines learning in the age of technology, describes changing practices in technology-rich classrooms, and proposes new ways to support teachers as they incorporate technology into their work. It commemorates the eleventh anniversary of the Apple Classrooms of Tomorrow (ACOT) Project, when Apple Computer, Inc., in partnership with a…

  4. Recent technology on steam turbine performance improvement

    International Nuclear Information System (INIS)

    Hirada, M.; Watanabe, E.; Tashiro, H.

    1991-01-01

    Continuous efforts have been made to improve turbine efficiency by applying the latest aerodynamic technologies to meet energy-saving requirements. In recent years, there has been considerable improvement in the field of computational fluid dynamics, and these new technologies have been applied to new blade designs for HP, IP and LP turbines. Experimental verification of the new blade in turbine tests has established the overall turbine performance improvement and the excellent correspondence of the flow pattern to predicted values. This paper introduces the latest design technologies for the newly developed high-efficiency blade and the verification test results.

  5. Computing, Information and Communications Technology (CICT) Website

    Science.gov (United States)

    Hardman, John; Tu, Eugene (Technical Monitor)

    2002-01-01

    The Computing, Information and Communications Technology Program (CICT) was established in 2001 to ensure NASA's Continuing leadership in emerging technologies. It is a coordinated, Agency-wide effort to develop and deploy key enabling technologies for a broad range of mission-critical tasks. The NASA CICT program is designed to address Agency-specific computing, information, and communications technology requirements beyond the projected capabilities of commercially available solutions. The areas of technical focus have been chosen for their impact on NASA's missions, their national importance, and the technical challenge they provide to the Program. In order to meet its objectives, the CICT Program is organized into the following four technology focused projects: 1) Computing, Networking and Information Systems (CNIS); 2) Intelligent Systems (IS); 3) Space Communications (SC); 4) Information Technology Strategic Research (ITSR).

  6. High-performance computing in seismology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  7. Computed radiography systems performance evaluation

    International Nuclear Information System (INIS)

    Xavier, Clarice C.; Nersissian, Denise Y.; Furquim, Tania A.C.

    2009-01-01

    The performance of a computed radiography system was evaluated according to AAPM Report No. 93. The evaluation tests proposed by the publication were performed, and the following nonconformities were found: imaging plate (IP) dark noise, which compromises the clinical image acquired using the IP; an uncalibrated exposure indicator, which can cause underexposure of the IP; nonlinearity of the system response, which causes overexposure; a resolution limit below that declared by the manufacturer and uncalibrated erasure thoroughness, impairing the visualization of structures; a Moiré pattern visible in the grid response; and IP throughput above that specified by the manufacturer. These nonconformities indicate that a digital imaging system's lack of calibration can cause an increase in dose in order to resolve image problems. (author)

  8. (CICT) Computing, Information, and Communications Technology Overview

    Science.gov (United States)

    VanDalsem, William R.

    2003-01-01

    The goal of the Computing, Information, and Communications Technology (CICT) program is to enable NASA's Scientific Research, Space Exploration, and Aerospace Technology Missions with greater mission assurance, for less cost, with increased science return through the development and use of advanced computing, information and communications technologies. This viewgraph presentation includes diagrams of how the political guidance behind CICT is structured. The presentation profiles each part of the NASA Mission in detail, and relates the Mission to the activities of CICT. CICT's Integrated Capability Goal is illustrated, and hypothetical missions which could be enabled by CICT are profiled. CICT technology development is profiled.

  9. High performance fuel technology development

    Energy Technology Data Exchange (ETDEWEB)

    Koon, Yang Hyun; Kim, Keon Sik; Park, Jeong Yong; Yang, Yong Sik; In, Wang Kee; Kim, Hyung Kyu [KAERI, Daejeon (Korea, Republic of)

    2012-01-15

    - Development of High Plasticity and Annular Pellet: development of strong candidates for ultra-high-burn-up fuel pellets as a PCI remedy; development of fabrication technology for annular fuel pellets.
    - Development of High Performance Cladding Materials: irradiation testing of HANA claddings in the Halden research reactor and evaluation of their in-pile performance; development of the final candidates for the next-generation cladding materials; development of manufacturing technology for the dual-cooled fuel cladding tubes.
    - Irradiated Fuel Performance Evaluation Technology Development: development of a performance analysis code system for the dual-cooled fuel; development of fuel performance-proving technology.
    - Feasibility Studies on the Dual-Cooled Annular Fuel Core: analysis of the properties of a reactor core with dual-cooled fuel; feasibility evaluation of the dual-cooled fuel core.
    - Development of Design Technology for the Dual-Cooled Fuel Structure: definition of technical issues and invention of concepts for the dual-cooled fuel structure; basic design and development of main structure components for dual-cooled fuel; basic design of a dual-cooled fuel rod.

  10. [Computer technologies in teaching pathological anatomy].

    Science.gov (United States)

    Ponomarev, A B; Fedorov, D N

    2015-01-01

    The paper presents the experience of using personal computers at the Academician A.L. Strukov Department of Pathological Anatomy for more than 20 years. It shows the objective necessity of introducing computer technologies at all stages of acquiring skills in anatomical pathology, including lectures, students' independent work, knowledge testing, etc.

  11. Computer science research and technology volume 3

    CERN Document Server

    Bauer, Janice P

    2011-01-01

    This book presents leading-edge research from across the globe in the field of computer science research, technology and applications. Each contribution has been carefully selected for inclusion based on the significance of the research to this fast-moving and diverse field. Some topics included are: network topology; agile programming; virtualization; and reconfigurable computing.

  12. Philosophy of computing and information technology

    NARCIS (Netherlands)

    Brey, Philip A.E.; Soraker, Johnny; Meijers, A.

    2009-01-01

    Philosophy has been described as having taken a “computational turn,” referring to the ways in which computers and information technology throw new light upon traditional philosophical issues, provide new tools and concepts for philosophical reasoning, and pose theoretical and practical questions

  13. A Revolution in Information Technology - Cloud Computing

    OpenAIRE

    Divya BHATT

    2012-01-01

    What is the Internet? It is a collection of "interconnected networks" represented as a Cloud in network diagrams, and Cloud Computing is a metaphor for certain parts of the Internet. IT enterprises and individuals are searching for a way to reduce the costs of computation, storage, and communication. Cloud Computing is an Internet-based technology providing "On-Demand" solutions for addressing these scenarios that should be flexible enough for adaptation and responsive to requirements. The hug...

  14. HIGH PERFORMANCE PHOTOGRAMMETRIC PROCESSING ON COMPUTER CLUSTERS

    Directory of Open Access Journals (Sweden)

    V. N. Adrov

    2012-07-01

    Most CPU-consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent chunks of data as input and produce independent chunks as output; this independence comes from the nature of the algorithms, since images, stereopairs, or small image-block parts can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human intervention. Photogrammetric workstations can perform tie-point measurements, DTM calculations, orthophoto construction, mosaicing, and many other service operations in parallel using distributed calculations. Distributed calculations save time, reducing several days of computation to several hours. Modern trends in computer technology show an increasing number of CPU cores in workstations and increasing speeds in local networks, and, as a result, falling prices for supercomputers or computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in DPW is usually targeted at interactive work with a limited number of CPU cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be the limited LAN throughput and storage performance, since huge amounts of large raster images must be processed.
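
    The record's central claim, that per-image or per-tile work is independent and therefore embarrassingly parallel, can be illustrated with a minimal sketch (not from the paper; orthorectify_tile is a hypothetical placeholder for any CPU-heavy per-tile stage):

        # Illustrative sketch: embarrassingly parallel processing of
        # independent image tiles across all available CPU cores.
        from multiprocessing import Pool

        def orthorectify_tile(tile_id):
            # Placeholder for CPU-heavy, independent per-tile work
            # (tie-point measurement, DTM cells, orthophoto patches, ...).
            return tile_id, f"result-{tile_id}"

        if __name__ == "__main__":
            tiles = range(1000)          # independent inputs
            with Pool() as pool:         # one worker per CPU core by default
                results = dict(pool.map(orthorectify_tile, tiles))
            print(len(results), "tiles processed")

    Because no tile depends on another, the same pattern scales from the cores of one workstation to the nodes of a cluster, which is exactly the property the abstract exploits.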

  15. Research on Key Technologies of Cloud Computing

    Science.gov (United States)

    Zhang, Shufen; Yan, Hongcan; Chen, Xuebin

    With the development of multi-core processors, virtualization, distributed storage, broadband Internet, and automatic management, a new computing mode named cloud computing has emerged. It distributes computation tasks over a resource pool consisting of massive numbers of computers, so that application systems can obtain computing power, storage space, and software services according to their demand. It can concentrate all the computing resources and manage them automatically through software without human intervention. This frees application providers from tedious details and lets them concentrate on their business, which is advantageous for innovation and cost reduction. The ultimate goal of cloud computing is to provide computation, services, and applications as a public facility, so that people can use computer resources just as they use water, electricity, gas, and telephone service. Currently, the understanding of cloud computing is still developing and changing, and cloud computing has no unanimous definition. This paper describes the three main service forms of cloud computing - SaaS, PaaS, and IaaS - compares the definitions of cloud computing given by Google, Amazon, IBM, and other companies, summarizes the basic characteristics of cloud computing, and emphasizes key technologies such as data storage, data management, virtualization, and programming models.

  16. MUSICAL-COMPUTER TECHNOLOGY: THE LABORATORY

    Directory of Open Access Journals (Sweden)

    Gorbunova Irina B.

    2012-12-01

    The article deals with music computer technology in the educational system, using the example of the Educational and Methodical Laboratory Music & Computer Technologies at the Herzen State Pedagogical University of Russia, St. Petersburg. This interdisciplinary field of professional activity relates to the creation and application of specialized music software and hardware tools, and to knowledge in music and informatics. The concept of music computer education in the preparation of music teachers is realized through basic educational programs of vocational training, supplementary education, professional development of teachers, and methodical support via the Internet. In addition, the laboratory Music & Computer Technologies is engaged in scientific activity: above all, specialized research in the field of pedagogy, and international conferences.

  17. High Performance Spaceflight Computing (HPSC)

    Data.gov (United States)

    National Aeronautics and Space Administration — Space-based computing has not kept up with the needs of current and future NASA missions. We are developing a next-generation flight computing system that addresses...

  18. RISC Processors and High Performance Computing

    Science.gov (United States)

    Bailey, David H.; Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    This tutorial will discuss the top five RISC microprocessors and the parallel systems in which they are used. It will provide a unique cross-machine comparison not available elsewhere. The effective performance of these processors will be compared by citing standard benchmarks in the context of real applications. The latest NAS Parallel Benchmarks, both absolute performance and performance per dollar, will be listed. The next generation of the NPB will be described. The tutorial will conclude with a discussion of future directions in the field. Technology Transfer Considerations: All of these computer systems are commercially available internationally. Information about these processors is available in the public domain, mostly from the vendors themselves. The NAS Parallel Benchmarks and their results have been previously approved numerous times for public release, beginning back in 1991.

  19. High performance computing in Windows Azure cloud

    OpenAIRE

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility, and lower maintenance costs have contributed essentially to the growing popularity of cloud computing in all spheres of life, especially in business. In fact, cloud computing offers even more than this. Using virtual computing clusters, a runtime environment for high performance computing can also be implemented efficiently in a cloud. There are many advantages but also some disadvantages of cloud computing, some ...

  20. Integrating Human Performance and Technology

    International Nuclear Information System (INIS)

    Farris, Ronald K.; Medema, Heather

    2012-01-01

    Human error is a significant factor in the cause and/or complication of events that occur in the commercial nuclear industry. In recent years, great gains have been made using Human Performance (HU) tools focused on targeting individual behaviors. However, the cost of improving HU is growing and resistance to add yet another HU tool certainly exists, particularly for those tools that increase the paperwork for operations. Improvements in HU that are the result of leveraging existing technology, such as hand-held mobile technologies, have the potential to reduce human error in controlling system configurations, safety tag-outs, and other verifications. Operator rounds, valve lineup verifications, containment closure verifications, safety and equipment protection, and system tagging can be supported by field-deployable wireless technologies. These devices can also support the availability of critical component data in the main control room and other locations. This research pilot project reviewing wireless hand-held technology is part of the Light Water Reactor Sustainability Program (LWRSP), a research and development (R and D) program sponsored by the U. S. Department of Energy (DOE). The project is being performed in close collaboration with industry R and D programs to provide the technical foundations for licensing, and managing the long-term, safe, and economical operation of current nuclear power plants. The LWRSP vision is to develop technologies and other solutions that can improve the reliability, sustain the safety, and extend the life of the current nuclear reactor fleet. (author)

  1. Integrating Human Performance and Technology

    Energy Technology Data Exchange (ETDEWEB)

    Ronald K. Farris; Heather Medema

    2012-05-01

    Human error is a significant factor in the cause and/or complication of events that occur in the commercial nuclear industry. In recent years, great gains have been made using Human Performance (HU) tools focused on targeting individual behaviors. However, the cost of improving HU is growing and resistance to add yet another HU tool certainly exists, particularly for those tools that increase the paperwork for operations. Improvements in HU that are the result of leveraging existing technology, such as hand-held mobile technologies, have the potential to reduce human error in controlling system configurations, safety tag-outs, and other verifications. Operator rounds, valve line-up verifications, containment closure verifications, safety & equipment protection, and system tagging can be supported by field-deployable wireless technologies. These devices can also support the availability of critical component data in the main control room and other locations. This research pilot project reviewing wireless hand-held technology is part of the Light Water Reactor Sustainability Program (LWRSP), a research and development (R&D) program sponsored by the U. S. Department of Energy (DOE). The project is being performed in close collaboration with industry R&D programs to provide the technical foundations for licensing, and managing the long-term, safe, and economical operation of current nuclear power plants. The LWRSP vision is to develop technologies and other solutions that can improve the reliability, sustain the safety, and extend the life of the current nuclear reactor fleet.

  2. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  3. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  4. Future Computing Technology (3/3)

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Computing of the future will be affected by a number of fundamental technologies in development today, many of which are already on the way to becoming commercialized. In this series of lectures, we will discuss hardware and software development that will become mainstream in the timeframe of a few years and how they will shape or change the computing landscape - commercial and personal alike. Topics range from processor and memory aspects, programming models and the limits of artificial intelligence, up to end-user interaction with wearables or e-textiles. We discuss the impact of these technologies on the art of programming, the data centres of the future and daily life. On the third day of the Future Computing Technology series, we will touch on societal aspects of the future of computing. Our perception of computers may at times seem passive, but in reality we are a vital link in the feedback loop. Human-computer interaction, innovative forms of computers, privacy, process automation, threats and medica...

  5. Mathematics for engineering, technology and computing science

    CERN Document Server

    Martin, Hedley G

    1970-01-01

    Mathematics for Engineering, Technology and Computing Science is a text on mathematics for courses in engineering, technology, and computing science. It covers linear algebra, ordinary differential equations, and vector analysis, together with line and multiple integrals. This book consists of eight chapters and begins with a discussion on determinants and linear equations, with emphasis on how the value of a determinant is defined and how it may be obtained. Solution of linear equations and the dependence between linear equations are also considered. The next chapter introduces the reader to

  6. PRIMARY SCHOOL PRINCIPALS’ ATTITUDES TOWARDS COMPUTER TECHNOLOGY IN THE USE OF COMPUTER TECHNOLOGY IN SCHOOL ADMINISTRATION

    OpenAIRE

    GÜNBAYI, İlhan; CANTÜRK, Gökhan

    2011-01-01

    The aim of the study is to determine the usage of computer technology in school administration, primary school administrators' attitudes towards computer technology, and administrators' and teachers' computer literacy levels. The study was designed as a survey. The population of the study consists of primary school principals and assistant principals in public primary schools in the center of Antalya. The data were collected from 161 (51%) administrator questionnaires in 68 of 129 public primary s...

  7. The path toward HEP High Performance Computing

    International Nuclear Information System (INIS)

    Apostolakis, John; Brun, René; Gheata, Andrei; Wenzel, Sandro; Carminati, Federico

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in memory consumption compared to the single-threaded version, together with sub-optimal handling of event-processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grained parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit
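
    The vector-of-particles scheduling described above can be caricatured with a small work-queue sketch (all names and basket sizes are invented for illustration; this is not the Geant-V framework). Baskets of particles are queued and pulled by a fixed pool of workers:

        # Hedged sketch of fine-grained scheduling: "baskets" (vectors)
        # of particles dispatched to an arbitrary number of workers.
        import queue, threading

        NUM_WORKERS = 4
        work = queue.Queue()

        def transport_basket(basket):
            # Placeholder for vectorized particle transport over one basket.
            return sum(basket)

        def worker():
            while True:
                basket = work.get()
                if basket is None:       # poison pill ends the worker
                    break
                transport_basket(basket)
                work.task_done()

        threads = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
        for t in threads:
            t.start()
        for i in range(100):             # enqueue 100 baskets of "particles"
            work.put(list(range(i, i + 16)))
        work.join()                      # wait until every basket is processed
        for _ in threads:
            work.put(None)
        for t in threads:
            t.join()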

  8. A High Performance VLSI Computer Architecture For Computer Graphics

    Science.gov (United States)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

    A VLSI computer architecture consisting of multiple processors is presented in this paper to satisfy the demands of modern computer graphics, e.g. high resolution, realistic animation, and real-time display. All processors share a global memory that is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy the specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, object domain and space domain, to fully utilize the data-independency characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With current high-density interconnection (MI) technology, it is feasible to implement a 64-processor system achieving 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  9. New data processing technologies at LHC: From Grid to Cloud Computing and beyond

    International Nuclear Information System (INIS)

    De Salvo, A.

    2011-01-01

    For several years the LHC experiments at CERN have been successfully using Grid Computing technologies for their distributed data processing activities on a global scale. Recently, the experience gained with the current systems has allowed the design of the future Computing Models, involving new technologies such as Cloud Computing, virtualization, and high performance distributed database access. In this paper we describe the new computational technologies of the LHC experiments at CERN, comparing them with the current models in terms of features and performance.

  10. Technology for Large-Scale Translation of Clinical Practice Guidelines: A Pilot Study of the Performance of a Hybrid Human and Computer-Assisted Approach.

    Science.gov (United States)

    Van de Velde, Stijn; Macken, Lieve; Vanneste, Koen; Goossens, Martine; Vanschoenbeek, Jan; Aertgeerts, Bert; Vanopstal, Klaar; Vander Stichele, Robert; Buysschaert, Joost

    2015-10-09

    The construction of EBMPracticeNet, a national electronic point-of-care information platform in Belgium, began in 2011 to optimize quality of care by promoting evidence-based decision making. The project involved, among other tasks, the translation of 940 EBM Guidelines of Duodecim Medical Publications from English into Dutch and French. Considering the scale of the translation process, it was decided to make use of computer-aided translation performed by certificated translators with limited expertise in medical translation. Our consortium used a hybrid approach, involving a human translator supported by a translation memory (using SDL Trados Studio), terminology recognition from medical terminology databases (using SDL MultiTerm terminology databases), and support from online machine translation. This resulted in a validated translation memory, which is now in use for the translation of new and updated guidelines. The objective of this experiment was to evaluate the performance of the hybrid human and computer-assisted approach in comparison with translation unsupported by translation memory and terminology recognition. A comparison was also made with the translation efficiency of an expert medical translator. We conducted a pilot study in which two sets of 30 new and 30 updated guidelines were randomized to one of three groups. Comparable guidelines were translated (1) by certificated junior translators without medical specialization using the hybrid method, (2) by an experienced medical translator without this support, and (3) by the same junior translators without the support of the validated translation memory. A medical proofreader, blinded to the translation procedure, evaluated the translated guidelines for acceptability and adequacy. Translation speed was measured by recording translation and post-editing time. The human translation edit rate was calculated as a metric to evaluate the quality of the translation. A further evaluation was made of
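
    The abstract does not spell out its edit-rate formula, but a human translation edit rate is commonly computed as the word-level edit distance between the raw (machine or junior-translator) output and its post-edited reference, normalized by the reference length. A minimal sketch under that assumption:

        # Hedged sketch: word-level Levenshtein distance normalized by
        # reference length, one common human translation edit rate.
        def edit_distance(hyp, ref):
            m, n = len(hyp), len(ref)
            d = [[0] * (n + 1) for _ in range(m + 1)]
            for i in range(m + 1):
                d[i][0] = i
            for j in range(n + 1):
                d[0][j] = j
            for i in range(1, m + 1):
                for j in range(1, n + 1):
                    cost = 0 if hyp[i - 1] == ref[j - 1] else 1
                    d[i][j] = min(d[i - 1][j] + 1,          # deletion
                                  d[i][j - 1] + 1,          # insertion
                                  d[i - 1][j - 1] + cost)   # substitution
            return d[m][n]

        def human_translation_edit_rate(raw_output, post_edited):
            hyp, ref = raw_output.split(), post_edited.split()
            return edit_distance(hyp, ref) / max(len(ref), 1)

        # Two edits against a five-word reference -> rate 0.4
        print(human_translation_edit_rate("de patient heeft koorts",
                                          "de patiënt heeft hoge koorts"))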

  11. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relative low power of reconfigurable hardware–in the form Field Programmable Gate Arrays (FPGAs)–in High Performance Computing (HPC) applications. It presents the latest developments in this field from applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community.  The book includes:  Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation e.g. computational fluid dynamics and seismic modeling, cryptanalysis, astronomical N-body simulation, and circuit simulation.     Seven architecture chapters which...

  12. GPGPU-based explicit finite element computations for applications in biomechanics: the performance of material models, element technologies, and hardware generations.

    Science.gov (United States)

    Strbac, V; Pierce, D M; Vander Sloten, J; Famaey, N

    2017-12-01

    Finite element (FE) simulations are increasingly valuable in assessing and improving the performance of biomedical devices and procedures. Due to high computational demands, such simulations may become difficult or even infeasible, especially when considering the nearly incompressible and anisotropic material models prevalent in analyses of soft tissues. Implementations of GPGPU-based explicit FEs predominantly cover isotropic materials, e.g. the neo-Hookean model. To elucidate the computational expense of anisotropic materials, we implement the Gasser-Ogden-Holzapfel dispersed, fiber-reinforced model and compare solution times against the neo-Hookean model. Implementations of GPGPU-based explicit FEs conventionally rely on single-point (under) integration. To elucidate the expense of full and selective-reduced integration (which are more reliable), we implement both and compare the corresponding solution times against those generated using underintegration. To better understand the advancement of hardware, we compare results generated using representative Nvidia GPGPUs from three recent generations: Fermi (C2075), Kepler (K20c), and Maxwell (GTX980). We explore scaling by solving the same boundary value problem (an extension-inflation test on a segment of human aorta) with progressively larger FE meshes. Our results demonstrate substantial improvements in simulation speeds relative to two benchmark FE codes (up to 300× while maintaining accuracy), and thus open many avenues to novel applications in biomechanics and medicine.
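
    To make concrete why the anisotropic model costs more per integration point, the sketch below evaluates textbook forms of the neo-Hookean energy and the Gasser-Ogden-Holzapfel fiber term side by side (material constants and the deformation are illustrative, not taken from the paper):

        # Hedged sketch: strain-energy evaluations for neo-Hookean vs.
        # the GOH dispersed fiber term at one quadrature point.
        import numpy as np

        def neo_hookean_energy(F, mu=1.0):
            # W = (mu/2) * (I1 - 3) for the isochoric neo-Hookean model.
            C = F.T @ F                       # right Cauchy-Green tensor
            return 0.5 * mu * (np.trace(C) - 3.0)

        def goh_fiber_energy(F, a0, k1=1.0, k2=1.0, kappa=0.2):
            # Psi_f = k1/(2 k2) * (exp(k2 * E^2) - 1), with
            # E = kappa*(I1 - 3) + (1 - 3 kappa)*(I4 - 1), I4 = a0 . C a0
            C = F.T @ F
            I1 = np.trace(C)
            I4 = a0 @ C @ a0                  # squared fiber stretch
            E = kappa * (I1 - 3.0) + (1.0 - 3.0 * kappa) * (I4 - 1.0)
            return k1 / (2.0 * k2) * np.expm1(k2 * E * E)

        F = np.diag([1.1, 1.0, 1.0 / 1.1])    # isochoric uniaxial stretch
        a0 = np.array([1.0, 0.0, 0.0])        # reference fiber direction
        print(neo_hookean_energy(F), goh_fiber_energy(F, a0))

    The extra invariants, the exponential, and the fiber direction handling are the per-point overhead that the paper quantifies on GPU hardware.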

  13. Multicore Challenges and Benefits for High Performance Scientific Computing

    Directory of Open Access Journals (Sweden)

    Ida M.B. Nielsen

    2008-01-01

    Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
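
    A minimal sketch of that hybrid message-passing/multi-threading model applied to a distributed-memory matrix multiply, assuming the mpi4py and numpy packages (block counts are illustrative and assume even divisibility; run with, say, mpiexec -n 4):

        # Hedged sketch: MPI distributes row blocks across processes,
        # threads multiply sub-blocks within each process.
        from concurrent.futures import ThreadPoolExecutor
        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()
        n = 512                              # assumes size divides n

        if rank == 0:
            A = np.random.rand(n, n)
            B = np.random.rand(n, n)
            row_blocks = np.split(A, size)   # one row block per MPI rank
        else:
            B, row_blocks = None, None

        B = comm.bcast(B, root=0)            # every rank needs all of B
        my_rows = comm.scatter(row_blocks, root=0)

        # Thread-level parallelism inside the rank: numpy's matmul
        # releases the GIL, so threads genuinely overlap.
        with ThreadPoolExecutor(max_workers=4) as pool:
            parts = pool.map(lambda sub: sub @ B, np.split(my_rows, 4))
        my_result = np.vstack(list(parts))

        C_blocks = comm.gather(my_result, root=0)
        if rank == 0:
            print(np.allclose(np.vstack(C_blocks), A @ B))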

  14. Future Computing Technology (1/3)

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    Computing of the future will be affected by a number of fundamental technologies in development today, many of which are already on the way to becoming commercialized. In this series of lectures, we will discuss hardware and software development that will become mainstream in the timeframe of a few years and how they will shape or change the computing landscape - commercial and personal alike. Topics range from processor and memory aspects, programming models and the limits of artificial intelligence, up to end-user interaction with wearables or e-textiles. We discuss the impact of these technologies on the art of programming, the data centres of the future and daily life. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP and Go...

  15. Network survivability performance (computer diskette)

    Science.gov (United States)

    1993-11-01

    File characteristics: Data file; 1 file. Physical description: 1 computer diskette; 3 1/2 in.; high density; 2.0MB. System requirements: Mac; Word. This technical report has been developed to address the survivability of telecommunications networks including services. It responds to the need for a common understanding of, and assessment techniques for network survivability, availability, integrity, and reliability. It provides a basis for designing and operating telecommunication networks to user expectations for network survivability.

  16. Application of software technology to a future spacecraft computer design

    Science.gov (United States)

    Labaugh, R. J.

    1980-01-01

    A study was conducted to determine how major improvements in spacecraft computer systems can be obtained from recent advances in hardware and software technology. Investigations into integrated circuit technology indicated that the CMOS/SOS chip set being developed for the Air Force Avionics Laboratory at Wright Patterson had the best potential for improving the performance of spaceborne computer systems. An integral part of the chip set is the bit slice arithmetic and logic unit. The flexibility allowed by microprogramming, combined with the software investigations, led to the specification of a baseline architecture and instruction set.

  17. Field-programmable custom computing technology architectures, tools, and applications

    CERN Document Server

    Luk, Wayne; Pocek, Ken

    2000-01-01

    Field-Programmable Custom Computing Technology: Architectures, Tools, and Applications brings together in one place important contributions and up-to-date research results in this fast-moving area. In seven selected chapters, the book describes the latest advances in architectures, design methods, and applications of field-programmable devices for high-performance reconfigurable systems. The contributors to this work were selected from the leading researchers and practitioners in the field. It will be valuable to anyone working or researching in the field of custom computing technology. It serves as an excellent reference, providing insight into some of the most challenging issues being examined today.

  18. Guide to cloud computing for business and technology managers from distributed computing to cloudware applications

    CERN Document Server

    Kale, Vivek

    2014-01-01

    Guide to Cloud Computing for Business and Technology Managers: From Distributed Computing to Cloudware Applications unravels the mystery of cloud computing and explains how it can transform the operating contexts of business enterprises. It provides a clear understanding of what cloud computing really means, what it can do, and when it is practical to use. Addressing the primary management and operation concerns of cloudware, including performance, measurement, monitoring, and security, this pragmatic book: Introduces the enterprise applications integration (EAI) solutions that were a first ste

  19. High-performance computing — an overview

    Science.gov (United States)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.
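
    The gap between scalar loops and the data-parallel formulations that vectorizing compilers and tuned libraries exploit can be seen in a toy comparison (illustrative only):

        # Illustrative contrast: scalar loop vs. data-parallel expression.
        import numpy as np

        x = np.linspace(0.0, 1.0, 1_000_000)

        # Scalar loop: one element per iteration, little opportunity
        # for the hardware's vector units.
        y_loop = np.empty_like(x)
        for i in range(x.size):
            y_loop[i] = 2.0 * x[i] * x[i] + 1.0

        # Data-parallel form: one whole-array expression, mapped onto
        # vector instructions and/or multiple cores by the library.
        y_vec = 2.0 * x * x + 1.0

        print(np.allclose(y_loop, y_vec))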

  20. Compiler Technology for Parallel Scientific Computation

    Directory of Open Access Journals (Sweden)

    Can Özturan

    1994-01-01

    There is a need for compiler technology that, given the source program, will generate efficient parallel codes for different architectures with minimal user involvement. Parallel computation is becoming indispensable in solving large-scale problems in science and engineering. Yet, the use of parallel computation is limited by the high costs of developing the needed software. To overcome this difficulty we advocate a comprehensive approach to the development of scalable architecture-independent software for scientific computation based on our experience with equational programming language (EPL). Our approach is based on a program decomposition, parallel code synthesis, and run-time support for parallel scientific computation. The program decomposition is guided by the source program annotations provided by the user. The synthesis of parallel code is based on configurations that describe the overall computation as a set of interacting components. Run-time support is provided by the compiler-generated code that redistributes computation and data during object program execution. The generated parallel code is optimized using techniques of data alignment, operator placement, wavefront determination, and memory optimization. In this article we discuss annotations, configurations, parallel code generation, and run-time support suitable for parallel programs written in the functional parallel programming language EPL and in Fortran.

  1. Computer Education and Instructional Technology Teacher Trainees' Opinions about Cloud Computing Technology

    Science.gov (United States)

    Karamete, Aysen

    2015-01-01

    This study aims to show the present state of cloud computing usage in the department of Computer Education and Instructional Technology (CEIT) amongst teacher trainees in the School of Necatibey Education, Balikesir University, Turkey. In this study, a questionnaire with open-ended questions was used. 17 CEIT teacher trainees…

  2. Tutorial on Computing: Technological Advances, Social Implications, Ethical and Legal Issues

    OpenAIRE

    Debnath, Narayan

    2012-01-01

    Computing and information technology have made significant advances. The use of computing and technology is a major aspect of our lives, and this use will only continue to increase in our lifetime. Electronic digital computers and high performance communication networks are central to contemporary information technology. The computing applications in a wide range of areas including business, communications, medical research, transportation, entertainments, and education are transforming lo...

  3. US QCD computational performance studies with PERI

    International Nuclear Information System (INIS)

    Zhang, Y; Fowler, R; Huck, K; Malony, A; Porterfield, A; Reed, D; Shende, S; Taylor, V; Wu, X

    2007-01-01

    We report on some of the interactions between two SciDAC projects: The National Computational Infrastructure for Lattice Gauge Theory (USQCD), and the Performance Engineering Research Institute (PERI). Many modern scientific programs consistently report the need for faster computational resources to maintain global competitiveness. However, as the size and complexity of emerging high end computing (HEC) systems continue to rise, achieving good performance on such systems is becoming ever more challenging. In order to take full advantage of the resources, it is crucial to understand the characteristics of relevant scientific applications and the systems these applications are running on. Using tools developed under PERI and by other performance measurement researchers, we studied the performance of two applications, MILC and Chroma, on several high performance computing systems at DOE laboratories. In the case of Chroma, we discuss how the use of C++ and modern software engineering and programming methods are driving the evolution of performance tools

  4. Fast magnetic field computation in fusion technology using GPU technology

    Energy Technology Data Exchange (ETDEWEB)

    Chiariello, Andrea Gaetano [Ass. EURATOM/ENEA/CREATE, Dipartimento di Ingegneria Industriale e dell’Informazione, Seconda Università di Napoli, Via Roma 29, Aversa (CE) (Italy); Formisano, Alessandro, E-mail: Alessandro.Formisano@unina2.it [Ass. EURATOM/ENEA/CREATE, Dipartimento di Ingegneria Industriale e dell’Informazione, Seconda Università di Napoli, Via Roma 29, Aversa (CE) (Italy); Martone, Raffaele [Ass. EURATOM/ENEA/CREATE, Dipartimento di Ingegneria Industriale e dell’Informazione, Seconda Università di Napoli, Via Roma 29, Aversa (CE) (Italy)

    2013-10-15

    Highlights:
    - The paper deals with high-accuracy numerical simulations of high-field magnets.
    - Porting existing codes to High Performance Computing architectures yielded a relevant speedup without reducing computational accuracy.
    - Some example applications, referring to ITER-like magnets, are reported.

    Abstract: One of the main issues in simulating the functioning of Tokamaks is the reliable and accurate computation of actual field maps in the plasma chamber. In this paper a tool is presented that can accurately compute the magnetic field maps produced by active coils of any 3D shape, wound with a high number of conductors. Under the linearity assumption, the coil winding is modeled by means of "sticks" following each conductor's shape, and the contribution of each stick is computed using high-speed graphics processing units (GPUs). Relevant speed enhancements with respect to a standard parallel computing environment are achieved in this way.
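
    The stick model lends itself to a compact illustration: the Biot-Savart field of a straight current segment has a closed form, so a winding is just a sum over segments. The sketch below uses a standard finite-segment expression with invented coil parameters (this is not the authors' GPU code):

        # Hedged sketch of the "stick" model: closed-form field of a
        # straight current segment, summed over an approximated coil.
        import numpy as np

        MU0 = 4.0e-7 * np.pi                 # vacuum permeability [T*m/A]

        def stick_field(p, a, b, current):
            """Magnetic field at point p from a straight segment a->b [T]."""
            ri, rf = p - a, p - b
            ri_n, rf_n = np.linalg.norm(ri), np.linalg.norm(rf)
            denom = ri_n * rf_n * (ri_n * rf_n + ri @ rf)
            return (MU0 * current / (4.0 * np.pi)
                    * np.cross(ri, rf) * (ri_n + rf_n) / denom)

        # Approximate a circular coil by N sticks; evaluate B at its centre.
        N, R, I = 200, 1.0, 1.0e6            # segments, radius [m], current [A]
        phi = np.linspace(0.0, 2.0 * np.pi, N + 1)
        nodes = np.stack([R * np.cos(phi), R * np.sin(phi),
                          np.zeros(N + 1)], axis=1)
        B = sum(stick_field(np.zeros(3), nodes[k], nodes[k + 1], I)
                for k in range(N))
        print(B, "vs loop formula", MU0 * I / (2 * R))

    The same per-segment kernel is what a GPU implementation would evaluate in parallel across sticks and field points.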

  5. How does technological regime affect performance of technology development projects?

    NARCIS (Netherlands)

    Song, Michael; Hooshangi, Soheil; Zhao, Y. Lisa; Halman, Johannes I.M.

    2014-01-01

    In this study, we examine how technological regime affects the performance of technology development projects (i.e., project quality, sales, and profit). Technological regime is defined as the set of attributes of a technological environment where the innovative activities of firms take place.

  6. 3rd International Conference on High Performance Scientific Computing

    CERN Document Server

    Kostina, Ekaterina; Phu, Hoang; Rannacher, Rolf

    2008-01-01

    This proceedings volume contains a selection of papers presented at the Third International Conference on High Performance Scientific Computing held at the Hanoi Institute of Mathematics, Vietnamese Academy of Science and Technology (VAST), March 6-10, 2006. The conference has been organized by the Hanoi Institute of Mathematics, Interdisciplinary Center for Scientific Computing (IWR), Heidelberg, and its International PhD Program ``Complex Processes: Modeling, Simulation and Optimization'', and Ho Chi Minh City University of Technology. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and applications in practice. Subjects covered are mathematical modelling, numerical simulation, methods for optimization and control, parallel computing, software development, applications of scientific computing in physics, chemistry, biology and mechanics, environmental and hydrology problems, transport, logistics and site loca...

  7. 5th International Conference on High Performance Scientific Computing

    CERN Document Server

    Hoang, Xuan; Rannacher, Rolf; Schlöder, Johannes

    2014-01-01

    This proceedings volume gathers a selection of papers presented at the Fifth International Conference on High Performance Scientific Computing, which took place in Hanoi on March 5-9, 2012. The conference was organized by the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) of Heidelberg University, Ho Chi Minh City University of Technology, and the Vietnam Institute for Advanced Study in Mathematics. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and practical applications. Subjects covered include mathematical modeling; numerical simulation; methods for optimization and control; parallel computing; software development; and applications of scientific computing in physics, mechanics and biomechanics, material science, hydrology, chemistry, biology, biotechnology, medicine, sports, psychology, transport, logistics, com...

  8. Debugging a high performance computing program

    Science.gov (United States)

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  9. National Energy Research Scientific Computing Center (NERSC): Advancing the frontiers of computational science and technology

    Energy Technology Data Exchange (ETDEWEB)

    Hules, J. [ed.

    1996-11-01

    National Energy Research Scientific Computing Center (NERSC) provides researchers with high-performance computing tools to tackle science's biggest and most challenging problems. Founded in 1974 by DOE/ER, the Controlled Thermonuclear Research Computer Center was the first unclassified supercomputer center and was the model for those that followed. Over the years the center's name was changed to the National Magnetic Fusion Energy Computer Center and then to NERSC; it was relocated to LBNL. NERSC, one of the largest unclassified scientific computing resources in the world, is the principal provider of general-purpose computing services to DOE/ER programs: Magnetic Fusion Energy, High Energy and Nuclear Physics, Basic Energy Sciences, Health and Environmental Research, and the Office of Computational and Technology Research. NERSC users are a diverse community located throughout the US and in several foreign countries. This brochure describes: the NERSC advantage, its computational resources and services, future technologies, scientific resources, and computational science of scale (interdisciplinary research over a decade or longer; examples: combustion in engines, waste management chemistry, global climate change modeling).

  10. Embedded High Performance Scalable Computing Systems

    National Research Council Canada - National Science Library

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, A Lockheed Martin Company and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  11. Improving student retention in computer engineering technology

    Science.gov (United States)

    Pierozinski, Russell Ivan

    The purpose of this research project was to improve student retention in the Computer Engineering Technology program at the Northern Alberta Institute of Technology by reducing the number of dropouts and increasing the graduation rate. This action research project utilized a mixed methods approach of a survey and face-to-face interviews. The participants were male and female, with a large majority ranging from 18 to 21 years of age. The research found that participants recognized their skills and capability, but their capacity to remain in the program was dependent on understanding and meeting the demanding pace and rigour of the program. The participants recognized that curriculum delivery along with instructor-student interaction had an impact on student retention. To be successful in the program, students required support in four domains: academic, learning management, career, and social.

  12. Applications of computational intelligence in biomedical technology

    CERN Document Server

    Majernik, Jaroslav; Pancerz, Krzysztof; Zaitseva, Elena

    2016-01-01

    This book presents the latest results and selected applications of Computational Intelligence in Biomedical Technologies. Most of the contributions deal with problems of Biomedical and Medical Informatics, ranging from theoretical considerations to practical applications. Various aspects of development methods and algorithms in Biomedical and Medical Informatics are discussed, as well as algorithms for medical image processing and modeling methods. Individual contributions also cover medical decision-making support, estimation of treatment risks, reliability of medical systems, problems of practical clinical applications, and many other topics. This book is intended for scientists interested in problems of Biomedical Technologies, for researchers and academic staff, for all those dealing with Biomedical and Medical Informatics, and for PhD students. Useful information is also offered to IT companies and developers of equipment and/or software for medicine, as well as to medical professionals.

  13. Cloud Computing for Complex Performance Codes.

    Energy Technology Data Exchange (ETDEWEB)

    Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Klein, Brandon Thorin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Miner, John Gifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.

  14. Performing quantum computing experiments in the cloud

    Science.gov (United States)

    Devitt, Simon J.

    2016-09-01

    Quantum computing technology has reached a second renaissance in the past five years. Increased interest from both the private and public sector combined with extraordinary theoretical and experimental progress has solidified this technology as a major advancement in the 21st century. As anticipated by many, some of the first realizations of quantum computing technology have occurred over the cloud, with users logging onto dedicated hardware over the classical internet. Recently, IBM has released the Quantum Experience, which allows users to access a five-qubit quantum processor. In this paper we take advantage of this online availability of actual quantum hardware and present four quantum information experiments. We utilize the IBM chip to realize protocols in quantum error correction, quantum arithmetic, quantum graph theory, and fault-tolerant quantum computation by accessing the device remotely through the cloud. While the results are subject to significant noise, the correct results are returned from the chip. This demonstrates the power of experimental groups opening up their technology to a wider audience and will hopefully allow for the next stage of development in quantum information technology.
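
    As a local companion to such cloud experiments, the following sketch (illustrative only; it is not code from the paper, and the gate matrices are textbook definitions) simulates the ideal two-qubit Bell-state preparation in numpy, giving the noise-free statistics against which a chip's noisy counts can be compared:

        import numpy as np

        # Standard single-qubit Hadamard and two-qubit CNOT gate matrices
        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
        I = np.eye(2)
        CNOT = np.array([[1, 0, 0, 0],
                         [0, 1, 0, 0],
                         [0, 0, 0, 1],
                         [0, 0, 1, 0]])

        # Prepare |00>, apply H to qubit 0, then CNOT -> Bell state (|00>+|11>)/sqrt(2)
        state = np.zeros(4)
        state[0] = 1.0
        state = CNOT @ (np.kron(H, I) @ state)

        # Ideal measurement statistics: 50% |00>, 50% |11>, nothing in |01>/|10>
        probs = np.abs(state) ** 2
        for basis, p in zip(["00", "01", "10", "11"], probs):
            print(f"P({basis}) = {p:.2f}")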

  15. Computational intelligence for technology enhanced learning

    Energy Technology Data Exchange (ETDEWEB)

    Xhafa, Fatos [Polytechnic Univ. of Catalonia, Barcelona (Spain). Dept. of Languages and Informatics Systems; Caballe, Santi; Daradoumis, Thanasis [Open Univ. of Catalonia, Barcelona (Spain). Dept. of Computer Sciences Multimedia and Telecommunications; Abraham, Ajith [Machine Intelligence Research Labs (MIR Labs), Auburn, WA (United States). Scientific Network for Innovation and Research Excellence; Juan Perez, Angel Alejandro (eds.) [Open Univ. of Catalonia, Barcelona (Spain). Dept. of Information Sciences

    2010-07-01

    E-Learning has become one of the most widespread forms of distance teaching and learning. Technologies such as the Web, Grid, and mobile and wireless networks are pushing teaching and learning communities to find new and intelligent ways of using them to enhance teaching and learning activities. Indeed, these new technologies can play an important role in increasing support for teachers and learners and in shortening the time to learning and teaching; yet, it is necessary to use intelligent techniques to take advantage of these new technologies to achieve the desired support for teachers and learners and to enhance learners' performance in distributed learning environments. The chapters of this volume present advances in using intelligent techniques for technology-enhanced learning as well as the development of e-Learning applications based on such techniques and supported by technology. Such intelligent techniques include clustering and classification for personalization of learning, intelligent context-aware techniques, adaptive learning, data mining techniques and ontologies in e-Learning systems, among others. Academics, scientists, software developers, teachers and tutors, and students interested in e-Learning will find this book useful for their academic, research and practice activity. (orig.)

  16. Benchmarking high performance computing architectures with CMS’ skeleton framework

    Science.gov (United States)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high-throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel's Threading Building Blocks (TBB) library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures: machines such as Cori Phase 1 and 2, Theta, and Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  17. An Adaptive Middleware for Improved Computational Performance

    DEFF Research Database (Denmark)

    Bonnichsen, Lars Frydendal

    The performance improvements in computer systems over the past 60 years have been fueled by an exponential increase in energy efficiency. In recent years, the phenomenon known as the end of Dennard scaling has slowed energy efficiency improvements, but improving computer energy efficiency is more important now than ever. Traditionally, most improvements in computer energy efficiency have come from improvements in lithography (the ability to produce smaller transistors) and computer architecture (the ability to apply those transistors efficiently). Since the end of scaling, we have seen... In this work, we are improving computational performance by exploiting modern hardware features, such as dynamic voltage-frequency scaling and transactional memory. Adapting software is an iterative process, requiring that we continually revisit it to meet new requirements or realities; a time consuming process...

  18. International Conference on Computers and Advanced Technology in Education

    CERN Document Server

    Advanced Information Technology in Education

    2012-01-01

    The volume includes a set of selected papers extended and revised from the 2011 International Conference on Computers and Advanced Technology in Education. With the development of computers and advanced technology, human social activities are changing fundamentally. Education, especially education reform in different countries, has benefited greatly from computers and advanced technology. Generally speaking, education is a field which needs more information, while computers, advanced technology and the internet are good information providers. Also, with the aid of computers and advanced technology, educators can combine these tools effectively in education. Therefore, computers and advanced technology should be regarded as important media in modern education. The volume Advanced Information Technology in Education is intended to provide a forum for researchers, educators, engineers, and government officials involved in the general areas of computers and advanced technology in education to d...

  19. Ceramic PPC technology and performance

    International Nuclear Information System (INIS)

    Akimov, V.; Arefiev, A.; Bizzeti, A.; Civinini, C.; Choumilov, E.; Alessandro, R. d'; Ferrando, A.; Kolotaev, Y.; Kuleshov, S.; Malinin, A.; Martemianov, A.; Martinez-Laso, L.; Mikhailov, K.; Pojidaev, V.; Rojkov, A.; Serov, V.; Smirnitsky, A.

    1994-01-01

    Mass production technology for PPCs (Parallel Plate Chambers) is described. This technology provides precise manufacture of chamber components and high uniformity of chamber properties. Only radiation-hard materials were used. Results on the chamber uniformity, the detection efficiency and the timing properties of PPCs are presented. (orig.)

  20. Optical interconnection networks for high-performance computing systems

    International Nuclear Information System (INIS)

    Biberman, Aleksandr; Bergman, Keren

    2012-01-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. (review article)

  1. Spacecraft computer technology at Southwest Research Institute

    Science.gov (United States)

    Shirley, D. J.

    1993-01-01

    Southwest Research Institute (SwRI) has developed and delivered spacecraft computers for a number of different near-Earth-orbit spacecraft including shuttle experiments and SDIO free-flyer experiments. We describe the evolution of the basic SwRI spacecraft computer design from those weighing in at 20 to 25 lb and using 20 to 30 W to newer models weighing less than 5 lb and using only about 5 W, yet delivering twice the processing throughput. Because of their reduced size, weight, and power, these newer designs are especially applicable to planetary instrument requirements. The basis of our design evolution has been the availability of more powerful processor chip sets and the development of higher density packaging technology, coupled with more aggressive design strategies in incorporating high-density FPGA technology and use of high-density memory chips. In addition to reductions in size, weight, and power, the newer designs also address the necessity of survival in the harsh radiation environment of space. Spurred by participation in such programs as MSTI, LACE, RME, Delta 181, Delta Star, and RADARSAT, our designs have evolved in response to program demands to be small, low-powered units, radiation tolerant enough to be suitable for both Earth-orbit microsats and for planetary instruments. Present designs already include MIL-STD-1750 and Multi-Chip Module (MCM) technology with near-term plans to include RISC processors and higher-density MCM's. Long term plans include development of whole-core processors on one or two MCM's.

  2. Function Follows Performance in Evolutionary Computational Processing

    DEFF Research Database (Denmark)

    Pasold, Anke; Foged, Isak Worre

    2011-01-01

    As the title ‘Function Follows Performance in Evolutionary Computational Processing’ suggests, this paper explores the potentials of employing multiple design and evaluation criteria within one processing model in order to account for a number of performative parameters desired within varied...

  3. Computer Simulation Performed for Columbia Project Cooling System

    Science.gov (United States)

    Ahmad, Jasim

    2005-01-01

    This demo shows a high-fidelity simulation of the air flow in the main computer room housing the Columbia (10,024 Intel Itanium processors) system. The simulation assesses the performance of the cooling system, identifies deficiencies, and recommends modifications to eliminate them. It used two in-house software packages on NAS supercomputers: Chimera Grid Tools to generate a geometric model of the computer room, and the OVERFLOW-2 code for fluid and thermal simulation. This state-of-the-art technology can be easily extended to provide a general capability for air flow analyses in any modern computer room.

  4. Evaluating Technology Resistance and Technology Satisfaction on Students' Performance

    Science.gov (United States)

    Norzaidi, Mohd Daud; Salwani, Mohamed Intan

    2009-01-01

    Purpose: Using the extended task-technology fit (TTF) model, this paper aims to examine the effects of technology resistance, technology satisfaction and internet usage on students' performance. Design/methodology/approach: The study was conducted at Universiti Teknologi MARA, Johor, Malaysia and questionnaires were distributed to 354 undergraduate students.…

  5. FY 1992 Blue Book: Grand Challenges: High Performance Computing and Communications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  6. FY 1993 Blue Book: Grand Challenges 1993: High Performance Computing and Communications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  7. Computer Networking Laboratory for Undergraduate Computer Technology Program

    National Research Council Canada - National Science Library

    Naghedolfeizi, Masoud

    2000-01-01

    ...) To improve the quality of education in the existing courses related to computer networks and data communications as well as other computer science courses such as programming languages and computer...

  8. Computer-aided design and computer science technology

    Science.gov (United States)

    Fulton, R. E.; Voigt, S. J.

    1976-01-01

    A description is presented of computer-aided design requirements and the resulting computer science advances needed to support aerospace design. The aerospace design environment is examined, taking into account problems of data handling and aspects of computer hardware and software. The interactive terminal is normally the primary interface between the computer system and the engineering designer. Attention is given to user aids, interactive design, interactive computations, the characteristics of design information, data management requirements, hardware advancements, and computer science developments.

  9. DURIP: High Performance Computing in Biomathematics Applications

    Science.gov (United States)

    2017-05-10

    The goal of this award was to enhance the capabilities of the Department of Applied Mathematics and Statistics (AMS) at the University of California, Santa Cruz (UCSC) to conduct research and research-related education in areas of...

  10. Multimedia Image Technology and Computer Aided Manufacturing Engineering Analysis

    Science.gov (United States)

    Nan, Song

    2018-03-01

    Since the reform and opening up, with the continuous development of science and technology in China, more and more advanced technologies have emerged under the trend of diversification. Multimedia image technology, for example, has had a significant and positive impact on computer-aided manufacturing engineering in China, both in its function and in how it is applied. Therefore, this paper starts from the concept of multimedia image technology and analyzes the application of multimedia image technology in computer-aided manufacturing engineering.

  11. Performance Aspects of Synthesizable Computing Systems

    DEFF Research Database (Denmark)

    Schleuniger, Pascal

    Embedded systems are used in a broad range of applications that demand high performance within severely constrained mechanical, power, and cost requirements. Embedded systems implemented in ASIC technology tend to provide the highest performance, lowest power consumption and lowest unit cost. How...

  12. Technological Evolution on Computed Tomography and Radioprotection

    Energy Technology Data Exchange (ETDEWEB)

    Leite, Bruno Barros; Ribeiro, Nuno Carrilho [Servico de Radiologia, Hospital de Curry Cabral, Rua da Beneficencia, 8, 1069-166 Lisboa (Portugal)

    2006-05-15

    Computed Tomography (CT) has been available since the 70s and has experienced a dramatic technical evolution. Multi-detector technology is our current standard, offering capabilities unthinkable only a decade ago. Yet, we must not forget the ionizing nature of CT's scanning energy (X-rays). It represents the most important cause of medically associated radiation exposure to the general public, with a trend to increase. It is compulsory to intervene with the objective of dose reduction, following ALARA policies. Currently there are some technical advances that allow dose reduction without sacrificing diagnostic image capabilities. However, human intervention is also essential. We must maintain investment in education so that CT exams are done when they are really useful in clinical decision-making. Alternative techniques should also be considered. Image quality must not be pursued while disregarding the biological effects of radiation. Generally, it is possible to obtain clinically acceptable images with lower-dose protocols. (author)

  13. Technological Evolution on Computed Tomography and Radioprotection

    International Nuclear Information System (INIS)

    Leite, Bruno Barros; Ribeiro, Nuno Carrilho

    2006-01-01

    Computed Tomography (CT) has been available since the 70s and has experienced a dramatic technical evolution. Multi-detector technology is our current standard, offering capabilities unthinkable only a decade ago. Yet, we must not forget the ionizing nature of CT's scanning energy (X-rays). It represents the most important cause of medically associated radiation exposure to the general public, with a trend to increase. It is compulsory to intervene with the objective of dose reduction, following ALARA policies. Currently there are some technical advances that allow dose reduction without sacrificing diagnostic image capabilities. However, human intervention is also essential. We must maintain investment in education so that CT exams are done when they are really useful in clinical decision-making. Alternative techniques should also be considered. Image quality must not be pursued while disregarding the biological effects of radiation. Generally, it is possible to obtain clinically acceptable images with lower-dose protocols. (author)

  14. Computer technique for evaluating collimator performance

    International Nuclear Information System (INIS)

    Rollo, F.D.

    1975-01-01

    A computer program has been developed to theoretically evaluate the overall performance of collimators used with radioisotope scanners and γ cameras. The first step of the program involves the determination of the line spread function (LSF) and geometrical efficiency from the fundamental parameters of the collimator being evaluated. The working equations can be applied to any plane of interest. The resulting LSF is applied to subroutine computer programs which compute corresponding modulation transfer function and contrast efficiency functions. The latter function is then combined with appropriate geometrical efficiency data to determine the performance index function. The overall computer program allows one to predict from the physical parameters of the collimator alone how well the collimator will reproduce various sized spherical voids of activity in the image plane. The collimator performance program can be used to compare the performance of various collimator types, to study the effects of source depth on collimator performance, and to assist in the design of collimators. The theory of the collimator performance equation is discussed, a comparison between the experimental and theoretical LSF values is made, and examples of the application of the technique are presented
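
    The LSF-to-MTF step described above amounts to taking the normalized magnitude of the Fourier transform of the line spread function. A minimal sketch of that step (assuming a Gaussian LSF for illustration; this is not the author's original program, which the record does not reproduce):

        import numpy as np

        # Sample a Gaussian line spread function (LSF) on a uniform grid
        dx = 0.1                            # mm per sample (assumed)
        x = np.arange(-10, 10, dx)          # mm
        sigma = 1.5                         # mm, assumed collimator resolution
        lsf = np.exp(-x**2 / (2 * sigma**2))

        # The MTF is the magnitude of the Fourier transform of the LSF,
        # normalized so that MTF(0) = 1
        mtf = np.abs(np.fft.rfft(lsf))
        mtf /= mtf[0]
        freqs = np.fft.rfftfreq(len(lsf), d=dx)   # cycles/mm

        for f, m in list(zip(freqs, mtf))[:5]:
            print(f"{f:.3f} cycles/mm -> MTF = {m:.3f}")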

  15. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    Science.gov (United States)

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research, and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared on performance using benchmark software, and the metric was Floating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as the CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the
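
    To make the FLOPS discussion concrete, the sketch below (illustrative only; unrelated to the HOBBIES code itself) measures the achieved floating-point rate of a dense matrix multiply, a number that depends on memory bandwidth and cache behavior as much as on clock speed, which is precisely the point about benchmarking beyond FLOPS:

        import time
        import numpy as np

        n = 2048
        a = np.random.rand(n, n)
        b = np.random.rand(n, n)

        t0 = time.perf_counter()
        c = a @ b                        # ~2*n^3 floating-point operations
        elapsed = time.perf_counter() - t0

        flops = 2 * n**3 / elapsed
        print(f"Achieved {flops / 1e9:.1f} GFLOPS in {elapsed:.3f} s")
        # The same code yields very different numbers on different memory
        # hierarchies, so FLOPS alone does not characterize a system.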

  16. Misleading Performance Claims in Parallel Computations

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.

    2009-05-29

    In a previous humorous note entitled 'Twelve Ways to Fool the Masses,' I outlined twelve common ways in which performance figures for technical computer systems can be distorted. In this paper and accompanying conference talk, I give a reprise of these twelve 'methods' and give some actual examples that have appeared in peer-reviewed literature in years past. I then propose guidelines for reporting performance, the adoption of which would raise the level of professionalism and reduce the level of confusion, not only in the world of device simulation but also in the larger arena of technical computing.
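
    One distortion of the kind Bailey catalogues is quoting speedup against a weak baseline. The sketch below (illustrative; not code from the paper, and the timing values are assumed) shows the honest calculation: speedup measured against the best serial time, reported together with parallel efficiency:

        # Honest speedup reporting: compare against the best serial time,
        # not against the parallel code run on one processor.
        best_serial_time = 100.0                        # seconds (assumed)
        parallel_times = {8: 16.0, 32: 5.0, 128: 2.5}   # seconds (assumed)

        for p, t in parallel_times.items():
            speedup = best_serial_time / t
            efficiency = speedup / p
            print(f"p={p:4d}: speedup={speedup:6.1f}, efficiency={efficiency:5.1%}")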

  17. 6th International Conference on High Performance Scientific Computing

    CERN Document Server

    Phu, Hoang; Rannacher, Rolf; Schlöder, Johannes

    2017-01-01

    This proceedings volume highlights a selection of papers presented at the Sixth International Conference on High Performance Scientific Computing, which took place in Hanoi, Vietnam on March 16-20, 2015. The conference was jointly organized by the Heidelberg Institute of Theoretical Studies (HITS), the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) at Heidelberg University, and the Vietnam Institute for Advanced Study in Mathematics, Ministry of Education. The contributions cover a broad, interdisciplinary spectrum of scientific computing and showcase recent advances in theory, methods, and practical applications. Subjects covered include numerical simulation, methods for optimization and control, parallel computing, and software development, as well as the applications of scientific computing in physics, mechanics, biomechanics and robotics, material science, hydrology, biotechnology, medicine, transport, scheduling, and in...

  18. Computer-Based Technologies in Dentistry: Types and Applications

    Directory of Open Access Journals (Sweden)

    Rajaa Mahdi Musawi

    2016-10-01

    During dental education, dental students learn how to examine patients, make diagnoses, plan treatment and perform dental procedures perfectly and efficiently. However, progress in computer-based technologies including virtual reality (VR) simulators, augmented reality (AR) and computer-aided design/computer-aided manufacturing (CAD/CAM) systems has resulted in new modalities for instruction and practice of dentistry. Virtual reality dental simulators enable repeated, objective and assessable practice in various controlled situations. Superimposition of three-dimensional (3D) virtual images on actual images in AR allows surgeons to simultaneously visualize the surgical site and superimpose informative 3D images of invisible regions on the surgical site to serve as a guide. The use of CAD/CAM systems for designing and manufacturing of dental appliances and prostheses has been well established. This article reviews computer-based technologies, their application in dentistry and their potentials and limitations in promoting dental education, training and practice. Practitioners will be able to choose from a broader spectrum of options in their field of practice by becoming familiar with new modalities of training and practice. Keywords: Virtual Reality Exposure Therapy; Immersion; Computer-Aided Design; Dentistry; Education

  19. High-performance computing for airborne applications

    International Nuclear Information System (INIS)

    Quinn, Heather M.; Manuzatto, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-01-01

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  20. Mobile Computing and Ubiquitous Networking: Concepts, Technologies and Challenges.

    Science.gov (United States)

    Pierre, Samuel

    2001-01-01

    Analyzes concepts, technologies and challenges related to mobile computing and networking. Defines basic concepts of cellular systems. Describes the evolution of wireless technologies that constitute the foundations of mobile computing and ubiquitous networking. Presents characterization and issues of mobile computing. Analyzes economical and…

  1. Computer Science and Technology Publications. NBS Publications List 84.

    Science.gov (United States)

    National Bureau of Standards (DOC), Washington, DC. Inst. for Computer Sciences and Technology.

    This bibliography lists publications of the Institute for Computer Sciences and Technology of the National Bureau of Standards. Publications are listed by subject in the areas of computer security, computer networking, and automation technology. Sections list publications of: (1) current Federal Information Processing Standards; (2) computer…

  2. Computing, Information, and Communications Technology (CICT) Program Overview

    Science.gov (United States)

    VanDalsem, William R.

    2003-01-01

    The Computing, Information and Communications Technology (CICT) Program's goal is to enable NASA's Scientific Research, Space Exploration, and Aerospace Technology Missions with greater mission assurance, for less cost, and with increased science return through the development and use of advanced computing, information and communication technologies.

  3. Technology in Paralympic sport: performance enhancement or essential for performance?

    Science.gov (United States)

    Burkett, Brendan

    2010-02-01

    People with disabilities often depend on assistive devices to enable activities of daily living as well as to compete in sport. Technological developments in sport can be controversial. To review, identify and describe current technological developments in assistive devices used in the summer Paralympic Games; and to prepare for the London 2012 Games, the future challenges and the role of technology are debated. A systematic review of the peer-reviewed literature and personal observations of technological developments at the Athens (2004) and Beijing (2008) Paralympic Games was conducted. Standard assistive devices can inhibit the Paralympians' abilities to perform the strenuous activities of their sports. Although many Paralympic sports only require technology similar to their Olympic counterparts, several unique technological modifications have been made in prosthetic and wheelchair devices. Technology is essential for the Paralympic athlete, and the potential technological advantage for a Paralympian, when competing against an Olympian, is unclear. Technology must match the individual requirements of the athlete with the sport in order for Paralympians to safely maximise their performance. Within the 'performance enhancement or essential for performance?' debate, any potential increase in mechanical performance from an assistive device must be considered holistically with the compensatory consequences the disability creates. To avoid potential technology controversies at the 2012 London Olympic and Paralympic Games, the role of technology in sport must be clarified.

  4. High performance parallel computers for science

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1989-01-01

    This paper reports that Fermilab's Advanced Computer Program (ACP) has been developing cost-effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors, each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN- or C-programmable pipelined 20 Mflops (peak), 10 MByte single-board computer. These are plugged into a 16-port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256-node, 5 GFlop system is under construction.

  5. Performative Computation-aided Design Optimization

    Directory of Open Access Journals (Sweden)

    Ming Tang

    2012-12-01

    This article discusses a collaborative research and teaching project between the University of Cincinnati, Perkins+Will's Tech Lab, and the University of North Carolina Greensboro. The primary investigation focuses on the simulation, optimization, and generation of architectural designs using performance-based computational design approaches. The projects examine various design methods, including relationships between building form and performance, and the use of proprietary software tools for parametric design.

  6. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  7. The Impact of Cloud Computing Technologies in E-learning

    Directory of Open Access Journals (Sweden)

    Hosam Farouk El-Sofany

    2013-01-01

    Cloud computing is a new computing model based on grid computing, distributed computing, parallel computing and virtualization technologies, which together define the shape of a new technology. It is the core technology of the next generation of network computing platforms; especially in the field of education, cloud computing is the basic environment and platform of future E-learning. It provides secure data storage, convenient internet services and strong computing power. This article mainly focuses on research into the application of cloud computing in the E-learning environment. The research study shows that the cloud platform is valued by both students and instructors for achieving the course objectives. The paper presents the nature, benefits and services of cloud computing as a platform for the e-learning environment.

  8. Survey of computer vision technology for UAV navigation

    Science.gov (United States)

    Xie, Bo; Fan, Xiang; Li, Sijian

    2017-11-01

    carried out at high speed. The system is applied to rapid-response systems. (2) The visual system of a distributed network. Several discrete image data acquisition sensors in different locations transmit image data to the node processor to increase the sampling rate. (3) The visual system combined with an observer. The system combines image sensors with external observers to make up for the lack of visual equipment. To some degree, these systems overcome the shortcomings of early visual systems, including low frequency, low processing efficiency and strong noise. In the end, the difficulties of navigation based on computer vision technology in practical application are briefly discussed. (1) Due to the huge workload of image operations, the real-time performance of the system is poor. (2) Due to the large environmental impact, the anti-interference ability of the system is poor. (3) Due to the ability to work only in a particular environment, the system has poor adaptability.

  9. High performance computing in linear control

    International Nuclear Information System (INIS)

    Datta, B.N.

    1993-01-01

    Remarkable progress has been made in both the theory and applications of all important areas of control. The theory is rich and very sophisticated. Some beautiful applications of control theory are presently being made in aerospace, biomedical engineering, industrial engineering, robotics, economics, power systems, etc. Unfortunately, the same assessment of progress does not hold in general for computations in control theory. Control theory is lagging behind other areas of science and engineering in this respect. Nowadays there is a revolution going on in the world of high performance scientific computing. Many powerful computers with vector and parallel processing have been built and have become available in recent years. These supercomputers offer very high speed in computations. Highly efficient software, based on powerful algorithms, has been developed to use on these advanced computers, and has also contributed to increased performance. While workers in many areas of science and engineering have taken great advantage of these hardware and software developments, control scientists and engineers, unfortunately, have not been able to take much advantage of these developments.
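
    For a flavor of the computational kernels involved in linear control, the sketch below solves a continuous-time Lyapunov equation, A X + X A^T + Q = 0, a building block of stability analysis; it is an illustrative example using SciPy, not code referenced by the paper:

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        # A stable system matrix (eigenvalues in the left half-plane) and Q > 0
        A = np.array([[-2.0, 1.0],
                      [0.0, -3.0]])
        Q = np.eye(2)

        # solve_continuous_lyapunov solves A X + X A^H = Q, so pass -Q
        # to obtain the solution of A X + X A^T + Q = 0
        X = solve_continuous_lyapunov(A, -Q)
        print(X)

        # For a stable A and positive definite Q, X is symmetric positive definite
        assert np.all(np.linalg.eigvalsh(X) > 0)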

  10. JPRS Report, Science & Technology, USSR: Computers

    Science.gov (United States)

    1990-01-31

    [OCR fragment of a symbols list: physicochemical treatment, heat treatment and machining; deforming; cutting; technological equipment.] Recoverable citations include: Tekhnologicheskaya podgotovka seriynogo proizvodstva [Technological Preparation of Series Production], Moscow, "Mashinostroyeniye", 1981, 287 pp.; Tsvetkov, V.D., "Sistema avtomatizirovannogo proektirovaniya" [Computer-Aided Design System]...

  11. DEFACTO: A Design Environment for Adaptive Computing Technology

    National Research Council Canada - National Science Library

    Hall, Mary

    2003-01-01

    This report describes the activities of the DEFACTO project, a Design Environment for Adaptive Computing Technology funded under the DARPA Adaptive Computing Systems and Just-In-Time-Hardware programs...

  12. A Financial Technology Entrepreneurship Program for Computer Science Students

    Science.gov (United States)

    Lawler, James P.; Joseph, Anthony

    2011-01-01

    Education in entrepreneurship is becoming a critical area of curricula for computer science students. Few schools of computer science have a concentration in entrepreneurship in the computing curricula. The paper presents Technology Entrepreneurship in the curricula at a leading school of computer science and information systems, in which students…

  13. Cloud Computing. Technology Briefing. Number 1

    Science.gov (United States)

    Alberta Education, 2013

    2013-01-01

    Cloud computing is Internet-based computing in which shared resources, software and information are delivered as a service that computers or mobile devices can access on demand. Cloud computing is already used extensively in education. Free or low-cost cloud-based services are used daily by learners and educators to support learning, social…

  14. HPCToolkit: performance tools for scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Tallent, N; Mellor-Crummey, J; Adhianto, L; Fagan, M; Krentel, M [Department of Computer Science, Rice University, Houston, TX 77005 (United States)

    2008-07-15

    As part of the U.S. Department of Energy's Scientific Discovery through Advanced Computing (SciDAC) program, science teams are tackling problems that require simulation and modeling on petascale computers. As part of activities associated with the SciDAC Center for Scalable Application Development Software (CScADS) and the Performance Engineering Research Institute (PERI), Rice University is building software tools for performance analysis of scientific applications on the leadership-class platforms. In this poster abstract, we briefly describe the HPCToolkit performance tools and how they can be used to pinpoint bottlenecks in SPMD and multi-threaded parallel codes. We demonstrate HPCToolkit's utility by applying it to two SciDAC applications: the S3D code for simulation of turbulent combustion and the MFDn code for ab initio calculations of microscopic structure of nuclei.

  15. HPCToolkit: performance tools for scientific computing

    International Nuclear Information System (INIS)

    Tallent, N; Mellor-Crummey, J; Adhianto, L; Fagan, M; Krentel, M

    2008-01-01

    As part of the U.S. Department of Energy's Scientific Discovery through Advanced Computing (SciDAC) program, science teams are tackling problems that require simulation and modeling on petascale computers. As part of activities associated with the SciDAC Center for Scalable Application Development Software (CScADS) and the Performance Engineering Research Institute (PERI), Rice University is building software tools for performance analysis of scientific applications on the leadership-class platforms. In this poster abstract, we briefly describe the HPCToolkit performance tools and how they can be used to pinpoint bottlenecks in SPMD and multi-threaded parallel codes. We demonstrate HPCToolkit's utility by applying it to two SciDAC applications: the S3D code for simulation of turbulent combustion and the MFDn code for ab initio calculations of microscopic structure of nuclei

  16. Effect of information and communication technology on nursing performance.

    Science.gov (United States)

    Fujino, Yuriko; Kawamoto, Rieko

    2013-05-01

    The aim of this study was to investigate the influence of information and communication technology use and skills on nursing performance. Questionnaires were prepared relating to using the technology, practical skills in utilizing information, the Six-Dimension Scale of Nursing Performance, and demographics. In all, 556 nurses took part (response rate, 72.6%). A two-way analysis of variance was used to determine the influence of years of nursing experience on the relationship between nursing performance and information and communication technology use. The results showed that the group possessing high technological skills had greater nursing ability than the group with low skills; the level of nursing performance improved with years of experience in the former group, but not in the latter group. Regarding information and communication technology use, the results showed that nursing performance improved among participants who used computers for sending and receiving e-mails, but it decreased for those who used cell phones for e-mail. The results suggest that nursing performance may be negatively affected if information and communication technology are inappropriately used. Informatics education should therefore be provided for all nurses, and it should include information use relating to cell phones and computers.

  17. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D.; Schiess, Adrian B.; Howell, Jamie; Baca, Michael J.; Partridge, L. Donald; Finnegan, Patrick Sean; Wolfley, Steven L.; Dagel, Daryl James; Spahn, Olga Blum; Harper, Jason C.; Pohl, Kenneth Roy; Mickel, Patrick R.; Lohn, Andrew; Marinella, Matthew

    2013-10-01

    The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing > 10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.
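
    The quoted 10^12 figure follows directly from the abstract's own numbers, as this quick arithmetic check shows (unit conversions only; no new data):

        # Brain: 1e16 ops/s, 20 W, 1200 cm^3
        brain = 1e16 / (20 * 1200)          # ops/s per W per cm^3

        # Supercomputer: 1e15 ops/s, 3 MW, 1500 m^3 = 1.5e9 cm^3
        hpc = 1e15 / (3e6 * 1.5e9)          # ops/s per W per cm^3

        print(f"brain advantage ~ {brain / hpc:.1e}")   # ~1.9e12, i.e. ~10^12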

  18. 3rd International Conference on Computer & Communication Technologies

    CERN Document Server

    Bhateja, Vikrant; Raju, K; Janakiramaiah, B

    2017-01-01

    The book is a compilation of high-quality scientific papers presented at the 3rd International Conference on Computer & Communication Technologies (IC3T 2016). The individual papers address cutting-edge technologies and applications of soft computing, artificial intelligence and communication. In addition, a variety of further topics are discussed, which include data mining, machine intelligence, fuzzy computing, sensor networks, signal and image processing, human-computer interaction, web intelligence, etc. As such, it offers readers a valuable and unique resource.

  19. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in.Describing a complicated system abstractly with mathematical equations requires a careful choice of assumpti
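
    A classic instance of the kind of model such a book teaches is the M/M/1 queue, in which mean response time follows from the arrival rate and service rate alone; the sketch below is a generic illustration, not an example reproduced from the book:

        # M/M/1 queue: utilization rho = lam/mu must be < 1 for stability;
        # the mean response time is R = 1 / (mu - lam).
        def mm1_response_time(lam: float, mu: float) -> float:
            if lam >= mu:
                raise ValueError("unstable: arrival rate must be below service rate")
            return 1.0 / (mu - lam)

        # Example: 80 requests/s against a server that completes 100 requests/s
        print(f"R = {mm1_response_time(80.0, 100.0) * 1000:.0f} ms")   # 50 ms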

  20. Computer performance optimization systems, applications, processes

    CERN Document Server

    Osterhage, Wolfgang W

    2013-01-01

    Computing power performance was important at times when hardware was still expensive, because hardware had to be put to the best use. Later on this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, application level and business processes level. On each, optimizations can be achieved and cost-cutting p

  1. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  2. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  3. Use of Soft Computing Technologies For Rocket Engine Control

    Science.gov (United States)

    Trevino, Luis C.; Olcmen, Semih; Polites, Michael

    2003-01-01

    The problem to be addressed in this paper is to explore how the use of Soft Computing Technologies (SCT) could be employed to further improve overall engine system reliability and performance. Specifically, this will be presented by enhancing rocket engine control and engine health management (EHM) using SCT coupled with conventional control technologies, and sound software engineering practices used in Marshall's Flight Software Group. The principal goals are to improve software management, software development time and maintenance, processor execution, fault tolerance and mitigation, and nonlinear control in power level transitions. The intent is not to discuss any shortcomings of existing engine control and EHM methodologies, but to provide alternative design choices for control, EHM, implementation, performance, and sustaining engineering. The approaches outlined in this paper will require knowledge in the fields of rocket engine propulsion, software engineering for embedded systems, and soft computing technologies (i.e., neural networks, fuzzy logic, and Bayesian belief networks), much of which is presented in this paper. The first targeted demonstration rocket engine platform is the MC-1 (formerly FASTRAC Engine), which is simulated with hardware and software in the Marshall Avionics & Software Testbed laboratory that
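
    To make one of the soft computing ingredients concrete, the sketch below shows a minimal fuzzy-logic rule evaluation with triangular membership functions and weighted-average defuzzification; the variable names, rule set and numbers are invented for illustration and are not the MC-1 engine controller:

        # Minimal fuzzy controller fragment: map a (hypothetical) chamber-pressure
        # error to a valve adjustment via triangular memberships.
        def tri(x, a, b, c):
            """Triangular membership function rising from a, peaking at b, falling to c."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def fuzzy_valve_adjust(error):
            # Rule strengths for negative / zero / positive error
            neg = tri(error, -2.0, -1.0, 0.0)
            zero = tri(error, -1.0, 0.0, 1.0)
            pos = tri(error, 0.0, 1.0, 2.0)
            strengths = [neg, zero, pos]
            outputs = [+5.0, 0.0, -5.0]     # open / hold / close, % of valve travel
            total = sum(strengths)
            # Weighted-average defuzzification
            return sum(s * o for s, o in zip(strengths, outputs)) / total if total else 0.0

        print(fuzzy_valve_adjust(0.5))      # -2.5: a small closing adjustment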

  4. A methodology for performing computer security reviews

    International Nuclear Information System (INIS)

    Hunteman, W.J.

    1991-01-01

    DOE Order 5637.1, "Classified Computer Security," requires regular reviews of the computer security activities for an ADP system and for a site. Based on experiences gained in the Los Alamos computer security program through interactions with DOE facilities, we have developed a methodology to aid a site or security officer in performing a comprehensive computer security review. The methodology is designed to aid a reviewer in defining goals of the review (e.g., preparation for inspection), determining security requirements based on DOE policies, determining threats/vulnerabilities based on DOE and local threat guidance, and identifying critical system components to be reviewed. Application of the methodology will result in review procedures and checklists oriented to the review goals, the target system, and DOE policy requirements. The review methodology can be used to prepare for an audit or inspection and as a periodic self-check tool to determine the status of the computer security program for a site or specific ADP system. 1 tab

  5. A methodology for performing computer security reviews

    International Nuclear Information System (INIS)

    Hunteman, W.J.

    1991-01-01

    This paper reports on DOE Order 5637.1, Classified Computer Security, which requires regular reviews of the computer security activities for an ADP system and for a site. Based on experiences gained in the Los Alamos computer security program through interactions with DOE facilities, the authors have developed a methodology to aid a site or security officer in performing a comprehensive computer security review. The methodology is designed to aid a reviewer in defining goals of the review (e.g., preparation for inspection), determining security requirements based on DOE policies, determining threats/vulnerabilities based on DOE and local threat guidance, and identifying critical system components to be reviewed. Application of the methodology will result in review procedures and checklists oriented to the review goals, the target system, and DOE policy requirements. The review methodology can be used to prepare for an audit or inspection and as a periodic self-check tool to determine the status of the computer security program for a site or specific ADP system

  6. An esthetics rehabilitation with computer-aided design/ computer-aided manufacturing technology.

    Science.gov (United States)

    Mazaro, Josá Vitor Quinelli; de Mello, Caroline Cantieri; Zavanelli, Adriana Cristina; Santiago, Joel Ferreira; Amoroso, Andressa Paschoal; Pellizzer, Eduardo Piza

    2014-07-01

    This paper describes a case of rehabilitation involving a Computer-Aided Design/Computer-Aided Manufacturing (CAD-CAM) system in implant-supported and dental-supported prostheses using zirconia as the framework. CAD-CAM technology has developed considerably over the last few years, becoming a reality in dental practice. Among the widely used systems are those based on zirconia, which demonstrate important physical and mechanical properties including high strength, adequate fracture toughness, biocompatibility and esthetics, and are indicated for unitary prosthetic restorations and posterior and anterior frameworks. All the modeling was performed using the CAD-CAM system, and the prostheses were cemented using the resin cement best suited for each situation. The rehabilitation of the maxillary arch using a zirconia framework demonstrated satisfactory esthetic and functional results after a 12-month control and revealed no biological or technical complications. This article shows the importance of using CAD/CAM technology in the manufacture of dental prostheses and implant-supported restorations.

  7. High performance computing and communications: FY 1996 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-05-16

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

  8. Bioinformatics and Computational Core Technology Center

    Data.gov (United States)

    Federal Laboratory Consortium — SERVICES PROVIDED BY THE COMPUTER CORE FACILITYEvaluation, purchase, set up, and maintenance of the computer hardware and network for the 170 users in the research...

  9. Monitoring SLAC High Performance UNIX Computing Systems

    International Nuclear Information System (INIS)

    Lettsome, Annette K.

    2005-01-01

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process taken in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
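
    A hedged sketch of the script-driven approach (the record does not give the actual schema or scripts, so the table layout here is hypothetical, and sqlite3 from the Python standard library stands in for MySQL so the snippet runs anywhere):

        import sqlite3
        import time

        # Hypothetical schema for Ganglia-style metric samples
        conn = sqlite3.connect("metrics.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS samples (
                            host TEXT, metric TEXT, value REAL, ts INTEGER)""")

        def record_sample(host, metric, value):
            """Insert one monitoring sample with a Unix timestamp."""
            conn.execute("INSERT INTO samples VALUES (?, ?, ?, ?)",
                         (host, metric, value, int(time.time())))
            conn.commit()

        record_sample("node01", "load_one", 0.42)
        for row in conn.execute("SELECT * FROM samples"):
            print(row)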

  10. High Performance Computing Operations Review Report

    Energy Technology Data Exchange (ETDEWEB)

    Cupps, Kimberly C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-19

    The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.

  11. African Journals Online: Technology, Computer Science ...

    African Journals Online (AJOL)

    Items 1 - 29 of 29 ... ... aspects of science, technology, agriculture, health and other related fields. ... International Journal of Engineering, Science and Technology ... Mechanical Engineering, Petroleum Engineering, Physics and other related ...

  12. Technology Leadership in Malaysia's High Performance School

    Science.gov (United States)

    Yieng, Wong Ai; Daud, Khadijah Binti

    2017-01-01

    The headmaster, as leader of the school, also plays a role as a technology leader. This applies to high performance school (HPS) headmasters as well. The HPS excel in all aspects of education. In this study, the researcher examines the role of the headmaster as a technology leader through interviews with three headmasters of high…

  13. Design Anthropology, Emerging Technologies and Alternative Computational Futures

    DEFF Research Database (Denmark)

    Smith, Rachel Charlotte

    Emerging technologies are providing a new field for design anthropological inquiry that unites experiences, imaginaries and materialities in complex ways and demands new approaches to developing sustainable computational futures.

  14. Women and Computer Based Technologies: A Feminist Perspective.

    Science.gov (United States)

    Morritt, Hope

    The use of computer based technologies by professional women in education is examined through a feminist standpoint theory in this paper. The theory is grounded in eight claims which form the basis of the conceptual framework for the study. The experiences of nine women participants with computer based technologies were categorized using three…

  15. Performance Evaluation Methods for Assistive Robotic Technology

    Science.gov (United States)

    Tsui, Katherine M.; Feil-Seifer, David J.; Matarić, Maja J.; Yanco, Holly A.

    Robots have been developed for several assistive technology domains, including intervention for Autism Spectrum Disorders, eldercare, and post-stroke rehabilitation. Assistive robots have also been used to promote independent living through the use of devices such as intelligent wheelchairs, assistive robotic arms, and external limb prostheses. Work in the broad field of assistive robotic technology can be divided into two major research phases: technology development, in which new devices, software, and interfaces are created; and clinical, in which assistive technology is applied to a given end-user population. Moving from technology development towards clinical applications is a significant challenge. Developing performance metrics for assistive robots poses a related set of challenges. In this paper, we survey several areas of assistive robotic technology in order to derive and demonstrate domain-specific means for evaluating the performance of such systems. We also present two case studies of applied performance measures and a discussion regarding the ubiquity of functional performance measures across the sampled domains. Finally, we present guidelines for incorporating human performance metrics into end-user evaluations of assistive robotic technologies.

  16. Advances in Computing and Information Technology : Proceedings of the Second International Conference on Advances in Computing and Information Technology

    CERN Document Server

    Nagamalai, Dhinaharan; Chaki, Nabendu

    2013-01-01

    The international conference on Advances in Computing and Information Technology (ACITY 2012) provides an excellent international forum for both academics and professionals for sharing knowledge and results in theory, methodology and applications of Computer Science and Information Technology. The Second International Conference on Advances in Computing and Information Technology (ACITY 2012), held in Chennai, India, during July 13-15, 2012, covered a number of topics in all major fields of Computer Science and Information Technology including: networking and communications, network security and applications, web and internet computing, ubiquitous computing, algorithms, bioinformatics, digital image processing and pattern recognition, artificial intelligence, soft computing and applications. Following a strict review process, a number of high-quality papers, presenting not only innovative ideas but also a well-founded evaluation and strong argumentation, were selected and collected in the present proceedings, ...

  17. CICT Computing, Information, and Communications Technology Program

    Science.gov (United States)

    Laufenberg, Lawrence; Tu, Eugene (Technical Monitor)

    2002-01-01

    The CICT Program is part of the NASA Aerospace Technology Enterprise's fundamental technology thrust to develop tools, processes, and technologies that enable new aerospace system capabilities and missions. The CICT Program's four key objectives are to: provide seamless access to NASA resources (including ground-, air-, and space-based distributed information technology resources) so that NASA scientists and engineers can more easily control missions, make new scientific discoveries, and design next-generation space vehicles; provide high-rate data delivery from these assets directly to users and missions; develop goal-oriented, human-centered systems; and research, develop and evaluate revolutionary technology.

  18. Visual Analysis of Cloud Computing Performance Using Behavioral Lines.

    Science.gov (United States)

    Muelder, Chris; Zhu, Biao; Chen, Wei; Zhang, Hongxin; Ma, Kwan-Liu

    2016-02-29

    Cloud computing is an essential technology to Big Data analytics and services. A cloud computing system is often comprised of a large number of parallel computing and storage devices. Monitoring the usage and performance of such a system is important for efficient operations, maintenance, and security. Tracing every application on a large cloud system is untenable due to scale and privacy issues. But profile data can be collected relatively efficiently by regularly sampling the state of the system, including properties such as CPU load, memory usage, network usage, and others, creating a set of multivariate time series for each system. Adequate tools for studying such large-scale, multidimensional data are lacking. In this paper, we present a visual based analysis approach to understanding and analyzing the performance and behavior of cloud computing systems. Our design is based on similarity measures and a layout method to portray the behavior of each compute node over time. When visualizing a large number of behavioral lines together, distinct patterns often appear suggesting particular types of performance bottleneck. The resulting system provides multiple linked views, which allow the user to interactively explore the data by examining the data or a selected subset at different levels of detail. Our case studies, which use datasets collected from two different cloud systems, show that this visual based approach is effective in identifying trends and anomalies of the systems.
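
    The core idea (summarizing each node as a multivariate time series and comparing nodes with a similarity measure) can be sketched as follows; this is a toy illustration with synthetic data, not the paper's actual measures or layout algorithm:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.linspace(0, 4 * np.pi, 100)

    # Synthetic profile data: 10 nodes x 3 metrics x 100 samples, sharing a
    # common load cycle plus small node-specific noise.
    common = np.sin(t)
    nodes = common + rng.normal(0.0, 0.1, size=(10, 3, 100))
    nodes[7] = rng.normal(0.0, 0.5, size=(3, 100))  # one node misbehaves

    def znorm(x):
        """Z-normalize each metric series so similarity reflects shape, not scale."""
        return (x - x.mean(axis=-1, keepdims=True)) / (x.std(axis=-1, keepdims=True) + 1e-9)

    flat = znorm(nodes).reshape(len(nodes), -1)

    # Pairwise Euclidean distance between the nodes' "behavioral lines".
    dist = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=-1)

    # A node unusually far from all the others is a candidate performance anomaly.
    print("candidate anomaly: node", int(dist.mean(axis=1).argmax()))
    ```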

  19. NINJA: Java for High Performance Numerical Computing

    Directory of Open Access Journals (Sweden)

    José E. Moreira

    2002-01-01

    Full Text Available When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java now can be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.
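
    The versioning technique mentioned above, in which a compiler emits an optimized and a safe variant of the same code and selects between them with a cheap runtime test, can be illustrated conceptually. The sketch below is written in Python for readability and is not the NINJA implementation, which applies such checks inside the Java compiler and runtime:

    ```python
    import numpy as np

    def scale_rows(matrix, factors):
        """Multiply row i of `matrix` by factors[i].

        Optimized version: a vectorized path taken when a runtime check
        proves the data is a dense rectangular array (the analogue of a
        true multidimensional array). Safe version: a general
        element-by-element fallback used otherwise.
        """
        if isinstance(matrix, np.ndarray) and matrix.ndim == 2:
            return matrix * np.asarray(factors)[:, None]   # fast path
        # Fallback for ragged lists of lists, where no such guarantee holds.
        return [[x * f for x in row] for row, f in zip(matrix, factors)]

    print(scale_rows(np.ones((2, 3)), [2.0, 3.0]))
    print(scale_rows([[1, 2], [3, 4, 5]], [2.0, 3.0]))
    ```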

  20. The Adoption of Cloud Computing Technology for Library Services ...

    African Journals Online (AJOL)

    The study investigated the rationales for the adoption of cloud computing technology for library services in NOUN Library. Issues related to the existing computer network available in NOUN library such as LAN, WAN, rationales for the adoption of cloud computing in NOUN library such as the need to disclose their collections ...

  1. Designing and Implementing Performance Technology for Teachers

    Directory of Open Access Journals (Sweden)

    Joi L. Moore

    2004-06-01

    Full Text Available This paper synthesizes research findings from a performance analysis of teacher tasks and the implementation of performance technology. These findings are aligned with design and implementation theories to provide understanding of the complex factors and events that occur during the implementation process. The article describes the necessary elements and conditions for designing and implementing performance tools in school environments that will encourage usage, efficient performance, and positive attitudes. Two models provide a visual representation of causal relationships between the implementation factors and the technology user. Although the implementation process can become complex because of the simultaneous events and phases, it can be properly managed through good communication and strategic involvement of teachers during the design and development process. The models may be able to assist technology designers and advocates with presenting innovations to teachers who are frequently asked to try technical solutions for performance support or improvement.

  2. Development of superconductor electronics technology for high-end computing

    Energy Technology Data Exchange (ETDEWEB)

    Silver, A [Jet Propulsion Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109-8099 (United States); Kleinsasser, A [Jet Propulsion Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109-8099 (United States); Kerber, G [Northrop Grumman Space Technology, One Space Park, Redondo Beach, CA 90278 (United States); Herr, Q [Northrop Grumman Space Technology, One Space Park, Redondo Beach, CA 90278 (United States); Dorojevets, M [Department of Electrical and Computer Engineering, SUNY-Stony Brook, NY 11794-2350 (United States); Bunyk, P [Northrop Grumman Space Technology, One Space Park, Redondo Beach, CA 90278 (United States); Abelson, L [Northrop Grumman Space Technology, One Space Park, Redondo Beach, CA 90278 (United States)

    2003-12-01

    This paper describes our programme to develop and demonstrate ultra-high performance single flux quantum (SFQ) VLSI technology that will enable superconducting digital processors for petaFLOPS-scale computing. In the hybrid technology, multi-threaded architecture, the computational engine to power a petaFLOPS machine at affordable power will consist of 4096 SFQ multi-chip processors, with 50 to 100 GHz clock frequency and associated cryogenic RAM. We present the superconducting technology requirements, progress to date and our plan to meet these requirements. We improved SFQ Nb VLSI by two generations, to an 8 kA cm⁻², 1.25 μm junction process, incorporated new CAD tools into our methodology, demonstrated methods for recycling the bias current and data communication at speeds up to 60 Gb s⁻¹, both on and between chips through passive transmission lines. FLUX-1 is the most ambitious project implemented in SFQ technology to date, a prototype general-purpose 8 bit microprocessor chip. We are testing the FLUX-1 chip (5K gates, 20 GHz clock) and designing a 32 bit floating-point SFQ multiplier with vector-register memory. We report correct operation of the complete stripline-connected gate library with large bias margins, as well as several larger functional units used in FLUX-1. The next stage will be an SFQ multi-processor machine. Important challenges include further reducing chip supply current and on-chip power dissipation, developing at least 64 kbit, sub-nanosecond cryogenic RAM chips, developing thermally and electrically efficient high data rate cryogenic-to-ambient input/output technology and improving Nb VLSI to increase gate density.

  3. Development of superconductor electronics technology for high-end computing

    International Nuclear Information System (INIS)

    Silver, A; Kleinsasser, A; Kerber, G; Herr, Q; Dorojevets, M; Bunyk, P; Abelson, L

    2003-01-01

    This paper describes our programme to develop and demonstrate ultra-high performance single flux quantum (SFQ) VLSI technology that will enable superconducting digital processors for petaFLOPS-scale computing. In the hybrid technology, multi-threaded architecture, the computational engine to power a petaFLOPS machine at affordable power will consist of 4096 SFQ multi-chip processors, with 50 to 100 GHz clock frequency and associated cryogenic RAM. We present the superconducting technology requirements, progress to date and our plan to meet these requirements. We improved SFQ Nb VLSI by two generations, to an 8 kA cm⁻², 1.25 μm junction process, incorporated new CAD tools into our methodology, demonstrated methods for recycling the bias current and data communication at speeds up to 60 Gb s⁻¹, both on and between chips through passive transmission lines. FLUX-1 is the most ambitious project implemented in SFQ technology to date, a prototype general-purpose 8 bit microprocessor chip. We are testing the FLUX-1 chip (5K gates, 20 GHz clock) and designing a 32 bit floating-point SFQ multiplier with vector-register memory. We report correct operation of the complete stripline-connected gate library with large bias margins, as well as several larger functional units used in FLUX-1. The next stage will be an SFQ multi-processor machine. Important challenges include further reducing chip supply current and on-chip power dissipation, developing at least 64 kbit, sub-nanosecond cryogenic RAM chips, developing thermally and electrically efficient high data rate cryogenic-to-ambient input/output technology and improving Nb VLSI to increase gate density.

  4. Study of application technology of ultra-high speed computer to the elucidation of complex phenomena

    International Nuclear Information System (INIS)

    Sekiguchi, Tomotsugu

    1996-01-01

    The basic design of a numerical information library for a decentralized computer network is explained as the first step in constructing application technology that applies ultra-high speed computers to the elucidation of complex phenomena. Establishing the system makes it possible to construct an efficient application environment for ultra-high speed computer systems that scales across different computing systems. We named the system Ninf (Network Information Library for High Performance Computing). The library's application technology is summarized as follows: application technology under a distributed environment, numeric constants, retrieval of values, a library of special functions, a computing library, the Ninf library interface, the Ninf remote library, and registration. With this system, users can run programs that concentrate numerical analysis technology, with high precision, reliability and speed. (S.Y.)
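
    As a rough illustration of the remote-library idea only (this is not Ninf's actual protocol, interface, or registration mechanism), Python's standard xmlrpc modules can expose a numerical routine over a network and call it from a client:

    ```python
    # server.py -- registers a numerical routine, as a remote library host might.
    from xmlrpc.server import SimpleXMLRPCServer

    def dot(a, b):
        """A stand-in 'library' routine executed on the server."""
        return sum(x * y for x, y in zip(a, b))

    server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
    server.register_function(dot, "dot")
    server.serve_forever()
    ```

    ```python
    # client.py -- calls the remote routine as if it were local.
    import xmlrpc.client

    lib = xmlrpc.client.ServerProxy("http://localhost:8000")
    print(lib.dot([1.0, 2.0, 3.0], [4.0, 5.0, 6.0]))  # -> 32.0
    ```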

  5. USSR Report, Cybernetics Computers and Automation Technology

    Science.gov (United States)

    1985-09-05

    organization, the SKALD program utilizes a dictionary or data base to generate SKALD poetry at the computer center of Minsk State Pedagogical ...wonderful capabilities at the Krasnoyarsk branch of the USSR AN [Academy of Sciences] Siberian section's Computer Center. They began training the kids

  6. Computer fan performance enhancement via acoustic perturbations

    Energy Technology Data Exchange (ETDEWEB)

    Greenblatt, David, E-mail: davidg@technion.ac.il [Faculty of Mechanical Engineering, Technion - Israel Institute of Technology, Haifa (Israel); Avraham, Tzahi; Golan, Maayan [Faculty of Mechanical Engineering, Technion - Israel Institute of Technology, Haifa (Israel)

    2012-04-15

    Highlights: ► Computer fan effectiveness was increased by introducing acoustic perturbations. ► Acoustic perturbations controlled blade boundary layer separation. ► Optimum frequencies corresponded with airfoil studies. ► Exploitation of flow instabilities was responsible for performance improvements. ► Peak pressure and peak flowrate were increased by 40% and 15% respectively. - Abstract: A novel technique for increasing computer fan effectiveness, based on introducing acoustic perturbations onto the fan blades to control boundary layer separation, was assessed. Experiments were conducted in a specially designed facility that simultaneously allowed characterization of fan performance and introduction of the perturbations. A parametric study was conducted to determine the optimum control parameters, namely those that deliver the largest increase in fan pressure for a given flowrate. The optimum reduced frequencies corresponded with those identified on stationary airfoils and it was thus concluded that the exploitation of Kelvin-Helmholtz instabilities, commonly observed on airfoils, was responsible for the fan blade performance improvements. The optimum control inputs, such as acoustic frequency and sound pressure level, showed some variation with different fan flowrates. With the near-optimum control conditions identified, the full operational envelope of the fan, when subjected to acoustic perturbations, was assessed. The peak pressure and peak flowrate were increased by up to 40% and 15% respectively. The peak fan efficiency increased with acoustic perturbations but the overall system efficiency was reduced when the speaker input power was accounted for.

  7. Computer fan performance enhancement via acoustic perturbations

    International Nuclear Information System (INIS)

    Greenblatt, David; Avraham, Tzahi; Golan, Maayan

    2012-01-01

    Highlights: ► Computer fan effectiveness was increased by introducing acoustic perturbations. ► Acoustic perturbations controlled blade boundary layer separation. ► Optimum frequencies corresponded with airfoil studies. ► Exploitation of flow instabilities was responsible for performance improvements. ► Peak pressure and peak flowrate were increased by 40% and 15% respectively. - Abstract: A novel technique for increasing computer fan effectiveness, based on introducing acoustic perturbations onto the fan blades to control boundary layer separation, was assessed. Experiments were conducted in a specially designed facility that simultaneously allowed characterization of fan performance and introduction of the perturbations. A parametric study was conducted to determine the optimum control parameters, namely those that deliver the largest increase in fan pressure for a given flowrate. The optimum reduced frequencies corresponded with those identified on stationary airfoils and it was thus concluded that the exploitation of Kelvin–Helmholtz instabilities, commonly observed on airfoils, was responsible for the fan blade performance improvements. The optimum control inputs, such as acoustic frequency and sound pressure level, showed some variation with different fan flowrates. With the near-optimum control conditions identified, the full operational envelope of the fan, when subjected to acoustic perturbations, was assessed. The peak pressure and peak flowrate were increased by up to 40% and 15% respectively. The peak fan efficiency increased with acoustic perturbations but the overall system efficiency was reduced when the speaker input power was accounted for.
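
    For reference, the reduced frequency both abstracts refer to is conventionally defined in the flow-control literature as

    $$ F^{+} = \frac{f\,L}{U_{\infty}} $$

    where f is the excitation frequency, L is a characteristic length of the separated region (for example, the chord or the separation length), and U∞ is the free-stream velocity. These symbols are the standard ones, not taken from the paper; optima at F⁺ of order one are typical when Kelvin–Helmholtz instabilities are exploited.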

  8. Performance Measurements in a High Throughput Computing Environment

    CERN Document Server

    AUTHOR|(CDS)2145966; Gribaudo, Marco

    The IT infrastructures of companies and research centres are implementing new technologies to satisfy the increasing need of computing resources for big data analysis. In this context, resource profiling plays a crucial role in identifying areas where the improvement of the utilisation efficiency is needed. In order to deal with the profiling and optimisation of computing resources, two complementary approaches can be adopted: the measurement-based approach and the model-based approach. The measurement-based approach gathers and analyses performance metrics executing benchmark applications on computing resources. Instead, the model-based approach implies the design and implementation of a model as an abstraction of the real system, selecting only those aspects relevant to the study. This Thesis originates from a project carried out by the author within the CERN IT department. CERN is an international scientific laboratory that conducts fundamental researches in the domain of elementary particle physics. The p...

  9. High performance computations using dynamical nucleation theory

    International Nuclear Information System (INIS)

    Windus, T L; Crosby, L D; Kathmann, S M

    2008-01-01

    Chemists continue to explore the use of very large computations to perform simulations that describe the molecular level physics of critical challenges in science. In this paper, we describe the Dynamical Nucleation Theory Monte Carlo (DNTMC) model - a model for determining molecular scale nucleation rate constants - and its parallel capabilities. The potential for bottlenecks and the challenges to running on future petascale or larger resources are delineated. A 'master-slave' solution is proposed to scale to the petascale and will be developed in the NWChem software. In addition, mathematical and data analysis challenges are described.
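
    A minimal sketch of the master-slave pattern the abstract proposes, using Python's multiprocessing pool in place of the MPI-based NWChem implementation; the sampled integrand is a placeholder, not DNT physics:

    ```python
    import random
    from multiprocessing import Pool

    def mc_chunk(args):
        """Slave task: run a chunk of Monte Carlo samples, return a partial sum."""
        seed, n = args
        rng = random.Random(seed)
        # Placeholder integrand standing in for a cluster-configuration energy.
        return sum(rng.random() ** 2 for _ in range(n)), n

    if __name__ == "__main__":
        chunks = [(seed, 100_000) for seed in range(8)]  # master farms out work
        with Pool(processes=4) as pool:
            partials = pool.map(mc_chunk, chunks)        # slaves run independently
        total, count = map(sum, zip(*partials))
        print("MC estimate:", total / count)             # ~1/3 for this integrand
    ```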

  10. Edge computing technologies for Internet of Things: a primer

    Directory of Open Access Journals (Sweden)

    Yuan Ai

    2018-04-01

    Full Text Available With the rapid development of mobile internet and Internet of Things applications, conventional centralized cloud computing is encountering severe challenges, such as high latency, low Spectral Efficiency (SE), and non-adaptive machine-type communication. Motivated to solve these challenges, a new technology is driving a trend that shifts the function of centralized cloud computing to edge devices of networks. Several edge computing technologies originating from different backgrounds have been emerging to decrease latency, improve SE, and support massive machine-type communication. This paper comprehensively presents a tutorial on three typical edge computing technologies, namely mobile edge computing, cloudlets, and fog computing. In particular, the standardization efforts, principles, architectures, and applications of these three technologies are summarized and compared. From the viewpoint of the radio access network, the differences between mobile edge computing and fog computing are highlighted, and the characteristics of fog computing-based radio access networks are discussed. Finally, open issues and future research directions are identified as well. Keywords: Internet of Things (IoT), Mobile edge computing, Cloudlets, Fog computing
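
    A back-of-the-envelope latency model makes the motivation concrete: moving computation from a distant cloud to the network edge mostly saves propagation and backhaul delay. The numbers below are illustrative assumptions, not figures from the paper:

    ```python
    def round_trip_ms(distance_km, hops, per_hop_ms=0.5, processing_ms=2.0):
        """Crude model: propagation at ~2/3 c in fiber plus per-hop queuing."""
        propagation = 2 * distance_km / 200.0  # ~200 km per ms in fiber, both ways
        return propagation + 2 * hops * per_hop_ms + processing_ms

    print("cloud (1500 km, 12 hops):", round_trip_ms(1500, 12), "ms")
    print("edge  (   5 km,  2 hops):", round_trip_ms(5, 2), "ms")
    ```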

  11. Audit and Evaluation of Computer Security. Computer Science and Technology.

    Science.gov (United States)

    Ruthberg, Zella G.

    This is a collection of consensus reports, each produced at a session of an invitational workshop sponsored by the National Bureau of Standards. The purpose of the workshop was to explore the state-of-the-art and define appropriate subjects for future research in the audit and evaluation of computer security. Leading experts in the audit and…

  12. Reviews of computing technology: Software overview

    Energy Technology Data Exchange (ETDEWEB)

    Hartshorn, W.R.; Johnson, A.L.

    1994-01-05

    The Savannah River Site Computing Architecture states that the site computing environment will be standards-based, data-driven, and workstation-oriented. Larger server systems deliver needed information to users in a client-server relationship. Goals of the Architecture include utilizing computing resources effectively, maintaining a high level of data integrity, developing a robust infrastructure, and storing data in such a way as to promote accessibility and usability. This document describes the current storage environment at Savannah River Site (SRS) and presents some of the problems that will be faced and strategies that are planned over the next few years.

  13. Analysis of parallel computing performance of the code MCNP

    International Nuclear Information System (INIS)

    Wang Lei; Wang Kan; Yu Ganglin

    2006-01-01

    Parallel computing can reduce the running time of the code MCNP effectively. With the MPI message-passing software, MCNP5 can perform parallel computing on a PC cluster running the Windows operating system. The parallel computing performance of MCNP is influenced by factors such as the type, the complexity level, and the parameter configuration of the computing problem. This paper analyzes the parallel computing performance of MCNP with respect to these factors and gives measures to improve MCNP parallel computing performance. (authors)
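
    The kind of analysis described, how speedup and efficiency vary with the problem and the process count, is often summarized with Amdahl's law; a small hedged sketch follows (the serial fraction is a made-up parameter, not a measured MCNP value):

    ```python
    def amdahl_speedup(n_procs, serial_fraction):
        """Ideal speedup on n processes when a fixed fraction of work is serial."""
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

    for n in (2, 4, 8, 16, 32):
        s = amdahl_speedup(n, serial_fraction=0.05)
        print(f"{n:2d} procs: speedup {s:5.2f}, efficiency {s / n:.2f}")
    ```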

  14. Providing Learning Computing Labs using Hosting and Virtualization Technologies

    Directory of Open Access Journals (Sweden)

    Armide González

    2011-05-01

    Full Text Available This paper presents a computing hosting system that provides virtual computing laboratories for learning activities. The system is based on hosting and virtualization technologies, and all the components used in its development are free software tools. The computing lab model provided by the system is a more sustainable and scalable alternative to the traditional academic computing lab, and it requires lower installation and operation costs.

  15. Wearable computer technology for dismounted applications

    Science.gov (United States)

    Daniels, Reginald

    2010-04-01

    Small computing devices which rival the compact size of traditional personal digital assistants (PDA) have recently established a market niche. These computing devices are small enough to be considered unobtrusive for humans to wear. The computing devices are also powerful enough to run full multi-tasking general purpose operating systems. This paper will explore the wearable computer information system for dismounted applications recently fielded for ground-based US Air Force use. The environments that the information systems are used in will be reviewed, as well as a description of the net-centric, ground-based warrior. The paper will conclude with a discussion regarding the importance of intuitive, usable, and unobtrusive operator interfaces for dismounted operators.

  16. Emerging Trends in Technology Education Computer Applications.

    Science.gov (United States)

    Hazari, Sunil I.

    1993-01-01

    Graphical User Interface (GUI)--and its variant, pen computing--is rapidly replacing older types of operating environments. Despite its heavier demand for processing power, GUI has many advantages. (SK)

  17. Evaluation of high-performance computing software

    Energy Technology Data Exchange (ETDEWEB)

    Browne, S.; Dongarra, J. [Univ. of Tennessee, Knoxville, TN (United States); Rowan, T. [Oak Ridge National Lab., TN (United States)

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations difficult to find elsewhere.

  18. JPRS Report, Science & Technology, USSR: Computers.

    Science.gov (United States)

    1988-07-08

    Selected contents: Computer Graphics in Ergonomic Design (A. H. Kudryavtsev; TEKHNICHESKAYA ESTETIKA, No 9, 1987); Prospecting Systems Based on Electrical and Seismic... [remainder of the scanned table of contents is illegible]

  19. Computational methods in molecular imaging technologies

    CERN Document Server

    Gunjan, Vinit Kumar; Venkatesh, C; Amarnath, M

    2017-01-01

    This book highlights the experimental investigations that have been carried out on magnetic resonance imaging and computed tomography (MRI & CT) images using state-of-the-art Computational Image processing techniques, and tabulates the statistical values wherever necessary. In a very simple and straightforward way, it explains how image processing methods are used to improve the quality of medical images and facilitate analysis. It offers a valuable resource for researchers, engineers, medical doctors and bioinformatics experts alike.

  20. Photovoltaic technology, performance, manufacturing cost and markets

    International Nuclear Information System (INIS)

    Maycock, P.D.

    1999-01-01

    A comprehensive discussion of key aspects of photovoltaic energy conversion systems will provide the basis for forecasting PV module shipments from 1999 to 2010. Principal areas covered include: (1) Technology and Performance Status: The module efficiency and performance are described for commercial cell technologies including single crystal silicon, polycrystal silicon, ribbon silicon, film silicon on low cost substrate, amorphous silicon, copper indium diselenide, and cadmium telluride; (2) Manufacturing cost: 1999 costs for PV technologies in production (single crystal silicon, polycrystal silicon, and amorphous silicon) are developed. Manufacturing costs for 10--25 MW plants and 100 MW plants will be estimated; (3) The world PV market is summarized by region, top ten companies, and technology; and (4) Forecast of the World Market (seven market sectors) to 2010 will be presented. Key assumptions, price of modules, incentive programs, price of competing electricity generation will be detailed

  1. The 9th international conference on computing and information technology

    CERN Document Server

    Unger, Herwig; Boonkrong, Sirapat; IC2IT2013

    2013-01-01

    This volume contains the papers of the 9th International Conference on Computing and Information Technology (IC2IT 2013) held at King Mongkut's University of Technology North Bangkok (KMUTNB), Bangkok, Thailand, on May 9th-10th, 2013. Traditionally, the conference is organized in conjunction with the National Conference on Computing and Information Technology, one of the leading Thai national events in the area of Computer Science and Engineering. The conference as well as this volume is structured into 3 main tracks on Data Networks/Communication, Data Mining/Machine Learning, and Human Interfaces/Image processing.  

  2. Merging Technology and Emotions: Introduction to Affective Computing.

    Science.gov (United States)

    Brigham, Tara J

    2017-01-01

    Affective computing technologies are designed to sense and respond based on human emotions. This technology allows a computer system to process the information gathered from various sensors to assess the emotional state of an individual. The system then offers a distinct response based on what it "felt." While this is completely unlike how most people interact with electronics today, this technology is likely to trickle into future everyday life. This column will explain what affective computing is, some of its benefits, and concerns with its adoption. It will also provide an overview of its implications in the library setting and offer selected examples of how and where it is currently being used.

  3. The path toward HEP High Performance Computing

    CERN Document Server

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on th...

  4. USSR Report, Cybernetics, Computers and Automation Technology

    Science.gov (United States)

    1987-03-31

    version of the system was tested by adapting PAL-11 and MACRO-11 assembly code for the "Elektronika-60" and "Elektronika-60M" computers; ASM-86 for the...GS, "On the Results of Evaluation of Insurance Payments in Collective and State Farms and Private Households," the actuarial analysis tables based

  5. Computed tomography - old ideas and new technology

    Energy Technology Data Exchange (ETDEWEB)

    Fleischmann, Dominik; Boas, F.E. [Stanford University, School of Medicine, Department of Radiology, Stanford, CA (United States)

    2011-03-15

    Several recently introduced 'new' techniques in computed tomography - iterative reconstruction, gated cardiac CT, multiple-source, and dual-energy CT - actually date back to the early days of CT. We review the historic origins and evolution of these techniques, which may provide some insight into the latest innovations in commercial CT systems. (orig.)

  6. Cloud Computing Technologies Facilitate Earth Research

    Science.gov (United States)

    2015-01-01

    Under a Space Act Agreement, NASA partnered with Seattle-based Amazon Web Services to make the agency's climate and Earth science satellite data publicly available on the company's servers. Users can access the data for free, but they can also pay to use Amazon's computing services to analyze and visualize information using the same software available to NASA researchers.

  7. Girls and Computer Technology: Barrier or Key?

    Science.gov (United States)

    Gipson, Joella

    1997-01-01

    Discusses the disparity in numbers of girls and boys taking math, science, and computer classes in elementary and secondary schools, and examines steps being taken to better prepare girls, especially minority girls, for an increasingly technical society. A program in Michigan is described that involved a school and business partnership. (LRW)

  8. Soft Computing in Construction Information Technology

    NARCIS (Netherlands)

    Ciftcioglu, O.; Durmisevic, S.; Sariyildiz, S.

    2001-01-01

    Over the last decade, civil engineering has shown a rapidly growing interest in the application of neurally inspired computing techniques. The motive for this interest was the promise of certain information processing characteristics, which are similar, to some extent, to those of the human brain.

  9. USSR Report: Cybernetics, Computers and Automation Technology

    Science.gov (United States)

    1986-12-03

    Georgian SSR Academy of Sciences: "Ready for Dialogue"] [Text] Computers in schools, auditoria, and educational laboratories are a phenomenon to which we...professional-technical academies and VUZ auditoria. Obviously, the color of the screens and the characters on them is of major importance for people

  10. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both the private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project aim to address these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduce HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also perform testing using HA2lloc in a simulation environment and find that the approach is capable of preventing common memory vulnerabilities.

  11. Computing handbook information systems and information technology

    CERN Document Server

    Topi, Heikki

    2014-01-01

    Contents include: Disciplinary Foundations and Global Impact: Evolving Discipline of Information Systems (Heikki Topi); Discipline of Information Technology (Barry M. Lunt and Han Reichgelt); Information Systems as a Practical Discipline (Juhani Iivari); Information Technology (Han Reichgelt, Joseph J. Ekstrom, Art Gowan, and Barry M. Lunt); Sociotechnical Approaches to the Study of Information Systems (Steve Sawyer and Mohammad Hossein Jarrahi); IT and Global Development (Erkki Sutinen); Using ICT for Development, Societal Transformation, and Beyond (Sherif Kamel); Technical Foundations of Data and Database Management: Data Models (Avi Silber...

  12. The potential impact of computer-aided assessment technology in ...

    African Journals Online (AJOL)

    The potential impact of computer-aided assessment technology in higher education. ... Further more 'Increased number of students in Higher Education and the ... benefits, limitations, impacts on student learning and strategies for developing ...

  13. Cloud manufacturing distributed computing technologies for global and sustainable manufacturing

    CERN Document Server

    Mehnen, Jörn

    2013-01-01

    Global networks, which are the primary pillars of the modern manufacturing industry and supply chains, can only cope with the new challenges, requirements and demands when supported by new computing and Internet-based technologies. Cloud Manufacturing: Distributed Computing Technologies for Global and Sustainable Manufacturing introduces a new paradigm for scalable, service-oriented, sustainable and globally distributed manufacturing systems. The eleven chapters in this book provide an updated overview of the latest technological developments and applications in relevant research areas. Following an introduction to the essential features of Cloud Computing, chapters cover a range of methods and applications, such as the factors that actually affect adoption of Cloud Computing technology in manufacturing companies and a new geometrical simplification method to stream 3-Dimensional design and manufacturing data via the Internet. This is further supported by case studies and real-life data for Waste Electrical ...

  14. DICOM standard in computer-aided medical technologies

    International Nuclear Information System (INIS)

    Plotnikov, A.V.; Prilutskij, D.A.; Selishchev, S.V.

    1997-01-01

    The paper outlines one of the promising standards for transmitting images in medicine, in radiology in particular. The essence of the DICOM standard is described, and the promise of its introduction into computer-aided medical technologies is discussed.
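
    To make the standard concrete, here is a minimal read of a DICOM file using the third-party pydicom library. The file path is hypothetical; the attribute names follow the DICOM data dictionary:

    ```python
    import pydicom  # pip install pydicom

    ds = pydicom.dcmread("image.dcm")   # hypothetical local file
    print(ds.PatientName, ds.Modality)  # standard DICOM attributes
    print(ds.Rows, "x", ds.Columns)     # image matrix size
    pixels = ds.pixel_array             # decoded pixel data (requires NumPy)
    ```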

  15. A High Performance COTS Based Computer Architecture

    Science.gov (United States)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so large that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on COTS component behavior. In the frame of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; then we briefly describe existing fault mitigation architectures and present our solution for fault mitigation based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  16. Performance evaluation of a computed radiography system

    Energy Technology Data Exchange (ETDEWEB)

    Roussilhe, J.; Fallet, E. [Carestream Health France, 71 - Chalon/Saone (France); Mango, St.A. [Carestream Health, Inc. Rochester, New York (United States)

    2007-07-01

    Computed radiography (CR) standards have been formalized and published in Europe and in the US. The CR system classification is defined in those standards by the minimum normalized signal-to-noise ratio (SNRN) and the maximum basic spatial resolution (SRb). Both the signal-to-noise ratio (SNR) and the contrast sensitivity of a CR system depend on the dose (exposure time and conditions) at the detector. Because of its wide dynamic range, the same storage phosphor imaging plate can qualify for all six CR system classes. The exposure characteristics from 30 to 450 kV, the contrast sensitivity, and the spatial resolution of the KODAK INDUSTREX CR Digital System have been thoroughly evaluated. This paper will present some of the factors that determine the system's spatial resolution performance. (authors)
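
    For orientation, the classification quantities above are linked: the measured SNR is normalized by the system's basic spatial resolution so that systems sampled at different apertures can be compared. To the best of our reading of the CR standards (EN 14784-1 / ISO 16371-1), the normalization takes the form

    $$ \mathrm{SNR}_{N} = \mathrm{SNR}_{\text{measured}} \cdot \frac{88.6\ \mu\mathrm{m}}{\mathrm{SR}_{b}} $$

    with SRb expressed in micrometres; treat the constant as illustrative and consult the standard for the normative definition.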

  17. FY 1996 Blue Book: High Performance Computing and Communications: Foundations for America`s Information Future

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — The Federal High Performance Computing and Communications HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of...

  18. Restricted access processor - An application of computer security technology

    Science.gov (United States)

    Mcmahon, E. M.

    1985-01-01

    This paper describes a security guard device that is currently being developed by Computer Sciences Corporation (CSC). The methods used to provide assurance that the system meets its security requirements include the system architecture, a system security evaluation, and the application of formal and informal verification techniques. The combination of state-of-the-art technology and the incorporation of new verification procedures results in a demonstration of the feasibility of computer security technology for operational applications.

  19. Applications of Computer Technology in Complex Craniofacial Reconstruction

    Directory of Open Access Journals (Sweden)

    Kristopher M. Day, MD

    2018-03-01

    Conclusion: Modern 3D technology allows the surgeon to better analyze complex craniofacial deformities, precisely plan surgical correction with computer simulation of results, customize osteotomies, plan distractions, and print 3DPCI, as needed. The use of advanced 3D computer technology can be applied safely and potentially improve aesthetic and functional outcomes after complex craniofacial reconstruction. These techniques warrant further study and may be reproducible in various centers of care.

  20. 10th International Conference on Computing and Information Technology

    CERN Document Server

    Unger, Herwig; Meesad, Phayung

    2014-01-01

    Computer and Information Technology (CIT) are now involved in governmental, industrial, and business domains more than ever. Thus, it is important for CIT personnel to continue academic research to improve technology and its adoption in modern applications. Up-to-date research and technologies must be distributed to researchers and the CIT community continuously to aid future development. The 10th International Conference on Computing and Information Technology (IC2IT 2014), organized by King Mongkut's University of Technology North Bangkok (KMUTNB) and partners, provides an exchange of the state of the art and future developments in the two key areas of this process: Computer Networking and Data Mining. Against the background of the foundation of ASEAN, it becomes clear that efficient languages, business principles and communication methods need to be adapted, unified and especially optimized to gain a maximum benefit for the users and customers of future IT systems.

  1. Review of Enabling Technologies to Facilitate Secure Compute Customization

    Energy Technology Data Exchange (ETDEWEB)

    Aderholdt, Ferrol [Tennessee Technological University; Caldwell, Blake A [ORNL; Hicks, Susan Elaine [ORNL; Koch, Scott M [ORNL; Naughton, III, Thomas J [ORNL; Pelfrey, Daniel S [ORNL; Pogge, James R [Tennessee Technological University; Scott, Stephen L [Tennessee Technological University; Shipman, Galen M [ORNL; Sorrillo, Lawrence [ORNL

    2014-12-01

    High performance computing environments are often used for a wide variety of workloads ranging from simulation, data transformation and analysis, and complex workflows to name just a few. These systems may process data for a variety of users, often requiring strong separation between job allocations. There are many challenges to establishing these secure enclaves within the shared infrastructure of high-performance computing (HPC) environments. The isolation mechanisms in the system software are the basic building blocks for enabling secure compute enclaves. There are a variety of approaches and the focus of this report is to review the different virtualization technologies that facilitate the creation of secure compute enclaves. The report reviews current operating system (OS) protection mechanisms and modern virtualization technologies to better understand the performance/isolation properties. We also examine the feasibility of running "virtualized" computing resources as non-privileged users, and providing controlled administrative permissions for standard users running within a virtualized context. Our examination includes technologies such as Linux containers (LXC [32], Docker [15]) and full virtualization (KVM [26], Xen [5]). We categorize these different approaches to virtualization into two broad groups: OS-level virtualization and system-level virtualization. The OS-level virtualization uses containers to allow a single OS kernel to be partitioned to create Virtual Environments (VE), e.g., LXC. The resources within the host's kernel are only virtualized in the sense of separate namespaces. In contrast, system-level virtualization uses hypervisors to manage multiple OS kernels and virtualize the physical resources (hardware) to create Virtual Machines (VM), e.g., Xen, KVM. This terminology of VE and VM, detailed in Section 2, is used throughout the report to distinguish between the two different approaches to providing virtualized execution
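
    One practical consequence of the VE/VM distinction is that OS-level virtualization is usually visible from inside the guest. A common heuristic check (Linux-only, and not from the report) is to inspect the init process's cgroup membership:

    ```python
    def looks_like_container():
        """Heuristic: container runtimes usually leave traces in /proc/1/cgroup."""
        try:
            with open("/proc/1/cgroup") as f:
                text = f.read()
        except OSError:
            return False  # not Linux, or /proc unavailable
        return any(marker in text for marker in ("docker", "lxc", "kubepods"))

    print("running in a container?", looks_like_container())
    ```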

  2. International Conference on Emerging Technologies for Information Systems, Computing, and Management

    CERN Document Server

    Ma, Tinghuai; Emerging Technologies for Information Systems, Computing, and Management

    2013-01-01

    This book aims to examine innovation in the fields of information technology, software engineering, industrial engineering and management engineering. Topics covered in this publication include: Information System Security, Privacy, Quality Assurance, High-Performance Computing, and Information System Management and Integration. The book presents papers from the Second International Conference on Emerging Technologies for Information Systems, Computing, and Management (ICM2012), which was held on December 1-2, 2012 in Hangzhou, China.

  3. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation, ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book allows readers to compare the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  4. Computer technology: its potential for industrial energy conservation. A technology applications manual

    Energy Technology Data Exchange (ETDEWEB)

    None

    1979-01-01

    Today, computer technology is within the reach of practically any industrial corporation regardless of product size. This manual highlights a few of the many applications of computers in the process industry and provides the technical reader with a basic understanding of computer technology, terminology, and the interactions among the various elements of a process computer system. The manual has been organized to separate process applications and economics from computer technology. Chapter 1 introduces the present status of process computer technology and describes the four major applications - monitoring, analysis, control, and optimization. The basic components of a process computer system also are defined. Energy-saving applications in the four major categories defined in Chapter 1 are discussed in Chapter 2. The economics of process computer systems is the topic of Chapter 3, where the historical trend of process computer system costs is presented. Evaluating a process for the possible implementation of a computer system requires a basic understanding of computer technology as well as familiarity with the potential applications; Chapter 4 provides enough technical information for an evaluation. Computer and associated peripheral costs and the logical sequence of steps in the development of a microprocessor-based process control system are covered in Chapter 5.

  5. JPRS Report, Science & Technology, USSR: Computers

    Science.gov (United States)

    1987-09-23

    pages of Literary Gazette, it would be appropriate to proceed with a literary example. Not just elegance of handwriting (made absolutely unnecessary... adult population of the industrially developed nations would have been absorbed by scientific organizations. For this reason, the phenomenon of so... The Institute's festivities are over. The young specialists in the computer department are in an elated mood. Thanks to their enthusiasm, clearness

  6. Using Computer Technology To Aid the Disabled Reader.

    Science.gov (United States)

    Balajthy, Ernest

    When matched for achievement level and educational objectives, computer technology can be particularly effective with at-risk students. Computer-assisted instructional software is the most widely available type of software. An exciting development pertinent to literacy education is the development of the "electronic book" (also called…

  7. Strategic Planning for Computer-Based Educational Technology.

    Science.gov (United States)

    Bozeman, William C.

    1984-01-01

    Offers educational practitioners direction for the development of a master plan for the implementation and application of computer-based educational technology by briefly examining computers in education, discussing organizational change from a theoretical perspective, and presenting an overview of the planning strategy known as the planning and…

  8. The Computer Industry. High Technology Industries: Profiles and Outlooks.

    Science.gov (United States)

    International Trade Administration (DOC), Washington, DC.

    A series of meetings was held to assess future problems in United States high technology, particularly in the fields of robotics, computers, semiconductors, and telecommunications. This report, which focuses on the computer industry, includes a profile of this industry and the papers presented by industry speakers during the meetings. The profile…

  9. Audio Technology and Mobile Human Computer Interaction

    DEFF Research Database (Denmark)

    Chamberlain, Alan; Bødker, Mads; Hazzard, Adrian

    2017-01-01

    Audio-based mobile technology is opening up a range of new interactive possibilities. This paper brings some of those possibilities to light by offering a range of perspectives based in this area. It is not only the technical systems that are developing: novel approaches to the design and understanding of audio-based mobile systems are also evolving, offering new perspectives on interaction and design and supporting the application of such systems in areas such as the humanities.

  10. Medical imaging technology reviews and computational applications

    CERN Document Server

    Dewi, Dyah

    2015-01-01

    This book presents the latest research findings and reviews in the field of medical imaging technology, covering ultrasound diagnostics approaches for detecting osteoarthritis, breast carcinoma and cardiovascular conditions, image guided biopsy and segmentation techniques for detecting lung cancer, image fusion, and simulating fluid flows for cardiovascular applications. It offers a useful guide for students, lecturers and professional researchers in the fields of biomedical engineering and image processing.

  11. High-Performance Computing Paradigm and Infrastructure

    CERN Document Server

    Yang, Laurence T

    2006-01-01

    With hyperthreading in Intel processors, HyperTransport links in next-generation AMD processors, multi-core silicon in today's high-end microprocessors from IBM, and emerging grid computing, parallel and distributed computers have moved into the mainstream.

  12. Computed Tomography Technology: Development and Applications for Defence

    International Nuclear Information System (INIS)

    Baheti, G. L.; Saxena, Nisheet; Tripathi, D. K.; Songara, K. C.; Meghwal, L. R.; Meena, V. L.

    2008-01-01

    Computed Tomography (CT) has revolutionized the field of Non-Destructive Testing and Evaluation (NDT and E). Tomography for industrial applications warrants the design and development of customized solutions catering to specific visualization requirements. The present paper highlights Tomography Technology Solutions implemented at Defence Laboratory, Jodhpur (DLJ). Details on the technological developments carried out and their utilization for various Defence applications are covered.

  13. COMPUGIRLS: Stepping Stone to Future Computer-Based Technology Pathways

    Science.gov (United States)

    Lee, Jieun; Husman, Jenefer; Scott, Kimberly A.; Eggum-Wilkens, Natalie D.

    2015-01-01

    COMPUGIRLS, a culturally relevant technology program for adolescent girls, was developed to promote underrepresented girls' future possible selves and career pathways in computer-related technology fields. We hypothesized that COMPUGIRLS would promote academic possible selves and self-regulation to achieve these possible selves. We compared…

  14. Computers Put a Journalism School on Technology's Leading Edge.

    Science.gov (United States)

    Blum, Debra E.

    1992-01-01

    Since 1985, the University of Missouri at Columbia's School of Journalism has been developing a high-technology environment for student work, including word processing, electronic imaging, networked personal computers, and telecommunications. Some faculty worry that the emphasis on technology may overshadow the concepts, principles, and substance…

  15. Factors Influencing Cloud-Computing Technology Adoption in Developing Countries

    Science.gov (United States)

    Hailu, Alemayehu

    2012-01-01

    Adoption of new technology involves complicating components in both the selection criteria and the decision-making process. Although new technology such as cloud computing provides great benefits, especially to developing countries, it has challenges that may complicate the selection decision and subsequent adoption process. This study…

  16. National Survey of Computer Aided Manufacturing in Industrial Technology Programs.

    Science.gov (United States)

    Heidari, Farzin

    The current status of computer-aided manufacturing in the 4-year industrial technology programs in the United States was studied. All industrial technology department chairs were mailed a questionnaire divided into program information, equipment information, and general comments sections. The questionnaire was designed to determine the subjects…

  17. The role of computer simulation in nuclear technologies development

    International Nuclear Information System (INIS)

    Tikhonchev, M.Yu.; Shimansky, G.A.; Lebedeva, E.E.; Lichadeev, V. V.; Ryazanov, D.K.; Tellin, A.I.

    2001-01-01

    In the report, the role and purposes of computer simulation in nuclear technology development are discussed. The authors consider such applications of computer simulation as nuclear safety research, optimization of technical and economic parameters of an operating nuclear plant, planning and support of reactor experiments, research and design of new devices and technologies, and the design and development of 'simulators' for operating personnel training. Among the marked applications, the following aspects of computer simulation are discussed in the report: neutron-physical, thermal and hydrodynamics models, simulation of isotope structure change and damage dose accumulation for materials under irradiation, and simulation of reactor control structures. (authors)

  18. The role of computer simulation in nuclear technology development

    International Nuclear Information System (INIS)

    Tikhonchev, M.Yu.; Shimansky, G.A.; Lebedeva, E.E.; Lichadeev, VV.; Ryazanov, D.K.; Tellin, A.I.

    2000-01-01

    In the report, the role and purpose of computer simulation in nuclear technology development is discussed. The authors consider such applications of computer simulation as: (a) Nuclear safety research; (b) Optimization of technical and economic parameters of an operating nuclear plant; (c) Planning and support of reactor experiments; (d) Research and design of new devices and technologies; (e) Design and development of 'simulators' for operating personnel training. Among the marked applications, the following aspects of computer simulation are discussed in the report: (f) Neutron-physical, thermal and hydrodynamics models; (g) Simulation of isotope structure change and damage dose accumulation for materials under irradiation; (h) Simulation of reactor control structures. (authors)

  19. Materials performance in advanced fossil technologies

    International Nuclear Information System (INIS)

    Natesan, K.

    1991-01-01

    A number of advanced technologies are being developed to convert coal into clean fuels for use as a feedstock in chemical plants and for power generation. From the standpoint of component materials, the environments created by coal conversion and combustion in these technologies and their interactions with materials are of interest. This article identifies several modes of materials degradation and possible mechanisms for metal wastage. Available data on the performance of materials in several of the environments are highlighted, and examples of promising research activities to improve the corrosion resistance of materials are presented.

  20. 9th International Conference on Advanced Computing & Communication Technologies

    CERN Document Server

    Mandal, Jyotsna; Auluck, Nitin; Nagarajaram, H

    2016-01-01

    This book highlights a collection of high-quality peer-reviewed research papers presented at the Ninth International Conference on Advanced Computing & Communication Technologies (ICACCT-2015) held at Asia Pacific Institute of Information Technology, Panipat, India during 27–29 November 2015. The book discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. Researchers from academia and industry present their original work and exchange ideas, information, techniques and applications in the field of Advanced Computing and Communication Technology.

  1. Identification of risk factors of computer information technologies in education

    Directory of Open Access Journals (Sweden)

    Hrebniak M.P.

    2014-03-01

    Full Text Available The basic direction of development of secondary schooling and vocational training is computer training of schoolchildren and students, including distance forms of education and the widespread use of world information systems. The purpose of the work is to determine risk factors for schoolchildren and students when they use modern information and computer technologies. The results of the research made it possible to establish the dynamics of skill formation in using computer information technologies in education, and the characteristics of mental ability among schoolchildren and students during training in high school. Common risk factors while operating CIT are: intensification and formalization of intellectual activity, adverse ergonomic parameters, unfavorable working posture, and exceedance of hygiene standards for chemical and physical characteristics. The priority preventive directions in applying computer information technology in education are: improvement of the visual parameters of the activity, rationalization of ergonomic parameters, minimization of the adverse effects of chemical and physical conditions, and rationalization of the work-rest regime.

  2. Missile signal processing common computer architecture for rapid technology upgrade

    Science.gov (United States)

    Rabinkin, Daniel V.; Rutledge, Edward; Monticciolo, Paul

    2004-10-01

    Interceptor missiles process IR images to locate an intended target and guide the interceptor towards it. Signal processing requirements have increased as sensor bandwidth increases and interceptors operate against more sophisticated targets. A typical interceptor signal processing chain comprises two parts. Front-end video processing operates on all pixels of the image and performs such operations as non-uniformity correction (NUC), image stabilization, frame integration and detection. Back-end target processing, which tracks and classifies targets detected in the image, performs such algorithms as Kalman tracking, spectral feature extraction and target discrimination. In the past, video processing was implemented using ASIC components or FPGAs because computation requirements exceeded the throughput of general-purpose processors. Target processing was performed using hybrid architectures that included ASICs, DSPs and general-purpose processors. The resulting systems tended to be function-specific and required custom software development. They were developed using non-integrated toolsets, and test equipment was developed along with the processor platform. The lifespan of a system utilizing the signal processing platform often spans decades, while the specialized nature of processor hardware and software makes it difficult and costly to upgrade. As a result, the signal processing systems often run on outdated technology, algorithms are difficult to update, and system effectiveness is impaired by the inability to rapidly respond to new threats. A new design approach is made possible by three developments: Moore's Law-driven improvement in computational throughput; a newly introduced vector computing capability in general-purpose processors; and a modern set of open interface software standards. Today's multiprocessor commercial-off-the-shelf (COTS) platforms have sufficient throughput to support interceptor signal processing requirements. This application
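
    As a concrete illustration of one front-end operation named above, the sketch below implements a textbook two-point non-uniformity correction on a synthetic focal-plane array; the gains, offsets, and reference temperatures are invented for the example, and NumPy stands in for the ASIC/FPGA pipeline an interceptor would actually use:

    ```python
    # Two-point NUC: estimate per-pixel gain/offset from two uniform
    # reference frames, then invert the sensor response. All numbers
    # here are synthetic.
    import numpy as np

    rng = np.random.default_rng(0)
    H, W = 64, 64

    gain = 1.0 + 0.05 * rng.standard_normal((H, W))   # fixed-pattern gain error
    offset = 2.0 * rng.standard_normal((H, W))        # fixed-pattern offset error
    scene = rng.uniform(100.0, 200.0, (H, W))         # "true" IR scene
    raw = gain * scene + offset                       # what the sensor reports

    T1, T2 = 100.0, 200.0                             # uniform reference levels
    ref1 = gain * T1 + offset
    ref2 = gain * T2 + offset
    est_gain = (ref2 - ref1) / (T2 - T1)
    est_offset = ref1 - est_gain * T1

    corrected = (raw - est_offset) / est_gain
    print("max residual error:", np.abs(corrected - scene).max())  # ~0 up to rounding
    ```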

  3. Cost and performance of innovative remediation technologies

    International Nuclear Information System (INIS)

    Cummings, J.B.; Kingscott, J.W.; Fiedler, L.D.

    1995-01-01

    The selection and use of more cost-effective remedies requires better access to data on the performance and cost of technologies used in the field. To make data more widely available, the US Environmental Protection Agency is working jointly with member agencies of the Federal Remediation Technologies Round table to publish case studies of full-scale remediation and demonstration projects. EPA, DoD, and DOE have published case studies of cleanup projects primarily consisting of bioremediation, soil vapor extraction, and thermal desorption. Within the limits of this initial data set, the paper evaluates technology performance and cost. In the analysis of cost factors, the paper shows the use of a standardized Work Breakdown Structure (WBS). Use of the WBS will be important in future reporting of completed projects to facilitate cost comparison. The paper notes the limits to normalization and thus cross-site comparison which can be achieved using the WBS. The paper identifies conclusions from initial efforts to compile cost and performance data, highlights the importance of such efforts to the overall remediation effort, and discusses future cost and performance documentation efforts

  4. Mobile Computing: The Emerging Technology, Sensing, Challenges and Applications

    International Nuclear Information System (INIS)

    Bezboruah, T.

    2010-12-01

    Mobile computing is a computing system in which a computer and all necessary accessories like files and software are taken out to the field. It is a system of computing through which one is able to use a computing device even while being mobile and therefore changing location. Portability is one of the important aspects of mobile computing. Mobile phones are being used to gather scientific data from remote and isolated places that it would not otherwise be possible to reach. Scientists are beginning to use mobile devices and web-based applications to systematically explore interesting scientific aspects of their surroundings, ranging from climate change and environmental pollution to earthquake monitoring. This mobile revolution enables new ideas and innovations to spread out more quickly and efficiently. Here we discuss in brief the mobile computing technology, its sensing, its challenges and its applications. (author)

  5. What Physicists Should Know About High Performance Computing - Circa 2002

    Science.gov (United States)

    Frederick, Donald

    2002-08-01

    High Performance Computing (HPC) is a dynamic, cross-disciplinary field that traditionally has involved applied mathematicians, computer scientists, and others primarily from the various disciplines that have been major users of HPC resources - physics, chemistry, engineering, with increasing use by those in the life sciences. There is a technological dynamic that is powered by economic as well as by technical innovations and developments. This talk will discuss practical ideas to be considered when developing numerical applications for research purposes. Even with the rapid pace of development in the field, the author believes that these concepts will not become obsolete for a while, and will be of use to scientists who are either considering the HPC path or have already started down it. These principles will be applied in particular to current parallel HPC systems, but there will also be references of value to desktop users. The talk will cover such topics as: computing hardware basics, single-CPU optimization, compilers, timing, numerical libraries, debugging and profiling tools, and the emergence of Computational Grids.

  6. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    CERN Document Server

    Anisenkov, Alexey; The ATLAS collaboration; Alandes Pradillo, Maria

    2016-01-01

    AGIS is the information system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing (ADC) applications and services. In this note, we describe the evolution and recent developments of AGIS functionalities related to the integration of new technologies that have recently become widely used in ATLAS Computing, such as the flexible utilization of opportunistic Cloud and HPC resources, the integration of ObjectStore services for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, the unified declaration of storage protocols required for PanDA Pilot site movers, and others.

  7. High performance computer code for molecular dynamics simulations

    International Nuclear Information System (INIS)

    Levay, I.; Toekesi, K.

    2007-01-01

    Complete text of publication follows. Molecular Dynamics (MD) simulation is a widely used technique for modeling complicated physical phenomena. Since 2005 we have been developing an MD simulation code for PC computers. The computer code is written in the C++ object-oriented programming language. The aim of our work is twofold: a) to develop a fast computer code for the study of the random walk of guest atoms in a Be crystal, and b) to provide 3-dimensional (3D) visualization of the particles' motion. In this case we mimic the motion of the guest atoms in the crystal (diffusion-type motion) and the motion of atoms in the crystal lattice (crystal deformation). Nowadays it is common to use graphics devices for intensive computational problems. There are several ways to use this extreme processing performance, but never before has it been so easy to program these devices as now. CUDA (Compute Unified Device Architecture), introduced by nVidia Corporation in 2007, is very useful for every processor-hungry application. A unified-architecture GPU includes 96-128 or more stream processors, so the raw calculation performance is 576(!) GFLOPS. It is ten times faster than the fastest dual-core CPU [Fig. 1]. Our improved MD simulation software uses this new technology, which speeds it up: the code runs 10 times faster in the critical calculation code segment. Although the GPU is a very powerful tool, it has a strongly parallel structure. This means that we have to create an algorithm that works on several processors without deadlock. Our code currently uses 256 threads and shared and constant on-chip memory instead of global memory, which is 100 times slower. It is possible to implement the whole algorithm on the GPU, so we do not need to download and upload the data in every iteration. For maximal throughput, every thread runs with the same instructions.
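
    The following sketch (my own Python illustration, not the authors' C++/CUDA code) shows the diffusion-type motion described above in its simplest form: guest atoms performing a random walk on a cubic lattice, with the mean squared displacement growing linearly with the number of steps, as expected for diffusion:

    ```python
    # Random walk of "guest atoms" on a simple cubic lattice; parameters
    # are illustrative, not taken from the publication.
    import random

    def random_walk_msd(n_atoms=1000, n_steps=200, seed=42):
        random.seed(seed)
        moves = [(1, 0, 0), (-1, 0, 0), (0, 1, 0),
                 (0, -1, 0), (0, 0, 1), (0, 0, -1)]
        atoms = [[0, 0, 0] for _ in range(n_atoms)]
        for _ in range(n_steps):
            for pos in atoms:
                dx, dy, dz = random.choice(moves)   # hop to a neighbouring site
                pos[0] += dx; pos[1] += dy; pos[2] += dz
        # mean squared displacement over all atoms
        return sum(x*x + y*y + z*z for x, y, z in atoms) / n_atoms

    print("MSD after 200 steps:", random_walk_msd())  # ~200 for this lattice
    ```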

  8. Emerging technologies for high performance infrared detectors

    OpenAIRE

    Tan Chee Leong; Mohseni Hooman

    2018-01-01

    Infrared photodetectors (IRPDs) have become important devices in various applications such as night vision, military missile tracking, medical imaging, industrial defect imaging, environmental sensing, and exoplanet exploration. Mature semiconductor technologies such as mercury cadmium telluride and III–V material-based photodetectors have been dominating the industry. However, in the last few decades, significant funding and research have been focused on improving the performance of IRPDs such as...

  9. High performance computing and communications: FY 1995 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-04-01

    The High Performance Computing and Communications (HPCC) Program was formally established following passage of the High Performance Computing Act of 1991 signed on December 9, 1991. Ten federal agencies in collaboration with scientists and managers from US industry, universities, and laboratories have developed the HPCC Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1994 and FY 1995. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency. Although the Department of Education is an official HPCC agency, its current funding and reporting of crosscut activities goes through the Committee on Education and Health Resources, not the HPCC Program. For this reason the Implementation Plan covers nine HPCC agencies.

  10. Leveraging mobile computing and communication technologies in education

    DEFF Research Database (Denmark)

    Annan, Nana Kofi

    education and technology have evolved in tandem over the past years, this dissertation recognises the lapse that there is in most educators not being able to effectively leverage technology to improve education delivery. The study appreciates the enormousness of mobile computing and communication technologies in contributing to the development of tertiary education delivery, and has taken keen interest in investigating how the capacities of these technologies can be leveraged and incorporated effectively into the pedagogic framework of tertiary education. The purpose is to research into how... of the results conducted after rigorous theoretical and empirical research unveiled the following: mobile technologies can be incorporated into tertiary education if there is a strong theoretical underpinning which links technology and pedagogy; the technology would not work if the user's concerns in relation...

  11. DOE research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  12. Performance Engineering Technology for Scientific Component Software

    Energy Technology Data Exchange (ETDEWEB)

    Malony, Allen D.

    2007-05-08

    Large-scale, complex scientific applications are beginning to benefit from the use of component software design methodology and technology for software development. Integral to the success of component-based applications is the ability to achieve high-performing code solutions through the use of performance engineering tools for both intra-component and inter-component analysis and optimization. Our work on this project aimed to develop performance engineering technology for scientific component software in association with the DOE CCTTSS SciDAC project (active during the contract period) and the broader Common Component Architecture (CCA) community. Our specific implementation objectives were to extend the TAU performance system and Program Database Toolkit (PDT) to support performance instrumentation, measurement, and analysis of CCA components and frameworks, and to develop performance measurement and monitoring infrastructure that could be integrated in CCA applications. These objectives have been met in the completion of all project milestones and in the transfer of the technology into the continuing CCA activities as part of the DOE TASCS SciDAC2 effort. In addition to these achievements, over the past three years, we have been an active member of the CCA Forum, attending all meetings and serving in several working groups, such as the CCA Toolkit working group, the CQoS working group, and the Tutorial working group. We have contributed significantly to CCA tutorials since SC'04, hosted two CCA meetings, participated in the annual ACTS workshops, and were co-authors on the recent CCA journal paper [24]. There are four main areas where our project has delivered results: component performance instrumentation and measurement, component performance modeling and optimization, performance database and data mining, and online performance monitoring. This final report outlines the achievements in these areas for the entire project period. The submitted progress

  13. Parallel, distributed and GPU computing technologies in single-particle electron microscopy.

    Science.gov (United States)

    Schmeisser, Martin; Heisen, Burkhard C; Luettich, Mario; Busche, Boris; Hauer, Florian; Koske, Tobias; Knauber, Karl-Heinz; Stark, Holger

    2009-07-01

    Most known methods for the determination of the structure of macromolecular complexes are limited or at least restricted at some point by their computational demands. Recent developments in information technology such as multicore, parallel and GPU processing can be used to overcome these limitations. In particular, graphics processing units (GPUs), which were originally developed for rendering real-time effects in computer games, are now ubiquitous and provide unprecedented computational power for scientific applications. Each parallel-processing paradigm alone can improve overall performance; the increased computational performance obtained by combining all paradigms, unleashing the full power of today's technology, makes certain applications feasible that were previously virtually impossible. In this article, state-of-the-art paradigms are introduced, the tools and infrastructure needed to apply these paradigms are presented and a state-of-the-art infrastructure and solution strategy for moving scientific applications to the next generation of computer hardware is outlined.
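
    A minimal multicore example in the spirit of the paradigms discussed (the per-image filter and data are placeholders; the article's actual workloads run on GPUs and clusters) maps the same operation over many images with a process pool:

    ```python
    # Embarrassingly parallel per-image processing with a process pool.
    import numpy as np
    from multiprocessing import Pool

    def denoise(image):
        # Toy 3x3 mean filter standing in for real single-particle processing.
        padded = np.pad(image, 1, mode="edge")
        h, w = image.shape
        return sum(padded[i:i+h, j:j+w]
                   for i in range(3) for j in range(3)) / 9.0

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        images = [rng.standard_normal((128, 128)) for _ in range(16)]
        with Pool() as pool:                      # one worker per core by default
            results = pool.map(denoise, images)   # distribute images across cores
        print(len(results), results[0].shape)
    ```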

  14. Extending the horizons advances in computing, optimization, and decision technologies

    CERN Document Server

    Joseph, Anito; Mehrotra, Anuj; Trick, Michael

    2007-01-01

    Computer Science and Operations Research continue to have a synergistic relationship and this book represents the results of cross-fertilization between OR/MS and CS/AI. It is this interface of OR/CS that makes possible advances that could not have been achieved in isolation. Taken collectively, these articles are indicative of the state-of-the-art in the interface between OR/MS and CS/AI and of the high caliber of research being conducted by members of the INFORMS Computing Society. EXTENDING THE HORIZONS: Advances in Computing, Optimization, and Decision Technologies is a volume that presents the latest, leading research in the design and analysis of algorithms, computational optimization, heuristic search and learning, modeling languages, parallel and distributed computing, simulation, computational logic and visualization. This volume also emphasizes a variety of novel applications in the interface of CS, AI, and OR/MS.

  15. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gained detailed insight into the performance of the three computers when used to solve large-scale scientific problems...

  16. Beyond computer literacy: supporting youth's positive development through technology.

    Science.gov (United States)

    Bers, Marina Umaschi

    2010-01-01

    In a digital era in which technology plays a role in most aspects of a child's life, having the competence and confidence to use computers might be a necessary step, but not a goal in itself. Developing character traits that will serve children to use technology in a safe way to communicate and connect with others, and providing opportunities for children to make a better world through the use of their computational skills, is just as important. The Positive Technological Development framework (PTD), a natural extension of the computer literacy and the technological fluency movements that have influenced the world of educational technology, adds psychosocial, civic, and ethical components to the cognitive ones. PTD examines the developmental tasks of a child growing up in our digital era and provides a model for developing and evaluating technology-rich youth programs. The explicit goal of PTD programs is to support children in the positive uses of technology to lead more fulfilling lives and make the world a better place. This article introduces the concept of PTD and presents examples of the Zora virtual world program for young people that the author developed following this framework.

  17. Mechanical Design Technology--Modified. (Computer Assisted Drafting, Computer Aided Design). Curriculum Grant 84/85.

    Science.gov (United States)

    Schoolcraft Coll., Livonia, MI.

    This document is a curriculum guide for a program in mechanical design technology (computer-assisted drafting and design) developed at Schoolcraft College, Livonia, Michigan. The program helps students to acquire the skills of drafters and to interact with electronic equipment, with the option of becoming efficient in the computer-aided…

  18. Portable Computer Technology (PCT) Research and Development Program Phase 2

    Science.gov (United States)

    Castillo, Michael; McGuire, Kenyon; Sorgi, Alan

    1995-01-01

    This project report focused on: (1) The design and development of two Advanced Portable Workstation 2 (APW 2) units. These units incorporate advanced technology features such as a low-power Pentium processor, a high-resolution color display, National Television Standards Committee (NTSC) video handling capabilities, a Personal Computer Memory Card International Association (PCMCIA) interface, and Small Computer System Interface (SCSI) and ethernet interfaces. (2) The use of these units to integrate and demonstrate advanced wireless network and portable video capabilities. (3) The qualification of the APW 2 systems for use in specific experiments aboard the Mir Space Station. A major objective of the PCT Phase 2 program was to help guide future choices in computing platforms and techniques for meeting National Aeronautics and Space Administration (NASA) mission objectives, with the focus on the development of optimal configurations of computing hardware, software applications, and network technologies for use on NASA missions.

  19. Peregrine System | High-Performance Computing | NREL

    Science.gov (United States)

    classes of nodes that users access: Login Nodes: Peregrine has four login nodes, each of which has Intel E5... /scratch file systems; the /mss file system is mounted on all login nodes. Compute Nodes: Peregrine has 2592...

  20. International Conference on Computer Science and Information Technology

    CERN Document Server

    Li, Xiaolong

    2014-01-01

    The main objective of CSAIT 2013 is to provide a forum for researchers, educators, engineers and government officials involved in the general areas of Computational Sciences and Information Technology to disseminate their latest research results and exchange views on the future research directions of these fields. A medium like this provides an opportunity for academics and industrial professionals to exchange and integrate the practice of computer science, apply academic ideas, and improve academic depth. The in-depth discussions on the subject provide an international communication platform for educational technology and scientific research for the world's universities, engineering-field experts, professionals and business executives.

  1. PRODUCTION WELL PERFORMANCE ENHANCEMENT USING SONICATION TECHNOLOGY

    Energy Technology Data Exchange (ETDEWEB)

    Michael A. Adewumi; M. Thaddeus Ityokumbul; Robert W. Watson; Mario Farias; Glenn Heckman; Johnson Olanrewaju; Eltohami Eltohami; Bruce G. Miller; W. Jack Hughes; Thomas C. Montgomery

    2003-12-17

    The objective of this project is to develop a sonic well performance enhancement technology that focuses on near wellbore formations. In order to successfully achieve this objective, a three-year project has been defined with each year consisting of four tasks. The first task is the laboratory-scale study whose goal is to determine the underlying principles of the technology. The second task will develop a scale-up mathematical model to serve as the design guide for tool development. The third task is to develop effective transducers that can operate with variable frequency so that the most effective frequencies can be applied in any given situation. The system, assembled as part of the production string, ensures delivery of sufficient sonic energy to penetrate the near-wellbore formation. The last task is the actual field testing of the tool. The first year of the project has been completed.

  2. The implementation of AI technologies in computer wargames

    Science.gov (United States)

    Tiller, John A.

    2004-08-01

    Computer wargames involve the most in-depth analysis of general game theory. The enumerated turns of a game like chess are dwarfed by the exponentially larger possibilities of even a simple computer wargame. Implementing challenging AI in computer wargames is an important goal in both the commercial and military environments. In the commercial marketplace, customers demand a challenging AI opponent when they play a computer wargame and are frustrated by a lack of competence on the part of the AI. In the military environment, challenging AI opponents are important for several reasons. A challenging AI opponent will force the military professional to avoid routine or set-piece approaches to situations and cause them to think much deeper about military situations before taking action. A good AI opponent would also include national characteristics of the opponent being simulated, thus providing the military professional with even more of a challenge in planning and approach. Implementing current AI technologies in computer wargames is a technological challenge. The goal is to join the needs of AI in computer wargames with the solutions of current AI technologies. This talk will address several of those issues, possible solutions, and currently unsolved problems.
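
    At the core of such an AI opponent is game-tree search; the hedged sketch below shows minimax with alpha-beta pruning over a tiny hard-coded tree (a real wargame would supply move generation and a military evaluation function instead):

    ```python
    # Minimax with alpha-beta pruning; the toy tree is purely illustrative.
    import math

    def alphabeta(node, depth, alpha, beta, maximizing):
        children = node.get("children")
        if depth == 0 or not children:
            return node["value"]                # leaf: static evaluation
        if maximizing:
            best = -math.inf
            for child in children:
                best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
                alpha = max(alpha, best)
                if beta <= alpha:
                    break                       # prune: opponent avoids this line
            return best
        best = math.inf
        for child in children:
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if beta <= alpha:
                break
        return best

    tree = {"children": [
        {"children": [{"value": 3}, {"value": 5}]},
        {"children": [{"value": -2}, {"value": 9}]},
    ]}
    print(alphabeta(tree, 2, -math.inf, math.inf, True))  # -> 3
    ```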

  3. Fast Performance Computing Model for Smart Distributed Power Systems

    Directory of Open Access Journals (Sweden)

    Umair Younas

    2017-06-01

    Full Text Available Plug-in Electric Vehicles (PEVs) are becoming a more prominent solution compared to fossil-fuel car technology due to their significant role in Greenhouse Gas (GHG) reduction, flexible storage, and ancillary service provision as a Distributed Generation (DG) resource in Vehicle-to-Grid (V2G) regulation mode. However, large-scale penetration of PEVs and the growing demand of energy-intensive Data Centers (DCs) bring undesirably high peaks in electricity demand, which impose supply-demand imbalance and threaten the reliability of the wholesale and retail power market. In order to overcome the aforementioned challenges, the proposed research considers a smart Distributed Power System (DPS) comprising conventional sources, renewable energy, V2G regulation, and flexible storage energy resources. Moreover, price- and incentive-based Demand Response (DR) programs are implemented to sustain the balance between net demand and available generating resources in the DPS. In addition, we adapted a novel strategy to implement the computationally intensive jobs of the proposed DPS model, including incoming load profiles, V2G regulation, battery State of Charge (SOC) indication, and fast computation in a decision-based automated DR algorithm, using the Fast Performance Computing resources of DCs. In response, the DPS provides economical and stable power to DCs under strict power quality constraints. Finally, the improved results are verified using a case study of ISO California integrated with hybrid generation.
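
    The toy sketch below (my own assumptions, not the paper's model) captures the basic demand-response idea: deferrable PEV-charging load is pushed out of hours where total demand would exceed generating capacity and rescheduled into the hours with the most headroom:

    ```python
    # Toy demand-response rescheduler; all figures are invented for illustration.
    demand   = [50, 60, 90, 95, 95, 70]   # MW per hour, inflexible base load
    flexible = [ 0,  0, 15, 20, 10,  0]   # MW of deferrable PEV charging
    capacity = 100                        # MW of available generation

    shifted, deferred = demand[:], 0.0
    for h in range(len(demand)):
        total = shifted[h] + flexible[h]
        if total > capacity:              # DR event: defer the flexible load
            deferred += total - capacity
            shifted[h] = capacity
        else:
            shifted[h] = total

    # Reschedule the deferred energy into the slackest hours first.
    for h in sorted(range(len(shifted)), key=lambda i: shifted[i]):
        take = min(capacity - shifted[h], deferred)
        shifted[h] += take
        deferred -= take

    print("balanced profile:", shifted, "unserved:", deferred)
    ```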

  4. Quantum Computers: A New Paradigm in Information Technology

    OpenAIRE

    Mahesh S. Raisinghani

    2001-01-01

    The word 'quantum' comes from the Latin word quantus meaning 'how much'. Quantum computing is a fundamentally new mode of information processing that can be performed only by harnessing physical phenomena unique to quantum mechanics (especially quantum interference). Paul Benioff of the Argonne National Laboratory first applied quantum theory to computers in 1981 and David Deutsch of Oxford proposed quantum parallel computers in 1985, years before the realization of qubits in 1995. However, i...

  5. Performance evaluation soil samples utilizing encapsulation technology

    Science.gov (United States)

    Dahlgran, James R.

    1999-01-01

    Performance evaluation soil samples and method of their preparation using encapsulation technology to encapsulate analytes which are introduced into a soil matrix for analysis and evaluation by analytical laboratories. Target analytes are mixed in an appropriate solvent at predetermined concentrations. The mixture is emulsified in a solution of polymeric film forming material. The emulsified solution is polymerized to form microcapsules. The microcapsules are recovered, quantitated and introduced into a soil matrix in a predetermined ratio to form soil samples with the desired analyte concentration.

  6. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  7. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    Science.gov (United States)

    Faraj, Ahmad [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
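
    A single-process simulation of the classic ring variant (a sketch under my own assumptions, not the patented two-phase method above) illustrates the idea: each node's vector is split into chunks, partial sums travel once around the logical ring (reduce-scatter), then the completed sums travel around again (allgather):

    ```python
    # Ring allreduce simulated sequentially; chunks[node][k] is that node's
    # copy of chunk k. After both phases every node holds the elementwise sum.
    def ring_allreduce(contribs):
        n = len(contribs)                     # nodes; each vector has n chunks
        chunks = [list(c) for c in contribs]
        for step in range(n - 1):             # reduce-scatter phase
            for i in range(n):
                k = (i - 1 - step) % n        # chunk arriving from the left node
                chunks[i][k] += chunks[(i - 1) % n][k]
        for step in range(n - 1):             # allgather phase
            for i in range(n):
                k = (i - step) % n            # completed chunk from the left node
                chunks[i][k] = chunks[(i - 1) % n][k]
        return chunks

    print(ring_allreduce([[1, 2, 3], [10, 20, 30], [100, 200, 300]]))
    # every node ends with [111, 222, 333]
    ```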

  8. 2nd International Conference on Computer and Communication Technologies

    CERN Document Server

    Raju, K; Mandal, Jyotsna; Bhateja, Vikrant

    2016-01-01

    The book is about all aspects of computing, communication, general sciences and educational research covered at the Second International Conference on Computer & Communication Technologies, held during 24-26 July 2015 at Hyderabad. It was hosted by CMR Technical Campus in association with Division – V (Education & Research), CSI, India. After a rigorous review, only quality papers were selected and included in this book. The entire book is divided into three volumes. The three volumes cover a variety of topics, which include medical imaging, networks, data mining, intelligent computing, software design, image processing, mobile computing, digital signals and speech processing, video surveillance and processing, web mining, wireless sensor networks, circuit analysis, fuzzy systems, antenna and communication systems, biomedical signal processing and applications, cloud computing, embedded systems applications, and cyber security and digital forensics. The readers of these volumes will be highly benefited from the te...

  9. High End Computing Technologies for Earth Science Applications: Trends, Challenges, and Innovations

    Science.gov (United States)

    Parks, John (Technical Monitor); Biswas, Rupak; Yan, Jerry C.; Brooks, Walter F.; Sterling, Thomas L.

    2003-01-01

    Earth science applications of the future will stress the capabilities of even the highest performance supercomputers in the areas of raw compute power, mass storage management, and software environments. These NASA mission critical problems demand usable multi-petaflops and exabyte-scale systems to fully realize their science goals. With an exciting vision of the technologies needed, NASA has established a comprehensive program of advanced research in computer architecture, software tools, and device technology to ensure that, in partnership with US industry, it can meet these demanding requirements with reliable, cost effective, and usable ultra-scale systems. NASA will exploit, explore, and influence emerging high end computing architectures and technologies to accelerate the next generation of engineering, operations, and discovery processes for NASA Enterprises. This article captures this vision and describes the concepts, accomplishments, and the potential payoff of the key thrusts that will help meet the computational challenges in Earth science applications.

  10. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Science.gov (United States)

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  11. Unified, Cross-Platform, Open-Source Library Package for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Kozacik, Stephen [EM Photonics, Inc., Newark, DE (United States)

    2017-05-15

    Compute power is continually increasing, but this increased performance is largely found in sophisticated computing devices and supercomputer resources that are difficult to use, resulting in under-utilization. We developed a unified set of programming tools that will allow users to take full advantage of the new technology by allowing them to work at a level abstracted away from the platform specifics, encouraging the use of modern computing systems, including government-funded supercomputer facilities.

  12. 11th International Conference on Computing and Information Technology

    CERN Document Server

    Meesad, Phayung; Boonkrong, Sirapat

    2015-01-01

    This book presents recent research work and results in the area of communication and information technologies. The book includes the main results of the 11th International Conference on Computing and Information Technology (IC2IT) held during July 2nd-3rd, 2015 in Bangkok, Thailand. The book is divided into two main parts: Data Mining and Machine Learning, and Data Network and Communications. New algorithms and methods of data mining are discussed, as well as innovative applications and state-of-the-art technologies in data mining, machine learning and data networking.

  13. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    Science.gov (United States)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two part model framework characterizes both the demand using a probability distribution for each type of service request as well as enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
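
    A minimal discrete event simulation in this style (the arrival and service rates are illustrative assumptions, not the authors' data) uses a heap-ordered event list to model requests arriving at a fixed pool of cloud servers and measures how often a request finds every server busy:

    ```python
    # Heap-driven discrete event simulation of an n-server loss system.
    import heapq
    import random

    def simulate(n_servers=4, arrival_rate=3.0, service_rate=1.0,
                 n_requests=10000, seed=7):
        random.seed(seed)
        events, t, busy, blocked = [], 0.0, 0, 0
        for _ in range(n_requests):           # pre-generate arrival events
            t += random.expovariate(arrival_rate)
            heapq.heappush(events, (t, "arrival"))
        while events:
            now, kind = heapq.heappop(events)
            if kind == "arrival":
                if busy < n_servers:
                    busy += 1                 # occupy a server until "done"
                    heapq.heappush(
                        events, (now + random.expovariate(service_rate), "done"))
                else:
                    blocked += 1              # all servers busy: request lost
            else:
                busy -= 1
        return blocked / n_requests

    print(f"blocked fraction: {simulate():.3f}")
    ```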

  14. Implications of Computer Technology. Harvard University Program on Technology and Society.

    Science.gov (United States)

    Taviss, Irene; Burbank, Judith

    Lengthy abstracts of a small number of selected books and articles on the implications of computer technology are presented, preceded by a brief state-of-the-art survey which traces the impact of computers on the structure of economic and political organizations and socio-cultural patterns. A summary statement introduces each of the three abstract…

  15. Quantum Computers: A New Paradigm in Information Technology

    Directory of Open Access Journals (Sweden)

    Mahesh S. Raisinghani

    2001-01-01

    Full Text Available The word 'quantum' comes from the Latin word quantus meaning 'how much'. Quantum computing is a fundamentally new mode of information processing that can be performed only by harnessing physical phenomena unique to quantum mechanics (especially quantum interference). Paul Benioff of the Argonne National Laboratory first applied quantum theory to computers in 1981 and David Deutsch of Oxford proposed quantum parallel computers in 1985, years before the realization of qubits in 1995. However, it may be well into the 21st century before we see quantum computing used at a commercial level, for a variety of reasons discussed in this paper. The subject of quantum computing brings together ideas from classical information theory, computer science, and quantum physics. This paper discusses some of the current advances, applications, and challenges of quantum computing, as well as its impact on corporate computing and implications for management. It shows how quantum computing can be utilized to process and store information, as well as impact cryptography for perfectly secure communication, algorithmic searching, factorizing large numbers very rapidly, and simulating quantum-mechanical systems efficiently. A broad interdisciplinary effort will be needed if quantum computers are to fulfill their destiny as the world's fastest computing devices.
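
    The interference the abstract highlights can be shown numerically in a few lines; this small sketch (mine, for illustration) applies a Hadamard gate twice to |0>, so the two amplitude paths to |1> cancel and the state returns to |0> with certainty, something no classical probabilistic bit can reproduce:

    ```python
    # Quantum interference on one simulated qubit: H applied twice undoes itself.
    import numpy as np

    H = np.array([[1, 1],
                  [1, -1]]) / np.sqrt(2)     # Hadamard gate
    ket0 = np.array([1.0, 0.0])              # the |0> state

    after_one = H @ ket0                     # equal superposition of |0> and |1>
    after_two = H @ after_one                # amplitudes to |1> cancel

    print("P(0), P(1) after one H:", np.round(after_one**2, 3))  # [0.5 0.5]
    print("P(0), P(1) after two H:", np.round(after_two**2, 3))  # [1. 0.]
    ```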

  16. Chinese-English Automation and Computer Technology Dictionary, Volume 2.

    Science.gov (United States)

    1980-08-01

    Chinese-English Automation and Computer Technology Dictionary, Vol. 2. This document has been approved for public release and sale; its distribution is unlimited. Sample entries: ...zhuangbei - information link; 04 tongxin lianjie zhuangzhi - communication link; 05 tongxin shebei - communications equipment; 06 - communications facility

  17. Feedback in Computer Assisted Pronunciation Training: When technology meets pedagogy

    NARCIS (Netherlands)

    Neri, A.; Cucchiarini, C.; Strik, H.

    2002-01-01

    Computer Assisted Language Learning (CALL) has now established itself as a prolific and fast growing area whose advantages are already well-known to educators. Yet, many authors lament the lack of a reliable integrated conceptual framework linking technology advances and second language acquisition

  18. INFLUENCE OF DEVELOPMENT OF COMPUTER TECHNOLOGIES ON TEACHING

    Directory of Open Access Journals (Sweden)

    Olgica Bešić

    2012-09-01

    Full Text Available Our times are characterized by strong changes in technology that have become reality in many areas of society. Compared to production, transport, services, etc., education, as a rule, opens slowly to new technologies. However, children at home and outside school live in a technologically rich environment, and they expect education to change in accordance with the imperatives of education for the twenty-first century. In this sense, systems for automated data processing, multimedia systems, distance learning, virtual schools and other technologies are being introduced into education. They lead to an increase in students' activity, to quality evaluation of their knowledge and, finally, to their progress, all in accordance with individual abilities and knowledge. Mathematics and computers often appear together in the teaching process. In the teaching of mathematics, computers and software packages have a significant role. The program requirements are not dominant; the emphasis is on mathematical content and the method of presentation. Computers are especially used in solving various mathematical tasks and in self-learning of mathematics. Still, many problems that require solutions appear in the process: how to organise lectures, practice, textbooks, collections of mathematical problems and written exams, and how to assign and check homework. The answers to these questions are not simple and will probably be sought continuously, with an increasing use of computers in the teaching process. In this paper I have tried to address some of the questions above.

  19. Parallel distributed computing in modeling of the nanomaterials production technologies

    NARCIS (Netherlands)

    Krzhizhanovskaya, V.V.; Korkhov, V.V.; Zatevakhin, M.A.; Gorbachev, Y.E.

    2008-01-01

    Simulation of physical and chemical processes occurring in the nanomaterial production technologies is a computationally challenging problem, due to the great number of coupled processes, time and length scales to be taken into account. To solve such complex problems with a good level of detail in a

  20. Monte Carlo computation in the applied research of nuclear technology

    International Nuclear Information System (INIS)

    Xu Shuyan; Liu Baojie; Li Qin

    2007-01-01

    This article briefly introduces Monte Carlo methods and their properties. It describes the Monte Carlo methods with emphasis on their applications to several domains of nuclear technology. Monte Carlo simulation methods and several commonly used software packages that implement them are also introduced. The proposed methods are demonstrated by a real example. (authors)
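
    The classic introductory example of the method (a hedged illustration of the core idea, not the article's own example) estimates pi by sampling random points in the unit square and counting the fraction that fall inside the quarter circle:

    ```python
    # Monte Carlo estimate of pi from the quarter-circle area ratio.
    import random

    def estimate_pi(n_samples=1_000_000, seed=0):
        random.seed(seed)
        inside = sum(1 for _ in range(n_samples)
                     if random.random()**2 + random.random()**2 <= 1.0)
        return 4.0 * inside / n_samples      # fraction inside ~ pi/4

    print(estimate_pi())  # approaches 3.14159... as n_samples grows
    ```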

  1. Computational Structures Technology for Airframes and Propulsion Systems

    International Nuclear Information System (INIS)

    Noor, A.K.; Housner, J.M.; Starnes, J.H. Jr.; Hopkins, D.A.; Chamis, C.C.

    1992-05-01

    This conference publication contains the presentations and discussions from the joint University of Virginia (UVA)/NASA Workshops. The presentations included NASA Headquarters perspectives on High Speed Civil Transport (HSCT), goals and objectives of the UVA Center for Computational Structures Technology (CST), NASA and Air Force CST activities, CST activities for airframes and propulsion systems in industry, and CST activities at Sandia National Laboratory

  2. Designing Pervasive Computing Technology - In a Nomadic Work Perspective

    DEFF Research Database (Denmark)

    Kristensen, Jannie Friis

    2002-01-01

    In my thesis work I am investigating how the design of pervasive/ubiquitous computing technology relates to the flexible and individual work practices of nomadic workers. Through empirical studies and with an experimental systems development approach, the work is focused on: a) Supporting

  3. Pervasive Computing and Communication Technologies for U-Learning

    Science.gov (United States)

    Park, Young C.

    2014-01-01

    The development of digital information transfer, storage and communication methods has a significant effect on education. The assimilation of pervasive computing and communication technologies marks another great step forward, with Ubiquitous Learning (U-learning) emerging for next-generation learners. In the evolutionary view the 5G (or…

  4. The Voice as Computer Interface: A Look at Tomorrow's Technologies.

    Science.gov (United States)

    Lange, Holley R.

    1991-01-01

    Discussion of voice as the communications device for computer-human interaction focuses on voice recognition systems for use within a library environment. Voice technologies are described, including voice response and voice recognition; examples of voice systems in use in libraries are examined; and further possibilities, including use with…

  5. Beyond Computer Literacy: Technology Integration and Curriculum Transformation

    Science.gov (United States)

    Safar, Ammar H.; AlKhezzi, Fahad A.

    2013-01-01

    Personal computers, the Internet, smartphones, and other forms of information and communication technology (ICT) have changed our world, our job, our personal lives, as well as how we manage our knowledge and time effectively and efficiently. Research findings in the past decades have acknowledged and affirmed that the content the ICT medium…

  6. Computer-Assisted Foreign Language Teaching and Learning: Technological Advances

    Science.gov (United States)

    Zou, Bin; Xing, Minjie; Wang, Yuping; Sun, Mingyu; Xiang, Catherine H.

    2013-01-01

    Computer-Assisted Foreign Language Teaching and Learning: Technological Advances highlights new research and an original framework that brings together foreign language teaching, experiments and testing practices that utilize the most recent and widely used e-learning resources. This comprehensive collection of research will offer linguistic…

  7. Providing Assistive Technology Applications as a Service Through Cloud Computing.

    Science.gov (United States)

    Mulfari, Davide; Celesti, Antonio; Villari, Massimo; Puliafito, Antonio

    2015-01-01

    Users with disabilities interact with Personal Computers (PCs) using Assistive Technology (AT) software solutions. Such applications run on a PC that a person with a disability commonly uses. However, the configuration of AT applications is not trivial at all, especially whenever the user needs to work on a PC that does not allow him/her to rely on his/her AT tools (e.g., at work, at university, in an Internet point). In this paper, we discuss how cloud computing provides a valid technological solution to enhance such a scenario. With the emergence of cloud computing, many applications are executed on top of virtual machines (VMs). Virtualization allows us to achieve a software implementation of a real computer able to execute a standard operating system and any kind of application. In this paper we propose to build personalized VMs running AT programs and settings. By using remote desktop technology, our solution enables users to control their customized virtual desktop environment by means of an HTML5-based web interface running on any computer equipped with a browser, wherever they are.

  8. The state of ergonomics for mobile computing technology.

    Science.gov (United States)

    Dennerlein, Jack T

    2015-01-01

    Because mobile computing technologies, such as notebook computers, smart mobile phones, and tablet computers, afford users many different configurations through their intended mobility, there is concern about their effects on musculoskeletal pain and a need for usage recommendations. Therefore, the main goal of this paper is to determine which best practices surrounding the use of mobile computing devices can be gleaned from current field and laboratory studies of these devices. An expert review was completed. Field studies have documented the various configurations, often including non-neutral postures, that users adopt when using mobile technology, along with some evidence suggesting that longer duration of use is associated with more discomfort. It is therefore prudent for users to take advantage of their mobility and not get stuck in any given posture for too long. The use of accessories such as appropriate cases or riser stands, as well as external keyboards and pointing devices, can also improve postures and comfort. Overall, the state of ergonomics for mobile technology is a work in progress and there are more research questions to be addressed.

  9. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    Science.gov (United States)

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-10-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store the different parameters and configuration data needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and services. Being an intermediate middleware system between clients and external information sources (like central BDII, GOCDB, MyOSG), AGIS defines the relations between experiment-specific used resources and physical distributed computing capabilities. Having been in production during LHC Run 1, AGIS became the central information system for Distributed Computing in ATLAS and is continuously evolving to fulfil new user requests, enable enhanced operations and follow the extension of the ATLAS Computing model. The ATLAS Computing model and the data structures used by Distributed Computing applications and services are continuously evolving toward newer requirements from the ADC community. In this note, we describe the evolution and the recent developments of AGIS functionalities related to the integration of new technologies that have recently become widely used in ATLAS Computing, like the flexible utilization of opportunistic Cloud and HPC resources, ObjectStore services integration for the Distributed Data Management (Rucio) and ATLAS workload management (PanDA) systems, unified storage protocol declaration required for PanDA Pilot site movers, and others. The improvements of the information model and general updates are also shown; in particular, we explain how other collaborations outside ATLAS could benefit from the system as a computing resources information catalogue. AGIS is evolving towards a common information system, not coupled to a specific experiment.

  10. Research and development of grid computing technology in center for computational science and e-systems of Japan Atomic Energy Agency

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2007-01-01

    The Center for Computational Science and E-systems of the Japan Atomic Energy Agency (CCSE/JAEA) has carried out R and D of grid computing technology. Since 1995, R and D to realize computational assistance for researchers, called Seamless Thinking Aid (STA), and then to share intellectual resources, called Information Technology Based Laboratory (ITBL), have been conducted, leading to the construction of an intelligent infrastructure for atomic energy research called Atomic Energy Grid InfraStructure (AEGIS) under the Japanese national project 'Development and Applications of Advanced High-Performance Supercomputer'. It aims to enable the synchronization of three themes: 1) Computer-Aided Research and Development (CARD) to realize an environment for STA, 2) Computer-Aided Engineering (CAEN) to establish Multi Experimental Tools (MEXT), and 3) Computer-Aided Science (CASC) to promote Atomic Energy Research and Investigation (AERI). This article reviews the achievements in R and D of grid computing technology obtained so far. (T. Tanaka)

  11. The Adoption of Grid Computing Technology by Organizations: A Quantitative Study Using Technology Acceptance Model

    Science.gov (United States)

    Udoh, Emmanuel E.

    2010-01-01

    Advances in grid technology have enabled some organizations to harness enormous computational power on demand. However, the prediction of widespread adoption of the grid technology has not materialized despite the obvious grid advantages. This situation has encouraged intense efforts to close the research gap in the grid adoption process. In this…

  12. Information Technology in project-organized electronic and computer technology engineering education

    DEFF Research Database (Denmark)

    Nielsen, Kirsten Mølgaard; Nielsen, Jens Frederik Dalsgaard

    1999-01-01

    This paper describes the integration of IT in the education of electronic and computer technology engineers at the Institute of Electronic Systems, Aalborg University, Denmark. At the Institute, Information Technology is an important tool in all aspects of the education as well as for communication...

  13. Use of new computer technologies in elementary particle physics

    International Nuclear Information System (INIS)

    Gaines, I.; Nash, T.

    1987-01-01

    Elementary particle physics and computers have progressed together for as long as anyone can remember. The symbiosis is surprising considering the dissimilar objectives of these fields, but physics understanding cannot be had simply by detecting the passage of particles. It requires a selection of interesting events and their analysis in comparison with quantitative theoretical predictions. The extraordinary reach of experimentalists into realms ever further removed from everyday observation has frequently encountered technology constraints. Pushing away such barriers has been an essential activity of the physicist since long before Rossi developed the first practical electronic AND gates as coincidence circuits in 1930. This article describes the latest episode of this history: the development of new computer technologies to meet the varied and increasing appetite for computing of experimental (and theoretical) high energy physics.

  14. Advanced Certification Program for Computer Graphic Specialists. Final Performance Report.

    Science.gov (United States)

    Parkland Coll., Champaign, IL.

    A pioneer program in computer graphics was implemented at Parkland College (Illinois) to meet the demand for specialized technicians to visualize data generated on high performance computers. In summer 1989, 23 students were accepted into the pilot program. Courses included C programming, calculus and analytic geometry, computer graphics, and…

  15. Performance of the TRISTAN computer control network

    International Nuclear Information System (INIS)

    Koiso, H.; Abe, K.; Akiyama, A.; Katoh, T.; Kikutani, E.; Kurihara, N.; Kurokawa, S.; Oide, K.; Shinomoto, M.

    1985-01-01

    An N-to-N token ring network of twenty-four minicomputers controls the TRISTAN accelerator complex. The computers are linked by optical fiber cables with 10 Mbps transmission speed. The software system is based on NODAL, a multi-computer interpreter language developed at the CERN SPS. Typical messages exchanged between computers are NODAL programs and NODAL variables transmitted by the EXEC and REMIT commands. These messages are exchanged as a cluster of packets whose maximum size is 512 bytes. At present, eleven minicomputers are connected to the network and the total length of the ring is 1.5 km. Under these conditions, the maximum attainable throughput is 980 kbytes/s. The response time of a paired EXEC and REMIT transaction, which transmits a NODAL array A together with the one-line program 'REMIT A' and immediately remits A, is measured to be 95 + 0.039χ ms, where χ is the array size in bytes. In ordinary accelerator operations, the maximum channel utilization is 2%, the average packet length is 96 bytes and the transmission rate is 10 kbytes/s.
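
    As a quick check of scale, the measured linear response model above (a 95 ms fixed overhead plus 0.039 ms per byte) is easy to evaluate; the snippet below simply plugs in two array sizes and is an illustration, not part of the original report.

        def tristan_response_ms(array_size_bytes: int) -> float:
            """Measured EXEC/REMIT round trip: 95 ms fixed cost + 0.039 ms per byte."""
            return 95.0 + 0.039 * array_size_bytes

        # One maximum-size packet (512 bytes) costs about 115 ms end to end,
        # while a 10 kB array costs about 485 ms.
        for size in (512, 10_000):
            print(size, round(tristan_response_ms(size), 1), "ms")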

  16. Integrated modeling tool for performance engineering of complex computer systems

    Science.gov (United States)

    Wright, Gary; Ball, Duane; Hoyt, Susan; Steele, Oscar

    1989-01-01

    This report summarizes Advanced System Technologies' accomplishments on the Phase 2 SBIR contract NAS7-995. The technical objectives of the report are: (1) to develop an evaluation version of a graphical, integrated modeling language according to the specification resulting from the Phase 2 research; and (2) to determine the degree to which the language meets its objectives by evaluating ease of use, utility of two sets of performance predictions, and the power of the language constructs. The technical approach followed to meet these objectives was to design, develop, and test an evaluation prototype of a graphical, performance prediction tool. The utility of the prototype was then evaluated by applying it to a variety of test cases found in the literature and in AST case histories. Numerous models were constructed and successfully tested. The major conclusion of this Phase 2 SBIR research and development effort is that complex, real-time computer systems can be specified in a non-procedural manner using combinations of icons, windows, menus, and dialogs. Such a specification technique provides an interface that system designers and architects find natural and easy to use. In addition, PEDESTAL's multiview approach provides system engineers with the capability to perform the trade-offs necessary to produce a design that meets timing performance requirements. Sample system designs analyzed during the development effort showed that models could be constructed in a fraction of the time required by non-visual system design capture tools.

  17. A high performance scientific cloud computing environment for materials simulations

    Science.gov (United States)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and performance monitoring, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a Java-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.
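
    The record does not reproduce the SCC toolset itself; purely to illustrate the kind of automation it describes (one-call creation of a virtual cluster on EC2), a sketch using the boto3 AWS SDK might look as follows. The AMI ID, instance type and key name are placeholder assumptions.

        import boto3  # AWS SDK for Python; credentials assumed already configured

        # A real toolset would use a machine image with the materials science
        # codes pre-installed (the 'scientific virtual machine' of the record).
        AMI_ID = "ami-0123456789abcdef0"  # hypothetical image ID
        NODE_TYPE = "c5.xlarge"
        KEY_NAME = "scc-keypair"

        def create_virtual_cluster(n_nodes: int):
            """Boot n identical EC2 instances to act as one parallel cluster."""
            ec2 = boto3.resource("ec2")
            nodes = ec2.create_instances(
                ImageId=AMI_ID,
                InstanceType=NODE_TYPE,
                KeyName=KEY_NAME,
                MinCount=n_nodes,
                MaxCount=n_nodes,
            )
            for node in nodes:
                node.wait_until_running()  # block until each node is usable
            return nodes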

  18. 4th International Conference on Applied Computing and Information Technology

    CERN Document Server

    2017-01-01

    This edited book presents scientific results of the 4th International Conference on Applied Computing and Information Technology (ACIT 2016), which was held on December 12–14, 2016 in Las Vegas, USA. The aim of this conference was to bring together researchers and scientists, businessmen and entrepreneurs, teachers, engineers, computer users, and students to discuss the numerous fields of computer science and to share their experiences and exchange new ideas and information in a meaningful way. The conference also aimed to present research results on all aspects (theory, applications and tools) of computer and information science, and to discuss the practical challenges encountered along the way and the solutions adopted to solve them. The conference organizers selected the best papers from those accepted for presentation at the conference. The papers were chosen based on review scores submitted by members of the Program Committee, and underwent further rigorous rounds of review. Th...

  19. Research on application of computer technologies in jewelry process

    Directory of Open Access Journals (Sweden)

    Junbo Xia

    2017-06-01

    Full Text Available Jewelry production works with precious raw materials and must keep processing losses low. The traditional manual mode is unable to meet the real needs of enterprises, while the involvement of computer technology can solve this practical problem. At present, the main obstacle restricting the application of computers in jewelry production is the failure to find a production model that can serve the whole industry chain with the computer as the core of production. This paper designs a “synchronous and diversified” production model with “computer-aided design technology” and “rapid prototyping technology” as the core, tests it with actual production cases, and achieves certain results, which are forward-looking and advanced.

  20. A high performance scientific cloud computing environment for materials simulations

    OpenAIRE

    Jorissen, Kevin; Vila, Fernando D.; Rehr, John J.

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including...

  1. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    Science.gov (United States)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress so far in harmonising the underlying data collections for future interdisciplinary research across these large-volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  2. Technology Evaluation Report 17. Videoconferencing in Theatre and Performance Studies

    Directory of Open Access Journals (Sweden)

    Mark Childs

    2003-04-01

    Full Text Available Previous reports in this series have indicated the growing acceptance of video-conferencing in education delivery. The current report compares a series of video-conferencing methods in an activity requiring precision of expression and communication: theatre and performance studies. The Accessing and Networking with National and International Expertise (ANNIE) project is a two-year project undertaken jointly by the University of Warwick and the University of Kent at Canterbury, running from March 2001 to March 2003. The project's aim is to enhance students' learning experience in theatre studies by enabling access to research-based teaching and to workshops led by practitioners of national and international standing. Various technologies have been used, particularly ISDN video-conferencing, computer-mediated conferencing, and the Internet. This report concludes that video-conferencing methods will gain acceptance in education as academic schools themselves become able to operate commonly available technology without the assistance of specialised service units.

  3. Real-time Tsunami Inundation Prediction Using High Performance Computers

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2014-12-01

    Recently, off-shore tsunami observation stations based on cabled ocean-bottom pressure gauges are actively being deployed, especially in Japan. These cabled systems are designed to provide real-time tsunami data before tsunamis reach coastlines for disaster mitigation purposes. To realize the real benefits of these observations, real-time analysis techniques that make effective use of these data are necessary. A representative study was made by Tsushima et al. (2009), who proposed a method to provide instant tsunami source prediction based on acquired tsunami waveform data. As time passes, the prediction is improved by using updated waveform data. After a tsunami source is predicted, tsunami waveforms are synthesized from pre-computed tsunami Green functions of linear long wave equations. Tsushima et al. (2014) updated the method by combining the tsunami waveform inversion with an instant inversion of coseismic crustal deformation and improved the prediction accuracy and speed in the early stages. For disaster mitigation purposes, real-time predictions of tsunami inundation are also important. In this study, we discuss the possibility of real-time tsunami inundation predictions, which require faster-than-real-time tsunami inundation simulation in addition to instant tsunami source analysis. Although the computational cost of solving the non-linear shallow water equations for inundation predictions is large, it has become feasible through recent developments in high performance computing technologies. We conducted parallel computations of tsunami inundation and achieved 6.0 TFLOPS by using 19,000 CPU cores. We employed a leap-frog finite difference method with nested staggered grids whose resolutions range from 405 m to 5 m. The resolution ratio of each nested domain was 1/3. The total number of grid points was 13 million, and the time step was 0.1 seconds. Tsunami sources of the 2011 Tohoku-oki earthquake were tested. The inundation prediction up to 2 hours after the
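
    The numerical core named above, a leap-frog finite difference scheme for the shallow water equations on staggered grids, can be illustrated in one dimension. The sketch below is my own minimal linear version under idealized assumptions (uniform depth, no nesting, no non-linear or inundation terms), not the authors' code.

        import numpy as np

        g, depth = 9.81, 4000.0              # gravity (m/s^2), uniform depth (m)
        nx, dx = 400, 2000.0                 # number of cells, grid spacing (m)
        dt = 0.5 * dx / np.sqrt(g * depth)   # time step well inside the CFL limit

        x = np.arange(nx) * dx
        eta = np.exp(-((x - x.mean()) / 50e3) ** 2)  # initial sea-surface hump
        u = np.zeros(nx + 1)                 # velocities on staggered cell faces

        def step(eta, u):
            """One staggered leap-frog update: momentum, then continuity."""
            u[1:-1] -= g * dt / dx * (eta[1:] - eta[:-1])
            eta -= depth * dt / dx * (u[1:] - u[:-1])
            return eta, u

        for _ in range(1000):                # propagate the wave for ~1.4 hours
            eta, u = step(eta, u)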

  4. The application of cloud computing to scientific workflows: a study of cost and performance.

    Science.gov (United States)

    Berriman, G Bruce; Deelman, Ewa; Juve, Gideon; Rynge, Mats; Vöckler, Jens-S

    2013-01-28

    The current model of transferring data from data centres to desktops for analysis will soon be rendered impractical by the accelerating growth in the volume of science datasets. Processing will instead often take place on high-performance servers co-located with data. Evaluations of how new technologies such as cloud computing would support such a new distributed computing model are urgently needed. Cloud computing is a new way of purchasing computing and storage resources on demand through virtualization technologies. We report here the results of investigations of the applicability of commercial cloud computing to scientific computing, with an emphasis on astronomy, including investigations of what types of applications can be run cheaply and efficiently on the cloud, and an example of an application well suited to the cloud: processing a large dataset to create a new science product.

  5. High Performance Computing in Science and Engineering '08 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2009-01-01

    The discussions and plans on all scientific, advisory, and political levels to realize an even larger “European Supercomputer” in Germany, where the hardware costs alone will be hundreds of millions of Euro – much more than in the past – are getting closer to realization. As part of the strategy, the three national supercomputing centres HLRS (Stuttgart), NIC/JSC (Jülich) and LRZ (Munich) have formed the Gauss Centre for Supercomputing (GCS) as a new virtual organization enabled by an agreement between the Federal Ministry of Education and Research (BMBF) and the state ministries for research of Baden-Württemberg, Bayern, and Nordrhein-Westfalen. Already today, the GCS provides the most powerful high-performance computing infrastructure in Europe. Through GCS, HLRS participates in the European project PRACE (Partnership for Advanced Computing in Europe) and extends its reach to all European member countries. These activities align well with the activities of HLRS in the European HPC infrastructur...

  6. Computing, information, and communications: Technologies for the 21st Century

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1998-11-01

    To meet the challenges of a radically new and technologically demanding century, the Federal Computing, Information, and Communications (CIC) programs are investing in long-term research and development (R and D) to advance computing, information, and communications in the United States. CIC R and D programs help Federal departments and agencies to fulfill their evolving missions, assure the long-term national security, better understand and manage the physical environment, improve health care, help improve the teaching of children, provide tools for lifelong training and distance learning to the workforce, and sustain critical US economic competitiveness. One of the nine committees of the National Science and Technology Council (NSTC), the Committee on Computing, Information, and Communications (CCIC)--through its CIC R and D Subcommittee--coordinates R and D programs conducted by twelve Federal departments and agencies in cooperation with US academia and industry. These R and D programs are organized into five Program Component Areas: (1) HECC--High End Computing and Computation; (2) LSN--Large Scale Networking, including the Next Generation Internet Initiative; (3) HCS--High Confidence Systems; (4) HuCS--Human Centered Systems; and (5) ETHR--Education, Training, and Human Resources. A brief synopsis of FY 1997 accomplishments and FY 1998 goals by PCA is presented. This report, which supplements the President's Fiscal Year 1998 Budget, describes the interagency CIC programs.

  7. Building a High Performance Computing Infrastructure for Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Belov, S; Kaplin, V; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2011-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including the Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies (ICT), and the Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements for the architecture of the computing farms involved in its research field, several computing facilities are currently hosted by NSC institutes, each optimized for a particular set of tasks; the largest are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM and MG), and the Grid Computing Facility of BINP. Recently a dedicated optical network with an initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share the computing resources among the research communities of the participating institutes, thus providing a common platform for building the computing infrastructure for various scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technologies based on the XEN and KVM platforms. The solution implemented was tested thoroughly within the computing environment of the KEDR detector experiment being carried out at BINP, and is foreseen to be applied to the use cases of other HEP experiments in the near future.

  8. Technology computer aided design simulation for VLSI MOSFET

    CERN Document Server

    Sarkar, Chandan Kumar

    2013-01-01

    Responding to recent developments and a growing VLSI circuit manufacturing market, Technology Computer Aided Design: Simulation for VLSI MOSFET examines advanced MOSFET processes and devices through TCAD numerical simulations. The book provides a balanced summary of TCAD and MOSFET basic concepts, equations, physics, and new technologies related to TCAD and MOSFET. A firm grasp of these concepts allows for the design of better models, thus streamlining the design process, saving time and money. This book places emphasis on the importance of modeling and simulations of VLSI MOS transistors and

  9. Cloud computing and patient engagement: leveraging available technology.

    Science.gov (United States)

    Noblin, Alice; Cortelyou-Ward, Kendall; Servan, Rosa M

    2014-01-01

    Cloud computing technology has the potential to transform medical practices and improve patient engagement and quality of care. However, issues such as privacy and security and "fit" can make incorporation of the cloud an intimidating decision for many physicians. This article summarizes the four most common types of clouds and discusses their ideal uses, how they engage patients, and how they improve the quality of care offered. This technology also can be used to meet Meaningful Use requirements 1 and 2; and, if speculation is correct, the cloud will provide the necessary support needed for Meaningful Use 3 as well.

  10. 12th International Conference on Computing and Information Technology

    CERN Document Server

    Boonkrong, Sirapat; Unger, Herwig

    2016-01-01

    This proceedings book presents recent research work and results in the area of communication and information technologies. The chapters of this book contain the main, well-selected and reviewed contributions of scientists who met at the 12th International Conference on Computing and Information Technology (IC2IT), held during 7th - 8th July 2016 in Khon Kaen, Thailand. The book is divided into three parts: “User Centric Data Mining and Text Processing”, “Data Mining Algorithms and their Applications” and “Optimization of Complex Networks”.

  11. Data Mining Based on Cloud-Computing Technology

    Directory of Open Access Journals (Sweden)

    Ren Ying

    2016-01-01

    Full Text Available There are performance bottlenecks and scalability problems when a traditional data-mining system is used in cloud computing. In this paper, we present a data-mining platform based on cloud computing. Compared with a traditional data-mining system, this platform is highly scalable, has massive data processing capacity, is service-oriented, and has low hardware cost. This platform can support the design and application of a wide range of distributed data-mining systems.

  12. Importance of Computer Model Validation in Pyroprocessing Technology Development

    Energy Technology Data Exchange (ETDEWEB)

    Jung, Y. E.; Li, Hui; Yim, M. S. [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of)

    2014-05-15

    In this research, we developed a plan for experimental validation of one of the computer models developed for ER process modeling, i.e., the ERAD code. Several candidate surrogate materials were selected for the experiment considering their chemical and physical properties. Molten salt-based pyroprocessing technology is being examined internationally as an alternative to aqueous technology for treating spent nuclear fuel. The central process in pyroprocessing is electrorefining (ER), which separates uranium from the transuranic elements and fission products present in spent nuclear fuel. ER is a widely used process in the minerals industry to purify impure metals. Studies of ER using actual spent nuclear fuel materials are problematic for both technical and political reasons. Therefore, the initial effort at ER process optimization is made by using computer models. A number of models have been developed for this purpose. But as validation of these models is incomplete and often problematic, the simulation results from these models are inherently uncertain.

  13. Software Systems for High-performance Quantum Computing

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; Britt, Keith A [ORNL

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present quantum programming and execution models, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.
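
    The hybrid model described, a conventional host program offloading select kernels to a quantum device, can be made concrete with a toy sketch. The snippet below uses a few-line statevector simulation in place of any real quantum software stack (an illustration, not the authors' system).

        import numpy as np

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate

        def quantum_kernel(theta: float) -> float:
            """Toy one-qubit kernel: run H, Rz(theta), H on |0> and return P(|0>)."""
            state = np.array([1.0, 0.0], dtype=complex)          # |0>
            state = H @ state
            state *= np.exp(-0.5j * theta * np.array([1, -1]))   # Rz(theta) phases
            state = H @ state
            return abs(state[0]) ** 2                            # equals cos^2(theta/2)

        # Classical host loop: sweep the kernel parameter and keep the best value,
        # mirroring how a conventional program would drive a quantum accelerator.
        best = max(np.linspace(0.0, 2 * np.pi, 50), key=quantum_kernel)
        print(f"theta maximizing P(|0>): {best:.3f}")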

  14. Computer Self-Efficacy, Computer Anxiety, Performance and Personal Outcomes of Turkish Physical Education Teachers

    Science.gov (United States)

    Aktag, Isil

    2015-01-01

    The purpose of this study is to determine the computer self-efficacy, performance outcome, personal outcome, and affect and anxiety level of physical education teachers. Influence of teaching experience, computer usage and participation of seminars or in-service programs on computer self-efficacy level were determined. The subjects of this study…

  15. An Overview of Computer Network security and Research Technology

    OpenAIRE

    Rathore, Vandana

    2016-01-01

    The rapid development in the field of computer networks and systems brings both convenience and security threats for users. Security threats include network security and data security. Network security refers to the reliability, confidentiality, integrity and availability of the information in the system. The main objective of network security is to maintain the authenticity, integrity, confidentiality, availability of the network. This paper introduces the details of the technologies used in...

  16. The Application of Computer Music Technology to College Students

    OpenAIRE

    Wang Na

    2016-01-01

    Contemporary music education started late in China; built on western teaching theories, it has formed its own unique system, which has a great influence on present computer music technology. This paper analyzes the advantages and disadvantages of that influence on the development of Chinese class music, works out solutions to the existing problems, and sums up the practical lessons of the impact of contemporary music on education.

  17. The Application of Computer Music Technology to College Students

    Directory of Open Access Journals (Sweden)

    Wang Na

    2016-01-01

    Full Text Available Contemporary music education started late in China; built on western teaching theories, it has formed its own unique system, which has a great influence on present computer music technology. This paper analyzes the advantages and disadvantages of that influence on the development of Chinese class music, works out solutions to the existing problems, and sums up the practical lessons of the impact of contemporary music on education.

  18. Advanced Computing for 21st Century Accelerator Science and Technology

    International Nuclear Information System (INIS)

    Dragt, Alex J.

    2004-01-01

    Dr. Dragt of the University of Maryland is one of the Institutional Principal Investigators for the SciDAC Accelerator Modeling Project Advanced Computing for 21st Century Accelerator Science and Technology whose principal investigators are Dr. Kwok Ko (Stanford Linear Accelerator Center) and Dr. Robert Ryne (Lawrence Berkeley National Laboratory). This report covers the activities of Dr. Dragt while at Berkeley during spring 2002 and at Maryland during fall 2003

  19. Technology diversification, coherence, and performance of firms

    NARCIS (Netherlands)

    Leten, B.; Belderbos, R.A.; Looy, van B.

    2007-01-01

    Technological diversification at the firm level (i.e., the expansion of a firm's technology base into a wide range of technology fields) is found to be a prevailing phenomenon in all three major industrialized regions—the United States, Europe, and Japan—prompting the term multitechnology

  20. Computer performance evaluation of FACOM 230-75 computer system, (2)

    International Nuclear Information System (INIS)

    Fujii, Minoru; Asai, Kiyoshi

    1980-08-01

    This report describes computer performance evaluations for the FACOM 230-75 computers at JAERI. The evaluations are performed on the following items: (1) Cost/benefit analysis of timesharing terminals, (2) Analysis of the response time of timesharing terminals, (3) Analysis of throughput time for batch job processing, (4) Estimation of current potential demands for computer time, (5) Determination of the appropriate number of card readers and line printers. These evaluations are made mainly from the standpoint of cost reduction of computing facilities. The techniques adopted are very practical ones. This report will be useful for those who are concerned with the management of computing installations. (author)

  1. High Performance Computing in Science and Engineering '14

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2015-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS). The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and   engineers. The book comes with a wealth of color illustrations and tables of results.  

  2. Basic research and 12 years of clinical experience in computer-assisted navigation technology: a review.

    Science.gov (United States)

    Ewers, R; Schicho, K; Undt, G; Wanschitz, F; Truppe, M; Seemann, R; Wagner, A

    2005-01-01

    Computer-aided surgical navigation technology is commonly used in craniomaxillofacial surgery. It offers substantial improvement regarding esthetic and functional aspects in a range of surgical procedures. Based on augmented reality principles, where the real operative site is merged with computer generated graphic information, computer-aided navigation systems were employed, among other procedures, in dental implantology, arthroscopy of the temporomandibular joint, osteotomies, distraction osteogenesis, image guided biopsies and removal of foreign bodies. The decision to perform a procedure with or without computer-aided intraoperative navigation depends on the expected benefit to the procedure as well as on the technical expenditure necessary to achieve that goal. This paper comprises the experience gained in 12 years of research, development and routine clinical application. One hundred and fifty-eight operations with successful application of surgical navigation technology--divided into five groups--are evaluated regarding the criteria "medical benefit" and "technical expenditure" necessary to perform these procedures. Our results indicate that the medical benefit is likely to outweigh the expenditure of technology, with few exceptions (calvaria transplant, resection of the temporal bone, reconstruction of the orbital floor). Especially in dental implantology, specialized software reduces the time and additional costs necessary to plan and perform procedures with computer-aided surgical navigation.

  3. Computer-Aided Modeling of Lipid Processing Technology

    DEFF Research Database (Denmark)

    Diaz Tovar, Carlos Axel

    2011-01-01

    increase along with growing interest in biofuels, the oleochemical industry faces in the upcoming years major challenges in terms of design and development of better products and more sustainable processes to make them. Computer-aided methods and tools for process synthesis, modeling and simulation … are widely used for design, analysis, and optimization of processes in the chemical and petrochemical industries. These computer-aided tools have helped the chemical industry to evolve beyond commodities toward specialty chemicals and ‘consumer oriented chemicals based products’. Unfortunately … to develop systematic computer-aided methods (property models) and tools (database) related to the prediction of the necessary physical properties suitable for design and analysis of processes employing lipid technologies. The methods and tools include: the development of a lipid-database (CAPEC...

  4. High Performance Computing Modernization Program Kerberos Throughput Test Report

    Science.gov (United States)

    2017-10-26

    Naval Research Laboratory, Washington, DC 20375-5320. NRL/MR/5524--17-9751. High Performance Computing Modernization Program Kerberos Throughput Test Report, Daniel G. Gdula. [Standard report documentation page fields omitted.]

  5. Performance evaluation of computer and communication systems

    CERN Document Server

    Le Boudec, Jean-Yves

    2011-01-01

    … written by a scientist successful in performance evaluation, it is based on his experience and provides many ideas not only to laymen entering the field, but also to practitioners looking for inspiration. The work can be read systematically as a textbook on how to model and test the derived hypotheses on the basis of simulations. Also, separate parts can be studied, as the chapters are self-contained. … the book can be successfully used either for self-study or as a supplementary book for a lecture. I believe that different types of readers will like it: practicing engineers and resea

  6. Computer task performance by subjects with Duchenne muscular dystrophy.

    Science.gov (United States)

    Malheiros, Silvia Regina Pinheiro; da Silva, Talita Dias; Favero, Francis Meire; de Abreu, Luiz Carlos; Fregni, Felipe; Ribeiro, Denise Cardoso; de Mello Monteiro, Carlos Bandeira

    2016-01-01

    Two specific objectives were established to quantify computer task performance among people with Duchenne muscular dystrophy (DMD). First, we compared simple computational task performance between subjects with DMD and age-matched typically developing (TD) subjects. Second, we examined correlations between the ability of subjects with DMD to learn the computational task and their motor functionality, age, and initial task performance. The study included 84 individuals (42 with DMD, mean age of 18±5.5 years, and 42 age-matched controls). They executed a computer maze task; all participants performed the acquisition (20 attempts) and retention (five attempts) phases, repeating the same maze. A different maze was used to verify transfer performance (five attempts). The Motor Function Measure Scale was applied, and the results were compared with maze task performance. In the acquisition phase, a significant decrease was found in movement time (MT) between the first and last acquisition block, but only for the DMD group. For the DMD group, MT during transfer was shorter than during the first acquisition block, indicating improvement from the first acquisition block to transfer. In addition, the TD group showed shorter MT than the DMD group across the study. DMD participants improved their performance after practicing a computational task; however, the difference in MT was present in all attempts among DMD and control subjects. Computational task improvement was positively influenced by the initial performance of individuals with DMD. In turn, the initial performance was influenced by their distal functionality but not their age or overall functionality.

  7. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high performance computing are analyzed by bibliometric approaches. This study aims at providing computational physicists utilizing high-performance computing, as well as policy planners, with useful bibliometric results for an assessment of research activities. To achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles from the Scopus database from Elsevier covering the time period 2004-2013. We extracted the author rank in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their co-authors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.
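
    A co-authorship network of the kind described can be assembled in a few lines. The sketch below uses the networkx library and made-up paper data; it illustrates the general construction, not the study's actual pipeline.

        import itertools
        import networkx as nx  # pip install networkx

        # Hypothetical records: each paper is a list of author names.
        papers = [
            ["Kim", "Lee", "Park"],
            ["Kim", "Chen"],
            ["Lee", "Park"],
        ]

        G = nx.Graph()
        for authors in papers:
            # Every pair of co-authors on a paper gets (or strengthens) an edge.
            for a, b in itertools.combinations(authors, 2):
                weight = G.get_edge_data(a, b, {"weight": 0})["weight"]
                G.add_edge(a, b, weight=weight + 1)

        # Rank authors by degree, a simple proxy for collaboration activity.
        print(sorted(G.degree, key=lambda pair: pair[1], reverse=True))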

  8. Examining hemodialyzer membrane performance using proteomic technologies.

    Science.gov (United States)

    Bonomini, Mario; Pieroni, Luisa; Di Liberato, Lorenzo; Sirolli, Vittorio; Urbani, Andrea

    2018-01-01

    The success and the quality of hemodialysis therapy are mainly related to both clearance and biocompatibility properties of the artificial membrane packed in the hemodialyzer. Performance of a membrane is strongly influenced by its interaction with the plasma protein repertoire during the extracorporeal procedure. Recognition that a number of medium-high molecular weight solutes, including proteins and protein-bound molecules, are potentially toxic has prompted the development of more permeable membranes. Such membrane engineering, however, may cause loss of vital proteins, with membrane removal being nonspecific. In addition, plasma proteins can be adsorbed onto the membrane surface upon blood contact during dialysis. Adsorption can contribute to the removal of toxic compounds and governs the biocompatibility of a membrane, since surface-adsorbed proteins may trigger a variety of biologic blood pathways with pathophysiologic consequences. Over the last years, use of proteomic approaches has allowed polypeptide spectrum involved in the process of hemodialysis, a key issue previously hampered by lack of suitable technology, to be assessed in an unbiased manner and in its full complexity. Proteomics has been successfully applied to identify and quantify proteins in complex mixtures such as dialysis outflow fluid and fluid desorbed from dialysis membrane containing adsorbed proteins. The identified proteins can also be characterized by their involvement in metabolic and signaling pathways, molecular networks, and biologic processes through application of bioinformatics tools. Proteomics may thus provide an actual functional definition as to the effect of a membrane material on plasma proteins during hemodialysis. Here, we review the results of proteomic studies on the performance of hemodialysis membranes, as evaluated in terms of solute removal efficiency and blood-membrane interactions. The evidence collected indicates that the information provided by proteomic

  9. Quantum Accelerators for High-performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system in managing these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  10. Emerging technologies for high performance infrared detectors

    Science.gov (United States)

    Tan, Chee Leong; Mohseni, Hooman

    2018-01-01

    Infrared photodetectors (IRPDs) have become important devices in various applications such as night vision, military missile tracking, medical imaging, industrial defect imaging, environmental sensing, and exoplanet exploration. Mature semiconductor technologies such as mercury cadmium telluride and III-V material-based photodetectors have been dominating the industry. However, in the last few decades, significant funding and research have been focused on improving the performance of IRPDs: lowering the fabrication cost, simplifying the fabrication processes, increasing the production yield, and increasing the operating temperature by making use of advances in nanofabrication and nanotechnology. We will first review the nanomaterials with suitable electronic and mechanical properties, such as two-dimensional materials, graphene, transition metal dichalcogenides, and metal oxides. We compare these with more traditional low-dimensional materials such as quantum wells, quantum dots, quantum dots in wells, semiconductor superlattices, nanowires, nanotubes, and colloidal quantum dots. We will also review the nanostructures used for enhanced light-matter interaction to boost IRPD sensitivity. These include nanostructured antireflection coatings, optical antennas, plasmonics, and metamaterials.

  11. Emerging technologies for high performance infrared detectors

    Directory of Open Access Journals (Sweden)

    Tan Chee Leong

    2018-01-01

    Full Text Available Infrared photodetectors (IRPDs) have become important devices in various applications such as night vision, military missile tracking, medical imaging, industrial defect imaging, environmental sensing, and exoplanet exploration. Mature semiconductor technologies such as mercury cadmium telluride and III–V material-based photodetectors have been dominating the industry. However, in the last few decades, significant funding and research have been focused on improving the performance of IRPDs: lowering the fabrication cost, simplifying the fabrication processes, increasing the production yield, and increasing the operating temperature by making use of advances in nanofabrication and nanotechnology. We will first review the nanomaterials with suitable electronic and mechanical properties, such as two-dimensional materials, graphene, transition metal dichalcogenides, and metal oxides. We compare these with more traditional low-dimensional materials such as quantum wells, quantum dots, quantum dots in wells, semiconductor superlattices, nanowires, nanotubes, and colloidal quantum dots. We will also review the nanostructures used for enhanced light-matter interaction to boost IRPD sensitivity. These include nanostructured antireflection coatings, optical antennas, plasmonics, and metamaterials.

  12. A Heterogeneous High-Performance System for Computational and Computer Science

    Science.gov (United States)

    2016-11-15

    expand the research infrastructure at the institution but also to enhance the high-performance computing training provided to both undergraduate and... cloud computing, supercomputing, and the availability of cheap memory and storage led to enormous amounts of data to be sifted through in forensic... High-Performance Computing (HPC) tools that can be integrated with existing curricula and support our research to modernize and dramatically advance

  13. Radiotherapy Monte Carlo simulation using cloud computing technology.

    Science.gov (United States)

    Poole, C M; Cornelius, I; Trapp, J V; Langton, C M

    2012-12-01

    Cloud computing allows for vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware, as a proof of principle.
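
    The optimality claim follows from per-hour billing: n machines each run for ceil(T/n) billable hours, so the billed machine-hours equal T exactly only when n divides T. The snippet below is my own illustration of that billing arithmetic, assuming simple hourly rounding as the record implies.

        import math

        def billed_machine_hours(total_hours: int, n_machines: int) -> int:
            """Machine-hours billed when a job is split evenly across n machines."""
            return n_machines * math.ceil(total_hours / n_machines)

        # For a 12-hour simulation, divisors of 12 bill exactly 12 machine-hours,
        # while e.g. n = 5 bills 5 * ceil(12/5) = 15.
        for n in (2, 3, 4, 5, 6, 12):
            print(n, billed_machine_hours(12, n))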

  14. Radiotherapy Monte Carlo simulation using cloud computing technology

    International Nuclear Information System (INIS)

    Poole, C.M.; Cornelius, I.; Trapp, J.V.; Langton, C.M.

    2012-01-01

    Cloud computing allows for vast computational resources to be leveraged quickly and easily in bursts as and when required. Here we describe a technique that allows Monte Carlo radiotherapy dose calculations to be performed using GEANT4 and executed in the cloud, with relative simulation cost and completion time evaluated as a function of machine count. As expected, simulation completion time decreases as 1/n for n parallel machines, and relative simulation cost is found to be optimal where n is a factor of the total simulation time in hours. Using the technique, we demonstrate the potential usefulness of cloud computing as a solution for rapid Monte Carlo simulation for radiotherapy dose calculation without the need for dedicated local computer hardware, as a proof of principle.

  15. Quantum Accelerators for High-Performance Computing Systems

    OpenAIRE

    Britt, Keith A.; Mohiyaddin, Fahd A.; Humble, Travis S.

    2017-01-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantu...

  16. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  17. Production Well Performance Enhancement using Sonication Technology

    Energy Technology Data Exchange (ETDEWEB)

    Adewumi, Michael A; Ityokumbul, M Thaddeus; Watson, Robert W; Eltohami, Eltohami; Farias, Mario; Heckman, Glenn; Houlihan, Brendan; Karoor, Samata Prakash; Miller, Bruce G; Mohammed, Nazia; Olanrewaju, Johnson; Ozdemir, Mine; Rejepov, Dautmamed; Sadegh, Abdallah A; Quammie, Kevin E; Zaghloul, Jose; Hughes, W Jack; Montgomery, Thomas C

    2005-12-31

    The objective of this project was to develop a sonic well performance enhancement technology focused on near-wellbore formation damage. In order to achieve this objective, a three-year project was defined and broken into four tasks. The overall objective was to foster a better understanding of the mechanisms involved in sonic energy interactions with fluid flow in porous media and to adapt such knowledge for field applications. The four tasks are: • Laboratory studies • Mathematical modeling • Sonic tool design and development • Field demonstration The project was designed to be completed in three years; however, due to budget cuts, support was only provided for the first year, and hence the full objective of the project could not be accomplished. This report summarizes what was accomplished with the support provided by the US Department of Energy. Experiments performed focused on determining the inception of cavitation, studying thermal dissipation under cavitation conditions, investigating sonic energy interactions with glass beads and oil, and studying the effects of sonication on crude oil properties. Our findings show that the voltage threshold for the onset of cavitation is independent of the transducer-hydrophone separation distance. In addition, thermal dissipation under cavitation conditions contributed to the mobilization of deposited paraffins and waxes. Our preliminary laboratory experiments suggest that waxes are mobilized when the fluid temperature approaches 40°C. Experiments were conducted that provided insights into the interactions between sonic waves and the fluid contained in the porous media. Most of these studies were carried out in a slim-tube apparatus. A numerical model was developed for simulating the effect of sonication in the near-wellbore region. The numerical model was validated using a number of standard testbed problems. However, actual application of the model for scale

  18. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  19. A collaborative brain-computer interface for improving human performance.

    Directory of Open Access Journals (Sweden)

    Yijun Wang

    Full Text Available Electroencephalogram (EEG) based brain-computer interfaces (BCIs) have been studied since the 1970s. Currently, the main focus of BCI research lies on clinical use, which aims to provide a new communication channel to patients with motor disabilities to improve their quality of life. However, BCI technology can also be used to improve human performance for normal healthy users. Although this application has been proposed for a long time, little progress has been made in real-world practice due to the technical limits of EEG. To overcome the bottleneck of low single-user BCI performance, this study proposes a collaborative paradigm to improve overall BCI performance by integrating information from multiple users. To test the feasibility of a collaborative BCI, this study quantitatively compares the classification accuracies of collaborative and single-user BCIs applied to EEG data collected from 20 subjects in a movement-planning experiment. This study also explores three different methods for fusing and analyzing EEG data from multiple subjects: (1) event-related potential (ERP) averaging, (2) feature concatenating, and (3) voting. In a demonstration system using the voting method, the classification accuracy of predicting movement directions (reaching left vs. reaching right) was enhanced substantially from 66% to 80%, 88%, 93%, and 95% as the number of subjects increased from 1 to 5, 10, 15, and 20, respectively. Furthermore, the decision on reaching direction could be made around 100-250 ms earlier than the subject's actual motor response by decoding the ERP activities arising mainly from the posterior parietal cortex (PPC), which are related to the processing of visuomotor transmission. Taken together, these results suggest that a collaborative BCI can effectively fuse the brain activities of a group of people to improve the overall performance of natural human behavior.
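
    The accuracy gain reported for the voting method can be reproduced qualitatively with a toy simulation. The sketch below assumes independent single-user classifiers at the record's 66% accuracy and counts strict-majority votes; it illustrates the fusion principle, not the study's EEG pipeline.

        import random

        def group_accuracy(p_single: float, n_users: int, trials: int = 100_000) -> float:
            """Monte Carlo estimate of majority-vote accuracy over independent users."""
            correct = 0
            for _ in range(trials):
                votes = sum(random.random() < p_single for _ in range(n_users))
                correct += votes > n_users / 2  # ties count as errors, for simplicity
            return correct / trials

        for n in (1, 5, 10, 15, 20):
            print(n, round(group_accuracy(0.66, n), 3))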

  20. High-Performance Secure Database Access Technologies for HEP Grids

    Energy Technology Data Exchange (ETDEWEB)

    Matthew Vranicar; John Weicher

    2006-04-17

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research, where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities, a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are ongoing in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, which states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications." There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the

  1. High-Performance Secure Database Access Technologies for HEP Grids

    International Nuclear Information System (INIS)

    Vranicar, Matthew; Weicher, John

    2006-01-01

    The Large Hadron Collider (LHC) at the CERN Laboratory will become the largest scientific instrument in the world when it starts operations in 2007. Large Scale Analysis Computer Systems (computational grids) are required to extract rare signals of new physics from petabytes of LHC detector data. In addition to file-based event data, LHC data processing applications require access to large amounts of data in relational databases: detector conditions, calibrations, etc. U.S. high energy physicists demand efficient performance of grid computing applications in LHC physics research, where world-wide remote participation is vital to their success. To empower physicists with data-intensive analysis capabilities, a whole hyperinfrastructure of distributed databases cross-cuts a multi-tier hierarchy of computational grids. The crosscutting allows separation of concerns across both the global environment of a federation of computational grids and the local environment of a physicist's computer used for analysis. Very few efforts are ongoing in the area of database and grid integration research. Most of these are outside of the U.S. and rely on traditional approaches to secure database access via an extraneous security layer separate from the database system core, preventing efficient data transfers. Our findings are shared by the Database Access and Integration Services Working Group of the Global Grid Forum, which states that "Research and development activities relating to the Grid have generally focused on applications where data is stored in files. However, in many scientific and commercial domains, database management systems have a central role in data storage, access, organization, authorization, etc, for numerous applications." There is a clear opportunity for a technological breakthrough, requiring innovative steps to provide high-performance secure database access technologies for grid computing. We believe that an innovative database architecture where the secure

  2. An overview of flight computer technologies for future NASA

    Science.gov (United States)

    Alkalai, L.

    2001-01-01

    In this paper, we present an overview of current developments by several US Government Agencies and associated programs toward high-performance single-board computers for use in space. Three separate projects are described: two based on the PowerPC processor and one based on the Pentium processor.

  3. Survey of Canadian Myotonic Dystrophy Patients' Access to Computer Technology.

    Science.gov (United States)

    Climans, Seth A; Piechowicz, Christine; Koopman, Wilma J; Venance, Shannon L

    2017-09-01

    Myotonic dystrophy type 1 is an autosomal dominant condition affecting distal hand strength, energy, and cognition. Increasingly, patients and families are seeking information online. An online neuromuscular patient portal under development can help patients access resources and interact with each other regardless of location. It is unknown how individuals living with myotonic dystrophy interact with technology and whether barriers to access exist. We aimed to characterize technology use among participants with myotonic dystrophy and to determine whether there is interest in a patient portal. Surveys were mailed to 156 participants with myotonic dystrophy type 1 registered with the Canadian Neuromuscular Disease Registry. Seventy-five participants (60% female) responded; almost half were younger than 46 years. Most (84%) used the internet; almost half of the responders (47%) used social media. The complexity and cost of technology were commonly cited reasons not to use technology. The majority of responders (76%) were interested in a myotonic dystrophy patient portal. Patients in a Canada-wide registry of myotonic dystrophy have access to and use technology such as computers and mobile phones. These patients expressed interest in a portal that would provide them with an opportunity to network with others with myotonic dystrophy and to access information about the disease.

  4. Information and communication technology and bank performance ...

    African Journals Online (AJOL)

    Different sectors of world economies are rapidly being affected by improved technology, and the banking sector in Nigeria is witnessing the same trend. Information and communication technology is said to have impacted the banking sector massively, as banks in Nigeria introduce products that would help improve their efficiency ...

  5. Enabling High-Performance Computing as a Service

    KAUST Repository

    AbdelBaky, Moustafa; Parashar, Manish; Kim, Hyunjoo; Jordan, Kirk E.; Sachdeva, Vipin; Sexton, James; Jamjoom, Hani; Shae, Zon-Yin; Pencheva, Gergina; Tavakoli, Reza; Wheeler, Mary F.

    2012-01-01

    With the right software infrastructure, clouds can provide scientists with as-a-service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a

  6. High-performance computing on GPUs for resistivity logging of oil and gas wells

    Science.gov (United States)

    Glinskikh, V.; Dudaev, A.; Nechaev, O.; Surodina, I.

    2017-10-01

    We developed, and implemented in software, an algorithm for high-performance simulation of electrical logs from oil and gas wells using high-performance heterogeneous computing. The numerical solution of the 2D forward problem is based on the finite-element method and the Cholesky decomposition for solving a system of linear algebraic equations (SLAE). Software implementations of the algorithm were made using NVIDIA CUDA technology and computing libraries, allowing us to perform the decomposition of the SLAE and find its solution on the central processing unit (CPU) and the graphics processing unit (GPU). The calculation time is analyzed as a function of the matrix size and the number of its non-zero elements. We estimated the computing speed on the CPU and GPU, including high-performance heterogeneous CPU-GPU computing. Using the developed algorithm, we simulated resistivity data in realistic models.
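
    A minimal CPU-side sketch of the solver step named above, assuming the finite-element discretization yields a symmetric positive-definite system A x = b: a toy tridiagonal matrix stands in for the FEM matrix, and SciPy's Cholesky routines play the role of the paper's decomposition step (the CUDA/GPU variant is not reproduced here).

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

# Toy stand-in for the FEM system matrix: an SPD tridiagonal Laplacian.
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

c, low = cho_factor(A)       # Cholesky factorization A = L L^T
x = cho_solve((c, low), b)   # forward/backward substitution

print("residual norm:", np.linalg.norm(A @ x - b))
```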

  7. Computer-Supported Instruction in Enhancing the Performance of Dyscalculics

    Science.gov (United States)

    Kumar, S. Praveen; Raja, B. William Dharma

    2010-01-01

    The use of instructional media is an essential component of the teaching-learning process and contributes to both the efficiency and the effectiveness of that process. Computer-supported instruction has a very important role to play as an advanced technological instruction, as it employs different instructional techniques like…

  8. Computer technology applications in industrial and organizational psychology.

    Science.gov (United States)

    Crespin, Timothy R; Austin, James T

    2002-08-01

    This article reviews computer applications developed and utilized by industrial-organizational (I-O) psychologists, both in practice and in research. A primary emphasis is on applications developed for Internet usage, because this "network of networks" changes the way I-O psychologists work. The review focuses on traditional and emerging topics in I-O psychology. The first topic involves information technology applications in measurement, defined broadly across levels of analysis (persons, groups, organizations) and domains (abilities, personality, attitudes). Discussion then focuses on individual learning at work, both in formal training and in coping with continual automation of work. A section on job analysis follows, illustrating the role of computers and the Internet in studying jobs. Shifting focus to the group level of analysis, we briefly review how information technology is being used to understand and support cooperative work. Finally, special emphasis is given to the emerging "third discipline" in I-O psychology research: computational modeling of behavioral events in organizations. Throughout this review, themes of innovation and dissemination underlie a continuum between research and practice. The review concludes by setting a framework for I-O psychology in a computerized and networked world.

  9. Computational Fluid Dynamics (CFD) Technology Programme 1995- 1999

    Energy Technology Data Exchange (ETDEWEB)

    Haekkinen, R.J.; Hirsch, C.; Krause, E.; Kytoemaa, H.K. [eds.]

    1997-12-31

    The report is a mid-term evaluation of the Computational Fluid Dynamics (CFD) Technology Programme started by the Technology Development Centre Finland (TEKES) in 1995 as a five-year initiative to be concluded in 1999. The main goal of the programme is to increase the know-how and application of CFD in Finnish industry, to coordinate and thus provide a better basis for cooperation between national CFD activities, and to encourage research laboratories and industry to establish cooperation with the international CFD community. The projects of the programme focus on the following areas: (1) studies of modeling the physics and dynamics of the behaviour of fluid material, (2) expressing the physical models in numerical form and developing computer codes, (3) evaluating and testing current physical models and developing new ones, (4) developing new numerical algorithms, solvers, and pre- and post-processing software, and (5) applying the new computational tools to problems relevant to their ultimate industrial use. The report consists of two sections: the first considers issues concerning the whole programme, and the second reviews each project.

  10. Integrated Geo Hazard Management System in Cloud Computing Technology

    Science.gov (United States)

    Hanifah, M. I. M.; Omar, R. C.; Khalid, N. H. N.; Ismail, A.; Mustapha, I. S.; Baharuddin, I. N. Z.; Roslan, R.; Zalam, W. M. Z.

    2016-11-01

    Geohazards can degrade environmental health and cause huge economic losses, especially in mountainous areas. In order to mitigate geohazards effectively, cloud computing technology is introduced for managing a geohazard database. Cloud computing technology and its services are capable of providing stakeholders with geohazard information in near real time for effective environmental management and decision-making. The UNITEN Integrated Geo Hazard Management System consists of the network management and operation needed to monitor geohazard disasters, especially landslides, in our study area at the Kelantan River Basin and the boundary between Hulu Kelantan and Hulu Terengganu. The system provides an easily managed, flexible measuring system whose data management operates autonomously and which can be controlled remotely by commands using "cloud" computing. This paper aims to document the above relationship by identifying the special features and needs associated with effective geohazard database management using a "cloud" system. This system will later be used as part of the development activities, with the aim of minimizing the frequency of geohazards and the risk in the research area.

  11. Computational fluid dynamics for propulsion technology: Geometric grid visualization in CFD-based propulsion technology research

    Science.gov (United States)

    Ziebarth, John P.; Meyer, Doug

    1992-01-01

    The coordination of the resources, facilities, and special personnel necessary to provide technical integration activities in the area of computational fluid dynamics applied to propulsion technology is examined. This involves the coordination of CFD activities between government, industry, and universities. Current geometry modeling, grid generation, and graphical methods are established for use in the analysis of CFD design methodologies.

  12. Examining hemodialyzer membrane performance using proteomic technologies

    Directory of Open Access Journals (Sweden)

    Bonomini M

    2017-12-01

    Full Text Available The success and the quality of hemodialysis therapy are mainly related to both the clearance and biocompatibility properties of the artificial membrane packed in the hemodialyzer. Performance of a membrane is strongly influenced by its interaction with the plasma protein repertoire during the extracorporeal procedure. Recognition that a number of medium-high molecular weight solutes, including proteins and protein-bound molecules, are potentially toxic has prompted the development of more permeable membranes. Such membrane engineering, however, may cause loss of vital proteins, with membrane removal being nonspecific. In addition, plasma proteins can be adsorbed onto the membrane surface upon blood contact during dialysis. Adsorption can contribute to the removal of toxic compounds and governs the biocompatibility of a membrane, since surface-adsorbed proteins may trigger a variety of biologic blood pathways with pathophysiologic consequences. Over the last years, use of proteomic approaches has allowed the polypeptide spectrum involved in the process of hemodialysis, a key issue previously hampered by lack of suitable technology, to be assessed in an unbiased manner and in its full complexity. Proteomics has been successfully applied to identify and quantify proteins in complex mixtures such as dialysis outflow fluid and fluid desorbed from dialysis membranes containing adsorbed proteins. The identified proteins can also be characterized by their involvement in metabolic and signaling pathways, molecular networks, and biologic processes through application of bioinformatics tools. Proteomics may

  13. THE ROLE OF COMPUTER TECHNOLOGY IN TEACHING ENGLISH LANGUAGE

    Directory of Open Access Journals (Sweden)

    Батагоз Талгатовна Керимбаева

    2017-12-01

    Full Text Available In the article an attempt is made to define the role and to study the peculiarities of the functioning of English language teaching in higher education. The state of education in the Republic of Kazakhstan and the trends in the development of society make priority development of the education system, on the basis of computer technology and the creation of a unified educational information environment, a pressing problem. With the rapid development of science and fast updates of information, it is impossible to learn everything in a lifetime, so it is important to develop an interest in obtaining knowledge for continuous self-education. Intense changes in society caused by the development of modern educational technologies have led to the need for change in the education system. The main objective of the training is to achieve a new, modern quality of education. Modernization of Kazakhstan's education defines the main goal of professional education as the training of qualified professionals of the appropriate level and profile who are fluent in their profession, capable of effective work in their specialty at the level of world standards, and ready for professional growth and professional mobility. Modern trends in the modernization of educational programs demand the introduction of modern methods of teaching. The increasing introduction of new computer technology and the application of the competence approach in the educational process of the H.A. Yasawi International Kazakh-Turkish University promote the efficiency of the process of teaching English.

  14. Condition Monitoring Through Advanced Sensor and Computational Technology

    International Nuclear Information System (INIS)

    Kim, Jung Taek; Park, Won Man; Kim, Jung Soo; Seong, Soeng Hwan; Hur, Sub; Cho, Jae Hwan; Jung, Hyung Gue

    2005-05-01

    The overall goal of this joint research project was to develop and demonstrate advanced sensors and computational technology for continuous monitoring of the condition of components, structures, and systems in advanced and next-generation nuclear power plants (NPPs). This project included investigating and adapting several advanced sensor technologies from Korean and US national laboratory research communities, some of which were developed and applied in non-nuclear industries. The project team investigated and developed sophisticated signal processing, noise reduction, and pattern recognition techniques and algorithms. The researchers installed sensors and conducted condition monitoring tests on two test loops, a check valve (an active component) and a piping elbow (a passive component), to demonstrate the feasibility of using advanced sensors and computational technology to achieve the project goal. Acoustic emission (AE) devices, optical fiber sensors, accelerometers, and ultrasonic transducers (UTs) were used to detect the mechanical vibratory response of the check valve and piping elbow in normal and degraded configurations. Chemical sensors were also installed to monitor the water chemistry in the piping elbow test loop. Analysis results of processed sensor data indicate that it is feasible to differentiate between the normal and degraded (with selected degradation mechanisms) configurations of these two components from the acquired sensor signals, but it is questionable whether these methods can reliably identify the level and type of degradation. Additional research and development efforts are needed to refine the differentiation techniques and to reduce the level of uncertainties.

  15. Enabling high performance computational science through combinatorial algorithms

    International Nuclear Information System (INIS)

    Boman, Erik G; Bozdag, Doruk; Catalyurek, Umit V; Devine, Karen D; Gebremedhin, Assefaw H; Hovland, Paul D; Pothen, Alex; Strout, Michelle Mills

    2007-01-01

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation

  16. Enabling high performance computational science through combinatorial algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Boman, Erik G [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Bozdag, Doruk [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Catalyurek, Umit V [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Devine, Karen D [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Gebremedhin, Assefaw H [Computer Science and Center for Computational Science, Old Dominion University (United States); Hovland, Paul D [Mathematics and Computer Science Division, Argonne National Laboratory (United States); Pothen, Alex [Computer Science and Center for Computational Science, Old Dominion University (United States); Strout, Michelle Mills [Computer Science, Colorado State University (United States)

    2007-07-15

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation.

  17. The Uses and Impacts of Mobile Computing Technology in Hot Spots Policing.

    Science.gov (United States)

    Koper, Christopher S; Lum, Cynthia; Hibdon, Julie

    2015-12-01

    Recent technological advances have much potential for improving police performance, but there has been little research testing whether they have made police more effective in reducing crime. The objective was to study the uses and crime control impacts of mobile computing technology in the context of geographically focused "hot spots" patrols. An experiment was conducted using 18 crime hot spots in a suburban jurisdiction. Nine of these locations were randomly selected to receive additional patrols over 11 weeks. Researchers studied officers' use of mobile information technology (IT) during the patrols using activity logs and interviews. Nonrandomized subgroup and multivariate analyses were employed to determine if and how the effects of the patrols varied based on these patterns. Officers used mobile computing technology primarily for surveillance and enforcement (e.g., checking automobile license plates and running checks on people during traffic stops and field interviews), and they noted both advantages and disadvantages to its use. Officers did not often use technology for strategic problem-solving and crime prevention. Given sufficient (but modest) dosages, the extra patrols reduced crime at the hot spots, but this effect was smaller in places where officers made greater use of technology. Basic applications of mobile computing may have little if any direct, measurable impact on officers' ability to reduce crime in the field. Greater training and emphasis on strategic uses of IT for problem-solving and crime prevention, and greater attention to its behavioral effects on officers, might enhance its application for crime reduction. © The Author(s) 2016.

  18. Micromagnetics on high-performance workstation and mobile computational platforms

    Science.gov (United States)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms is presented, including multi-core Intel central processing unit, Nvidia desktop graphics processing units, and Nvidia Jetson TK1 Platform. FastMag finite element method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of the embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built as low-power computing clusters.

  19. InfoMall: An Innovative Strategy for High-Performance Computing and Communications Applications Development.

    Science.gov (United States)

    Mills, Kim; Fox, Geoffrey

    1994-01-01

    Describes the InfoMall, a program led by the Northeast Parallel Architectures Center (NPAC) at Syracuse University (New York). The InfoMall features a partnership of approximately 24 organizations offering linked programs in High Performance Computing and Communications (HPCC) technology integration, software development, marketing, education and…

  20. High-Performance Computing in Neuroscience for Data-Driven Discovery, Integration, and Dissemination

    International Nuclear Information System (INIS)

    Bouchard, Kristofer E.

    2016-01-01

    A lack of coherent plans to analyze, manage, and understand data threatens the various opportunities offered by new neuro-technologies. High-performance computing will allow exploratory analysis of massive datasets stored in standardized formats, hosted in open repositories, and integrated with simulations.

  1. Technologies for Large Data Management in Scientific Computing

    CERN Document Server

    Pace, A

    2014-01-01

    In recent years, intense usage of computing has been the main strategy of investigation in several scientific research projects. The progress in computing technology has opened unprecedented opportunities for systematic collection of experimental data and the associated analysis that were considered impossible only a few years ago. This paper focuses on the strategies in use: it reviews the various components that are necessary for an effective solution that ensures the storage, the long-term preservation, and the worldwide distribution of the large quantities of data that are necessary in a large scientific research project. The paper also mentions several examples of data management solutions used in High Energy Physics for the CERN Large Hadron Collider (LHC) experiments in Geneva, Switzerland, which generate more than 30,000 terabytes of data every year that need to be preserved, analyzed, and made available to a community of several tens of thousands of scientists worldwide.

  2. International Conference on Soft Computing in Information Communication Technology

    CERN Document Server

    Soft Computing in Information Communication Technology

    2012-01-01

      This is a collection of the accepted papers concerning soft computing in information communication technology. All accepted papers were subjected to strict peer review by two expert referees. The resultant dissemination of the latest research results, and the exchange of views concerning the future research directions to be taken in this field, makes the work of immense value to all those having an interest in the topics covered. The present book represents a cooperative effort to seek out the best strategies for effecting improvements in the quality and the reliability of Neural Networks, Swarm Intelligence, Evolutionary Computing, Image Processing, Internet Security, Data Security, Data Mining, Network Security and Protection of data and Cyber laws. Our sincere appreciation and thanks go to these authors for their contributions to this conference. I hope you can gain lots of useful information from the book.

  3. Advanced intelligent computational technologies and decision support systems

    CERN Document Server

    Kountchev, Roumen

    2014-01-01

    This book offers a state-of-the-art collection covering themes related to Advanced Intelligent Computational Technologies and Decision Support Systems which can be applied to fields like healthcare, assisting humans in solving problems. The book brings forward a wealth of ideas, algorithms and case studies in themes like: intelligent predictive diagnosis; intelligent analyzing of medical images; new formats for coding single images and sequences of medical images; Medical Decision Support Systems; diagnosis of Down's syndrome; computational perspectives for electronic fetal monitoring; efficient compression of CT images; adaptive interpolation and halftoning for medical images; applications of artificial neural networks for real-life problem solving; present state and perspectives for Electronic Healthcare Record Systems; adaptive approaches for noise reduction in sequences of CT images, etc.

  4. Reconfigurable Computing As an Enabling Technology for Single-Photon-Counting Laser Altimetry

    Science.gov (United States)

    Powell, Wesley; Hicks, Edward; Pinchinat, Maxime; Dabney, Philip; McGarry, Jan; Murray, Paul

    2003-01-01

    Single-photon-counting laser altimetry is a new measurement technique offering significant advantages in vertical resolution, reduced instrument size, mass, and power, and reduced laser complexity as compared to analog or threshold-detection laser altimetry techniques. However, these improvements come at the cost of a dramatically increased requirement for onboard real-time data processing. Reconfigurable computing has been shown to offer considerable performance advantages in performing this processing. These advantages have been demonstrated on the Multi-KiloHertz Micro-Laser Altimeter (MMLA), an aircraft-based single-photon-counting laser altimeter developed by NASA Goddard Space Flight Center with several potential spaceflight applications. This paper describes how reconfigurable computing technology was employed to perform MMLA data processing in real-time under realistic operating constraints, along with the results observed. This paper also expands on these prior results to identify concepts for using reconfigurable computing to enable spaceflight single-photon-counting laser altimeter instruments.

  5. Technological Innovation Capabilities and Firm Performance

    OpenAIRE

    Richard C.M. Yam; William Lo; Esther P.Y. Tang; Antonio K.W. Lau

    2010-01-01

    Technological innovation capability (TIC) is defined as a comprehensive set of characteristics of a firm that facilitates and supports its technological innovation strategies. An audit to evaluate the TICs of a firm may trigger improvement in its future practices. Such an audit can be used by the firm for self-assessment or by a third party for independent assessment to identify problems in its capability status. This paper attempts to develop such an auditing framework that can...

  6. High performance computing in power and energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Khaitan, Siddhartha Kumar [Iowa State Univ., Ames, IA (United States); Gupta, Anshul (eds.) [IBM Watson Research Center, Yorktown Heights, NY (United States)

    2013-07-01

    The twin challenge of meeting global energy demands in the face of growing economies and populations and restricting greenhouse gas emissions is one of the most daunting ones that humanity has ever faced. Smart electrical generation and distribution infrastructure will play a crucial role in meeting these challenges. We would need to develop capabilities to handle the large volumes of data generated by power system components like PMUs, DFRs and other data acquisition devices, as well as the capacity to process these data at high resolution via multi-scale and multi-period simulations, cascading and security analysis, interaction between hybrid systems (electric, transport, gas, oil, coal, etc.), and so on, to get meaningful information in real time to ensure a secure, reliable and stable power system grid. Advanced research on the development and implementation of market-ready, leading-edge, high-speed enabling technologies and algorithms for solving real-time, dynamic, resource-critical problems will be required for dynamic security analysis targeted towards successful implementation of Smart Grid initiatives. This book aims to bring together some of the latest research developments as well as thoughts on the future research directions of high performance computing applications in electric power systems planning, operations, security, markets, and grid integration of alternate sources of energy.

  7. Tablet computer enhanced training improves internal medicine exam performance.

    Science.gov (United States)

    Baumgart, Daniel C; Wende, Ilja; Grittner, Ulrike

    2017-01-01

    Traditional teaching concepts in medical education do not take full advantage of current information technology. We aimed to objectively determine the impact of Tablet PC enhanced training on learning experience and MKSAP® (medical knowledge self-assessment program) exam performance. In this single-center, prospective, controlled study, final year medical students and medical residents doing an inpatient service rotation were alternatingly assigned to either the active test (Tablet PC with custom multimedia education software package) or traditional education (control) group, respectively. All completed an extensive questionnaire to collect their socio-demographic data and evaluate educational status, computer affinity and skills, problem solving, eLearning knowledge, and self-rated medical knowledge. Both groups were MKSAP® tested at the beginning and the end of their rotation. The MKSAP® score at the final exam was the primary endpoint. Data from 55 participants (tablet n = 24, controls n = 31; 36.4% male; median age 28 years; 65.5% students) were evaluable. The mean MKSAP® score improved in the Tablet PC group (score Δ +8, SD: 11), but not in the control group (score Δ −7, SD: 11). After adjustment for baseline score and confounders, the Tablet PC group showed on average 11% better MKSAP® test results compared to the control group. Medical schools and residency programs should consider adding tablet PC enhanced learning to their respective training programs.
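
    A toy sketch of the baseline adjustment mentioned above, with invented scores rather than the study's data: regressing the final exam score on group membership plus the baseline score (an ANCOVA-style model), so that the group coefficient estimates the adjusted between-group difference.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 55
group = (np.arange(n) % 2).astype(float)     # 1 = tablet, 0 = control
baseline = rng.normal(60, 10, n)             # synthetic baseline scores
final = baseline + 8 * group + rng.normal(0, 5, n)  # synthetic final scores

# Design matrix: intercept, group indicator, baseline score.
X = np.column_stack([np.ones(n), group, baseline])
coef, *_ = np.linalg.lstsq(X, final, rcond=None)

print("adjusted group effect:", round(coef[1], 2))
```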

  8. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    Science.gov (United States)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.
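
    An illustrative sketch of the SPH density summation that dominates such benchmarks, assuming a standard cubic-spline smoothing kernel; plain NumPy vectorization stands in for the paper's OpenMP/CUDA parallel loops, and the O(n^2) all-pairs distance computation replaces the neighbor lists a production code would use. All constants are invented.

```python
import numpy as np

def w_cubic(r, h):
    """Standard 3D cubic-spline smoothing kernel."""
    q = r / h
    sigma = 8.0 / (np.pi * h**3)
    return sigma * np.where(q < 0.5,
                            6.0 * (q**3 - q**2) + 1.0,
                            np.where(q < 1.0, 2.0 * (1.0 - q)**3, 0.0))

rng = np.random.default_rng(1)
n, h, mass = 500, 0.1, 1.0
pos = rng.random((n, 3))                     # random particle positions

# All-pairs distances (production codes use cell/neighbor lists instead).
diff = pos[:, None, :] - pos[None, :, :]
r = np.linalg.norm(diff, axis=-1)

# Density at each particle: sum of kernel-weighted neighbor masses.
density = mass * w_cubic(r, h).sum(axis=1)
print("mean density:", density.mean())
```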

  9. Weightbearing Computed Tomography of the Foot and Ankle: Emerging Technology Topical Review.

    Science.gov (United States)

    Barg, Alexej; Bailey, Travis; Richter, Martinus; de Cesar Netto, Cesar; Lintz, François; Burssens, Arne; Phisitkul, Phinit; Hanrahan, Christopher J; Saltzman, Charles L

    2018-03-01

    In the last decade, cone-beam computed tomography technology with improved designs allowing flexible gantry movements has allowed both supine and standing weight-bearing imaging of the lower extremity. There is an increasing amount of literature describing the use of weightbearing computed tomography in patients with foot and ankle disorders. To date, there is no review article summarizing this imaging modality in the foot and ankle. Therefore, we performed a systematic literature review of relevant clinical studies targeting the use of weightbearing computed tomography in diagnosis of patients with foot and ankle disorders. Furthermore, this review aims to offer insight to those with interest in considering possible future research opportunities with use of this technology. Level V, expert opinion.

  10. CANTEEN MANAGEMENT SYSTEM USING RFID TECHNOLOGY BASED ON CLOUD COMPUTING

    OpenAIRE

    Lavina Mall; Nihal Shaikh

    2017-01-01

    We are currently in the midst of a technological and computing revolution that will drastically change our lives and potentially redefine what it means to be human. Automation in many fields has replaced the old-school pen and paper and at the same time has proved to be more efficient, correct, and less cumbersome, making our lives much easier. This automation process, when applied to an integral part of working people's lives, i.e., the canteen, helps reduce the service time and eliminates queues; there is no b...

  11. Soft Computing in Information Communication Technology Volume 2

    CERN Document Server

    2012-01-01

    This book is a collection of the accepted papers concerning soft computing in information communication technology. The resultant dissemination of the latest research results, and the exchanges of views concerning the future research directions to be taken in this field makes the work of immense value to all those having an interest in the topics covered. The present book represents a cooperative effort to seek out the best strategies for effecting improvements in the quality and the reliability of Fuzzy Logic, Machine Learning, Cryptography, Pattern Recognition, Bioinformatics, Biomedical Engineering, Advancements in ICT.

  12. Symposium on computational fluid dynamics: technology and applications

    International Nuclear Information System (INIS)

    1988-01-01

    A symposium on the technology and applications of computational fluid dynamics (CFD) was held in Pretoria from 21-23 Nov 1988. The following aspects were covered: multilevel adaptive methods and multigrid solvers in CFD, a symbolic processing approach to CFD, the interplay between CFD and analytical approximations, CFD on a transputer array, the application of CFD in high-speed aerodynamics, numerical simulation of laminar blood flow, two-phase flow modelling in nuclear accident analysis, and finite difference schemes for the numerical solution of fluid flow.

  13. Emerging computer technologies and the news media of the future

    Science.gov (United States)

    Vrabel, Debra A.

    1993-01-01

    The media environment of the future may be dramatically different from what exists today. As new computing and communications technologies evolve and synthesize to form a global, integrated communications system of networks, public-domain hardware and software, and consumer products, it will be possible for citizens to fulfill most information needs at any time and from any place, to obtain desired information easily and quickly, to obtain information in a variety of forms, and to experience and interact with information in a variety of ways. This system will transform almost every institution, every profession, and every aspect of human life, including the creation, packaging, and distribution of news and information by media organizations. This paper presents one vision of a 21st century global information system and how it might be used by citizens. It surveys some of the technologies now on the market that are paving the way for the new media environment.

  14. Neural computation and particle accelerators research, technology and applications

    CERN Document Server

    D'Arras, Horace

    2010-01-01

    This book discusses neural computation, a network or circuit of biological neurons, and, relatedly, particle accelerators, scientific instruments which accelerate charged particles such as protons, electrons and deuterons. Accelerators have a very broad range of applications in many industrial fields, from high energy physics to medical isotope production. Nuclear technology is one of the fields discussed in this book. The development that has been reached by particle accelerators in energy and particle intensity has opened the possibility of a wide number of new applications in nuclear technology. This book reviews the applications in the nuclear energy field, and the design features of high power neutron sources are explained. Surface treatments of niobium flat samples and superconducting radio frequency cavities by a new technique called gas cluster ion beam are also studied in detail, as well as the process of electropolishing. Furthermore, magnetic devices such as solenoids, dipoles and undulators, which ...

  15. High-performance scientific computing in the cloud

    Science.gov (United States)

    Jorissen, Kevin; Vila, Fernando; Rehr, John

    2011-03-01

    Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.

  16. TRANSFORMING RURAL SECONDARY SCHOOLS IN ZIMBABWE THROUGH TECHNOLOGY: LIVED EXPERIENCES OF STUDENT COMPUTER USERS

    Directory of Open Access Journals (Sweden)

    Gomba Clifford

    2016-04-01

    Full Text Available A technological divide exists in Zimbabwe between urban and rural schools that puts rural-based students at a disadvantage. In Zimbabwe, the government, through the president, donated computers to most rural schools in a bid to bridge the digital divide between rural and urban schools. The purpose of this phenomenological study was to understand the experiences of Advanced Level students using computers at two rural boarding Catholic high schools in Zimbabwe. The study was guided by two research questions: (1) How do Advanced Level students in the rural areas use computers at their school? and (2) What is the experience of using computers for Advanced Level students in the rural areas of Zimbabwe? By performing this study, it was possible to understand from the students' experiences whether computer usage was for educational learning or not. The results of the phenomenological study showed that students' experiences can be broadly classified into five themes, namely: a worthwhile (interesting) experience, accessibility issues, teachers' monopoly, research and social use, and Internet availability. The participants proposed that teachers use computers, but not monopolize computer usage. The computer shortage may be solved by having donors and the government help acquire more computers.

  17. Computer technology for self-management: a scoping review.

    Science.gov (United States)

    Jacelon, Cynthia S; Gibbs, Molly A; Ridgway, John Ve

    2016-05-01

    The purpose of this scoping review of literature is to explore the types of computer-based systems used for self-management of chronic disease, the goals and success of these systems, the value added by technology integration, and the target audience for these systems. Technology is changing the way health care is provided and the way that individuals manage their health. Individuals with chronic diseases are now able to use computer-based systems to self-manage their health. These systems have the ability to remind users of daily activities and to help them recognise when symptoms are worsening and intervention is indicated. However, there are many questions about the types of systems available, the goals of these systems, and the success with which individuals with chronic illness are using them. This is a scoping review in which the Cumulative Index of Nursing and Allied Health Literature, PubMed and IEEE Xplore databases were searched. A total of 303 articles were reviewed, 89 articles were read in depth, and 30 were included in the scoping review. The Substitution, Augmentation, Modification, Redefinition model was used to evaluate the value added by the technology integration. Research on technology for self-management was conducted in 13 countries. Data analysis identified five kinds of platforms on which the systems were based; some systems were focused on specific disease management processes, others were not. For individuals to effectively use systems to maintain maximum wellness, the systems must have a strong component of self-management and provide the user with meaningful information regarding their health states. Clinicians should choose systems for their clients based on the design, components and goals of the systems. © 2016 John Wiley & Sons Ltd.

  18. The ongoing investigation of high performance parallel computing in HEP

    CERN Document Server

    Peach, Kenneth J; Böck, R K; Dobinson, Robert W; Hansroul, M; Norton, Alan Robert; Willers, Ian Malcolm; Baud, J P; Carminati, F; Gagliardi, F; McIntosh, E; Metcalf, M; Robertson, L; CERN. Geneva. Detector Research and Development Committee

    1993-01-01

    Past and current exploitation of parallel computing in High Energy Physics is summarized and a list of R & D projects in this area is presented. The applicability of new parallel hardware and software to physics problems is investigated, in the light of the requirements for computing power of LHC experiments and the current trends in the computer industry. Four main themes are discussed (possibilities for a finer grain of parallelism; fine-grain communication mechanism; usable parallel programming environment; different programming models and architectures, using standard commercial products). Parallel computing technology is potentially of interest for offline and vital for real time applications in LHC. A substantial investment in applications development and evaluation of state of the art hardware and software products is needed. A solid development environment is required at an early stage, before mainline LHC program development begins.

  19. How Computer Technology Expands Educational Options: A Rationale, Recommendations, and a Pamphlet for Administrators.

    Science.gov (United States)

    Kelch, Panette Evers; Karr-Kidwell, PJ

    The purpose of this paper is to provide a historical rationale on how computer technology, particularly the Internet, expands educational options for administrators and teachers. A review of the literature includes a brief history of computer technology and its growing use, and a discussion of computer technology for distance learning, for…

  20. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1991-03-15

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour.

  1. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour

  2. Computational Fluid Dynamics and Building Energy Performance Simulation

    DEFF Research Database (Denmark)

    Nielsen, Peter V.; Tryggvason, Tryggvi

    An interconnection between a building energy performance simulation program and a Computational Fluid Dynamics (CFD) program for room air distribution is introduced to improve the predictions of both the energy consumption and the indoor environment. The building energy performance...

  3. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2015-01-01

    A continuation of Contemporary High Performance Computing: From Petascale toward Exascale, this second volume continues the discussion of HPC flagship systems, major application workloads, facilities, and sponsors. The book includes figures and pictures that capture the state of existing systems: pictures of buildings, systems in production, floorplans, and many block diagrams and charts to illustrate system design and performance.

  4. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bremer, Peer-Timo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mohr, Bernd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schulz, Martin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pascucci, Valerio [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gamblin, Todd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brunst, Holger [Dresden Univ. of Technology (Germany)

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  5. Survey of LWR environmental control technology performance and cost

    International Nuclear Information System (INIS)

    Heeb, C.M.; Aaberg, R.L.; Cole, B.M.; Engel, R.L.; Kennedy, W.E. Jr.; Lewallen, M.A.

    1980-03-01

    This study attempts to establish a ranking for species that are routinely released to the environment for a projected nuclear power growth scenario. Unlike comparisons made to existing standards, which are subject to frequent revision, the ranking of releases can be used to form a more logical basis for identifying the areas where further development of control technology could be required. This report describes projections of releases for several fuel cycle scenarios, identifies areas where alternative control technologies may be implemented, and discusses the available alternative control technologies. The release factors were used in a computer code system called ENFORM, which calculates the annual release of any species from any part of the LWR nuclear fuel cycle given a projection of installed nuclear generation capacity. This survey of fuel cycle releases was performed for three reprocessing scenarios (stowaway, reprocessing without recycle of Pu and reprocessing with full recycle of U and Pu) for a 100-year period beginning in 1977. The radioactivity releases were ranked on the basis of a relative ranking factor. The relative ranking factor is based on the 100-year summation of the 50-year population dose commitment from an annual release of radioactive effluents. The nonradioactive releases were ranked on the basis of dilution factor. The twenty highest ranking radioactive releases were identified and each of these was analyzed in terms of the basis for calculating the release and a description of the currently employed control method. Alternative control technology is then discussed, along with the available capital and operating cost figures for alternative control methods
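
    A hedged sketch of the ranking arithmetic described above (not the ENFORM code itself): the relative ranking factor for one radioactive species is taken as the 100-year summation of the 50-year population dose commitments produced by each year's release. The release projection and dose-commitment factor below are invented for illustration.

```python
def relative_ranking_factor(annual_releases_ci, dose_commitment_per_ci):
    """Sum over the scenario years of
    (annual release) x (50-year population dose commitment per unit release)."""
    return sum(r * dose_commitment_per_ci for r in annual_releases_ci)

# Synthetic 100-year release projection for one species (Ci/yr),
# growing with installed nuclear capacity.
releases = [100.0 + 5.0 * year for year in range(100)]

print(relative_ranking_factor(releases, dose_commitment_per_ci=2.0e-4))
```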

  6. Human performance models for computer-aided engineering

    Science.gov (United States)

    Elkind, Jerome I. (Editor); Card, Stuart K. (Editor); Hochberg, Julian (Editor); Huey, Beverly Messick (Editor)

    1989-01-01

    This report discusses a topic important to the field of computational human factors: models of human performance and their use in computer-based engineering facilities for the design of complex systems. It focuses on a particular human factors design problem -- the design of cockpit systems for advanced helicopters -- and on a particular aspect of human performance -- vision and related cognitive functions. By focusing in this way, the authors were able to address the selected topics in some depth and develop findings and recommendations that they believe have application to many other aspects of human performance and to other design domains.

  7. Development of a body motion interactive system with a weight voting mechanism and computer vision technology

    Science.gov (United States)

    Lin, Chern-Sheng; Chen, Chia-Tse; Shei, Hung-Jung; Lay, Yun-Long; Chiu, Chuang-Chien

    2012-09-01

    This study develops a body motion interactive system with computer vision technology. The application combines interactive games, art performance, and an exercise training system. Multiple image processing and computer vision technologies are used in this study. The system can calculate the color characteristics of an object and then perform color segmentation. To avoid wrong action judgments, the system uses a weight voting mechanism, which sets a condition score and weight value for each action judgment and chooses the best action judgment from the weighted votes. Finally, this study estimated the reliability of the system in order to make improvements. The results showed that this method has good accuracy and stability during operation of the human-machine interface of the sports training system.
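
    A hypothetical Python sketch of a weight voting mechanism of the kind described above: several judgment sources score each candidate action, each source carries a weight, and the action with the highest weighted score is chosen. The judge names, scores, and weights are all invented.

```python
def weighted_vote(scores_per_judge, weights):
    """scores_per_judge: list of {action: condition_score} dicts, one per judge;
    weights: one weight per judge. Returns the winning action."""
    totals = {}
    for scores, w in zip(scores_per_judge, weights):
        for action, score in scores.items():
            totals[action] = totals.get(action, 0.0) + w * score
    return max(totals, key=totals.get)

judges = [
    {"raise_arm": 0.7, "squat": 0.2},   # e.g., a color-segmentation judge
    {"raise_arm": 0.4, "squat": 0.5},   # e.g., a motion-history judge
    {"raise_arm": 0.8, "squat": 0.1},   # e.g., a skeleton-feature judge
]
weights = [0.5, 0.2, 0.3]

print(weighted_vote(judges, weights))   # -> "raise_arm"
```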

  8. Survey of computer codes applicable to waste facility performance evaluations

    International Nuclear Information System (INIS)

    Alsharif, M.; Pung, D.L.; Rivera, A.L.; Dole, L.R.

    1988-01-01

    This study is an effort to review existing information that is useful to develop an integrated model for predicting the performance of a radioactive waste facility. A summary description of 162 computer codes is given. The identified computer programs address the performance of waste packages, waste transport and equilibrium geochemistry, hydrological processes in unsaturated and saturated zones, and general waste facility performance assessment. Some programs also deal with thermal analysis, structural analysis, and special purposes. A number of these computer programs are being used by the US Department of Energy, the US Nuclear Regulatory Commission, and their contractors to analyze various aspects of waste package performance. Fifty-five of these codes were identified as being potentially useful on the analysis of low-level radioactive waste facilities located above the water table. The code summaries include authors, identification data, model types, and pertinent references. 14 refs., 5 tabs

  9. Development of Integrated Assessment Technology of Risk and Performance

    International Nuclear Information System (INIS)

    Yang, Jun Eon; Kang, Dae Il; Kang, Hyun Gook

    2010-04-01

    The main ideas and contents are summarized as follows:
    1) Development of a new risk/performance assessment system, replacing the old labor-intensive risk assessment structure: new consolidated risk assessment technology covering the various hazards in an NPP (flood, fire, seismic); BOP model development for performance monitoring; and a consolidated risk/performance management system for consistency and efficiency of NPP operation.
    2) Resolution technology for pending issues in PSA: base technology for PSA of digital I and C systems; base technology for seismic PSA reflecting domestic seismic characteristics and aging effects; and uncertainty reduction technology for level 2 PSA and best estimation of containment failure frequency.
    3) Next-generation risk/performance assessment technology: human-induced error reduction technology for efficient operation of an NPP.

  10. Routing performance analysis and optimization within a massively parallel computer

    Science.gov (United States)

    Archer, Charles Jens; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen

    2013-04-16

    An apparatus, program product and method optimize the operation of a massively parallel computer system by, in part, receiving actual performance data concerning an application executed by the plurality of interconnected nodes, and analyzing the actual performance data to identify an actual performance pattern. A desired performance pattern may be determined for the application, and an algorithm may be selected from among a plurality of algorithms stored within a memory, the algorithm being configured to achieve the desired performance pattern based on the actual performance data.
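
    A hedged sketch of the selection step the patent describes: given actual performance data and a desired pattern, choose the stored algorithm whose predicted pattern lies closest to the desired one. All names, metrics, and predictor functions are illustrative assumptions, not the patented implementation.

```python
# Pick the algorithm whose expected performance pattern best matches the
# desired pattern, given the observed (actual) data. Patterns are modeled
# here as simple metric dictionaries; everything below is illustrative.

def select_algorithm(actual, desired, algorithms):
    """actual/desired: dicts of metric -> value (e.g. per-link load).
    algorithms: dict of name -> predictor(actual) returning the pattern
    that algorithm is expected to produce. Returns the best name."""
    def distance(a, b):
        return sum((a[k] - b[k]) ** 2 for k in b)
    return min(algorithms,
               key=lambda name: distance(algorithms[name](actual), desired))

algorithms = {
    "minimal":  lambda actual: {"max_link_load": actual["max_link_load"]},
    "adaptive": lambda actual: {"max_link_load": 0.6 * actual["max_link_load"]},
}
actual  = {"max_link_load": 0.9}
desired = {"max_link_load": 0.5}
print(select_algorithm(actual, desired, algorithms))  # -> "adaptive"
```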

  11. The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report

    Energy Technology Data Exchange (ETDEWEB)

    Diachin, L F; Garaizar, F X; Henson, V E; Pope, G

    2009-10-12

    In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.

  12. GPU-based high-performance computing for radiation therapy

    International Nuclear Information System (INIS)

    Jia, Xun; Jiang, Steve B; Ziegenhein, Peter

    2014-01-01

    Recent developments in radiation therapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development. A tremendous number of studies have been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we first give a brief introduction to the GPU hardware structure and programming model. We then review the current applications of GPUs in the major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of the GPU with other platforms is also presented. (topical review)

  13. The correlation between a passion for computer games and the school performance of younger schoolchildren.

    Directory of Open Access Journals (Sweden)

    Maliy D.V.

    2015-07-01

    Full Text Available Today computer games occupy a significant place in children's lives and fundamentally affect the formation and development of their personalities. A number of present-day researchers assert that computer games have a developmental effect on players. Others share the point of view that computer games have negative effects on the cognitive and emotional spheres of a child and claim that children with low self-esteem who neglect their schoolwork and have difficulties in communication are particularly passionate about computer games. This article reviews theoretical and experimental pedagogical and psychological studies of the nature of the correlation between a passion for computer games and the school performance of younger schoolchildren. Our analysis of foreign and Russian psychology studies regarding the problem of playing activities mediated by information and computer technologies allowed us to single out the main criteria for children's passion for computer games and school performance. This article presents the results of a pilot study of the nature of this correlation. The research involved 32 pupils (12 girls and 20 boys) aged 10-11 years in the 4th grade. The general hypothesis was that there are divergent correlations between the passion of younger schoolchildren for computer games and their school performance. A questionnaire survey administered to the pupils allowed us to obtain information about the amount of time they devoted to computer games, their preferences for computer-game genres, and the extent of their passion for games. To determine the level of school performance we analyzed class registers. To establish the correlation between a passion for computer games and the school performance of younger schoolchildren, as well as to determine the effect of a passion for computer games on the personal qualities of the children ...

  14. Parallel, distributed and GPU computing technologies in single-particle electron microscopy

    International Nuclear Information System (INIS)

    Schmeisser, Martin; Heisen, Burkhard C.; Luettich, Mario; Busche, Boris; Hauer, Florian; Koske, Tobias; Knauber, Karl-Heinz; Stark, Holger

    2009-01-01

    An introduction to the current paradigm shift towards concurrency in software. Most known methods for the determination of the structure of macromolecular complexes are limited or at least restricted at some point by their computational demands. Recent developments in information technology such as multicore, parallel and GPU processing can be used to overcome these limitations. In particular, graphics processing units (GPUs), which were originally developed for rendering real-time effects in computer games, are now ubiquitous and provide unprecedented computational power for scientific applications. Each parallel-processing paradigm alone can improve overall performance; the increased computational performance obtained by combining all paradigms, unleashing the full power of today’s technology, makes certain applications feasible that were previously virtually impossible. In this article, state-of-the-art paradigms are introduced, the tools and infrastructure needed to apply these paradigms are presented and a state-of-the-art infrastructure and solution strategy for moving scientific applications to the next generation of computer hardware is outlined

  15. A Survey On Biometric Security Technologies From Cloud Computing Perspective

    Directory of Open Access Journals (Sweden)

    Shivashish Ratnam

    2015-08-01

    Full Text Available Cloud computing is one of the rising technologies that takes networked users to the next level. Cloud is a technology where resources are paid for per usage rather than owned. One of the major challenges in this technology is security. Biometric systems provide the answer to ensure that the rendered services are accessed only by a legal, authorized user and no one else. Biometric systems recognize users based on behavioral or physiological characteristics. The advantages of such systems over traditional validation methods such as passwords and IDs are well known, and hence biometric systems are progressively gaining ground in terms of usage. This paper proposes a new model of a security system in which users have to offer multiple biometric fingerprints during enrollment for a service. These templates are stored at the cloud provider's end. Users are authenticated against these fingerprint templates, which have to be provided in an order dictated by random numbers freshly generated for every session. Both the fingerprint templates and images provided in each session are encrypted or modified for enhanced security.
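
    A heavily hedged sketch of the session-challenge idea described above: several fingerprint templates are enrolled, and each login issues a fresh random ordering dictating which templates must be presented and in what sequence. The function names and the keyed-hash transport are assumptions for illustration only, not the paper's protocol.

```python
# Challenge-response over enrolled fingerprint templates. The random
# challenge changes every session; matching is stubbed with an HMAC
# comparison. All of this is an illustrative sketch, not the real scheme.
import hmac, hashlib, secrets

def issue_challenge(num_templates, k):
    """Pick k of the enrolled template indices in a fresh random order."""
    indices = list(range(num_templates))
    secrets.SystemRandom().shuffle(indices)
    return indices[:k]

def verify(stored_templates, challenge, presented, key):
    """Compare presented templates (keyed-hashed in transit) against the
    stored ones, in the challenged order."""
    for idx, template in zip(challenge, presented):
        expected = hmac.new(key, stored_templates[idx], hashlib.sha256).digest()
        if not hmac.compare_digest(expected, template):
            return False
    return True

key = secrets.token_bytes(32)
templates = [b"tpl-thumb", b"tpl-index", b"tpl-middle", b"tpl-ring"]
challenge = issue_challenge(len(templates), k=2)
presented = [hmac.new(key, templates[i], hashlib.sha256).digest() for i in challenge]
print(verify(templates, challenge, presented, key))  # -> True
```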

  16. The computational design of Geological Disposal Technology Integration System

    International Nuclear Information System (INIS)

    Ishihara, Yoshinao; Iwamoto, Hiroshi; Kobayashi, Shigeki; Neyama, Atsushi; Endo, Shuji; Shindo, Tomonori

    2002-03-01

    In order to develop the 'Geological Disposal Technology Integration System', which is intended to systematize the knowledge base for fundamental study, the computational design of the database and image processing functions indispensable to the system was carried out, a prototype was built for trial purposes, and its functions were confirmed. (1) An integrated database that systematizes and manages the information needed to examine the overall repository composition, together with related information, was constructed, and the system was composed of image processing, analytical information management, repository component management, and system security functions. (2) The range of data and information handled by the system was examined, the design of the database structure was examined, and the design of the image processing function for the data preserved in the integrated database was examined. (3) A prototype covering the basic database functions, the system operation interface, and the image processing function was built to verify the feasibility of the 'Geological Disposal Technology Integration System' based on the results of the design examination, and its functions were confirmed. (author)

  17. Dry process fuel performance technology development

    Energy Technology Data Exchange (ETDEWEB)

    Kang, Kweon Ho; Kim, K. W.; Kim, B. K. (and others)

    2006-06-15

    The objective of the project is to establish the performance evaluation system for DUPIC fuel during the Phase III R and D. In order to fulfill these objectives, property model development for DUPIC fuel and an irradiation test were carried out in Hanaro using the instrumented rig. Also, the analysis of the in-reactor behavior of DUPIC fuel, out-pile tests using simulated DUPIC fuel, and performance and integrity assessments in a commercial reactor were performed during this Phase. The R and D results of the Phase III are summarized as follows: establishment of the fabrication process of simulated DUPIC fuel for property measurement; property model development for the DUPIC fuel; performance evaluation of DUPIC fuel via irradiation test in Hanaro; post-irradiation examination of irradiated fuel and performance analysis; and development of the DUPIC fuel performance code (KAOS).

  18. Dry process fuel performance technology development

    International Nuclear Information System (INIS)

    Kang, Kweon Ho; Kim, K. W.; Kim, B. K.

    2006-06-01

    The objective of the project is to establish the performance evaluation system for DUPIC fuel during the Phase III R and D. In order to fulfill these objectives, property model development for DUPIC fuel and an irradiation test were carried out in Hanaro using the instrumented rig. Also, the analysis of the in-reactor behavior of DUPIC fuel, out-pile tests using simulated DUPIC fuel, and performance and integrity assessments in a commercial reactor were performed during this Phase. The R and D results of the Phase III are summarized as follows: establishment of the fabrication process of simulated DUPIC fuel for property measurement; property model development for the DUPIC fuel; performance evaluation of DUPIC fuel via irradiation test in Hanaro; post-irradiation examination of irradiated fuel and performance analysis; and development of the DUPIC fuel performance code (KAOS).

  19. Future Vehicle Technologies : high performance transportation innovations

    Energy Technology Data Exchange (ETDEWEB)

    Pratt, T. [Future Vehicle Technologies Inc., Maple Ridge, BC (Canada)

    2010-07-01

    Battery management systems (BMS) were discussed in this presentation, with particular reference to basic BMS design considerations; safety; undisclosed information about BMS; the essence of BMS; and Future Vehicle Technologies' BMS solution. Basic BMS design considerations that were presented included the balancing methodology; prismatic/cylindrical cells; cell protection; accuracy; PCB design, size and components; communications protocol; cost of manufacture; and expandability. In terms of safety, the presentation addressed lithium fires; high voltage; high-voltage ground detection; crash/rollover shutdown; complete pack shutdown capability; and heat shields, casings, and impact protection. BMS bus bar engineering considerations were discussed along with good chip design. It was concluded that FVT's advantage is a unique skill set in automotive technology together with speed and cost-effectiveness of development. tabs., figs.

  20. High-Performance Java Codes for Computational Fluid Dynamics

    Science.gov (United States)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  1. Computer simulation of steady-state performance of air-to-air heat pumps

    Energy Technology Data Exchange (ETDEWEB)

    Ellison, R D; Creswick, F A

    1978-03-01

    A computer model by which the performance of air-to-air heat pumps can be simulated is described. The intended use of the model is to evaluate analytically the improvements in performance that can be effected by various component improvements. The model is based on a trio of independent simulation programs originated at the Massachusetts Institute of Technology Heat Transfer Laboratory. The three programs have been combined so that user intervention and decision making between major steps of the simulation are unnecessary. The program was further modified by substituting a new compressor model and adding a capillary tube model, both of which are described. Performance predicted by the computer model is shown to be in reasonable agreement with performance data observed in our laboratory. Planned modifications by which the utility of the computer model can be enhanced in the future are described. User instructions and a FORTRAN listing of the program are included.
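
    The core of such a steady-state simulation is a balance between component models; the toy sketch below bisects on evaporating pressure until a stand-in compressor model and a stand-in capillary-tube model agree on refrigerant mass flow. Both component models and all numbers are invented for illustration and bear no relation to the actual code described above.

```python
# Steady-state balance between two toy component models: the operating
# point is the evaporating pressure at which the compressor and the
# capillary tube pass the same refrigerant mass flow.

def compressor_flow(p_evap, p_cond):
    return 0.05 * p_evap / p_cond            # toy model, kg/s

def capillary_flow(p_evap, p_cond):
    return 0.02 * (p_cond - p_evap) ** 0.5   # toy model, kg/s

def solve_balance(p_cond, p_lo=100.0, tol=1e-6):
    """Bisect on evaporating pressure until the mass flows match."""
    p_hi = p_cond
    for _ in range(100):
        p_mid = 0.5 * (p_lo + p_hi)
        # Compressor flow rises and capillary flow falls with p_evap,
        # so the sign of their difference brackets the root.
        if compressor_flow(p_mid, p_cond) > capillary_flow(p_mid, p_cond):
            p_hi = p_mid
        else:
            p_lo = p_mid
        if p_hi - p_lo < tol:
            break
    return 0.5 * (p_lo + p_hi)

print(solve_balance(p_cond=1500.0))
```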

  2. A brain-computer interface as input channel for a standard assistive technology software.

    Science.gov (United States)

    Zickler, Claudia; Riccio, Angela; Leotta, Francesco; Hillian-Tress, Sandra; Halder, Sebastian; Holz, Elisa; Staiger-Sälzer, Pit; Hoogerwerf, Evert-Jan; Desideri, Lorenzo; Mattia, Donatella; Kübler, Andrea

    2011-10-01

    Recently brain-computer interface (BCI) control was integrated into the commercial assistive technology product QualiWORLD (QualiLife Inc., Paradiso-Lugano, CH). Usability of the first prototype was evaluated in terms of effectiveness (accuracy), efficiency (information transfer rate and subjective workload/NASA Task Load Index) and user satisfaction (Quebec User Evaluation of Satisfaction with assistive Technology, QUEST 2.0) by four end-users with severe disabilities. Three assistive technology experts evaluated the device from a third person perspective. The results revealed high performance levels in communication and internet tasks. Users and assistive technology experts were quite satisfied with the device. However, none could imagine using the device in daily life without improvements. Main obstacles were the EEG-cap and low speed.

  3. High Performance Computing Software Applications for Space Situational Awareness

    Science.gov (United States)

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative de-convolution (PCID) image enhancement software tool. Specifically, we have demonstrated order-of-magnitude speed-ups in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.

  4. Acting with Technology: Rehearsing for Mixed-Media Live Performances

    DEFF Research Database (Denmark)

    Barkhuus, Louise; Rossitto, Chiara

    2016-01-01

    Digital technologies provide theater with new possibilities for combining traditional stage-based performances with interactive artifacts, for streaming remote parallel performances, and for other device-facilitated audience interaction. Compared to traditional theater, mixed-media performances require a different type of engagement from the actors, and rehearsing is challenging, as it can be impossible to rehearse with all the functional technology and interaction. Here, we report experiences from a case study of two mixed-media performances; we studied the rehearsal practices of two actors who ... and mixed-media performances through addressing critical factors of implementing technology into rehearsal practices.

  5. A Perspective on Computational Human Performance Models as Design Tools

    Science.gov (United States)

    Jones, Patricia M.

    2010-01-01

    The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.

  6. High performance computing and communications: FY 1997 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-12-01

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  7. Visualization and Data Analysis for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  8. Solar energy photovoltaic technology: proficiency and performance

    International Nuclear Information System (INIS)

    2006-01-01

    Total is committed to making the best possible use of the planet's fossil fuel reserves while fostering the emergence of other solutions, notably by developing effective alternatives. Total became involved in photovoltaics in 1983, when it founded Total Energies, renamed Tenesol in 2005, a world leader in the design and installation of photovoltaic solar power systems. This document presents Total's activities in the domain: the global challenge of energy sources and the environment, energy collection via photovoltaic electricity, silicon technology for cell production, solar panels and systems to distribute energy, and research and development to secure the future. (A.L.B.)

  9. Very High-Performance Embedded Computing Will Allow Ambitious Space Science Investigation

    National Research Council Canada - National Science Library

    Pignol, Michel

    2005-01-01

    .... developed on radiation-tolerant technologies. Unfortunately, the microprocessors available today in such technologies offer only the computing throughput that was available on the commercial market about 10 years ago...

  10. [Veneer computer aided design based on reverse engineering technology].

    Science.gov (United States)

    Liu, Ming-li; Chen, Xiao-dong; Wang, Yong

    2012-03-01

    To explore the computer-aided design (CAD) method for veneer restoration, and to assess whether the solution can help the prosthesis meet morphological and esthetic standards. A volunteer's upper right central incisor needed to be restored with a veneer. Super-hard stone models of the patient's dentition (before and after tooth preparation) were scanned with a three-dimensional laser scanner. The veneer margin was designed as a butt-to-butt type. The veneer was constructed using reverse engineering (RE) software. A technique guideline for veneer CAD was explored based on RE software, and the resulting veneer was smooth, continuous, and symmetrical, meeting esthetic construction needs. Reconstructing veneer restorations based on RE technology is a feasible method.

  11. The impact of changing computing technology on EPRI [Electric Power Research Institute] nuclear analysis codes

    International Nuclear Information System (INIS)

    Breen, R.J.

    1988-01-01

    The Nuclear Reload Management Program of the Nuclear Power Division (NPD) of the Electric Power Research Institute (EPRI) has the responsibility for initiating and managing applied research in selected nuclear engineering analysis functions for nuclear utilities. The computer systems that result from the research projects consist of large FORTRAN programs containing elaborate computational algorithms used to assess such areas as core physics, fuel performance, thermal hydraulics, and transient analysis. This paper summarizes a study of computing technology trends sponsored by the NPD. The approach taken was to interview hardware and software vendors, industry observers, and utility personnel, focusing on expected changes that will occur in the computing industry over the next 3 to 5 yr. Particular emphasis was placed on how these changes will impact engineering/scientific computer code development, maintenance, and use. In addition to the interviews, a workshop was held with attendees from EPRI, Power Computing Company, industry, and utilities. The workshop provided a forum for discussing issues and providing input into EPRI's long-term computer code planning process

  12. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL

    2011-08-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaboration with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are detailed in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) will delineate the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and

  13. Japanese technology assessment: Computer science, opto- and microelectronics mechatronics, biotechnology

    Energy Technology Data Exchange (ETDEWEB)

    Brandin, D.; Wieder, H.; Spicer, W.; Nevins, J.; Oxender, D.

    1986-01-01

    The series studies Japanese research and development in four high-technology areas - computer science, opto and microelectronics, mechatronics (a term created by the Japanese to describe the union of mechanical and electronic engineering to produce the next generation of machines, robots, and the like), and biotechnology. The evaluations were conducted by panels of U.S. scientists - chosen from academia, government, and industry - actively involved in research in areas of expertise. The studies were prepared for the purpose of aiding the U.S. response to Japan's technological challenge. The main focus of the assessments is on the current status and long-term direction and emphasis of Japanese research and development. Other aspects covered include evolution of the state of the art; identification of Japanese researchers, R and D organizations, and resources; and comparative U.S. efforts. The general time frame of the studies corresponds to future industrial applications and potential commercial impacts spanning approximately the next two decades.

  14. Computer technologies for industrial risk prevention and emergency management

    International Nuclear Information System (INIS)

    Balduccelli, C.; Bologna, S.; Di Costanzo, G.; Vicoli, G.

    1996-07-01

    This document provides an overview of problems related to the engineering of computer-based systems for industrial risk prevention and emergency management. Such systems are rather complex and subject to precise reliability and safety requirements. With the evolution of information technologies, such systems are becoming the means for building protective barriers that reduce the risk associated with plant operations. To give this document more generality, and to avoid concentrating on a single plant, emergency management systems are treated in more detail than accident prevention systems. The document is organized in six chapters. Chapter one is an introduction to the problem and its state of the art, with particular emphasis on the definition of safety requirements. Chapter two is an introduction to the problems related to emergency management and to the training of the operators in charge of this task. Chapter three deals in detail with training support systems, in particular the MUSTER (multi-user system for training and evaluation of environmental emergency response) system. Chapter four deals in detail with decision support systems, in particular the ISEM (information technology support for emergency management) system. Chapter five illustrates an application supporting the operators of the Civil Protection Department in the management of emergencies in the industrial chemical field. Chapter six synthesizes the state of the art and future possibilities, identifying the research and development activities most promising for the future.

  15. Irradiated fuel performance evaluation technology development

    International Nuclear Information System (INIS)

    Koo, Yang Hyun; Bang, J. G.; Kim, D. H.

    2012-01-01

    An alpha-version performance code for dual-cooled annular fuel under steady-state operation, called 'DUOS', has been developed by applying the proposed performance models and methodology. Furthermore, a nonlinear finite element module that could be integrated into a transient/accident fuel performance code was also developed and evaluated against a commercial FE code. The world's first and second irradiation and PIE tests of annular pellets for dual-cooled annular fuel have been completed. An in-pile irradiation test DB of annular pellets up to a burnup of 10,000 MWd/MTU was established through the first test, and the cracking behavior of annular pellets and the swelling rate at low temperature were studied. For the irradiation test of dual-cooled annular fuel under PWR-simulating steady-state conditions, the irradiation test rig and rod design, mock-up manufacture, and performance tests have been completed through the international collaboration program with the Halden reactor project. The irradiation test of large-grain pellets continued from 2002 to 2011 and was completed successfully. A burnup of 70,000 MWd/MTU, the highest burnup among domestic irradiation test pellets, was achieved

  16. Multi-Language Programming Environments for High Performance Java Computing

    OpenAIRE

    Vladimir Getov; Paul Gray; Sava Mintchev; Vaidy Sunderam

    1999-01-01

    Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community’s reliance on established scientific packages. As a consequence, programmers of high‐performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java‐to‐C Interface (JCI) tool which provides ...

  17. Computational modelling of expressive music performance in hexaphonic guitar

    OpenAIRE

    Siquier, Marc

    2017-01-01

    Computational modelling of expressive music performance has been widely studied in the past. While previous work in this area has mainly focused on classical piano music, there has been very little work on guitar music, and such work has focused on monophonic guitar playing. In this work, we present a machine learning approach to automatically generate expressive performances from non-expressive music scores for polyphonic guitar. We treated the guitar as a hexaphonic instrument, obtaining ...

  18. Enabling High-Performance Computing as a Service

    KAUST Repository

    AbdelBaky, Moustafa

    2012-10-01

    With the right software infrastructure, clouds can provide scientists with as-a-service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a representative HPC application. © 2012 IEEE.

  19. Computer science of the high performance; Informatica del alto rendimiento

    Energy Technology Data Exchange (ETDEWEB)

    Moraleda, A.

    2008-07-01

    High-performance computing is taking shape as a powerful accelerator of the innovation process, drastically reducing the waiting times for access to results and findings in a growing number of processes and activities as complex and important as medicine, genetics, pharmacology, the environment, natural resource management, and the simulation of complex processes in a wide variety of industries. (Author)

  20. Performativity, Fabrication and Trust: Exploring Computer-Mediated Moderation

    Science.gov (United States)

    Clapham, Andrew

    2013-01-01

    Based on research conducted in an English secondary school, this paper explores computer-mediated moderation as a performative tool. The Module Assessment Meeting (MAM) was the moderation approach under investigation. I mobilise ethnographic data generated by a key informant, and triangulated with that from other actors in the setting, in order to…

  1. Running Interactive Jobs on Peregrine | High-Performance Computing | NREL

    Science.gov (United States)

    This page provides instructions and examples for running interactive jobs on Peregrine. An interactive job provides a shell prompt on a compute node, which allows users to execute commands and scripts, start GUIs, etc., just as they would on the login nodes, with the commands executing on the compute node rather than on a login node, so the work is performed on the compute nodes. The -V option ...

  2. Performance Evaluation of a Mobile Wireless Computational Grid ...

    African Journals Online (AJOL)

    This work developed and simulated a mathematical model of a mobile wireless computational Grid architecture using queuing network theory, in order to evaluate the performance of the load-balancing three-tier hierarchical configuration. The throughput and resource utilization metrics were measured and the ...
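
    For context, the sketch below computes the utilization and throughput metrics that an M/M/1 approximation of each tier would yield; the three tiers and all arrival and service rates are illustrative assumptions, not the paper's model.

```python
# M/M/1 steady-state metrics per tier: utilization rho = lambda/mu,
# throughput equals the arrival rate (when rho < 1), and the mean number
# of jobs in the system is rho / (1 - rho). Rates below are placeholders.

def mm1_metrics(arrival_rate, service_rate):
    rho = arrival_rate / service_rate   # utilization, must be < 1
    throughput = arrival_rate           # jobs/s in steady state
    mean_jobs = rho / (1.0 - rho)       # mean number of jobs in system
    return rho, throughput, mean_jobs

tiers = {"mobile": (4.0, 10.0), "proxy": (4.0, 8.0), "cluster": (4.0, 5.0)}
for name, (lam, mu) in tiers.items():
    rho, x, n = mm1_metrics(lam, mu)
    print(f"{name}: utilization={rho:.2f}, throughput={x:.1f}/s, mean jobs={n:.2f}")
```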

  3. Analysis of Alternatives (AoA) of Open Collaboration and Research Capabilities Collaboration in Research and Engineering in Advanced Technology and Education and High-Performance Computing Innovation Center (HPCIC) on the LVOC.

    Energy Technology Data Exchange (ETDEWEB)

    Vrieling, P. Douglas [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2016-01-01

    The Livermore Valley Open Campus (LVOC), a joint initiative of the National Nuclear Security Administration (NNSA), Lawrence Livermore National Laboratory (LLNL), and Sandia National Laboratories (SNL), enhances the national security missions of NNSA by promoting greater collaboration between world-class scientists at the national security laboratories, and their partners in industry and academia. Strengthening the science, technology, and engineering (ST&E) base of our nation is one of the NNSA’s top goals. By conducting coordinated and collaborative programs, LVOC enhances both the NNSA and the broader national science and technology base, and helps to ensure the health of core capabilities at LLNL and SNL. These capabilities must remain strong to enable the laboratories to execute their primary mission for NNSA.

  4. The performance of low-cost commercial cloud computing as an alternative in computational chemistry.

    Science.gov (United States)

    Thackston, Russell; Fortenberry, Ryan C

    2015-05-05

    The growth of commercial cloud computing (CCC) as a viable means of computational infrastructure is largely unexplored for the purposes of quantum chemistry. In this work, the PSI4 suite of computational chemistry programs is installed on five different types of Amazon Web Services CCC platforms. The performance for a set of electronically excited state single-point energies is compared between these CCC platforms and typical, "in-house" physical machines. Further considerations are made for the number of cores or virtual CPUs (vCPUs, for the CCC platforms), but no considerations are made for full parallelization of the program (even though parallelization of the BLAS library is implemented), complete high-performance computing cluster utilization, or steal time. Even with this most pessimistic view of the computations, CCC resources are shown to be more cost effective for significant numbers of typical quantum chemistry computations. Large numbers of large computations are still best utilized by more traditional means, but smaller-scale research may be more effectively undertaken through CCC services. © 2015 Wiley Periodicals, Inc.
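
    A back-of-envelope sketch of the kind of cost comparison the study makes; every price, runtime, and job count below is a placeholder, not data from the paper.

```python
# Compare the marginal cost of one job run in the cloud against the
# amortized per-job cost of an owned machine. All numbers are invented.

def cloud_cost(runtime_hours, hourly_rate):
    return runtime_hours * hourly_rate

def inhouse_cost_per_job(purchase_price, lifetime_years, jobs_per_year,
                         annual_power_and_admin):
    total = purchase_price + lifetime_years * annual_power_and_admin
    return total / (lifetime_years * jobs_per_year)

print(cloud_cost(runtime_hours=6.0, hourly_rate=0.50))   # e.g. $3.00/job
print(inhouse_cost_per_job(8000.0, 4, 500, 1200.0))      # e.g. $6.40/job
```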

  5. The Diffusion of Computer-Based Technology in K-12 Schools: Teachers' Perspectives

    Science.gov (United States)

    Colandrea, John Louis

    2012-01-01

    Because computer technology represents a major financial outlay for school districts and is an efficient method of preparing and delivering lessons, studying the process of teacher adoption of computer use is beneficial and adds to the current body of knowledge. Because the teacher is the ultimate user of computer technology for lesson preparation…

  6. A Detailed Analysis over Some Important Issues towards Using Computer Technology into the EFL Classrooms

    Science.gov (United States)

    Gilakjani, Abbas Pourhosein

    2014-01-01

    Computer technology has changed the ways we work, learn, interact and spend our leisure time. Computer technology has changed every aspect of our daily life--how and where we get our news, how we order goods and services, and how we communicate. This study investigates some of the significant issues concerning the use of computer technology…

  7. Performance-Based Technology Selection Filter description report

    International Nuclear Information System (INIS)

    O'Brien, M.C.; Morrison, J.L.; Morneau, R.A.; Rudin, M.J.; Richardson, J.G.

    1992-05-01

    A formal methodology has been developed for identifying technology gaps and assessing innovative or postulated technologies for inclusion in proposed Buried Waste Integrated Demonstration (BWID) remediation systems. Called the Performance-Based Technology Selection Filter, the methodology provides a formalized selection process where technologies and systems are rated and assessments made based on performance measures, and regulatory and technical requirements. The results are auditable, and can be validated with field data. This analysis methodology will be applied to the remedial action of transuranic contaminated waste pits and trenches buried at the Idaho National Engineering Laboratory (INEL)

  8. Employees Technology Usage Adaptation Impact on Companies’ Logistics Service Performance

    Directory of Open Access Journals (Sweden)

    A. Zafer ACAR

    2018-01-01

    Full Text Available The information technology (IT) capability of companies is one of the determinants of their competitive power. However, IT outputs depend on employees' intentions to use them. As a technological investment, port automation systems are widely used in container terminals. Therefore, behavioral intention in the usage of various IT applications is one of the important factors that may affect logistics service performance. This study aims to explore the impact of employees' technology usage adaptation on the logistics service performance of ports. In this context, the behavioral intentions of employees who use port automation systems are investigated using the Technology Acceptance Model.

  9. Performance-Based Technology Selection Filter description report

    Energy Technology Data Exchange (ETDEWEB)

    O' Brien, M.C.; Morrison, J.L.; Morneau, R.A.; Rudin, M.J.; Richardson, J.G.

    1992-05-01

    A formal methodology has been developed for identifying technology gaps and assessing innovative or postulated technologies for inclusion in proposed Buried Waste Integrated Demonstration (BWID) remediation systems. Called the Performance-Based Technology Selection Filter, the methodology provides a formalized selection process where technologies and systems are rated and assessments made based on performance measures, and regulatory and technical requirements. The results are auditable, and can be validated with field data. This analysis methodology will be applied to the remedial action of transuranic contaminated waste pits and trenches buried at the Idaho National Engineering Laboratory (INEL).

  10. The Acceptance of Computer Technology by Teachers in Early Childhood Education

    Science.gov (United States)

    Jeong, Hye In; Kim, Yeolib

    2017-01-01

    This study investigated kindergarten teachers' decision-making process regarding the acceptance of computer technology. We incorporated the Technology Acceptance Model framework, in addition to computer self-efficacy, subjective norm, and personal innovativeness in education technology as external variables. The data were obtained from 160…

  11. Technological Metaphors and Moral Education: The Hacker Ethic and the Computational Experience

    Science.gov (United States)

    Warnick, Bryan R.

    2004-01-01

    This essay is an attempt to understand how technological metaphors, particularly computer metaphors, are relevant to moral education. After discussing various types of technological metaphors, it is argued that technological metaphors enter moral thought through their "functional descriptions." The computer metaphor is then explored by turning to…

  12. Performance evaluation of scientific programs on advanced architecture computers

    International Nuclear Information System (INIS)

    Walker, D.W.; Messina, P.; Baille, C.F.

    1988-01-01

    Recently a number of advanced architecture machines have become commercially available. These new machines promise better cost-performance than traditional computers, and some of them have the potential of competing with current supercomputers, such as the Cray X/MP, in terms of maximum performance. This paper describes an on-going project to evaluate a broad range of advanced architecture computers using a number of complete scientific application programs. The computers to be evaluated include distributed-memory machines such as the NCUBE, INTEL and Caltech/JPL hypercubes and the MEIKO computing surface; shared-memory, bus architecture machines such as the Sequent Balance and the Alliant; very long instruction word machines such as the Multiflow Trace 7/200 computer; traditional supercomputers such as the Cray X/MP and Cray-2; and SIMD machines such as the Connection Machine. Currently 11 application codes from a number of scientific disciplines have been selected, although it is not intended to run all codes on all machines. Results are presented for two of the codes (QCD and missile tracking), and future work is proposed

  13. High-performance computing in accelerating structure design and analysis

    International Nuclear Information System (INIS)

    Li Zenghai; Folwell, Nathan; Ge Lixin; Guetz, Adam; Ivanov, Valentin; Kowalski, Marc; Lee, Lie-Quan; Ng, Cho-Kuen; Schussman, Greg; Stingelin, Lukas; Uplenchwar, Ravindra; Wolf, Michael; Xiao, Liling; Ko, Kwok

    2006-01-01

    Future high-energy accelerators such as the Next Linear Collider (NLC) will accelerate multi-bunch beams of high current and low emittance to obtain high luminosity, which put stringent requirements on the accelerating structures for efficiency and beam stability. While numerical modeling has been quite standard in accelerator R and D, designing the NLC accelerating structure required a new simulation capability because of the geometric complexity and level of accuracy involved. Under the US DOE Advanced Computing initiatives (first the Grand Challenge and now SciDAC), SLAC has developed a suite of electromagnetic codes based on unstructured grids and utilizing high-performance computing to provide an advanced tool for modeling structures at accuracies and scales previously not possible. This paper will discuss the code development and computational science research (e.g. domain decomposition, scalable eigensolvers, adaptive mesh refinement) that have enabled the large-scale simulations needed for meeting the computational challenges posed by the NLC as well as projects such as the PEP-II and RIA. Numerical results will be presented to show how high-performance computing has made a qualitative improvement in accelerator structure modeling for these accelerators, either at the component level (single cell optimization), or on the scale of an entire structure (beam heating and long-range wakefields)

  14. Computer Music

    Science.gov (United States)

    Cook, Perry R.

    This chapter covers algorithms, technologies, computer languages, and systems for computer music. Computer music involves the application of computers and other digital/electronic technologies to music composition, performance, theory, history, and the study of perception. The field combines digital signal processing, computational algorithms, computer languages, hardware and software systems, acoustics, psychoacoustics (low-level perception of sounds from the raw acoustic signal), and music cognition (higher-level perception of musical style, form, emotion, etc.).

  15. Static Memory Deduplication for Performance Optimization in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Gangyong Jia

    2017-04-01

    Full Text Available In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand of memory capacity and subsequent increase in the energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of the response time is negligible.
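
    A simplified sketch of offline page deduplication restricted to the code segment, in the spirit of the SMD description above: pages are hashed and duplicates mapped to a single canonical copy. The page size, hashing choice, and byte-string interface are illustrative assumptions; the real system operates on VM memory images.

```python
# Hash fixed-size pages of a code segment and keep one canonical copy of
# each distinct page; a page table maps every page to its canonical index.
import hashlib

PAGE_SIZE = 4096

def deduplicate_code_segment(code_bytes):
    """Return (unique_pages, page_table), where page_table maps each page
    index to the index of its canonical copy in unique_pages."""
    unique_pages, index_by_hash, page_table = [], {}, []
    for off in range(0, len(code_bytes), PAGE_SIZE):
        page = code_bytes[off:off + PAGE_SIZE]
        h = hashlib.sha256(page).digest()
        if h not in index_by_hash:
            index_by_hash[h] = len(unique_pages)
            unique_pages.append(page)
        page_table.append(index_by_hash[h])
    return unique_pages, page_table

pages, table = deduplicate_code_segment(b"\x90" * (4 * PAGE_SIZE))
print(len(pages), table)  # 1 unique page shared by all 4 -> [0, 0, 0, 0]
```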

  16. Static Memory Deduplication for Performance Optimization in Cloud Computing.

    Science.gov (United States)

    Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan

    2017-04-27

    In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand of memory capacity and subsequent increase in the energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques which reduce memory demand through page sharing are being adopted. However, such techniques suffer from overheads in terms of number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of the response time is negligible.

  17. Computational Analysis on Performance of Thermal Energy Storage (TES) Diffuser

    Science.gov (United States)

    Adib, M. A. H. M.; Adnan, F.; Ismail, A. R.; Kardigama, K.; Salaam, H. A.; Ahmad, Z.; Johari, N. H.; Anuar, Z.; Azmi, N. S. N.

    2012-09-01

    Application of a thermal energy storage (TES) system reduces cost and energy consumption. The performance of the overall operation is affected by the diffuser design. In this study, computational analysis is used to determine the thermocline thickness. Three-dimensional simulations with different tank height-to-diameter (HD) ratios, diffuser openings, and numbers of diffuser holes are investigated. Simulations of medium-HD tanks with a double-ring octagonal diffuser show good thermocline behavior and a clear distinction between warm and cold water. The results show that the best thermocline-thickness performance at 50% charging time occurs in the medium tank with a height-to-diameter ratio of 4.0 and a double-ring octagonal diffuser with 48 holes (9 mm opening, ~60%), compared with the 6 mm (~40%) and 12 mm (~80%) openings. In conclusion, computational analysis methods are very useful in studying the performance of thermal energy storage (TES).
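
    One common way to quantify thermocline thickness from a simulated axial temperature profile is the height span over which the dimensionless temperature rises from 0.1 to 0.9; the sketch below implements that definition as a hedged illustration (the paper's exact criterion may differ, and the profile data are invented).

```python
# Thermocline thickness as the height span where the dimensionless
# temperature theta = (T - T_cold)/(T_hot - T_cold) lies in [0.1, 0.9].

def thermocline_thickness(heights, temps, t_cold, t_hot, lo=0.1, hi=0.9):
    theta = [(t - t_cold) / (t_hot - t_cold) for t in temps]
    inside = [h for h, th in zip(heights, theta) if lo <= th <= hi]
    return max(inside) - min(inside) if inside else 0.0

heights = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]   # m, illustrative
temps   = [5.0, 5.2, 5.8, 8.0, 12.0, 13.5, 13.9, 14.0, 14.0]  # deg C
print(thermocline_thickness(heights, temps, t_cold=5.0, t_hot=14.0))
```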

  18. Computational Analysis on Performance of Thermal Energy Storage (TES) Diffuser

    International Nuclear Information System (INIS)

    Adib, M A H M; Ismail, A R; Kardigama, K; Salaam, H A; Ahmad, Z; Johari, N H; Anuar, Z; Azmi, N S N; Adnan, F

    2012-01-01

    Application of a thermal energy storage (TES) system reduces cost and energy consumption. The performance of the overall operation is affected by the diffuser design. In this study, computational analysis is used to determine the thermocline thickness. Three-dimensional simulations with different tank height-to-diameter (HD) ratios, diffuser openings, and numbers of diffuser holes are investigated. Simulations of medium-HD tanks with a double-ring octagonal diffuser show good thermocline behavior and a clear distinction between warm and cold water. The results show that the best thermocline-thickness performance at 50% charging time occurs in the medium tank with a height-to-diameter ratio of 4.0 and a double-ring octagonal diffuser with 48 holes (9 mm opening, ∼60%), compared with the 6 mm (∼40%) and 12 mm (∼80%) openings. In conclusion, computational analysis methods are very useful in studying the performance of thermal energy storage (TES).

  19. Rotary engine performance computer program (RCEMAP and RCEMAPPC): User's guide

    Science.gov (United States)

    Bartrand, Timothy A.; Willis, Edward A.

    1993-01-01

    This report is a user's guide for a computer code that simulates the performance of several rotary combustion engine configurations. It is intended to assist prospective users in getting started with RCEMAP and/or RCEMAPPC. RCEMAP (Rotary Combustion Engine performance MAP generating code) is the mainframe version, while RCEMAPPC is a simplified subset designed for the personal computer, or PC, environment. Both versions are based on an open, zero-dimensional combustion system model for the prediction of instantaneous pressures, temperature, chemical composition and other in-chamber thermodynamic properties. Both versions predict overall engine performance and thermal characteristics, including bmep, bsfc, exhaust gas temperature, average material temperatures, and turbocharger operating conditions. Required inputs include engine geometry, materials, constants for use in the combustion heat release model, and turbomachinery maps. Illustrative examples and sample input files for both versions are included.
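
    A hedged sketch of a zero-dimensional in-chamber pressure calculation of the general kind such codes implement, here as a simplified closed-system variant: a first-law energy balance integrated over crank angle with a Wiebe heat-release curve, ideal gas behavior, and no leakage or heat loss. The volume function, constants, and inputs are placeholders, not the rotary-engine model itself.

```python
# Zero-D pressure trace: integrate dP = [(gamma-1) dQ - gamma P dV] / V
# over crank angle, with heat release shaped by a Wiebe burn fraction.
import math

GAMMA = 1.3  # effective ratio of specific heats, illustrative

def wiebe_burn_fraction(theta, theta0, duration, a=5.0, m=2.0):
    if theta < theta0:
        return 0.0
    x = min((theta - theta0) / duration, 1.0)
    return 1.0 - math.exp(-a * x ** (m + 1))

def pressure_trace(volume, q_total, theta0, duration, p0, steps=360):
    """Explicit Euler march of the in-chamber pressure over one revolution."""
    thetas = [i * math.pi / 180 for i in range(steps + 1)]
    p, out = p0, []
    for i in range(steps):
        v0, v1 = volume(thetas[i]), volume(thetas[i + 1])
        dq = q_total * (wiebe_burn_fraction(thetas[i + 1], theta0, duration)
                        - wiebe_burn_fraction(thetas[i], theta0, duration))
        p += (-GAMMA * p * (v1 - v0) + (GAMMA - 1.0) * dq) / v0
        out.append(p)
    return out

vol = lambda th: 5e-4 + 4.5e-4 * math.cos(th)   # toy chamber volume, m^3
trace = pressure_trace(vol, q_total=800.0, theta0=math.pi / 2,
                       duration=math.pi / 3, p0=1.0e5)
print(max(trace))  # peak pressure, Pa (illustrative)
```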

  20. A performance model for the communication in fast multipole methods on high-performance computing platforms

    KAUST Repository

    Ibeid, Huda; Yokota, Rio; Keyes, David E.

    2016-01-01

    model and the actual communication time on four high-performance computing (HPC) systems, when latency, bandwidth, network topology, and multicore penalties are all taken into account. To our knowledge, this is the first formal characterization
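
    The record is truncated, but communication-time models of this kind are typically assembled from per-message latency and bandwidth terms; a generic form (an assumption for illustration, not the paper's exact formulation) is:

```latex
% Generic point-to-point model: latency \alpha, inverse bandwidth \beta,
% message sizes m_i over n_msg messages.
T_{\mathrm{comm}} = \sum_{i=1}^{n_{\mathrm{msg}}} \left( \alpha + \beta\, m_i \right)
```

    Per the abstract, the paper's contribution is validating such a model against measured communication times once network topology and multicore penalties are also taken into account.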

  1. Scalability of DL_POLY on High Performance Computing Platform

    CSIR Research Space (South Africa)

    Mabakane, Mabule S

    2017-12-01

    Full Text Available SACJ 29(3) December ... when using many processors within the compute nodes of the supercomputer. The type of the processors of the compute nodes and their memory also play an important role in the overall performance of a parallel application running on a supercomputer. DL...

  2. Component-based software for high-performance scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  3. Performance measurements in 3D ideal magnetohydrodynamic stability computations

    International Nuclear Information System (INIS)

    Anderson, D.V.; Cooper, W.A.; Gruber, R.; Schwenn, U.

    1989-10-01

    The 3D ideal magnetohydrodynamic stability code TERPSICHORE has been designed to take advantage of the vector and microtasking capabilities of the latest CRAY computers. To keep the number of operations small, the most efficient algorithms have been applied in each computational step. The program investigates the stability properties of fusion-reactor-relevant plasma configurations confined by magnetic fields. For a typical 3D HELIAS configuration that has been considered, we obtain an overall performance in excess of 1 Gflops on an eight-processor CRAY-YMP machine. (author) 3 figs., 1 tab., 11 refs

  4. Component-based software for high-performance scientific computing

    International Nuclear Information System (INIS)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly

  5. Nuclear forces and high-performance computing: The perfect match

    International Nuclear Information System (INIS)

    Luu, T; Walker-Loud, A

    2009-01-01

    High-performance computing is now enabling the calculation of certain hadronic interaction parameters directly from Quantum Chromodynamics, the quantum field theory that governs the behavior of quarks and gluons and is ultimately responsible for the nuclear strong force. In this paper we briefly describe the state of the field and show how other aspects of hadronic interactions will be ascertained in the near future. We give estimates of computational requirements needed to obtain these goals, and outline a procedure for incorporating these results into the broader nuclear physics community.

  6. Performance Analyses in an Assistive Technology Service Delivery Process

    DEFF Research Database (Denmark)

    Petersen, Anne Karin

    Performance Analyses in an Assistive Technology Service Delivery Process. Keywords: process model, occupational performance, assistive technologies. The poster is about teaching students, using models and theory in education and practice. It is related to the occupational therapy process and professional...... of top-down, client-centred and activity-based interventions, ERGO/Munksgaard. Fisher, A. & Griswold, L. A., 2014. Performance Skills. In: B. Schell, ed., 2014. Occupational Therapy: Willard & Spackman's Occupational Therapy, 12th ed., pp. 249-264. Cook, A.M., Polgar, J.M. (2015) Assistive Technologies...

  7. High performance nano-composite technology development

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Whung Whoe; Rhee, C. K.; Kim, S. J.; Park, S. D. [KAERI, Taejon (Korea, Republic of); Kim, E. K.; Jung, S. Y.; Ryu, H. J. [KRICT, Taejon (Korea, Republic of); Hwang, S. S.; Kim, J. K.; Hong, S. M. [KIST, Taejon (Korea, Republic of); Chea, Y. B. [KIGAM, Taejon (Korea, Republic of); Choi, C. H.; Kim, S. D. [ATS, Taejon (Korea, Republic of); Cho, B. G.; Lee, S. H. [HGREC, Taejon (Korea, Republic of)

    1999-06-15

    The trend in new material development is toward not only high performance but also environmental friendliness. In particular, nano-composite materials, which enhance the functional properties of components and extend component life, thereby reducing waste and environmental contamination, have a great effect on various industrial areas. The applications of nano-composites, depending on the polymer matrix and filler materials, range from semiconductors to the medical field. In spite of the merits of nano-composites, nano-composite studies are confined to a few special materials at laboratory scale, because a few technical difficulties are still unresolved. Therefore, the purpose of this study is to establish systematic plans for carrying out next-generation projects in order to compete with other countries and overcome the protective policies of advanced countries, by grasping overseas development trends and our present status. (author).

  9. High performance nano-composite technology development

    International Nuclear Information System (INIS)

    Kim, Whung Whoe; Rhee, C. K.; Kim, S. J.; Park, S. D.; Kim, E. K.; Jung, S. Y.; Ryu, H. J.; Hwang, S. S.; Kim, J. K.; Hong, S. M.; Chea, Y. B.; Choi, C. H.; Kim, S. D.; Cho, B. G.; Lee, S. H.

    1999-06-01

    The trend in new material development is toward not only high performance but also environmental friendliness. In particular, nano-composite materials, which enhance the functional properties of components and extend component life, thereby reducing waste and environmental contamination, have a great effect on various industrial areas. The applications of nano-composites, depending on the polymer matrix and filler materials, range from semiconductors to the medical field. In spite of the merits of nano-composites, nano-composite studies are confined to a few special materials at laboratory scale, because a few technical difficulties are still unresolved. Therefore, the purpose of this study is to establish systematic plans for carrying out next-generation projects in order to compete with other countries and overcome the protective policies of advanced countries, by grasping overseas development trends and our present status. (author).

  10. High Performance Fuel Technology Development(I)

    International Nuclear Information System (INIS)

    Song, Kun Woo; Kim, Keon Sik; Bang, Jeong Yong; Park, Je Keon; Chen, Tae Hyun; Kim, Hyung Kyu

    2010-04-01

    The dual-cooled annular fuel has been investigated for the purpose of achieving a power uprate of 20% and decreasing the pellet temperature by 30%. A 12x12 rod array and basic design, mechanically compatible with the OPR-1000, were developed. The reactor core analysis has been performed using this design, and the results have shown that the criteria of nuclear, thermohydraulic, and safety design are satisfied and that the pellet temperature can be lowered by 40% even at 120% power. The basic design of the fuel components was developed, and the cladding thickness was designed through analysis and experiments. Solutions have been proposed and analyzed for technical issues such as 'inner channel blockage' and 'imbalance between inner and outer coolant'. The annular pellet was fabricated with good control of shape and size; in particular, a new sintering technique has been developed to control the deviation of the inner diameter to within ±5μm. An irradiation test of annular pellets has been conducted up to 10 MWD/kgU to determine the densification and swelling behaviors. Eleven types of candidate materials have been developed for the PCI-endurance pellet, and the material containing the Mn-Al additive showed much better creep performance than the UO2 material. The HANA cladding has been irradiated up to 61 MWD/kgU, and the results have shown that its oxidation resistance is 40% better than that of Zircaloy. Thirty types of candidate materials for the next generation have been developed through alloy design and property tests

  11. Human and Robotic Space Mission Use Cases for High-Performance Spaceflight Computing

    Science.gov (United States)

    Some, Raphael; Doyle, Richard; Bergman, Larry; Whitaker, William; Powell, Wesley; Johnson, Michael; Goforth, Montgomery; Lowry, Michael

    2013-01-01

    Spaceflight computing is a key resource in NASA space missions and a core determining factor of spacecraft capability, with ripple effects throughout the spacecraft, end-to-end system, and mission. Onboard computing can be aptly viewed as a "technology multiplier" in that advances provide direct dramatic improvements in flight functions and capabilities across the NASA mission classes, and enable new flight capabilities and mission scenarios, increasing science and exploration return. Space-qualified computing technology, however, has not advanced significantly in well over ten years and the current state of the practice fails to meet the near- to mid-term needs of NASA missions. Recognizing this gap, the NASA Game Changing Development Program (GCDP), under the auspices of the NASA Space Technology Mission Directorate, commissioned a study on space-based computing needs, looking out 15-20 years. The study resulted in a recommendation to pursue high-performance spaceflight computing (HPSC) for next-generation missions, and a decision to partner with the Air Force Research Lab (AFRL) in this development.

  12. Assessment of the fit of removable partial denture fabricated by computer-aided designing/computer aided manufacturing technology.

    Science.gov (United States)

    Arafa, Khalid A O

    2018-01-01

    To assess the level of evidence that supports the quality of fit for removable partial dentures (RPDs) fabricated by computer-aided designing/computer-aided manufacturing (CAD/CAM) and rapid prototyping (RP) technology. Methods: An electronic search was performed in the Google Scholar, PubMed, and Cochrane library search engines, using Boolean operators. All articles published in English in the period from 1950 until April 2017 were eligible for inclusion in this review. Articles containing the search terms in any part of the article (including titles, abstracts, or article texts) were screened, which resulted in 214 articles. After exclusion of irrelevant and duplicated articles, 12 papers were included in this systematic review. Results: All the included studies were case reports, except one study, which was a case series that recruited 10 study participants. Visual and tactile examination on the cast or clinically in the patient's mouth was the most-used method for assessing the fit of RPDs. Of all included studies, only one assessed the internal fit between RPDs and oral tissues using silicone registration material. The vast majority of included studies found that the fit of RPDs ranged from satisfactory to excellent. Conclusion: Despite the lack of clinical trials that provide strong evidence, the available evidence supported the claim of good fit of RPDs fabricated by new technologies using CAD/CAM.

  13. Computer-aided performance monitoring program at Diablo Canyon

    International Nuclear Information System (INIS)

    Nelson, T.; Glynn, R. III; Kessler, T.C.

    1992-01-01

    This paper describes the thermal performance monitoring program at Pacific Gas & Electric Company's (PG&E's) Diablo Canyon Nuclear Power Plant. The plant performance monitoring program at Diablo Canyon uses the THERMAC performance monitoring and analysis computer software provided by Expert-EASE Systems. THERMAC is used to collect performance data from the plant process computers, condition that data to adjust for measurement errors and missing data points, evaluate cycle and component-level performance, archive the data for trend analysis and generate performance reports. The current status of the program is that, after a fair amount of "tuning" of the basic "thermal kit" models provided with the initial THERMAC installation, we have successfully baselined both units to cycle isolation test data from previous reload cycles. Over the course of the past few months, we have accumulated enough data to generate meaningful performance trends and, as a result, have been able to use THERMAC to track a condenser fouling problem that was costing enough megawatts to attract corporate-level attention. Trends from THERMAC clearly related the megawatt loss to a steadily degrading condenser cleanliness factor and verified the subsequent gain in megawatts after the condenser was cleaned. In the future, we expect to rebaseline THERMAC to a beginning of cycle (BOC) data set and to use the program to help track feedwater nozzle fouling

  14. Elucidate Innovation Performance of Technology-driven Mergers and Acquisitions

    Energy Technology Data Exchange (ETDEWEB)

    Huang, L.; Wang, K.; Yu, H.; Shang, L.; Mitkova, L.

    2016-07-01

    The importance and value of Mergers and Acquisitions (M&As) have increased with the expectation of obtaining key technology capabilities and a rapid impact on innovation. This article develops an original analytical framework to elucidate the impact of the technology and product relatedness (similarity/complementarity) of technology-driven M&A partners on post-M&A innovation performance. We present results drawing on multiple case studies of Chinese high-tech firms from three industries. (Author)

  15. Overview of Parallel Platforms for Common High Performance Computing

    Directory of Open Access Journals (Sweden)

    T. Fryza

    2012-04-01

    Full Text Available The paper deals with various parallel platforms used for high performance computing in the signal processing domain. More precisely, methods exploiting multicore central processing units, such as the message passing interface and OpenMP, are taken into account. The properties of the programming methods are experimentally demonstrated on a fast Fourier transform and a discrete cosine transform, and they are compared with the possibilities of MATLAB's built-in functions and of Texas Instruments digital signal processors with very long instruction word architectures. New FFT and DCT implementations were proposed and tested. The implementations were compared with CPU-based computing methods and with the possibilities of the Texas Instruments digital signal processing library on C6747 floating-point DSPs. The optimal combination of computing methods in the signal processing domain and the implementation of new, fast routines are proposed as well.
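    As a rough illustration of this kind of multicore comparison (an assumed workload and worker count, not the authors' benchmark), the sketch below times a batch of FFTs serially and across worker processes:

```python
import time
import numpy as np
from concurrent.futures import ProcessPoolExecutor

# Illustrative sketch (not the paper's benchmark): compare a serial batch of
# FFTs against the same batch spread over worker processes.

def fft_batch(seed, n=1 << 16, reps=50):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(n)
    for _ in range(reps):
        y = np.fft.fft(x)
    return float(np.abs(y).sum())  # return something so work isn't skipped

if __name__ == "__main__":
    seeds = list(range(8))

    t0 = time.perf_counter()
    serial = [fft_batch(s) for s in seeds]
    t1 = time.perf_counter()

    with ProcessPoolExecutor(max_workers=4) as pool:
        parallel = list(pool.map(fft_batch, seeds))
    t2 = time.perf_counter()

    assert np.allclose(serial, parallel)
    print(f"serial:   {t1 - t0:.2f} s")
    print(f"parallel: {t2 - t1:.2f} s on 4 workers")
```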

  16. Heat exchanger performance analysis programs for the personal computer

    International Nuclear Information System (INIS)

    Putman, R.E.

    1992-01-01

    Numerous utility industry heat exchange calculations are repetitive and thus lend themselves to being performed on a Personal Computer. These programs may be regarded as engineering tools which, when put together, can form a Toolbox. However, the practicing Results Engineer in the utility industry desires not only programs that are robust as well as easy to use, but that can also be used on both desktop and laptop PCs. The latter also offer the opportunity to take the computer into the plant or control room, and use it there to process test or operating data right on the spot. Most programs evolve through the needs which arise in the course of day-to-day work. This paper describes several of the more useful programs of this type and outlines some of the guidelines to be followed when designing personal computer programs for use by the practicing Results Engineer
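    A classic example of such a repetitive calculation (chosen here for illustration; the abstract does not enumerate the actual program set) is rating a counter-flow heat exchanger with the log-mean temperature difference:

```python
import math

# Illustrative sketch: heat duty of a counter-flow exchanger from the
# log-mean temperature difference (LMTD), Q = U * A * LMTD. The numbers
# below are hypothetical, not from the paper.

def lmtd_counterflow(t_hot_in, t_hot_out, t_cold_in, t_cold_out):
    d1 = t_hot_in - t_cold_out   # temperature difference at one end
    d2 = t_hot_out - t_cold_in   # ... and at the other end
    if abs(d1 - d2) < 1e-9:      # limit case: equal end differences
        return d1
    return (d1 - d2) / math.log(d1 / d2)

U = 850.0     # overall heat transfer coefficient [W/m^2-K]
A = 120.0     # heat transfer area [m^2]
lmtd = lmtd_counterflow(140.0, 90.0, 30.0, 70.0)
print(f"LMTD = {lmtd:.1f} K, duty Q = {U * A * lmtd / 1e6:.2f} MW")
```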

  17. Base technology development of new materials for FBR performance innovations

    International Nuclear Information System (INIS)

    Kano, Shigeki; Koyama, Masahiro; Nomura, Shigeo; Morikawa, Satoru; Ueno, Fumiyoshi

    1989-01-01

    This paper describes the base technology development of new materials for FBR performance innovations at the Power Reactor and Nuclear Fuel Development Corporation. The contents are as follows: (1) development of sodium and radiation resistant new materials, (2) development of high performance shielding material, (3) development of high performance control material, (4) development of new functional materials for reactor instrumentation. (author)

  18. High-performance computing on the Intel Xeon Phi how to fully exploit MIC architectures

    CERN Document Server

    Wang, Endong; Shen, Bo; Zhang, Guangyong; Lu, Xiaowei; Wu, Qing; Wang, Yajuan

    2014-01-01

    The aim of this book is to explain to high-performance computing (HPC) developers how to utilize the Intel® Xeon Phi™ series products efficiently. To that end, it introduces some computing grammar, programming technology and optimization methods for using many-integrated-core (MIC) platforms and also offers tips and tricks for actual use, based on the authors' first-hand optimization experience.The material is organized in three sections. The first section, "Basics of MIC", introduces the fundamentals of MIC architecture and programming, including the specific Intel MIC programming environment

  19. Closing the Technological Gender Gap: Feminist Pedagogy in the Computer-Assisted Classroom.

    Science.gov (United States)

    Hesse-Biber, Sharlene; Gilbert, Melissa Kesler

    1994-01-01

    Asserts that, although computers are playing an increasingly important role in the classroom, a technological gender gap serves as a barrier to the effective use of computers by women instructors in higher education. Encourages women to seize computer tools for their own educational purposes and argues for enhancing women's computer learning. (CFR)

  20. Cloud Computing and Information Technology Resource Cost Management for SMEs

    DEFF Research Database (Denmark)

    Kuada, Eric; Adanu, Kwame; Olesen, Henning

    2013-01-01

    This paper analyzes the decision-making problem confronting SMEs considering the adoption of cloud computing as an alternative to in-house computing services provision. The economics of choosing between in-house computing and a cloud alternative is analyzed by comparing the total economic costs...... in determining the relative value of cloud computing....
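    A minimal sketch of this style of comparison (all cost figures and the discount rate are hypothetical) discounts both cost streams to present value:

```python
# Illustrative sketch (hypothetical figures): compare the discounted total
# cost of in-house servers vs. a cloud subscription over a planning horizon.

def present_value(cash_flows, rate):
    """Discount a list of yearly costs (year 0 first) at the given rate."""
    return sum(c / (1 + rate) ** t for t, c in enumerate(cash_flows))

years = 5
rate = 0.08                                  # assumed discount rate
in_house = [40_000] + [12_000] * (years - 1) # upfront hardware, then upkeep
cloud = [18_000] * years                     # flat yearly subscription

pv_in_house = present_value(in_house, rate)
pv_cloud = present_value(cloud, rate)
print(f"in-house PV: {pv_in_house:10,.0f}")
print(f"cloud PV:    {pv_cloud:10,.0f}")
print("cheaper:", "cloud" if pv_cloud < pv_in_house else "in-house")
```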

  1. Cloud computing as a new technology trend in education

    OpenAIRE

    Шамина, Ольга Борисовна; Буланова, Татьяна Валентиновна

    2014-01-01

    The construction and operation of extremely large-scale, commodity-computer datacenters was the key enabler of Cloud Computing. Cloud Computing can offer services that are profitable to use in education. With Cloud Computing it is possible to increase the quality of education, improve communicative culture, and give teachers and students new application opportunities.

  2. Using high performance interconnects in a distributed computing and mass storage environment

    International Nuclear Information System (INIS)

    Ernst, M.

    1994-01-01

    Detector Collaborations of the HERA Experiments typically involve more than 500 physicists from a few dozen institutes. These physicists require access to large amounts of data in a fully transparent manner. Important issues include Distributed Mass Storage Management Systems in a Distributed and Heterogeneous Computing Environment. At the very center of a distributed system, including tens of CPUs and network attached mass storage peripherals, are the communication links. Today scientists are witnessing an integration of computing and communication technology, with the "network" becoming the computer. This contribution reports on a centrally operated computing facility for the HERA Experiments at DESY, including Symmetric Multiprocessor Machines (84 Processors), presently more than 400 GByte of magnetic disk and 40 TB of automated tape storage, tied together by a HIPPI "network". Focussing on the High Performance Interconnect technology, details will be provided about the HIPPI based "Backplane" configured around a 20 Gigabit/s Multi Media Router and the performance and efficiency of the related computer interfaces

  3. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1988-08-01

    Fermilab's Advanced Computer Program (ACP) has been developing highly cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 MFlops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter and intra crate communication. The crates are connected in a hypercube. Site oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction. 10 refs., 7 figs

  4. Simple, parallel, high-performance virtual machines for extreme computations

    International Nuclear Information System (INIS)

    Chokoufe Nejad, Bijan; Ohl, Thorsten; Reuter, Jurgen

    2014-11-01

    We introduce a high-performance virtual machine (VM) written in a numerically fast language like Fortran or C to evaluate very large expressions. We discuss the general concept of how to perform computations in terms of a VM and present specifically a VM that is able to compute tree-level cross sections for any number of external legs, given the corresponding byte code from the optimal matrix element generator, O'Mega. Furthermore, this approach makes it possible to formulate the parallel computation of a single phase space point in a simple and obvious way. We analyze the scaling behaviour with multiple threads as well as the benefits and drawbacks introduced with this method. Our implementation of a VM can run faster than the corresponding native, compiled code for certain processes and compilers, especially for very high multiplicities, and in general has runtimes of the same order of magnitude. By avoiding the tedious compile and link steps, which may fail for source code files of gigabyte sizes, new processes or complex higher order corrections that are currently out of reach could be evaluated with a VM given enough computing power.
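    To make the idea concrete, here is a toy stack-based VM in the spirit described; this is a deliberately simplified sketch, and the authors' Fortran/C VM and O'Mega's actual byte code are far richer:

```python
# Toy stack-based virtual machine, sketched for illustration only; the
# paper's VM evaluates matrix-element byte code, not this mini-language.

def run(bytecode, env):
    """Evaluate byte code: ('push', x), ('load', name), 'add', 'mul'."""
    stack = []
    for op in bytecode:
        if isinstance(op, tuple) and op[0] == "push":
            stack.append(op[1])
        elif isinstance(op, tuple) and op[0] == "load":
            stack.append(env[op[1]])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode: {op!r}")
    return stack.pop()

# Evaluate a*x + b for one "phase space point" (x) without recompiling.
program = [("load", "a"), ("load", "x"), "mul", ("load", "b"), "add"]
print(run(program, {"a": 2.0, "x": 3.5, "b": 1.0}))  # -> 8.0
```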

  5. Improved Laser performance through Planar Waveguide Technology Development

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose a laser technology development to improve efficiency and performance for a variety of science applications including: Lunar Ice, 2-Step Laser Tandem Mass...

  6. Enhanced Performance of Recycled Aggregate Concrete with Atomic Polymer Technology

    Science.gov (United States)

    2012-06-01

    The atomic polymer technology in the form of a mesoporous inorganic polymer (MIP) can effectively improve the material durability and performance of concrete by dramatically increasing the inter/intragranular bond strength of concrete at the nano-scale. The strategy of ...

  7. High-performance computing for structural mechanics and earthquake/tsunami engineering

    CERN Document Server

    Hori, Muneo; Ohsaki, Makoto

    2016-01-01

    Huge earthquakes and tsunamis have caused serious damage to important structures such as civil infrastructure elements, buildings and power plants around the globe. To quantitatively evaluate such damage processes and to design effective prevention and mitigation measures, the latest high-performance computational mechanics technologies, which include terascale to petascale computers, can offer powerful tools. The phenomena covered in this book include seismic wave propagation in the crust and soil, seismic response of infrastructure elements such as tunnels considering soil-structure interactions, seismic response of high-rise buildings, seismic response of nuclear power plants, tsunami run-up over coastal towns and tsunami inundation considering fluid-structure interactions. The book provides all necessary information for addressing these phenomena, ranging from the fundamentals of high-performance computing for finite element methods, key algorithms of accurate dynamic structural analysis, fluid flows ...

  8. Multidisciplinary Design Optimisation (MDO) Methods: Their Synergy with Computer Technology in the Design Process

    Science.gov (United States)

    Sobieszczanski-Sobieski, Jaroslaw

    1999-01-01

    The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that when examined in terms of these attributes the presently available environment can be shown to be inadequate. A radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimisation (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behaviour by interaction of a large number of very simple models may be an inspiration for the above algorithms; the cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.

  9. Multidisciplinary Design Optimization (MDO) Methods: Their Synergy with Computer Technology in Design Process

    Science.gov (United States)

    Sobieszczanski-Sobieski, Jaroslaw

    1998-01-01

    The paper identifies speed, agility, human interface, generation of sensitivity information, task decomposition, and data transmission (including storage) as important attributes for a computer environment to have in order to support engineering design effectively. It is argued that when examined in terms of these attributes the presently available environment can be shown to be inadequate; a radical improvement is needed, and it may be achieved by combining new methods that have recently emerged from multidisciplinary design optimization (MDO) with massively parallel processing computer technology. The caveat is that, for successful use of that technology in engineering computing, new paradigms for computing will have to be developed - specifically, innovative algorithms that are intrinsically parallel so that their performance scales up linearly with the number of processors. It may be speculated that the idea of simulating a complex behavior by interaction of a large number of very simple models may be an inspiration for the above algorithms; the cellular automata are an example. Because of the long lead time needed to develop and mature new paradigms, development should begin now, even though the widespread availability of massively parallel processing is still a few years away.

  10. Performance monitoring for brain-computer-interface actions.

    Science.gov (United States)

    Schurger, Aaron; Gale, Steven; Gozel, Olivia; Blanke, Olaf

    2017-02-01

    When presented with a difficult perceptual decision, human observers are able to make metacognitive judgements of subjective certainty. Such judgements can be made independently of and prior to any overt response to a sensory stimulus, presumably via internal monitoring. Retrospective judgements about one's own task performance, on the other hand, require first that the subject perform a task and thus could potentially be made based on motor processes, proprioceptive, and other sensory feedback rather than internal monitoring. With this dichotomy in mind, we set out to study performance monitoring using a brain-computer interface (BCI), with which subjects could voluntarily perform an action - moving a cursor on a computer screen - without any movement of the body, and thus without somatosensory feedback. Real-time visual feedback was available to subjects during training, but not during the experiment where the true final position of the cursor was only revealed after the subject had estimated where s/he thought it had ended up after 6s of BCI-based cursor control. During the first half of the experiment subjects based their assessments primarily on the prior probability of the end position of the cursor on previous trials. However, during the second half of the experiment subjects' judgements moved significantly closer to the true end position of the cursor, and away from the prior. This suggests that subjects can monitor task performance when the task is performed without overt movement of the body. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. FY 1998 Blue Book: Computing, Information, and Communications: Technologies for the 21st Century

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — As the 21st century approaches, the rapid convergence of computing, communications, and information technology promises unprecedented opportunities for scientific...

  12. Computer program for distance learning of pesticide application technology

    Directory of Open Access Journals (Sweden)

    Bruno Maia

    2011-12-01

    Full Text Available Distance learning presents great potential for mitigating field problems in pesticide application technology. Thus, due to the lack of teaching material about pesticide spraying technology in the Portuguese language and the increasing availability of distance learning, this study developed and evaluated a computer program for distance learning about the theory of pesticide spraying technology using the tools of information technology. The modules comprising the course, named Pulverizar, were: (1) Basic concepts, (2) Factors that affect application, (3) Equipment, (4) Spraying nozzles, (5) Sprayer calibration, (6) Aerial application, (7) Chemigation, (8) Physical-chemical properties, (9) Formulations, (10) Adjuvants, (11) Water quality, and (12) Adequate use of pesticides. The program was made available to the public on July 1st, 2008, hosted at the web site www.pulverizar.iciag.ufu.br, and proved simple, robust, and practical as a complement to traditional teaching for the education of professionals in Agricultural Sciences. Mastering pesticide spraying technology by people involved in agricultural production can be facilitated by the program Pulverizar, which was well accepted in its initial evaluation.

  13. ASSESSING INDIVIDUAL PERFORMANCE ON INFORMATION TECHNOLOGY ADOPTION: A NEW MODEL

    OpenAIRE

    Diah Hari Suryaningrum

    2012-01-01

    This paper aims to propose a new model for assessing individual performance in information technology adoption. The new model was derived from two different theories: the decomposed theory of planned behavior and task-technology fit theory. Although many researchers have tried to expand these theories, some of their efforts might lack theoretical grounding. To overcome this problem and enhance the coherence of the integration, I used a theory from social scien...

  14. Drivers of international performance of Brazilian technology-based firms

    OpenAIRE

    Serpa Fagundes de Oliveira, Maria Carolina; Scherer, Flavia Luciane; Schneider Hahn, Ivanete; de Moura Carpes, Aletéia; Brachak dos Santos, Maríndia; Nunes Piveta, Maíra

    2018-01-01

    For Technology-Based Firms, international expansion represents an opportunity for growth and value creation. The present study was designed to analyze the role of the internationalization drivers of technology-based companies (TBCs) in international performance. Therefore, descriptive research was carried out with a quantitative approach, performed through a survey. Data were collected from 53 Brazilian TBCs located in innovation habitats. These data were analyzed by multivariate statistical t...

  15. High performance stream computing for particle beam transport simulations

    International Nuclear Information System (INIS)

    Appleby, R; Bailey, D; Higham, J; Salt, M

    2008-01-01

    Understanding modern particle accelerators requires simulating charged particle transport through the machine elements. These simulations can be very time consuming due to the large number of particles and the need to consider many turns of a circular machine. Stream computing offers an attractive way to dramatically improve the performance of such simulations by calculating the simultaneous transport of many particles using dedicated hardware. Modern Graphics Processing Units (GPUs) are powerful and affordable stream computing devices. The results of simulations of particle transport through the booster-to-storage-ring transfer line of the DIAMOND synchrotron light source using an NVidia GeForce 7900 GPU are compared to the standard transport code MAD. It is found that particle transport calculations are suitable for stream processing and large performance increases are possible. The accuracy and potential speed gains are compared and the prospects for future work in the area are discussed
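    For flavor, a sketch of the data-parallel pattern involved, using textbook thin-lens optics and assumed parameters rather than the paper's DIAMOND transfer line (NumPy on the CPU stands in here for the GPU):

```python
import numpy as np

# Illustrative sketch of data-parallel particle transport: push many
# particles through linear transfer maps at once. A stream processor (GPU)
# applies the same pattern in hardware. The drift/quadrupole matrices are
# textbook thin-lens optics, not the paper's DIAMOND transfer line.

def drift(length):
    return np.array([[1.0, length], [0.0, 1.0]])

def thin_quad(focal_length):
    return np.array([[1.0, 0.0], [-1.0 / focal_length, 1.0]])

rng = np.random.default_rng(0)
particles = rng.normal(scale=[1e-3, 1e-4], size=(1_000_000, 2))  # (x, x')

lattice = [drift(2.0), thin_quad(5.0), drift(2.0), thin_quad(-5.0)]
one_pass = np.linalg.multi_dot(lattice[::-1])   # compose the maps once

particles = particles @ one_pass.T              # transport all particles
print("rms x after transport:", particles[:, 0].std())
```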

  16. Unravelling the structure of matter on high-performance computers

    International Nuclear Information System (INIS)

    Kieu, T.D.; McKellar, B.H.J.

    1992-11-01

    The various phenomena and the different forms of matter in nature are believed to be the manifestation of only a handful of fundamental building blocks, the elementary particles, which interact through the four fundamental forces. In the study of the structure of matter at this level one has to consider forces which are not sufficiently weak to be treated as small perturbations to the system, an example of which is the strong force that binds the nucleons together. High-performance computers, both vector and parallel machines, have facilitated the necessary non-perturbative treatments. The principles and the techniques of computer simulations applied to Quantum Chromodynamics are explained; examples include the strong interactions and the calculation of the mass of nucleons and their decay rates. Some commercial and special-purpose high-performance machines for such calculations are also mentioned. 3 refs., 2 tabs

  17. A performance evaluation of the IBM 370/XT personal computer

    Science.gov (United States)

    Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros

    1984-01-01

    An evaluation of the IBM 370/XT personal computer is given. This evaluation focuses primarily on the use of the 370/XT for scientific and technical applications and applications development. A measurement of the capabilities of the 370/XT was performed by means of test programs which are presented. Also included is a review of facilities provided by the operating system (VM/PC), along with comments on the IBM 370/XT hardware configuration.

  18. Optimal Selection Method of Process Patents for Technology Transfer Using Fuzzy Linguistic Computing

    Directory of Open Access Journals (Sweden)

    Gangfeng Wang

    2014-01-01

    Full Text Available Under the open innovation paradigm, technology transfer of process patents is one of the most important mechanisms for manufacturing companies to implement process innovation and enhance the competitive edge. To achieve promising technology transfers, we need to evaluate the feasibility of process patents and optimally select the most appropriate patent according to the actual manufacturing situation. Hence, this paper proposes an optimal selection method of process patents using multiple criteria decision-making and 2-tuple fuzzy linguistic computing to avoid information loss during the processes of evaluation integration. An evaluation index system for technology transfer feasibility of process patents is designed initially. Then, fuzzy linguistic computing approach is applied to aggregate the evaluations of criteria weights for each criterion and corresponding subcriteria. Furthermore, performance ratings for subcriteria and fuzzy aggregated ratings of criteria are calculated. Thus, we obtain the overall technology transfer feasibility of patent alternatives. Finally, a case study of aeroengine turbine manufacturing is presented to demonstrate the applicability of the proposed method.
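    The 2-tuple linguistic representation the method relies on can be sketched briefly; this is the generic textbook formulation, and the paper's index system and criteria weights are not reproduced:

```python
# Sketch of 2-tuple fuzzy linguistic computing (generic formulation, not
# the paper's full evaluation model). A value beta in [0, g] is represented
# as (s_i, alpha): the nearest label index i and the symbolic translation
# alpha in [-0.5, 0.5), so aggregation loses no information.

LABELS = ["very poor", "poor", "medium", "good", "very good"]  # g = 4

def to_two_tuple(beta):
    i = int(round(beta))
    return LABELS[i], beta - i

def weighted_two_tuple(ratings, weights):
    """Aggregate label indices by weighted mean, then convert back."""
    beta = sum(r * w for r, w in zip(ratings, weights)) / sum(weights)
    return to_two_tuple(beta)

# Three experts rate a patent's transfer feasibility on the label scale.
ratings = [3, 4, 2]            # good, very good, medium
weights = [0.5, 0.3, 0.2]      # hypothetical expert weights
label, alpha = weighted_two_tuple(ratings, weights)
print(f"aggregate: ({label}, {alpha:+.2f})")   # -> (good, +0.10)
```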

  19. Load Disaggregation Technologies: Real World and Laboratory Performance

    Energy Technology Data Exchange (ETDEWEB)

    Mayhorn, Ebony T.; Sullivan, Greg P.; Petersen, Joseph M.; Butner, Ryan S.; Johnson, Erica M.

    2016-09-28

    Low-cost interval metering and communication technology improvements over the past ten years have enabled the maturity of load disaggregation (or non-intrusive load monitoring) technologies to better estimate and report energy consumption of individual end-use loads. With the appropriate performance characteristics, these technologies have the potential to enable many utility and customer-facing applications such as billing transparency, itemized demand and energy consumption, appliance diagnostics, commissioning, energy efficiency savings verification, load shape research, and demand response measurement. However, there has been much skepticism concerning the ability of load disaggregation products to accurately identify and estimate energy consumption of end uses, which has hindered widespread market adoption. A contributing factor is that common test methods and metrics are not available to evaluate performance without having to perform large scale field demonstrations and pilots, which can be costly when developing such products. Without common and cost-effective methods of evaluation, more developed disaggregation technologies will continue to be slow to market and potential users will remain uncertain about their capabilities. This paper reviews recent field studies and laboratory tests of disaggregation technologies. Several factors are identified that are important to consider in test protocols, so that the results reflect real world performance. Potential metrics are examined to highlight their effectiveness in quantifying disaggregation performance. This analysis is then used to suggest performance metrics that are meaningful and of value to potential users and that will enable researchers/developers to identify beneficial ways to improve their technologies.
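    As an illustration of the kind of metric under discussion (one common choice, with hypothetical readings; the paper surveys rather than endorses specific metrics), total energy assignment accuracy can be computed per appliance:

```python
# Illustrative sketch: one commonly used disaggregation metric, the total
# energy correctly assigned, computed from per-appliance estimates. The
# readings below are hypothetical.

def energy_assignment_accuracy(estimated, actual):
    """1 - (sum of absolute per-appliance errors) / (2 * total energy)."""
    total = sum(actual.values())
    err = sum(abs(estimated.get(k, 0.0) - v) for k, v in actual.items())
    return 1.0 - err / (2.0 * total)

actual = {"fridge": 1.2, "hvac": 6.5, "water_heater": 3.1}     # kWh
estimated = {"fridge": 1.0, "hvac": 7.2, "water_heater": 2.6}  # kWh
print(f"assignment accuracy: {energy_assignment_accuracy(estimated, actual):.1%}")
```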

  20. Advanced Technologies, Embedded and Multimedia for Human-Centric Computing

    CERN Document Server

    Chao, Han-Chieh; Deng, Der-Jiunn; Park, James; HumanCom and EMC 2013

    2014-01-01

    The theme of HumanCom is focused on the various aspects of human-centric computing for advances in computer science and its applications, and it provides an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of human-centric computing. The theme of EMC (Advances in Embedded and Multimedia Computing) is focused on the various aspects of embedded systems, smart grids, cloud and multimedia computing, and it provides an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of embedded and multimedia computing. Therefore this book includes the various theories and practical applications in human-centric computing and embedded and multimedia computing.

  1. Proceedings of seventh symposium on sharing of computer programs and technology in nuclear medicine, computer assisted data processing

    International Nuclear Information System (INIS)

    Howard, B.Y.; McClain, W.J.; Landay, M.

    1977-01-01

    The Council on Computers (CC) of the Society of Nuclear Medicine (SNM) annually publishes the Proceedings of its Symposium on the Sharing of Computer Programs and Technology in Nuclear Medicine. This is the seventh such volume and has been organized by topic, with the exception of the invited papers and the discussion following them. An index arranged by author and by subject is included

  2. Proceedings of seventh symposium on sharing of computer programs and technology in nuclear medicine, computer assisted data processing

    Energy Technology Data Exchange (ETDEWEB)

    Howard, B.Y.; McClain, W.J.; Landay, M. (comps.)

    1977-01-01

    The Council on Computers (CC) of the Society of Nuclear Medicine (SNM) annually publishes the Proceedings of its Symposium on the Sharing of Computer Programs and Technology in Nuclear Medicine. This is the seventh such volume and has been organized by topic, with the exception of the invited papers and the discussion following them. An index arranged by author and by subject is included.

  3. Preparing for Further Introduction of Computing Technology in Vancouver Community College Instruction. Report of the Instructional Computing Committee.

    Science.gov (United States)

    Vancouver Community Coll., British Columbia.

    After examining the impact of changing technology on postsecondary instruction and on the tools needed for instruction, this report analyzes the status and offers recommendations concerning the future of instructional computing at Vancouver Community College (VCC) in British Columbia. Section I focuses on the use of computers in community college…

  4. Neuroanatomical correlates of brain-computer interface performance.

    Science.gov (United States)

    Kasahara, Kazumi; DaSalla, Charles Sayo; Honda, Manabu; Hanakawa, Takashi

    2015-04-15

    Brain-computer interfaces (BCIs) offer a potential means to replace or restore lost motor function. However, BCI performance varies considerably between users, the reasons for which are poorly understood. Here we investigated the relationship between sensorimotor rhythm (SMR)-based BCI performance and brain structure. Participants were instructed to control a computer cursor using right- and left-hand motor imagery, which primarily modulated their left- and right-hemispheric SMR powers, respectively. Although most participants were able to control the BCI with success rates significantly above chance level even at the first encounter, they also showed substantial inter-individual variability in BCI success rate. Participants also underwent T1-weighted three-dimensional structural magnetic resonance imaging (MRI). The MRI data were subjected to voxel-based morphometry using BCI success rate as an independent variable. We found that BCI performance correlated with gray matter volume of the supplementary motor area, supplementary somatosensory area, and dorsal premotor cortex. We suggest that SMR-based BCI performance is associated with development of non-primary somatosensory and motor areas. Advancing our understanding of BCI performance in relation to its neuroanatomical correlates may lead to better customization of BCIs based on individual brain structure. Copyright © 2015 Elsevier Inc. All rights reserved.

  5. Scalability of DL_POLY on High Performance Computing Platform

    Directory of Open Access Journals (Sweden)

    Mabule Samuel Mabakane

    2017-12-01

    Full Text Available This paper presents a case study on the scalability of several versions of the molecular dynamics code (DL_POLY) performed on South Africa's Centre for High Performance Computing e1350 IBM Linux cluster, Sun system and Lengau supercomputers. Within this study different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY using weak and strong scalability. It was found that the speed-up results for the small systems were better than for the large systems on both the Ethernet and Infiniband networks. However, simulations of large systems in DL_POLY performed well using the Infiniband network on the Lengau cluster as compared to the e1350 and Sun supercomputers.

  6. Scintillator performance considerations for dedicated breast computed tomography

    Science.gov (United States)

    Vedantham, Srinivasan; Shi, Linxi; Karellas, Andrew

    2017-09-01

    Dedicated breast computed tomography (BCT) is an emerging clinical modality that can eliminate tissue superposition and has the potential for improved sensitivity and specificity for breast cancer detection and diagnosis. It is performed without physical compression of the breast. Most of the dedicated BCT systems use large-area detectors operating in cone-beam geometry and are referred to as cone-beam breast CT (CBBCT) systems. The large-area detectors in CBBCT systems are energy-integrating, indirect-type detectors employing a scintillator that converts x-ray photons to light, followed by detection of optical photons. A key consideration that determines the image quality achieved by such CBBCT systems is the choice of scintillator and its performance characteristics. In this work, a framework for analyzing the impact of the scintillator on CBBCT performance and its use for task-specific optimization of CBBCT imaging performance is described.

  7. Advances in Computing and Information Technology : Proceedings of the Second International

    CERN Document Server

    Nagamalai, Dhinaharan; Chaki, Nabendu

    2012-01-01

    The international conference on Advances in Computing and Information Technology (ACITY 2012) provides an excellent international forum for both academics and professionals to share knowledge and results in the theory, methodology and applications of Computer Science and Information Technology. The Second International Conference on Advances in Computing and Information Technology (ACITY 2012), held in Chennai, India, during July 13-15, 2012, covered a number of topics in all major fields of Computer Science and Information Technology including: networking and communications, network security and applications, web and internet computing, ubiquitous computing, algorithms, bioinformatics, digital image processing and pattern recognition, artificial intelligence, soft computing and applications. Following a rigorous review process, a number of high-quality papers, presenting not only innovative ideas but also a well-founded evaluation and a strong argumentation of the same, were selected and collected in the present proceedings, ...

  8. A study of computer graphics technology in application of communication resource management

    Science.gov (United States)

    Li, Jing; Zhou, Liang; Yang, Fei

    2017-08-01

    With the development of computer technology, computer graphics technology has come into wide use. In particular, the success of object-oriented technology and multimedia technology has promoted the development of graphics technology in computer software systems. Computer graphics theory and application technology have therefore become an important topic in the computer field, and computer graphics technology is being applied ever more extensively in various fields. In recent years, with the development of the social economy and especially the rapid development of information technology, traditional ways of managing communication resources cannot effectively meet the needs of resource management. Current communication resource management still uses the original management tools and methods for equipment management and maintenance, which has brought many problems: it is very difficult for non-professionals to understand the equipment and the situation in communication resource management, resource utilization is relatively low, and managers cannot quickly and accurately understand resource conditions. To address the above problems, this paper proposes introducing computer graphics technology into communication resource management. The introduction of computer graphics not only makes communication resource management more vivid, but also reduces the cost of resource management and improves work efficiency.

  9. Informatics everywhere : information and computation in society, science, and technology

    NARCIS (Netherlands)

    Verhoeff, T.

    2013-01-01

    Informatics is about information and its processing, also known as computation. Nowadays, children grow up taking smartphones and the internet for granted. Information and computation rule society. Science uses computerized equipment to collect, analyze, and visualize massive amounts of data.

  10. AGIS: Integration of new technologies used in ATLAS Distributed Computing

    OpenAIRE

    Anisenkov, Alexey; Di Girolamo, Alessandro; Alandes Pradillo, Maria

    2017-01-01

    The variety of the ATLAS Distributed Computing infrastructure requires a central information system to define the topology of computing resources and to store different parameters and configuration data which are needed by various ATLAS software components. The ATLAS Grid Information System (AGIS) is the system designed to integrate configuration and status information about resources, services and topology of the computing infrastructure used by ATLAS Distributed Computing applications and s...

  11. Seven Years after the Manifesto: Literature Review and Research Directions for Technologies in Animal Computer Interaction

    Directory of Open Access Journals (Sweden)

    Ilyena Hirskyj-Douglas

    2018-06-01

    Full Text Available As technologies diversify and become embedded in everyday lives, the technologies we expose to animals, and the new technologies being developed for animals within the field of Animal Computer Interaction (ACI), are increasing. As we approach seven years since the ACI manifesto, which grounded the field within Human Computer Interaction and Computer Science, this thematic literature review looks at the technologies developed for (non-human) animals. Technologies that are analysed include tangible and physical, haptic and wearable, olfactory, screen technology and tracking systems. The conversation explores what exactly ACI is whilst questioning what it means to be animal by considering the impact and loop between machine and animal interactivity. The findings of this review are expected to form the first grounding foundation of ACI technologies, informing future research in animal computing as well as suggesting future areas for exploration.

  12. Cloud Computing: A Free Technology Option to Promote Collaborative Learning

    Science.gov (United States)

    Siegle, Del

    2010-01-01

    In a time of budget cuts and limited funding, purchasing and installing the latest software on classroom computers can be prohibitive for schools. Many educators are unaware that a variety of free software options exist, and some of them do not actually require installing software on the user's computer. One such option is cloud computing. This…

  13. submitter Performance studies of CMS workflows using Big Data technologies

    CERN Document Server

    Ambroz, Luca; Grandi, Claudio

    At the Large Hadron Collider (LHC), more than 30 petabytes of data are produced from particle collisions every year of data taking. The data processing requires large volumes of simulated events through Monte Carlo techniques. Furthermore, physics analysis implies daily access to derived data formats by hundreds of users. The Worldwide LHC Computing Grid (WLCG) - an international collaboration involving personnel and computing centers worldwide - is successfully coping with these challenges, enabling the LHC physics program. With the continuation of LHC data taking and the approval of ambitious projects such as the High-Luminosity LHC, such challenges will reach the edge of current computing capacity and performance. One of the keys to success in the next decades - also under severe financial resource constraints - is to optimize the efficiency in exploiting the computing resources. This thesis focuses on performance studies of CMS workflows, namely centrally-scheduled production activities and unpredictable d...

  14. Many-core technologies: The move to energy-efficient, high-throughput x86 computing (TFLOPS on a chip)

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    With Moore's Law alive and well, more and more parallelism is introduced into all computing platforms at all levels of integration and programming to achieve higher performance and energy efficiency. Especially in the area of High-Performance Computing (HPC) users can entertain a combination of different hardware and software parallel architectures and programming environments. Those technologies range from vectorization and SIMD computation over shared memory multi-threading (e.g. OpenMP) to distributed memory message passing (e.g. MPI) on cluster systems. We will discuss HPC industry trends and Intel's approach to it from processor/system architectures and research activities to hardware and software tools technologies. This includes the recently announced new Intel(r) Many Integrated Core (MIC) architecture for highly-parallel workloads and general purpose, energy efficient TFLOPS performance, some of its architectural features and its programming environment. At the end we will have a br...

  15. Computational Fluid Dynamics (CFD) Computations With Zonal Navier-Stokes Flow Solver (ZNSFLOW) Common High Performance Computing Scalable Software Initiative (CHSSI) Software

    National Research Council Canada - National Science Library

    Edge, Harris

    1999-01-01

    ...), computational fluid dynamics (CFD) 6 project. Under the project, a proven zonal Navier-Stokes solver was rewritten for scalable parallel performance on both shared memory and distributed memory high performance computers...

  16. High performance computing in science and engineering '09: transactions of the High Performance Computing Center, Stuttgart (HLRS) 2009

    National Research Council Canada - National Science Library

    Nagel, Wolfgang E; Kröner, Dietmar; Resch, Michael

    2010-01-01

    ...), NIC/JSC (Jülich), and LRZ (Munich). As part of that strategic initiative, NIC/JSC already installed the first phase of the GCS HPC Tier-0 resources in May 2009: an IBM Blue Gene/P with roughly 300,000 cores, this time in Jülich. With that, the GCS provides the most powerful high-performance computing infrastructure in Europe alread...

  17. Threshold-based queuing system for performance analysis of cloud computing system with dynamic scaling

    International Nuclear Information System (INIS)

    Shorgin, Sergey Ya.; Pechinkin, Alexander V.; Samouylov, Konstantin E.; Gaidamaka, Yuliya V.; Gudkova, Irina A.; Sopin, Eduard S.

    2015-01-01

    Cloud computing is a promising technology for managing and improving the utilization of computing center resources and for delivering various computing and IT services. For energy saving, there is no need to operate many servers unnecessarily under light loads, so they are switched off. On the other hand, servers should be switched on under heavy loads to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. Significant server setup costs and activation times must also be taken into account. For better energy efficiency, a cloud computing system should not react to instantaneous increases or decreases in load. That is the main motivation for using queuing systems with hysteresis to model cloud computing systems. In the paper, we model a cloud computing system as a multi-server, threshold-based, infinite-capacity queuing system with hysteresis and non-instantaneous server activation. For the proposed model, we develop a method for computing the steady-state probabilities that allows a number of performance measures to be estimated.

  18. Threshold-based queuing system for performance analysis of cloud computing system with dynamic scaling

    Energy Technology Data Exchange (ETDEWEB)

    Shorgin, Sergey Ya.; Pechinkin, Alexander V. [Institute of Informatics Problems, Russian Academy of Sciences (Russian Federation); Samouylov, Konstantin E.; Gaidamaka, Yuliya V.; Gudkova, Irina A.; Sopin, Eduard S. [Telecommunication Systems Department, Peoples’ Friendship University of Russia (Russian Federation)

    2015-03-10

    Cloud computing is a promising technology for managing and improving the utilization of computing center resources and for delivering various computing and IT services. For energy saving, there is no need to operate many servers unnecessarily under light loads, so they are switched off. On the other hand, servers should be switched on under heavy loads to prevent very long delays. Thus, waiting times and system operating cost can be maintained at an acceptable level by dynamically adding or removing servers. Significant server setup costs and activation times must also be taken into account. For better energy efficiency, a cloud computing system should not react to instantaneous increases or decreases in load. That is the main motivation for using queuing systems with hysteresis to model cloud computing systems. In the paper, we model a cloud computing system as a multi-server, threshold-based, infinite-capacity queuing system with hysteresis and non-instantaneous server activation. For the proposed model, we develop a method for computing the steady-state probabilities that allows a number of performance measures to be estimated.
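
    To make the hysteresis idea in the two records above concrete, here is a small C sketch of a threshold-based scaling rule under assumed parameters (the thresholds, server limits, and sample loads are illustrative, not taken from the paper's model): a server is activated only when the queue grows past an upper threshold and deactivated only when it shrinks below a lower one, so short-lived fluctuations between the two thresholds trigger no costly setup.

    ```c
    /* Illustrative hysteresis-based scaling rule (a sketch of the idea
     * in the abstract above; all constants are assumed). */
    #include <stdio.h>

    #define MIN_SERVERS    1
    #define MAX_SERVERS    8
    #define UP_THRESHOLD   20  /* queue length that switches a server on  */
    #define DOWN_THRESHOLD 5   /* queue length that switches a server off */

    /* Returns the new number of active servers for the observed queue
     * length. Because UP_THRESHOLD > DOWN_THRESHOLD, loads that stay
     * between the two thresholds change nothing: that gap is the
     * hysteresis, which keeps brief spikes from triggering setup costs. */
    static int scale(int active, int queue_len) {
        if (queue_len > UP_THRESHOLD && active < MAX_SERVERS)
            return active + 1;  /* heavy load: activate one more server */
        if (queue_len < DOWN_THRESHOLD && active > MIN_SERVERS)
            return active - 1;  /* light load: switch one server off */
        return active;          /* in between: do nothing */
    }

    int main(void) {
        int load[] = {3, 12, 25, 27, 18, 9, 4, 2};  /* sample queue trace */
        int active = MIN_SERVERS;
        for (int t = 0; t < 8; t++) {
            active = scale(active, load[t]);
            printf("t=%d queue=%2d servers=%d\n", t, load[t], active);
        }
        return 0;
    }
    ```

    The paper's actual model additionally accounts for non-instantaneous activation (a switched-on server only becomes available after a setup delay), which this sketch omits for brevity.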

  19. Improving Operational Risk Management Using Business Performance Management Technologies

    OpenAIRE

    Bram Pieket Weeserik; Marco Spruit

    2018-01-01

    Operational Risk Management (ORM) comprises the continuous management of risks resulting from human actions, internal processes, systems, and external events. With increasing requirements, growing complexity, and a growing volume of risks, information systems provide benefits by integrating risk management activities and optimizing performance. Business Performance Management (BPM) technologies are believed to provide a solution for effective Operational Risk Management by offering several combined ...

  20. U.S. report on fuel performance and technology

    Energy Technology Data Exchange (ETDEWEB)

    Cook, T [Department of Energy, Washington, DC (United States). Office of Engineering and Technology Development

    1997-12-01

    The report reviews the following aspects of fuel performance and technology: increased demands on fuel performance; improved fuel failure rates; operating fuel cycles; capacity factors for US nuclear electric generating plants; and the potential reduction of spent nuclear fuel (SNF) due to improved fuel burnup.