WorldWideScience

Sample records for international computer performance

  1. 3rd International Conference on High Performance Scientific Computing

    CERN Document Server

    Kostina, Ekaterina; Phu, Hoang; Rannacher, Rolf

    2008-01-01

    This proceedings volume contains a selection of papers presented at the Third International Conference on High Performance Scientific Computing, held at the Hanoi Institute of Mathematics, Vietnamese Academy of Science and Technology (VAST), March 6-10, 2006. The conference was organized by the Hanoi Institute of Mathematics, the Interdisciplinary Center for Scientific Computing (IWR), Heidelberg, and its International PhD Program "Complex Processes: Modeling, Simulation and Optimization", and Ho Chi Minh City University of Technology. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and applications in practice. Subjects covered are mathematical modelling, numerical simulation, methods for optimization and control, parallel computing, software development, applications of scientific computing in physics, chemistry, biology and mechanics, environmental and hydrology problems, transport, logistics and site loca...

  2. 5th International Conference on High Performance Scientific Computing

    CERN Document Server

    Hoang, Xuan; Rannacher, Rolf; Schlöder, Johannes

    2014-01-01

    This proceedings volume gathers a selection of papers presented at the Fifth International Conference on High Performance Scientific Computing, which took place in Hanoi on March 5-9, 2012. The conference was organized by the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) of Heidelberg University, Ho Chi Minh City University of Technology, and the Vietnam Institute for Advanced Study in Mathematics. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and practical applications. Subjects covered include mathematical modeling; numerical simulation; methods for optimization and control; parallel computing; software development; and applications of scientific computing in physics, mechanics and biomechanics, material science, hydrology, chemistry, biology, biotechnology, medicine, sports, psychology, transport, logistics, com...

  3. 6th International Conference on High Performance Scientific Computing

    CERN Document Server

    Phu, Hoang; Rannacher, Rolf; Schlöder, Johannes

    2017-01-01

    This proceedings volume highlights a selection of papers presented at the Sixth International Conference on High Performance Scientific Computing, which took place in Hanoi, Vietnam, on March 16-20, 2015. The conference was jointly organized by the Heidelberg Institute of Theoretical Studies (HITS), the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) at Heidelberg University, and the Vietnam Institute for Advanced Study in Mathematics, Ministry of Education. The contributions cover a broad, interdisciplinary spectrum of scientific computing and showcase recent advances in theory, methods, and practical applications. Subjects covered include numerical simulation, methods for optimization and control, parallel computing, and software development, as well as the applications of scientific computing in physics, mechanics, biomechanics and robotics, material science, hydrology, biotechnology, medicine, transport, scheduling, and in...

  4. Tablet computer enhanced training improves internal medicine exam performance.

    Science.gov (United States)

    Baumgart, Daniel C; Wende, Ilja; Grittner, Ulrike

    2017-01-01

    Traditional teaching concepts in medical education do not take full advantage of current information technology. We aimed to objectively determine the impact of Tablet PC enhanced training on learning experience and MKSAP® (medical knowledge self-assessment program) exam performance. In this single-center, prospective, controlled study, final-year medical students and medical residents doing an inpatient service rotation were alternately assigned to either the active test group (Tablet PC with a custom multimedia education software package) or the traditional education (control) group. All completed an extensive questionnaire to collect socio-demographic data and to evaluate educational status, computer affinity and skills, problem solving, eLearning knowledge, and self-rated medical knowledge. Both groups were MKSAP® tested at the beginning and the end of their rotation. The MKSAP® score at the final exam was the primary endpoint. Data of 55 participants (tablet n = 24, controls n = 31; 36.4% male; median age 28 years; 65.5% students) were evaluable. The mean MKSAP® score improved in the Tablet PC group (score Δ +8, SD: 11), but not in the control group (score Δ -7, SD: 11). After adjustment for baseline score and confounders, the Tablet PC group showed on average 11% better MKSAP® test results compared to the control group (p... learning to their respective training programs.
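
    The adjustment described above (comparing final scores between groups while controlling for baseline score) is essentially an ANCOVA-style regression. Below is a minimal, hypothetical sketch of that kind of analysis; the data, effect size, and covariates are invented for illustration and do not reproduce the study's actual model.

```python
# Baseline-adjusted group comparison (ANCOVA-style), a minimal sketch of the
# kind of adjustment the study describes. All data below are made up for
# illustration; the study's actual model and covariates are not reproduced.
import numpy as np

rng = np.random.default_rng(0)
n = 55
baseline = rng.normal(60, 11, n)               # MKSAP score at rotation start
group = (np.arange(n) % 2 == 0).astype(float)  # 1 = tablet, 0 = control (alternating)
final = baseline + 8 * group + rng.normal(0, 5, n)

# Design matrix: intercept, baseline score, group indicator
X = np.column_stack([np.ones(n), baseline, group])
beta, *_ = np.linalg.lstsq(X, final, rcond=None)
print(f"adjusted group effect: {beta[2]:.1f} score points")
```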

  5. 8th International Workshop on Parallel Tools for High Performance Computing

    CERN Document Server

    Gracia, José; Knüpfer, Andreas; Resch, Michael; Nagel, Wolfgang

    2015-01-01

    Numerical simulation and modelling using High Performance Computing has evolved into an established technique in academic and industrial research. At the same time, the High Performance Computing infrastructure is becoming ever more complex. For instance, most of the current top systems around the world use thousands of nodes in which classical CPUs are combined with accelerator cards in order to enhance their compute power and energy efficiency. This complexity can only be mastered with adequate development and optimization tools. Key topics addressed by these tools include parallelization on heterogeneous systems, performance optimization for CPUs and accelerators, debugging of increasingly complex scientific applications, and optimization of energy usage in the spirit of green IT. This book represents the proceedings of the 8th International Parallel Tools Workshop, held October 1-2, 2014 in Stuttgart, Germany, a forum to discuss the latest advances in parallel tools.

  6. International Conference on Modern Mathematical Methods and High Performance Computing in Science and Technology

    CERN Document Server

    Srivastava, HM; Venturino, Ezio; Resch, Michael; Gupta, Vijay

    2016-01-01

    The book discusses important results in modern mathematical models and high performance computing, such as applied operations research, simulation of operations, statistical modeling and applications, invisibility regions and regular meta-materials, unmanned vehicles, modern radar techniques/SAR imaging, satellite remote sensing, coding, and robotic systems. Furthermore, it is valuable as a reference work and as a basis for further study and research. All contributing authors are respected academicians, scientists and researchers from around the globe. All the papers were presented at the international conference on Modern Mathematical Methods and High Performance Computing in Science & Technology (M3HPCST 2015), held at Raj Kumar Goel Institute of Technology, Ghaziabad, India, from 27–29 December 2015, and peer-reviewed by international experts. The conference provided an exceptional platform for leading researchers, academicians, developers, engineers and technocrats from a broad range of disciplines ...

  7. 10th International Workshop on Parallel Tools for High Performance Computing

    CERN Document Server

    Gracia, José; Hilbrich, Tobias; Knüpfer, Andreas; Resch, Michael; Nagel, Wolfgang

    2017-01-01

    This book presents the proceedings of the 10th International Parallel Tools Workshop, held October 4-5, 2016 in Stuttgart, Germany, a forum to discuss the latest advances in parallel tools. High-performance computing plays an increasingly important role for numerical simulation and modelling in academic and industrial research. At the same time, using large-scale parallel systems efficiently is becoming more difficult. A number of tools addressing parallel program development and analysis have emerged from the high-performance computing community over the last decade, and what may have started as a collection of small helper scripts has now matured into production-grade frameworks. Powerful user interfaces and an extensive body of documentation allow easy usage by non-specialists.

  8. 9th International Workshop on Parallel Tools for High Performance Computing

    CERN Document Server

    Hilbrich, Tobias; Niethammer, Christoph; Gracia, José; Nagel, Wolfgang; Resch, Michael

    2016-01-01

    High Performance Computing (HPC) remains a driver that offers huge potential and benefits for science and society. However, a profound understanding of the computational matters and specialized software is needed to arrive at effective and efficient simulations. Dedicated software tools are important parts of the HPC software landscape, and support application developers. Even though a tool is by definition not a part of an application, but rather a supplemental piece of software, it can make a fundamental difference during the development of an application. Such tools aid application developers in the context of debugging, performance analysis, and code optimization, and therefore make a major contribution to the development of robust and efficient parallel software. This book introduces a selection of the tools presented and discussed at the 9th International Parallel Tools Workshop held in Dresden, Germany, September 2-3, 2015, which offered an established forum for discussing the latest advances in paral...

  9. 7th International Workshop on Parallel Tools for High Performance Computing

    CERN Document Server

    Gracia, José; Nagel, Wolfgang; Resch, Michael

    2014-01-01

    Current advances in High Performance Computing (HPC) increasingly impact efficient software development workflows. Programmers for HPC applications need to consider trends such as increased core counts, multiple levels of parallelism, reduced memory per core, and I/O system challenges in order to derive well-performing and highly scalable codes. At the same time, the increasing complexity adds further sources of program defects. While novel programming paradigms and advanced system libraries provide solutions for some of these challenges, appropriate supporting tools are indispensable. Such tools aid application developers in debugging, performance analysis, or code optimization and therefore make a major contribution to the development of robust and efficient parallel software. This book introduces a selection of the tools presented and discussed at the 7th International Parallel Tools Workshop, held in Dresden, Germany, September 3-4, 2013.

  10. Computational Intelligence : International Joint Conference

    CERN Document Server

    Rosa, Agostinho; Cadenas, José; Dourado, António; Madani, Kurosh; Filipe, Joaquim

    2016-01-01

    The present book includes a set of selected extended papers from the sixth International Joint Conference on Computational Intelligence (IJCCI 2014), held in Rome, Italy, from 22 to 24 October 2014. The conference comprised three co-located conferences: the International Conference on Evolutionary Computation Theory and Applications (ECTA), the International Conference on Fuzzy Computation Theory and Applications (FCTA), and the International Conference on Neural Computation Theory and Applications (NCTA). Recent progress in scientific developments and applications in these three areas is reported in this book. IJCCI received 210 submissions, from 51 countries on all continents. After a double-blind paper review performed by the Program Committee, 15% were accepted as full papers and thus selected for oral presentation. Additional papers were accepted as short papers and posters. A further selection was made after the Conference, based also on the assessment of presentation quality and audience in...

  11. Computational Intelligence : International Joint Conference

    CERN Document Server

    Dourado, António; Rosa, Agostinho; Filipe, Joaquim; Kacprzyk, Janusz

    2016-01-01

    The present book includes a set of selected extended papers from the fifth International Joint Conference on Computational Intelligence (IJCCI 2013), held in Vilamoura, Algarve, Portugal, from 20 to 22 September 2013. The conference comprised three co-located conferences: the International Conference on Evolutionary Computation Theory and Applications (ECTA), the International Conference on Fuzzy Computation Theory and Applications (FCTA), and the International Conference on Neural Computation Theory and Applications (NCTA). Recent progress in scientific developments and applications in these three areas is reported in this book. IJCCI received 111 submissions, from 30 countries on all continents. After a double-blind paper review performed by the Program Committee, only 24 submissions were accepted as full papers and thus selected for oral presentation, leading to a full-paper acceptance ratio of 22%. Additional papers were accepted as short papers and posters. A further selection was made after ...

  12. COMPUTING: International symposium

    International Nuclear Information System (INIS)

    Anon.

    1984-01-01

    Recent Developments in Computing, Processor, and Software Research for High Energy Physics, a four-day international symposium, was held in Guanajuato, Mexico, from 8 to 11 May, with 112 attendees from nine countries. The symposium was the third in a series of meetings exploring activities in leading-edge computing technology in both processor and software research and their effects on high energy physics. Topics covered included fixed-target on- and off-line reconstruction processors; lattice gauge and general theoretical processors and computing; multiprocessor projects; electron-positron collider on- and off-line reconstruction processors; the state of the art in university computer science and industry; software research; accelerator processors; and proton-antiproton collider on- and off-line reconstruction processors.

  13. SCinet Architecture: Featured at the International Conference for High Performance Computing, Networking, Storage and Analysis 2016

    Energy Technology Data Exchange (ETDEWEB)

    Lyonnais, Marc; Smith, Matt; Mace, Kate P.

    2017-02-06

    SCinet is the purpose-built network that operates during the International Conference for High Performance Computing, Networking, Storage and Analysis (Supercomputing, or SC). Created each year for the conference, SCinet brings to life a high-capacity network that supports applications and experiments that are a hallmark of the SC conference. The network links the convention center to research and commercial networks around the world. This resource serves as a platform for exhibitors to demonstrate the advanced computing resources of their home institutions and elsewhere by supporting a wide variety of applications. Volunteers from academia, government and industry work together to design and deliver the SCinet infrastructure. Industry vendors and carriers donate millions of dollars in equipment and services needed to build and support the local and wide area networks. Planning begins more than a year in advance of each SC conference and culminates in a high-intensity installation in the days leading up to the conference. The SCinet architecture for SC16 illustrates a dramatic increase in participation from the vendor community, particularly those that focus on network equipment. Software-Defined Networking (SDN) and Data Center Networking (DCN) are present in nearly all aspects of the design.

  14. Human Computer Music Performance

    OpenAIRE

    Dannenberg, Roger B.

    2012-01-01

    Human Computer Music Performance (HCMP) is the study of music performance by live human performers and real-time computer-based performers. One goal of HCMP is to create a highly autonomous artificial performer that can fill the role of a human, especially in a popular music setting. This will require advances in automated music listening and understanding, new representations for music, techniques for music synchronization, real-time human-computer communication, music generation, sound synt...

  15. The Effect of Computer Assisted Language Learning (CALL) on Performance in the Test of English for International Communication (TOEIC) Listening Module

    Science.gov (United States)

    van Han, Nguyen; van Rensburg, Henriette

    2014-01-01

    Many companies and organizations have been using the Test of English for International Communication (TOEIC) for business and commercial communication purposes in Vietnam and around the world. The present study investigated the effect of Computer Assisted Language Learning (CALL) on performance in the Test of English for International Communication…

  16. Performance and Internal Control

    OpenAIRE

    Mifti, Sri; Lestariyo, Nugroho Budi; Kowanda, Anacostia

    2009-01-01

    The objective of this study is to measure the influence of internal auditing on performance. The research subjects were staff of the Inspectorate General, Department of Home Affairs. As the research instrument, a questionnaire was developed and distributed to respondents. The closed-type questionnaire offered five (5) choices to measure the two (2) research variables. Internal auditing is measured using six (6) dimensions, and performance is measured using three (3) dimensions. As the two variables are lat...

  17. International Computer Symposium

    CERN Document Server

    Jain, Lakhmi; Peng, Sheng-Lung; Pan, Jeng-Shyang; Yang, Ching-Nung; Lin, Chia-Chen

    2013-01-01

    The field of Intelligent Systems and Applications has expanded enormously during the last two decades. Theoretical and practical results in this area are growing rapidly due to many successful applications and new theories derived from many diverse problems. This book is dedicated to Intelligent Systems and Applications in many different aspects; in particular, it provides highlights of current research in the area. It consists of research papers on the following specific topics: graph theory and algorithms; interconnection networks and combinatorial algorithms; artificial intelligence and fuzzy systems; database, data mining, and information retrieval; information literacy, e-learning, and social media; computer networks and web services/technologies; wireless sensor networks; wireless network protocols; and wireless data processing. This book provides a reference to theoretical problems as well as practical solutio...

  18. High Performance Computing Multicast

    Science.gov (United States)

    2012-02-01

    Only fragments of this record's text were extracted: a citation to "A History of the Virtual Synchrony Replication Model," in Replication: Theory and Practice, Charron-Bost, B., Pedone, F., and Schiper, A. (Eds.), and part of an acronym glossary (HPC: High Performance Computing; IP/IPv4: Internet Protocol version 4; IPMC: Internet Protocol Multicast; LAN: Local Area Network; MCMD: Dr. Multicast; MPI).

  19. Performing stencil computations

    Energy Technology Data Exchange (ETDEWEB)

    Donofrio, David

    2018-01-16

    A method and apparatus for performing stencil computations efficiently are disclosed. In one embodiment, a processor receives an offset, and in response, retrieves a value from a memory via a single instruction, where the retrieving comprises: identifying, based on the offset, one of a plurality of registers of the processor; loading an address stored in the identified register; and retrieving from the memory the value at the address.
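
    The pattern the patent accelerates is the classic stencil access: each output element is a weighted combination of neighbors at fixed offsets. A minimal software-level sketch of a one-dimensional three-point stencil follows; it illustrates the computation only, not the patented register-indexed load mechanism.

```python
# A 1D three-point stencil, a minimal sketch of the access pattern the patent
# targets: each output element is computed from neighbors at fixed offsets.
# This illustrates the computation only, not the patented register mechanism.
import numpy as np

def stencil_1d(u, w=(0.25, 0.5, 0.25)):
    """Apply weights w to each interior point's left/center/right neighbors."""
    out = u.copy()
    # offsets -1, 0, +1 index the neighborhood of each interior point
    out[1:-1] = w[0] * u[:-2] + w[1] * u[1:-1] + w[2] * u[2:]
    return out

u = np.zeros(10)
u[5] = 1.0                      # a single spike diffuses outward
print(stencil_1d(u))
```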

  20. International Conference on Advanced Computing

    CERN Document Server

    Patnaik, Srikanta

    2014-01-01

    This book is composed of the Proceedings of the International Conference on Advanced Computing, Networking, and Informatics (ICACNI 2013), held at the Central Institute of Technology, Raipur, Chhattisgarh, India, during June 14-16, 2013. The book records current research in the domain of computing, networking, and informatics. It presents original research articles, case studies, and review articles in this field, with emphasis on implementation and practical application. Researchers, academicians, practitioners, and industry policy makers from around the globe have contributed to this book with their valuable research submissions.

  1. Monitoring the Microgravity Environment Quality On-board the International Space Station Using Soft Computing Techniques. Part 2; Preliminary System Performance Results

    Science.gov (United States)

    Jules, Kenol; Lin, Paul P.; Weiss, Daniel S.

    2002-01-01

    This paper presents the preliminary performance results of the artificial intelligence monitoring system in full operational mode, using near real-time acceleration data downlinked from the International Space Station. Preliminary microgravity environment characterization results for the International Space Station (Increment-2), obtained using the monitoring system, are presented. Also presented is a comparison between the system's predicted performance, based on ground test data for the US laboratory "Destiny" module, and its actual on-orbit performance, using measured acceleration data from the US laboratory module of the International Space Station. Finally, preliminary on-orbit disturbance magnitude levels are presented for the Experiment of Physics of Colloids in Space and compared with ground test data. The ground test data for the Experiment of Physics of Colloids in Space were acquired from the Microgravity Emission Laboratory, located at the NASA Glenn Research Center, Cleveland, Ohio. The monitoring system was developed by the NASA Glenn Principal Investigator Microgravity Services Project to help principal investigator teams identify the primary vibratory disturbance sources that are active, at any moment of time, on board the International Space Station and that might impact the microgravity environment their experiments are exposed to. From the Principal Investigator Microgravity Services web site, principal investigator teams can monitor, via a dynamic graphical display implemented in Java and updated in near real time, which events are on, such as crew activities, pumps, fans, centrifuges, compressors, crew exercise, structural modes, etc., and decide whether or not to run their experiments, whenever that is an option, based on the acceleration magnitude and frequency sensitivity associated with each experiment. This monitoring system detects primarily the vibratory disturbance sources. The system has built-in capability to detect both known
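
    The system described above identifies active disturbance sources from their acceleration signatures (magnitude and frequency). As a rough illustration of the idea only, the sketch below matches the dominant spectral peak of a synthetic acceleration record against a hypothetical source table; the actual system uses trained artificial-intelligence classifiers, not a lookup like this.

```python
# Minimal sketch of frequency-based disturbance identification, in the spirit
# of the monitoring system described above. The source table is hypothetical;
# the real system uses trained artificial-intelligence classifiers.
import numpy as np

fs = 250.0                                   # sample rate, Hz (assumed)
t = np.arange(0, 4, 1 / fs)
accel = 1e-3 * np.sin(2 * np.pi * 60 * t)    # synthetic 60 Hz disturbance

spectrum = np.abs(np.fft.rfft(accel))
freqs = np.fft.rfftfreq(len(accel), 1 / fs)
peak = freqs[np.argmax(spectrum)]

# Hypothetical signature table: dominant frequency -> candidate source
sources = {60.0: "pump", 120.0: "fan", 0.5: "crew exercise"}
match = min(sources, key=lambda f: abs(f - peak))
print(f"dominant {peak:.1f} Hz -> candidate source: {sources[match]}")
```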

  2. 1st International Conference on Computational Intelligence and Informatics

    CERN Document Server

    Prasad, V; Rani, B; Udgata, Siba; Raju, K

    2017-01-01

    The book covers a variety of topics, including data mining and data warehousing, high performance computing, parallel and distributed computing, computational intelligence, soft computing, big data, cloud computing, grid computing, cognitive computing, image processing, computer networks, wireless networks, social networks, wireless sensor networks, information and network security, web security, internet of things, bioinformatics, and geoinformatics. The book is a collection of the best papers presented at the First International Conference on Computational Intelligence and Informatics (ICCII 2016), held during 28-30 May 2016 at JNTUH CEH, Hyderabad, India. It was hosted by the Department of Computer Science and Engineering, JNTUH College of Engineering, in association with Division V (Education & Research), CSI, India.

  3. Development of Probabilistic Internal Dosimetry Computer Code

    Energy Technology Data Exchange (ETDEWEB)

    Noh, Siwan [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Kwon, Tae-Eun [Korea Institute of Radiological and Medical Sciences, Seoul (Korea, Republic of); Lee, Jai-Ki [Korean Association for Radiation Protection, Seoul (Korea, Republic of)

    2017-02-15

    Internal radiation dose assessment involves biokinetic models, the corresponding parameters, measured data, and many assumptions. Every component considered in the internal dose assessment has its own uncertainty, which is propagated into the intake activity and internal dose estimates. For research or scientific purposes, and for retrospective dose reconstruction for accident scenarios occurring in workplaces having a large quantity of unsealed radionuclides, such as nuclear power plants, nuclear fuel cycle facilities, and facilities in which nuclear medicine is practiced, a quantitative uncertainty assessment of the internal dose is often required. However, no calculation tools or computer codes that incorporate all the relevant processes and their corresponding uncertainties, i.e., from the measured data to the committed dose, are available. Thus, the objective of the present study is to develop an integrated probabilistic internal-dose-assessment computer code. First, the uncertainty components in internal dosimetry are identified, and quantitative uncertainty data are collected. Then, an uncertainty database is established for each component. In order to propagate these uncertainties in an internal dose assessment, a probabilistic internal-dose-assessment system that employs Bayesian and Monte Carlo methods was designed. Based on this system, we developed a probabilistic internal-dose-assessment code using MATLAB, so as to estimate the dose distributions from the measured data with uncertainty. Using the developed code, we calculated the internal dose distribution and statistical values (e.g. the 2.5th, 5th, median, 95th, and 97.5th percentiles) for three sample scenarios. On the basis of the distributions, we performed a sensitivity analysis to determine the influence of each component on the resulting dose in order to identify the major component of the uncertainty in a bioassay. The results of this study can be applied to various
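
    The propagation step described above (sampling each uncertain component and reading percentiles off the resulting dose distribution) can be illustrated with a few lines of Monte Carlo code. The sketch below is hypothetical: the distributions, retention fraction, and dose coefficient are placeholders, and the study's actual code is written in MATLAB and also performs Bayesian updating.

```python
# Minimal Monte Carlo sketch of uncertainty propagation from a bioassay
# measurement to a committed dose, reporting the same percentiles the study
# quotes. The distributions and coefficients are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

measured = 120.0                             # Bq, hypothetical bioassay result
meas = measured * rng.lognormal(0.0, 0.2, n) # lognormal measurement error
irf = rng.normal(0.05, 0.005, n)             # intake retention fraction at time t
dose_coeff = 6.7e-9                          # Sv/Bq, placeholder coefficient

intake = meas / irf                          # Bq
dose_msv = intake * dose_coeff * 1e3         # committed dose, mSv

for p in (2.5, 5, 50, 95, 97.5):
    print(f"{p:5.1f}th percentile: {np.percentile(dose_msv, p):.3f} mSv")
```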

  4. Development of Probabilistic Internal Dosimetry Computer Code

    International Nuclear Information System (INIS)

    Noh, Siwan; Kwon, Tae-Eun; Lee, Jai-Ki

    2017-01-01

    Internal radiation dose assessment involves biokinetic models, the corresponding parameters, measured data, and many assumptions. Every component considered in the internal dose assessment has its own uncertainty, which is propagated into the intake activity and internal dose estimates. For research or scientific purposes, and for retrospective dose reconstruction for accident scenarios occurring in workplaces having a large quantity of unsealed radionuclides, such as nuclear power plants, nuclear fuel cycle facilities, and facilities in which nuclear medicine is practiced, a quantitative uncertainty assessment of the internal dose is often required. However, no calculation tools or computer codes that incorporate all the relevant processes and their corresponding uncertainties, i.e., from the measured data to the committed dose, are available. Thus, the objective of the present study is to develop an integrated probabilistic internal-dose-assessment computer code. First, the uncertainty components in internal dosimetry are identified, and quantitative uncertainty data are collected. Then, an uncertainty database is established for each component. In order to propagate these uncertainties in an internal dose assessment, a probabilistic internal-dose-assessment system that employs Bayesian and Monte Carlo methods was designed. Based on this system, we developed a probabilistic internal-dose-assessment code using MATLAB, so as to estimate the dose distributions from the measured data with uncertainty. Using the developed code, we calculated the internal dose distribution and statistical values (e.g. the 2.5th, 5th, median, 95th, and 97.5th percentiles) for three sample scenarios. On the basis of the distributions, we performed a sensitivity analysis to determine the influence of each component on the resulting dose in order to identify the major component of the uncertainty in a bioassay. The results of this study can be applied to various situations. In cases

  5. Internal Clock Drift Estimation in Computer Clusters

    Directory of Open Access Journals (Sweden)

    Hicham Marouani

    2008-01-01

    Most computers have several high-resolution timing sources, from the programmable interrupt timer to the cycle counter. Yet, even at a precision of one cycle in ten million, clocks may drift significantly within a single second at a clock frequency of several GHz. When tracing low-level system events in computer clusters, such as packet sending or reception, each computer system records its own events using an internal clock. In order to properly understand the global system behavior and performance, as reported by the events recorded on each computer, it is important to estimate precisely the clock differences and drift between the different computers in the system. This article studies the clock precision and stability of several computer systems with different architectures. It also studies the typical network delay characteristics, since time synchronization algorithms rely on the exchange of network packets and are dependent on the symmetry of the delays. A very precise clock, based on the atomic time provided by the GPS satellite network, was used as a reference to measure clock drifts and network delays. The results obtained are of immediate use to all applications which depend on computer clocks or network time synchronization accuracy.
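
    A common way to estimate relative drift between two clocks, in the spirit of the measurements described above, is to fit a line to a series of offset samples against a reference clock: the slope is the drift rate and the intercept the initial offset. The sketch below uses synthetic samples; the article's actual method (GPS reference, handling of network delay asymmetry) is considerably more careful.

```python
# Minimal sketch of clock drift estimation: fit a line to offset samples
# between a local clock and a reference; the slope is the relative drift
# rate. Samples below are synthetic.
import numpy as np

rng = np.random.default_rng(1)
ref_t = np.linspace(0, 600, 61)                   # reference timestamps, s
true_drift = 12e-6                                # 12 ppm relative drift
offset = 0.4 + true_drift * ref_t + rng.normal(0, 5e-6, ref_t.size)

drift, offset0 = np.polyfit(ref_t, offset, 1)     # least-squares line fit
print(f"estimated drift: {drift * 1e6:.1f} ppm, initial offset: {offset0:.6f} s")
```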

  6. International Conference on Computational Mechanics

    CERN Document Server

    Atluri, Satya

    1986-01-01

    It is often said that these days there are too many conferences on general areas of computational mechanics and numerical methods. While this may be true, the history of scientific conferences is itself quite short. According to Abraham Pais (in "Subtle is the Lord...," Oxford University Press, 1982, p. 80), the first international scientific conference ever held was the Karlsruhe Congress of Chemists, 3-5 September 1860 in Karlsruhe, Germany. There were 127 chemists in attendance, and the participants came from Austria, Belgium, France, Germany, Great Britain, Italy, Mexico, Poland, Russia, Spain, Sweden, and Switzerland. At the top of the agenda of the points to be discussed at this conference was the question: "Shall a difference be made between the expressions molecule and atom?" Pais goes on to note: "The conference did not at once succeed in bringing chemists closer together ... It is possible that the older men were offended by the impetuous behavior and imposing manner of the younger...

  7. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded the dreams of its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  8. International assessment of functional computer abilities

    OpenAIRE

    Anderson, Ronald E.; Collis, Betty

    1993-01-01

    After delineating the major rationale for computer education, data are presented from Stage 1 of the IEA Computers in Education Study showing international comparisons that may reflect differential priorities. Rapid technological change and the lack of consensus on the goals of computer education impede the establishment of stable curricula for "general computer education" or computer literacy. In this context the construction of instruments for student assessment remains a challenge. Seeking to...

  9. 7th International Workshop on Natural Computing

    CERN Document Server

    Hagiya, Masami

    2015-01-01

    This book highlights recent advances in natural computing, including biology and its theory, bio-inspired computing, computational aesthetics, computational models and theories, computing with natural media, philosophy of natural computing and educational technology. It presents extended versions of the best papers selected from the symposium “7th International Workshop on Natural Computing” (IWNC7), held in Tokyo, Japan, in 2013. The target audience is not limited to researchers working in natural computing but also includes those active in biological engineering, fine/media art design, aesthetics and philosophy.

  10. 8th International Workshop on Natural Computing

    CERN Document Server

    Hagiya, Masami

    2016-01-01

    This book highlights recent advances in natural computing, including biology and its theory, bio-inspired computing, computational aesthetics, computational models and theories, computing with natural media, philosophy of natural computing, and educational technology. It presents extended versions of the best papers selected from the “8th International Workshop on Natural Computing” (IWNC8), a symposium held in Hiroshima, Japan, in 2014. The target audience is not limited to researchers working in natural computing but also includes those active in biological engineering, fine/media art design, aesthetics, and philosophy.

  11. International assessment of functional computer abilities

    NARCIS (Netherlands)

    Anderson, Ronald E.; Collis, Betty

    1993-01-01

    After delineating the major rationale for computer education, data are presented from Stage 1 of the IEA Computers in Education Study showing international comparisons that may reflect differential priorities. Rapid technological change and the lack of consensus on goals of computer education

  12. Spontaneity and international marketing performance

    OpenAIRE

    Souchon, Anne L.; Hughes, Paul; Farrell, Andrew M.; Nemkova, Ekaterina; Oliveira, Joao S.

    2016-01-01

    Purpose - The purpose of this paper is to ascertain how today's international marketers can perform better on the global scene by harnessing spontaneity. Design/methodology/approach - The authors draw on contingency theory to develop a model of the spontaneity - international marketing performance relationship, and identify three potential m...

  13. International Developments in Computer Science.

    Science.gov (United States)

    1982-06-01

    ... background on China's scientific research and on their computer science before 1978. A useful companion to the directory is another publication of the ... bimonthly publication in Portuguese; occasional translation of foreign articles into Portuguese. Data News: a bimonthly industry newsletter. Sistemas ... computer-related topics; Spanish. Delta: publication of a local users group; Spanish. Sistemas: publication of the System Engineers of Colombia; Spanish. CUBA

  14. Computer-Related Task Performance

    DEFF Research Database (Denmark)

    Longstreet, Phil; Xiao, Xiao; Sarker, Saonee

    2016-01-01

    The existing information system (IS) literature has acknowledged computer self-efficacy (CSE) as an important factor contributing to enhancements in computer-related task performance. However, the empirical results of CSE on performance have not always been consistent, and increasing an individual's CSE is often a cumbersome process. Thus, we introduce the theoretical concept of self-prophecy (SP) and examine how this social influence strategy can be used to improve computer-related task performance. Two experiments are conducted to examine the influence of SP on task performance. Results show that SP and CSE interact to influence performance. Implications are then discussed in terms of organizations' ability to increase performance.

  15. International Conference on Advanced Computing for Innovation

    CERN Document Server

    Angelova, Galia; Agre, Gennady

    2016-01-01

    This volume is a selected collection of papers presented and discussed at the International Conference "Advanced Computing for Innovation (AComIn 2015)". The conference was held on 10-11 November 2015 in Sofia, Bulgaria, and aimed at providing a forum for international scientific exchange between Central/Eastern Europe and the rest of the world on several fundamental topics of computational intelligence. The papers report innovative approaches and solutions in hot topics of computational intelligence - advanced computing, language and semantic technologies, signal and image processing, as well as optimization and intelligent control.

  16. International Conference of Intelligence Computation and Evolutionary Computation ICEC 2012

    CERN Document Server

    Intelligence Computation and Evolutionary Computation

    2013-01-01

    The 2012 International Conference of Intelligence Computation and Evolutionary Computation (ICEC 2012) was held on July 7, 2012 in Wuhan, China, sponsored by the Information Technology & Industrial Engineering Research Center. ICEC 2012 was a forum for the presentation of new research results in intelligent computation and evolutionary computation. Cross-fertilization of intelligent computation, evolutionary computation, evolvable hardware, and newly emerging technologies was strongly encouraged. The forum aimed to bring together researchers, developers, and users from around the world, in both industry and academia, to share state-of-the-art results, to explore new areas of research and development, and to discuss emerging issues facing intelligent computation and evolutionary computation.

  17. 3rd International Conference on Computational Mathematics and Computational Geometry

    CERN Document Server

    Ravindran, Anton

    2016-01-01

    This volume presents original research contributed to the 3rd Annual International Conference on Computational Mathematics and Computational Geometry (CMCGS 2014), organized and administered by the Global Science and Technology Forum (GSTF). Computational mathematics and computational geometry are closely related subjects, but are often studied by separate communities and published in different venues. This volume is unique in its combination of these topics. After the conference, which took place in Singapore, selected contributions were chosen for this volume and peer-reviewed. The section on computational mathematics contains papers concerned with developing new and efficient numerical algorithms for the mathematical sciences or scientific computing. They also cover analysis of such algorithms to assess accuracy and reliability. The parts of this project that are related to computational geometry aim to develop effective and efficient algorithms for geometrical applications such as representation and computati...

  18. International Conference on Computational Intelligence, Cyber Security, and Computational Models

    CERN Document Server

    Ramasamy, Vijayalakshmi; Sheen, Shina; Veeramani, C; Bonato, Anthony; Batten, Lynn

    2016-01-01

    This book aims at promoting high-quality research by researchers and practitioners from academia and industry at the International Conference on Computational Intelligence, Cyber Security, and Computational Models (ICC3 2015), organized by PSG College of Technology, Coimbatore, India, during December 17-19, 2015. The book covers innovations in broad areas of research such as computational modeling, computational intelligence, and cyber security. These emerging interdisciplinary research areas have helped to solve multifaceted problems and have gained a lot of attention in recent years. The book encompasses theory and applications, providing the design, analysis, and modeling of the aforementioned key areas.

  19. Second International Joint Conference on Computational Intelligence (IJCCI 2010)

    CERN Document Server

    Correia, António; Rosa, Agostinho; Filipe, Joaquim

    2012-01-01

    The present book includes a set of selected extended papers from the second International Joint Conference on Computational Intelligence (IJCCI 2010), held in Valencia, Spain, from 24 to 26 October 2010. The conference comprised three co-located conferences: the International Conference on Fuzzy Computation (ICFC), the International Conference on Evolutionary Computation (ICEC), and the International Conference on Neural Computation (ICNC). Recent progress in scientific developments and applications in these three areas is reported in this book. IJCCI received 236 submissions, from 49 countries on all continents. After a double-blind paper review performed by the Program Committee, only 30 submissions were accepted as full papers and thus selected for oral presentation, leading to a full-paper acceptance ratio of 13%. Additional papers were accepted as short papers and posters. A further selection was made after the Conference, based also on the assessment of presentation quality and audience inte...

  20. International Conference on Computational Intelligence 2015

    CERN Document Server

    Saha, Sujan

    2017-01-01

    This volume comprises the proceedings of the International Conference on Computational Intelligence 2015 (ICCI15). The book aims to bring together work from leading academicians, scientists, researchers, and research scholars from across the globe on all aspects of computational intelligence. It is composed mainly of original and unpublished results of conceptual, constructive, empirical, experimental, or theoretical work in all areas of computational intelligence. Specifically, the major topics covered include classical computational intelligence models and artificial intelligence, neural networks and deep learning, evolutionary swarm and particle algorithms, hybrid systems optimization, constraint programming, human-machine interaction, computational intelligence for web analytics, robotics, computational neurosciences, neurodynamics, bio-inspired and biomorphic algorithms, and cross-disciplinary topics and applications. The contents of this volume will be of use to researchers and professionals alike.

  1. International Conference on Computer, Communication and Computational Sciences

    CERN Document Server

    Mishra, Krishn; Tiwari, Shailesh; Singh, Vivek

    2017-01-01

    Exchange of information and innovative ideas is necessary to accelerate the development of technology. With the advent of technology, intelligent and soft computing techniques have come into existence, with a wide scope of implementation in the engineering sciences. With this in mind, this book includes insights that reflect 'Advances in Computer and Computational Sciences' from upcoming researchers and leading academicians across the globe. It contains high-quality peer-reviewed papers from the International Conference on Computer, Communication and Computational Sciences (ICCCCS 2016), held during 12-13 August 2016 in Ajmer, India. These papers are arranged in the form of chapters. The content of the book is divided into two volumes that cover a variety of topics such as intelligent hardware and software design, advanced communications, power and energy optimization, intelligent techniques used in the internet of things, intelligent image processing, advanced software engineering, evolutionary and ...

  2. Improving engineers' performance with computers

    International Nuclear Information System (INIS)

    Purvis, E.E. III

    1984-01-01

    The problem addressed is how to improve the performance of engineers in the design, operation, and maintenance of nuclear power plants. The application of computer science to this problem offers a challenge in maximizing the use of developments outside the nuclear industry and setting priorities to address the most fruitful areas first. Areas of potential benefit include database management through design, analysis, procurement, construction, operation, and maintenance; cost, schedule, and interface control and planning; and quality engineering on specifications, inspection, and training.

  3. International Conference on Theoretical and Computational Physics

    CERN Document Server

    2016-01-01

    The International Conference on Theoretical and Computational Physics (TCP 2016) will be held from August 24 to 26, 2016 in Xi'an, China. The conference will cover issues in theoretical and computational physics and is dedicated to creating a forum for exchanging the latest research results and sharing advanced research methods. TCP 2016 will be an important platform for inspiring international and interdisciplinary exchange at the forefront of theoretical and computational physics. The conference will bring together researchers, engineers, technicians, and academicians from all over the world, and we cordially invite you to take this opportunity to join us for academic exchange and visit the ancient city of Xi'an.

  4. Proceedings of the International Computer Music Conference

    DEFF Research Database (Denmark)

    It is a pleasure to welcome everyone to the 2007 International Computer Music Conference, hosted in Copenhagen from the 27th to the 31st of August. ICMC2007 is organized by Re:New - Digital arts forum, in collaboration with the International Computer Music Association and Medialogy at Aalborg University ... and interactive music and circus. The cross-disciplinary nature of ICMC is well represented by the Medialogy education, a new initiative started in Esbjerg and now well established in Copenhagen. ... efficient in providing feedback and comments on the numerous papers submitted. The response to the call for participation was very positive. We received 290 paper submissions and 554 music submissions. We are extremely grateful for the interest and support from around the world. It is an honor to welcome...

  5. Computed radiography systems performance evaluation

    International Nuclear Information System (INIS)

    Xavier, Clarice C.; Nersissian, Denise Y.; Furquim, Tania A.C.

    2009-01-01

    The performance of a computed radiography system was evaluated according to AAPM Report No. 93. The evaluation tests proposed by the publication were performed, and the following nonconformities were found: imaging plate (IP) dark noise, which compromises the clinical image acquired using the IP; an uncalibrated exposure indicator, which can cause underexposure of the IP; nonlinearity of the system response, which causes overexposure; a resolution limit below that declared by the manufacturer and uncalibrated erasure thoroughness, impairing the visualization of structures; a Moiré pattern visualized in the grid response; and IP throughput above that specified by the manufacturer. These nonconformities indicate that a lack of calibration in digital imaging systems can lead to increased dose so that image problems can be solved. (author)
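
    One of the tests above, system response (non)linearity, is commonly checked by fitting pixel value against the logarithm of exposure, since CR readers are typically designed for a log-linear response. The sketch below uses hypothetical readings and a simple goodness-of-fit check; it illustrates the idea only and is not the AAPM Report No. 93 procedure itself.

```python
# Minimal sketch of a system-response linearity check of the kind acceptance
# tests describe: CR pixel value is expected to vary linearly with log
# exposure. Readings below are hypothetical.
import numpy as np

exposure_mR = np.array([0.1, 0.32, 1.0, 3.2, 10.0])     # hypothetical exposures
pixel_value = np.array([512, 1020, 1535, 2040, 2555.0]) # hypothetical readings

x = np.log10(exposure_mR)
slope, intercept = np.polyfit(x, pixel_value, 1)
fit = slope * x + intercept
r2 = 1 - np.sum((pixel_value - fit) ** 2) / np.sum((pixel_value - pixel_value.mean()) ** 2)
print(f"linearity R^2 = {r2:.4f}")  # flag nonlinearity when R^2 is low
```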

  6. 6th International Workshop Soft Computing Applications

    CERN Document Server

    Jain, Lakhmi; Kovačević, Branko

    2016-01-01

    These volumes constitute the Proceedings of the 6th International Workshop on Soft Computing Applications, or SOFA 2014, held on 24-26 July 2014 in Timisoara, Romania. This edition was organized by the University of Belgrade, Serbia, in conjunction with the Romanian Society of Control Engineering and Technical Informatics (SRAIT) - Arad Section, the General Association of Engineers in Romania - Arad Section, the Institute of Computer Science, Iasi Branch of the Romanian Academy, and the IEEE Romanian Section. The Soft Computing concept was introduced by Lotfi Zadeh in 1991 and serves to highlight the emergence of computing methodologies in which the accent is on exploiting the tolerance for imprecision and uncertainty to achieve tractability, robustness and low solution cost. Soft computing facilitates the use of fuzzy logic, neurocomputing, evolutionary computing and probabilistic computing in combination, leading to the concept of hybrid intelligent systems. The combination of ...

  7. International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics

    CERN Document Server

    DEVELOPMENTS IN RELIABLE COMPUTING

    1999-01-01

    The SCAN conference, the International Symposium on Scientific Computing, Computer Arithmetic and Validated Numerics, takes place biannually under the joint auspices of GAMM (Gesellschaft für Angewandte Mathematik und Mechanik) and IMACS (International Association for Mathematics and Computers in Simulation). SCAN-98 attracted more than 100 participants from 21 countries all over the world. During the four days from September 22 to 25, nine highlighted plenary lectures and over 70 contributed talks were given. These figures indicate a large participation, partly due to the attraction of the organizing country, Hungary, but the effective support system also contributed to the success. The conference was substantially supported by the Hungarian Research Fund OTKA, GAMM, the National Technology Development Board OMFB, and by the József Attila University. Due to this funding, it was possible to subsidize the participation of over 20 scientists, mainly from Eastern European countries. I...

  8. International Conference on Soft Computing Systems

    CERN Document Server

    Panigrahi, Bijaya

    2016-01-01

    The book is a collection of high-quality peer-reviewed research papers presented in International Conference on Soft Computing Systems (ICSCS 2015) held at Noorul Islam Centre for Higher Education, Chennai, India. These research papers provide the latest developments in the emerging areas of Soft Computing in Engineering and Technology. The book is organized in two volumes and discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. It presents invited papers from the inventors/originators of new applications and advanced technologies.

  9. International Symposium on Computing and Network Sustainability

    CERN Document Server

    Akashe, Shyam

    2017-01-01

    The book is a compilation of technical papers presented at the International Research Symposium on Computing and Network Sustainability (IRSCNS 2016), held in Goa, India, on 1st and 2nd July 2016. The areas covered in the book are sustainable computing and security, sustainable systems and technologies, sustainable methodologies and applications, sustainable network applications and solutions, user-centered services and systems, and mobile data management. The novel and recent technologies presented in the book will be helpful for researchers and industries in their advanced work.

  10. RISC Processors and High Performance Computing

    Science.gov (United States)

    Bailey, David H.; Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    This tutorial will discuss the top five RISC microprocessors and the parallel systems in which they are used. It will provide a unique cross-machine comparison not available elsewhere. The effective performance of these processors will be compared by citing standard benchmarks in the context of real applications. The latest NAS Parallel Benchmarks, both absolute performance and performance per dollar, will be listed. The next generation of the NPB will be described. The tutorial will conclude with a discussion of future directions in the field. Technology Transfer Considerations: All of these computer systems are commercially available internationally. Information about these processors is available in the public domain, mostly from the vendors themselves. The NAS Parallel Benchmarks and their results have been previously approved numerous times for public release, beginning back in 1991.

  11. High Performance Spaceflight Computing (HPSC)

    Data.gov (United States)

    National Aeronautics and Space Administration — Space-based computing has not kept up with the needs of current and future NASA missions. We are developing a next-generation flight computing system that addresses...

  12. High performance computing in Windows Azure cloud

    OpenAIRE

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility and lower maintenance costs have essentially contributed to the growing popularity of cloud computing in all spheres of life, especially in business. In fact, cloud computing offers even more than this. Using virtual computing clusters, a runtime environment for high performance computing can be implemented efficiently in a cloud as well. There are many advantages but also some disadvantages of cloud computing, some ...

  13. Second International workshop Geometry and Symbolic Computation

    CERN Document Server

    Walczak, Paweł; Geometry and its Applications

    2014-01-01

    This volume has been divided into two parts: Geometry and Applications. The geometry portion of the book relates primarily to geometric flows, laminations, integral formulae, geometry of vector fields on Lie groups, and osculation; the articles in the applications portion concern some particular problems of the theory of dynamical systems, including mathematical problems of liquid flows and a study of cycles for non-dynamical systems. This work is based on the second international workshop, entitled "Geometry and Symbolic Computations," held on May 15-18, 2013 at the University of Haifa, and is dedicated to modeling (using symbolic calculations) in differential geometry and its applications in fields such as computer science, tomography, and mechanics. It is intended to create a forum for students and researchers in pure and applied geometry to promote discussion of the modern state of the art in geometric modeling using symbolic programs such as Maple™ and Mathematica®, as well as the presentation of new results. ...

  14. Manual on internal dose computation and reporting

    International Nuclear Information System (INIS)

    Sawant, Pramilla D.; Sawant, Jyoti V.; Gurg, R.P.; Rudran, Kamala; Gupta, V.K.; Abani, M.C.

    1999-05-01

    Whole body counting and bioassay measurements are carried out for estimation of the radioactivity content in the whole body or in a particular organ/tissue of interest. These measurements are routinely carried out for occupational workers at nuclear power plants, reprocessing plants, radiochemical laboratories, radioisotope laboratories and radioactive waste management facilities to evaluate individual internal dose due to 3H, 60Co, 90Sr, 137Cs, transuranics and other isotopes of interest. This manual is prepared to provide guidelines for the computation of intake, committed equivalent dose and committed effective dose from direct measurement of tissue and/or body content of radioactivity for 60Co, 131I, and 137Cs employing in-vivo monitoring procedures and/or bioassay measurements only. Bioassay measurements are used for determination of 90Sr in the body since it is a pure beta emitter. This manual can be used as a ready reckoner for assessment of radiation dose due to internal contamination of occupational workers as estimated using the above techniques in the middle and back end of the nuclear fuel cycle operations. The methodology used in the computation of dose is based on the principles and biokinetic models given by the ICRP. The recording level recommended in the manual is 0.6 mSv for both routine and special monitoring, which is lower than the 1 mSv recommended by the ICRP (ICRP-75, 1997) for individual routine monitoring and 0.66 mSv for special monitoring. The Annual Limit on Intake is taken as equivalent to the Annual Effective Dose Limit of 20 mSv prescribed by the Atomic Energy Regulatory Board (AERB), India. (author)
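
    The core computation the manual standardizes can be stated in two steps: estimate the intake by dividing the measured body or organ content by the predicted retention (or excretion) fraction m(t) at the time of measurement, then multiply the intake by the committed-dose coefficient. The sketch below uses hypothetical placeholder values, not numbers from the manual or ICRP tables.

```python
# Minimal sketch of the intake and committed-dose computation the manual
# describes: intake = measured content / predicted retention fraction m(t);
# committed effective dose = intake * dose coefficient e(50). Numbers are
# hypothetical placeholders, not values from the manual or ICRP tables.
measured_bq = 5_000.0      # whole-body content of Cs-137, Bq (hypothetical)
m_t = 0.8                  # retention fraction at t days after intake (hypothetical)
e50_sv_per_bq = 6.7e-9     # committed effective dose coefficient (placeholder)

intake_bq = measured_bq / m_t
dose_msv = intake_bq * e50_sv_per_bq * 1e3
print(f"intake: {intake_bq:.0f} Bq, committed effective dose: {dose_msv:.3f} mSv")

# Compare with the manual's recording level (0.6 mSv) to decide on recording.
recording_level_msv = 0.6
print("record" if dose_msv >= recording_level_msv else "below recording level")
```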

  15. Computer Network Attacks and Modern International Law

    Directory of Open Access Journals (Sweden)

    Andrey L. Kozik

    2014-01-01

    Full Text Available Computer network attacks (CNAs) are without doubt a topical subject, both theoretically and practically. Espionage and disruption of public and private computer systems committed by states have become a reality; states execute CNAs through their own agents or by hiring private hacker groups. However, the application of lex lata remains unclear in practice and underdeveloped in doctrine. Nevertheless, the international obligations that states have accepted under the UN Charter and other treaties, as well as under customary law, with any related exemptions and reservations, remain in force and create a legal framework that cannot be ignored. Depending on the intensity or the consequences of a CNA, it could be considered an unfriendly but lawful act; a use of force (prohibited under Article 2(4) of the UN Charter); or, where the proper threshold is crossed, an armed attack (which gives the victim state the right to use force in self-defence under customary law and Article 51 of the UN Charter). Research into the applicability of lex lata to CNAs could highlight gaps and weak points of the current legal regime. The subject is on the agenda in Western doctrine but, regrettably, not yet in Russian doctrine, where the number of publications remains unsatisfactory. The article formulates issues related to CNAs and the modern international legal regime. The author explores the definition and legal scope of the term CNA and highlights the main issues that have to be analysed from the point of view of contemporary law.

  16. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware, in the form of Field Programmable Gate Arrays (FPGAs), in High Performance Computing (HPC) applications. It presents the latest developments in this field from the applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g. computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation; and seven architecture chapters which...

  17. Network survivability performance (computer diskette)

    Science.gov (United States)

    1993-11-01

    File characteristics: Data file; 1 file. Physical description: 1 computer diskette; 3 1/2 in.; high density; 2.0MB. System requirements: Mac; Word. This technical report has been developed to address the survivability of telecommunications networks, including services. It responds to the need for a common understanding of, and assessment techniques for, network survivability, availability, integrity, and reliability. It provides a basis for designing and operating telecommunication networks to meet user expectations for network survivability.

  18. International performance indicators: gas supply

    International Nuclear Information System (INIS)

    1994-12-01

    This study evaluates the performance of Australia's natural gas utilities against world best practice. In particular, it examines whether Australia's traded goods sector is disadvantaged by the performance of domestic infrastructure service industries. It reports on the operating efficiency of the natural gas industry using Data Envelopment Analysis (DEA). It concludes that the Australian gas industry as a whole is performing relatively well in terms of operating efficiency and that its prices are comparable with prices in North America, once differences in consumption per customer are taken into consideration. Appendixes 1 and 2 provide a summary of the structure and regulation of the gas supply industry in Australia and selected overseas countries, while Appendix 3 gives an econometric analysis of the relationships between consumption per customer and residential price-cost margins. refs., tabs., figs
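
    The study's DEA model is not reproduced in this record. As a minimal sketch of the input-oriented CCR formulation that DEA studies of this kind typically solve, the following uses made-up utility data and scipy's linear-programming routine; the data and variable choices are assumptions for illustration only:

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of unit k.
    X: (n_units, n_inputs) input matrix, Y: (n_units, n_outputs) outputs."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                 # minimize theta
    A_in = np.c_[-X[k], X.T]                    # sum_j lam_j * x_j <= theta * x_k
    A_out = np.c_[np.zeros(Y.shape[1]), -Y.T]   # sum_j lam_j * y_j >= y_k
    res = linprog(c,
                  A_ub=np.vstack([A_in, A_out]),
                  b_ub=np.r_[np.zeros(X.shape[1]), -Y[k]],
                  bounds=[(0, None)] * (n + 1))
    return res.fun                              # theta* in (0, 1]

# Hypothetical utilities: inputs = [staff, network km], output = [gas delivered].
X = np.array([[50.0, 120.0], [80.0, 150.0], [40.0, 100.0]])
Y = np.array([[900.0], [1000.0], [700.0]])
for k in range(len(X)):
    print(f"utility {k}: efficiency = {ccr_efficiency(X, Y, k):.3f}")
```

    A fully efficient utility scores 1.0; a score of, say, 0.85 indicates it should be able to produce the same output with 85% of its current inputs.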

  19. High-performance computing — an overview

    Science.gov (United States)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.
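
    As a toy illustration of one technique from this survey, data parallelism, the following sketch splits an embarrassingly parallel computation across worker processes; Python's multiprocessing stands in here for the MPI or compiler-directive approaches an HPC code would actually use:

```python
from multiprocessing import Pool

def kernel(x):
    """Per-element work; in a real HPC code this would be the inner
    loop applied to one element or chunk of the distributed data."""
    return x * x

if __name__ == "__main__":
    data = range(100_000)
    # Data parallelism: the same kernel runs on disjoint pieces of the
    # data set in 4 worker processes, and the results are gathered.
    with Pool(processes=4) as pool:
        results = pool.map(kernel, data, chunksize=5_000)
    print(sum(results))
```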

  20. Automation, Performance and International Competition

    DEFF Research Database (Denmark)

    Kromann, Lene; Sørensen, Anders

    This paper presents new evidence on trade-induced automation in manufacturing firms using unique data combining a retrospective survey that we have assembled with register data for 2005-2010. In particular, we establish a causal effect whereby firms that have specialized in product types for which Chinese exports to the world market have risen sharply invest more in automated capital compared to firms that have specialized in other product types. We also study the relationship between automation and firm performance and find that firms with large increases in the scale and scope of automation have faster productivity growth than other firms. Moreover, automation improves the efficiency of all stages of the production process by reducing setup time, run time, and inspection time and increasing uptime and quantity produced per worker. The efficiency improvement varies by type of automation.

  1. 4th International Joint Conference on Computational Intelligence

    CERN Document Server

    Correia, António; Rosa, Agostinho; Filipe, Joaquim

    2015-01-01

    The present book includes extended and revised versions of a set of selected papers from the Fourth International Joint Conference on Computational Intelligence (IJCCI 2012), held in Barcelona, Spain, from 5 to 7 October 2012. The conference was sponsored by the Institute for Systems and Technologies of Information, Control and Communication (INSTICC) and was organized in cooperation with the Association for the Advancement of Artificial Intelligence (AAAI). The conference brought together researchers, engineers and practitioners in computational technologies, especially those related to the areas of fuzzy computation, evolutionary computation and neural computation. It is composed of three co-located conferences, each one specialized in one of the aforementioned knowledge areas, namely: the International Conference on Evolutionary Computation Theory and Applications (ECTA), the International Conference on Fuzzy Computation Theory and Applications (FCTA), and the International Conference on Neural Computation Theory a...

  2. US QCD computational performance studies with PERI

    International Nuclear Information System (INIS)

    Zhang, Y; Fowler, R; Huck, K; Malony, A; Porterfield, A; Reed, D; Shende, S; Taylor, V; Wu, X

    2007-01-01

    We report on some of the interactions between two SciDAC projects: The National Computational Infrastructure for Lattice Gauge Theory (USQCD), and the Performance Engineering Research Institute (PERI). Many modern scientific programs consistently report the need for faster computational resources to maintain global competitiveness. However, as the size and complexity of emerging high end computing (HEC) systems continue to rise, achieving good performance on such systems is becoming ever more challenging. In order to take full advantage of the resources, it is crucial to understand the characteristics of relevant scientific applications and the systems these applications are running on. Using tools developed under PERI and by other performance measurement researchers, we studied the performance of two applications, MILC and Chroma, on several high performance computing systems at DOE laboratories. In the case of Chroma, we discuss how the use of C++ and modern software engineering and programming methods are driving the evolution of performance tools

  3. Third International Joint Conference on Computational Intelligence (IJCCI 2011)

    CERN Document Server

    Dourado, António; Rosa, Agostinho; Filipe, Joaquim; Computational Intelligence

    2013-01-01

    The present book includes a set of selected extended papers from the third International Joint Conference on Computational Intelligence (IJCCI 2011), held in Paris, France, from 24 to 26 October 2011. The conference was composed of three co-located conferences: the International Conference on Fuzzy Computation (ICFC), the International Conference on Evolutionary Computation (ICEC), and the International Conference on Neural Computation (ICNC). Recent progress in scientific developments and applications in these three areas is reported in this book. IJCCI received 283 submissions from 59 countries across all continents. This book includes the revised and extended versions of a strict selection of the best papers presented at the conference.

  4. 7th International Joint Conference on Computational Intelligence

    CERN Document Server

    Rosa, Agostinho; Cadenas, José; Correia, António; Madani, Kurosh; Ruano, António; Filipe, Joaquim

    2017-01-01

    This book includes a selection of revised and extended versions of the best papers from the seventh International Joint Conference on Computational Intelligence (IJCCI 2015), held in Lisbon, Portugal, from 12 to 14 November 2015, which was composed of three co-located conferences: The International Conference on Evolutionary Computation Theory and Applications (ECTA), the International Conference on Fuzzy Computation Theory and Applications (FCTA), and the International Conference on Neural Computation Theory and Applications (NCTA). The book presents recent advances in scientific developments and applications in these three areas, reflecting the IJCCI’s commitment to high quality standards.

  5. Internal transport can be planned better with a computer

    NARCIS (Netherlands)

    Annevelink, E.

    1999-01-01

    Improving internal transport in pot-plant cultivation: by combining validated rules of thumb with applied mathematics, automatic planning of internal transport is within reach. This leads to fewer transport movements and time savings in planning.

  6. Debugging a high performance computing program

    Science.gov (United States)

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
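
    The record describes the grouping step only abstractly. A minimal sketch of the idea, assuming each thread can report the address of its current calling instruction, might look like the following; the names and snapshot format are hypothetical:

```python
from collections import defaultdict

def group_threads_by_call_address(thread_addresses):
    """Group thread ids by the address of the calling instruction each
    thread is currently executing (thread_addresses: tid -> address)."""
    groups = defaultdict(list)
    for tid, addr in thread_addresses.items():
        groups[addr].append(tid)
    # Sort smallest group first: in an SPMD program most threads sit at
    # the same address, so a small group is a likely defective thread.
    return sorted(groups.items(), key=lambda kv: len(kv[1]))

# Hypothetical snapshot of an 8-thread run: thread 3 is stuck elsewhere.
snapshot = {0: 0x4005D0, 1: 0x4005D0, 2: 0x4005D0, 3: 0x4007F2,
            4: 0x4005D0, 5: 0x4005D0, 6: 0x4005D0, 7: 0x4005D0}
for addr, tids in group_threads_by_call_address(snapshot):
    print(f"{addr:#x}: threads {tids}")
```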

  7. 6th International Conference on Computer Science and its Applications

    CERN Document Server

    Stojmenovic, Ivan; Jeong, Hwa; Yi, Gangman

    2015-01-01

    The 6th FTRA International Conference on Computer Science and its Applications (CSA-14) was held in Guam, USA, December 17-19, 2014. CSA-14 was a comprehensive conference focused on the various aspects of advances in engineering systems in computer science and applications, including ubiquitous computing, U-Health care systems, Big Data, UI/UX for human-centric computing, computing services, and bioinformatics and bio-inspired computing, showcasing recent advances in various aspects of computing technology, ubiquitous computing services and their applications.

  8. Embedded High Performance Scalable Computing Systems

    National Research Council Canada - National Science Library

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, A Lockheed Martin Company and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  9. Internal Control Good Cooperative Governance And Performance

    Directory of Open Access Journals (Sweden)

    Andry Arifian Rachman

    2017-11-01

    Full Text Available This study aims to examine the influence of internal control and good cooperative governance, partially and simultaneously, on the performance of cooperatives in West Java Province. The research method used is descriptive and verificative. The sample comprises 22 board members who manage cooperatives in West Java Province. The data used are primary data collected through questionnaires. Validity and reliability testing was performed before hypothesis testing. The research uses multiple regression analysis. Based on the hypothesis testing, (1) internal control has no significant effect on performance, (2) good cooperative governance has a significant effect on performance, and (3) internal control and good cooperative governance together have a significant effect on performance.

  10. Cloud Computing for Complex Performance Codes.

    Energy Technology Data Exchange (ETDEWEB)

    Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Klein, Brandon Thorin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Miner, John Gifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.

  11. An Adaptive Middleware for Improved Computational Performance

    DEFF Research Database (Denmark)

    Bonnichsen, Lars Frydendal

    The performance improvements in computer systems over the past 60 years have been fueled by an exponential increase in energy efficiency. In recent years, the phenomenon known as the end of Dennard scaling has slowed energy-efficiency improvements, but improving computer energy efficiency is more important now than ever. Traditionally, most improvements in computer energy efficiency have come from improvements in lithography (the ability to produce smaller transistors) and computer architecture (the ability to apply those transistors efficiently). Since the end of scaling, we have seen ... In this work, we improve computational performance by exploiting modern hardware features, such as dynamic voltage-frequency scaling and transactional memory. Adapting software is an iterative process, requiring that we continually revisit it to meet new requirements or realities; a time-consuming process ...

  12. 6th International Workshop on Computational Kinematics

    CERN Document Server

    Gracia, Alba

    2014-01-01

    Computational kinematics is an enthralling area of science with a rich spectrum of problems at the junction of mechanics, robotics, computer science, mathematics, and computer graphics. The covered topics include design and optimization of cable-driven robots, analysis of parallel manipulators, motion planning, numerical methods for mechanism calibration and optimization, geometric approaches to mechanism analysis and design, synthesis of mechanisms, kinematical issues in biomechanics, construction of novel mechanical devices, as well as detection and treatment of singularities. The results should be of interest for practicing and research engineers as well as Ph.D. students from the fields of mechanical and electrical engineering, computer science, and computer graphics. Indexed in Conference Proceedings Citation Index- Science (CPCI-S).

  13. International Conference on Mathematics and Computing

    CERN Document Server

    Giri, Debasis; Saxena, P; Srivastava, P

    2014-01-01

    This book discusses recent developments and contemporary research in mathematics, statistics and their applications in computing. All contributing authors are eminent academicians, scientists, researchers and scholars in their respective fields, hailing from around the world. The conference has emerged as a powerful forum, offering researchers a venue to discuss, interact and collaborate, and stimulating the advancement of mathematics and its applications in computer science. The book will allow aspiring researchers to update their knowledge of cryptography, algebra, frame theory, optimizations, stochastic processes, compressive sensing, functional analysis, complex variables, etc. Educating future consumers, users, producers, developers and researchers in mathematics and computing is a challenging task and essential to the development of modern society. Hence, mathematics and its applications in computer science are of vital importance to a broad range of communities, including mathematicians and computing p...

  14. International Conference on Computational Engineering Science

    CERN Document Server

    Yagawa, G

    1988-01-01

    The aim of this Conference was to become a forum for discussion of both academic and industrial research in those areas of computational engineering science and mechanics which involve and enrich the rational application of computers, numerical methods, and mechanics, in modern technology. The papers presented at this Conference cover the following topics: Solid and Structural Mechanics, Constitutive Modelling, Inelastic and Finite Deformation Response, Transient Analysis, Structural Control and Optimization, Fracture Mechanics and Structural Integrity, Computational Fluid Dynamics, Compressible and Incompressible Flow, Aerodynamics, Transport Phenomena, Heat Transfer and Solidification, Electromagnetic Field, Related Soil Mechanics and MHD, Modern Variational Methods, Biomechanics, and Off-Shore-Structural Mechanics.

  15. Function Follows Performance in Evolutionary Computational Processing

    DEFF Research Database (Denmark)

    Pasold, Anke; Foged, Isak Worre

    2011-01-01

    As the title ‘Function Follows Performance in Evolutionary Computational Processing’ suggests, this paper explores the potential of employing multiple design and evaluation criteria within one processing model in order to account for a number of performative parameters desired within varied...

  16. AHPCRC - Army High Performance Computing Research Center

    Science.gov (United States)

    2010-01-01

    Of particular interest is the ability of a distributed jamming network (DJN) to jam signals in all or part of a sensor or communications net...

  17. DURIP: High Performance Computing in Biomathematics Applications

    Science.gov (United States)

    2017-05-10

    The goal of this award was to enhance the capabilities of the Department of Applied Mathematics and Statistics (AMS) at the University of California, Santa Cruz (UCSC) to conduct research and research-related education in areas of ...

  18. 3rd International Conference on Computer & Communication Technologies

    CERN Document Server

    Bhateja, Vikrant; Raju, K; Janakiramaiah, B

    2017-01-01

    The book is a compilation of high-quality scientific papers presented at the 3rd International Conference on Computer & Communication Technologies (IC3T 2016). The individual papers address cutting-edge technologies and applications of soft computing, artificial intelligence and communication. In addition, a variety of further topics are discussed, which include data mining, machine intelligence, fuzzy computing, sensor networks, signal and image processing, human-computer interaction, web intelligence, etc. As such, it offers readers a valuable and unique resource.

  19. International Conference on Frontiers of Intelligent Computing : Theory and Applications

    CERN Document Server

    Bhateja, Vikrant; Udgata, Siba; Pattnaik, Prasant

    2017-01-01

    The book is a collection of high-quality peer-reviewed research papers presented at International Conference on Frontiers of Intelligent Computing: Theory and applications (FICTA 2016) held at School of Computer Engineering, KIIT University, Bhubaneswar, India during 16 – 17 September 2016. The book presents theories, methodologies, new ideas, experiences and applications in all areas of intelligent computing and its applications to various engineering disciplines like computer science, electronics, electrical and mechanical engineering.

  20. International Conference on Computational Vision and Robotics

    CERN Document Server

    2015-01-01

    Computer vision and robotics are among the most challenging areas of the 21st century. Their applications range from agriculture to medicine, household applications to humanoids, deep-sea applications to space applications, and industrial applications to unmanned plants. Today's technologies demand intelligent machines that enable applications in various domains and services. Robotics is one such area, encompassing a number of technologies, and its applications are widespread. Computational vision, or machine vision, is one of the most challenging tools for making a robot intelligent.   This volume covers chapters from various areas of computational vision such as image and video coding and analysis, image watermarking, noise reduction and cancellation, block matching and motion estimation, tracking of deformable objects using steerable pyramid wavelet transformation, medical image fusion, and CT and MRI image fusion based on the stationary wavelet transform. The book also covers articles from applicati...

  1. Computer technique for evaluating collimator performance

    International Nuclear Information System (INIS)

    Rollo, F.D.

    1975-01-01

    A computer program has been developed to theoretically evaluate the overall performance of collimators used with radioisotope scanners and γ cameras. The first step of the program involves the determination of the line spread function (LSF) and geometrical efficiency from the fundamental parameters of the collimator being evaluated. The working equations can be applied to any plane of interest. The resulting LSF is passed to subroutines which compute the corresponding modulation transfer function (MTF) and contrast efficiency functions. The latter function is then combined with appropriate geometrical efficiency data to determine the performance index function. The overall program allows one to predict, from the physical parameters of the collimator alone, how well the collimator will reproduce various sized spherical voids of activity in the image plane. The collimator performance program can be used to compare the performance of various collimator types, to study the effects of source depth on collimator performance, and to assist in the design of collimators. The theory of the collimator performance equation is discussed, a comparison between the experimental and theoretical LSF values is made, and examples of the application of the technique are presented
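
    The program itself is not listed in the record, but the LSF-to-MTF step it describes is a standard Fourier-transform computation. A minimal sketch, assuming a sampled Gaussian LSF, is:

```python
import numpy as np

def mtf_from_lsf(lsf, dx):
    """Modulation transfer function as the normalized magnitude of the
    discrete Fourier transform of a sampled line spread function."""
    spectrum = np.abs(np.fft.rfft(lsf))
    mtf = spectrum / spectrum[0]                 # unity at zero frequency
    freqs = np.fft.rfftfreq(len(lsf), d=dx)      # spatial frequency, cycles/mm
    return freqs, mtf

# Example: Gaussian LSF with 3 mm FWHM, sampled every 0.1 mm.
dx = 0.1
x = np.arange(-20.0, 20.0, dx)
sigma = 3.0 / 2.355                              # FWHM -> standard deviation
lsf = np.exp(-x**2 / (2.0 * sigma**2))
freqs, mtf = mtf_from_lsf(lsf, dx)
print(f"MTF at {freqs[8]:.2f} cycles/mm = {mtf[8]:.3f}")
```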

  2. International Symposium on Complex Computing-Networks

    CERN Document Server

    Sevgi, L; CCN2005; Complex computing networks: Brain-like and wave-oriented electrodynamic algorithms

    2006-01-01

    This book uniquely combines new advances in electromagnetic and circuits & systems theory. It integrates both fields with regard to computational aspects of common interest. Emphasized subjects are those methods which mimic brain-like and electrodynamic behaviour; among these are cellular neural networks, chaos and chaotic dynamics, attractor-based computation and stream ciphers. The book contains carefully selected contributions from the Symposium CCN2005. Pictures from the bestowal of Honorary Doctorate degrees on Leon O. Chua and Leopold B. Felsen are included.

  3. Misleading Performance Claims in Parallel Computations

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.

    2009-05-29

    In a previous humorous note entitled 'Twelve Ways to Fool the Masses,' I outlined twelve common ways in which performance figures for technical computer systems can be distorted. In this paper and accompanying conference talk, I give a reprise of these twelve 'methods' and give some actual examples that have appeared in peer-reviewed literature in years past. I then propose guidelines for reporting performance, the adoption of which would raise the level of professionalism and reduce the level of confusion, not only in the world of device simulation but also in the larger arena of technical computing.

  4. Computer Mediated Communication and University International Students

    Science.gov (United States)

    Robbins, Nancy; Lo, Yen-Hai; Hou, Feng-Heiung; Chou, Tsai-Sheng; Chen, Chin-Hung; Chen, Chao-Chien; Chen, Wen-Chiang; Chen, Yen-Chuan; Wang, Shih-Jen; Huang, Shih-Yu; Lii, Jong-Yiing

    2002-01-01

    The design of this preliminary study was based on surveys and interviews of international students and faculty members at a small southwestern university. The data collection procedure blended qualitative and quantitative data. A strong consensus was found that supports the study's premise that there is an association…

  5. High-performance computing for airborne applications

    International Nuclear Information System (INIS)

    Quinn, Heather M.; Manuzatto, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-01-01

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  6. High performance parallel computers for science

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1989-01-01

    This paper reports that Fermilab's Advanced Computer Program (ACP) has been developing cost-effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors, each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN- or C-programmable pipelined 20 Mflops (peak), 10 MByte single-board computer. These are plugged into a 16-port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256-node, 5 GFlop system is under construction

  7. 7th International Symposium on Intelligent Distributed Computing

    CERN Document Server

    Jung, Jason; Badica, Costin

    2014-01-01

    This book represents the combined peer-reviewed proceedings of the Seventh International Symposium on Intelligent Distributed Computing - IDC-2013, of the Second Workshop on Agents for Clouds - A4C-2013, of the Fifth International Workshop on Multi-Agent Systems Technology and Semantics - MASTS-2013, and of the International Workshop on Intelligent Robots - iR-2013. All the events were held in Prague, Czech Republic during September 4-6, 2013. The 41 contributions published in this book address many topics related to theory and applications of intelligent distributed computing and multi-agent systems, including: agent-based data processing, ambient intelligence, bio-informatics, collaborative systems, cryptography and security, distributed algorithms, grid and cloud computing, information extraction, intelligent robotics, knowledge management, linked data, mobile agents, ontologies, pervasive computing, self-organizing systems, peer-to-peer computing, social networks and trust, and swarm intelligence.  .

  8. The Logistics Performance Effect in International Trade

    Directory of Open Access Journals (Sweden)

    Azmat Gani

    2017-12-01

    Full Text Available The continuous growth in world trade depends on the efficiency of trade support structures such as logistics services. Despite the integral role of logistics in supporting commercial activities, there has generally been little analysis and trade policy research focus from trade practitioners. This paper explores the effect of logistics performance on international trade. The analysis draws on overall logistics performance as well as disaggregated measures of logistics specificities for a large sample of countries. The empirical analysis involved the estimation of standard export and import equations incorporating measures of logistics performance. The findings show that overall logistics performance is positively and statistically significantly correlated with exports and imports. The analysis is also extended by investigating whether logistics specificities matter for international trade. The findings reveal that several dimensions capturing logistics performance have a statistically significant and positive effect, mostly on exports. The main policy implication is that continuous investment in logistics infrastructure and services can positively impact international trade.
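
    The paper's exact specification is not given in the abstract. A standard export equation of the kind described, augmented with a logistics performance index (LPI), might take the gravity-model form below; this is a sketch, not the author's estimated model:

```latex
\ln X_{ij} = \beta_0 + \beta_1 \ln \mathrm{GDP}_i + \beta_2 \ln \mathrm{GDP}_j
           + \beta_3 \ln D_{ij} + \beta_4 \, \mathrm{LPI}_i + \varepsilon_{ij}
```

    Here $X_{ij}$ denotes exports from country $i$ to country $j$ and $D_{ij}$ the distance between them; a positive, significant $\beta_4$ would correspond to the finding that logistics performance mostly boosts exports.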

  9. Performative Computation-aided Design Optimization

    Directory of Open Access Journals (Sweden)

    Ming Tang

    2012-12-01

    Full Text Available This article discusses a collaborative research and teaching project between the University of Cincinnati, Perkins+Will’s Tech Lab, and the University of North Carolina Greensboro. The primary investigation focuses on the simulation, optimization, and generation of architectural designs using performance-based computational design approaches. The projects examine various design methods, including relationships between building form, performance and the use of proprietary software tools for parametric design.

  10. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  11. Internal Performance Measurement Systems: Problems and Solutions

    DEFF Research Database (Denmark)

    Jakobsen, Morten; Mitchell, Falconer; Nørreklit, Hanne

    2010-01-01

    This article pursues two aims: to identify problems and dangers related to the operational use of internal performance measurement systems of the Balanced Scorecard (BSC) type, and to provide some guidance on how performance measurement systems may be designed to overcome these problems. The analysis uses and extends Nørreklit's (2000) critique of the BSC by applying the concepts developed therein to contemporary research on the BSC and to the development of practice in performance measurement. The analysis is of relevance for many companies in the Asia-Pacific area as an increasing number...

  12. 6th International Conference on Soft Computing for Problem Solving

    CERN Document Server

    Bansal, Jagdish; Das, Kedar; Lal, Arvind; Garg, Harish; Nagar, Atulya; Pant, Millie

    2017-01-01

    This two-volume book gathers the proceedings of the Sixth International Conference on Soft Computing for Problem Solving (SocProS 2016), offering a collection of research papers presented during the conference at Thapar University, Patiala, India. Providing a veritable treasure trove for scientists and researchers working in the field of soft computing, it highlights the latest developments in the broad area of “Computational Intelligence” and explores both theoretical and practical aspects using fuzzy logic, artificial neural networks, evolutionary algorithms, swarm intelligence, soft computing, computational intelligence, etc.

  13. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D.; Schiess, Adrian B.; Howell, Jamie; Baca, Michael J.; Partridge, L. Donald; Finnegan, Patrick Sean; Wolfley, Steven L.; Dagel, Daryl James; Spahn, Olga Blum; Harper, Jason C.; Pohl, Kenneth Roy; Mickel, Patrick R.; Lohn, Andrew; Marinella, Matthew

    2013-10-01

    The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing > 10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.
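
    The quoted 10^12 factor can be checked directly from the figures given:

```latex
\frac{10^{16}\ \mathrm{ops/s}}{20\ \mathrm{W} \times 1200\ \mathrm{cm^3}} \approx 4.2 \times 10^{11},
\qquad
\frac{10^{15}\ \mathrm{ops/s}}{3 \times 10^{6}\ \mathrm{W} \times 1.5 \times 10^{9}\ \mathrm{cm^3}} \approx 2.2 \times 10^{-1},
\qquad
\frac{4.2 \times 10^{11}}{2.2 \times 10^{-1}} \approx 1.9 \times 10^{12}.
```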

  14. 22nd International Conference on Soft Computing

    CERN Document Server

    2017-01-01

    This proceedings book contains a collection of selected accepted papers from the Mendel conference held in Brno, Czech Republic, in June 2016. It contains three chapters which present recent advances in soft computing, including intelligent image processing. The Mendel conference was established in 1995 and is named after the scientist and Augustinian priest Gregor J. Mendel, who discovered the famous Laws of Heredity. The main aim of the conference is to create a regular opportunity for students, academics and researchers to exchange ideas and novel research methods on a yearly basis.

  15. Computational hybrid anthropometric paediatric phantom library for internal radiation dosimetry

    Science.gov (United States)

    Xie, Tianwu; Kuster, Niels; Zaidi, Habib

    2017-04-01

    Hybrid computational phantoms combine voxel-based and simplified equation-based modelling approaches to provide unique advantages and more realism for the construction of anthropomorphic models. In this work, a methodology and C++ code are developed to generate hybrid computational phantoms covering statistical distributions of body morphometry in the paediatric population. The paediatric phantoms of the Virtual Population Series (IT’IS Foundation, Switzerland) were modified to match target anthropometric parameters, including body mass, body length, standing height and sitting height/stature ratio, determined from reference databases of the National Centre for Health Statistics and the National Health and Nutrition Examination Survey. The phantoms were selected as representative anchor phantoms for newborns and 1-, 2-, 5-, 10- and 15-year-old children, and were subsequently remodelled to create 1100 female and male phantoms with 10th, 25th, 50th, 75th and 90th percentile body morphometries. Evaluation was performed qualitatively using 3D visualization and quantitatively by analysing internal organ masses. Overall, the newly generated phantoms appear very reasonable and representative of the main characteristics of the paediatric population at various ages and for different genders, body sizes and sitting stature ratios. The mass of internal organs increases with height and body mass. The comparison of the organ masses of the heart, kidney, liver, lung and spleen with published autopsy and ICRP reference data for children demonstrated that they follow the same trend when correlated with age. The constructed hybrid computational phantom library opens up the prospect of comprehensive radiation dosimetry calculations and risk assessment for the paediatric population of different age groups and diverse anthropometric parameters.

  16. High-performance computing in seismology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  17. High performance computing in linear control

    International Nuclear Information System (INIS)

    Datta, B.N.

    1993-01-01

    Remarkable progress has been made in both theory and applications of all important areas of control. The theory is rich and very sophisticated. Some beautiful applications of control theory are presently being made in aerospace, biomedical engineering, industrial engineering, robotics, economics, power systems, etc. Unfortunately, the same assessment of progress does not hold in general for computations in control theory. Control Theory is lagging behind other areas of science and engineering in this respect. Nowadays there is a revolution going on in the world of high performance scientific computing. Many powerful computers with vector and parallel processing have been built and have been available in recent years. These supercomputers offer very high speed in computations. Highly efficient software, based on powerful algorithms, has been developed to use on these advanced computers, and has also contributed to increased performance. While workers in many areas of science and engineering have taken great advantage of these hardware and software developments, control scientists and engineers, unfortunately, have not been able to take much advantage of these developments

  18. Mid-cervical flame-shaped pseudo-occlusion: diagnostic performance of mid-cervical flame-shaped extracranial internal carotid artery sign on computed tomographic angiography in hyperacute ischemic stroke

    Energy Technology Data Exchange (ETDEWEB)

    Prakkamakul, Supada; Pitakvej, Nantaporn [King Chulalongkorn Memorial Hospital the Thai Red Cross Society, Department of Radiology, Bangkok (Thailand); Dumrongpisutikul, Netsiri; Lerdlum, Sukalaya [King Chulalongkorn Memorial Hospital the Thai Red Cross Society, Department of Radiology, Bangkok (Thailand); Chulalongkorn University, Department of Radiology, Faculty of Medicine, Bangkok (Thailand)

    2017-10-15

    Flame-shaped pseudo-occlusion of the extracranial internal carotid artery (ICA) is a flow-related phenomenon that creates computed tomographic angiography (CTA) and digital subtraction angiography (DSA) findings that mimic tandem intracranial-extracranial ICA occlusion or dissection. We aim to determine the diagnostic performance of the mid-cervical flame-shaped extracranial ICA sign on CTA in hyperacute ischemic stroke patients. We retrospectively included consecutive anterior circulation ischemic stroke patients presenting within 6 h of symptom onset who underwent 4D brain CTA and arterial-phase neck CTA using a 320-detector CT scanner from August 2012 to July 2015. Two blinded readers independently reviewed arterial-phase neck CTA and characterized the extracranial ICA configurations into mid-cervical flame-shaped, proximal blunt/beak-shaped, and tubular-shaped groups. 4D whole brain CTA was used as the reference standard for intracranial ICA occlusion detection. Diagnostic performance of the mid-cervical flame-shaped extracranial ICA sign and interobserver reliability were calculated. Of the 81 cases, 11 had isolated intracranial ICA occlusion, and 6 had true extracranial ICA occlusion. The mid-cervical flame-shaped extracranial ICA sign was found in 45.5% (5/11) of isolated intracranial ICA occlusions but in none of the true extracranial ICA occlusion group. The sensitivity, specificity, PPV, NPV, and accuracy of the mid-cervical flame-shaped extracranial ICA sign for the detection of isolated intracranial ICA occlusion were 45.5, 100, 100, 92.1, and 92.6%, respectively. Interobserver reliability was 0.90. The mid-cervical flame-shaped extracranial ICA sign may suggest the presence of isolated intracranial ICA occlusion and allow reliable exclusion of tandem extracranial-intracranial ICA occlusion in the hyperacute ischemic stroke setting. (orig.)
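
    The reported figures follow from the 2x2 table implied by the abstract (5 sign-positive and 6 sign-negative cases among the 11 isolated intracranial occlusions, and no false positives among the remaining 70 cases); a quick check:

```python
# Counts implied by the abstract: 81 cases, 11 isolated intracranial
# occlusions (5 showed the sign), no false positives elsewhere.
tp, fn, fp, tn = 5, 6, 0, 70

sensitivity = tp / (tp + fn)                  # 5/11  -> 45.5%
specificity = tn / (tn + fp)                  # 70/70 -> 100%
ppv = tp / (tp + fp)                          # 5/5   -> 100%
npv = tn / (tn + fn)                          # 70/76 -> 92.1%
accuracy = (tp + tn) / (tp + fn + fp + tn)    # 75/81 -> 92.6%
print(f"{sensitivity:.1%} {specificity:.1%} {ppv:.1%} {npv:.1%} {accuracy:.1%}")
```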

  19. Mid-cervical flame-shaped pseudo-occlusion: diagnostic performance of mid-cervical flame-shaped extracranial internal carotid artery sign on computed tomographic angiography in hyperacute ischemic stroke

    International Nuclear Information System (INIS)

    Prakkamakul, Supada; Pitakvej, Nantaporn; Dumrongpisutikul, Netsiri; Lerdlum, Sukalaya

    2017-01-01

    Flame-shaped pseudo-occlusion of the extracranial internal carotid artery (ICA) is a flow-related phenomenon that creates computed tomographic angiography (CTA) and digital subtraction angiography (DSA) findings that mimic tandem intracranial-extracranial ICA occlusion or dissection. We aim to determine the diagnostic performance of the mid-cervical flame-shaped extracranial ICA sign on CTA in hyperacute ischemic stroke patients. We retrospectively included consecutive anterior circulation ischemic stroke patients presenting within 6 h of symptom onset who underwent 4D brain CTA and arterial-phase neck CTA using a 320-detector CT scanner from August 2012 to July 2015. Two blinded readers independently reviewed arterial-phase neck CTA and characterized the extracranial ICA configurations into mid-cervical flame-shaped, proximal blunt/beak-shaped, and tubular-shaped groups. 4D whole brain CTA was used as the reference standard for intracranial ICA occlusion detection. Diagnostic performance of the mid-cervical flame-shaped extracranial ICA sign and interobserver reliability were calculated. Of the 81 cases, 11 had isolated intracranial ICA occlusion, and 6 had true extracranial ICA occlusion. The mid-cervical flame-shaped extracranial ICA sign was found in 45.5% (5/11) of isolated intracranial ICA occlusions but in none of the true extracranial ICA occlusion group. The sensitivity, specificity, PPV, NPV, and accuracy of the mid-cervical flame-shaped extracranial ICA sign for the detection of isolated intracranial ICA occlusion were 45.5, 100, 100, 92.1, and 92.6%, respectively. Interobserver reliability was 0.90. The mid-cervical flame-shaped extracranial ICA sign may suggest the presence of isolated intracranial ICA occlusion and allow reliable exclusion of tandem extracranial-intracranial ICA occlusion in the hyperacute ischemic stroke setting. (orig.)

  20. 11th International Conference on Computer and Information Science

    CERN Document Server

    Computer and Information 2012

    2012-01-01

    The series "Studies in Computational Intelligence" (SCI) publishes new developments and advances in the various areas of computational intelligence – quickly and with a high quality. The intent is to cover the theory, applications, and design methods of computational intelligence, as embedded in the fields of engineering, computer science, physics and life science, as well as the methodologies behind them. The series contains monographs, lecture notes and edited volumes in computational intelligence spanning the areas of neural networks, connectionist systems, genetic algorithms, evolutionary computation, artificial intelligence, cellular automata, self-organizing systems, soft computing, fuzzy systems, and hybrid intelligent systems. Critical to both contributors and readers are the short publication time and world-wide distribution - this permits a rapid and broad dissemination of research results.   The purpose of the 11th IEEE/ACIS International Conference on Computer and Information Science (ICIS 2012...

  1. International Comparisons on Internal Labor Markets and Corporate Performance

    Directory of Open Access Journals (Sweden)

    Joonmo Cho

    2003-12-01

    Full Text Available This paper presents an analytical framework which demonstrates how the structures of corporate internal labor markets are formed within the broader labor and capital market context in the U.S. and Japan. This framework is then used to evaluate labor markets within Korean companies and to identify points of change which might promote greater efficiency. Prior to the Asian economic crisis, Korean conglomerates had large, closed internal labor markets. However, in the aftermath of the crisis, they have pursued structural downsizing and moved to open their labor markets. The empirical evidence introduced in this paper affirms the argument that the first step toward creating a flexible labor market in Korea should begin with establishing an efficient corporate governance structure. This implies that a simple switch from the Japanese paradigm for human resource management to an Anglo-American model, or vice versa, may not improve internal labor market performance unless the change is accompanied by a solution to the problems posed by the minority controlling structure of Korean companies. The implications of this study for guiding policy in developing countries with labor market rigidities and underdeveloped corporate governance are clear. Capital market structure and corporate governance systems may provide an appropriate starting point for the development of any policies aimed at building an efficient human resource management system and a flexible labor market.

  2. HPCToolkit: performance tools for scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Tallent, N; Mellor-Crummey, J; Adhianto, L; Fagan, M; Krentel, M [Department of Computer Science, Rice University, Houston, TX 77005 (United States)

    2008-07-15

    As part of the U.S. Department of Energy's Scientific Discovery through Advanced Computing (SciDAC) program, science teams are tackling problems that require simulation and modeling on petascale computers. As part of activities associated with the SciDAC Center for Scalable Application Development Software (CScADS) and the Performance Engineering Research Institute (PERI), Rice University is building software tools for performance analysis of scientific applications on the leadership-class platforms. In this poster abstract, we briefly describe the HPCToolkit performance tools and how they can be used to pinpoint bottlenecks in SPMD and multi-threaded parallel codes. We demonstrate HPCToolkit's utility by applying it to two SciDAC applications: the S3D code for simulation of turbulent combustion and the MFDn code for ab initio calculations of microscopic structure of nuclei.

  3. HPCToolkit: performance tools for scientific computing

    International Nuclear Information System (INIS)

    Tallent, N; Mellor-Crummey, J; Adhianto, L; Fagan, M; Krentel, M

    2008-01-01

    As part of the U.S. Department of Energy's Scientific Discovery through Advanced Computing (SciDAC) program, science teams are tackling problems that require simulation and modeling on petascale computers. As part of activities associated with the SciDAC Center for Scalable Application Development Software (CScADS) and the Performance Engineering Research Institute (PERI), Rice University is building software tools for performance analysis of scientific applications on the leadership-class platforms. In this poster abstract, we briefly describe the HPCToolkit performance tools and how they can be used to pinpoint bottlenecks in SPMD and multi-threaded parallel codes. We demonstrate HPCToolkit's utility by applying it to two SciDAC applications: the S3D code for simulation of turbulent combustion and the MFDn code for ab initio calculations of microscopic structure of nuclei

  4. 2013 International Conference on Computer Engineering and Network

    CERN Document Server

    Zhu, Tingshao

    2014-01-01

    This book aims to examine innovation in the fields of computer engineering and networking. The book covers important emerging topics in computer engineering and networking, and it will help researchers and engineers improve their knowledge of the state of the art in related areas. The book presents papers from the proceedings of the 2013 International Conference on Computer Engineering and Network (CENet2013), held on July 20-21 in Shanghai, China.

  5. 10th International Symposium on Intelligent Distributed Computing

    CERN Document Server

    Seghrouchni, Amal; Beynier, Aurélie; Camacho, David; Herpson, Cédric; Hindriks, Koen; Novais, Paulo

    2017-01-01

    This book presents the combined peer-reviewed proceedings of the tenth International Symposium on Intelligent Distributed Computing (IDC’2016), which was held in Paris, France from October 10th to 12th, 2016. The 23 contributions address a range of topics related to theory and application of intelligent distributed computing, including: Intelligent Distributed Agent-Based Systems, Ambient Intelligence and Social Networks, Computational Sustainability, Intelligent Distributed Knowledge Representation and Processing, Smart Networks, Networked Intelligence and Intelligent Distributed Applications, amongst others.

  6. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in.Describing a complicated system abstractly with mathematical equations requires a careful choice of assumpti
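
    A classic instance of the kind of model the book teaches is the M/M/1 queue, where the performance equations follow from just the arrival rate $\lambda$ and the service rate $\mu$; this is a textbook result, not an excerpt from the book:

```latex
\rho = \frac{\lambda}{\mu}, \qquad
N = \frac{\rho}{1 - \rho}, \qquad
R = \frac{N}{\lambda} = \frac{1}{\mu - \lambda} \qquad (\lambda < \mu)
```

    For example, a server completing $\mu = 100$ requests/s under a load of $\lambda = 80$ requests/s runs at utilization $\rho = 0.8$ and has a mean response time of $1/(100 - 80)$ s $= 50$ ms.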

  7. Computer performance optimization systems, applications, processes

    CERN Document Server

    Osterhage, Wolfgang W

    2013-01-01

    Computing power performance was important at times when hardware was still expensive, because hardware had to be put to the best use. Later on this criterion was no longer critical, since hardware had become inexpensive. Meanwhile, however, people have realized that performance again plays a significant role, because of the major drain on system resources involved in developing complex applications. This book distinguishes between three levels of performance optimization: the system level, application level and business processes level. On each, optimizations can be achieved and cost-cutting p

  8. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  9. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  10. 10th International Conference on Genetic and Evolutionary Computing

    CERN Document Server

    Lin, Jerry; Wang, Chia-Hung; Jiang, Xin

    2017-01-01

    This book gathers papers presented at the 10th International Conference on Genetic and Evolutionary Computing (ICGEC 2016). The conference was co-sponsored by Springer, Fujian University of Technology in China, the University of Computer Studies in Yangon, University of Miyazaki in Japan, National Kaohsiung University of Applied Sciences in Taiwan, Taiwan Association for Web Intelligence Consortium, and VSB-Technical University of Ostrava, Czech Republic. The ICGEC 2016, which was held from November 7 to 9, 2016 in Fuzhou City, China, was intended as an international forum for researchers and professionals in all areas of genetic and evolutionary computing.

  11. Computation of high Reynolds number internal/external flows

    Science.gov (United States)

    Cline, M. C.; Wilmoth, R. G.

    1981-01-01

    A general, user-oriented computer program, called VNAP2, was developed to calculate high Reynolds number, internal/external flows. The VNAP2 program solves the two-dimensional, time-dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.
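
    The explicit MacCormack scheme named in this record is a two-step predictor-corrector method. Purely as an illustration of that structure (not the VNAP2 solver, which treats the full 2D Navier-Stokes equations), the sketch below applies it to the 1D linear advection equation u_t + a*u_x = 0:

```python
import numpy as np

# Hedged sketch of the MacCormack predictor-corrector for u_t + a*u_x = 0.
# The endpoints are simply held fixed, which is adequate for a short demo.
def maccormack_step(u, a, dt, dx):
    c = a * dt / dx                                           # Courant number (<= 1)
    up = u.copy()
    up[:-1] = u[:-1] - c * (u[1:] - u[:-1])                   # predictor: forward difference
    un = u.copy()
    un[1:] = 0.5 * (u[1:] + up[1:] - c * (up[1:] - up[:-1]))  # corrector: backward difference
    return un

x = np.linspace(0.0, 1.0, 101)
u = np.exp(-200.0 * (x - 0.3) ** 2)      # initial Gaussian pulse
for _ in range(50):
    u = maccormack_step(u, a=1.0, dt=0.005, dx=0.01)  # Courant number 0.5
```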

  12. Computation of high Reynolds number internal/external flows

    International Nuclear Information System (INIS)

    Cline, M.C.; Wilmoth, R.G.

    1981-01-01

    A general, user-oriented computer program, called VNAP2, has been developed to calculate high Reynolds number, internal/external flows. VNAP2 solves the two-dimensional, time-dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.

  13. 2012 International Conference on Teaching and Computational Science (ICTCS 2012)

    CERN Document Server

    Advanced Technology in Teaching

    2013-01-01

    The 2012 International Conference on Teaching and Computational Science (ICTCS 2012) was held on April 1-2, 2012 in Macao. This volume contains 120 selected papers presented at ICTCS 2012, which brought together researchers working in many different areas of teaching and computational science to foster international collaboration and the exchange of new ideas. The volume can be divided into two sections on the basis of the classification of the manuscripts considered: the first section deals with teaching, and the second with computational science. We hope that all the papers published here can benefit readers in the related research fields.

  14. A methodology for performing computer security reviews

    International Nuclear Information System (INIS)

    Hunteman, W.J.

    1991-01-01

    DOE Order 5637.1, "Classified Computer Security," requires regular reviews of the computer security activities for an ADP system and for a site. Based on experiences gained in the Los Alamos computer security program through interactions with DOE facilities, we have developed a methodology to aid a site or security officer in performing a comprehensive computer security review. The methodology is designed to aid a reviewer in defining goals of the review (e.g., preparation for inspection), determining security requirements based on DOE policies, determining threats/vulnerabilities based on DOE and local threat guidance, and identifying critical system components to be reviewed. Application of the methodology will result in review procedures and checklists oriented to the review goals, the target system, and DOE policy requirements. The review methodology can be used to prepare for an audit or inspection and as a periodic self-check tool to determine the status of the computer security program for a site or specific ADP system.

  15. A methodology for performing computer security reviews

    International Nuclear Information System (INIS)

    Hunteman, W.J.

    1991-01-01

    This paper reports on DOE Order 5637.1, Classified Computer Security, which requires regular reviews of the computer security activities for an ADP system and for a site. Based on experiences gained in the Los Alamos computer security program through interactions with DOE facilities, the authors have developed a methodology to aid a site or security officer in performing a comprehensive computer security review. The methodology is designed to aid a reviewer in defining goals of the review (e.g., preparation for inspection), determining security requirements based on DOE policies, determining threats/vulnerabilities based on DOE and local threat guidance, and identifying critical system components to be reviewed. Application of the methodology will result in review procedures and checklists oriented to the review goals, the target system, and DOE policy requirements. The review methodology can be used to prepare for an audit or inspection and as a periodic self-check tool to determine the status of the computer security program for a site or specific ADP system.

  16. HIGH PERFORMANCE PHOTOGRAMMETRIC PROCESSING ON COMPUTER CLUSTERS

    Directory of Open Access Journals (Sweden)

    V. N. Adrov

    2012-07-01

    Most CPU-consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent pieces of data as input and produce independent output: images, stereopairs, or small image blocks can be processed independently, and many photogrammetric algorithms are fully automatic, requiring no human interference. Photogrammetric workstations can perform tie-point measurement, DTM calculation, orthophoto construction, mosaicking, and many other service operations in parallel using distributed calculations, reducing computations that take several days to several hours. Modern trends in computer technology show a growing number of CPU cores in workstations, rising speeds in local networks and, as a result, falling prices for supercomputers or computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in a DPW is usually targeted at interactive work with a limited number of CPU cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be the limited LAN throughput and storage performance, since huge amounts of large raster images must be processed.
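
    A hedged sketch of the data parallelism described above: because image blocks are independent, they can be farmed out to worker processes with no coordination beyond collecting results. The function process_tile is a hypothetical stand-in for a real photogrammetric kernel (tie-point measurement, DTM cells, orthorectification):

```python
from multiprocessing import Pool

# Hypothetical per-tile kernel; in a real DPW this would be a heavy,
# fully automatic photogrammetric computation on one image block.
def process_tile(tile_id: int) -> str:
    return f"tile {tile_id} done"

if __name__ == "__main__":
    with Pool(processes=8) as pool:  # roughly one worker per CPU core
        for result in pool.imap_unordered(process_tile, range(100)):
            print(result)
```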

  17. International Conference on Computers and Advanced Technology in Education

    CERN Document Server

    Advanced Information Technology in Education

    2012-01-01

    The volume includes a set of selected papers extended and revised from the 2011 International Conference on Computers and Advanced Technology in Education. With the development of computers and advanced technology, human social activities are changing fundamentally. Education, and especially education reform in different countries, has been greatly helped by computers and advanced technology. Generally speaking, education is a field that needs much information, while computers, advanced technology and the internet are good information providers; with their aid, education can be made more effective. Therefore, computers and advanced technology should be regarded as an important medium in modern education. The volume Advanced Information Technology in Education is to provide a forum for researchers, educators, engineers, and government officials involved in the general areas of computers and advanced technology in education to d...

  18. 2nd International Conference on Intelligent Computing, Communication & Devices

    CERN Document Server

    Popentiu-Vladicescu, Florin

    2017-01-01

    The book presents high-quality papers presented at the 2nd International Conference on Intelligent Computing, Communication & Devices (ICCD 2016) organized by Interscience Institute of Management and Technology (IIMT), Bhubaneswar, Odisha, India, during 13 and 14 August, 2016. The book covers all dimensions of intelligent sciences in its three tracks, namely, intelligent computing, intelligent communication and intelligent devices. The intelligent computing track covers areas such as intelligent and distributed computing, intelligent grid and cloud computing, internet of things, soft computing and engineering applications, data mining and knowledge discovery, semantic and web technology, hybrid systems, agent computing, bioinformatics, and recommendation systems. Intelligent communication covers communication and network technologies, including mobile broadband and all-optical networks that are the key to groundbreaking inventions of intelligent communication technologies. This covers communication hardware, soft...

  19. Monitoring SLAC High Performance UNIX Computing Systems

    International Nuclear Information System (INIS)

    Lettsome, Annette K.

    2005-01-01

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
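
    A minimal sketch of the storage idea, with a hypothetical schema and metric names; the work described above used MySQL, but Python's built-in sqlite3 is used here only to keep the example self-contained:

```python
import sqlite3
import time

# Append-only metrics table: unlike a round-robin database, old samples
# are never overwritten, so historical data stays intact.
conn = sqlite3.connect("metrics.db")
conn.execute("""CREATE TABLE IF NOT EXISTS metrics (
                    host TEXT, metric TEXT, value REAL, ts INTEGER)""")

def record_metric(host: str, metric: str, value: float) -> None:
    conn.execute("INSERT INTO metrics VALUES (?, ?, ?, ?)",
                 (host, metric, value, int(time.time())))
    conn.commit()

record_metric("node01", "load_one", 0.42)  # e.g. a 1-minute load average sample
```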

  20. Cloud Computing for Maintenance Performance Improvement

    OpenAIRE

    Kour, Ravdeep; Karim, Ramin; Parida, Aditya

    2013-01-01

    Cloud Computing is an emerging research area. It can be utilised for acquiring effective and efficient information logistics. This paper uses cloud-based technology for the establishment of information logistics for a railway system, which requires information based on data from different data sources (e.g. railway maintenance, railway operation, and railway business data). In order to improve the performance of the maintenance process, relevant data from various sources need to be acquired, f...

  1. High Performance Computing Operations Review Report

    Energy Technology Data Exchange (ETDEWEB)

    Cupps, Kimberly C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-19

    The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.

  2. International Conference on Computational Intelligence in Data Mining

    CERN Document Server

    Mohapatra, Durga

    2017-01-01

    The book presents high quality papers presented at the International Conference on Computational Intelligence in Data Mining (ICCIDM 2016) organized by the School of Computer Engineering, Kalinga Institute of Industrial Technology (KIIT), Bhubaneswar, Odisha, India during December 10–11, 2016. The book disseminates knowledge about innovative, active research directions in the field of data mining, machine and computational intelligence, along with current issues and applications of related topics. The volume aims to explicate and address the difficulties and challenges of the seamless integration of the two core disciplines of computer science.

  3. International MultiConference of Engineers and Computer Scientists 2015

    CERN Document Server

    Ao, Sio-Iong; Huang, Xu; Castillo, Oscar

    2016-01-01

    This volume comprises selected extended papers written by prominent researchers participating in the International MultiConference of Engineers and Computer Scientists 2015, Hong Kong, 18-20 March 2015. The conference served as a platform for discussion of frontier topics in theoretical and applied engineering and computer science, and subjects covered include communications systems, control theory and automation, bioinformatics, artificial intelligence, data mining, engineering mathematics, scientific computing, engineering physics, electrical engineering, and industrial applications. The book describes the state-of-the-art in engineering technologies and computer science and its applications, and will serve as an excellent reference for industrial and academic researchers and graduate students working in these fields.

  4. 1st International Conference on Intelligent Computing and Communication

    CERN Document Server

    Satapathy, Suresh; Sanyal, Manas; Bhateja, Vikrant

    2017-01-01

    The book covers a wide range of topics in Computer Science and Information Technology including swarm intelligence, artificial intelligence, evolutionary algorithms, and bio-inspired algorithms. It is a collection of papers presented at the First International Conference on Intelligent Computing and Communication (ICIC2) 2016. The prime areas of the conference are Intelligent Computing, Intelligent Communication, Bio-informatics, Geo-informatics, Algorithm, Graphics and Image Processing, Graph Labeling, Web Security, Privacy and e-Commerce, Computational Geometry, Service-Oriented Architecture, and Data Engineering.

  5. Optimising investment performance through international diversification

    Directory of Open Access Journals (Sweden)

    J. Swart

    2014-01-01

    International portfolio diversification is often advocated as a way of enhancing portfolio performance, particularly through the reduction of portfolio risk. Portfolio managers in Europe have for decades routinely invested a substantial portion of their portfolios in securities that were issued in other countries. During the last decade US investors have held a significant amount of foreign securities, with over a trillion dollars invested in foreign assets by 1994. South African institutions have been allowed some freedom to diversify internationally since mid-1995, and individual investors since July 1997. In this paper the potential diversification benefits for South African investors are considered. The stability over time of the correlation structure is investigated, and simple ex-ante investment strategies are formulated and evaluated.
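
    The risk-reduction mechanism referred to above can be made concrete with the standard two-asset portfolio variance formula (a textbook result, not the paper's data): portfolio risk falls as the correlation rho between the two markets falls.

```python
import math

# Standard two-asset portfolio risk: sigma_p^2 = (w1*s1)^2 + (w2*s2)^2
# + 2*w1*w2*rho*s1*s2, where w are weights, s volatilities, rho correlation.
def portfolio_std(w1: float, s1: float, s2: float, rho: float) -> float:
    w2 = 1.0 - w1
    var = (w1 * s1) ** 2 + (w2 * s2) ** 2 + 2.0 * w1 * w2 * rho * s1 * s2
    return math.sqrt(var)

# Two markets with 20% annual volatility each, split 50/50 (illustrative numbers):
print(portfolio_std(0.5, 0.20, 0.20, rho=0.9))  # ~0.195: little diversification benefit
print(portfolio_std(0.5, 0.20, 0.20, rho=0.3))  # ~0.161: clear risk reduction
```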

  6. Performance monitoring for brain-computer-interface actions.

    Science.gov (United States)

    Schurger, Aaron; Gale, Steven; Gozel, Olivia; Blanke, Olaf

    2017-02-01

    When presented with a difficult perceptual decision, human observers are able to make metacognitive judgements of subjective certainty. Such judgements can be made independently of and prior to any overt response to a sensory stimulus, presumably via internal monitoring. Retrospective judgements about one's own task performance, on the other hand, require first that the subject perform a task, and thus could potentially be made based on motor processes, proprioceptive, and other sensory feedback rather than internal monitoring. With this dichotomy in mind, we set out to study performance monitoring using a brain-computer interface (BCI), with which subjects could voluntarily perform an action - moving a cursor on a computer screen - without any movement of the body, and thus without somatosensory feedback. Real-time visual feedback was available to subjects during training, but not during the experiment, where the true final position of the cursor was only revealed after the subject had estimated where s/he thought it had ended up after 6 s of BCI-based cursor control. During the first half of the experiment subjects based their assessments primarily on the prior probability of the end position of the cursor on previous trials. However, during the second half of the experiment subjects' judgements moved significantly closer to the true end position of the cursor, and away from the prior. This suggests that subjects can monitor task performance when the task is performed without overt movement of the body.
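
    One simple way to describe the reported shift, offered here only as an illustration and not as the authors' model, is a weighted mix of the prior end position and internally monitored evidence, with the evidence weight growing over the session:

```python
# Hypothetical descriptive model (not from the paper): w = 0 means the
# judgement relies purely on the prior; w = 1 means purely on monitoring.
def estimate(prior_mean: float, monitored_position: float, w: float) -> float:
    return (1.0 - w) * prior_mean + w * monitored_position

print(estimate(0.0, 5.0, w=0.2))  # early trials: 1.0, close to the prior
print(estimate(0.0, 5.0, w=0.8))  # late trials: 4.0, close to the true position
```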

  7. 1st International Conference on Communication and Computer Engineering

    CERN Document Server

    Othman, Mohd; Othman, Mohd; Rahim, Yahaya; Pee, Naim

    2015-01-01

    This book covers diverse aspects of advanced computer and communication engineering, focusing specifically on industrial and manufacturing theory and applications of electronics, communications, computing, and information technology. Experts in research, industry, and academia present the latest developments in technology, describe applications involving cutting-edge communication and computer systems, and explore likely future directions. In addition, access is offered to numerous new algorithms that assist in solving computer and communication engineering problems. The book is based on presentations delivered at ICOCOE 2014, the 1st International Conference on Communication and Computer Engineering. It will appeal to a wide range of professionals in the field, including telecommunication engineers, computer engineers and scientists, researchers, academics, and students.

  8. 2nd International Conference on Intelligent Computing and Applications

    CERN Document Server

    Dash, Subhransu; Das, Swagatam; Panigrahi, Bijaya

    2017-01-01

    The Second International Conference on Intelligent Computing and Applications was an annual research conference aimed at bringing together researchers around the world to exchange research results and address open issues in all aspects of intelligent computing and applications. The main objective of the second edition of the conference was to give scientists, scholars, engineers and students from academia and industry an opportunity to present ongoing research activities and hence to foster research relations between universities and industry. The theme of the conference unified the picture of contemporary intelligent computing techniques as an integral concept that highlights the trends in computational intelligence and bridges theoretical research concepts with applications. The conference covered vital issues ranging from intelligent computing, soft computing, and communication to machine learning, industrial automation, process technology and robotics. This conference also provided a variety of opportunities for ...

  9. 2nd International Conference on Communication and Computer Engineering

    CERN Document Server

    Othman, Mohd; Othman, Mohd; Rahim, Yahaya; Pee, Naim

    2016-01-01

    This book covers diverse aspects of advanced computer and communication engineering, focusing specifically on industrial and manufacturing theory and applications of electronics, communications, computing and information technology. Experts in research, industry, and academia present the latest developments in technology, describe applications involving cutting-edge communication and computer systems, and explore likely future trends. In addition, a wealth of new algorithms that assist in solving computer and communication engineering problems are presented. The book is based on presentations given at ICOCOE 2015, the 2nd International Conference on Communication and Computer Engineering. It will appeal to a wide range of professionals in the field, including telecommunication engineers, computer engineers and scientists, researchers, academics and students.

  10. Second International Conference on Advanced Computing, Networking and Informatics

    CERN Document Server

    Mohapatra, Durga; Konar, Amit; Chakraborty, Aruna

    2014-01-01

    Advanced Computing, Networking and Informatics are three distinct and mutually exclusive disciplines of knowledge with no apparent sharing/overlap among them. However, their convergence is observed in many real world applications, including cyber-security, internet banking, healthcare, sensor networks, cognitive radio, and pervasive computing, among many others. This two-volume proceedings explores the combined use of Advanced Computing and Informatics in the next generation wireless networks and security, signal and image processing, ontology and human-computer interfaces (HCI). The two volumes together include 148 scholarly papers, which have been accepted for presentation from over 640 submissions in the second International Conference on Advanced Computing, Networking and Informatics, 2014, held in Kolkata, India during June 24-26, 2014. The first volume includes innovative computing techniques and relevant research results in informatics with selective applications in pattern recognition, signal/image process...

  11. DETERMINING INTERNATIONAL STRATEGIC ALLIANCE PERFORMANCE

    OpenAIRE

    Nielsen, Bo Bernhard

    2002-01-01

    This paper considers the relationship between subjective measures of international alliance performance and a set of variables, which may act as predictors of success before the alliance is formed (pre-alliance formation factors), and a set of variables which emerge during the operation of the alliance (post-alliance formation factors). The empirical study, based on a web-survey, investigates a sample of Danish partner firms engaged in 48 equity joint ventures and 70 non-equity joint ventur...

  12. The path toward HEP High Performance Computing

    International Nuclear Information System (INIS)

    Apostolakis, John; Brun, René; Gheata, Andrei; Wenzel, Sandro; Carminati, Federico

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of the peak one. Although several successful attempts have been made to port selected codes on GPUs, no major HEP code suite has a 'High Performance' implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in terms of memory consumption compared to the single-threaded version, together with sub-optimal handling of event processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grain parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit
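
    A hedged sketch of the fine-grain scheduling idea described above: particles are grouped into vectors ("baskets") and handed to whichever computing resource is free. The names and the transport step are illustrative only, not Geant-V code:

```python
import queue
import threading

baskets = queue.Queue()
for b in range(32):
    # Each basket is a vector of tracks that can be transported together.
    baskets.put([f"particle-{b}-{i}" for i in range(16)])

def worker():
    while True:
        try:
            basket = baskets.get_nowait()  # grab the next vector of particles
        except queue.Empty:
            return                         # no work left for this resource
        # ... a vectorised transport step over the whole basket would go here ...

threads = [threading.Thread(target=worker) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```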

  13. A High Performance VLSI Computer Architecture For Computer Graphics

    Science.gov (United States)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

    A VLSI computer architecture, consisting of multiple processors, is presented in this paper to satisfy the demands of modern computer graphics, e.g. high resolution, realistic animation, real-time display, etc. All processors share a global memory which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e. object domain and space domain, to fully utilize the data-independency characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With the current high density interconnection (MI) technology, it is feasible to implement a 64-processor system to achieve 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  14. NINJA: Java for High Performance Numerical Computing

    Directory of Open Access Journals (Sweden)

    José E. Moreira

    2002-01-01

    When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java now can be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.
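
    The idea behind "truly multidimensional arrays" can be illustrated outside Java: store all elements in one contiguous buffer and compute a flat index, so the layout is explicit and amenable to compiler optimization. The Python class below only mirrors the concept; NINJA itself provides such arrays as Java class libraries:

```python
# Conceptual illustration (not NINJA's API): a 2D array over one
# contiguous buffer with a computed row-major index.
class Array2D:
    def __init__(self, rows: int, cols: int):
        self.rows, self.cols = rows, cols
        self.data = [0.0] * (rows * cols)  # single contiguous buffer

    def get(self, i: int, j: int) -> float:
        return self.data[i * self.cols + j]  # row-major flat index

    def set(self, i: int, j: int, value: float) -> None:
        self.data[i * self.cols + j] = value

a = Array2D(3, 4)
a.set(1, 2, 7.0)
print(a.get(1, 2))  # 7.0
```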

  15. 30th International Symposium on Computer and Information Sciences

    CERN Document Server

    Gelenbe, Erol; Gorbil, Gokce; Lent, Ricardo

    2016-01-01

    The 30th Anniversary of the ISCIS (International Symposium on Computer and Information Sciences) series of conferences, started by Professor Erol Gelenbe at Bilkent University, Turkey, in 1986, will be held at Imperial College London on September 22-24, 2015. The preceding two ISCIS conferences were held in Krakow, Poland in 2014, and in Paris, France, in 2013.   The Proceedings of ISCIS 2015 published by Springer brings together rigorously reviewed contributions from leading international experts. It explores new areas of research and technological development in computer science, computer engineering, and information technology, and presents new applications in fast changing fields such as information science, computer science and bioinformatics.   The topics covered include (but are not limited to) advances in networking technologies, software defined networks, distributed systems and the cloud, security in the Internet of Things, sensor systems, and machine learning and large data sets.

  16. 6th International Conference on Design Computing and Cognition

    CERN Document Server

    Hanna, Sean

    2015-01-01

    This book details the state-of-the-art of research and development in design computing and design cognition. It features more than 35 papers that were presented at the Sixth International Conference on Design Computing and Cognition, DCC’14, held at University College, London, UK. Inside, readers will find the work of expert researchers and practitioners that explores both advances in theory and application as well as demonstrates the depth and breadth of design computing and design cognition. This interdisciplinary coverage, which includes material from international research groups, examines design synthesis, design cognition, design creativity, design processes, design theory, design grammars, design support, and design ideation. Overall, the papers provide a bridge between design computing and design cognition. The confluence of these two fields continues to build the foundation for further advances and leads to an increased understanding of design as an activity whose influence continues to spread. ...

  17. Performance Measurements in a High Throughput Computing Environment

    CERN Document Server

    AUTHOR|(CDS)2145966; Gribaudo, Marco

    The IT infrastructures of companies and research centres are implementing new technologies to satisfy the increasing need of computing resources for big data analysis. In this context, resource profiling plays a crucial role in identifying areas where the improvement of the utilisation efficiency is needed. In order to deal with the profiling and optimisation of computing resources, two complementary approaches can be adopted: the measurement-based approach and the model-based approach. The measurement-based approach gathers and analyses performance metrics executing benchmark applications on computing resources. Instead, the model-based approach implies the design and implementation of a model as an abstraction of the real system, selecting only those aspects relevant to the study. This Thesis originates from a project carried out by the author within the CERN IT department. CERN is an international scientific laboratory that conducts fundamental researches in the domain of elementary particle physics. The p...

  18. The 6th International Conference on Computer Science and Computational Mathematics (ICCSCM 2017)

    Science.gov (United States)

    2017-09-01

    Retrievals, Data Mining, Web Image Mining, & Applications, Defining Spectrum Rights and Open Spectrum Solutions, E-Commerce, Ubiquitous, RFID, Applications, Fingerprint/Hand/Biometrics Recognitions and Technologies, Foundations of High-performance Computing, IC-card Security, OTP, and Key Management Issues, IDS/Firewall, Anti-Spam mail, Anti-virus issues, Mobile Computing for E-Commerce, Network Security Applications, Neural Networks and Biomedical Simulations, Quality of Services and Communication Protocols, Quantum Computing, Coding, and Error Controls, Satellite and Optical Communication Systems, Theory of Parallel Processing and Distributed Computing, Virtual Visions, 3-D Object Retrievals, & Virtual Simulations, Wireless Access Security, etc. The success of ICCSCM 2017 is reflected in the papers received from authors in several countries around the world, which allowed a highly multinational and multicultural exchange of ideas and experience. The accepted papers of ICCSCM 2017 are published in this book. Please check http://www.iccscm.com for further news. A conference such as ICCSCM 2017 can only become successful through a team effort, so herewith we want to thank the International Technical Committee and the reviewers for their efforts in the review process as well as their valuable advice. We are thankful to all those who contributed to the success of ICCSCM 2017. The Secretary

  19. Intelligent Distributed Computing VI : Proceedings of the 6th International Symposium on Intelligent Distributed Computing

    CERN Document Server

    Badica, Costin; Malgeri, Michele; Unland, Rainer

    2013-01-01

    This book represents the combined peer-reviewed proceedings of the Sixth International Symposium on Intelligent Distributed Computing – IDC 2012, of the International Workshop on Agents for Cloud – A4C 2012 and of the Fourth International Workshop on Multi-Agent Systems Technology and Semantics – MASTS 2012. All the events were held in Calabria, Italy during September 24-26, 2012. The 37 contributions published in this book address many topics related to theory and applications of intelligent distributed computing and multi-agent systems, including: adaptive and autonomous distributed systems, agent programming, ambient assisted living systems, business process modeling and verification, cloud computing, coalition formation, decision support systems, distributed optimization and constraint satisfaction, gesture recognition, intelligent energy management in WSNs, intelligent logistics, machine learning, mobile agents, parallel and distributed computational intelligence, parallel evolutionary computing, trus...

  20. 1st International Conference on Internet Computing and Information Communications

    CERN Document Server

    Awasthi, Lalit; Masillamani, M; Sridhar, S

    2014-01-01

    The book presents high quality research papers presented by experts at the International Conference on Internet Computing and Information Communications 2012, organized by the ICICIC Global organizing committee (on behalf of The CARD Atlanta, Georgia, CREATE Conferences Inc). The objective of this book is to present the latest work done in the field of Internet computing by researchers and industrial professionals across the globe, and a step to reduce the research divide between developed and underdeveloped countries.

  1. Instrumentation, computer software and experimental techniques used in low-frequency internal friction studies at WNRE

    International Nuclear Information System (INIS)

    Sprugmann, K.W.; Ritchie, I.G.

    1980-04-01

    A detailed and comprehensive account of the equipment, computer programs and experimental methods developed at the Whiteshell Nuclear Research Establishment for the study of low-frequency internal friction is presented. Part I describes the mechanical apparatus, electronic instrumentation and computer software, while Part II describes in detail the laboratory techniques and various types of experiments performed, together with data reduction and analysis. Experimental procedures for the study of internal friction as a function of temperature, strain amplitude or time are described. Computer control of these experiments using the free-decay technique is outlined. In addition, a pendulum constant-amplitude drive system is described.

  2. Computer fan performance enhancement via acoustic perturbations

    Energy Technology Data Exchange (ETDEWEB)

    Greenblatt, David, E-mail: davidg@technion.ac.il [Faculty of Mechanical Engineering, Technion - Israel Institute of Technology, Haifa (Israel); Avraham, Tzahi; Golan, Maayan [Faculty of Mechanical Engineering, Technion - Israel Institute of Technology, Haifa (Israel)

    2012-04-15

    Highlights: ► Computer fan effectiveness was increased by introducing acoustic perturbations. ► Acoustic perturbations controlled blade boundary layer separation. ► Optimum frequencies corresponded with airfoil studies. ► Exploitation of flow instabilities was responsible for performance improvements. ► Peak pressure and peak flowrate were increased by 40% and 15% respectively. - Abstract: A novel technique for increasing computer fan effectiveness, based on introducing acoustic perturbations onto the fan blades to control boundary layer separation, was assessed. Experiments were conducted in a specially designed facility that simultaneously allowed characterization of fan performance and introduction of the perturbations. A parametric study was conducted to determine the optimum control parameters, namely those that deliver the largest increase in fan pressure for a given flowrate. The optimum reduced frequencies corresponded with those identified on stationary airfoils and it was thus concluded that the exploitation of Kelvin-Helmholtz instabilities, commonly observed on airfoils, was responsible for the fan blade performance improvements. The optimum control inputs, such as acoustic frequency and sound pressure level, showed some variation with different fan flowrates. With the near-optimum control conditions identified, the full operational envelope of the fan, when subjected to acoustic perturbations, was assessed. The peak pressure and peak flowrate were increased by up to 40% and 15% respectively. The peak fan efficiency increased with acoustic perturbations but the overall system efficiency was reduced when the speaker input power was accounted for.

  3. Computer fan performance enhancement via acoustic perturbations

    International Nuclear Information System (INIS)

    Greenblatt, David; Avraham, Tzahi; Golan, Maayan

    2012-01-01

    Highlights: ► Computer fan effectiveness was increased by introducing acoustic perturbations. ► Acoustic perturbations controlled blade boundary layer separation. ► Optimum frequencies corresponded with airfoil studies. ► Exploitation of flow instabilities was responsible for performance improvements. ► Peak pressure and peak flowrate were increased by 40% and 15% respectively. - Abstract: A novel technique for increasing computer fan effectiveness, based on introducing acoustic perturbations onto the fan blades to control boundary layer separation, was assessed. Experiments were conducted in a specially designed facility that simultaneously allowed characterization of fan performance and introduction of the perturbations. A parametric study was conducted to determine the optimum control parameters, namely those that deliver the largest increase in fan pressure for a given flowrate. The optimum reduced frequencies corresponded with those identified on stationary airfoils and it was thus concluded that the exploitation of Kelvin–Helmholtz instabilities, commonly observed on airfoils, was responsible for the fan blade performance improvements. The optimum control inputs, such as acoustic frequency and sound pressure level, showed some variation with different fan flowrates. With the near-optimum control conditions identified, the full operational envelope of the fan, when subjected to acoustic perturbations, was assessed. The peak pressure and peak flowrate were increased by up to 40% and 15% respectively. The peak fan efficiency increased with acoustic perturbations but the overall system efficiency was reduced when the speaker input power was accounted for.

  4. High performance computations using dynamical nucleation theory

    International Nuclear Information System (INIS)

    Windus, T L; Crosby, L D; Kathmann, S M

    2008-01-01

    Chemists continue to explore the use of very large computations to perform simulations that describe the molecular level physics of critical challenges in science. In this paper, we describe the Dynamical Nucleation Theory Monte Carlo (DNTMC) model - a model for determining molecular scale nucleation rate constants - and its parallel capabilities. The potential for bottlenecks and the challenges to running on future petascale or larger resources are delineated. A 'master-slave' solution is proposed to scale to the petascale and will be developed in the NWChem software. In addition, mathematical and data analysis challenges are described
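
    A hedged sketch of a master-slave Monte Carlo layout of the kind proposed above: a master hands independent sampling tasks to slaves and aggregates their partial results. The "physics" below is a stand-in (estimating pi), not dynamical nucleation theory:

```python
from multiprocessing import Pool
import random

def slave(n_samples: int) -> int:
    """Count random points that fall inside the unit quarter circle."""
    rng = random.Random()
    return sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0
               for _ in range(n_samples))

if __name__ == "__main__":
    tasks = [100_000] * 8              # independent work units for the slaves
    with Pool(processes=4) as pool:    # the master process drives the pool
        hits = sum(pool.map(slave, tasks))
    print(4.0 * hits / sum(tasks))     # ~3.14
```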

  5. The 9th international conference on computing and information technology

    CERN Document Server

    Unger, Herwig; Boonkrong, Sirapat; IC2IT2013

    2013-01-01

    This volume contains the papers of the 9th International Conference on Computing and Information Technology (IC2IT 2013) held at King Mongkut's University of Technology North Bangkok (KMUTNB), Bangkok, Thailand, on May 9th-10th, 2013. Traditionally, the conference is organized in conjunction with the National Conference on Computing and Information Technology, one of the leading Thai national events in the area of Computer Science and Engineering. The conference as well as this volume is structured into three main tracks on Data Networks/Communication, Data Mining/Machine Learning, and Human Interfaces/Image Processing.

  6. 1st International Conference on Signal, Networks, Computing, and Systems

    CERN Document Server

    Mohapatra, Durga; Nagar, Atulya; Sahoo, Manmath

    2016-01-01

    The book is a collection of high-quality peer-reviewed research papers presented in the first International Conference on Signal, Networks, Computing, and Systems (ICSNCS 2016) held at Jawaharlal Nehru University, New Delhi, India during February 25–27, 2016. The book is organized into two volumes and primarily focuses on theory and applications in the broad areas of communication technology, computer science and information security. The book aims to bring together the latest scientific research works of academic scientists, professors, research scholars and students in the areas of signal, networks, computing and systems detailing the practical challenges encountered and the solutions adopted.

  7. International Conference on Frontiers of Intelligent Computing : Theory and Applications

    CERN Document Server

    Udgata, Siba; Biswal, Bhabendra

    2013-01-01

    The volume contains the papers presented at FICTA 2012: International Conference on Frontiers in Intelligent Computing: Theory and Applications, held on December 22-23, 2012 at Bhubaneswar Engineering College, Bhubaneswar, Odisha, India. It contains 86 papers contributed by authors from around the globe. These research papers mainly focus on applications of intelligent techniques, including evolutionary computation techniques such as genetic algorithms, particle swarm optimization and teaching-learning based optimization, for various engineering applications such as data mining, image processing, cloud computing and networking.

  8. International Conference on Frontiers of Intelligent Computing : Theory and Applications

    CERN Document Server

    Udgata, Siba; Biswal, Bhabendra

    2014-01-01

    This volume contains the papers presented at the Second International Conference on Frontiers in Intelligent Computing: Theory and Applications (FICTA-2013) held during 14-16 November 2013, organized by Bhubaneswar Engineering College (BEC), Bhubaneswar, Odisha, India. It contains 63 papers focusing on applications of intelligent techniques, including evolutionary computation techniques such as genetic algorithms, particle swarm optimization and teaching-learning based optimization, for various engineering applications such as data mining, fuzzy systems, machine intelligence and ANN, web technologies, multimedia applications, and intelligent computing and networking.

  9. Analysis of parallel computing performance of the code MCNP

    International Nuclear Information System (INIS)

    Wang Lei; Wang Kan; Yu Ganglin

    2006-01-01

    Parallel computing can reduce the running time of the code MCNP effectively. With the MPI message-passing software, MCNP5 can achieve parallel computing on a PC cluster with the Windows operating system. The parallel computing performance of MCNP is influenced by factors such as the type, the complexity level and the parameter configuration of the computing problem. This paper analyzes the parallel computing performance of MCNP with regard to these factors and gives measures to improve the MCNP parallel computing performance.
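
    A standard first-order way to reason about how such factors limit parallel speedup, offered as an aside rather than as the paper's method, is Amdahl's law, where f is the fraction of the run that remains serial:

```python
# Amdahl's law: speedup on n processors when a fraction f_serial of the
# work cannot be parallelized.
def amdahl_speedup(f_serial: float, n_procs: int) -> float:
    return 1.0 / (f_serial + (1.0 - f_serial) / n_procs)

# Even a small serial fraction caps the achievable speedup:
print(amdahl_speedup(0.05, 16))  # ~9.1x on 16 processors
print(amdahl_speedup(0.05, 64))  # ~15.4x on 64 processors
```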

  10. Performance comparison between Java and JNI for optimal implementation of computational micro-kernels

    OpenAIRE

    Halli , Nassim; Charles , Henri-Pierre; Méhaut , Jean-François

    2015-01-01

    General purpose CPUs used in high performance computing (HPC) support a vector instruction set and an out-of-order engine dedicated to increasing instruction-level parallelism. Hence, related optimizations are currently critical to improve the performance of applications requiring numerical computation. Moreover, the use of a Java run-time environment such as the HotSpot Java Virtual Machine (JVM) in high performance computing is a promising alternative. It benefits ...
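
    A rough illustration of the kind of micro-kernel at stake and of why vector units matter, not a reproduction of the paper's Java/JNI benchmark: the same dot product as a scalar interpreted loop and as a single vectorised library call.

```python
import time
import numpy as np

def dot_loop(a, b):
    """Scalar dot product: one multiply-add per interpreted iteration."""
    s = 0.0
    for x, y in zip(a, b):
        s += x * y
    return s

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

t0 = time.perf_counter()
dot_loop(a, b)                 # interpreted scalar loop
t1 = time.perf_counter()
np.dot(a, b)                   # vectorised native kernel
t2 = time.perf_counter()
print(f"loop: {t1 - t0:.3f} s, vectorised: {t2 - t1:.4f} s")
```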

  11. 3rd International Conference on Advanced Computing, Networking and Informatics

    CERN Document Server

    Mohapatra, Durga; Chaki, Nabendu

    2016-01-01

    Advanced Computing, Networking and Informatics are three distinct and mutually exclusive disciplines of knowledge with no apparent sharing/overlap among them. However, their convergence is observed in many real world applications, including cyber-security, internet banking, healthcare, sensor networks, cognitive radio, and pervasive computing, among many others. This two-volume proceedings explores the combined use of Advanced Computing and Informatics in the next generation wireless networks and security, signal and image processing, ontology and human-computer interfaces (HCI). The two volumes together include 132 scholarly articles, which have been accepted for presentation from over 550 submissions in the Third International Conference on Advanced Computing, Networking and Informatics, 2015, held in Bhubaneswar, India during June 23–25, 2015.

  12. 4th International Conference on Computer Engineering and Networks

    CERN Document Server

    2015-01-01

    This book aims to examine innovation in the fields of computer engineering and networking. It covers important emerging topics in computer engineering and networking and will help researchers and engineers improve their knowledge of the state of the art in related areas. The book presents papers from the 4th International Conference on Computer Engineering and Networks (CENet2014), held July 19-20, 2014 in Shanghai, China. It covers emerging topics in computer engineering and networking, discusses how to improve productivity by using the latest advanced technologies, and examines innovation in these fields.

  13. 4th International Conference on Quantitative Logic and Soft Computing

    CERN Document Server

    Chen, Shui-Li; Wang, San-Min; Li, Yong-Ming

    2017-01-01

    This book is the proceedings of the Fourth International Conference on Quantitative Logic and Soft Computing (QLSC2016) held October 14-17, 2016 at Zhejiang Sci-Tech University, Hangzhou, China. It includes 61 papers, of which 5 are plenary talks (3 abstracts and 2 full-length talks). QLSC2016 was the fourth in a series of conferences on Quantitative Logic and Soft Computing. This conference was a major symposium for scientists, engineers and practitioners to present their updated results, ideas, developments and applications in all areas of quantitative logic and soft computing. The book aims to strengthen relations between industry research laboratories and universities worldwide in fields such as: (1) Quantitative Logic and Uncertainty Logic; (2) Automata and Quantification of Software; (3) Fuzzy Connectives and Fuzzy Reasoning; (4) Fuzzy Logical Algebras; (5) Artificial Intelligence and Soft Computing; (6) Fuzzy Sets Theory and Applications.

  14. 2nd International Conference on Computer and Communication Technologies

    CERN Document Server

    Raju, K; Mandal, Jyotsna; Bhateja, Vikrant

    2016-01-01

    The book is about all aspects of computing, communication, general sciences and educational research covered at the Second International Conference on Computer & Communication Technologies, held during 24-26 July 2015 at Hyderabad. It was hosted by CMR Technical Campus in association with Division – V (Education & Research) CSI, India. After a rigorous review, only quality papers were selected and included in this book. The entire book is divided into three volumes, which cover a variety of topics including medical imaging, networks, data mining, intelligent computing, software design, image processing, mobile computing, digital signals and speech processing, video surveillance and processing, web mining, wireless sensor networks, circuit analysis, fuzzy systems, antenna and communication systems, biomedical signal processing and applications, cloud computing, embedded systems applications, and cyber security and digital forensics. The readers of these volumes will be highly benefited from the te...

  15. 1st International Conference on Computational and Experimental Biomedical Sciences

    CERN Document Server

    Jorge, RM

    2015-01-01

    This book contains the full papers presented at ICCEBS 2013 – the 1st International Conference on Computational and Experimental Biomedical Sciences, which was organized in the Azores in October 2013. The included papers present and discuss new trends in those fields, using several methods and techniques, including active shape models, constitutive models, isogeometric elements, genetic algorithms, level sets, material models, neural networks, optimization, and the finite element method, in order to address more efficiently different and timely applications involving biofluids, computer simulation, computational biomechanics, image based diagnosis, image processing and analysis, image segmentation, image registration, scaffolds, simulation, and surgical planning. The main audience for this book consists of researchers, Ph.D students, and graduate students with multidisciplinary interests related to the areas of artificial intelligence, bioengineering, biology, biomechanics, computational fluid dynamics, comput...

  16. Evaluation of high-performance computing software

    Energy Technology Data Exchange (ETDEWEB)

    Browne, S.; Dongarra, J. [Univ. of Tennessee, Knoxville, TN (United States); Rowan, T. [Oak Ridge National Lab., TN (United States)

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations difficult to find elsewhere.

  17. Internal and external Field of View: computer games and cybersickness

    NARCIS (Netherlands)

    Vries, S.C. de; Bos, J.E.; Emmerik, M.L. van; Groen, E.L.

    2007-01-01

    In an experiment with a computer game environment, we studied the effect of Field-of-View (FOV) on cybersickness. In particular, we examined the effect of differences between the internal FOV (IFOV, the FOV which the graphics generator is using to render its images) and the external FOV (EFOV, the

  18. Advances in Computer Entertainment. 10th International Conference, ACE 2013

    NARCIS (Netherlands)

    Reidsma, Dennis; Katayose, H.; Nijholt, Antinus; Unknown, [Unknown

    2013-01-01

    These are the proceedings of the 10th International Conference on Advances in Computer Entertainment (ACE 2013), hosted by the Human Media Interaction research group of the Centre for Telematics and Information Technology at the University of Twente, The Netherlands. The ACE series of conferences,

  19. 10th International Symposium on Computer Science in Sports

    CERN Document Server

    Soltoggio, Andrea; Dawson, Christian; Meng, Qinggang; Pain, Matthew

    2016-01-01

    This book presents the main scientific results of the 10th International Symposium of Computer Science in Sport (IACSS/ISCSS 2015), sponsored by the International Association of Computer Science in Sport in collaboration with the International Society of Sport Psychology (ISSP), which took place between September 9-11, 2015 at Loughborough, UK. The proceedings aim to build a link between computer science and sport, and report on results from applying computer science techniques to address a wide range of problems in sport and exercise sciences. They provide a good platform and opportunity for researchers in both computer science and sport to understand and discuss ideas and promote cross-disciplinary research. The strictly reviewed and carefully revised papers cover the following topics: Modelling and Analysis, Artificial Intelligence in Sport, Virtual Reality in Sport, Neural Cognitive Training, IT Systems for Sport, Sensing Technologies and Image Processing.

  20. Papers from the Fifth International Brain-Computer Interface Meeting

    Science.gov (United States)

    Huggins, Jane E.; Wolpaw, Jonathan R.

    2014-06-01

    practices in BCI performance measurement [8]. The following eight papers focus on specific BCI applications and on methods for increasing their usefulness for people with severe disabilities. The next two examine how brain activity and BCI use affect each other. The final five studies investigate brain signals and evaluate new signal processing algorithms in order to improve BCI performance and broaden its possible applications in some of the newest areas of BCI research, including the direct interpretation of speech from electrocorticographic (ECoG) activity [9]. Together, these papers span many aspects of BCI research, including different recording modalities (i.e. electroencephalogram (EEG), ECoG, functional magnetic resonance imaging (fMRI)) and signal types (e.g. P300 event-related potentials (ERPs), sensorimotor rhythms, steady-state visual evoked potentials (SSVEPs)). Furthermore, additional clinically related studies that were presented at the meeting but were considered to be outside the scope of the Journal of Neural Engineering will appear in a special issue of the Archives of Physical Medicine and Rehabilitation. With a theme of 'Defining the Future', the Fifth International BCI Meeting tackled the issues of a rapidly growing multidisciplinary research and development enterprise that is now entering clinical use. Important new areas that received attention included the need for active involvement of the people with severe disabilities who are the primary initial users of BCI technology, and the growing importance and difficulty of the multiple ethical questions raised by BCIs and their potential applications. The meeting also marked the launching of the new journal Brain-Computer Interfaces, dedicated to BCI research and development, and initiated the establishment of the Brain-Computer Interface Society, which will organize and manage the Sixth International BCI Meeting to be held in 2016. References [1] http://bcimeeting.org/2013/sponsors.html [2] http

  1. 9th International Symposium on Intelligent Distributed Computing

    CERN Document Server

    Camacho, David; Analide, Cesar; Seghrouchni, Amal; Badica, Costin

    2016-01-01

    This book represents the combined peer-reviewed proceedings of the ninth International Symposium on Intelligent Distributed Computing – IDC’2015, of the Workshop on Cyber Security and Resilience of Large-Scale Systems – WSRL’2015, and of the International Workshop on Future Internet and Smart Networks – FI&SN’2015. All the events were held in Guimarães, Portugal during October 7th-9th, 2015. The 46 contributions published in this book address many topics related to theory and applications of intelligent distributed computing, including: Intelligent Distributed Agent-Based Systems, Ambient Intelligence and Social Networks, Computational Sustainability, Intelligent Distributed Knowledge Representation and Processing, Smart Networks, Networked Intelligence and Intelligent Distributed Applications, amongst others.

  2. The path toward HEP High Performance Computing

    CERN Document Server

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and must try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on th...

  3. Computational fluid dynamics for turbomachinery internal air systems.

    Science.gov (United States)

    Chew, John W; Hills, Nicholas J

    2007-10-15

    Considerable progress in development and application of computational fluid dynamics (CFD) for aeroengine internal flow systems has been made in recent years. CFD is regularly used in industry for assessment of air systems, and the performance of CFD for basic axisymmetric rotor/rotor and stator/rotor disc cavities with radial throughflow is largely understood and documented. Incorporation of three-dimensional geometrical features and calculation of unsteady flows are becoming commonplace. Automation of CFD, coupling with thermal models of the solid components, and extension of CFD models to include both air system and main gas path flows are current areas of development. CFD is also being used as a research tool to investigate a number of flow phenomena that are not yet fully understood. These include buoyancy-affected flows in rotating cavities, rim seal flows and mixed air/oil flows. Large eddy simulation has shown considerable promise for the buoyancy-driven flows and its use for air system flows is expected to expand in the future.

  4. A High Performance COTS Based Computer Architecture

    Science.gov (United States)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so large that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the behavior of COTS components. Within the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the benefits and constraints of using COTS components for space applications; we then briefly describe existing fault mitigation architectures and present our solution for fault mitigation, based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  5. Performance of a radioactive waste internment site

    International Nuclear Information System (INIS)

    Bernhardt, D.E.; Owen, D.H.; Mishub, R.J.; Prewett, S.V.; Cole, L.W.

    1990-01-01

    Aerojet disposed of 30,000 cubic meters of uranium and thorium wastes in an engineered 12,000 square meter (3 acre) internment site in Tennessee. The operation, performed under a State of Tennessee source material license, is based on termination of the license with the material remaining in place, and the option of future commercial use of the site. The closure plan included a long-term monitoring program. Yearly monitoring of the site has verified performance within the design criteria. Full-time construction monitoring by a licensed engineer and extensive testing of materials ensured that the site was constructed to or better than the specifications. Surface subsidence of the site has averaged about 2.2 cm, with a range of 0 to 4.3 cm at 11 settlement markers after 4 years. The anticipated settlement, based on design parameters, was between 15 and 25 cm. The present anticipated settlement is 8 to 15 cm, based on the as-built parameters and the long-term monitoring results. Slope stability analysis indicates the stability has improved with time due to consolidation of materials and reduced groundwater activity as a result of site construction. There has been no need for corrective-action maintenance due to erosion, sloughing of slopes, or subsidence. Groundwater monitoring indicates the materials have been isolated, and there have been no indications of springs or leakage from the site. 9 refs., 2 figs., 3 tabs

  6. Performance evaluation of a computed radiography system

    Energy Technology Data Exchange (ETDEWEB)

    Roussilhe, J.; Fallet, E. [Carestream Health France, 71 - Chalon/Saone (France); Mango, St.A. [Carestream Health, Inc. Rochester, New York (United States)

    2007-07-01

    Computed radiography (CR) standards have been formalized and published in Europe and in the US. The CR system classification is defined in those standards by a minimum normalized signal-to-noise ratio (SNRN) and a maximum basic spatial resolution (SRb). Both the signal-to-noise ratio (SNR) and the contrast sensitivity of a CR system depend on the dose (exposure time and conditions) at the detector. Because of their wide dynamic range, the same storage phosphor imaging plate can qualify for all six CR system classes. The exposure characteristics from 30 to 450 kV, the contrast sensitivity, and the spatial resolution of the KODAK INDUSTREX CR Digital System have been thoroughly evaluated. This paper will present some of the factors that determine the system's spatial resolution performance. (authors)
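
    To illustrate how the two classification quantities interact: the normalized SNR is commonly obtained by scaling the measured SNR with the ratio of a fixed reference resolution (88.6 µm in the CR standards, to the best of our knowledge) to the detector's basic spatial resolution. The sketch below is a minimal illustration under that assumed form, not a reproduction of the standard's full measurement procedure; the function name and example values are ours.

```python
# Minimal sketch: SNR normalization as commonly described for CR systems
# (assumed form: SNR_N = SNR_measured * 88.6 um / SR_b; verify against the
# applicable standard before relying on it).

def normalized_snr(snr_measured: float, sr_b_um: float) -> float:
    """Scale a measured SNR by the reference/basic spatial resolution ratio."""
    REFERENCE_RESOLUTION_UM = 88.6  # reference value cited in the standards
    return snr_measured * REFERENCE_RESOLUTION_UM / sr_b_um

# Example (illustrative numbers): a plate measured at SNR = 120 with
# SR_b = 130 um yields a lower normalized SNR; coarser resolution is penalized.
print(round(normalized_snr(120.0, 130.0), 1))  # -> 81.8
```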

  7. 7th International Conference on Design Computing and Cognition

    CERN Document Server

    2017-01-01

    This book gathers the peer-reviewed and revised versions of papers from the Seventh International Conference on Design Computing and Cognition (DCC'16), held at Northwestern University, Evanston (Chicago), USA, from 27–29 June 2016. The material presented here reflects cutting-edge design research with a focus on artificial intelligence, cognitive science and computational theories. The papers are grouped under the following nine headings, describing advances in theory and applications alike and demonstrating the depth and breadth of design computing and design cognition: Design Creativity; Design Cognition - Design Approaches; Design Support; Design Grammars; Design Cognition - Design Behaviors; Design Processes; Design Synthesis; Design Activity and Design Knowledge. The book will be of particular interest to researchers, developers and users of advanced computation in design across all disciplines, and to all readers who need to gain a better understanding of designing.

  8. 4th International Conference on Applied Computing and Information Technology

    CERN Document Server

    2017-01-01

    This edited book presents scientific results of the 4th International Conference on Applied Computing and Information Technology (ACIT 2016), which was held on December 12-14, 2016 in Las Vegas, USA. The aim of this conference was to bring together researchers and scientists, businessmen and entrepreneurs, teachers, engineers, computer users, and students to discuss the numerous fields of computer science and to share their experiences and exchange new ideas and information in a meaningful way. A further aim was to present research results on all aspects (theory, applications and tools) of computer and information science, and to discuss the practical challenges encountered along the way and the solutions adopted to solve them. The conference organizers selected the best papers from those accepted for presentation at the conference. The papers were chosen based on review scores submitted by members of the Program Committee, and underwent further rigorous rounds of review. Th...

  9. 2nd International Conference on Computational Intelligence in Data Mining

    CERN Document Server

    Mohapatra, Durga

    2016-01-01

    The book is a collection of high-quality peer-reviewed research papers presented in the Second International Conference on Computational Intelligence in Data Mining (ICCIDM 2015) held at Bhubaneswar, Odisha, India during 5 – 6 December 2015. The two-volume Proceedings address the difficulties and challenges for the seamless integration of two core disciplines of computer science, i.e., computational intelligence and data mining. The book addresses different methods and techniques of integration for enhancing the overall goal of data mining. The book helps to disseminate the knowledge about some innovative, active research directions in the field of data mining, machine and computational intelligence, along with some current issues and applications of related topics.

  10. First International Conference on Intelligent Computing and Applications

    CERN Document Server

    Kar, Rajib; Das, Swagatam; Panigrahi, Bijaya

    2015-01-01

    The idea of the 1st International Conference on Intelligent Computing and Applications (ICICA 2014) was to bring research engineers, scientists, industrialists, scholars and students together from around the globe to present their ongoing research activities, and thereby to encourage research interactions between universities and industries. The conference provided opportunities for the delegates to exchange new ideas, applications and experiences, to establish research relations and to find global partners for future collaboration. The proceedings cover the latest progress in cutting-edge research across areas including Image and Language Processing, Computer Vision and Pattern Recognition, Machine Learning, Data Mining and Computational Life Sciences, Management of Data including Big Data and Analytics, Distributed and Mobile Systems including Grid and Cloud infrastructure, Information Security and Privacy, VLSI, Electronic Circuits, Power Systems, Antenna, Computational fluid dynamics & Hea...

  11. International Conference on Emerging Technologies for Information Systems, Computing, and Management

    CERN Document Server

    Ma, Tinghuai; Emerging Technologies for Information Systems, Computing, and Management

    2013-01-01

    This book examines innovation in the fields of information technology, software engineering, industrial engineering and management engineering. Topics covered in this publication include: Information System Security, Privacy, Quality Assurance, High-Performance Computing, and Information System Management and Integration. The book presents papers from the Second International Conference on Emerging Technologies for Information Systems, Computing, and Management (ICM2012), which was held on December 1-2, 2012 in Hangzhou, China.

  12. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state of the art in modeling and simulation on supercomputers. Leading German research groups present the results they achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation, ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book allows the reader to compare the performance levels and usability of a variety of supercomputer architectures. It is therefore an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  13. International MultiConference of Engineers and Computer Scientists 2016

    CERN Document Server

    Kim, Haeng; Huang, Xu; Castillo, Oscar

    2017-01-01

    This volume contains selected revised and extended research articles written by prominent researchers who participated in the International MultiConference of Engineers and Computer Scientists 2016, held in Hong Kong, 16-18 March 2016. Topics covered include engineering physics, communications systems, control theory, automation, engineering mathematics, scientific computing, electrical engineering, and industrial applications. The book showcases the tremendous advances in engineering technologies and applications, and also serves as an excellent reference work for researchers and graduate students working on engineering technologies, physical sciences and their applications.

  14. Distributed Computing and Artificial Intelligence, 12th International Conference

    CERN Document Server

    Malluhi, Qutaibah; Gonzalez, Sara; Bocewicz, Grzegorz; Bucciarelli, Edgardo; Giulioni, Gianfranco; Iqba, Farkhund

    2015-01-01

    The 12th International Symposium on Distributed Computing and Artificial Intelligence 2015 (DCAI 2015) is a forum to present applications of innovative techniques for studying and solving complex problems. The exchange of ideas between scientists and technicians from both the academic and industrial sector is essential to facilitate the development of systems that can meet the ever-increasing demands of today’s society. The present edition brings together past experience, current work and promising future trends associated with distributed computing, artificial intelligence and their application in order to provide efficient solutions to real problems. This symposium is organized by the Osaka Institute of Technology, Qatar University and the University of Salamanca.

  15. 9th International Conference on Advanced Computing & Communication Technologies

    CERN Document Server

    Mandal, Jyotsna; Auluck, Nitin; Nagarajaram, H

    2016-01-01

    This book highlights a collection of high-quality peer-reviewed research papers presented at the Ninth International Conference on Advanced Computing & Communication Technologies (ICACCT-2015) held at Asia Pacific Institute of Information Technology, Panipat, India during 27–29 November 2015. The book discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. Researchers from academia and industry present their original work and exchange ideas, information, techniques and applications in the field of Advanced Computing and Communication Technology.

  16. 4th International Conference on Soft Computing for Problem Solving

    CERN Document Server

    Deep, Kusum; Pant, Millie; Bansal, Jagdish; Nagar, Atulya

    2015-01-01

    This two-volume book is based on the research papers presented at the 4th International Conference on Soft Computing for Problem Solving (SocProS 2014) and covers a variety of topics, including mathematical modelling, image processing, optimization methods, swarm intelligence, evolutionary algorithms, fuzzy logic, neural networks, forecasting, medical and healthcare, data mining, etc. The main emphasis is on soft computing and its applications in diverse areas. The prime objective of this book is to familiarize the reader with the latest scientific developments in various fields of Science, Engineering and Technology, and it is directed at researchers and scientists engaged in various real-world applications of 'Soft Computing'.

  17. International Conference on Computer Science and Information Technology

    CERN Document Server

    Li, Xiaolong

    2014-01-01

    The main objective of CSAIT 2013 was to provide a forum for researchers, educators, engineers and government officials involved in the general areas of Computational Sciences and Information Technology to disseminate their latest research results and exchange views on the future research directions of these fields. A medium like this gives academics and industry professionals an opportunity to exchange ideas, integrate the practice of computer science with the application of academic ideas, and improve academic depth. The in-depth discussions on the subject provide an international communication platform for educational technology and scientific research for the world's universities, engineering-field experts, professionals and business executives.

  18. High-Performance Computing Paradigm and Infrastructure

    CERN Document Server

    Yang, Laurence T

    2006-01-01

    With hyperthreading in Intel processors, HyperTransport links in next-generation AMD processors, multi-core silicon in today's high-end microprocessors from IBM, and emerging grid computing, parallel and distributed computers have moved into the mainstream.

  19. Proceedings of the 2007 International Computer Music Conference

    DEFF Research Database (Denmark)

    It is a pleasure to welcome everyone to the 2007 International Computer Music Conference, hosted in Copenhagen from the 27th to the 31st of August. ICMC2007 is organized by Re:New - Digital arts forum, in collaboration with the International Computer Music Association and Medialogy at Aalborg... We welcome John Chowning, pioneer of digital music, and Barbara Tillmann, researcher in music cognition, as our keynote speakers. This year's theme, 'Immersed Music', will be covered by numerous sessions related to perception and cognition, and by Barbara Tillmann's keynote. In addition, we have planned two... efficient in providing feedback and comments on the numerous papers submitted. The response to the call for participation was very positive: we received 290 paper submissions and 554 music submissions. We are extremely grateful for the interest and support from around the world. It is an honor to welcome...

  20. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  1. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  2. The First International Conference on Soft Computing and Data Mining

    CERN Document Server

    Ghazali, Rozaida; Deris, Mustafa

    2014-01-01

    This book constitutes the refereed proceedings of the First International Conference on Soft Computing and Data Mining, SCDM 2014, held at Universiti Tun Hussein Onn Malaysia, June 16th-18th, 2014. The 65 revised full papers presented in this book were carefully reviewed and selected from 145 submissions, and organized into two main topical sections: Data Mining and Soft Computing. The goal of this book is to provide both theoretical concepts and, especially, practical techniques in these exciting fields of soft computing and data mining, ready to be applied in real-world applications. The exchange of views on future research directions in this field and the resultant dissemination of the latest research findings make this work of immense value to all those with an interest in the topics covered.

  3. 5th International Conference on Computational Collective Intelligence

    CERN Document Server

    Trawinski, Bogdan; Nguyen, Ngoc

    2014-01-01

    The book consists of 19 extended and revised chapters based on original works presented during a poster session organized within the 5th International Conference on Computational Collective Intelligence that was held between 11 and 13 of September 2013 in Craiova, Romania. The book is divided into three parts. The first part is titled “Agents and Multi-Agent Systems” and consists of 8 chapters that concentrate on many problems related to agent and multi-agent systems, including: formal models, agent autonomy, emergent properties, agent programming, agent-based simulation and planning. The second part of the book is titled “Intelligent Computational Methods” and consists of 6 chapters. The authors present applications of various intelligent computational methods like neural networks, mathematical optimization and multistage decision processes in areas like cooperation, character recognition, wireless networks, transport, and metal structures. The third part of the book is titled “Language and Knowled...

  4. 10th International Conference on Computing and Information Technology

    CERN Document Server

    Unger, Herwig; Meesad, Phayung

    2014-01-01

    Computer and Information Technology (CIT) are now involved in governmental, industrial, and business domains more than ever. Thus, it is important for CIT personnel to continue academic research to improve technology and its adoption in modern applications. Up-to-date research and technologies must be distributed to researchers and the CIT community continuously to aid future development. The 10th International Conference on Computing and Information Technology (IC2IT 2014), organized by King Mongkut's University of Technology North Bangkok (KMUTNB) and partners, provides an exchange on the state of the art and future developments in the two key areas of this process: Computer Networking and Data Mining. Against the background of the foundation of ASEAN, it becomes clear that efficient languages, business principles and communication methods need to be adapted, unified and especially optimized to gain maximum benefit for the users and customers of future IT systems.

  5. 9th International conference on distributed computing and artificial intelligence

    CERN Document Server

    Santana, Juan; González, Sara; Molina, Jose; Bernardos, Ana; Rodríguez, Juan; DCAI 2012; International Symposium on Distributed Computing and Artificial Intelligence 2012

    2012-01-01

    The International Symposium on Distributed Computing and Artificial Intelligence 2012 (DCAI 2012) is a stimulating and productive forum where the scientific community can work towards future cooperation in Distributed Computing and Artificial Intelligence areas. This conference is a forum in which applications of innovative techniques for solving complex problems will be presented. Artificial intelligence is changing our society. Its application in distributed environments, such as the internet, electronic commerce, environment monitoring, mobile communications, wireless devices, distributed computing, to mention only a few, is continuously increasing, becoming an element of high added value with social and economic potential, in industry, quality of life, and research. These technologies are changing constantly as a result of the large research and technical effort being undertaken in both universities and businesses. The exchange of ideas between scientists and technicians from both the academic and indus...

  6. 10th International Conference on Scientific Computing in Electrical Engineering

    CERN Document Server

    Clemens, Markus; Günther, Michael; Maten, E

    2016-01-01

    This book is a collection of selected papers presented at the 10th International Conference on Scientific Computing in Electrical Engineering (SCEE), held in Wuppertal, Germany in 2014. The book is divided into five parts, reflecting the main directions of SCEE 2014: 1. Device Modeling, Electric Circuits and Simulation, 2. Computational Electromagnetics, 3. Coupled Problems, 4. Model Order Reduction, and 5. Uncertainty Quantification. Each part starts with a general introduction followed by the actual papers. The aim of the SCEE 2014 conference was to bring together scientists from academia and industry, mathematicians, electrical engineers, computer scientists, and physicists, with the goal of fostering intensive discussions on industrially relevant mathematical problems, with an emphasis on the modeling and numerical simulation of electronic circuits and devices, electromagnetic fields, and coupled problems. The methodological focus was on model order reduction and uncertainty quantification.

  7. Distributed computing and artificial intelligence : 10th International Conference

    CERN Document Server

    Neves, José; Rodriguez, Juan; Santana, Juan; Gonzalez, Sara

    2013-01-01

    The International Symposium on Distributed Computing and Artificial Intelligence 2013 (DCAI 2013) is a forum in which applications of innovative techniques for solving complex problems are presented. Artificial intelligence is changing our society. Its application in distributed environments, such as the internet, electronic commerce, environment monitoring, mobile communications, wireless devices, distributed computing, to mention only a few, is continuously increasing, becoming an element of high added value with social and economic potential, in industry, quality of life, and research. This conference is a stimulating and productive forum where the scientific community can work towards future cooperation in Distributed Computing and Artificial Intelligence areas. These technologies are changing constantly as a result of the large research and technical effort being undertaken in both universities and businesses. The exchange of ideas between scientists and technicians from both the academic and industry se...

  8. 9th International Conference on Genetic and Evolutionary Computing

    CERN Document Server

    Lin, Jerry; Pan, Jeng-Shyang; Tin, Pyke; Yokota, Mitsuhiro; Genetic and Evolutionary Computing

    2016-01-01

    This volume of Advances in Intelligent Systems and Computing contains accepted papers presented at ICGEC 2015, the 9th International Conference on Genetic and Evolutionary Computing. The conference this year was technically co-sponsored by the Ministry of Science and Technology, Myanmar, the University of Computer Studies, Yangon, the University of Miyazaki in Japan, Kaohsiung University of Applied Science in Taiwan, Fujian University of Technology in China and VSB-Technical University of Ostrava. ICGEC 2015 was held from 26-28 August 2015 in Yangon, Myanmar. Yangon, the most multiethnic and cosmopolitan city in Myanmar, is the main gateway to the country. Despite being the commercial capital of Myanmar, Yangon is a city engulfed by its rich history and culture, an integration of ancient traditions and spiritual heritage. The stunning SHWEDAGON Pagoda is the centerpiece of Yangon city, which itself is famous for the best British colonial-era architecture. Of particular interest in many shops of Bogyoke Aung San Market,...

  9. 13th International Conference on Computer and Information Science

    CERN Document Server

    Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing 2012

    2013-01-01

    The purpose of the 13th International Conference on Computer and Information Science (SNPD 2012), held on August 8-10, 2012 in Kyoto, Japan, was to bring together researchers and scientists, businessmen and entrepreneurs, teachers and students to discuss the numerous fields of computer science, and to share ideas and information in a meaningful way. The conference organizers selected the best 17 papers from those accepted for presentation at the conference for publication in this volume of Springer's Studies in Computational Intelligence. The papers were chosen based on review scores submitted by members of the program committee, and underwent further rounds of rigorous review.

  10. Computational Humor 2012 : extended abstracts of the (3rd international) Workshop on Computational Humor

    NARCIS (Netherlands)

    Nijholt, Antinus; Unknown, [Unknown

    2012-01-01

    Like its predecessors in 1996 (University of Twente, the Netherlands) and 2002 (ITC-irst, Trento, Italy), this Third International Workshop on Computational Humor (IWCH 2012) focusses on the possibility to find algorithms that allow understanding and generation of humor. There is the general aim of

  11. DOE research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  12. The 3rd International Workshop on Computational Electronics

    Science.gov (United States)

    Goodnick, Stephen M.

    1994-09-01

    The Third International Workshop on Computational Electronics (IWCE) was held at the Benson Hotel in downtown Portland, Oregon, on May 18, 19, and 20, 1994. The workshop was devoted to a broad range of topics in computational electronics related to the simulation of electronic transport in semiconductors and semiconductor devices, particularly those which use large computational resources. The workshop was supported by the National Science Foundation (NSF), the Office of Naval Research and the Army Research Office, with local support from the Oregon Joint Graduate Schools of Engineering and the Oregon Center for Advanced Technology Education. There were over 100 participants in the Portland workshop, of which more than one quarter represented research groups outside the United States, from Austria, Canada, France, Germany, Italy, Japan, Switzerland, and the United Kingdom. A total of 81 papers were presented at the workshop: 9 invited talks, 26 oral presentations and 46 poster presentations. The emphasis of the contributions reflected the interdisciplinary nature of computational electronics, with researchers from the chemistry, computer science, mathematics, engineering, and physics communities participating in the workshop.

  13. PERFORMANCE IN INTERNAL CONTROL AND RISK MANAGEMENT

    OpenAIRE

    JELER (POPA) IOANA; FOCŞAN ELEONORA IONELA; CORICI MARIAN CĂTĂLIN

    2017-01-01

    The purpose of this article is to highlight the importance of internal control and risk management. In practice, economic entities meet a variety of risks that originate in the internal or the external environment. Although specialists in risk management propose differing views on the concept of risk - threats or opportunities, events or actions, certain or uncertain - in this article we try to present these issues and identify techniques to ...

  14. International co-operation in use of computers

    Energy Technology Data Exchange (ETDEWEB)

    Sterling, T D [Medical Computing Centre, University of Cincinnati, College of Medicine, Cincinnati, OH (United States)

    1966-06-15

    National and international co-operation takes on concreteness in the exchange of programmes and research results. On a practical level, and for the exchange of actual techniques, the diversity between computers and computer languages may present almost overwhelming obstacles, which can be overcome, but not without considerable effort. If computing centres have similar constellations of hardware, then the exchange of programmes is of course routine. We were able to exchange programmes with the Cancer Institute Board at Melbourne, Australia. It turned out that our programmes, although developed for a 60Co teletherapy beam unit (Model Eldorado A), could be fitted to the dose distribution from the Melbourne 4-MeV linear accelerator with the change of a few constants. The work of fitting the equations themselves was done by the Melbourne group. Once the new constants were found, it was a simple matter of transcribing our programme onto a reel of magnetic tape and returning it to Melbourne. Treatment centres may have access to different computers. However, it is becoming increasingly true that different computers will be able to accept programmes written in many languages. Language compatibility makes it possible to take programmes and rewrite them, at very small cost, to fit another computer. Very often the only changes that have to be introduced are of 'input-output' instructions. If this is the case, then a copy of the flow diagram and a print-out of the source programme is usually all that is needed to make a programme operational on a different machine. But even here we have found it desirable to send a programmer along so that the resolution of detailed problems may be expedited. In this way we exchanged programmers with the Mallinckrodt Institute of Radiology at St. Louis, Missouri, to make our own external beam methods compatible with their computer and to make their interstitial and intracavitary programmes operational on ours.

  15. Performance of Air Pollution Models on Massively Parallel Computers

    DEFF Research Database (Denmark)

    Brown, John; Hansen, Per Christian; Wasniewski, Jerzy

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on the computers. Using a realistic large-scale model, we gain detailed insight into the performance of the three computers when used to solve large-scale scientific problems...

  16. 8th International Conference on Genetic and Evolutionary Computing

    CERN Document Server

    Yang, Chin-Yu; Lin, Chun-Wei; Pan, Jeng-Shyang; Snasel, Vaclav; Abraham, Ajith

    2015-01-01

    This volume of Advances in Intelligent Systems and Computing contains accepted papers presented at ICGEC 2014, the 8th International Conference on Genetic and Evolutionary Computing. The conference this year was technically co-sponsored by Nanchang Institute of Technology in China, Kaohsiung University of Applied Science in Taiwan, and VSB-Technical University of Ostrava. ICGEC 2014 was held from 18-20 October 2014 in Nanchang, China. Nanchang is the capital of Jiangxi Province in southeastern China, located in the north-central portion of the province. As it is bounded on the west by the Jiuling Mountains and on the east by Poyang Lake, it is famous for its scenery, rich history and cultural sites. Because of its central location relative to the Yangtze and Pearl River Delta regions, it is a major railroad hub in Southern China. The conference is intended as an international forum for researchers and professionals in all areas of genetic and evolutionary computing.

  17. 7th International Conference on Genetic and Evolutionary Computing

    CERN Document Server

    Krömer, Pavel; Snášel, Václav

    2014-01-01

    This volume of Advances in Intelligent Systems and Computing contains accepted papers presented at ICGEC 2013, the 7th International Conference on Genetic and Evolutionary Computing. The conference this year was technically co-sponsored by Waseda University in Japan, Kaohsiung University of Applied Science in Taiwan, and VSB-Technical University of Ostrava. ICGEC 2013 was held in Prague, Czech Republic. Prague is one of the most beautiful cities in the world, whose magical atmosphere has been shaped over ten centuries. Places of the greatest tourist interest are on the Royal Route, running from the Powder Tower through Celetna Street to Old Town Square, then across Charles Bridge through the Lesser Town up to Hradcany Castle. One should not miss the Jewish Town, and the National Gallery with its fine collection of Czech Gothic art, collection of old European art, and beautiful collection of French art. The conference was intended as an international forum for the res...

  18. Peregrine System | High-Performance Computing | NREL

    Science.gov (United States)

    Peregrine has several classes of nodes that users access. Login Nodes: Peregrine has four login nodes, each of which has Intel E5... The /mss file system is mounted on all login nodes, alongside the /scratch file systems. Compute Nodes: Peregrine has 2592...

  19. Computer and Information Sciences II : 26th International Symposium on Computer and Information Sciences

    CERN Document Server

    Lent, Ricardo; Sakellari, Georgia

    2012-01-01

    Information technology is the enabling foundation for all of human activity at the beginning of the 21st century, and advances in this area are crucial to all of us. These advances are taking place all over the world and can only be followed and perceived when researchers from all over the world assemble and exchange their ideas in conferences such as the one presented in this proceedings volume, the 26th International Symposium on Computer and Information Sciences, held at the Royal Society in London on 26th to 28th September 2011. Computer and Information Sciences II contains novel advances in the state of the art covering applied research in electrical and computer engineering and computer science, across the broad area of information technology. It provides access to the main innovative activities in research across the world, and points to the results obtained recently by some of the most active teams in both Europe and Asia.

  20. Second International Conference on Soft Computing for Problem Solving

    CERN Document Server

    Nagar, Atulya; Deep, Kusum; Pant, Millie; Bansal, Jagdish; Ray, Kanad; Gupta, Umesh

    2014-01-01

    The present book is based on the research papers presented in the International Conference on Soft Computing for Problem Solving (SocProS 2012), held at JK Lakshmipat University, Jaipur, India. This book provides the latest developments in the area of soft computing and covers a variety of topics, including mathematical modeling, image processing, optimization, swarm intelligence, evolutionary algorithms, fuzzy logic, neural networks, forecasting, data mining, etc. The objective of the book is to familiarize the reader with the latest scientific developments that are taking place in various fields and the latest sophisticated problem solving tools that are being developed to deal with the complex and intricate problems that are otherwise difficult to solve by the usual and traditional methods. The book is directed to the researchers and scientists engaged in various fields of Science and Technology.

  1. 11th International Conference on Distributed Computing and Artificial Intelligence

    CERN Document Server

    Bersini, Hugues; Corchado, Juan; Rodríguez, Sara; Pawlewski, Paweł; Bucciarelli, Edgardo

    2014-01-01

    The 11th International Symposium on Distributed Computing and Artificial Intelligence 2014 (DCAI 2014) is a forum to present applications of innovative techniques for studying and solving complex problems. The exchange of ideas between scientists and technicians from both the academic and industrial sector is essential to facilitate the development of systems that can meet the ever-increasing demands of today’s society. The present edition brings together past experience, current work and promising future trends associated with distributed computing, artificial intelligence and their application in order to provide efficient solutions to real problems. This year’s technical program presents both high quality and diversity, with contributions in well-established and evolving areas of research (Algeria, Brazil, China, Croatia, Czech Republic, Denmark, France, Germany, Ireland, Italy, Japan, Malaysia, Mexico, Poland, Portugal, Republic of Korea, Spain, Taiwan, Tunisia, Ukraine, United Kingdom), representing ...

  2. Third International Conference on Soft Computing for Problem Solving

    CERN Document Server

    Deep, Kusum; Nagar, Atulya; Bansal, Jagdish

    2014-01-01

    The present book is based on the research papers presented in the 3rd International Conference on Soft Computing for Problem Solving (SocProS 2013), held as a part of the golden jubilee celebrations of the Saharanpur Campus of IIT Roorkee, at the Noida Campus of Indian Institute of Technology Roorkee, India. This book is divided into two volumes and covers a variety of topics including mathematical modelling, image processing, optimization, swarm intelligence, evolutionary algorithms, fuzzy logic, neural networks, forecasting, medical and health care, data mining etc. Particular emphasis is laid on soft computing and its application to diverse fields. The prime objective of the book is to familiarize the reader with the latest scientific developments that are taking place in various fields and the latest sophisticated problem solving tools that are being developed to deal with the complex and intricate problems, which are otherwise difficult to solve by the usual and traditional methods. The book is directed ...

  3. 1st International Conference on Computer Vision and Image Processing

    CERN Document Server

    Kumar, Sanjeev; Roy, Partha; Sen, Debashis

    2017-01-01

    This edited volume contains technical contributions in the field of computer vision and image processing presented at the First International Conference on Computer Vision and Image Processing (CVIP 2016). The contributions are thematically divided based on their relation to operations at the lower, middle and higher levels of vision systems, and their applications. The technical contributions in the areas of sensors, acquisition, visualization and enhancement are classified as related to low-level operations. They discuss various modern topics – reconfigurable image system architecture, Scheimpflug camera calibration, real-time autofocusing, climate visualization, tone mapping, super-resolution and image resizing. The technical contributions in the areas of segmentation and retrieval are classified as related to mid-level operations. They discuss some state-of-the-art techniques – non-rigid image registration, iterative image partitioning, egocentric object detection and video shot boundary detection. Th...

  4. 5th International Conference on Soft Computing for Problem Solving

    CERN Document Server

    Deep, Kusum; Bansal, Jagdish; Nagar, Atulya; Das, Kedar

    2016-01-01

    This two-volume book is based on the research papers presented at the 5th International Conference on Soft Computing for Problem Solving (SocProS 2015) and covers a variety of topics, including mathematical modelling, image processing, optimization methods, swarm intelligence, evolutionary algorithms, fuzzy logic, neural networks, forecasting, medical and health care, data mining, etc. The main emphasis is on soft computing and its applications in diverse areas. The prime objective of this book is to familiarize the reader with the latest scientific developments in various fields of Science, Engineering and Technology, and it is directed at researchers and scientists engaged in various real-world applications of 'Soft Computing'.

  5. 13th International Conference on Distributed Computing and Artificial Intelligence

    CERN Document Server

    Silvestri, Marcello; González, Sara

    2016-01-01

    The special session Decision Economics (DECON) 2016 is a scientific forum for sharing ideas, projects, research results, models and experiences associated with the complexity of behavioral decision processes, aiming to explain socio-economic phenomena. DECON 2016 was held at the University of Seville, Spain, as part of the 13th International Conference on Distributed Computing and Artificial Intelligence (DCAI) 2016. In the tradition of Herbert A. Simon's interdisciplinary legacy, this book dedicates itself to the interdisciplinary study of decision-making, in the recognition that relevant decision-making takes place in a range of critical subject areas and research fields, including economics, finance, information systems, small and international business, management, operations, and production. Decision-making issues are of crucial importance in economics. Not surprisingly, the study of decision-making has received growing empirical research effort in the applied economics literature over the last ...

  6. International MultiConference of Engineers and Computer Scientists 2013

    CERN Document Server

    Ao, Sio-Iong; Huang, Xu; Castillo, Oscar

    2014-01-01

    This volume contains revised and extended research articles written by prominent researchers who participated in the international conference on Advances in Engineering Technologies and Physical Science, held in Hong Kong, 13-15 March 2013. Topics covered include engineering physics, engineering mathematics, scientific computing, control theory, automation, artificial intelligence, electrical engineering, and industrial applications. The book offers the state of the art of tremendous advances in engineering technologies and physical science and their applications, and also serves as an excellent reference work for researchers and graduate students working on engineering technologies and physical science and their applications.

  7. 12th International Conference on Computing and Information Technology

    CERN Document Server

    Boonkrong, Sirapat; Unger, Herwig

    2016-01-01

    This proceedings book presents recent research work and results in the area of communication and information technologies. The chapters of this book contain the main, well-selected and reviewed contributions of scientists who met at the 12th International Conference on Computing and Information Technology (IC2IT), held during 7-8 July 2016 in Khon Kaen, Thailand. The book is divided into three parts: “User Centric Data Mining and Text Processing”, “Data Mining Algorithms and their Applications” and “Optimization of Complex Networks”.

  8. 11th International Conference on Computing and Information Technology

    CERN Document Server

    Meesad, Phayung; Boonkrong, Sirapat

    2015-01-01

    This book presents recent research work and results in the area of communication and information technologies. The book includes the main results of the 11th International Conference on Computing and Information Technology (IC2IT), held during July 2nd-3rd, 2015 in Bangkok, Thailand. The book is divided into two main parts: Data Mining and Machine Learning, and Data Network and Communications. New algorithms and methods of data mining are discussed, as well as innovative applications and state-of-the-art technologies in data mining, machine learning and data networking.

  9. International MultiConference of Engineers and Computer Scientists 2014

    CERN Document Server

    Ao, Sio-Iong; Huang, Xu; Castillo, Oscar

    2015-01-01

    This volume contains revised and extended research articles written by prominent researchers who participated in the international conference on Advances in Engineering Technologies, which was held in Hong Kong, 12-14 March 2014. Topics covered include engineering physics, engineering mathematics, scientific computing, control theory, artificial intelligence, electrical engineering, communications systems, and industrial applications. The book offers the state of the art of tremendous advances in engineering technologies and physical science and their applications, and also serves as an excellent reference work for researchers and graduate students working on engineering technologies and physical science and their applications.

  10. The computation of equating errors in international surveys in education.

    Science.gov (United States)

    Monseur, Christian; Berezner, Alla

    2007-01-01

    Since the IEA's Third International Mathematics and Science Study, one of the major objectives of international surveys in education has been to report trends in achievement. The names of the two current IEA surveys reflect this growing interest: Trends in International Mathematics and Science Study (TIMSS) and Progress in International Reading Literacy Study (PIRLS). Similarly a central concern of the OECD's PISA is with trends in outcomes over time. To facilitate trend analyses these studies link their tests using common item equating in conjunction with item response modelling methods. IEA and PISA policies differ in terms of reporting the error associated with trends. In IEA surveys, the standard errors of the trend estimates do not include the uncertainty associated with the linking step while PISA does include a linking error component in the standard errors of trend estimates. In other words, PISA implicitly acknowledges that trend estimates partly depend on the selected common items, while the IEA's surveys do not recognise this source of error. Failing to recognise the linking error leads to an underestimation of the standard errors and thus increases the Type I error rate, thereby resulting in reporting of significant changes in achievement when in fact these are not significant. The growing interest of policy makers in trend indicators and the impact of the evaluation of educational reforms appear to be incompatible with such underestimation. However, the procedure implemented by PISA raises a few issues about the underlying assumptions for the computation of the equating error. After a brief introduction, this paper will describe the procedure PISA implemented to compute the linking error. The underlying assumptions of this procedure will then be discussed. Finally an alternative method based on replication techniques will be presented, based on a simulation study and then applied to the PISA 2000 data.
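
    To make the linking step concrete, here is a minimal sketch of a common-item linking-error estimator in the spirit of the procedure described above. It is an illustration under simplifying assumptions (one difficulty-shift estimate per common item, treated as independent observations), not the exact PISA implementation; the function names are ours.

```python
# Hedged sketch: estimate a linking error from the shifts in common-item
# difficulty estimates between two assessment cycles. This mirrors the idea
# that trend estimates partly depend on which common items were selected.
import statistics

def linking_error(difficulties_cycle1, difficulties_cycle2):
    """Each argument lists the estimated difficulties of the common items
    (same items, same order) in one cycle; returns the SE of the mean shift."""
    shifts = [b2 - b1 for b1, b2 in zip(difficulties_cycle1, difficulties_cycle2)]
    m = len(shifts)
    return statistics.stdev(shifts) / m ** 0.5

# A trend standard error that acknowledges linking would then combine the
# sampling errors of both cycles with the linking component, e.g.
# se_trend = (se1**2 + se2**2 + linking_error(b1, b2)**2) ** 0.5
```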

  11. Computer and Information Sciences III : 27th International Symposium on Computer and Information Sciences

    CERN Document Server

    Lent, Ricardo

    2013-01-01

    Information technology is the enabling foundation for all of human activity at the beginning of the 21st century, and advances in this area are crucial to all of us. These advances are taking place all over the world and can only be followed and perceived when researchers from all over the world assemble and exchange their ideas in conferences such as the one presented in this proceedings volume, the 27th International Symposium on Computer and Information Sciences, held at the Institut Henri Poincaré in Paris on October 3 and 4, 2012. Computer and Information Sciences III: 27th International Symposium on Computer and Information Sciences contains novel advances in the state of the art covering applied research in electrical and computer engineering and computer science, across the broad area of information technology. It provides access to the main innovative activities in research across the world, and points to the results obtained recently by some of the most active teams ...

  12. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  13. Performing an allreduce operation on a plurality of compute nodes of a parallel computer

    Science.gov (United States)

    Faraj, Ahmad [Rochester, MN

    2012-04-17

    Methods, apparatus, and products are disclosed for performing an allreduce operation on a plurality of compute nodes of a parallel computer. Each compute node includes at least two processing cores. Each processing core has contribution data for the allreduce operation. Performing an allreduce operation on a plurality of compute nodes of a parallel computer includes: establishing one or more logical rings among the compute nodes, each logical ring including at least one processing core from each compute node; performing, for each logical ring, a global allreduce operation using the contribution data for the processing cores included in that logical ring, yielding a global allreduce result for each processing core included in that logical ring; and performing, for each compute node, a local allreduce operation using the global allreduce results for each processing core on that compute node.
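
    As a rough illustration of the two-phase scheme described in this abstract, the sketch below simulates it in plain Python. The data layout and function names are our own, and a real implementation would pipeline the ring phases over network links rather than compute them directly: one logical ring per core index reduces contributions across nodes, then each node combines the per-ring results locally.

```python
# Sketch of the two-phase allreduce from the abstract (illustrative only):
# phase 1 reduces each "logical ring" (one ring per core index, spanning
# all nodes); phase 2 combines the per-ring results locally on each node.

def allreduce(nodes, op=sum):
    """nodes[i][c] is the contribution of core c on compute node i.
    Returns the value every core would hold after the allreduce."""
    n_cores = len(nodes[0])
    # Phase 1: global allreduce within each logical ring (per core index).
    ring_results = [op(node[c] for node in nodes) for c in range(n_cores)]
    # Phase 2: local allreduce over the ring results on each node; every
    # node holds identical ring results, so one local pass suffices here.
    return op(ring_results)

# Example: 3 nodes x 2 cores, each core contributes 1 -> global sum is 6.
print(allreduce([[1, 1], [1, 1], [1, 1]]))  # -> 6
```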

  14. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Science.gov (United States)

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  15. PERFORMANCE IN INTERNAL CONTROL AND RISK MANAGEMENT

    Directory of Open Access Journals (Sweden)

    JELER (POPA) IOANA

    2017-06-01

    Full Text Available The purpose of this article is to highlight the importance of internal control and risk management. In practice, economic entities meet a variety of risks that originate in the internal or the external environment. Although specialists in risk management propose differing views on the concept of risk - threats or opportunities, events or actions, certain or uncertain - in this article we try to present these issues and identify techniques to counter the occurrence of risks. We also present means of managing risk and explain why risk management needs to be implemented at the institutional level. The paper concludes by highlighting the role of efficient risk management in the company's management and activities.

  16. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    Science.gov (United States)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two part model framework characterizes both the demand using a probability distribution for each type of service request as well as enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
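
    For readers unfamiliar with the approach, a discrete event simulation of this kind can be very small. The sketch below is our own toy model with illustrative distributions and parameter names, not the authors' framework: stochastic service requests compete for a fixed pool of virtual servers, which is the essence of evaluating provisioning choices.

```python
# Toy discrete-event simulation of requests contending for cloud servers.
# Arrival and service times are exponential here purely for illustration.
import heapq
import random

def mean_wait(n_servers=4, n_requests=10_000, arrival_rate=1.0, mean_service=3.0):
    random.seed(0)
    free_at = [0.0] * n_servers        # time at which each server frees up
    heapq.heapify(free_at)
    t, total_wait = 0.0, 0.0
    for _ in range(n_requests):
        t += random.expovariate(arrival_rate)      # next request arrives
        earliest = heapq.heappop(free_at)          # earliest-free server
        start = max(t, earliest)                   # wait if all servers busy
        total_wait += start - t
        heapq.heappush(free_at, start + random.expovariate(1.0 / mean_service))
    return total_wait / n_requests

# Vary n_servers to see how provisioning affects queueing delay.
print(f"mean wait with 4 servers: {mean_wait():.2f} time units")
```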

  17. Neurons compute internal models of the physical laws of motion.

    Science.gov (United States)

    Angelaki, Dora E; Shaikh, Aasef G; Green, Andrea M; Dickman, J David

    2004-07-29

    A critical step in self-motion perception and spatial awareness is the integration of motion cues from multiple sensory organs that individually do not provide an accurate representation of the physical world. One of the best-studied sensory ambiguities is found in visual processing, and arises because of the inherent uncertainty in detecting the motion direction of an untextured contour moving within a small aperture. A similar sensory ambiguity arises in identifying the actual motion associated with linear accelerations sensed by the otolith organs in the inner ear. These internal linear accelerometers respond identically during translational motion (for example, running forward) and gravitational accelerations experienced as we reorient the head relative to gravity (that is, head tilt). Using new stimulus combinations, we identify here cerebellar and brainstem motion-sensitive neurons that compute a solution to the inertial motion detection problem. We show that the firing rates of these populations of neurons reflect the computations necessary to construct an internal model representation of the physical equations of motion.
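
    For context, the sensory ambiguity referred to here is usually stated in terms of the gravito-inertial acceleration sensed by the otoliths; the formulation below is a standard textbook statement of the problem, in notation and sign conventions of my choosing, not necessarily the paper's.

    % The otoliths sense only the net gravito-inertial acceleration f,
    % confounding gravity g with translational acceleration a:
    \[
      \mathbf{f} = \mathbf{g} - \mathbf{a}.
    \]
    % Angular-velocity signals \omega from the semicircular canals let an
    % internal model track how gravity rotates in head coordinates,
    \[
      \frac{d\mathbf{g}}{dt} = -\,\boldsymbol{\omega}\times\mathbf{g},
    \]
    % after which translation is recoverable as a = g - f.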

  18. Inclusive vision for high performance computing at the CSIR

    CSIR Research Space (South Africa)

    Gazendam, A

    2006-02-01

    Full Text Available and computationally intensive applications. A number of different technologies and standards were identified as core to the open and distributed high-performance infrastructure envisaged...

  19. Computational Humor 2012: Extended abstracts of the (3rd International) Workshop on Computational Humor

    OpenAIRE

    Nijholt, Antinus; Unknown, [Unknown

    2012-01-01

    Like its predecessors in 1996 (University of Twente, the Netherlands) and 2002 (ITC-irst, Trento, Italy), this Third International Workshop on Computational Humor (IWCH 2012) focusses on the possibility of finding algorithms that allow the understanding and generation of humor. There is the general aim of modeling humor, and if we can do that, it will provide us with lots of information about our cognitive abilities in general, such as reasoning, remembering, understanding situations, and understand…

  20. Hypersonic ramjet experiment project. Phase 1: Computer program description, ramjet and scramjet cycle performance

    Science.gov (United States)

    Jackson, R. J.; Wang, T. T.

    1974-01-01

    A computer program was developed to describe the performance of ramjet and scramjet cycles. The program performs one dimensional calculations of the equilibrium, real-gas internal flow properties of the engine. The program can be used for the following: (1) preliminary design calculation and (2) design analysis of internal flow properties corresponding to stipulated flow areas. Only the combustion of hydrogen in air is considered in this case.
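
    As a reminder of the kind of one-dimensional cycle arithmetic such a program performs, the sketch below evaluates the specific thrust of an ideal ramjet from standard textbook relations (calorically perfect gas, fully expanded nozzle, fuel mass neglected); it is purely illustrative and unrelated to the actual NASA code, which used equilibrium real-gas properties.

    import math

    def ideal_ramjet_specific_thrust(M0, T0=216.7, Tt4=2400.0, gamma=1.4, R=287.0):
        """Specific thrust F/mdot in m/s for an ideal ramjet (textbook model)."""
        a0 = math.sqrt(gamma * R * T0)               # ambient speed of sound
        tau_r = 1.0 + 0.5 * (gamma - 1.0) * M0**2    # ram temperature ratio
        tau_l = Tt4 / T0                             # burner stagnation ratio
        # Ideal cycle: exit-to-flight velocity ratio is sqrt(tau_l / tau_r).
        return a0 * M0 * (math.sqrt(tau_l / tau_r) - 1.0)

    for M0 in (2.0, 3.0, 4.0):
        print(f"M0={M0}: F/mdot = {ideal_ramjet_specific_thrust(M0):.0f} m/s")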

  1. Absorptive routines and international patent performance

    Directory of Open Access Journals (Sweden)

    Fernando E. García-Muiña

    2017-04-01

    We enrich the treatment of the absorptive capacity phases by including the moderating effects between the routines associated with the traditional potential and realized absorptive capacities. Taking into account external knowledge search strategies, the deeper the external relationships, the better the transfer and appropriation of specific external knowledge. Nevertheless, when the moderating role of assimilation is included, cooperation agreements appear as the most efficient source of external knowledge. Finally, we show that technological tools let firms store and structure information, making it easier to use for international patenting. This positive effect is reinforced in the presence of exploitation routines, since technological knowledge will better fit the industry's key factors of success.

  2. 26 CFR 7.999-1 - Computation of the international boycott factor.

    Science.gov (United States)

    2010-04-01

    26 Internal Revenue 14 (revised as of 2010-04-01): § 7.999-1 Computation of the international boycott factor. (a) In general. Sections 908(a), 952(a)(3), and… 993(a)(3)) that includes that person participates in or cooperates with an international boycott…

  3. Advanced Certification Program for Computer Graphic Specialists. Final Performance Report.

    Science.gov (United States)

    Parkland Coll., Champaign, IL.

    A pioneer program in computer graphics was implemented at Parkland College (Illinois) to meet the demand for specialized technicians to visualize data generated on high performance computers. In summer 1989, 23 students were accepted into the pilot program. Courses included C programming, calculus and analytic geometry, computer graphics, and…

  4. Performance of the TRISTAN computer control network

    International Nuclear Information System (INIS)

    Koiso, H.; Abe, K.; Akiyama, A.; Katoh, T.; Kikutani, E.; Kurihara, N.; Kurokawa, S.; Oide, K.; Shinomoto, M.

    1985-01-01

    An N-to-N token ring network of twenty-four minicomputers controls the TRISTAN accelerator complex. The computers are linked by optical fiber cables with a transmission speed of 10 Mbps. The software system is based on NODAL, a multi-computer interpreter language developed at the CERN SPS. Typical messages exchanged between computers are NODAL programs and NODAL variables transmitted by the EXEC and REMIT commands. These messages are exchanged as clusters of packets whose maximum size is 512 bytes. At present, eleven minicomputers are connected to the network and the total length of the ring is 1.5 km. In this configuration, the maximum attainable throughput is 980 kbytes/s. The response time of a paired EXEC and REMIT transaction, which transmits a NODAL array A together with the one-line program 'REMIT A' and immediately remits A, is measured to be 95 + 0.039χ ms, where χ is the array size in bytes. In ordinary accelerator operations, the maximum channel utilization is 2%, the average packet length is 96 bytes, and the transmission rate is 10 kbytes/s

  5. Performing quantum computing experiments in the cloud

    Science.gov (United States)

    Devitt, Simon J.

    2016-09-01

    Quantum computing technology has reached a second renaissance in the past five years. Increased interest from both the private and public sectors, combined with extraordinary theoretical and experimental progress, has solidified this technology as a major advancement of the 21st century. As anticipated by many, some of the first realizations of quantum computing technology have occurred over the cloud, with users logging onto dedicated hardware over the classical internet. Recently, IBM released the Quantum Experience, which allows users to access a five-qubit quantum processor. In this paper we take advantage of this online availability of actual quantum hardware and present four quantum information experiments. We utilize the IBM chip to realize protocols in quantum error correction, quantum arithmetic, quantum graph theory, and fault-tolerant quantum computation by accessing the device remotely through the cloud. While the results are subject to significant noise, the correct results are returned from the chip. This demonstrates the power of experimental groups opening up their technology to a wider audience and will hopefully allow for the next stage of development in quantum information technology.
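
    To give a feel for the scale of circuit that fits comfortably on such a five-qubit device, the sketch below simulates the smallest interesting example, preparation of a Bell pair, with a plain statevector calculation; this is a local illustration only, not the cloud workflow or the four protocols used in the paper.

    import numpy as np

    # Single-qubit gates and the two-qubit CNOT as matrices.
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    I = np.eye(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    # Start in |00>, apply H to the first qubit, then CNOT(first -> second).
    state = np.zeros(4)
    state[0] = 1.0
    state = np.kron(H, I) @ state
    state = CNOT @ state

    # Measurement probabilities: 50% |00>, 50% |11>.
    print(np.round(np.abs(state) ** 2, 3))  # -> [0.5 0. 0. 0.5]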

  6. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    Science.gov (United States)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress so far in harmonising the underlying data collections for future interdisciplinary research across these large-volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructure created at NCI means the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially growing data collections.

  7. A high performance scientific cloud computing environment for materials simulations

    Science.gov (United States)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.
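
    The cluster-creation step described here can be approximated today with a few cloud API calls. The sketch below uses boto3 to request a small group of EC2 instances; the AMI ID and instance type are placeholder values, and this is not the authors' SCC toolset, only an indication of the underlying mechanics.

    import boto3

    def launch_virtual_cluster(n_nodes, ami="ami-0123456789abcdef0",
                               instance_type="c5.xlarge"):
        """Request n_nodes EC2 instances to act as a disposable compute cluster."""
        ec2 = boto3.client("ec2")
        resp = ec2.run_instances(
            ImageId=ami,                 # placeholder: a VM image with the science codes
            InstanceType=instance_type,  # placeholder instance size
            MinCount=n_nodes,
            MaxCount=n_nodes,
        )
        return [inst["InstanceId"] for inst in resp["Instances"]]

    # ids = launch_virtual_cluster(4)   # requires AWS credentials to actually run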

  8. Ability performance of older workers - Internal and external influencing factors

    NARCIS (Netherlands)

    Dittmann-Kohli, F.; Heijden, B.I.J.M. van der

    1996-01-01

    Internal and external factors affecting the ability and performance of older employees are analyzed in a short literature review. Internal factors such as physical capacity, sensory capacity, cognitive abilities, and general health decline with ageing; their effect on performance, however, depends

  9. Performance Aspects of Synthesizable Computing Systems

    DEFF Research Database (Denmark)

    Schleuniger, Pascal

    Embedded systems are used in a broad range of applications that demand high performance within severely constrained mechanical, power, and cost requirements. Embedded systems implemented in ASIC technology tend to provide the highest performance, lowest power consumption and lowest unit cost. How...

  10. A high performance scientific cloud computing environment for materials simulations

    OpenAIRE

    Jorissen, Kevin; Vila, Fernando D.; Rehr, John J.

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including...

  11. High Performance Computing in Science and Engineering '08 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2009-01-01

    The discussions and plans on all scientific, advisory, and political levels to realize an even larger “European Supercomputer” in Germany, where the hardware costs alone will be hundreds of millions of Euros – much more than in the past – are getting closer to realization. As part of the strategy, the three national supercomputing centres HLRS (Stuttgart), NIC/JSC (Jülich) and LRZ (Munich) have formed the Gauss Centre for Supercomputing (GCS) as a new virtual organization enabled by an agreement between the Federal Ministry of Education and Research (BMBF) and the state ministries for research of Baden-Württemberg, Bayern, and Nordrhein-Westfalen. Already today, the GCS provides the most powerful high-performance computing infrastructure in Europe. Through GCS, HLRS participates in the European project PRACE (Partnership for Advanced Computing in Europe) and extends its reach to all European member countries. These activities align well with the activities of HLRS in the European HPC infrastructur…

  12. 13th International Conference on Computer Aided Engineering

    CERN Document Server

    Pietrusiak, Damian

    2017-01-01

    These proceedings of the 13th International Conference on Computer Aided Engineering present selected papers from the event, which was held in Polanica Zdrój, Poland, from June 22 to 25, 2016. The contributions are organized according to thematic sections on the design and manufacture of machines and technical systems; durability prediction; repairs and retrofitting of power equipment; strength and thermodynamic analyses for power equipment; design and calculation of various types of load-carrying structures; and numerical methods for dimensioning materials handling and long-distance transport equipment. The conference and its proceedings offer a major interdisciplinary forum for researchers and engineers to present the most innovative studies and advances in this dynamic field.

  13. First International Conference Multimedia Processing, Communication and Computing Applications

    CERN Document Server

    Guru, Devanur

    2013-01-01

    ICMCCA 2012 is the first International Conference on Multimedia Processing, Communication and Computing Applications, and the theme of the conference is ‘Multimedia Processing and its Applications’. Multimedia processing has been an active research area contributing to many frontiers of today’s science and technology. This book presents peer-reviewed quality papers on multimedia processing, which covers a very broad area of science and technology. The prime objective of the book is to familiarize readers with the latest scientific developments taking place in various fields of multimedia processing, which is widely used in many disciplines such as Medical Diagnosis, Digital Forensics, Object Recognition, Image and Video Analysis, Robotics, Military, Automotive Industries, Surveillance and Security, Quality Inspection, etc. The book will assist the research community in gaining insight into the overlapping work being carried out across the globe at many medical hospitals and instit…

  14. Computer system for International Reactor Pressure Vessel Materials Database support

    International Nuclear Information System (INIS)

    Arutyunjan, R.; Kabalevsky, S.; Kiselev, V.; Serov, A.

    1997-01-01

    This report presents a description of the computer tools developed at the IAEA to support the International Reactor Pressure Vessel Materials Database. Work focused on raw, qualified, and processed materials data and on the search, retrieval, analysis, presentation, and export possibilities for these data. The developed software has the following main functions: it provides software tools for querying and searching any type of data in the database; provides the capability to update the existing information in the database; provides the capability to present and print selected data; provides the possibility of exporting, on a yearly basis, the run-time IRPVMDB with raw, qualified, and processed materials data to Database members; and provides the capability to export any selected sets of raw, qualified, or processed materials data

  15. Drivers of international performance of Brazilian technology-based firms

    Directory of Open Access Journals (Sweden)

    Maria Carolina Serpa Fagundes de Oliveira

    2018-01-01

    Full Text Available For technology-based firms, international expansion represents an opportunity for growth and value creation. The present study was designed to analyze the role of technology-based companies' (TBCs) internationalization drivers in international performance. Therefore, a descriptive study was carried out with a quantitative approach, performed through a survey. Data were collected from 53 Brazilian TBCs located in innovation habitats and analyzed by multivariate statistical techniques. The results showed that the determinants of the international performance of Brazilian TBCs can be grouped into external influencers (location in innovation habitats, integration into global productive chains, partnerships and strategic alliances for innovation, and government policies) and internal influencers (innovation capability, international market orientation, and international marketing skills).

  16. Internal radiation dose calculations with the INREM II computer code

    International Nuclear Information System (INIS)

    Dunning, D.E. Jr.; Killough, G.G.

    1978-01-01

    A computer code, INREM II, was developed to calculate the internal radiation dose equivalent to organs of man resulting from the intake of a radionuclide by inhalation or ingestion. Deposition and removal of radioactivity from the respiratory tract is represented by the International Commission on Radiological Protection (ICRP) Task Group Lung Model. A four-segment catenary model of the gastrointestinal tract is used to estimate movement of radioactive material that is ingested, or swallowed after being cleared from the respiratory tract. Retention of radioactivity in other organs is specified by linear combinations of decaying exponential functions. The formation and decay of radioactive daughters is treated explicitly, with each radionuclide in the decay chain having its own uptake and retention parameters, as supplied by the user. The dose equivalent to a target organ is computed as the sum of contributions from each source organ in which radioactivity is assumed to be situated. This calculation utilizes a matrix of dosimetric S-factors (rem/μCi-day) supplied by the user for the particular choice of source and target organs. Output permits the evaluation of components of dose from cross-irradiation when penetrating radiations are present. INREM II has been utilized with current radioactive-decay data and metabolic models to produce extensive tabulations of dose conversion factors for a reference adult for approximately 150 radionuclides of interest in environmental assessments of light-water-reactor fuel cycles. These dose conversion factors represent the 50-year dose commitment per microcurie intake of a given radionuclide for 22 target organs, including contributions from specified source organs and surplus activity in the rest of the body. These tabulations are particularly significant in their consistent use of contemporary models and data and in the detail of their documentation
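
    The source-to-target summation described above is the standard MIRD-style formulation; written out (in notation of my choosing, not necessarily the report's), it reads:

    % Dose equivalent to target organ T: a sum over source organs S of the
    % cumulated (time-integrated) activity in S times the S-factor.
    \[
      H_T \;=\; \sum_{S} \tilde{A}_S \, S(T \leftarrow S)
    \]
    % With \tilde{A}_S in uCi-day and S(T <- S) in rem/(uCi-day),
    % H_T comes out in rem.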

  17. Software Systems for High-performance Quantum Computing

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; Britt, Keith A [ORNL

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present quantum programming and execution models, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  18. Computer Self-Efficacy, Computer Anxiety, Performance and Personal Outcomes of Turkish Physical Education Teachers

    Science.gov (United States)

    Aktag, Isil

    2015-01-01

    The purpose of this study is to determine the computer self-efficacy, performance outcome, personal outcome, and affect and anxiety levels of physical education teachers. The influence of teaching experience, computer usage, and participation in seminars or in-service programs on computer self-efficacy was determined. The subjects of this study…

  19. Hydration level is an internal variable for computing motivation to obtain water rewards in monkeys.

    Science.gov (United States)

    Minamimoto, Takafumi; Yamada, Hiroshi; Hori, Yukiko; Suhara, Tetsuya

    2012-05-01

    In the process of motivation to engage in a behavior, valuation of the expected outcome comprises not only external variables (i.e., incentives) but also internal variables (i.e., drive). However, the exact neural mechanism that integrates these variables for the computation of motivational value remains unclear. Moreover, the physiological-need signal that serves as the primary internal variable for this computation remains to be identified. Concerning fluid rewards, the osmolality level, one of the physiological indices of the level of thirst, may be an internal variable for valuation, since an increase in the osmolality level induces drinking behavior. Here, to examine the relationship between osmolality and the motivational value of a water reward, we repeatedly measured the blood osmolality level while 2 monkeys continuously performed an instrumental task until they spontaneously stopped. We found that, as the total amount of water earned increased, the osmolality level progressively decreased (i.e., the hydration level increased) in an individual-dependent manner. There was a significant negative correlation between the error rate of the task (the proportion of trials with low motivation) and the osmolality level. We also found that the increase in the error rate with reward accumulation can be well explained by a formula describing the changes in the osmolality level. These results provide a biologically supported computational formula for the motivational value of a water reward that depends on the hydration level, enabling us to identify the neural mechanism that integrates internal and external variables.

  20. Computer simulation of fuel element performance

    Energy Technology Data Exchange (ETDEWEB)

    Sukhanov, G I

    1979-01-01

    The review presents reports made at the Conference on the Behaviour and Production of Fuel for Water Reactors, held March 13-17, 1979. Discussed at the conference were the most developed and tested calculation models specially evolved to predict the behaviour of fuel elements of water reactors. The following five main aspects of the problem are discussed: general conceptions and programs; mechanical mock-ups and their applications; gas release, gap conductivity and fuel thermal conductivity; analysis of nonstationary processes; and models of specific phenomena. The review briefly describes the physical principles of the following models and programs: RESTR, which calculates the radii of the zones of columnar and equiaxed grains as well as the radius of the internal cavity of the fuel core; programs for the calculation of fuel-can interaction based on the finite element method; and a model predicting the behaviour of CANDU-PHW fuel elements under transient conditions. General results are presented of investigations of heat transfer through the can-fuel gap and of the thermal conductivity of UO2 with regard to cracking and gas release of the fuel. Many programs already conform to the accepted standards and are being intensively tested at present.

  1. Computer performance evaluation of FACOM 230-75 computer system, (2)

    International Nuclear Information System (INIS)

    Fujii, Minoru; Asai, Kiyoshi

    1980-08-01

    In this report, computer performance evaluations of the FACOM 230-75 computers at JAERI are described. The evaluations cover the following items: (1) cost/benefit analysis of timesharing terminals, (2) analysis of the response time of timesharing terminals, (3) analysis of throughput time for batch job processing, (4) estimation of current potential demands for computer time, and (5) determination of the appropriate number of card readers and line printers. These evaluations are made mainly from the standpoint of reducing the cost of the computing facilities. The techniques adopted are very practical ones. This report will be useful for those concerned with the management of a computing installation. (author)

  2. High Performance Computing in Science and Engineering '14

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2015-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS). The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and   engineers. The book comes with a wealth of color illustrations and tables of results.  

  3. Soft Computing Applications : Proceedings of the 5th International Workshop Soft Computing Applications

    CERN Document Server

    Fodor, János; Várkonyi-Kóczy, Annamária; Dombi, Joszef; Jain, Lakhmi

    2013-01-01

    This volume contains the Proceedings of the 5th International Workshop on Soft Computing Applications (SOFA 2012). The book covers a broad spectrum of soft computing techniques and theoretical and practical applications employing knowledge and intelligence to find solutions for world industrial, economic and medical problems. The combination of such intelligent systems tools and a large number of applications introduces a need for a synergy of scientific and technological disciplines in order to show the great potential of Soft Computing in all domains. The conference papers included in these proceedings, published post conference, were grouped into the following areas of research: Soft Computing and Fusion Algorithms in Biometrics; Fuzzy Theory, Control and Applications; Modelling and Control Applications; Steps towa…

  4. Advances in Computing and Information Technology : Proceedings of the Second International Conference on Advances in Computing and Information Technology

    CERN Document Server

    Nagamalai, Dhinaharan; Chaki, Nabendu

    2013-01-01

    The International Conference on Advances in Computing and Information Technology (ACITY 2012) provides an excellent international forum for both academics and professionals to share knowledge and results in the theory, methodology and applications of Computer Science and Information Technology. The Second International Conference on Advances in Computing and Information Technology (ACITY 2012), held in Chennai, India, during July 13-15, 2012, covered a number of topics in all major fields of Computer Science and Information Technology including: networking and communications, network security and applications, web and internet computing, ubiquitous computing, algorithms, bioinformatics, digital image processing and pattern recognition, artificial intelligence, soft computing and applications. Following a rigorous review process, a number of high-quality papers, presenting not only innovative ideas but also a well-founded evaluation and a strong argumentation of the same, were selected and collected in the present proceedings, …

  5. High Performance Computing Modernization Program Kerberos Throughput Test Report

    Science.gov (United States)

    2017-10-26

    Naval Research Laboratory, Washington, DC 20375-5320. NRL/MR/5524--17-9751. High Performance Computing Modernization Program Kerberos Throughput Test Report. Daniel G. Gdula* and …

  6. Performance evaluation of computer and communication systems

    CERN Document Server

    Le Boudec, Jean-Yves

    2011-01-01

    … written by a scientist successful in performance evaluation, it is based on his experience and provides many ideas not only to laymen entering the field, but also to practitioners looking for inspiration. The work can be read systematically as a textbook on how to model and test the derived hypotheses on the basis of simulations. Also, separate parts can be studied, as the chapters are self-contained. … the book can be successfully used either for self-study or as a supplementary book for a lecture. I believe that different types of readers will like it: practicing engineers and resea

  7. Computer task performance by subjects with Duchenne muscular dystrophy.

    Science.gov (United States)

    Malheiros, Silvia Regina Pinheiro; da Silva, Talita Dias; Favero, Francis Meire; de Abreu, Luiz Carlos; Fregni, Felipe; Ribeiro, Denise Cardoso; de Mello Monteiro, Carlos Bandeira

    2016-01-01

    Two specific objectives were established to quantify computer task performance among people with Duchenne muscular dystrophy (DMD). First, we compared simple computational task performance between subjects with DMD and age-matched typically developing (TD) subjects. Second, we examined correlations between the ability of subjects with DMD to learn the computational task and their motor functionality, age, and initial task performance. The study included 84 individuals (42 with DMD, mean age of 18±5.5 years, and 42 age-matched controls). They executed a computer maze task; all participants performed the acquisition (20 attempts) and retention (five attempts) phases, repeating the same maze. A different maze was used to verify transfer performance (five attempts). The Motor Function Measure Scale was applied, and the results were compared with maze task performance. In the acquisition phase, a significant decrease was found in movement time (MT) between the first and last acquisition block, but only for the DMD group. For the DMD group, MT during transfer was shorter than during the first acquisition block, indicating improvement from the first acquisition block to transfer. In addition, the TD group showed shorter MT than the DMD group across the study. DMD participants improved their performance after practicing a computational task; however, the difference in MT was present in all attempts among DMD and control subjects. Computational task improvement was positively influenced by the initial performance of individuals with DMD. In turn, the initial performance was influenced by their distal functionality but not their age or overall functionality.

  8. Shaping performance: do international accreditations and quality management really help?

    OpenAIRE

    Nigsch, Stefano; Schenker-Wicki, Andrea

    2012-01-01

    In recent years, international accreditations from private providers have gained importance among business schools all over the world. Higher education managers increasingly see these accreditations as a way of assuring and developing quality in order to comply with international standards, enhance performance, and increase reputation. However, given that an accreditation process requires a great deal of resources and that it might increase bureaucratization and control, international accredi...

  9. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high performance computing are analyzed by bibliometric approaches. This study aims at providing computational physicists utilizing high-performance computing and policy planners with useful bibliometric results for an assessment of research activities. In order to achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles from Elsevier's Scopus database covering the time period 2004-2013. We extracted the author rank in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to author rank. Suggestions for further studies are discussed.
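
    A co-authorship network of the kind analyzed here is straightforward to build from a paper-author table. The sketch below uses networkx on a made-up set of records, ranking authors by paper count; it illustrates the general method only, not the authors' Scopus processing pipeline.

    from itertools import combinations
    import networkx as nx

    # Hypothetical (paper id, author list) records.
    papers = [
        ("p1", ["Kim", "Sato", "Weber"]),
        ("p2", ["Kim", "Weber"]),
        ("p3", ["Sato", "Novak"]),
    ]

    G = nx.Graph()
    counts = {}
    for _, authors in papers:
        for a in authors:
            counts[a] = counts.get(a, 0) + 1      # author rank by paper count
        for a, b in combinations(authors, 2):     # link every co-author pair
            w = G.get_edge_data(a, b, {"weight": 0})["weight"]
            G.add_edge(a, b, weight=w + 1)        # weight = co-authored papers

    top_authors = sorted(counts, key=counts.get, reverse=True)
    print(top_authors[:2], G.degree("Kim"))       # e.g. ['Kim', 'Sato'] 2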

  10. Quantum Accelerators for High-performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system in managing these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  11. A Heterogeneous High-Performance System for Computational and Computer Science

    Science.gov (United States)

    2016-11-15

    expand the research infrastructure at the institution but also to enhance the high-performance computing training provided to both undergraduate and… cloud computing, supercomputing, and the availability of cheap memory and storage led to enormous amounts of data to be sifted through in forensic… High-Performance Computing (HPC) tools that can be integrated with existing curricula and support our research to modernize and dramatically advance

  12. Development of a computational methodology for internal dose calculations

    International Nuclear Information System (INIS)

    Yoriyaz, Helio

    2000-01-01

    A new approach for calculating internal dose estimates was developed through the use of a more realistic computational model of the human body and a more precise tool for the radiation transport simulation. The present technique shows the capability to build a patient-specific phantom from tomography data (a voxel-based phantom) for the simulation of radiation transport and energy deposition using Monte Carlo methods such as those in the MCNP-4B code. In order to utilize the segmented human anatomy as a computational model for the simulation of radiation transport, an interface program, SCMS, was developed to build the geometric configurations for the phantom from tomographic images. This procedure allows the calculation of not only average dose values but also the spatial distribution of dose in regions of interest. With the present methodology, absorbed fractions for photons and electrons in various organs of the Zubal segmented phantom were calculated and compared to those reported for the mathematical phantoms of Snyder and Cristy-Eckerman. Although the differences in organ geometry between the phantoms are quite evident, the results demonstrate small discrepancies; in some cases, however, considerable discrepancies were found, due to two major causes: differences in the organ masses between the phantoms, and the occurrence of organ overlap in the Zubal segmented phantom, which is not considered in the mathematical phantom. This effect was quite evident for organ cross-irradiation from electrons. With the determination of spatial dose distributions, the possibility was demonstrated of evaluating more detailed dose data than those obtained by conventional methods, which will give important information for clinical analysis in therapeutic procedures and in radiobiological studies of the human body. (author)

  13. Quantum Accelerators for High-Performance Computing Systems

    OpenAIRE

    Britt, Keith A.; Mohiyaddin, Fahd A.; Humble, Travis S.

    2017-01-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenges the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantu...

  14. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  15. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  16. PREFACE: International conference on Computer Simulation in Physics and beyond (CSP2015)

    Science.gov (United States)

    2016-02-01

    The International Conference on Computer Simulation in Physics and beyond (CSP2015) was held from 6-10 September 2015 at the campus of the Moscow Institute for Electronics and Mathematics (MIEM), National Research University Higher School of Economics, Moscow. Computer simulation is an increasingly popular tool for scientific research, supplementing experimental and analytical approaches. The main goal of the conference is to contribute to the development of methods and algorithms that take into account trends in hardware development and may help with computationally intensive research. The conference also gave senior scientists and students the opportunity to talk with each other and exchange ideas and views on developments in the area of high-performance computing in science. We would like to take this opportunity to thank our sponsors: the Russian Foundation for Basic Research, the Federal Agency of Scientific Organizations, and the Higher School of Economics.

  17. Drivers of international performance of Brazilian technology-based firms

    OpenAIRE

    Serpa Fagundes de Oliveira, Maria Carolina; Scherer, Flavia Luciane; Schneider Hahn, Ivanete; de Moura Carpes, Aletéia; Brachak dos Santos, Maríndia; Nunes Piveta, Maíra

    2018-01-01

    For Technology-Based Firms, international expansion represents an opportunity for growth and value creation. The present study was designed to analyze the role of technology-based companies (TBCs) internationalization drivers on international performance. Therefore, a descriptive research was carried out with a quantitative approach performed through a survey. Data collection happened with 53 Brazilian TBCs located in innovation habitats. These data were analyzed by multivariate statistical t...

  18. 5th International Conference on Design Computing and Cognition

    CERN Document Server

    2014-01-01

    Design thinking, the label given to the acts of designing, has become a paradigmatic view that has transcended the discipline of design and is now widely used in business and elsewhere. As a consequence there is an increasing interest in design research. This is because of the realization that design is part of the wealth creation of a nation and needs to be better understood and taught. The continuing globalization of industry and trade has required nations to re-examine where their core contributions lie if not in production efficiency. Design is a precursor to manufacturing for physical objects and is the precursor to implementation for virtual objects. At the same time, the need for sustainable development requires the design of new products and processes, which feeds a movement towards design innovations and inventions. The papers in this volume are from the Fifth International Conference on Design Computing and Cognition (DCC’12) held at Texas A & M University, USA. They represent the state-of-th...

  19. International Nuclear Model personal computer (PCINM): Model documentation

    International Nuclear Information System (INIS)

    1992-08-01

    The International Nuclear Model (INM) was developed to assist the Energy Information Administration (EIA), U.S. Department of Energy (DOE) in producing worldwide projections of electricity generation, fuel cycle requirements, capacities, and spent fuel discharges from commercial nuclear reactors. The original INM was developed, maintained, and operated on a mainframe computer system. In spring 1992, a streamlined version of INM was created for use on a microcomputer utilizing CLIPPER and PCSAS software. This new version is known as PCINM. This documentation is based on the new PCINM version. This document is designed to satisfy the requirements of several categories of users of the PCINM system including technical analysts, theoretical modelers, and industry observers. This document assumes the reader is familiar with the nuclear fuel cycle and each of its components. This model documentation contains four chapters and seven appendices. Chapter Two presents the model overview containing the PCINM structure and process flow, the areas for which projections are made, and input data and output reports. Chapter Three presents the model technical specifications showing all model equations, algorithms, and units of measure. Chapter Four presents an overview of all parameters, variables, and assumptions used in PCINM. The appendices present the following detailed information: variable and parameter listings, variable and equation cross reference tables, source code listings, file layouts, sample report outputs, and model run procedures. 2 figs

  20. The fourth International Conference on Information Science and Cloud Computing

    Science.gov (United States)

    This book comprises the papers accepted by the fourth International Conference on Information Science and Cloud Computing (ISCC), which was held from 18-19 December 2015 in Guangzhou, China. It contains 70 papers divided into four parts. The first part focuses on Information Theory, with 20 papers; the second part emphasizes Machine Learning, containing 21 papers; the third part, also with 21 papers, covers Control Science; and the last part, with 8 papers, is dedicated to Cloud Science. Each part can be used as an excellent reference by engineers, researchers and students who need to build a knowledge base of the most current advances and state-of-practice in the topics covered by the ISCC conference. Special thanks go to Professor Deyu Qi, General Chair of ISCC 2015, for his leadership in supervising the organization of the entire conference; Professor Tinghuai Ma, Program Chair, and the members of the program committee, for evaluating all the submissions and ensuring the selection of only the highest-quality papers; and the authors for sharing their ideas, results and insights. We sincerely hope that you enjoy reading the papers included in this book.

  1. Enabling High-Performance Computing as a Service

    KAUST Repository

    AbdelBaky, Moustafa; Parashar, Manish; Kim, Hyunjoo; Jordan, Kirk E.; Sachdeva, Vipin; Sexton, James; Jamjoom, Hani; Shae, Zon-Yin; Pencheva, Gergina; Tavakoli, Reza; Wheeler, Mary F.

    2012-01-01

    With the right software infrastructure, clouds can provide scientists with as a service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a

  2. Multicore Challenges and Benefits for High Performance Scientific Computing

    Directory of Open Access Journals (Sweden)

    Ida M.B. Nielsen

    2008-01-01

    Full Text Available Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
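
    As a flavor of the hybrid message-passing/multi-threading model mentioned above, the sketch below distributes the row blocks of a matrix product across MPI ranks with mpi4py, leaving intra-rank threading to the multithreaded BLAS behind numpy; it is a minimal illustration of the programming model, not the authors' implementation.

    # Run with, e.g.: mpiexec -n 4 python matmul.py
    from mpi4py import MPI
    import numpy as np

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    n = 512                      # assumed divisible by the number of ranks

    if rank == 0:
        A = np.random.rand(n, n)
        B = np.random.rand(n, n)
    else:
        A, B = None, np.empty((n, n))

    # Distribute row blocks of A; replicate B on every rank.
    local_A = np.empty((n // size, n))
    comm.Scatter(A, local_A, root=0)
    comm.Bcast(B, root=0)

    # Local block product; numpy's BLAS threads within the rank.
    local_C = local_A @ B

    C = np.empty((n, n)) if rank == 0 else None
    comm.Gather(local_C, C, root=0)
    if rank == 0:
        print("max error:", np.abs(C - A @ B).max())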

  3. Enabling high performance computational science through combinatorial algorithms

    International Nuclear Information System (INIS)

    Boman, Erik G; Bozdag, Doruk; Catalyurek, Umit V; Devine, Karen D; Gebremedhin, Assefaw H; Hovland, Paul D; Pothen, Alex; Strout, Michelle Mills

    2007-01-01

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation
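
    One of the combinatorial kernels named here, graph coloring, is easy to sketch. The greedy first-fit heuristic below is a generic textbook scheme, not CSCAPES code: it assigns each vertex the smallest color unused by its neighbors, so that adjacent vertices (for instance, structurally dependent columns of a Jacobian) never share a color.

    def greedy_coloring(adj):
        """First-fit greedy coloring. adj maps each vertex to its neighbors."""
        color = {}
        for v in adj:                        # visit order affects color count
            used = {color[u] for u in adj[v] if u in color}
            c = 0
            while c in used:                 # smallest color not used nearby
                c += 1
            color[v] = c
        return color

    # A 5-cycle: greedy coloring uses 3 colors here.
    cycle = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
    col = greedy_coloring(cycle)
    assert all(col[u] != col[v] for u in cycle for v in cycle[u])
    print(col)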

  4. Enabling high performance computational science through combinatorial algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Boman, Erik G [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Bozdag, Doruk [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Catalyurek, Umit V [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Devine, Karen D [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Gebremedhin, Assefaw H [Computer Science and Center for Computational Science, Old Dominion University (United States); Hovland, Paul D [Mathematics and Computer Science Division, Argonne National Laboratory (United States); Pothen, Alex [Computer Science and Center for Computational Science, Old Dominion University (United States); Strout, Michelle Mills [Computer Science, Colorado State University (United States)

    2007-07-15

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation.

  5. 3rd International Conference on Frontiers of Intelligent Computing : Theory and Applications

    CERN Document Server

    Biswal, Bhabendra; Udgata, Siba; Mandal, JK

    2015-01-01

    Volume 1 contains 95 papers presented at FICTA 2014: Third International Conference on Frontiers in Intelligent Computing: Theory and Applications. The conference was held during 14-15 November 2014 at Bhubaneswar, Odisha, India. This volume contains papers mainly focused on Data Warehousing and Mining, Machine Learning, Mobile and Ubiquitous Computing, AI, E-commerce & Distributed Computing and Soft Computing, Evolutionary Computing, Bio-inspired Computing and its Applications.

  6. Introduction of the UNIX International Performance Management Work Group

    Science.gov (United States)

    Newman, Henry

    1993-01-01

    In this paper we present the planned direction of the UNIX International Performance Management Work Group. This group consists of concerned system developers and users who have organized to synthesize recommendations for standard UNIX performance management subsystem interfaces and architectures. The purpose of these recommendations is to provide a core set of performance management functions that can be used to build tools by hardware system developers, vertical application software developers, and performance application software developers.

  7. IV International Conference on Computer Algebra in Physical Research. Collection of abstracts

    International Nuclear Information System (INIS)

    Rostovtsev, V.A.

    1990-01-01

    The abstracts of the reports presented at the IV International Conference on Computer Algebra in Physical Research are given. The capabilities of applying computers to algebraic computations in high energy physics and quantum field theory are discussed. Particular attention is paid to software for the REDUCE computer algebra system

  8. NINJA: a noninvasive framework for internal computer security hardening

    Science.gov (United States)

    Allen, Thomas G.; Thomson, Steve

    2004-07-01

    Vulnerabilities are a growing problem in both the commercial and government sectors. The latest vulnerability information compiled by CERT/CC for the year ending Dec. 31, 2002 reported 4129 vulnerabilities, representing a 100% increase over 2001 [1] (the 2003 report had not been published at the time of this writing). It doesn't take long to realize that the growth rate of vulnerabilities greatly exceeds the rate at which the vulnerabilities can be fixed. It also doesn't take long to realize that our nation's networks are growing less secure at an accelerating rate. As organizations become aware of vulnerabilities they may initiate efforts to resolve them, but quickly realize that the size of the remediation project is greater than their current resources can handle. In addition, many IT tools that suggest solutions to the problems in reality only address "some" of the vulnerabilities, leaving the organization unsecured and back to square one in searching for solutions. This paper proposes an auditing framework called NINJA (an acronym for Network Investigation Notification Joint Architecture) for noninvasive daily scanning/auditing based on common security vulnerabilities that repeatedly occur in a network environment. This framework is used for performing regular audits in order to harden an organization's security infrastructure. The framework is based on the results obtained by the Network Security Assessment Team (NSAT), which emulates adversarial computer network operations for US Air Force organizations. Auditing is the most time-consuming factor involved in securing an organization's network infrastructure. The framework discussed in this paper uses existing scripting technologies to maintain a security-hardened system at a defined level of performance as specified by the computer security audit team. Mobile agents, which were under development at the time of this writing, are used at a minimum to improve the noninvasiveness of our scans. In general, noninvasive

  9. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    International Nuclear Information System (INIS)

    Khaleel, Mohammad A.

    2009-01-01

    This report is an account of the deliberations and conclusions of the workshop on 'Forefront Questions in Nuclear Science and the Role of High Performance Computing' held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to (1) identify forefront scientific challenges in nuclear physics and then determine which-if any-of these could be aided by high performance computing at the extreme scale; (2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; (3) provide nuclear physicists the opportunity to influence the development of high performance computing; and (4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  10. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.

    2009-10-01

    This report is an account of the deliberations and conclusions of the workshop on "Forefront Questions in Nuclear Science and the Role of High Performance Computing" held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to 1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; 2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; 3) provide nuclear physicists the opportunity to influence the development of high performance computing; and 4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  11. Micromagnetics on high-performance workstation and mobile computational platforms

    Science.gov (United States)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms is presented, including multi-core Intel central processing units, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite-element-method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built as low-power computing clusters.

  12. Computational performance of a smoothed particle hydrodynamics simulation for shared-memory parallel computing

    Science.gov (United States)

    Nishiura, Daisuke; Furuichi, Mikito; Sakaguchi, Hide

    2015-09-01

    The computational performance of a smoothed particle hydrodynamics (SPH) simulation is investigated for three types of current shared-memory parallel computer devices: many integrated core (MIC) processors, graphics processing units (GPUs), and multi-core CPUs. We are especially interested in efficient shared-memory allocation methods for each chipset, because the efficient data access patterns differ between compute unified device architecture (CUDA) programming for GPUs and OpenMP programming for MIC processors and multi-core CPUs. We first introduce several parallel implementation techniques for the SPH code, and then examine these on our target computer architectures to determine the most effective algorithms for each processor unit. In addition, we evaluate the effective computing performance and power efficiency of the SPH simulation on each architecture, as these are critical metrics for overall performance in a multi-device environment. In our benchmark test, the GPU is found to produce the best arithmetic performance as a standalone device unit, and gives the most efficient power consumption. The multi-core CPU obtains the most effective computing performance. The computational speed of the MIC processor on Xeon Phi approached that of two Xeon CPUs. This indicates that using MICs is an attractive choice for existing SPH codes on multi-core CPUs parallelized by OpenMP, as it gains computational acceleration without the need for significant changes to the source code.
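
    To make the kernel loop concrete, here is a minimal serial sketch of the SPH density summation rho_i = sum_j m_j W(|r_i - r_j|, h), using one common kernel choice (the 2D cubic spline); the particle count and smoothing length are arbitrary assumptions. It is the outer, per-particle loop of exactly this kind of summation that the OpenMP and CUDA implementations discussed above distribute across threads:

```python
import numpy as np

def cubic_spline_w(r, h):
    """2D cubic-spline SPH kernel (a common choice; a modeling assumption here)."""
    q = r / h
    sigma = 10.0 / (7.0 * np.pi * h**2)
    return sigma * np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                   np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

def density(pos, mass, h):
    """Brute-force density summation; real codes use neighbor lists and split
    this per-particle loop across OpenMP threads or CUDA blocks."""
    diff = pos[:, None, :] - pos[None, :, :]   # pairwise displacements
    r = np.linalg.norm(diff, axis=-1)          # pairwise distances
    return (mass[None, :] * cubic_spline_w(r, h)).sum(axis=1)

pos = np.random.rand(500, 2)                   # 500 particles in the unit square
mass = np.full(500, 1.0 / 500)
rho = density(pos, mass, h=0.05)
```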

  13. High-performance scientific computing in the cloud

    Science.gov (United States)

    Jorissen, Kevin; Vila, Fernando; Rehr, John

    2011-03-01

    Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.

  14. XVI International symposium on nuclear electronics and VI International school on automation and computing in nuclear physics and astrophysics

    International Nuclear Information System (INIS)

    Churin, I.N.

    1995-01-01

    Reports and papers of the 16th International Symposium on Nuclear Electronics and the 6th International School on Automation and Computing in Nuclear Physics and Astrophysics are presented. The latest achievements in the development of fast-response electronic circuits designed for detecting and spectrometric facilities are covered. Particular attention is paid to systems for the acquisition, processing and storage of experimental data. Modern equipment designed for data communication in computer networks is also examined.

  15. Fifth generation computer systems. Proceedings of the International conference

    Energy Technology Data Exchange (ETDEWEB)

    Moto-oka, T

    1982-01-01

    The following topics were dealt with: Fifth Generation Computer Project-social needs and impact; knowledge information processing research plan; architecture research plan; knowledge information processing topics; fifth generation computer architecture considerations.

  16. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1991-03-15

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour.

  17. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour

  18. Optical interconnection networks for high-performance computing systems

    International Nuclear Information System (INIS)

    Biberman, Aleksandr; Bergman, Keren

    2012-01-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. (review article)

  19. Computational Fluid Dynamics and Building Energy Performance Simulation

    DEFF Research Database (Denmark)

    Nielsen, Peter V.; Tryggvason, Tryggvi

    An interconnection between a building energy performance simulation program and a Computational Fluid Dynamics program (CFD) for room air distribution will be introduced for improvement of the predictions of both the energy consumption and the indoor environment. The building energy performance...

  20. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2015-01-01

    A continuation of Contemporary High Performance Computing: From Petascale toward Exascale, this second volume continues the discussion of HPC flagship systems, major application workloads, facilities, and sponsors. The book includes figures and pictures that capture the state of existing systems: pictures of buildings, systems in production, floorplans, and many block diagrams and charts to illustrate system design and performance.

  1. Connecting Performance Analysis and Visualization to Advance Extreme Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bremer, Peer-Timo [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Mohr, Bernd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Schulz, Martin [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Pascucci, Valerio [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Gamblin, Todd [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Brunst, Holger [Dresden Univ. of Technology (Germany)]

    2015-07-29

    The characterization, modeling, analysis, and tuning of software performance has been a central topic in High Performance Computing (HPC) since its early beginnings. The overall goal is to make HPC software run faster on particular hardware, either through better scheduling, on-node resource utilization, or more efficient distributed communication.

  2. Human performance models for computer-aided engineering

    Science.gov (United States)

    Elkind, Jerome I. (Editor); Card, Stuart K. (Editor); Hochberg, Julian (Editor); Huey, Beverly Messick (Editor)

    1989-01-01

    This report discusses a topic important to the field of computational human factors: models of human performance and their use in computer-based engineering facilities for the design of complex systems. It focuses on a particular human factors design problem -- the design of cockpit systems for advanced helicopters -- and on a particular aspect of human performance -- vision and related cognitive functions. By focusing in this way, the authors were able to address the selected topics in some depth and develop findings and recommendations that they believe have application to many other aspects of human performance and to other design domains.

  3. The characteristics and performance of international joint ventures in Thailand

    OpenAIRE

    Suwannarat, P

    2010-01-01

    The importance of strategic alliances in the form of international joint ventures (IJVs) is growing in the present international business environment where competition is on a global scale. A review of the IJV literature, especially in developing countries, shows an over-emphasis on China and the NIEs (the first tier newly-industrialising economies: Taiwan, Singapore, Hong Kong, and South Korea). To date, relatively little attention has been paid to the ASEAN4 countries (the high-performing e...

  4. Survey of computer codes applicable to waste facility performance evaluations

    International Nuclear Information System (INIS)

    Alsharif, M.; Pung, D.L.; Rivera, A.L.; Dole, L.R.

    1988-01-01

    This study is an effort to review existing information that is useful for developing an integrated model to predict the performance of a radioactive waste facility. A summary description of 162 computer codes is given. The identified computer programs address the performance of waste packages, waste transport and equilibrium geochemistry, hydrological processes in unsaturated and saturated zones, and general waste facility performance assessment. Some programs also deal with thermal analysis, structural analysis, and special purposes. A number of these computer programs are being used by the US Department of Energy, the US Nuclear Regulatory Commission, and their contractors to analyze various aspects of waste package performance. Fifty-five of these codes were identified as potentially useful in the analysis of low-level radioactive waste facilities located above the water table. The code summaries include authors, identification data, model types, and pertinent references. 14 refs., 5 tabs

  5. Knock-limited performance of several internal coolants

    Science.gov (United States)

    Bellman, Donald R; Evvard, John C

    1945-01-01

    The effect of internal cooling on the knock-limited performance of AN-F-28 fuel was investigated in a CFR engine using the following internal coolants: (1) water, (2) methyl alcohol-water mixture, (3) ammonia-methyl alcohol-water mixture, (4) monomethylamine-water mixture, (5) dimethylamine-water mixture, and (6) trimethylamine-water mixture. Tests were run at inlet-air temperatures of 150 degrees and 250 degrees F to indicate the temperature sensitivity of the internal-coolant solutions.

  6. Routing performance analysis and optimization within a massively parallel computer

    Science.gov (United States)

    Archer, Charles Jens; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen

    2013-04-16

    An apparatus, program product and method optimize the operation of a massively parallel computer system by, in part, receiving actual performance data concerning an application executed by the plurality of interconnected nodes, and analyzing the actual performance data to identify an actual performance pattern. A desired performance pattern may be determined for the application, and an algorithm may be selected from among a plurality of algorithms stored within a memory, the algorithm being configured to achieve the desired performance pattern based on the actual performance data.
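
    The select-an-algorithm-from-measured-traffic idea this record describes (measure actual per-node message counts, classify the pattern, then pick a routing strategy intended to achieve the desired pattern) can be sketched roughly as follows. The pattern labels, threshold, and algorithm registry are illustrative assumptions, not the patented method:

```python
# Toy sketch: classify measured traffic, then look up a routing strategy.
def classify(traffic_matrix):
    """Label the measured node-to-node message counts (thresholds assumed)."""
    n = len(traffic_matrix)
    off_diag = [(i, j) for i in range(n) for j in range(n) if i != j]
    total = sum(traffic_matrix[i][j] for i, j in off_diag) or 1
    neighbor = sum(traffic_matrix[i][j] for i, j in off_diag if abs(i - j) == 1)
    return "nearest-neighbor" if neighbor / total > 0.8 else "all-to-all"

ALGORITHMS = {  # hypothetical registry of stored routing strategies
    "nearest-neighbor": "deterministic dimension-ordered routing",
    "all-to-all": "adaptive randomized routing",
}

measured = [[0, 90, 1], [88, 0, 92], [2, 91, 0]]   # fabricated message counts
print(ALGORITHMS[classify(measured)])   # -> deterministic dimension-ordered routing
```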

  7. Performance of International Medical Students In psychosocial medicine.

    Science.gov (United States)

    Huhn, D; Lauter, J; Roesch Ely, D; Koch, E; Möltner, A; Herzog, W; Resch, F; Herpertz, S C; Nikendei, C

    2017-07-10

    Particularly at the beginning of their studies, international medical students face a number of language-related, social and intercultural challenges. Thus, they perform more poorly than their local counterparts in written and oral examinations as well as in Objective Structured Clinical Examinations (OSCEs) in the fields of internal medicine and surgery. It is still unknown how international students perform in an OSCE in the field of psychosocial medicine compared to their local fellow students. All students (N = 1033) taking the OSCE in the field of psychosocial medicine and an accompanying written examination in their eighth or ninth semester between 2012 and 2015 were included in the analysis. The OSCE consisted of four different stations, in which students had to perform and manage a patient encounter with simulated patients suffering from 1) post-traumatic stress disorder, 2) schizophrenia, 3) borderline personality disorder and 4) either suicidal tendency or dementia. Students were evaluated by trained lecturers using global checklists assessing specific professional domains, namely building a relationship with the patient, conversational skills, anamnesis, as well as psychopathological findings and decision-making. International medical students scored significantly more poorly than their local peers. International students showed poorer results in clinical-practical exams in the field of psychosocial medicine, with conversational skills yielding the poorest scores. However, regarding factual and practical knowledge examined via a multiple-choice test, no differences emerged between international and local students. These findings have decisive implications for relationship building in the doctor-patient relationship.

  8. GPU-based high-performance computing for radiation therapy

    International Nuclear Information System (INIS)

    Jia, Xun; Jiang, Steve B; Ziegenhein, Peter

    2014-01-01

    Recent developments in radiation therapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development, and a tremendous amount of study has been conducted in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we first give a brief introduction to the GPU hardware structure and programming model. We then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms is also presented. (topical review)

  9. Evaluating the Implementation of International Computing Curricular in African Universities: A Design-Reality Gap Approach

    Science.gov (United States)

    Dasuki, Salihu Ibrahim; Ogedebe, Peter; Kanya, Rislana Abdulazeez; Ndume, Hauwa; Makinde, Julius

    2015-01-01

    Efforts are being made by universities in developing countries to ensure that their graduates are not left behind in the competitive global information society; thus, they have adopted international computing curricula for their computing degree programs. However, adopting these international curricula seems to be very challenging for developing countries…

  10. International Conference on Computer Science and Information Technologies

    CERN Document Server

    2017-01-01

    The book reports on new theories and applications in the field of intelligent systems and computing. It covers computational and artificial intelligence methods, as well as advances in computer vision, current issues in big data and cloud computing, computational linguistics, cyber-physical systems, and topics in intelligent information management. Written by active researchers, the different chapters are based on contributions presented at the workshop on intelligent systems and computing (ISC), held during CSIT 2016, September 6-9, and jointly organized by the Lviv Polytechnic National University, Ukraine, the Kharkiv National University of Radio Electronics, Ukraine, and the Technical University of Lodz, Poland, under the patronage of the Ministry of Education and Science of Ukraine. All in all, the book provides academics and professionals with extensive information and a timely snapshot of the field of intelligent systems, and it is expected to foster new discussions and collaborations among different groups. ...

  11. Computer Simulation Performed for Columbia Project Cooling System

    Science.gov (United States)

    Ahmad, Jasim

    2005-01-01

    This demo shows a high-fidelity simulation of the air flow in the main computer room housing the Columbia system (10,240 Intel Itanium processors). The simulation assessed the performance of the cooling system, identified deficiencies, and recommended modifications to eliminate them. It used two in-house software packages on NAS supercomputers: Chimera Grid Tools, to generate a geometric model of the computer room, and the OVERFLOW-2 code, for fluid and thermal simulation. This state-of-the-art technology can be easily extended to provide a general capability for air flow analyses in any modern computer room.

  12. Computed Tomographic Morphometry of the Internal Anatomy of Mandibular Second Primary Molars.

    Science.gov (United States)

    Kurthukoti, Ameet J; Sharma, Pranjal; Swamy, Dinesh Francis; Shashidara, R; Swamy, Elaine Barretto

    2015-01-01

    Need for the study: The most important procedure for successful endodontic treatment is the cleaning and shaping of the canal system. Understanding the internal anatomy of teeth provides valuable information that helps the clinician achieve higher clinical success during endodontic therapy. The aim was to evaluate, by computed tomography, the internal anatomy of mandibular second primary molars with respect to the number of canals, the cross-sectional shape of the canals, the cross-sectional area of the canals, and the root dentin thickness. A total of 31 mandibular second primary molars were subjected to computed tomographic evaluation in the transverse plane after being mounted in a prefabricated template. The images thus obtained were analyzed using De-winter Bio-wizard® software. All the samples demonstrated two canals in the mesial root, while the majority of the samples (65.48%) demonstrated two canals in the distal root. The cross-sectional images of the mesial canals demonstrated a round shape, while the distal canals demonstrated an irregular shape. The root dentin thickness was greatly reduced on the distal aspect of the mesial canals and the mesial aspect of the distal canals. The mandibular second primary molars demonstrated wide variation and complexity in their internal anatomy. A thorough understanding of the complexity of the root canal system is essential for understanding the principles and problems of shaping and cleaning, determining the apical limits and dimensions of canal preparations, and performing successful endodontic procedures.
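
    As a rough illustration of one measurement this kind of study relies on, the sketch below estimates a canal's cross-sectional area from a single CT slice by thresholding and counting pixels. The synthetic slice, threshold, and pixel spacing are all assumptions for illustration; the study's actual software and protocol are not reproduced here.

```python
# Sketch: cross-sectional area from one CT slice via threshold + pixel count.
import numpy as np

rng = np.random.default_rng(1)
slice_img = rng.normal(1000.0, 50.0, size=(64, 64))  # synthetic "dentin" values
slice_img[28:36, 28:36] = 80.0                       # low-density canal region

canal_mask = slice_img < 300.0    # intensity threshold (assumption)
pixel_mm = 0.1                    # pixel spacing in mm (assumption)
area_mm2 = canal_mask.sum() * pixel_mm**2
print(f"canal cross-sectional area: {area_mm2:.2f} mm^2")
```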

  13. 2012 International Conference on Human-centric Computing

    CERN Document Server

    Jin, Qun; Yeo, Martin; Hu, Bin; Human Centric Technology and Service in Smart Space, HumanCom 2012

    2012-01-01

    The theme of HumanCom is focused on the various aspects of human-centric computing for advances in computer science and its applications and provides an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of human-centric computing. In addition, the conference will publish high quality papers which are closely related to the various theories and practical applications in human-centric computing. Furthermore, we expect that the conference and its publications will be a trigger for further related research and technology improvements in this important subject.

  14. High-Performance Java Codes for Computational Fluid Dynamics

    Science.gov (United States)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale, computationally-intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  15. High Performance Computing Software Applications for Space Situational Awareness

    Science.gov (United States)

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative de-convolution (PCID) image enhancement software tool. Specifically, we have demonstrated an order-of-magnitude speed-up in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.

  16. Computational fluid dynamics applied to flows in an internal combustion engine

    Science.gov (United States)

    Griffin, M. D.; Diwakar, R.; Anderson, J. D., Jr.; Jones, E.

    1978-01-01

    The reported investigation is a continuation of studies conducted by Diwakar et al. (1976) and Griffin et al. (1976), who reported the first computational fluid dynamic results for the two-dimensional flowfield for all four strokes of a reciprocating internal combustion (IC) engine cycle. An analysis of rectangular and cylindrical three-dimensional engine models is performed. The working fluid is assumed to be inviscid air of constant specific heats. Calculations are carried out for a four-stroke IC engine flowfield in which detailed finite-rate chemical combustion of a gasoline-air mixture is included. The calculations remain basically inviscid, except that in some instances thermal conduction is included to allow a more realistic model of the localized sparking of the mixture. All the results of the investigation are obtained by means of an explicit time-dependent finite-difference technique, using a high-speed digital computer.

  17. Advances in Computer Science, Engineering & Applications : Proceedings of the Second International Conference on Computer Science, Engineering & Applications

    CERN Document Server

    Zizka, Jan; Nagamalai, Dhinaharan

    2012-01-01

    The International conference series on Computer Science, Engineering & Applications (ICCSEA) aims to bring together researchers and practitioners from academia and industry to focus on understanding computer science, engineering and applications and to establish new collaborations in these areas. The Second International Conference on Computer Science, Engineering & Applications (ICCSEA-2012), held in Delhi, India, during May 25-27, 2012, attracted many local and international delegates, presenting a balanced mixture of intellect and research from both the East and the West. Following a rigorous peer-review process, the best submissions were selected, leading to an exciting, rich and high-quality technical conference program, which featured high-impact presentations on the latest developments in various areas of computer science, engineering and applications research.

  19. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    Science.gov (United States)

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research, and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power in the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit; it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the
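
    The point that FLOPS alone is an incomplete metric can be illustrated by timing one compute-bound and one memory-bound kernel and reporting a different rate for each. This is a minimal sketch with arbitrary problem sizes, not the dissertation's benchmark suite:

```python
# Benchmark more than FLOPS: report both an arithmetic rate and a memory rate.
import time
import numpy as np

n = 2000
a = np.random.rand(n, n)
b = np.random.rand(n, n)

t0 = time.perf_counter()
c = a @ b                          # compute-bound kernel: ~2*n^3 flops
t_flops = time.perf_counter() - t0
print(f"matmul: {2 * n**3 / t_flops / 1e9:.1f} GFLOP/s")

x = np.random.rand(50_000_000)     # ~400 MB of float64
t0 = time.perf_counter()
y = x.copy()                       # memory-bound kernel: one read + one write pass
t_mem = time.perf_counter() - t0
print(f"copy bandwidth: {2 * x.nbytes / t_mem / 1e9:.1f} GB/s")
```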

  20. Some gender issues in educational computer use: results of an international comparative survey

    OpenAIRE

    Janssen Reinen, I.A.M.; Plomp, T.

    1993-01-01

    In the framework of the Computers in Education international study of the International Association for the Evaluation of Educational Achievement (IEA), data have been collected concerning the use of computers in 21 countries. This article examines some results regarding the involvement of women in the implementation and use of computers in the educational practice of elementary, lower secondary and upper secondary education in participating countries. The results show that in many countries ...

  1. A Perspective on Computational Human Performance Models as Design Tools

    Science.gov (United States)

    Jones, Patricia M.

    2010-01-01

    The design of interactive systems, including levels of automation, displays, and controls, is usually based on design guidelines and iterative empirical prototyping. A complementary approach is to use computational human performance models to evaluate designs. An integrated strategy of model-based and empirical test and evaluation activities is particularly attractive as a methodology for verification and validation of human-rated systems for commercial space. This talk will review several computational human performance modeling approaches and their applicability to design of display and control requirements.

  2. High performance computing and communications: FY 1997 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-12-01

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  3. Visualization and Data Analysis for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  4. Updating the INDAC computer application of internal dosimetry

    International Nuclear Information System (INIS)

    Bravo Perez-Tinao, B.; Marchena Gonzalez, P.; Sollet Sanudo, E.; Serrano Calvo, E.

    2013-01-01

    The initial objective of this project is to extend the INDAC application, currently used by the internal dosimetry services of the Spanish nuclear power plants and Tecnatom, for estimating effective doses from direct, or in-vivo, internal dosimetry of workers. (Author)

  5. Advances in Computing and Information Technology : Proceedings of the Second International

    CERN Document Server

    Nagamalai, Dhinaharan; Chaki, Nabendu

    2012-01-01

    The international conference on Advances in Computing and Information Technology (ACITY 2012) provides an excellent international forum for both academics and professionals to share knowledge and results in the theory, methodology and applications of Computer Science and Information Technology. The Second International Conference on Advances in Computing and Information Technology (ACITY 2012), held in Chennai, India, during July 13-15, 2012, covered a number of topics in all major fields of Computer Science and Information Technology, including: networking and communications, network security and applications, web and internet computing, ubiquitous computing, algorithms, bioinformatics, digital image processing and pattern recognition, artificial intelligence, soft computing and applications. Following a rigorous review process, a number of high-quality papers, presenting not only innovative ideas but also well-founded evaluations and strong argumentation of the same, were selected and collected in the present proceedings, ...

  6. Cultural Distance and the Performance of International Joint Ventures

    DEFF Research Database (Denmark)

    Christoffersen, Jeppe; Globerman, Steven; Nielsen, Bo Bernhard

    2013-01-01

    This study provides a critical summary and assessment of the empirical literature on the relationship between cultural distance and the performance of international joint ventures (IJVs) based on studies published over the period 1993-2008. The existing literature reports inconsistent and largely...

  7. Impact of English Proficiency on Academic Performance of International Students

    Science.gov (United States)

    Martirosyan, Nara M.; Hwang, Eunjin; Wanjohi, Reubenson

    2015-01-01

    Using an ex-post facto, non-experimental approach, this research examined the impact of English language proficiency and multilingualism on the academic performance of international students enrolled in a four-year university located in north central Louisiana in the United States. Data were collected through a self-reported questionnaire from 59…

  8. A Review of Antecedents of International Strategic Alliance Performance

    DEFF Research Database (Denmark)

    Christoffersen, Jeppe

    2013-01-01

    This paper provides a systematic review of 165 empirical studies on the antecedents of performance in international strategic alliances. It provides the most detailed display of definitions, rationales, measures and findings currently available. Hence, this state-of-the-art literature review...

  9. South African manufacturing performance in international perspective, 1970-1999

    NARCIS (Netherlands)

    Dijk, Michiel van

    2002-01-01

    This paper analyses the historical performance of the South African manufacturing sector in an international perspective. After a brief overview of the industrialisation process of South Africa during the 20th century, a binary comparison of manufacturing output and productivity between South Africa

  10. Visual Analysis of Cloud Computing Performance Using Behavioral Lines.

    Science.gov (United States)

    Muelder, Chris; Zhu, Biao; Chen, Wei; Zhang, Hongxin; Ma, Kwan-Liu

    2016-02-29

    Cloud computing is an essential technology for Big Data analytics and services. A cloud computing system is often comprised of a large number of parallel computing and storage devices. Monitoring the usage and performance of such a system is important for efficient operations, maintenance, and security. Tracing every application on a large cloud system is untenable due to scale and privacy issues, but profile data can be collected relatively efficiently by regularly sampling the state of the system, including properties such as CPU load, memory usage, network usage, and others, creating a set of multivariate time series for each system. Adequate tools for studying such large-scale, multidimensional data are lacking. In this paper, we present a visual analysis approach to understanding and analyzing the performance and behavior of cloud computing systems. Our design is based on similarity measures and a layout method to portray the behavior of each compute node over time. When visualizing a large number of behavioral lines together, distinct patterns often appear, suggesting particular types of performance bottleneck. The resulting system provides multiple linked views, which allow the user to interactively explore the data by examining the data or a selected subset at different levels of detail. Our case studies, which use datasets collected from two different cloud systems, show that this visual approach is effective in identifying trends and anomalies of the systems.
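
    A minimal sketch of the similarity idea underlying such behavioral lines: represent each node as a multivariate time series of sampled metrics, score pairwise distances, and flag the node least like the rest. The synthetic data, metric count, and plain Euclidean distance are assumptions for illustration, not the paper's measures:

```python
# Flag the compute node whose sampled behavior is least like its peers.
import numpy as np

rng = np.random.default_rng(0)
# nodes x timesteps x metrics (e.g., CPU load, memory usage, network usage)
profiles = rng.normal(size=(16, 120, 3))
profiles[7] += 2.5                    # inject one misbehaving node for the demo

flat = profiles.reshape(len(profiles), -1)                 # one vector per node
dist = np.linalg.norm(flat[:, None, :] - flat[None, :, :], axis=-1)
outlier = dist.mean(axis=1).argmax()  # node farthest from the rest on average
print(f"most anomalous node: {outlier}")
```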

  11. 8th International Conference on Computer Recognition Systems

    CERN Document Server

    Jackowski, Konrad; Kurzynski, Marek; Wozniak, Michał; Zolnierek, Andrzej

    2013-01-01

    Computer recognition systems are nowadays one of the most promising directions in artificial intelligence. This book is the most comprehensive study of this field. It contains a collection of 86 carefully selected articles contributed by experts in pattern recognition, reporting on current research with respect to both methodology and applications. In particular, it includes the following sections: biometrics; data stream classification and big data analytics; features, learning, and classifiers; image processing and computer vision; medical applications; miscellaneous applications; pattern recognition and image processing in robotics; and speech and word recognition. This book is a great reference tool for scientists who deal with the problems of designing computer pattern recognition systems. Its target readers can be researchers as well as students of computer science, artificial intelligence or robotics.

  12. 9th International Conference on Computer Recognition Systems

    CERN Document Server

    Jackowski, Konrad; Kurzyński, Marek; Woźniak, Michał; Żołnierek, Andrzej

    2016-01-01

    Computer recognition systems are nowadays one of the most promising directions in artificial intelligence. This book is the most comprehensive study of this field. It contains a collection of 79 carefully selected articles contributed by experts in pattern recognition, reporting on current research with respect to both methodology and applications. In particular, it includes the following sections: features, learning, and classifiers; biometrics; data stream classification and big data analytics; image processing and computer vision; medical applications; applications; and RGB-D perception: recent developments and applications. This book is a great reference tool for scientists who deal with the problems of designing computer pattern recognition systems. Its target readers can be researchers as well as students of computer science, artificial intelligence or robotics.

  13. 2nd International Conference on Mathematics and Computing

    CERN Document Server

    Chowdhury, Dipanwita; Giri, Debasis

    2015-01-01

    This book discusses recent developments and contemporary research in mathematics, statistics and their applications in computing. All contributing authors are eminent academicians, scientists, researchers and scholars in their respective fields, hailing from around the world. This is the second conference on mathematics and computing organized at Haldia Institute of Technology, India. The conference has emerged as a powerful forum, offering researchers a venue to discuss, interact and collaborate, and stimulating the advancement of mathematics and its applications in computer science. The book will allow aspiring researchers to update their knowledge of cryptography, algebra, frame theory, optimizations, stochastic processes, compressive sensing, functional analysis, complex variables, etc. Educating future consumers, users, producers, developers and researchers in mathematics and computing is a challenging task and essential to the development of modern society. Hence, mathematics and its applications in com...

  14. 5th International Conference on Fuzzy and Neuro Computing

    CERN Document Server

    Panigrahi, Bijaya; Das, Swagatam; Suganthan, Ponnuthurai

    2015-01-01

    These proceedings bring together contributions from researchers in academia and industry reporting the latest cutting-edge research in the areas of Fuzzy Computing, Neuro Computing and hybrid Neuro-Fuzzy Computing within the paradigm of Soft Computing. The FANCCO 2015 conference explored new application areas and novel hybrid algorithms for solving different real-world application problems. After a rigorous review of the 68 submissions from all over the world, the referee panel selected 27 papers to be presented at the conference. The accepted papers have a good, balanced mix of theory and applications. The techniques ranged from fuzzy neural networks, decision trees, spiking neural networks, self-organizing feature maps, support vector regression, adaptive neuro-fuzzy inference systems, extreme learning machines, fuzzy multi-criteria decision making, machine learning, web usage mining, Takagi-Sugeno inference systems, extended Kalman filters, Goedel-type logic, fuzzy formal concept analysis, biclustering e...

  15. The Effect of Internal Leakages on Thermal Performance in NPPs

    International Nuclear Information System (INIS)

    Heo, Gyun Young; Kim, Doo Won; Jang, Seok Bo

    2007-01-01

    Since the Balance Of Plant (BOP, limited to a turbine cycle in this study) does not contain radioactive material, regulatory authorities have not needed to concern themselves with it. As interest in safety and performance becomes more serious and extensive, efforts to control the level of safety and performance of a BOP have just begun or are about to begin. Performance standards and ageing management programs for the major equipment in a BOP are being developed, and regulatory requirements for tests and/or maintenance are being actively built up. There is also a probabilistic approach to quantifying the performance of a BOP, and a study on quantifying the rate of unanticipated shutdowns caused by careless maintenance and/or tests conducted in a BOP is ongoing. In this context, modeling of the entire BOP and methodologies for thermal performance analysis are must-have items as well. This study was carried out to secure fundamental capabilities for 1) detailed steady-state modeling of a BOP and 2) thermal performance analysis under various conditions. In particular, the paper focuses on the effect of internal leakages inside valves and FeedWater Heaters (FWHs). An internal leakage is flow through a path that should be isolated but that remains inside the system boundary of a BOP; for instance, leakage from one side of a valve seat to the other, or leakage through cracked tubes or tube-sheets in a heat exchanger. We made a BOP model of OPR1000 and investigated thermal performance under internal leakage in the Turbine Bypass Condenser Dump Valves (TBCDV) and FWHs.
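
    A first-order way to see why such an internal leak degrades thermal performance: steam that leaks past a turbine bypass valve reaches the condenser without expanding through the turbine, so it does no work. The sketch below quantifies that with made-up flow and enthalpy values (not OPR1000 data):

```python
# First-order turbine-output loss from a bypass leak; all numbers are assumptions.
mdot = 1000.0                   # total steam flow, kg/s
h_in, h_out = 2800e3, 2200e3    # turbine inlet/outlet specific enthalpy, J/kg

for leak_pct in (0.0, 0.5, 1.0, 2.0):
    # leaked fraction bypasses the turbine and produces no work
    work_mw = mdot * (1.0 - leak_pct / 100.0) * (h_in - h_out) / 1e6
    print(f"leak {leak_pct:4.1f}% -> turbine output {work_mw:6.1f} MW")
```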

  16. INTRANS. A computer code for the non-linear structural response analysis of reactor internals under transient loads

    International Nuclear Information System (INIS)

    Ramani, D.T.

    1977-01-01

    The 'INTRANS' system is a general-purpose computer code designed to perform linear and non-linear structural stress and deflection analysis of impacting or non-impacting nuclear reactor internals components coupled with the reactor vessel, shield building, and external as well as internal gapped-spring support systems. This paper describes a computational procedure for evaluating the dynamic response of reactor internals, discretised as a beam and lumped-mass structural system and subjected to external transient loads such as seismic and LOCA time-history forces. The procedure is outlined in the INTRANS code, which computes component flexibilities of a discrete lumped-mass planar model of reactor internals by idealising an assemblage of finite elements consisting of linear elastic beams with bending, torsional and shear stiffnesses, interacting with an external or internal, linear as well as non-linear, multi-gapped spring support system. The method of analysis is based on the displacement method, and the code uses the fourth-order Runge-Kutta numerical integration technique as the basis for solving the dynamic equilibrium equations of motion for the system. During the computation, the dynamic response of each lumped mass is calculated at each instant of time using the well-known step-by-step procedure: at any instant, the transient dynamic motions of the system are held stationary and based on the predicted motions and internal forces of the previous instant, from which the complete response at any time step of interest may then be computed. Using this iterative process, the relationship between motions and internal forces is satisfied step by step throughout the time interval.
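
    To make the integration scheme concrete, here is a single-degree-of-freedom sketch in the spirit of the description above: a lumped mass excited by a transient load, restrained by a one-sided spring that engages only after a clearance (gap) is closed, marched in time with classical fourth-order Runge-Kutta. All parameter values are illustrative assumptions, and this is not the INTRANS code:

```python
# Lumped mass with a one-sided gapped spring, integrated by classical RK4.
import numpy as np

m, k, gap = 1.0, 1.0e4, 0.01    # mass, support stiffness, clearance (assumed)

def accel(x, v, t):
    f_ext = 50.0 * np.sin(40.0 * t)              # transient load (assumption)
    f_gap = -k * (x - gap) if x > gap else 0.0   # spring engages past the gap
    return (f_ext + f_gap) / m

def rk4_step(x, v, t, dt):
    k1x, k1v = v, accel(x, v, t)
    k2x, k2v = v + 0.5*dt*k1v, accel(x + 0.5*dt*k1x, v + 0.5*dt*k1v, t + 0.5*dt)
    k3x, k3v = v + 0.5*dt*k2v, accel(x + 0.5*dt*k2x, v + 0.5*dt*k2v, t + 0.5*dt)
    k4x, k4v = v + dt*k3v, accel(x + dt*k3x, v + dt*k3v, t + dt)
    x += dt * (k1x + 2*k2x + 2*k3x + k4x) / 6.0
    v += dt * (k1v + 2*k2v + 2*k3v + k4v) / 6.0
    return x, v

x, v, t, dt = 0.0, 0.0, 0.0, 1.0e-4
for _ in range(10_000):          # march the response step by step
    x, v = rk4_step(x, v, t, dt)
    t += dt
print(f"displacement at t={t:.2f} s: {x:.5f}")
```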

  17. High performance computing and communications: FY 1996 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-05-16

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

  18. Impact of Information and Communication Technologies in International Negotiation Performance

    Directory of Open Access Journals (Sweden)

    Jose Alejandro Cano

    2015-08-01

    Objective – This article establishes relations between the level of importance of Information and Communication Technologies (ICT), the frequency of use of these tools, and the efficiency and efficacy achieved in international negotiation processes. Design/methodology/approach – A research study was carried out in 180 import and/or export firms in the city of Medellin, and the proposed relations are explained through a theoretical model. With the information obtained, correlation and comparative analyses of efficiency and efficacy indicators were made. Findings – ICT are essential to international negotiation processes; increases in the level of importance and the frequency of use of these tools are associated with better perceived results in efficiency and efficacy. Practical implications – Increasing the application of ICT to international negotiation processes reduces the cost and time of negotiation and increases international sales contracts; however, ICT must be complemented by other elements, such as the attitude, training and experience of the negotiator, to obtain satisfactory results. Originality/value – The article proposes an original model to study the effect of the level of importance and frequency of use of ICT on the performance of international negotiation processes.
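
    The correlation step named in the methodology can be sketched in a few lines; the survey scores below are fabricated stand-ins for the study's data, purely to show the computation:

```python
# Pearson correlation between ICT-use frequency and an efficiency indicator.
import numpy as np

ict_freq   = np.array([1, 2, 2, 3, 4, 4, 5, 5])  # hypothetical 1-5 survey scale
efficiency = np.array([2, 2, 3, 3, 4, 5, 4, 5])  # hypothetical indicator scores
r = np.corrcoef(ict_freq, efficiency)[0, 1]
print(f"Pearson r = {r:.2f}")
```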

  19. A strategy of the company performance in the international market

    Directory of Open Access Journals (Sweden)

    Savić Marko

    2014-01-01

    The aim of this paper is to determine an appropriate framework for the strategic performance of companies in the international market. A company that plans to enter the international market has to prepare well: it has to choose an adequate strategy, have a well-thought-out market approach, and develop a marketing program that supports its activities. The choice of an appropriate strategy for the company's entry into foreign markets affects the long-term stability of its market functions and determines the scope of operations abroad, through which the company can successfully develop its own abilities while incurring risks and uncertainties. The environment changes constantly, and awareness of these changes shapes the choice among the different options a company faces when selecting an international market entry strategy. Through country entry analysis and entry forms, the company should examine the options that have the greatest significance for its strategic objectives in the international market. Selecting the right strategy is the key issue facing a company that wants to enter the international market.

  20. Multi-Language Programming Environments for High Performance Java Computing

    OpenAIRE

    Vladimir Getov; Paul Gray; Sava Mintchev; Vaidy Sunderam

    1999-01-01

    Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community’s reliance on established scientific packages. As a consequence, programmers of high‐performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java‐to‐C Interface (JCI) tool which provides ...

  1. Computational modelling of expressive music performance in hexaphonic guitar

    OpenAIRE

    Siquier, Marc

    2017-01-01

    Computational modelling of expressive music performance has been widely studied in the past. While previous work in this area has mainly focused on classical piano music, there has been very little work on guitar music, and such work has focused on monophonic guitar playing. In this work, we present a machine learning approach to automatically generate expressive performances from non-expressive music scores for polyphonic guitar. We treated the guitar as a hexaphonic instrument, obtaining ...

  2. Enabling High-Performance Computing as a Service

    KAUST Repository

    AbdelBaky, Moustafa

    2012-10-01

    With the right software infrastructure, clouds can provide scientists with as-a-service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a representative HPC application. © 2012 IEEE.

  3. Computer science of the high performance; Informatica del alto rendimiento

    Energy Technology Data Exchange (ETDEWEB)

    Moraleda, A.

    2008-07-01

    High-performance computing is taking shape as a powerful accelerator of innovation, drastically reducing the waiting times for access to results and findings in a growing number of processes and activities as complex and important as medicine, genetics, pharmacology, the environment, natural resources management, and the simulation of complex processes in a wide variety of industries. (Author)

  4. Performativity, Fabrication and Trust: Exploring Computer-Mediated Moderation

    Science.gov (United States)

    Clapham, Andrew

    2013-01-01

    Based on research conducted in an English secondary school, this paper explores computer-mediated moderation as a performative tool. The Module Assessment Meeting (MAM) was the moderation approach under investigation. I mobilise ethnographic data generated by a key informant, and triangulated with that from other actors in the setting, in order to…

  5. Running Interactive Jobs on Peregrine | High-Performance Computing | NREL

    Science.gov (United States)

    This page provides instructions and examples for running interactive jobs, which give users a shell prompt on a compute node so that commands, scripts, and GUIs execute on that node rather than on a login node. The -V option

  6. Performance Evaluation of a Mobile Wireless Computational Grid ...

    African Journals Online (AJOL)

    This work developed and simulated a mathematical model for a mobile wireless computational Grid architecture using queueing network theory, in order to evaluate the performance of the load-balancing three-tier hierarchical configuration. The throughput and resource utilization metrics were measured and the ...
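
    As a toy example of the queueing-theoretic metrics named above, the sketch below computes utilization, mean number in system, and mean response time for a single M/M/1 node. The arrival and service rates are arbitrary assumptions, and the paper's three-tier model is not reproduced:

```python
# Closed-form M/M/1 performance metrics for one grid node.
def mm1_metrics(lam, mu):
    """Utilization, mean jobs in system, and mean response time for M/M/1."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu           # server utilization
    L = rho / (1.0 - rho)    # mean number of jobs in system
    W = L / lam              # mean response time, by Little's law
    return rho, L, W

rho, L, W = mm1_metrics(lam=8.0, mu=10.0)   # jobs/s in, jobs/s served (assumed)
print(f"utilization={rho:.2f}, jobs in system={L:.1f}, response time={W:.2f} s")
```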

  7. The performance of low-cost commercial cloud computing as an alternative in computational chemistry.

    Science.gov (United States)

    Thackston, Russell; Fortenberry, Ryan C

    2015-05-05

The growth of commercial cloud computing (CCC) as a viable means of computational infrastructure is largely unexplored for the purposes of quantum chemistry. In this work, the PSI4 suite of computational chemistry programs is installed on five different types of Amazon Web Services CCC platforms. The performance for a set of electronically excited state single-point energies is compared between these CCC platforms and typical, "in-house" physical machines. Further considerations are made for the number of cores or virtual CPUs (vCPUs, for the CCC platforms), but no considerations are made for full parallelization of the program (even though parallelization of the BLAS library is implemented), complete high-performance computing cluster utilization, or steal time. Even with this most pessimistic view of the computations, CCC resources are shown to be more cost effective for significant numbers of typical quantum chemistry computations. Large numbers of large computations are still best utilized by more traditional means, but smaller-scale research may be more effectively undertaken through CCC services. © 2015 Wiley Periodicals, Inc.
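
    To put the cost-effectiveness argument in concrete terms, the sketch below compares the amortized per-job cost of an owned machine against pay-as-you-go cloud pricing. Every number in it (hourly rate, purchase price, lifetime, utilization, runtime) is an assumption for illustration, not a figure from the paper.

```python
# Hypothetical per-job cost comparison between commercial cloud computing
# (CCC) and an in-house machine. All prices and runtimes are illustrative
# assumptions, not measurements from the study above.

def cloud_cost_per_job(hourly_rate, runtime_hours):
    """Pay-as-you-go cost of one computation on a cloud instance."""
    return hourly_rate * runtime_hours

def inhouse_cost_per_job(purchase_price, lifetime_years, utilization, runtime_hours):
    """Amortized hardware cost of one computation on an owned machine."""
    usable_hours = lifetime_years * 365 * 24 * utilization
    return purchase_price / usable_hours * runtime_hours

runtime = 6.0  # hours for one single-point energy job (assumed)
print(f"cloud:    ${cloud_cost_per_job(0.50, runtime):.2f} per job")
print(f"in-house: ${inhouse_cost_per_job(8000.0, 4, 0.6, runtime):.2f} per job")
```

    Under these assumptions the cloud job costs $3.00 and the owned machine $2.28 per job; the break-even point depends strongly on how well the owned hardware is utilized.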

  8. 7th International Conference on Bio-Inspired Computing : Theories and Applications

    CERN Document Server

    Singh, Pramod; Deep, Kusum; Pant, Millie; Nagar, Atulya

    2013-01-01

The book is a collection of high quality peer reviewed research papers presented at the Seventh International Conference on Bio-Inspired Computing (BIC-TA 2012) held at ABV-IIITM Gwalior, India. These research papers provide the latest developments in the broad area of "Computational Intelligence". The book discusses a wide variety of industrial, engineering and scientific applications of nature/bio-inspired computing and presents invited papers from the inventors/originators of novel computational techniques.

  9. Performance evaluation of scientific programs on advanced architecture computers

    International Nuclear Information System (INIS)

    Walker, D.W.; Messina, P.; Baille, C.F.

    1988-01-01

Recently a number of advanced architecture machines have become commercially available. These new machines promise better cost-performance than traditional computers, and some of them have the potential of competing with current supercomputers, such as the Cray X-MP, in terms of maximum performance. This paper describes an on-going project to evaluate a broad range of advanced architecture computers using a number of complete scientific application programs. The computers to be evaluated include distributed-memory machines such as the NCUBE, INTEL and Caltech/JPL hypercubes and the MEIKO computing surface; shared-memory, bus-architecture machines such as the Sequent Balance and the Alliant; very long instruction word machines such as the Multiflow Trace 7/200 computer; traditional supercomputers such as the Cray X-MP and Cray-2; and SIMD machines such as the Connection Machine. Currently 11 application codes from a number of scientific disciplines have been selected, although it is not intended to run all codes on all machines. Results are presented for two of the codes (QCD and missile tracking), and future work is proposed

  10. High-performance computing in accelerating structure design and analysis

    International Nuclear Information System (INIS)

    Li Zenghai; Folwell, Nathan; Ge Lixin; Guetz, Adam; Ivanov, Valentin; Kowalski, Marc; Lee, Lie-Quan; Ng, Cho-Kuen; Schussman, Greg; Stingelin, Lukas; Uplenchwar, Ravindra; Wolf, Michael; Xiao, Liling; Ko, Kwok

    2006-01-01

    Future high-energy accelerators such as the Next Linear Collider (NLC) will accelerate multi-bunch beams of high current and low emittance to obtain high luminosity, which put stringent requirements on the accelerating structures for efficiency and beam stability. While numerical modeling has been quite standard in accelerator R and D, designing the NLC accelerating structure required a new simulation capability because of the geometric complexity and level of accuracy involved. Under the US DOE Advanced Computing initiatives (first the Grand Challenge and now SciDAC), SLAC has developed a suite of electromagnetic codes based on unstructured grids and utilizing high-performance computing to provide an advanced tool for modeling structures at accuracies and scales previously not possible. This paper will discuss the code development and computational science research (e.g. domain decomposition, scalable eigensolvers, adaptive mesh refinement) that have enabled the large-scale simulations needed for meeting the computational challenges posed by the NLC as well as projects such as the PEP-II and RIA. Numerical results will be presented to show how high-performance computing has made a qualitative improvement in accelerator structure modeling for these accelerators, either at the component level (single cell optimization), or on the scale of an entire structure (beam heating and long-range wakefields)

  11. A study to establish international diagnostic reference levels for paediatric computed tomography

    International Nuclear Information System (INIS)

    Vassileva, J.; Rehani, M.; Kostova-Lefterova, D.; Al-Naemi, H.M.; Al Suwaidi, J.S.; Arandjic, D.; Bashier, E.H.O.; Kodlulovich Renha, S.; El-Nachef, L.; Aguilar, J.G.; Gershan, V.; Gershkevitsh, E.; Gruppetta, E.; Hustuc, A.; Jauhari, A.; Hassan Kharita, Mohammad; Khelassi-Toutaoui, N.; Khosravi, H.R.; Khoury, H.; Kralik, I.; Mahere, S.; Mazuoliene, J.; Mora, P.; Muhogora, W.; Muthuvelu, P.; Nikodemova, D.; Novak, L.; Pallewatte, A.; Pekarovic, D.; Shaaban, M.; Shelly, E.; Stepanyan, K.; Thelsy, N.; Visrutaratna, P.; Zaman, A.

    2015-01-01

The article reports results from the largest international dose survey in paediatric computed tomography (CT), covering 32 countries, and proposes international diagnostic reference levels (DRLs) in terms of the computed tomography dose index (CTDIvol) and dose-length product (DLP). It also assesses whether mean or median values of individual facilities should be used. A total of 6115 individual patient data were recorded among four age groups: <1 y, >1-5 y, >5-10 y and >10-15 y. CTDIw, CTDIvol and DLP from the CT console were recorded in dedicated forms together with patient data and technical parameters. Statistical analysis was performed, and international DRLs were established at rounded 75th percentile values of the distribution of median values from all CT facilities. The study presents evidence in favour of using the median rather than the mean of patient dose indices as the representative of typical local dose in a facility, and for establishing DRLs as the third quartile of median values. International DRLs were established for routine paediatric head, chest and abdomen CT examinations in the four age groups. DRLs for CTDIvol are similar to the reference values from other published reports, with some differences for chest and abdomen CT. Higher variations were observed between DLP values, based on a survey of whole multi-phase exams. It may be noted that other studies in the literature were based on a single phase only. DRLs reported in this article can be used in countries without sufficient medical physics support to identify non-optimised practice. Recommendations to improve the accuracy and importance of future surveys are provided. (authors)
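
    The DRL recipe described above (the median dose index per facility, then the rounded 75th percentile of those medians) is compact enough to sketch directly; the facility names and CTDIvol values below are invented for illustration, not survey data.

```python
# Sketch of the DRL procedure: per-facility medians, then the rounded
# 75th percentile of those medians. All dose values are fabricated
# illustrations, not survey data.
import numpy as np

facility_ctdivol = {  # mGy, one examination type and age group (assumed)
    "A": [18, 22, 20, 19],
    "B": [30, 28, 35, 31],
    "C": [24, 26, 23, 27],
    "D": [40, 38, 42, 39],
}

medians = np.array([np.median(v) for v in facility_ctdivol.values()])
drl = round(float(np.percentile(medians, 75)))
print(f"facility medians: {medians} mGy -> DRL = {drl} mGy")
```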

  12. 1st International Conference on Computational Intelligence in Data Mining

    CERN Document Server

    Behera, Himansu; Mandal, Jyotsna; Mohapatra, Durga

    2015-01-01

The contributed volume aims to explicate and address the difficulties and challenges of the seamless integration of two core disciplines of computer science, i.e., computational intelligence and data mining. Data Mining aims at the automatic discovery of underlying non-trivial knowledge from datasets by applying intelligent analysis techniques. The interest in this research area has grown considerably in recent years due to two key factors: (a) knowledge hidden in organizations’ databases can be exploited to improve strategic and managerial decision-making; (b) the large volume of data managed by organizations makes it impossible to carry out a manual analysis. The book addresses different methods and techniques of integration for enhancing the overall goal of data mining. The book helps to disseminate knowledge about some innovative, active research directions in the field of data mining, machine and computational intelligence, along with some current issues and applications of relate...

  13. Static Memory Deduplication for Performance Optimization in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Gangyong Jia

    2017-04-01

Full Text Available In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand for memory capacity and a subsequent increase in energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques, which reduce memory demand through page sharing, are being adopted. However, such techniques suffer from overheads in terms of the number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce the memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces the memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of response time is negligible.

  14. Static Memory Deduplication for Performance Optimization in Cloud Computing.

    Science.gov (United States)

    Jia, Gangyong; Han, Guangjie; Wang, Hao; Yang, Xuan

    2017-04-27

In a cloud computing environment, the number of virtual machines (VMs) on a single physical server and the number of applications running on each VM are continuously growing. This has led to an enormous increase in the demand for memory capacity and a subsequent increase in energy consumption in the cloud. Lack of enough memory has become a major bottleneck for scalability and performance of virtualization interfaces in cloud computing. To address this problem, memory deduplication techniques, which reduce memory demand through page sharing, are being adopted. However, such techniques suffer from overheads in terms of the number of online comparisons required for the memory deduplication. In this paper, we propose a static memory deduplication (SMD) technique which can reduce the memory capacity requirement and provide performance optimization in cloud computing. The main innovation of SMD is that the process of page detection is performed offline, thus potentially reducing the performance cost, especially in terms of response time. In SMD, page comparisons are restricted to the code segment, which has the highest shared content. Our experimental results show that SMD efficiently reduces the memory capacity requirement and improves performance. We demonstrate that, compared to other approaches, the cost in terms of response time is negligible.
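
    The heart of SMD, detecting identical pages offline and restricting comparisons to the code segment, can be sketched with a content-hashing pass. This is a simplified stand-in, not the authors' implementation: 4 KiB byte blocks play the role of physical pages and SHA-256 plays the role of the comparison.

```python
# Offline page-deduplication sketch in the spirit of SMD: hash each
# "page" of a code segment and group identical pages so each group
# could share one physical copy. Simplified illustration only.
import hashlib

PAGE_SIZE = 4096

def dedup_pages(segment: bytes):
    """Map page-content digest -> indices of pages with that content."""
    groups = {}
    for i in range(0, len(segment), PAGE_SIZE):
        digest = hashlib.sha256(segment[i:i + PAGE_SIZE]).hexdigest()
        groups.setdefault(digest, []).append(i // PAGE_SIZE)
    return groups

# Hypothetical code segment in which pages 0 and 2 are identical.
segment = b"\x90" * PAGE_SIZE + b"\xcc" * PAGE_SIZE + b"\x90" * PAGE_SIZE
groups = dedup_pages(segment)
sharable = sum(len(pages) - 1 for pages in groups.values())
print(f"{sharable} of {len(segment) // PAGE_SIZE} pages could be shared")
```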

  15. Physics Computing '92: Proceedings of the 4th International Conference

    Science.gov (United States)

    de Groot, Robert A.; Nadrchal, Jaroslav

    1993-04-01

    The Table of Contents for the book is as follows: * Preface * INVITED PAPERS * Ab Initio Theoretical Approaches to the Structural, Electronic and Vibrational Properties of Small Clusters and Fullerenes: The State of the Art * Neural Multigrid Methods for Gauge Theories and Other Disordered Systems * Multicanonical Monte Carlo Simulations * On the Use of the Symbolic Language Maple in Physics and Chemistry: Several Examples * Nonequilibrium Phase Transitions in Catalysis and Population Models * Computer Algebra, Symmetry Analysis and Integrability of Nonlinear Evolution Equations * The Path-Integral Quantum Simulation of Hydrogen in Metals * Digital Optical Computing: A New Approach of Systolic Arrays Based on Coherence Modulation of Light and Integrated Optics Technology * Molecular Dynamics Simulations of Granular Materials * Numerical Implementation of a K.A.M. Algorithm * Quasi-Monte Carlo, Quasi-Random Numbers and Quasi-Error Estimates * What Can We Learn from QMC Simulations * Physics of Fluctuating Membranes * Plato, Apollonius, and Klein: Playing with Spheres * Steady States in Nonequilibrium Lattice Systems * CONVODE: A REDUCE Package for Differential Equations * Chaos in Coupled Rotators * Symplectic Numerical Methods for Hamiltonian Problems * Computer Simulations of Surfactant Self Assembly * High-dimensional and Very Large Cellular Automata for Immunological Shape Space * A Review of the Lattice Boltzmann Method * Electronic Structure of Solids in the Self-interaction Corrected Local-spin-density Approximation * Dedicated Computers for Lattice Gauge Theory Simulations * Physics Education: A Survey of Problems and Possible Solutions * Parallel Computing and Electronic-Structure Theory * High Precision Simulation Techniques for Lattice Field Theory * CONTRIBUTED PAPERS * Case Study of Microscale Hydrodynamics Using Molecular Dynamics and Lattice Gas Methods * Computer Modelling of the Structural and Electronic Properties of the Supported Metal Catalysis

  16. Computational Analysis on Performance of Thermal Energy Storage (TES) Diffuser

    Science.gov (United States)

    Adib, M. A. H. M.; Adnan, F.; Ismail, A. R.; Kardigama, K.; Salaam, H. A.; Ahmad, Z.; Johari, N. H.; Anuar, Z.; Azmi, N. S. N.

    2012-09-01

Application of a thermal energy storage (TES) system reduces cost and energy consumption. The performance of the overall operation is affected by the diffuser design. In this study, computational analysis is used to determine the thermocline thickness. Three-dimensional simulations with different tank height-to-diameter ratios (HD), diffuser openings and numbers of diffuser holes are investigated. Simulations of medium-HD tanks with a double-ring octagonal diffuser show good thermocline behavior and a clear distinction between warm and cold water. The results show that the best thermocline thickness during 50% charging time occurs in the medium tank with a height-to-diameter ratio of 4.0 and a double-ring octagonal diffuser with 48 holes (9 mm opening, ~60%), which is acceptable compared with diffusers with 6 mm (~40%) and 12 mm (~80%) openings. In conclusion, computational analysis methods are very useful in studying the performance of thermal energy storage (TES).

  17. Computational Analysis on Performance of Thermal Energy Storage (TES) Diffuser

    International Nuclear Information System (INIS)

    Adib, M A H M; Ismail, A R; Kardigama, K; Salaam, H A; Ahmad, Z; Johari, N H; Anuar, Z; Azmi, N S N; Adnan, F

    2012-01-01

Application of a thermal energy storage (TES) system reduces cost and energy consumption. The performance of the overall operation is affected by the diffuser design. In this study, computational analysis is used to determine the thermocline thickness. Three-dimensional simulations with different tank height-to-diameter ratios (HD), diffuser openings and numbers of diffuser holes are investigated. Simulations of medium-HD tanks with a double-ring octagonal diffuser show good thermocline behavior and a clear distinction between warm and cold water. The results show that the best thermocline thickness during 50% charging time occurs in the medium tank with a height-to-diameter ratio of 4.0 and a double-ring octagonal diffuser with 48 holes (9 mm opening, ∼60%), which is acceptable compared with diffusers with 6 mm (∼40%) and 12 mm (∼80%) openings. In conclusion, computational analysis methods are very useful in studying the performance of thermal energy storage (TES).

  18. Benchmarking high performance computing architectures with CMS’ skeleton framework

    Science.gov (United States)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel’s Threading Building Blocks library, based on the measured memory and CPU overheads of the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures, machines such as Cori Phase 1&2, Theta, and Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  19. Rotary engine performance computer program (RCEMAP and RCEMAPPC): User's guide

    Science.gov (United States)

    Bartrand, Timothy A.; Willis, Edward A.

    1993-01-01

    This report is a user's guide for a computer code that simulates the performance of several rotary combustion engine configurations. It is intended to assist prospective users in getting started with RCEMAP and/or RCEMAPPC. RCEMAP (Rotary Combustion Engine performance MAP generating code) is the mainframe version, while RCEMAPPC is a simplified subset designed for the personal computer, or PC, environment. Both versions are based on an open, zero-dimensional combustion system model for the prediction of instantaneous pressures, temperature, chemical composition and other in-chamber thermodynamic properties. Both versions predict overall engine performance and thermal characteristics, including bmep, bsfc, exhaust gas temperature, average material temperatures, and turbocharger operating conditions. Required inputs include engine geometry, materials, constants for use in the combustion heat release model, and turbomachinery maps. Illustrative examples and sample input files for both versions are included.

  20. Performance analysis of a SOFC under direct internal reforming conditions

    Energy Technology Data Exchange (ETDEWEB)

    Janardhanan, Vinod M.; Deutschmann, Olaf [Institute for Chemical Technology and Polymer Chemistry, Engesserstr 20, D-76131 Karlsruhe, University of Karlsruhe (Germany); Heuveline, Vincent [Institute for Applied and Numerical Mathematics, Kaiserstr. 12, D-76128 Karlsruhe (Germany)

    2007-10-11

This paper presents the performance analysis of a planar solid-oxide fuel cell (SOFC) under direct internal reforming conditions. A detailed solid-oxide fuel cell model is used to study the influences of various operating parameters on cell performance. Significant differences in efficiency and power density are observed for isothermal and adiabatic operational regimes. The influence of air number, specific catalyst area, anode thickness, steam-to-carbon (s/c) ratio of the inlet fuel, and extent of pre-reforming on cell performance is analyzed. In all cases except for the case of pre-reformed fuel, adiabatic operation results in lower performance compared to isothermal operation. It is further discussed that, though direct internal reforming may lead to cost reduction and increased efficiency by effective utilization of waste heat, the efficiency of the fuel cell itself is higher for pre-reformed fuel compared to non-reformed fuel. Furthermore, criteria for the choice of optimal operating conditions for cell stacks operating under direct internal reforming conditions are discussed. (author)

  1. Performance analysis of a SOFC under direct internal reforming conditions

    Science.gov (United States)

    Janardhanan, Vinod M.; Heuveline, Vincent; Deutschmann, Olaf

This paper presents the performance analysis of a planar solid-oxide fuel cell (SOFC) under direct internal reforming conditions. A detailed solid-oxide fuel cell model is used to study the influences of various operating parameters on cell performance. Significant differences in efficiency and power density are observed for isothermal and adiabatic operational regimes. The influence of air number, specific catalyst area, anode thickness, steam-to-carbon (s/c) ratio of the inlet fuel, and extent of pre-reforming on cell performance is analyzed. In all cases except for the case of pre-reformed fuel, adiabatic operation results in lower performance compared to isothermal operation. It is further discussed that, though direct internal reforming may lead to cost reduction and increased efficiency by effective utilization of waste heat, the efficiency of the fuel cell itself is higher for pre-reformed fuel compared to non-reformed fuel. Furthermore, criteria for the choice of optimal operating conditions for cell stacks operating under direct internal reforming conditions are discussed.

  2. A performance model for the communication in fast multipole methods on high-performance computing platforms

    KAUST Repository

    Ibeid, Huda; Yokota, Rio; Keyes, David E.

    2016-01-01

... model and the actual communication time on four high-performance computing (HPC) systems, when latency, bandwidth, network topology, and multicore penalties are all taken into account. To our knowledge, this is the first formal characterization ...
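
    The abstract above is truncated, but communication models of this kind usually start from a latency-bandwidth ("alpha-beta") cost per message. The sketch below is that generic textbook model, not the authors' FMM-specific one; the latency, bandwidth, and message sizes are assumed values.

```python
# Generic latency-bandwidth ("alpha-beta") communication-time estimate.
# NOT the authors' FMM model; alpha, beta, and message sizes are assumptions.

def comm_time(message_sizes, alpha=2e-6, beta=1.0 / 10e9):
    """alpha: per-message latency [s]; beta: seconds per byte (1/bandwidth)."""
    return sum(alpha + beta * nbytes for nbytes in message_sizes)

# Aggregation matters: 1000 x 1 KiB messages pay the latency 1000 times.
small = comm_time([1024] * 1000)
bulk = comm_time([1024 * 1000])
print(f"1000 x 1 KiB: {small * 1e3:.3f} ms; one ~1 MB message: {bulk * 1e3:.3f} ms")
```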

  3. [The Computer Book of the Internal Medicine resident: validity and reliability of a questionnaire for self-assessment of competences in internal medicine residents].

    Science.gov (United States)

    Oristrell, J; Casanovas, A; Jordana, R; Comet, R; Gil, M; Oliva, J C

    2012-12-01

There are no simple and validated instruments for evaluating the training of specialists. To analyze the reliability and validity of a computerized self-assessment method to quantify the acquisition of medical competences during the Internal Medicine residency program. All residents of our department participated in the study during a period of 28 months. Twenty-two questionnaires, one specific to each rotation (the Computer Book of the Internal Medicine Resident), were constructed with items (questions) corresponding to three competence domains: clinical skills, communication skills and teamwork. Reliability was analyzed by measuring the internal consistency of the items in each competence domain using Cronbach's alpha index. Validation was performed by comparing mean scores in each competence domain between senior and junior residents. Cut-off levels of competence scores were established in order to identify the strengths and weaknesses of our training program. Finally, self-assessment values were correlated with the evaluations of the medical staff. There was a high internal consistency of the items for clinical skills, communication skills and teamwork. Senior residents obtained higher scores than junior residents in clinical skills and communication skills, but not in teamwork. The Computer Book of the Internal Medicine Resident identified the strengths and weaknesses of our training program. We did not observe any correlation between the results of the self-evaluations and the evaluations made by staff physicians. The items of the Computer Book of the Internal Medicine Resident showed high internal consistency and made it possible to measure the acquisition of medical competences in a team of Internal Medicine residents. This self-assessment method should be complemented with other evaluation methods in order to assess the acquisition of medical competences by an individual resident. Copyright © 2012 Elsevier España.

  4. Scalability of DL_POLY on High Performance Computing Platform

    CSIR Research Space (South Africa)

    Mabakane, Mabule S

    2017-12-01

Full Text Available SACJ 29(3) December... when using many processors within the compute nodes of the supercomputer. The type of processors in the compute nodes and their memory also play an important role in the overall performance of a parallel application running on a supercomputer. DL...

  5. Component-based software for high-performance scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  6. Performance measurements in 3D ideal magnetohydrodynamic stability computations

    International Nuclear Information System (INIS)

    Anderson, D.V.; Cooper, W.A.; Gruber, R.; Schwenn, U.

    1989-10-01

The 3D ideal magnetohydrodynamic stability code TERPSICHORE has been designed to take advantage of the vector and microtasking capabilities of the latest CRAY computers. To keep the number of operations small, the most efficient algorithms have been applied in each computational step. The program investigates the stability properties of fusion-reactor-relevant plasma configurations confined by magnetic fields. For a typical 3D HELIAS configuration that has been considered, we obtain an overall performance in excess of 1 Gflops on an eight-processor CRAY Y-MP machine. (author) 3 figs., 1 tab., 11 refs

  7. Component-based software for high-performance scientific computing

    International Nuclear Information System (INIS)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly

  8. Nuclear forces and high-performance computing: The perfect match

    International Nuclear Information System (INIS)

    Luu, T; Walker-Loud, A

    2009-01-01

    High-performance computing is now enabling the calculation of certain hadronic interaction parameters directly from Quantum Chromodynamics, the quantum field theory that governs the behavior of quarks and gluons and is ultimately responsible for the nuclear strong force. In this paper we briefly describe the state of the field and show how other aspects of hadronic interactions will be ascertained in the near future. We give estimates of computational requirements needed to obtain these goals, and outline a procedure for incorporating these results into the broader nuclear physics community.

  9. Computers live on in Colombian classrooms | IDRC - International ...

    International Development Research Centre (IDRC) Digital Library (Canada)

    2011-02-08

... its Windows operating system and Office applications suite, and there is a wealth ... The key to the success of the program in so short a time is political will, ... is a fraction of the real cost to provide computers to 2 000 schools.

  10. Ninth international conference on computational linguistics Coling 82

    Energy Technology Data Exchange (ETDEWEB)

    1983-01-01

This paper presents the summary reports given at the concluding session, evaluating the state of the art, trends and perspectives as reflected in the papers presented at Coling 82 in six domains: machine translation, grammatico-semantic analysis, linguistics in its relations to computational linguistics, question answering, artificial intelligence and knowledge representation, and information retrieval and linguistic data bases.

  11. Computational hybrid anthropometric paediatric phantom library for internal radiation dosimetry

    DEFF Research Database (Denmark)

    Xie, Tianwu; Kuster, Niels; Zaidi, Habib

    2017-01-01

... for children demonstrated that they follow the same trend when correlated with age. The constructed hybrid computational phantom library opens up the prospect of comprehensive radiation dosimetry calculations and risk assessment for the paediatric population of different age groups and diverse anthropometric...

  12. Internal and external use of performance information in public organisations: Results from an international executive survey

    NARCIS (Netherlands)

    G. Hammerschmid (Gerhard); S.G.J. Van de Walle (Steven); V. Štimac (Vid)

    2013-01-01

This paper analyses determinants of public managers' internal and external use of performance information. Using a sample of over 3100 top public sector executives in six European countries, we find evidence for significant country variations, with a more limited use of

  13. Computer simulation study International Container Terminal "Tanjung Perak", Surabaya, Indonesia

    NARCIS (Netherlands)

    Groenveld, R.; Wanders, S.

    1998-01-01

The Tanjung Perak harbour of the city of Surabaya on the island of Java, Indonesia, has experienced considerable growth in container traffic. In order to deal adequately with the expected continuing increase of container traffic in the future, the International container terminal is presently being

  14. Computer-aided performance monitoring program at Diablo Canyon

    International Nuclear Information System (INIS)

    Nelson, T.; Glynn, R. III; Kessler, T.C.

    1992-01-01

This paper describes the thermal performance monitoring program at Pacific Gas & Electric Company's (PG&E's) Diablo Canyon Nuclear Power Plant. The plant performance monitoring program at Diablo Canyon uses the THERMAC performance monitoring and analysis computer software provided by Expert-EASE Systems. THERMAC is used to collect performance data from the plant process computers, condition that data to adjust for measurement errors and missing data points, evaluate cycle and component-level performance, archive the data for trend analysis and generate performance reports. The current status of the program is that, after a fair amount of "tuning" of the basic "thermal kit" models provided with the initial THERMAC installation, we have successfully baselined both units to cycle isolation test data from previous reload cycles. Over the course of the past few months, we have accumulated enough data to generate meaningful performance trends and, as a result, have been able to use THERMAC to track a condenser fouling problem that was costing enough megawatts to attract corporate-level attention. Trends from THERMAC clearly related the megawatt loss to a steadily degrading condenser cleanliness factor and verified the subsequent gain in megawatts after the condenser was cleaned. In the future, we expect to rebaseline THERMAC to a beginning-of-cycle (BOC) data set and to use the program to help track feedwater nozzle fouling

  15. Sustainable Innovation, Management Accounting and Control Systems, and International Performance

    Directory of Open Access Journals (Sweden)

    Ernesto Lopez-Valeiras

    2015-03-01

Full Text Available This study analyzes how Management Accounting and Control Systems (MACS) facilitate the appropriation of the benefits of sustainable innovations in organizations. In particular, this paper examines the moderating role of different types of MACS in the relationships between sustainable innovation and international performance at an organizational level. We collected survey data from 123 Spanish and Portuguese organizations. Partial Least Squares was used to analyze the data. Results show that the effect of sustainable innovations on international performance is enhanced by contemporary rather than traditional types of MACS. Overall, our findings show that MACS can help managers to develop and monitor organizational activities (e.g., customer services and distribution activities), which support the appropriation of the potential benefits from sustainable innovation. This paper responds to recent calls for in-depth studies of the organizational mechanisms that may enhance the success of sustainable innovation.

  16. The Effect of Gender Equality on International Soccer Performance

    DEFF Research Database (Denmark)

    Bredtmann, Julia; Crede, Carsten J.; Otten, Sebastian

    2014-01-01

In this paper, we propose a new estimation strategy that uses the variation in success between the male and the female national soccer team within a country to identify the causal impact of gender equality on women’s soccer performance. In particular, we analyze whether within-country variations in labor force participation rates and life expectancies between the genders, which serve as measures for the country’s gender equality, are able to explain differences in the international success of male and female national soccer teams. Our results reveal that differences in male and female labor force participation rates and life expectancies are able to explain the international soccer performance of female teams, but not that of male teams, suggesting that gender equality is an important driver of female sport success.

  17. Comparative analysis of quality control tests on computed tomography in accordance with national and international laws

    International Nuclear Information System (INIS)

    Ramos, Fernando S.; Vasconcelos, Rebeca S.; Goncalves, Marcel S.; Oliveira, Marcus V.L. de

    2014-01-01

The objective of this study is to perform a comparative analysis between the Brazilian legislation and international protocols with respect to quality control tests for computed tomography. We used seven references, published from 1998 to 2012: the Brazilian protocols Portaria 453/98 SVS/MS and the Guia de Radiodiagnostico Medico da ANVISA; Quality Assurance Programme for Computed Tomography: Diagnostic and Therapy Applications of the IAEA; the European protocol European Guidelines on Quality Criteria for Computed Tomography, EUR No. 16262 EN; Radiation Protection No. 162 - Criteria for Acceptability of Medical Radiology, Nuclear Medicine and Radiotherapy of the European Commission; the Protocolos de Control de Calidad en Radiodiagnostico IAEA/ARCAL XLIX; and the Protocolo Espanol de Control de Calidad en Radiodiagnostico. The comparative analysis of these documents was based on the tolerances/limits, frequencies and objectives of the recommended tests. Eighteen tests were found in the Brazilian legislation. The tests were grouped according to their nature (dosimetric/exposure tests, geometric tests and image quality tests). Among the evaluated protocols, divergences were identified between the tests contained in the documents and the assessment criteria considered in this work. Moreover, certain documents do not specify tolerances, well-defined methodologies, or even testing frequencies. We conclude that the current legislation in Brazil differs in certain respects from the international protocols analyzed, although it includes a large number of quality control tests. However, it is necessary that the Brazilian legislation take technological advances into account in a timely manner

  18. Overview of Parallel Platforms for Common High Performance Computing

    Directory of Open Access Journals (Sweden)

    T. Fryza

    2012-04-01

Full Text Available The paper deals with various parallel platforms used for high-performance computing in the signal processing domain. More precisely, methods exploiting multicore central processing units, such as the Message Passing Interface and OpenMP, are taken into account. The properties of the programming methods are experimentally verified in the application of a fast Fourier transform and a discrete cosine transform, and they are compared with the possibilities of MATLAB's built-in functions and of Texas Instruments digital signal processors with very long instruction word architectures. New FFT and DCT implementations were proposed and tested. The implementation phase was compared with CPU-based computing methods and with the possibilities of the Texas Instruments digital signal processing library on C6747 floating-point DSPs. The optimal combination of computing methods in the signal processing domain and the implementation of new, fast routines are proposed as well.
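
    To reproduce the flavour of such measurements on a single node, the snippet below times an FFT kernel at a few lengths. It is a NumPy illustration under assumed sizes and repeat counts, not the paper's MPI/OpenMP or Texas Instruments DSP benchmark code.

```python
# Minimal FFT timing loop, illustrative only.
import time
import numpy as np

def bench_fft(n, repeats=50):
    """Average wall-clock time of one length-n FFT."""
    x = np.random.rand(n).astype(np.complex128)
    t0 = time.perf_counter()
    for _ in range(repeats):
        np.fft.fft(x)
    return (time.perf_counter() - t0) / repeats

for n in (2**10, 2**14, 2**18):
    print(f"FFT length {n:7d}: {bench_fft(n) * 1e6:9.1f} us per transform")
```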

  19. Heat exchanger performance analysis programs for the personal computer

    International Nuclear Information System (INIS)

    Putman, R.E.

    1992-01-01

Numerous utility industry heat exchange calculations are repetitive and thus lend themselves to being performed on a personal computer. These programs may be regarded as engineering tools which, when put together, can form a Toolbox. However, the practicing Results Engineer in the utility industry desires programs that are not only robust and easy to use but can also be used on both desktop and laptop PCs. The latter also offer the opportunity to take the computer into the plant or control room and use it there to process test or operating data right on the spot. Most programs evolve through the needs which arise in the course of day-to-day work. This paper describes several of the more useful programs of this type and outlines some of the guidelines to be followed when designing personal computer programs for use by the practicing Results Engineer

  20. Proceedings of the 2011 2nd International Congress on Computer Applications and Computational Science

    CERN Document Server

    Nguyen, Quang

    2012-01-01

The latest inventions in computer technology influence most human daily activities. In the near future, there is a tendency that all aspects of human life will depend on computer applications. In manufacturing, robotics and automation have become vital for high-quality products. In education, the model of teaching and learning is focusing more on electronic media than on traditional ones. Issues related to energy savings and the environment are becoming critical.   Computational Science should enhance the quality of human life, not merely solve problems. Computational Science should help humans make wise decisions by presenting choices and their possible consequences. Computational Science should help us make sense of observations, understand natural language, and plan and reason with extensive background knowledge. Intelligence with wisdom is perhaps an ultimate goal for human-oriented science.   This book is a compilation of some recent research findings in computer application and computational sci...

  1. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1988-08-01

Fermilab's Advanced Computer Program (ACP) has been developing highly cost-effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors, each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN- or C-programmable pipelined 20 MFlops (peak), 10 MByte single-board computer. These are plugged into a 16-port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256-node, 5 GFlop system is under construction. 10 refs., 7 figs

  2. International Conference on Soft Computing in Information Communication Technology

    CERN Document Server

    Soft Computing in Information Communication Technology

    2012-01-01

This is a collection of the accepted papers concerning soft computing in information communication technology. All accepted papers were subjected to strict peer review by two expert referees. The resulting dissemination of the latest research results, and the exchange of views concerning future research directions in this field, make the work of immense value to all those having an interest in the topics covered. The present book represents a cooperative effort to seek out the best strategies for effecting improvements in the quality and reliability of Neural Networks, Swarm Intelligence, Evolutionary Computing, Image Processing, Internet Security, Data Security, Data Mining, Network Security and Protection of data and Cyber laws. Our sincere appreciation and thanks go to these authors for their contributions to this conference. I hope you can gain lots of useful information from the book.

  3. Microdosimetry computation code of internal sources - MICRODOSE 1

    International Nuclear Information System (INIS)

    Li Weibo; Zheng Wenzhong; Ye Changqing

    1995-01-01

This paper describes a microdosimetry computation code, MICRODOSE 1, on the basis of the following methods: (1) the method of calculating f1(z) for charged particles in unit-density tissues; (2) the method of calculating f(z) for a point source; (3) the method of applying Fourier transform theory to the calculation of the compound Poisson process; (4) the method of using the fast Fourier transform technique to determine f(z). Some computed examples based on the code MICRODOSE 1 are given, including alpha particles emitted from 239Pu in alveolar lung tissue and from the radon progeny RaA and RaC in the human respiratory tract. (author). 13 refs., 6 figs.
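
    Steps (3) and (4), the Fourier-transform route to the compound Poisson process, admit a compact sketch: with a discretized single-event spectrum p1 and mean event number n, the multi-event distribution is the inverse FFT of exp(n*(FFT(p1) - 1)). The grid and the toy single-event peak below are assumptions for illustration, not MICRODOSE 1 input.

```python
# FFT evaluation of the compound Poisson distribution f(z) from a
# single-event spectrum, as in steps (3)-(4) above. Toy data only.
import numpy as np

def compound_poisson(p1, mean_events):
    """p1: discretized single-event probability vector (sums to 1)."""
    phi1 = np.fft.fft(p1)                     # characteristic function of one event
    phi = np.exp(mean_events * (phi1 - 1.0))  # compound Poisson relation
    return np.real(np.fft.ifft(phi))          # multi-event distribution

nbins = 1024
z = np.arange(nbins) * 0.01                   # specific energy grid [Gy] (assumed)
p1 = np.exp(-((z - 2.0) ** 2) / 0.5)          # toy single-event peak near 2 Gy
p1 /= p1.sum()
f = compound_poisson(p1, mean_events=3.0)
print(f"P(no event) = {f[0]:.4f}, expected exp(-3) = {np.exp(-3.0):.4f}")
```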

  4. 1st International Conference on Intelligent Computing, Communication and Devices

    CERN Document Server

    Patnaik, Srikanta; Ichalkaranje, Nikhil

    2015-01-01

In the history of mankind, three revolutions which have impacted human life are the tool-making revolution, the agricultural revolution and the industrial revolution. They have transformed not only the economy and civilization but the overall development of society. The intelligence revolution is probably the next one, which society will perceive in the next 10 years. ICCD-2014 covers all dimensions of the intelligent sciences, i.e. Intelligent Computing, Intelligent Communication and Intelligent Devices. This volume covers contributions from Intelligent Communication, in areas such as Communications and Wireless Ad Hoc & Sensor Networks, Speech & Natural Language Processing, including Signal, Image and Video Processing and Mobile broadband and Optical networks, which are the key to ground-breaking inventions in intelligent communication technologies. Secondly, an Intelligent Device is any type of equipment, instrument, or machine that has its own computing capability. Contributions from ...

  5. Simple, parallel, high-performance virtual machines for extreme computations

    International Nuclear Information System (INIS)

    Chokoufe Nejad, Bijan; Ohl, Thorsten; Reuter, Jurgen

    2014-11-01

We introduce a high-performance virtual machine (VM) written in a numerically fast language like Fortran or C to evaluate very large expressions. We discuss the general concept of how to perform computations in terms of a VM and present specifically a VM that is able to compute tree-level cross sections for any number of external legs, given the corresponding byte code from the optimal matrix element generator O'Mega. Furthermore, this approach makes it possible to formulate the parallel computation of a single phase space point in a simple and obvious way. We analyze the scaling behaviour with multiple threads as well as the benefits and drawbacks that are introduced with this method. Our implementation of a VM can run faster than the corresponding native, compiled code for certain processes and compilers, especially for very high multiplicities, and in general has runtimes of the same order of magnitude. By avoiding the tedious compile and link steps, which may fail for source code files of gigabyte sizes, new processes or complex higher order corrections that are currently out of reach could be evaluated with a VM given enough computing power.
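
    The essence of such a VM, interpreting byte code on a stack instead of compiling source, fits in a few lines. The opcode set and example program below are invented for illustration; the paper's VM interprets O'Mega matrix-element byte code and is written in a fast compiled language.

```python
# Toy stack-based byte-code interpreter; illustrative opcode set only.
def run(bytecode, inputs):
    stack = []
    for op, arg in bytecode:
        if op == "push":      # push a literal constant
            stack.append(arg)
        elif op == "load":    # push an external input by index
            stack.append(inputs[arg])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return stack.pop()

# Evaluate (x0 + x1) * 2.5 for x = (3.0, 4.0) -> 17.5
program = [("load", 0), ("load", 1), ("add", None), ("push", 2.5), ("mul", None)]
print(run(program, inputs=(3.0, 4.0)))
```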

  6. International Conference on Soft Computing Techniques and Engineering Application

    CERN Document Server

    Li, Xiaolong

    2014-01-01

    The main objective of ICSCTEA 2013 is to provide a platform for researchers, engineers and academicians from all over the world to present their research results and development activities in soft computing techniques and engineering application. This conference provides opportunities for them to exchange new ideas and application experiences face to face, to establish business or research relations and to find global partners for future collaboration.

7. Internationalization of Malaysian mathematical and computer science journals

    OpenAIRE

    Zainab, A. N.

    2008-01-01

The internationalization characteristics of two Malaysian journals, the Bulletin of the Malaysian Mathematical Sciences Society (indexed by ISI) and the Malaysian Journal of Computer Science (indexed by Inspec and Scopus), are observed. All issues for the years 2000 to 2007 were examined to obtain the following information: (i) total articles published between 2000 and 2007; (ii) the distribution of foreign and Malaysian authors publishing in the journals; (iii) the distribution of articles by c...

  8. 2nd International Conference on Soft Computing and Data Mining

    CERN Document Server

    Ghazali, Rozaida; Nawi, Nazri; Deris, Mustafa

    2017-01-01

    This book provides a comprehensive introduction and practical look at the concepts and techniques readers need to get the most out of their data in real-world, large-scale data mining projects. It also guides readers through the data-analytic thinking necessary for extracting useful knowledge and business value from the data. The book is based on the Soft Computing and Data Mining (SCDM-16) conference, which was held in Bandung, Indonesia on August 18th–20th 2016 to discuss the state of the art in soft computing techniques, and offer participants sufficient knowledge to tackle a wide range of complex systems. The scope of the conference is reflected in the book, which presents a balance of soft computing techniques and data mining approaches. The two constituents are introduced to the reader systematically and brought together using different combinations of applications and practices. It offers engineers, data analysts, practitioners, scientists and managers the insights into the concepts, tools and techni...

  9. International conference on Advances in Intelligent Control and Innovative Computing

    CERN Document Server

    Castillo, Oscar; Huang, Xu; Intelligent Control and Innovative Computing

    2012-01-01

    In the lightning-fast world of intelligent control and cutting-edge computing, it is vitally important to stay abreast of developments that seem to follow each other without pause. This publication features the very latest and some of the very best current research in the field, with 32 revised and extended research articles written by prominent researchers in the field. Culled from contributions to the key 2011 conference Advances in Intelligent Control and Innovative Computing, held in Hong Kong, the articles deal with a wealth of relevant topics, from the most recent work in artificial intelligence and decision-supporting systems, to automated planning, modelling and simulation, signal processing, and industrial applications. Not only does this work communicate the current state of the art in intelligent control and innovative computing, it is also an illuminating guide to up-to-date topics for researchers and graduate students in the field. The quality of the contents is absolutely assured by the high pro...

  10. 9th International Conference on Practical Applications of Computational Biology and Bioinformatics

    CERN Document Server

    Rocha, Miguel; Fdez-Riverola, Florentino; Paz, Juan

    2015-01-01

This proceedings volume presents recent practical applications of Computational Biology and Bioinformatics. It contains the proceedings of the 9th International Conference on Practical Applications of Computational Biology & Bioinformatics, held at the University of Salamanca, Spain, on June 3rd-5th, 2015. The International Conference on Practical Applications of Computational Biology & Bioinformatics (PACBB) is an annual international meeting dedicated to emerging and challenging applied research in Bioinformatics and Computational Biology. Biological and biomedical research are increasingly driven by experimental techniques that challenge our ability to analyse, process and extract meaningful knowledge from the underlying data. The impressive capabilities of next generation sequencing technologies, together with novel and ever-evolving, distinct types of omics data technologies, have put an increasingly complex set of challenges before the growing fields of Bioinformatics and Computational Biology. The analysis o...

  11. High performance stream computing for particle beam transport simulations

    International Nuclear Information System (INIS)

    Appleby, R; Bailey, D; Higham, J; Salt, M

    2008-01-01

    Understanding modern particle accelerators requires simulating charged particle transport through the machine elements. These simulations can be very time consuming due to the large number of particles and the need to consider many turns of a circular machine. Stream computing offers an attractive way to dramatically improve the performance of such simulations by calculating the simultaneous transport of many particles using dedicated hardware. Modern Graphics Processing Units (GPUs) are powerful and affordable stream computing devices. The results of simulations of particle transport through the booster-to-storage-ring transfer line of the DIAMOND synchrotron light source using an NVidia GeForce 7900 GPU are compared to the standard transport code MAD. It is found that particle transport calculations are suitable for stream processing and large performance increases are possible. The accuracy and potential speed gains are compared and the prospects for future work in the area are discussed
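
    The data-parallel idea, transporting many particles at once rather than looping over them, can be illustrated with vectorized linear optics: each beamline element is a 2x2 transfer matrix applied to the whole particle array in one operation. The lattice and beam parameters below are assumptions, not the DIAMOND transfer-line optics, and NumPy stands in for the GPU.

```python
# Vectorized transport of many particles through linear beamline elements.
# Toy lattice (one drift + one thin quadrupole); illustrative only.
import numpy as np

def drift(length):
    return np.array([[1.0, length], [0.0, 1.0]])

def thin_quad(focal_length):
    return np.array([[1.0, 0.0], [-1.0 / focal_length, 1.0]])

rng = np.random.default_rng(0)
# Each row is one particle: (x [m], x' [rad]); beam sizes assumed.
particles = rng.normal(scale=[1e-3, 1e-4], size=(100_000, 2))

line = thin_quad(2.0) @ drift(1.5)   # compose the element matrices once
particles = particles @ line.T       # transport all particles in one step
print("rms beam size after transport:", particles[:, 0].std())
```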

  12. Unravelling the structure of matter on high-performance computers

    International Nuclear Information System (INIS)

    Kieu, T.D.; McKellar, B.H.J.

    1992-11-01

The various phenomena and the different forms of matter in nature are believed to be manifestations of only a handful of fundamental building blocks, the elementary particles, which interact through the four fundamental forces. In the study of the structure of matter at this level, one has to consider forces which are not sufficiently weak to be treated as small perturbations to the system, an example of which is the strong force that binds the nucleons together. High-performance computers, both vector and parallel machines, have facilitated the necessary non-perturbative treatments. The principles and techniques of computer simulations applied to Quantum Chromodynamics are explained; examples include the strong interactions, the calculation of the mass of nucleons and their decay rates. Some commercial and special-purpose high-performance machines for such calculations are also mentioned. 3 refs., 2 tabs.

  13. International Conference: Computer-Aided Design of High-Temperature Materials

    National Research Council Canada - National Science Library

    Kalia, Rajiv

    1998-01-01

    .... The conference was attended by experimental and computational materials scientists, and experts in high performance computing and communications from universities, government laboratories, and industries in the U.S., Europe, and Japan...

  14. PREFACE: ELC International Meeting on Inference, Computation, and Spin Glasses (ICSG2013)

    Science.gov (United States)

    Kabashima, Yoshiyuki; Hukushima, Koji; Inoue, Jun-ichi; Tanaka, Toshiyuki; Watanabe, Osamu

    2013-12-01

The close relationship between probability-based inference and the statistical mechanics of disordered systems has been noted for some time. This relationship has provided researchers with a theoretical foundation in various fields of information processing for analytical performance evaluation and the construction of efficient algorithms based on message-passing or Monte Carlo sampling schemes. The ELC International Meeting on 'Inference, Computation, and Spin Glasses' (ICSG2013) was held in Sapporo, 28-30 July 2013. The meeting was organized as a satellite meeting of STATPHYS25 in order to offer a forum where concerned researchers can assemble and exchange information on the latest results and newly established methodologies, and discuss future directions of the interdisciplinary studies between statistical mechanics and information sciences. Financial support from the Grant-in-Aid for Scientific Research on Innovative Areas, MEXT, Japan, 'Exploring the Limits of Computation (ELC)' is gratefully acknowledged. We are pleased to publish 23 papers contributed by invited speakers of ICSG2013 in this volume of Journal of Physics: Conference Series. We hope that this volume will promote further development of this highly vigorous interdisciplinary field between statistical mechanics and information/computer science. Editors and ICSG2013 Organizing Committee: Koji Hukushima, Jun-ichi Inoue (Local Chair of ICSG2013), Yoshiyuki Kabashima (Editor-in-Chief), Toshiyuki Tanaka, Osamu Watanabe (General Chair of ICSG2013)

  15. An urban energy performance evaluation system and its computer implementation.

    Science.gov (United States)

    Wang, Lei; Yuan, Guan; Long, Ruyin; Chen, Hong

    2017-12-15

To improve the urban environment and effectively reflect and promote urban energy performance, an urban energy performance evaluation system was constructed, thereby strengthening urban environmental management capabilities. From the perspectives of internalization and externalization, a framework of evaluation indicators and key factors was proposed, according to established theory and previous studies, to determine urban energy performance and explore the reasons for differences in performance. Using the improved stochastic frontier analysis method, an urban energy performance evaluation and factor analysis model was built that brings performance evaluation and factor analysis into the same stage of study. Using data obtained for the Chinese provincial capitals from 2004 to 2013, the coefficients of the evaluation indicators and key factors were calculated by the urban energy performance evaluation and factor analysis model. These coefficients were then used to compile the program file. The urban energy performance evaluation system developed in this study was designed in three parts: a database, a distributed component server, and a human-machine interface. Its functions were designed as login, addition, editing, input, calculation, analysis, comparison, inquiry, and export. On the basis of these contents, an urban energy performance evaluation system was developed using Microsoft Visual Studio .NET 2015. The system can effectively reflect the status of and any changes in urban energy performance. Beijing was taken as an example to conduct an empirical study, which further verified the applicability and convenience of this evaluation system. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. A performance evaluation of the IBM 370/XT personal computer

    Science.gov (United States)

    Dominick, Wayne D. (Editor); Triantafyllopoulos, Spiros

    1984-01-01

    An evaluation of the IBM 370/XT personal computer is given. This evaluation focuses primarily on the use of the 370/XT for scientific and technical applications and applications development. The capabilities of the 370/XT were measured by means of test programs, which are presented. Also included is a review of the facilities provided by the operating system (VM/PC), along with comments on the IBM 370/XT hardware configuration.

  17. Homogenization of linear viscoelastic three phase media: internal variable formulation versus full-field computation

    International Nuclear Information System (INIS)

    Blanc, V.; Barbie, L.; Masson, R.

    2011-01-01

    Homogenization of linear viscoelastic heterogeneous media is here extended from two-phase inclusion-matrix media to three-phase inclusion-matrix media. With each phase obeying a compressible Maxwellian behaviour, this analytic method leads to an equivalent elastic homogenization problem in the Laplace-Carson space. For some particular microstructures, such as the Hashin composite sphere assemblage, an exact solution is obtained. Inverting the Laplace-Carson transforms of the overall stress-strain behaviour then yields an internal variable formulation. As expected, the number of these internal variables and their evolution laws are modified to take the third phase into account. Moreover, evolution laws of the averaged stresses and strains per phase can still be derived for three-phase media. The results of this model are compared to full-field computations on representative volume elements using the finite element method, for various concentrations and sizes of inclusion. Relaxation and creep test cases are performed in order to compare the predictions of the effective response. The internal variable formulation is shown to yield accurate predictions in both cases. (authors)
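
    For reference, the correspondence principle this abstract relies on can be stated compactly; the following is the standard textbook form, added here for clarity rather than taken from the paper. The Laplace-Carson transform of a time function f(t) is

        \[ \hat{f}(p) \;=\; p \int_0^{\infty} f(t)\, e^{-pt}\, \mathrm{d}t . \]

    For a Maxwell phase with elastic stiffness C and viscosity \eta, the transformed (apparent) stiffness is

        \[ \hat{C}(p) \;=\; \left( C^{-1} + \frac{1}{\eta\, p} \right)^{-1} , \]

    so for each value of p the viscoelastic homogenization problem reduces to an elastic one with moduli \hat{C}(p); inverting the transform of the overall response recovers the time-domain behaviour and, for the microstructures mentioned above, the internal variable formulation.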

  18. Workshops of the Sixth International Brain–Computer Interface Meeting : brain–computer interfaces past, present, and future

    NARCIS (Netherlands)

    Huggins, Jane E.; Guger, Christoph; Ziat, Mounia; Zander, Thorsten O.; Taylor, Denise; Tangermann, Michael; Soria-Frisch, Aureli; Simeral, John; Scherer, Reinhold; Rupp, Rüdiger; Ruffini, Giulio; Robinson, Douglas K.R.; Ramsey, Nick F.; Nijholt, Anton; Müller-Putz, Gernot R.; McFarland, Dennis J.; Mattia, Donatella; Lance, Brent J.; Kindermans, Pieter-Jan; Iturrate, Iñaki; Herff, Christian; Gupta, Disha; Do, An H.; Collinger, Jennifer L.; Chavarriaga, Ricardo; Chasey, Steven M.; Bleichner, Martin G.; Batista, Aaron; Anderson, Charles W.; Aarnoutse, Erik J.

    2017-01-01

    The Sixth International Brain–Computer Interface (BCI) Meeting was held 30 May–3 June 2016 at the Asilomar Conference Grounds, Pacific Grove, California, USA. The conference included 28 workshops covering topics in BCI and brain–machine interface research. Topics included BCI for specific

  19. Computer-assisted learning in anatomy at the international medical school in Debrecen, Hungary: a preliminary report.

    Science.gov (United States)

    Kish, Gary; Cook, Samuel A; Kis, Gréta

    2013-01-01

    The University of Debrecen's Faculty of Medicine has an international, multilingual student population with anatomy courses taught in English to all but Hungarian students. An elective computer-assisted gross anatomy course, the Computer Human Anatomy (CHA), has been taught in English at the Anatomy Department since 2008. This course focuses on an introduction to anatomical digital images along with clinical cases. This low-budget course has a large visual component using images from magnetic resonance imaging and computed axial tomography scans, ultrasound clinical studies, and readily available anatomy software that presents topics which run in parallel to the university's core anatomy curriculum. From the combined computer images and CHA lecture information, students are asked to solve computer-based clinical anatomy problems in the CHA computer laboratory. A statistical comparison was undertaken of core anatomy oral examination performances of English program first-year medical students who took the elective CHA course and those who did not in the three academic years 2007-2008, 2008-2009, and 2009-2010. The results of this study indicate that the CHA-enrolled students significantly improved their performance on the required anatomy core curriculum oral examinations, suggesting that computer-assisted learning may play an active role in anatomy curriculum improvement. These preliminary results have prompted ongoing evaluation of what specific aspects of CHA are valuable and which students benefit from computer-assisted learning in a multilingual and diverse cultural environment. Copyright © 2012 American Association of Anatomists.

  20. Computing radiation dose to reactor pressure vessel and internals

    International Nuclear Information System (INIS)

    1996-01-01

    Within the next twenty years many of the nuclear reactors currently in service will reach their design lifetime. One of the key factors affecting decisions on license extensions will be the ability to confidently predict the integrity of the reactor pressure vessel and core structural components, which have been subjected to many years of cumulative radiation exposure. This report gives an overview of the most recent scientific literature and current methodologies for computational dosimetry in the OECD/NEA Member countries. The discussion extends to some related issues of materials science, such as the behaviour of the metals concerned, and to the limitations of the models in current use. Proposals are made for further work. (author)

  1. Neuroanatomical correlates of brain-computer interface performance.

    Science.gov (United States)

    Kasahara, Kazumi; DaSalla, Charles Sayo; Honda, Manabu; Hanakawa, Takashi

    2015-04-15

    Brain-computer interfaces (BCIs) offer a potential means to replace or restore lost motor function. However, BCI performance varies considerably between users, the reasons for which are poorly understood. Here we investigated the relationship between sensorimotor rhythm (SMR)-based BCI performance and brain structure. Participants were instructed to control a computer cursor using right- and left-hand motor imagery, which primarily modulated their left- and right-hemispheric SMR powers, respectively. Although most participants were able to control the BCI with success rates significantly above chance level even at the first encounter, they also showed substantial inter-individual variability in BCI success rate. Participants also underwent T1-weighted three-dimensional structural magnetic resonance imaging (MRI). The MRI data were subjected to voxel-based morphometry using BCI success rate as an independent variable. We found that BCI performance correlated with gray matter volume of the supplementary motor area, supplementary somatosensory area, and dorsal premotor cortex. We suggest that SMR-based BCI performance is associated with development of non-primary somatosensory and motor areas. Advancing our understanding of BCI performance in relation to its neuroanatomical correlates may lead to better customization of BCIs based on individual brain structure. Copyright © 2015 Elsevier Inc. All rights reserved.
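
    For readers unfamiliar with the signal side, the sketch below estimates power in the mu band (roughly 8-13 Hz), the band whose modulation such sensorimotor-rhythm BCIs decode. The band limits and the synthetic signal are assumptions for illustration; real systems calibrate the band per subject and per electrode.

        # Minimal sensorimotor-rhythm (SMR) band-power sketch using Welch's
        # method. The 8-13 Hz mu band and the synthetic signal are
        # illustrative assumptions, not parameters from the study.
        import numpy as np
        from scipy.signal import welch

        def smr_bandpower(eeg, fs, band=(8.0, 13.0)):
            """Integrated power of one EEG channel inside `band` (Hz)."""
            freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
            mask = (freqs >= band[0]) & (freqs <= band[1])
            return np.trapz(psd[mask], freqs[mask])

        fs = 256.0                                  # sampling rate in Hz
        t = np.arange(0, 10, 1 / fs)
        eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)
        print(smr_bandpower(eeg, fs))               # dominated by the 10 Hz tone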

  2. High performance computer code for molecular dynamics simulations

    International Nuclear Information System (INIS)

    Levay, I.; Toekesi, K.

    2007-01-01

    Complete text of publication follows. Molecular Dynamics (MD) simulation is a widely used technique for modeling complicated physical phenomena. Since 2005 we have been developing an MD simulation code for PC computers. The computer code is written in the C++ object-oriented programming language. The aim of our work is twofold: a) to develop a fast computer code for the study of the random walk of guest atoms in a Be crystal, and b) 3-dimensional (3D) visualization of the particles' motion. In this case we mimic the motion of the guest atoms in the crystal (diffusion-type motion) and the motion of atoms in the crystal lattice (crystal deformation). Nowadays it is common to use graphics devices for computationally intensive problems. There are several ways to exploit this extreme processing performance, but never before has it been so easy to program these devices. The CUDA (Compute Unified Device Architecture) introduced by the nVidia Corporation in 2007 is very useful for every processor-hungry application. A unified-architecture GPU includes 96-128 or more stream processors, so the raw calculation performance is 576(!) GFLOPS, ten times faster than the fastest dual-core CPU [Fig. 1]. Our improved MD simulation software uses this new technology, which speeds up the code so that it runs 10 times faster in the critical calculation code segment. Although the GPU is a very powerful tool, it has a strongly parallel structure, which means that we have to create an algorithm that works on several processors without deadlock. Our code currently uses 256 threads and shared and constant on-chip memory instead of global memory, which is roughly 100 times slower. It is possible to implement the entire algorithm on the GPU, so we do not need to download and upload the data in every iteration. For maximal throughput, every thread runs with the same instructions
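
    The hot spot being offloaded to the GPU in codes like this is the pairwise force loop. As a language-neutral illustration of its data-parallel structure (one pair, or one particle, per GPU thread), here is a vectorized Lennard-Jones force evaluation; the potential and its parameters are generic placeholders, not details taken from the paper.

        # Vectorized Lennard-Jones force evaluation: the all-pairs structure
        # shown here is the data-parallel pattern that maps one particle (or
        # pair) per GPU thread. epsilon/sigma are placeholder parameters.
        import numpy as np

        def lj_forces(pos, epsilon=1.0, sigma=1.0):
            """Return (N, 3) forces for N particles at positions `pos` (N, 3)."""
            diff = pos[:, None, :] - pos[None, :, :]    # pairwise displacements
            r2 = np.einsum("ijk,ijk->ij", diff, diff)
            np.fill_diagonal(r2, np.inf)                # no self-interaction
            inv_r2 = sigma**2 / r2
            inv_r6 = inv_r2**3
            # F_i = sum_j 24*eps*(2*(sigma/r)^12 - (sigma/r)^6) / r^2 * r_ij
            coef = 24.0 * epsilon * (2.0 * inv_r6**2 - inv_r6) / r2
            return np.einsum("ij,ijk->ik", coef, diff)

        forces = lj_forces(np.random.rand(64, 3) * 5.0)
        print(forces.shape)                             # (64, 3)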

  3. Scalability of DL_POLY on High Performance Computing Platform

    Directory of Open Access Journals (Sweden)

    Mabule Samuel Mabakane

    2017-12-01

    Full Text Available This paper presents a case study on the scalability of several versions of the molecular dynamics code DL_POLY, performed on South Africa's Centre for High Performance Computing e1350 IBM Linux cluster, Sun system and Lengau supercomputers. Within this study, different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY using weak and strong scalability. It was found that the speed-up results for small systems were better than for large systems on both the Ethernet and Infiniband networks. However, simulations of large systems in DL_POLY performed well using the Infiniband network on the Lengau cluster as compared to the e1350 and Sun supercomputers.
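
    For reference, strong-scaling results like those above reduce to two standard metrics, speedup S(p) = T(1)/T(p) and parallel efficiency E(p) = S(p)/p. A minimal sketch with made-up timings (not measurements from the DL_POLY study):

        # Standard strong-scaling metrics. The timings below are made-up
        # placeholders, not measurements from the DL_POLY study.
        cores   = [1, 2, 4, 8, 16]
        runtime = [1000.0, 520.0, 270.0, 150.0, 90.0]   # seconds (hypothetical)

        for p, t in zip(cores, runtime):
            speedup = runtime[0] / t
            efficiency = speedup / p
            print(f"{p:3d} cores: speedup {speedup:5.2f}, efficiency {efficiency:5.2f}")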

  4. Scintillator performance considerations for dedicated breast computed tomography

    Science.gov (United States)

    Vedantham, Srinivasan; Shi, Linxi; Karellas, Andrew

    2017-09-01

    Dedicated breast computed tomography (BCT) is an emerging clinical modality that can eliminate tissue superposition and has the potential for improved sensitivity and specificity for breast cancer detection and diagnosis. It is performed without physical compression of the breast. Most of the dedicated BCT systems use large-area detectors operating in cone-beam geometry and are referred to as cone-beam breast CT (CBBCT) systems. The large-area detectors in CBBCT systems are energy-integrating, indirect-type detectors employing a scintillator that converts x-ray photons to light, followed by detection of optical photons. A key consideration that determines the image quality achieved by such CBBCT systems is the choice of scintillator and its performance characteristics. In this work, a framework for analyzing the impact of the scintillator on CBBCT performance and its use for task-specific optimization of CBBCT imaging performance is described.

  5. Nurses and computers. An international perspective on nurses' requirements.

    Science.gov (United States)

    Bond, Carol S

    2007-01-01

    This paper reports the findings from a Florence Nightingale Foundation Travel Scholarship undertaken by the author in the spring of 2006. The aim of the visit was to explore nurses' attitudes towards, and experiences of, using computers in their practice, and the requirements they have if they are to be encouraged, promoted and supported in using ICT. Nurses were found to be using computers mainly for carrying out administrative tasks, such as updating records, rather than as information tools to support evidence-based practice or patient information needs. Nurses discussed the systems they used, the equipment provided, and their skills, or more often their lack of skills. The need for support was a frequent comment, with most nurses feeling it essential that help was available at the point of need and provided by someone, preferably a nurse, who understood the work context. Three groups of nurses were identified: Engagers, Worried Willing, and Resisters. The report concludes that pre-registration education has a responsibility to ensure that newly qualified nurses enter practice as Engagers.

  6. State-wide performance criteria for international safeguards

    International Nuclear Information System (INIS)

    Budlong-Sylvester, K.W.; Pilat, Joseph F.; Stanbro, W.D.

    2001-01-01

    Traditionally, the International Atomic Energy Agency (IAEA) has relied upon prescriptive criteria to guide safeguards implementation. Replacing prescriptive safeguards criteria with more flexible performance criteria would constitute a structural change in safeguards, a prospect that raises several important questions. Performance criteria imply that while safeguards goals will be fixed, the means of attaining those goals will not be explicitly prescribed. What would the performance objectives be under such a system? How would they be formulated? How would performance be linked to higher-level safeguards objectives? How would safeguards performance be measured State-wide? The implementation of safeguards under performance criteria would also signal a dramatic change in the manner in which the Agency does business. A higher degree of flexibility could, in principle, produce greater effectiveness and efficiency, but would come with a need for increased Agency responsibility in practice. To the extent that reliance on prescriptive criteria decreases, the burden of justifying actions and ensuring their transparency will rise. Would there need to be limits to safeguards implementation? What would be the basis for setting such limits? This paper addresses these and other issues and questions relating to both the formulation and the implementation of performance-based criteria.

  7. 2nd International Workshop on Eigenvalue Problems : Algorithms, Software and Applications in Petascale Computing

    CERN Document Server

    Zhang, Shao-Liang; Imamura, Toshiyuki; Yamamoto, Yusaku; Kuramashi, Yoshinobu; Hoshi, Takeo

    2017-01-01

    This book provides state-of-the-art and interdisciplinary topics on solving matrix eigenvalue problems, particularly by using recent petascale and upcoming post-petascale supercomputers. It gathers selected topics presented at the International Workshops on Eigenvalue Problems: Algorithms, Software and Applications in Petascale Computing (EPASA2014 and EPASA2015), which brought together leading researchers working on the numerical solution of matrix eigenvalue problems to discuss and exchange ideas – and in so doing helped to create a community for researchers in eigenvalue problems. The topics presented in the book, including novel numerical algorithms, high-performance implementation techniques, software developments and sample applications, will contribute to various fields that involve solving large-scale eigenvalue problems.

  8. What Physicists Should Know About High Performance Computing - Circa 2002

    Science.gov (United States)

    Frederick, Donald

    2002-08-01

    High Performance Computing (HPC) is a dynamic, cross-disciplinary field that traditionally has involved applied mathematicians, computer scientists, and others, primarily from the disciplines that have been major users of HPC resources - physics, chemistry, and engineering, with increasing use by those in the life sciences. There is a technological dynamic that is powered by economic as well as technical innovations and developments. This talk will discuss practical ideas to be considered when developing numerical applications for research purposes. Even with the rapid pace of development in the field, the author believes that these concepts will not become obsolete for a while and will be of use to scientists who either are considering, or have already started down, the HPC path. These principles will be applied in particular to current parallel HPC systems, but there will also be references of value to desktop users. The talk will cover such topics as: computing hardware basics, single-CPU optimization, compilers, timing, numerical libraries, debugging and profiling tools, and the emergence of Computational Grids.

  9. Computational Fluid Dynamics (CFD) Computations With Zonal Navier-Stokes Flow Solver (ZNSFLOW) Common High Performance Computing Scalable Software Initiative (CHSSI) Software

    National Research Council Canada - National Science Library

    Edge, Harris

    1999-01-01

    Under the Common High Performance Computing Scalable Software Initiative (CHSSI) computational fluid dynamics (CFD) 6 project, a proven zonal Navier-Stokes solver was rewritten for scalable parallel performance on both shared memory and distributed memory high performance computers...

  10. High performance computing in science and engineering '09: transactions of the High Performance Computing Center, Stuttgart (HLRS) 2009

    National Research Council Canada - National Science Library

    Nagel, Wolfgang E; Kröner, Dietmar; Resch, Michael

    2010-01-01

    ...), NIC/JSC (Jülich), and LRZ (Munich). As part of that strategic initiative, NIC/JSC already installed in May 2009 the first phase of the GCS HPC Tier-0 resources, an IBM Blue Gene/P with roughly 300,000 cores, this time in Jülich. With that, the GCS provides the most powerful high-performance computing infrastructure in Europe already...

  11. High performance computing and communications: FY 1995 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-04-01

    The High Performance Computing and Communications (HPCC) Program was formally established following passage of the High Performance Computing Act of 1991 signed on December 9, 1991. Ten federal agencies in collaboration with scientists and managers from US industry, universities, and laboratories have developed the HPCC Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1994 and FY 1995. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency. Although the Department of Education is an official HPCC agency, its current funding and reporting of crosscut activities goes through the Committee on Education and Health Resources, not the HPCC Program. For this reason the Implementation Plan covers nine HPCC agencies.

  12. Third International Conference on Computational Science, Engineering and Information Technology (CCSEIT-2013), v.1

    CERN Document Server

    Kumar, Ashok; Annamalai, Annamalai

    2013-01-01

    This book constitutes the proceedings of the Third International Conference on Computational Science, Engineering and Information Technology (CCSEIT-2013), which was held in Konya, Turkey, on June 7-9. CCSEIT-2013 provided an excellent international forum for sharing knowledge and results in the theory, methodology and applications of computational science, engineering and information technology. This book contains research results, projects, survey work and industrial experiences representing significant advances in the field. The different contributions collected in this book cover five main areas: algorithms, data structures and applications; wireless and mobile networks; computer networks and communications; natural language processing and information theory; cryptography and information security.

  13. FTRA 4th International Conference on Mobile, Ubiquitous, and Intelligent Computing

    CERN Document Server

    Adeli, Hojjat; Park, Namje; Woungang, Isaac

    2014-01-01

    MUSIC 2013 will be the most comprehensive text focused on the various aspects of Mobile, Ubiquitous and Intelligent computing. MUSIC 2013 provides an opportunity for academic and industry professionals to discuss the latest issues and progress in the area of intelligent technologies in mobile and ubiquitous computing environments. MUSIC 2013 is the next edition of the 3rd International Conference on Mobile, Ubiquitous, and Intelligent Computing (MUSIC-12, Vancouver, Canada, 2012), itself the next event in a series of highly successful International Workshops on Multimedia, Communication and Convergence technologies: MCC-11 (Crete, Greece, June 2011) and MCC-10 (Cebu, Philippines, August 2010).

  14. High Performance Computing - Power Application Programming Interface Specification.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.
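
    As a concrete taste of the kind of measurement such a portable API would standardize, the sketch below reads the Linux powercap/RAPL package-energy counter directly from sysfs. It assumes an Intel CPU with the intel_rapl driver and the path shown (and may require root); it is not the proposed Power API itself.

        # Sketch of measuring CPU package power on Linux via the
        # powercap/RAPL sysfs interface. Assumes an Intel CPU with the
        # intel_rapl driver; the path may differ by system, and reading
        # the counter may require elevated privileges.
        import time

        RAPL = "/sys/class/powercap/intel-rapl:0/energy_uj"   # package-0 counter

        def read_uj():
            with open(RAPL) as f:
                return int(f.read())

        e0, t0 = read_uj(), time.time()
        time.sleep(1.0)                    # interval under measurement
        e1, t1 = read_uj(), time.time()
        # microjoules -> joules -> watts; ignores counter wraparound
        print(f"average power: {(e1 - e0) / 1e6 / (t1 - t0):.2f} W")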

  15. Performance of an extrapolation chamber in computed tomography standard beams

    International Nuclear Information System (INIS)

    Castro, Maysa C.; Silva, Natália F.; Caldas, Linda V.E.

    2017-01-01

    Among the medical uses of ionizing radiation, computed tomography (CT) diagnostic exams are responsible for the highest dose values to patients. The dosimetry procedure in CT scanner beams makes use of pencil ionization chambers with sensitive volume lengths of 10 cm. The aim of their calibration is to compare the values obtained with the instrument to be calibrated against those of a standard reference system. However, there is no primary standard system for this kind of radiation beam. Therefore, an extrapolation ionization chamber built at the Calibration Laboratory (LCI) was used to establish a CT primary standard. The objective of this work was to perform characterization tests (short- and medium-term stabilities, saturation curve, polarity effect and ion collection efficiency) in the standard X-ray beams established for computed tomography at the LCI. (author)

  16. Performance of an extrapolation chamber in computed tomography standard beams

    Energy Technology Data Exchange (ETDEWEB)

    Castro, Maysa C.; Silva, Natália F.; Caldas, Linda V.E., E-mail: mcastro@ipen.br [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil)

    2017-07-01

    Among the medical uses of ionizing radiation, computed tomography (CT) diagnostic exams are responsible for the highest dose values to patients. The dosimetry procedure in CT scanner beams makes use of pencil ionization chambers with sensitive volume lengths of 10 cm. The aim of their calibration is to compare the values obtained with the instrument to be calibrated against those of a standard reference system. However, there is no primary standard system for this kind of radiation beam. Therefore, an extrapolation ionization chamber built at the Calibration Laboratory (LCI) was used to establish a CT primary standard. The objective of this work was to perform characterization tests (short- and medium-term stabilities, saturation curve, polarity effect and ion collection efficiency) in the standard X-ray beams established for computed tomography at the LCI. (author)

  17. Evaluating computer program performance on the CRAY-1

    International Nuclear Information System (INIS)

    Rudsinski, L.; Pieper, G.W.

    1979-01-01

    The Advanced Scientific Computers Project of Argonne's Applied Mathematics Division has two objectives: to evaluate supercomputers and to determine their effect on Argonne's computing workload. Initial efforts have focused on the CRAY-1, which is the only advanced computer currently available. Users from seven Argonne divisions executed test programs on the CRAY and made performance comparisons with the IBM 370/195 at Argonne. This report describes these experiences and discusses various techniques for improving run times on the CRAY. Direct translations of code from scalar to vector processor reduced running times as much as two-fold, and this reduction will become more pronounced as the CRAY compiler is developed. Further improvement (two- to ten-fold) was realized by making minor code changes to facilitate compiler recognition of the parallel and vector structure within the programs. Finally, extensive rewriting of the FORTRAN code structure reduced execution times dramatically, in three cases by a factor of more than 20; and even greater reduction should be possible by changing algorithms within a production code. It is concluded that the CRAY-1 would be of great benefit to Argonne researchers. Existing codes could be modified with relative ease to run significantly faster than on the 370/195. More important, the CRAY would permit scientists to investigate complex problems currently deemed infeasible on traditional scalar machines. Finally, an interface between the CRAY-1 and IBM computers such as the 370/195, scheduled by Cray Research for the first quarter of 1979, would considerably facilitate the task of integrating the CRAY into Argonne's Central Computing Facility. 13 tables
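
    The report's central lesson, that exposing vector structure to the hardware yields large speedups, still holds today. A minimal modern restatement compares an element-by-element loop with the equivalent vectorized kernel; timings are machine-dependent and purely illustrative.

        # The CRAY-1 lesson restated: the same arithmetic runs far faster
        # when expressed in a form the vector hardware (here, numpy's
        # compiled kernels) can consume directly.
        import time
        import numpy as np

        x = np.random.rand(1_000_000)

        t0 = time.perf_counter()
        s = 0.0
        for v in x:                      # scalar loop, one element at a time
            s += v * v
        t_scalar = time.perf_counter() - t0

        t0 = time.perf_counter()
        s_vec = float(np.dot(x, x))      # vectorized: one fused kernel
        t_vector = time.perf_counter() - t0

        print(f"scalar {t_scalar:.3f}s, vector {t_vector:.4f}s, "
              f"ratio {t_scalar / t_vector:.0f}x")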

  18. Applied Computational Intelligence in Engineering and Information Technology Revised and Selected Papers from the 6th IEEE International Symposium on Applied Computational Intelligence and Informatics SACI 2011

    CERN Document Server

    Precup, Radu-Emil; Preitl, Stefan

    2012-01-01

    This book highlights the potential benefits to be gained from various applications of computational intelligence techniques. The book is structured to include a set of selected and extended papers from the 6th IEEE International Symposium on Applied Computational Intelligence and Informatics, SACI 2011, held in Timisoara, Romania, from 19 to 21 May 2011. After a rigorous review by the Technical Program Committee, only 116 submissions were accepted, leading to a paper acceptance ratio of 65%. A further refinement was made after the symposium, based also on the assessment of presentation quality. In conclusion, this book includes the extended and revised versions of the very best papers of SACI 2011 and a few invited papers authored by prominent specialists. Readers will benefit from gaining knowledge of computational intelligence and of what problems can be solved in several areas; they will learn what kinds of approaches are advisable for solving these problems. A...

  19. Fourth International Conference on Computer Science and Its Applications (CIIA 2013)

    CERN Document Server

    Mohamed, Otmane; Bellatreche, Ladjel; Recent Advances in Robotics and Automation

    2013-01-01

        "During the last decades Computational Intelligence has emerged and showed its contributions in various broad research communities (computer science, engineering, finance, economic, decision making, etc.). This was done by proposing approaches and algorithms based either on turnkey techniques belonging to the large panoply of solutions offered by computational intelligence such as data mining, genetic algorithms, bio-inspired methods, Bayesian networks, machine learning, fuzzy logic, artificial neural networks, etc. or inspired by computational intelligence techniques to develop new ad-hoc algorithms for the problem under consideration.    This volume is a comprehensive collection of extended contributions from the 4th International Conference on Computer Science and Its Applications (CIIA’2013) organized into four main tracks: Track 1: Computational Intelligence, Track  2: Security & Network Technologies, Track  3: Information Technology and Track 4: Computer Systems and Applications. This ...

  20. Internally Heated Screw Pyrolysis Reactor (IHSPR) heat transfer performance study

    Science.gov (United States)

    Teo, S. H.; Gan, H. L.; Alias, A.; Gan, L. M.

    2018-04-01

    Around 1.5 billion end-of-life tyres (ELT) are discarded globally each year, and pyrolysis is considered the best solution for converting ELT into valuable high energy-density products. Among pyrolysis technologies, the screw reactor is favourable; however, conventional screw reactors risk plugging due to their lacklustre heat-transfer performance. An internally heated screw pyrolysis reactor (IHSPR) was developed by the local renewable energy industry, and it serves as the subject of the heat-transfer performance study in this paper. A zero-load heating test (ZLHT) was first carried out to obtain the operational parameters of the reactor, followed by a one-dimensional steady-state heat-transfer analysis carried out using SolidWorks Flow Simulation 2016. Experiments with varied feed rates and analyses of the pyrolysis products were conducted last to conclude the study.

  1. A checkpoint compression study for high-performance computing systems

    Energy Technology Data Exchange (ETDEWEB)

    Ibtesham, Dewan [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science; Ferreira, Kurt B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Scalable System Software Dept.; Arnold, Dorian [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science

    2015-02-17

    As high-performance computing systems continue to increase in size and complexity, higher failure rates and increased overheads for checkpoint/restart (CR) protocols have raised concerns about the practical viability of CR protocols for future systems. Previously, compression has proven to be a viable approach for reducing checkpoint data volumes and, thereby, reducing CR protocol overhead, leading to improved application performance. In this article, we further explore compression-based CR optimization by exploring its baseline performance and scaling properties, evaluating whether improved compression algorithms might lead to even better application performance, and comparing checkpoint compression against and alongside other software- and hardware-based optimizations. The highlights of our results are: (1) compression is a very viable CR optimization; (2) generic, text-based compression algorithms appear to perform near optimally for checkpoint data compression, and faster compression algorithms will not lead to better application performance; (3) compression-based optimizations fare well against and alongside other software-based optimizations; and (4) while hardware-based optimizations outperform software-based ones, they are not as cost effective.
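
    A minimal sketch of the kind of comparison the study performs: compression ratio and time for generic compressors on a synthetic, checkpoint-like buffer. The buffer and the compressor set are stand-ins; real checkpoint workloads will behave differently.

        # Compare generic compressors on a smooth, checkpoint-like buffer.
        # The synthetic data is a stand-in for real checkpoint state.
        import time, zlib, bz2, lzma
        import numpy as np

        state = np.linspace(0.0, 1.0, 1 << 20).tobytes()   # 8 MiB of "state"

        for name, compress in [("zlib", zlib.compress),
                               ("bz2", bz2.compress),
                               ("lzma", lzma.compress)]:
            t0 = time.perf_counter()
            out = compress(state)
            dt = time.perf_counter() - t0
            print(f"{name:5s}: ratio {len(state) / len(out):5.2f}, {dt:.3f}s")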

  2. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified, data are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both the private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project address these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We tested HA2lloc in a simulation environment and found that the approach is capable of preventing common memory vulnerabilities.

  3. PREFACE: 21st International Conference on Computing in High Energy and Nuclear Physics (CHEP2015)

    Science.gov (United States)

    Sakamoto, H.; Bonacorsi, D.; Ueda, I.; Lyon, A.

    2015-12-01

    software; data store and access; middleware, software development and tools, experiment frameworks, tools for distributed computing; computing activities and computing models; facilities, infrastructure, network; clouds and virtualization; performance increase and optimization exploiting hardware features. Throughout the entire process, we were blessed with a forward-looking group of competent colleagues in our International Advisory Committee, whom we warmly thank. All the individuals in the Program Committee team, who put together the technical tracks of the conference and reviewed all papers to prepare the sections of this proceedings journal, have to be credited for their outstanding work. And of course the gratitude goes to all people who submitted a contribution, presented it, and spent time to prepare a careful paper to document the work. These people, in the first place, are the main authors of the big success that CHEP continues to be. After almost 30 years, and 21 CHEP editions, this conference cycle continues to stay strong and to evolve in rapidly changing times towards a challenging future, covering new grounds and intercepting new trends as our field of research evolves. The next stop in this journey will be at the 22nd CHEP Conference on October 12th-14th, in San Francisco, hosted by SLAC and LBNL.

  4. Use of Kraepelinian concepts in international computer diagnosis.

    Science.gov (United States)

    Copeland, John R M

    2008-06-01

    In the 1960s, diagnosis in the UK followed Kraepelinian principles whereas that in the United States of America was influenced by Freudian concepts. The US Department of Health became alarmed at the large proportion of patients with schizophrenia entering State Mental Hospitals compared to the proportion entering the Area Mental Hospitals in England and Wales. Social theories of mental illness were in vogue and some thought that mental illness reflected the state of society. The US-UK Study, employing Kraepelinian principles of diagnosis, found no essential difference in the mental hospital statistics of the two countries. Later, the study examined the similar problem of the discrepancy between diagnoses of depression in the UK and dementia in the US, with similar results, this time confirmed by outcome. Over the succeeding years psychiatric diagnosis in the USA was to undergo a radical overhaul and fall into line with that of most of the rest of the world, and even move ahead with the publication of DSM-III. To allow larger population studies to be examined, the computer-assisted program AGECAT was developed, again along Kraepelinian principles. Cases of mental illness were defined according to purpose rather than as substantive objects in their own right. Using these methods it has been possible to assess the prevalence and incidence of mental disorders in older people in Europe and Asia and to question the fundamental pathology of Alzheimer's disease (Alzheimer was Kraepelin's pupil and colleague) using population-sampled brain tissue. Derivatives of AGECAT for younger subjects, adapted for clinical use, aim to carry Kraepelinian principles into the treatment of populations at present unserved by psychiatric care.

  5. Evaluation of goal kicking performance in international rugby union matches.

    Science.gov (United States)

    Quarrie, Kenneth L; Hopkins, Will G

    2015-03-01

    Goal kicking is an important element in rugby but has been the subject of minimal research. To develop and apply a method to describe the on-field pattern of goal kicking and rank the goal-kicking performance of players in international rugby union matches. Longitudinal observational study. A generalized linear mixed model was used to analyze goal-kicking performance in a sample of 582 international rugby matches played from 2002 to 2011. The model adjusted for kick distance, kick angle, a rating of the importance of each kick, and venue-related conditions. Overall, 72% of the 6769 kick attempts were successful. Forty-five percent of points scored during the matches resulted from goal kicks, and in 5.7% of the matches the result of the match hinged on the outcome of a kick attempt. There was an extremely large decrease in success with increasing distance (odds ratio for 2 SD distance 0.06, 90% confidence interval 0.05-0.07) and a small decrease with increasingly acute angle away from the mid-line of the goal posts (odds ratio for 2 SD angle 0.44, 0.39-0.49). Differences between players were typically small (odds ratio for 2 between-player SD 0.53, 0.45-0.65). The generalized linear mixed model with its random-effect solutions provides a tool for ranking the performance of goal kickers in rugby. This modelling approach could be applied to other performance indicators in rugby and to other sports in which discrete outcomes are measured repeatedly on players or teams. Copyright © 2015. Published by Elsevier Ltd.
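
    To make the modelling concrete, the sketch below fits a plain fixed-effects logistic regression of kick success on distance and angle to synthetic data. The published analysis is a generalized linear mixed model with a random kicker effect, which this simplified sketch deliberately omits; all data, column names and coefficients are hypothetical.

        # Simplified fixed-effects logistic sketch of kick success versus
        # distance and angle. The real study used a *mixed* model with a
        # random kicker effect, omitted here; data are synthetic.
        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(0)
        n = 2000
        df = pd.DataFrame({
            "distance": rng.uniform(20, 55, n),     # metres (hypothetical)
            "angle": rng.uniform(0, 45, n),         # degrees from midline
        })
        logit_p = 4.0 - 0.10 * df.distance - 0.02 * df.angle
        df["success"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

        model = smf.logit("success ~ distance + angle", data=df).fit(disp=0)
        print(np.exp(model.params))                 # odds ratios per unit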

  6. 3rd ACIS International Conference on Applied Computing and Information Technology

    CERN Document Server

    2016-01-01

    This edited book presents the scientific results of the 3rd International Conference on Applied Computing and Information Technology (ACIT 2015), which was held on July 12-16, 2015 in Okayama, Japan. The aim of this conference was to bring together researchers and scientists, businessmen and entrepreneurs, teachers, engineers, computer users, and students to discuss the numerous fields of computer science, to share their experiences, and to exchange new ideas and information in a meaningful way; to present research results on all aspects (theory, applications and tools) of computer and information science; and to discuss the practical challenges encountered along the way and the solutions adopted to solve them.

  7. 14th IEEE/ACIS International Conference on Computer and Information Science

    CERN Document Server

    2016-01-01

    This edited book presents the scientific results of the 14th IEEE/ACIS International Conference on Computer and Information Science (ICIS 2015), which was held on June 28 – July 1, 2015 in Las Vegas, USA. The aim of this conference was to bring together researchers and scientists, businessmen and entrepreneurs, teachers, engineers, computer users, and students to discuss the numerous fields of computer science, to share their experiences, and to exchange new ideas and information in a meaningful way; to present research results on all aspects (theory, applications and tools) of computer and information science; and to discuss the practical challenges encountered along the way and the solutions adopted to solve them.

  8. Fast Performance Computing Model for Smart Distributed Power Systems

    Directory of Open Access Journals (Sweden)

    Umair Younas

    2017-06-01

    Full Text Available Plug-in Electric Vehicles (PEVs) are becoming a more prominent solution compared to fossil-fuel car technology due to their significant role in Greenhouse Gas (GHG) reduction, flexible storage, and ancillary service provision as a Distributed Generation (DG) resource in Vehicle-to-Grid (V2G) regulation mode. However, large-scale penetration of PEVs and the growing demand of energy-intensive Data Centers (DCs) bring undesirably high peaks in electricity demand, imposing supply-demand imbalance and threatening the reliability of the wholesale and retail power markets. To overcome these challenges, the proposed research considers a smart Distributed Power System (DPS) comprising conventional sources, renewable energy, V2G regulation, and flexible energy storage resources. Moreover, price- and incentive-based Demand Response (DR) programs are implemented to sustain the balance between net demand and the available generating resources in the DPS. In addition, we adapted a novel strategy to implement the computationally intensive jobs of the proposed DPS model, including incoming load profiles, V2G regulation, battery State of Charge (SOC) indication, and fast computation in a decision-based automated DR algorithm, using the Fast Performance Computing resources of DCs. In response, the DPS provides economical and stable power to DCs under strict power-quality constraints. Finally, the improved results are verified using a case study of ISO California integrated with hybrid generation.

  9. Real-time Tsunami Inundation Prediction Using High Performance Computers

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2014-12-01

    Recently, offshore tsunami observation stations based on cabled ocean-bottom pressure gauges have been actively deployed, especially in Japan. These cabled systems are designed to provide real-time tsunami data before tsunamis reach coastlines for disaster mitigation purposes. To realize the benefits of these observations, real-time analysis techniques that make effective use of the data are necessary. A representative study by Tsushima et al. (2009) proposed a method for instant tsunami source prediction based on the acquired tsunami waveform data. As time passes, the prediction is improved by using updated waveform data. After a tsunami source is predicted, tsunami waveforms are synthesized from pre-computed tsunami Green functions of the linear long-wave equations. Tsushima et al. (2014) updated the method by combining the tsunami waveform inversion with an instant inversion of coseismic crustal deformation, improving the prediction accuracy and speed in the early stages. For disaster mitigation purposes, real-time predictions of tsunami inundation are also important. In this study, we discuss the possibility of real-time tsunami inundation predictions, which require faster-than-real-time tsunami inundation simulation in addition to instant tsunami source analysis. Although solving the non-linear shallow-water equations for inundation prediction is computationally demanding, it has become feasible through recent developments in high-performance computing technologies. We conducted parallel computations of tsunami inundation and achieved 6.0 TFLOPS using 19,000 CPU cores. We employed a leap-frog finite-difference method with nested staggered grids whose resolutions range from 405 m to 5 m. The resolution ratio of each nested domain was 1/3. The total number of grid points was 13 million, and the time step was 0.1 seconds. Tsunami sources of the 2011 Tohoku-oki earthquake were tested. The inundation prediction up to 2 hours after the
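
    A toy version of the numerical core described above: a 1-D linear long-wave solver with the leap-frog scheme on a staggered grid, on a single uniform grid rather than the nested 405 m to 5 m grids of the study. Depth, grid spacing and step count are illustrative only.

        # 1-D linear long-wave (tsunami) toy solver, leap-frog scheme on a
        # staggered grid with reflective walls. All parameters are
        # illustrative, not values from the study.
        import numpy as np

        g, h = 9.81, 4000.0             # gravity (m/s^2), uniform depth (m)
        dx, nx = 5000.0, 400            # grid spacing (m), number of cells
        dt = 0.5 * dx / np.sqrt(g * h)  # CFL-limited time step

        x = (np.arange(nx) - nx / 2) * dx
        eta = np.exp(-(x / 5e4) ** 2)   # initial Gaussian surface hump
        flux = np.zeros(nx + 1)         # volume flux M at cell faces

        for _ in range(500):
            # momentum: M^{n+1/2} = M^{n-1/2} - g h dt/dx * d(eta)/dx
            flux[1:-1] -= g * h * dt / dx * (eta[1:] - eta[:-1])
            # continuity: eta^{n+1} = eta^n - dt/dx * dM/dx
            eta -= dt / dx * (flux[1:] - flux[:-1])

        print(f"max surface elevation after run: {eta.max():.3f} m")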

  10. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    Science.gov (United States)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research enabling accurate climate projections. NCI makes available 10+ PB of major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected, large-scale, high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  11. GRAPH-BASED POST INCIDENT INTERNAL AUDIT METHOD OF COMPUTER EQUIPMENT

    Directory of Open Access Journals (Sweden)

    I. S. Pantiukhin

    2016-05-01

    Full Text Available A graph-based method for the post-incident internal audit of computer equipment is proposed. The essence of the solution consists in establishing relationships among hard disk dumps (images), RAM dumps and network data. The method is intended for describing the properties of an information security incident during the internal post-incident audit of computer equipment. The first step is the acquisition and formation of hard disk dumps, followed by the separation of these dumps into a set of components. The set of components includes a large set of attributes that forms the basis of the graph. The separated data are recorded in a non-relational database management system (NoSQL) adapted for graph storage, fast access and processing. A dump-linking procedure is applied at the final step. The presented method allows a human expert in information security or computer forensics to conduct a more precise and informative internal audit of computer equipment. The proposed method reduces the time spent on the internal audit of computer equipment while increasing the accuracy and informativeness of such an audit. The method has development potential and can be applied along with other components in the tasks of user identification and computer forensics.
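
    A minimal sketch of the artifact graph the method describes: nodes are items extracted from disk, RAM and network dumps, and edges are the relationships established among them. networkx stands in for the NoSQL graph store, and all node names and attributes are hypothetical.

        # Artifact-relationship graph sketch: nodes come from disk/RAM/network
        # dumps, edges record observed relations. networkx stands in for the
        # NoSQL graph store; names and attributes are hypothetical.
        import networkx as nx

        g = nx.DiGraph()
        g.add_node("file:payload.exe", source="disk_image", sha256="...")
        g.add_node("proc:1337", source="ram_dump", name="payload.exe")
        g.add_node("ip:203.0.113.7", source="pcap")

        g.add_edge("file:payload.exe", "proc:1337", relation="executed_as")
        g.add_edge("proc:1337", "ip:203.0.113.7", relation="connected_to")

        # A question an auditor might ask: what did the suspicious file reach?
        print(list(nx.descendants(g, "file:payload.exe")))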

  12. Clinical use of computed tomography and surface markers to assist internal fixation within the equine hoof.

    Science.gov (United States)

    Gasiorowski, Janik C; Richardson, Dean W

    2015-02-01

    To describe clinical use of computed tomography (CT) and hoof surface markers to facilitate internal fixation within the confines of the hoof wall. Retrospective case series. Horses (n = 16) that had CT-guided internal fixation of the distal phalanx (DP) or distal sesamoid bone (DSB). Drill bit entry point and direction were planned from CT image series performed on hooves with grids of barium paste dots at proposed entry and projected exit sites. Post-implantation CT images were obtained to check screw position and length as well as fracture reduction. Imaging, reduction, and surgical and general anesthesia times were evaluated. Outcome was recorded. Screw position and length were considered near optimal in all horses, with no consequential malposition of bits or screws. Fracture reduction was evident in all cases. Preoperative planning times (at least 2 CT image acquisitions and grid creation) ranged from 10 to 20 minutes. Surgery time ranged from 45 to 90 minutes (mean, 61 minutes) and general anesthesia time ranged from 115 to 220 minutes (mean, 171 minutes). The combination of CT and surface marker grids allowed accurate positioning of screws in clinical DP and DSB fractures. The technique was simple and rapid. An aiming device is useful for the technique. © Copyright 2014 by The American College of Veterinary Surgeons.

  13. 16th International Conference on Medical Image Computing and Computer Assisted Intervention

    CERN Document Server

    Klinder, Tobias; Li, Shuo

    2014-01-01

    This book contains the full papers presented at the MICCAI 2013 workshop Computational Methods and Clinical Applications for Spine Imaging. The workshop brought together researchers representing several fields, such as Biomechanics, Engineering, Medicine, Mathematics, Physics and Statistics. The works included in this book present and discuss new trends in those fields, using several methods and techniques to address more efficiently different and timely applications involving signal and image acquisition, image processing and analysis, image segmentation, image registration and fusion, computer simulation, image-based modelling, simulation and surgical planning, and image-guided, robot-assisted surgery and image-based diagnosis.

  14. Mixed-Language High-Performance Computing for Plasma Simulations

    Directory of Open Access Journals (Sweden)

    Quanming Lu

    2003-01-01

    Full Text Available Java is receiving increasing attention as the most popular platform for distributed computing. However, programmers are still reluctant to embrace Java as a tool for writing scientific and engineering applications due to its still noticeable performance drawbacks compared with other programming languages such as Fortran or C. In this paper, we present a hybrid Java/Fortran implementation of a parallel particle-in-cell (PIC) algorithm for plasma simulations. In our approach, the time-consuming components of this application are designed and implemented as Fortran subroutines, while the less calculation-intensive components, usually involved in building the user interface, are written in Java. The two types of software modules have been glued together using the Java Native Interface (JNI). Our mixed-language PIC code was tested and its performance compared with pure Java and Fortran versions of the same algorithm on a Sun E6500 SMP system and a Linux cluster of Pentium III machines.

  15. Computational Fluid Dynamics and Building Energy Performance Simulation

    DEFF Research Database (Denmark)

    Nielsen, Peter Vilhelm; Tryggvason, T.

    1998-01-01

    An interconnection between a building energy performance simulation program and a Computational Fluid Dynamics (CFD) program for room air distribution is introduced to improve the predictions of both the energy consumption and the indoor environment. The building energy performance simulation program requires a detailed description of the energy flow in the air movement, which can be obtained by a CFD program. The paper describes an energy consumption calculation in a large building, where the building energy simulation program is modified by CFD predictions of the flow between three zones connected by open areas with pressure- and buoyancy-driven air flow. The two programs are interconnected in an iterative procedure. The paper also shows an evaluation of the air quality in the main area of the buildings based on CFD predictions. It is shown that an interconnection between a CFD

  16. Knowledge construction about port performance evaluation: An international literature analysis

    Directory of Open Access Journals (Sweden)

    Karine Somensi

    2017-10-01

    Full Text Available Purpose: This study aims to identify and analyze the characteristics of the international scientific research addressing the literature fragment on Port Performance Evaluation, in order to determine whether the notion of Performance Evaluation, as an area of knowledge, is theoretically aligned with its practical counterpart, Port Performance Evaluation. Design/Methodology/Approach: In its approach to the problem, this paper makes use of qualitative research, since it analyzes the characteristics of the Bibliographical Portfolio related to Port Performance Evaluation. The strategy adopted was action research, in which the authors selected the Bibliographical Portfolio through their own analysis and interpretation. Findings: From the analyzed literature fragment it was possible to identify some misalignment between what has been pointed out in the literature and the management practices in the port sector. This discrepancy refers to management practices that are ignored by port managers, which implies lost opportunities and may even jeopardize the organization's performance. Research limitations/implications: The literature search was restricted to (i) articles written in the English language and published in scientific journals indexed in the selected databases; (ii) articles published after the year 2000; (iii) the generation of knowledge based on the characteristics selected by the researchers; and (iv) the analysis of the Bibliographical Portfolio articles according to the judgment and interpretation of the authors. It is suggested that future work expand this research to other databases, other languages and other features, and continue this investigation by developing the 'systemic analysis' and 'identifying research opportunities' stages through ProKnow-C. Originality/value: Although two similar works have been developed in the same area of research in 2015, the results achieved have

  17. Performance Evaluation of Resource Management in Cloud Computing Environments.

    Science.gov (United States)

    Batista, Bruno Guazzelli; Estrella, Julio Cezar; Ferreira, Carlos Henrique Gomes; Filho, Dionisio Machado Leite; Nakamura, Luis Hideo Vasconcelos; Reiff-Marganiec, Stephan; Santana, Marcos José; Santana, Regina Helena Carlucci

    2015-01-01

    Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.

  18. Performance of scientific computing platforms with MCNP4B

    International Nuclear Information System (INIS)

    McLaughlin, H.E.; Hendricks, J.S.

    1998-01-01

    Several computing platforms were evaluated with the MCNP4B Monte Carlo radiation transport code. The DEC AlphaStation 500/500 was the fastest to run MCNP4B. Compared to the HP 9000-735, the fastest platform 4 yr ago, the AlphaStation is 335% faster, the HP C180 is 133% faster, the SGI Origin 2000 is 82% faster, the Cray T94/4128 is 1% faster, the IBM RS/6000-590 is 93% as fast, the DEC 3000/600 is 81% as fast, the Sun Sparc20 is 57% as fast, the Cray YMP 8/8128 is 57% as fast, the Sun Sparc5 is 33% as fast, and the Sun Sparc2 is 13% as fast. All results presented are reproducible and allow for comparison to computer platforms not included in this study. Timing studies are seen to be very problem dependent. The performance gains resulting from advances in software were also investigated. Various compilers and operating systems were seen to have a modest impact on performance, whereas hardware improvements have resulted in a factor of 4 improvement. MCNP4B also ran approximately as fast as MCNP4A

  19. Performance Evaluation of Resource Management in Cloud Computing Environments.

    Directory of Open Access Journals (Sweden)

    Bruno Guazzelli Batista

    Full Text Available Cloud computing is a computational model in which resource providers can offer on-demand services to clients in a transparent way. However, to be able to guarantee quality of service without limiting the number of accepted requests, providers must be able to dynamically manage the available resources so that they can be optimized. This dynamic resource management is not a trivial task, since it involves meeting several challenges related to workload modeling, virtualization, performance modeling, deployment and monitoring of applications on virtualized resources. This paper carries out a performance evaluation of a module for resource management in a cloud environment that includes handling available resources during execution time and ensuring the quality of service defined in the service level agreement. An analysis was conducted of different resource configurations to define which dimension of resource scaling has a real influence on client requests. The results were used to model and implement a simulated cloud system, in which the allocated resource can be changed on-the-fly, with a corresponding change in price. In this way, the proposed module seeks to satisfy both the client by ensuring quality of service, and the provider by ensuring the best use of resources at a fair price.

  20. Play for Performance: Using Computer Games to Improve Motivation and Test-Taking Performance

    Science.gov (United States)

    Dennis, Alan R.; Bhagwatwar, Akshay; Minas, Randall K.

    2013-01-01

    The importance of testing, especially certification and high-stakes testing, has increased substantially over the past decade. Building on the "serious gaming" literature and the psychology "priming" literature, we developed a computer game designed to improve test-taking performance using psychological priming. The game primed…

  1. INTDOS: a computer code for estimating internal radiation dose using recommendations of the International Commission on Radiological Protection

    International Nuclear Information System (INIS)

    Ryan, M.T.

    1981-09-01

    INTDOS is a user-oriented computer code designed to calculate estimates of internal radiation dose commitment resulting from the acute inhalation intake of various radionuclides. It is designed so that users unfamiliar with the details of such calculations can obtain results by answering a few questions regarding the exposure case. The user must identify the radionuclide name, solubility class, particle size, time since exposure, and the measured lung burden. INTDOS calculates the fractions of the lung burden remaining at time, t, postexposure, considering the solubility class and particle size information. From the fraction remaining in the lung at time, t, the quantity inhaled is estimated. Radioactive decay is accounted for in the estimate. Finally, effective committed dose equivalents to various organs and tissues of the body are calculated using inhalation committed dose factors presented by the International Commission on Radiological Protection (ICRP). This computer code was written for execution on a Digital Equipment Corporation PDP-10 computer and is written in Fortran IV. A flow chart and example calculations are discussed in detail to aid the user who is unfamiliar with computer operations
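
    As a rough illustration of the arithmetic INTDOS performs, the sketch below back-calculates an acute intake from a measured lung burden and converts it to a committed dose. It uses a single-exponential retention model for brevity, whereas the actual code follows the ICRP lung model; the half-lives and the dose coefficient are placeholders, not ICRP values.

```python
import math

def intake_from_lung_burden(lung_burden_bq, t_days,
                            retention_halftime_d, decay_halftime_d):
    """Back-calculate the acute intake (Bq) from a lung burden measured
    t_days after exposure, correcting for biological clearance and
    radioactive decay (simplified one-compartment model)."""
    biological = math.exp(-math.log(2.0) * t_days / retention_halftime_d)
    physical = math.exp(-math.log(2.0) * t_days / decay_halftime_d)
    return lung_burden_bq / (biological * physical)

intake_bq = intake_from_lung_burden(500.0, t_days=30,
                                    retention_halftime_d=500.0,
                                    decay_halftime_d=8040.0)
dose_sv = intake_bq * 1.6e-8  # hypothetical committed dose coefficient [Sv/Bq]
print(intake_bq, dose_sv)
```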

  2. 4th International Conference on Computational Collective Intel- ligence Technologies and Applications (ICCCI 2012)

    CERN Document Server

    Trawiński, Bogdan; Katarzyniak, Radosław; Jo, Geun-Sik

    2013-01-01

    The book consists of 35 extended chapters which have been selected and invited from the submissions to the 4th International Conference on Computational Collective Intelligence Technologies and Applications (ICCCI 2012) held on November 28-30, 2012 in Ho Chi Minh City, Vietnam. The book is organized into six parts, which are semantic web and ontologies, social networks and e-learning, agent and multiagent systems, data mining methods and applications, soft computing, and optimization and control, respectively. All chapters in the book discuss theoretical and practical issues connected with computational collective intelligence and related technologies. The editors hope that the book can be useful for graduate and Ph.D. students in Computer Science, in particular participants in courses on Soft Computing, Multiagent Systems, and Data Mining. This book can be also useful for researchers working on the concept of computational collective intelligence in artificial populations. It is the hope of the editors that ...

  3. Department of Energy Mathematical, Information, and Computational Sciences Division: High Performance Computing and Communications Program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-11-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, The DOE Program in HPCC), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW).

  4. Department of Energy: MICS (Mathematical Information, and Computational Sciences Division). High performance computing and communications program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-06-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, "The DOE Program in HPCC"), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW). The information pointed to by the URL is updated frequently, and the interested reader is urged to access the WWW for the latest information.

  5. Integrated modeling tool for performance engineering of complex computer systems

    Science.gov (United States)

    Wright, Gary; Ball, Duane; Hoyt, Susan; Steele, Oscar

    1989-01-01

    This report summarizes Advanced System Technologies' accomplishments on the Phase 2 SBIR contract NAS7-995. The technical objectives of the report are: (1) to develop an evaluation version of a graphical, integrated modeling language according to the specification resulting from the Phase 2 research; and (2) to determine the degree to which the language meets its objectives by evaluating ease of use, utility of two sets of performance predictions, and the power of the language constructs. The technical approach followed to meet these objectives was to design, develop, and test an evaluation prototype of a graphical, performance prediction tool. The utility of the prototype was then evaluated by applying it to a variety of test cases found in the literature and in AST case histories. Numerous models were constructed and successfully tested. The major conclusion of this Phase 2 SBIR research and development effort is that complex, real-time computer systems can be specified in a non-procedural manner using combinations of icons, windows, menus, and dialogs. Such a specification technique provides an interface that system designers and architects find natural and easy to use. In addition, PEDESTAL's multiview approach provides system engineers with the capability to perform the trade-offs necessary to produce a design that meets timing performance requirements. Sample system designs analyzed during the development effort showed that models could be constructed in a fraction of the time required by non-visual system design capture tools.

  6. Multi-Language Programming Environments for High Performance Java Computing

    Directory of Open Access Journals (Sweden)

    Vladimir Getov

    1999-01-01

    Full Text Available Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community's reliance on established scientific packages. As a consequence, programmers of high-performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java-to-C Interface (JCI) tool, which provides application programmers wishing to use Java with immediate accessibility to existing scientific packages. The JCI tool also facilitates rapid development and reuse of existing code. These benefits are provided at minimal cost to the programmer. While beneficial to the programmer, the additional advantages of mixed-language programming in terms of application performance and portability are addressed in detail within the context of this paper. In addition, we discuss how the JCI tool complements other ongoing projects such as IBM's High-Performance Compiler for Java (HPCJ) and IceT's metacomputing environment.

  7. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools

  8. Performance characteristics of a Kodak computed radiography system.

    Science.gov (United States)

    Bradford, C D; Peppler, W W; Dobbins, J T

    1999-01-01

    The performance characteristics of a photostimulable phosphor based computed radiographic (CR) system were studied. The modulation transfer function (MTF), noise power spectra (NPS), and detective quantum efficiency (DQE) of the Kodak Digital Science computed radiography (CR) system (Eastman Kodak Co.-model 400) were measured and compared to previously published results of a Fuji based CR system (Philips Medical Systems-PCR model 7000). To maximize comparability, the same measurement techniques and analysis methods were used. The DQE at four exposure levels (30, 3, 0.3, 0.03 mR) and two plate types (standard and high resolution) were calculated from the NPS and MTF measurements. The NPS was determined from two-dimensional Fourier analysis of uniformly exposed plates. The presampling MTF was determined from the Fourier transform (FT) of the system's finely sampled line spread function (LSF) as produced by a narrow slit. A comparison of the slit type ("beveled edge" versus "straight edge") and its effect on the resulting MTF measurements was also performed. The results show that both systems are comparable in resolution performance. The noise power studies indicated a higher level of noise for the Kodak images (approximately 20% at the low exposure levels and 40%-70% at higher exposure levels). Within the clinically relevant exposure range (0.3-3 mR), the resulting DQE for the Kodak plates ranged between 20%-50% lower than for the corresponding Fuji plates. Measurements of the presampling MTF with the two slit types have shown that a correction factor can be applied to compensate for transmission through the relief edges.
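
    For readers unfamiliar with these metrics, the three quantities are tied together by the standard relation DQE(f) = G^2 MTF(f)^2 / (q NPS(f)), where q is the incident photon fluence and G the large-area system gain. The toy computation below only illustrates that relation; all arrays are synthetic, not the measured Kodak or Fuji data.

```python
import numpy as np

f = np.linspace(0.05, 5.0, 100)     # spatial frequency [cycles/mm]
mtf = np.exp(-f / 2.5)              # made-up presampling MTF
nps = 1e-6 * (1.0 + 0.2 * f)        # made-up noise power spectrum
q = 2.5e5                           # incident photons per mm^2 (illustrative)
gain = 1.0                          # large-area response, normalized

dqe = gain**2 * mtf**2 / (q * nps)  # detective quantum efficiency vs frequency
print(dqe[:3])
```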

  9. 8th International Conference on Bio-Inspired Computing : Theories and Applications

    CERN Document Server

    Pan, Linqiang; Fang, Xianwen

    2013-01-01

    International Conference on Bio-Inspired Computing: Theories and Applications (BIC-TA) is one of the flagship conferences on Bio-Computing, bringing together the world’s leading scientists from different areas of Natural Computing. Since 2006, the conferences have taken place at Wuhan (2006), Zhengzhou (2007), Adelaide (2008), Beijing (2009), Liverpool & Changsha (2010), Malaysia (2011) and India (2012). Following the successes of previous events, the 8th conference is organized and hosted by Anhui University of Science and Technology in China. This conference aims to provide a high-level international forum that researchers with different backgrounds and who are working in the related areas can use to present their latest results and exchange ideas. Additionally, the growing trend in Emergent Systems has resulted in the inclusion of two other closely related fields in the BIC-TA 2013 event, namely Complex Systems and Computational Neuroscience. These proceedings are intended for researchers in the fiel...

  10. Explaining Organizational Export Performance by Single and Combined International Business Competencies

    NARCIS (Netherlands)

    Birru, Worku Tuffa; Runhaar, Piety; Zaalberg, Ruud; Lans, Thomas; Mulder, Martin

    2018-01-01

    This study explores relationships between export performance and international business competencies (international orientation, export market orientation and international entrepreneurial orientation), and interactions between the competencies. Data from on-site structured interviews with 159

  11. International Conference on Artificial Intelligence and Evolutionary Computations in Engineering Systems

    CERN Document Server

    Vijayakumar, K; Panigrahi, Bijaya; Das, Swagatam

    2017-01-01

    The volume is a collection of high-quality peer-reviewed research papers presented in the International Conference on Artificial Intelligence and Evolutionary Computation in Engineering Systems (ICAIECES 2016) held at SRM University, Chennai, Tamilnadu, India. This conference is an international forum for industry professionals and researchers to deliberate and state their research findings, discuss the latest advancements and explore the future directions in the emerging areas of engineering and technology. The book presents original work and novel ideas, information, techniques and applications in the field of communication, computing and power technologies.

  12. International Conference on Artificial Intelligence and Evolutionary Computations in Engineering Systems

    CERN Document Server

    Bhaskar, M; Panigrahi, Bijaya; Das, Swagatam

    2016-01-01

    The book is a collection of high-quality peer-reviewed research papers presented in the first International Conference on Artificial Intelligence and Evolutionary Computations in Engineering Systems (ICAIECES 2015) held at Velammal Engineering College (VEC), Chennai, India during 22-23 April 2015. The book discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. Researchers from academia and industry present their original work and exchange ideas, information, techniques and applications in the field of Communication, Computing and Power Technologies.

  13. Online self-report questionnaire on computer work-related exposure (OSCWE): validity and internal consistency.

    Science.gov (United States)

    Mekhora, Keerin; Jalayondeja, Wattana; Jalayondeja, Chutima; Bhuanantanondh, Petcharatana; Dusadiisariyavong, Asadang; Upiriyasakul, Rujiret; Anuraktam, Khajornyod

    2014-07-01

    To develop an online self-report questionnaire on computer work-related exposure (OSCWE) and to determine its internal consistency and its face and content validity. The online self-report questionnaire was developed to determine the risk factors related to musculoskeletal disorders in computer users. It comprised five domains: personal, work-related, work environment, physical health and psychosocial factors. The questionnaire's content was validated by an occupational medicine physician and three physical therapy lecturers involved in ergonomics teaching. Twenty-five lay people examined the feasibility of computer administration and the user-friendliness of the language. The item correlation within each domain was analyzed for internal consistency (Cronbach's alpha). The content of the questionnaire was considered congruent with the testing purposes. Eight hundred and thirty-five computer users at the PTT Exploration and Production Public Company Limited responded to the online self-report questionnaire. The internal consistency of the five domains was: personal (alpha = 0.58), work-related (alpha = 0.348), work environment (alpha = 0.72), physical health (alpha = 0.68) and psychosocial factors (alpha = 0.93). The findings suggested that the OSCWE had acceptable internal consistency for the work environment and psychosocial factor domains. The OSCWE is available for use in population-based survey research among computer office workers.
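
    Cronbach's alpha, the statistic reported for each domain above, is defined as alpha = k/(k-1) * (1 - sum(var_i)/var_total) for k items. A small self-contained computation on synthetic ratings (not the study's data) is shown below.

```python
import numpy as np

def cronbach_alpha(scores):
    """scores: 2-D array, rows = respondents, columns = items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = scores.sum(axis=1).var(ddof=1)     # variance of total score
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(100, 1))                  # shared trait
items = latent + 0.8 * rng.normal(size=(100, 6))    # six correlated items
print(round(cronbach_alpha(items), 2))              # high internal consistency
```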

  14. International Conference on Emerging Research in Electronics, Computer Science and Technology

    CERN Document Server

    Sheshadri, Holalu; Padma, M

    2014-01-01

    PES College of Engineering organized an International Conference on Emerging Research in Electronics, Computer Science and Technology (ICERECT-12) in Mandya, merging the event with the Golden Jubilee of the Institute. The Proceedings of the Conference present high-quality, peer-reviewed articles from the fields of Electronics, Computer Science and Technology. The book is a compilation of research papers on cutting-edge technologies and is targeted towards the scientific community actively involved in research activities.

  15. Computed Tomographic Morphometry of the Internal Anatomy of Mandibular Second Primary Molars

    OpenAIRE

    Kurthukoti, Ameet J; Sharma, Pranjal; Swamy, Dinesh Francis; Shashidara, R; Swamy, Elaine Barretto

    2015-01-01

    ABSTRACT Need for the study: The most important procedure for a successful endodontic treatment is the cleaning and shaping of the canal system. Understanding the internal anatomy of teeth provides valuable information to the clinician that would help achieve higher clinical success during endodontic therapy. Aims: To evaluate by computed tomography the internal anatomy of mandibular second primary molars with respect to the number of canals, cross-sectional shape of canals, cross-section...

  16. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Barker, Ashley D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bland, Arthur S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Gary, Jeff D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Hack, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; McNally, Stephen T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Rogers, James H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Smith, Brian E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Straatsma, T. P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Sukumar, Sreenivas Rangan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Thach, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Tichenor, Suzy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Wells, Jack C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility

    2016-03-01

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution Cybershake map for Southern

  17. High performance computing network for cloud environment using simulators

    OpenAIRE

    Singh, N. Ajith; Hemalatha, M.

    2012-01-01

    Cloud computing is the next generation of computing. Adopting cloud computing is as simple as signing up for a new website: the GUI that controls the cloud directly manages the hardware resources and the user's application. The difficult part of cloud computing is deployment in a real environment: it is hard to know the exact cost and resource requirements until the service is actually purchased, and whether it will support the existing applications available on traditional...

  18. High performance computing environment for multidimensional image analysis.

    Science.gov (United States)

    Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo

    2007-07-10

    The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact in microscopy applications. We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors, and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478x speedup. Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large scale experiments with massive datasets.
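
    The decomposition strategy the paper describes (each processor owning a sub-volume and communicating only with nearest neighbours) can be sketched in a few lines of mpi4py. This is an illustrative 1-D slab version with a one-voxel halo, not the authors' Blue Gene/L code.

```python
import numpy as np
from mpi4py import MPI
from scipy.ndimage import median_filter

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

HALO = 1                              # one boundary plane for a 3x3x3 kernel
local = np.random.rand(32, 128, 128)  # this rank's slab (synthetic data)

up = rank - 1 if rank > 0 else MPI.PROC_NULL
down = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Edge ranks keep replicated boundary planes: a receive from PROC_NULL
# completes immediately and leaves the buffer untouched.
top_ghost = local[:HALO].copy()
bot_ghost = local[-HALO:].copy()
comm.Sendrecv(local[:HALO].copy(), dest=up, recvbuf=bot_ghost, source=down)
comm.Sendrecv(local[-HALO:].copy(), dest=down, recvbuf=top_ghost, source=up)

# Filter the halo-padded slab, then drop the ghost planes again.
padded = np.concatenate([top_ghost, local, bot_ghost], axis=0)
filtered = median_filter(padded, size=3)[HALO:-HALO]
```

    Launched with, for example, `mpiexec -n 8 python filter3d.py`, each rank filters its own slab after a single nearest-neighbour exchange, mirroring the communication pattern described in the abstract.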

  19. Improving Software Performance in the Compute Unified Device Architecture

    Directory of Open Access Journals (Sweden)

    Alexandru PIRJAN

    2010-01-01

    Full Text Available This paper analyzes several aspects regarding the improvement of software performance for applications written in the Compute Unified Device Architecture (CUDA). We address an issue of great importance when programming a CUDA application: the Graphics Processing Unit's (GPU's) memory management through transpose kernels. We also benchmark and evaluate the performance of progressively optimizing a matrix-transposing application in CUDA. One particular interest was to research how well the optimization techniques, applied to software applications written in CUDA, scale to the latest generation of general-purpose graphics processing units (GPGPU), like the Fermi architecture implemented in the GTX480 and the previous architecture implemented in the GTX280. Lately, there has been a lot of interest in the literature for this type of optimization analysis, but none of the works so far (to the best of our knowledge) tried to validate whether the optimizations apply to a GPU from the latest Fermi architecture and how well the Fermi architecture scales to these software performance-improving techniques.
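
    A minimal example of the transpose-kernel optimization the paper benchmarks, written here in Python with Numba's CUDA dialect (a substitution for illustration; the paper's kernels are C for CUDA): stage a tile in shared memory, pad it by one column to avoid bank conflicts, and write it back with swapped block indices so both the global load and the global store are coalesced.

```python
import numpy as np
from numba import cuda, float32

TILE = 32

@cuda.jit
def transpose_tiled(a, out):
    # Stage a TILE x TILE block in shared memory; the +1 column of padding
    # avoids shared-memory bank conflicts on the transposed read.
    tile = cuda.shared.array(shape=(TILE, TILE + 1), dtype=float32)
    x = cuda.blockIdx.x * TILE + cuda.threadIdx.x
    y = cuda.blockIdx.y * TILE + cuda.threadIdx.y
    if y < a.shape[0] and x < a.shape[1]:
        tile[cuda.threadIdx.y, cuda.threadIdx.x] = a[y, x]
    cuda.syncthreads()
    # Swap block indices so that global stores are coalesced as well.
    x = cuda.blockIdx.y * TILE + cuda.threadIdx.x
    y = cuda.blockIdx.x * TILE + cuda.threadIdx.y
    if y < a.shape[1] and x < a.shape[0]:
        out[y, x] = tile[cuda.threadIdx.x, cuda.threadIdx.y]

n = 1024
a = np.arange(n * n, dtype=np.float32).reshape(n, n)
out = np.zeros((n, n), dtype=np.float32)
blocks = ((n + TILE - 1) // TILE, (n + TILE - 1) // TILE)
transpose_tiled[blocks, (TILE, TILE)](a, out)   # out now holds a.T
```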

  20. Trends in high-performance computing for engineering calculations.

    Science.gov (United States)

    Giles, M B; Reguly, I

    2014-08-13

    High-performance computing has evolved remarkably over the past 20 years, and that progress is likely to continue. However, in recent years, this progress has been achieved through greatly increased hardware complexity with the rise of multicore and manycore processors, and this is affecting the ability of application developers to achieve the full potential of these systems. This article outlines the key developments on the hardware side, both in the recent past and in the near future, with a focus on two key issues: energy efficiency and the cost of moving data. It then discusses the much slower evolution of system software, and the implications of all of this for application developers. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  1. Power/energy use cases for high performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kelly, Suzanne M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hammond, Steven [National Renewable Energy Lab. (NREL), Golden, CO (United States); Elmore, Ryan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Munch, Kristin [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    Power and Energy have been identified as a first order challenge for future extreme scale high performance computing (HPC) systems. In practice the breakthroughs will need to be provided by the hardware vendors. But to make the best use of the solutions in an HPC environment, it will likely require periodic tuning by facility operators and software components. This document describes the actions and interactions needed to maximize power resources. It strives to cover the entire operational space that an HPC system occupies. The descriptions are presented as formal use cases, as documented in the Unified Modeling Language Specification [1]. The document is intended to provide a common understanding to the HPC community of the necessary management and control capabilities. Assuming a common understanding can be achieved, the next step will be to develop a set of Application Programming Interfaces (APIs) that hardware vendors and software developers could utilize to steer power consumption.

  2. Topology and computational performance of attractor neural networks

    International Nuclear Information System (INIS)

    McGraw, Patrick N.; Menzinger, Michael

    2003-01-01

    To explore the relation between network structure and function, we studied the computational performance of Hopfield-type attractor neural nets with regular lattice, random, small-world, and scale-free topologies. The random configuration is the most efficient for storage and retrieval of patterns by the network as a whole. However, in the scale-free case retrieval errors are not distributed uniformly among the nodes. The portion of a pattern encoded by the subset of highly connected nodes is more robust and efficiently recognized than the rest of the pattern. The scale-free network thus achieves a very strong partial recognition. The implications of these findings for brain function and social dynamics are suggestive
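
    To make the storage-and-retrieval task concrete, here is a minimal Hopfield-style sketch in NumPy under standard Hebbian assumptions; the paper's networks additionally vary the connection topology (lattice, random, small-world, scale-free), which this toy version omits.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 100, 5                             # neurons, stored patterns
patterns = rng.choice([-1, 1], size=(P, N))

W = (patterns.T @ patterns) / N           # Hebbian weight matrix
np.fill_diagonal(W, 0.0)                  # no self-connections

def retrieve(cue, steps=20):
    s = cue.copy()
    for _ in range(steps):                # synchronous updates
        s = np.sign(W @ s)
        s[s == 0] = 1                     # break ties deterministically
    return s

noisy = patterns[0] * rng.choice([1, 1, 1, 1, -1], size=N)  # ~20% bits flipped
overlap = retrieve(noisy) @ patterns[0] / N                 # 1.0 = perfect recall
print(overlap)
```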

  3. Performance and Internal Flow of a Dental Air Turbine Handpiece

    Directory of Open Access Journals (Sweden)

    Yasuyuki Nishi

    2018-01-01

    Full Text Available An air turbine handpiece is a dental abrasive device that rotates at high speed and uses compressed air as the driving force. It is characterized by its small size, light weight, and painless abrading due to its high-speed rotation, but its torque is small and its noise level is high. Thus, to improve the performance of the air turbine handpiece, we conducted a performance test of an actual handpiece and a numerical analysis that modeled the whole handpiece, and we analyzed the internal flow of the handpiece. Results show that, for a constant-speed load method with a descending speed of 1 mm/min, the experimental and calculated values of torque and turbine output were consistent. When the tip of the blade was at the center of the nozzle, the torque was at its highest. This is likely because the jet from the nozzle entered the tip of the blade from a distance close enough not to reduce its speed, and exited along the blade.

  4. Four-Stroke, Internal Combustion Engine Performance Modeling

    Science.gov (United States)

    Wagner, Richard C.

    In this thesis, two models of four-stroke, internal combustion engines are created and compared. The first model predicts the intake and exhaust processes using isentropic flow equations augmented by discharge coefficients. The second model predicts the intake and exhaust processes using a compressible, time-accurate, Quasi-One-Dimensional (Q1D) approach. Both models employ the same heat release and reduced-order modeling of the cylinder charge. Both include friction and cylinder loss models so that the predicted performance values can be compared to measurements. The results indicate that the isentropic-based model neglects important fluid mechanics and returns inaccurate results. The Q1D flow model, combined with the reduced-order model of the cylinder charge, is able to capture the dominant intake and exhaust fluid mechanics and produces results that compare well with measurement. Fluid friction, convective heat transfer, piston ring and skirt friction and temperature-varying specific heats in the working fluids are all shown to be significant factors in engine performance predictions. Charge blowby is shown to play a lesser role.
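
    The first model's valve-flow treatment, isentropic nozzle equations scaled by a discharge coefficient, reduces to a short function. The sketch below uses the standard choked/subsonic orifice relations with illustrative values; it is not the thesis code.

```python
import math

def orifice_mass_flow(p_up, T_up, p_down, area, cd=0.8, gamma=1.4, R=287.0):
    """Quasi-steady mass flow rate [kg/s] through a valve opening,
    modelled as an isentropic nozzle with discharge coefficient cd."""
    r = max(p_down / p_up, 0.0)
    r_crit = (2.0 / (gamma + 1.0)) ** (gamma / (gamma - 1.0))
    if r <= r_crit:   # choked: sonic conditions at the throat
        flux = math.sqrt(gamma / (R * T_up)) * \
               (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0)))
    else:             # subsonic pressure ratio
        flux = math.sqrt(2.0 * gamma / ((gamma - 1.0) * R * T_up) *
                         (r ** (2.0 / gamma) - r ** ((gamma + 1.0) / gamma)))
    return cd * area * p_up * flux

# e.g. intake stroke: 1 bar plenum into a 0.6 bar cylinder, 3 cm^2 open area
print(orifice_mass_flow(1.0e5, 300.0, 0.6e5, 3.0e-4))
```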

  5. 2nd International Conference on Multiscale Computational Methods for Solids and Fluids

    CERN Document Server

    2016-01-01

    This volume contains the best papers presented at the 2nd ECCOMAS International Conference on Multiscale Computations for Solids and Fluids, held June 10-12, 2015. Topics dealt with include multiscale strategy for efficient development of scientific software for large-scale computations, coupled probability-nonlinear-mechanics problems and solution methods, and modern mathematical and computational setting for multi-phase flows and fluid-structure interaction. The papers consist of contributions by six experts who taught short courses prior to the conference, along with several selected articles from other participants dealing with complementary issues, covering both solid mechanics and applied mathematics.

  6. 3rd International Symposium on Big Data and Cloud Computing Challenges

    CERN Document Server

    Neelanarayanan, V

    2016-01-01

    This proceedings volume contains selected papers that were presented at the 3rd International Symposium on Big Data and Cloud Computing Challenges, 2016, held at VIT University, India on March 10 and 11. New research issues, challenges and opportunities shaping the future agenda in the field of Big Data and Cloud Computing are identified and presented throughout the book, which is intended for researchers, scholars, students, software developers and practitioners working at the forefront in their field. This book acts as a platform for exchanging ideas, setting questions for discussion, and sharing the experience in the Big Data and Cloud Computing domain.

  7. Interface for connection of a spectrometer with a remote computer by means of the internal telephony net

    International Nuclear Information System (INIS)

    Minin, V.V.; Gejst, A.G.; Larin, G.M.

    1989-01-01

    A device is described that permits recording a spectrometer's analog signals into computer memory, using the internal telephone network to connect the spectrometer with a remote computer. Two-way communication in half-duplex mode is established by the spectrometer operator. Analog-to-digital conversion of the signal is realized by means of a voltage-to-pulse-frequency converter; the pulse frequency is 3-30 kHz. The computer may be located up to 1 km away. The line signal level is ≤3 V, which does not induce noise in adjacent lines

  8. Proceedings of the Third International Conference on Trends in Information, Telecommunication and Computing

    CERN Document Server

    2013-01-01

    The Proceedings of the Third International Conference on Trends in Information, Telecommunication and Computing provides in-depth understanding of the fundamental challenges in the fields of Computational Engineering, Computer, Power Electronics, Instrumentation, Control System, and Telecommunication Technology. This book provides a broad vision for the future of research in these fields, with ideas on how to support these new technologies in current practice. Every submitted paper received a careful review from the committee, and the final accept/reject decisions were made by the co-chairs on the basis of recommendations from the committee members.

  9. International market selection and subsidiary performance : A neural network approach

    NARCIS (Netherlands)

    Brouthers, L.E.; Wilkinson, T.; Mukhopadhyay, S.; Brouthers, K.D.

    2009-01-01

    How should multinational enterprises (MNEs) select international markets? We develop a model of international market selection that adds firm-specific advantages and transaction cost considerations to previously explored target market factors based on Dunning's Eclectic Framework. Results obtained

  10. Development of an international matrix-solver prediction system on a French-Japanese international grid computing environment

    International Nuclear Information System (INIS)

    Suzuki, Yoshio; Kushida, Noriyuki; Tatekawa, Takayuki; Teshima, Naoya; Caniou, Yves; Guivarch, Ronan; Dayde, Michel; Ramet, Pierre

    2010-01-01

    The 'Research and Development of International Matrix-Solver Prediction System (REDIMPS)' project aimed at improving the TLSE sparse linear algebra expert website by establishing an international grid computing environment between Japan and France. To help users in identifying the best solver or sparse linear algebra tool for their problems, we have developed an interoperable environment between French and Japanese grid infrastructures (respectively managed by DIET and AEGIS). Two main issues were considered. The first issue is how to submit a job from DIET to AEGIS. The second issue is how to bridge the difference in security between DIET and AEGIS. To overcome these issues, we developed APIs to communicate between the different grid infrastructures by improving the client API of AEGIS. By developing a server daemon program (SeD) of DIET which behaves like an AEGIS user, DIET can call functions in AEGIS: authentication, file transfer, job submission, and so on. To intensify the security, we also developed functionalities to authenticate DIET sites and DIET users in order to access AEGIS computing resources. By this study, the set of software and computers available within TLSE to find an appropriate solver is enlarged over France (DIET) and Japan (AEGIS). (author)

  11. Effect of Preparation Depth on the Marginal and Internal Adaptation of Computer-aided Design/Computer-assisted Manufacture Endocrowns.

    Science.gov (United States)

    Gaintantzopoulou, M D; El-Damanhoury, H M

    The aim of the study was to evaluate the effect of preparation depth and intraradicular extension on the marginal and internal adaptation of computer-aided design/computer-assisted manufacture (CAD/CAM) endocrown restorations. Standardized preparations were made in resin endodontic tooth models (Nissin Dental), with an intracoronal preparation depth of 2 mm (group H2), with extra 1- (group H3) or 2-mm (group H4) intraradicular extensions in the root canals (n=12). Vita Enamic polymer-infiltrated ceramic-network material endocrowns were fabricated using the CEREC AC CAD/CAM system and were seated on the prepared teeth. Specimens were evaluated by microtomography. Horizontal and vertical tomographic sections were recorded and reconstructed by using the CTSkan software (TView v1.1, Skyscan). The surface/void volume (S/V) in the region of interest was calculated. Marginal gap (MG), absolute marginal discrepancy (MD), and internal marginal gap were measured at various measuring locations and calculated in microscale (μm). Marginal and internal discrepancy data (μm) were analyzed with nonparametric Kruskal-Wallis analysis of variance by ranks with Dunn's post hoc, whereas S/V data were analyzed by one-way analysis of variance and Bonferroni multiple comparisons (α=0.05). Significant differences were found in MG, MD, and internal gap width values between the groups, with H2 showing the lowest values of all groups. S/V calculations presented significant differences between H2 and the other two groups (H3 and H4) tested, with H2 again showing the lowest values. Increasing the intraradicular extension of endocrown restorations increased the marginal and internal gaps of endocrown restorations.

  12. 11th International Conference on P2P, Parallel, Grid, Cloud and Internet Computing

    CERN Document Server

    Barolli, Leonard; Amato, Flora

    2017-01-01

    P2P, Grid, Cloud and Internet computing technologies have rapidly become established as breakthrough paradigms for solving complex problems by enabling the aggregation and sharing of an increasing variety of distributed computational resources at large scale. The aim of this volume is to provide the latest research findings, innovative research results, methods and development techniques, from both theoretical and practical perspectives, related to P2P, Grid, Cloud and Internet computing, as well as to reveal synergies among such large scale computing paradigms. This proceedings volume presents the results of the 11th International Conference on P2P, Parallel, Grid, Cloud And Internet Computing (3PGCIC-2016), held November 5-7, 2016, at Soonchunhyang University, Asan, Korea.

  13. Fifth International Conference on Innovations in Bio-Inspired Computing and Applications

    CERN Document Server

    Abraham, Ajith; Snášel, Václav

    2014-01-01

    This volume of Advances in Intelligent Systems and Computing contains accepted papers presented at IBICA2014, the 5th International Conference on Innovations in Bio-inspired Computing and Applications. The aim of IBICA 2014 was to provide a platform for world research leaders and practitioners to discuss the full spectrum of current theoretical developments, emerging technologies, and innovative applications of Bio-inspired Computing. Bio-inspired Computing remains one of the most exciting research areas, and it is continuously demonstrating exceptional strength in solving complex real life problems. The main driving force of the conference was to further explore the intriguing potential of Bio-inspired Computing. IBICA 2014 was held in Ostrava, Czech Republic and hosted by the VSB - Technical University of Ostrava.

  14. 16th International workshop on Advanced Computing and Analysis Techniques in physics (ACAT)

    CERN Document Server

    Lokajicek, M; Tumova, N

    2015-01-01

    16th International Workshop on Advanced Computing and Analysis Techniques in physics (ACAT). The ACAT workshop series, formerly AIHENP (Artificial Intelligence in High Energy and Nuclear Physics), was created back in 1990. Its main purpose is to bring together researchers working on computing in physics research, from both the physics and computer science sides, and give them a chance to communicate with each other. It has established bridges between physics and computer science research, facilitating advances in our understanding of the Universe at its smallest and largest scales. With the Large Hadron Collider and many astronomy and astrophysics experiments collecting larger and larger amounts of data, such bridges are needed now more than ever. The 16th edition of ACAT aims to bring related researchers together, once more, to explore and confront the boundaries of computing, automatic data analysis and theoretical calculation technologies. It will create a forum for exchanging ideas among the fields an...

  15. 4th International Conference on Innovations in Bio-Inspired Computing and Applications

    CERN Document Server

    Krömer, Pavel; Snášel, Václav

    2014-01-01

    This volume of Advances in Intelligent Systems and Computing contains accepted papers presented at IBICA2013, the 4th International Conference on Innovations in Bio-inspired Computing and Applications. The aim of IBICA 2013 was to provide a platform for world research leaders and practitioners, to discuss the full spectrum of current theoretical developments, emerging technologies, and innovative applications of Bio-inspired Computing. Bio-inspired Computing is currently one of the most exciting research areas, and it is continuously demonstrating exceptional strength in solving complex real life problems. The main driving force of the conference is to further explore the intriguing potential of Bio-inspired Computing. IBICA 2013 was held in Ostrava, Czech Republic and hosted by the VSB - Technical University of Ostrava.

  16. Ergonomic guidelines for using notebook personal computers. Technical Committee on Human-Computer Interaction, International Ergonomics Association.

    Science.gov (United States)

    Saito, S; Piccoli, B; Smith, M J; Sotoyama, M; Sweitzer, G; Villanueva, M B; Yoshitake, R

    2000-10-01

    In the 1980's, the visual display terminal (VDT) was introduced in workplaces of many countries. Soon thereafter, an upsurge in reported cases of related health problems, such as musculoskeletal disorders and eyestrain, was seen. Recently, the flat panel display or notebook personal computer (PC) became the most remarkable feature in modern workplaces with VDTs and even in homes. A proactive approach must be taken to avert foreseeable ergonomic and occupational health problems from the use of this new technology. Because of its distinct physical and optical characteristics, the ergonomic requirements for notebook PCs in terms of machine layout, workstation design, lighting conditions, among others, should be different from the CRT-based computers. The Japan Ergonomics Society (JES) technical committee came up with a set of guidelines for notebook PC use following exploratory discussions that dwelt on its ergonomic aspects. To keep in stride with this development, the Technical Committee on Human-Computer Interaction under the auspices of the International Ergonomics Association worked towards the international issuance of the guidelines. This paper unveils the result of this collaborative effort.

  17. A collaborative brain-computer interface for improving human performance.

    Directory of Open Access Journals (Sweden)

    Yijun Wang

    Full Text Available Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) have been studied since the 1970s. Currently, the main focus of BCI research lies on clinical use, which aims to provide a new communication channel to patients with motor disabilities to improve their quality of life. However, BCI technology can also be used to improve human performance for normal healthy users. Although this application has been proposed for a long time, little progress has been made in real-world practice due to the technical limits of EEG. To overcome the bottleneck of low single-user BCI performance, this study proposes a collaborative paradigm to improve overall BCI performance by integrating information from multiple users. To test the feasibility of a collaborative BCI, this study quantitatively compares the classification accuracies of collaborative and single-user BCIs applied to the EEG data collected from 20 subjects in a movement-planning experiment. This study also explores three different methods for fusing and analyzing EEG data from multiple subjects: (1) event-related potential (ERP) averaging, (2) feature concatenating, and (3) voting. In a demonstration system using the voting method, the classification accuracy of predicting movement directions (reaching left vs. reaching right) was enhanced substantially from 66% to 80%, 88%, 93%, and 95% as the number of subjects increased from 1 to 5, 10, 15, and 20, respectively. Furthermore, the decision on reaching direction could be made around 100-250 ms earlier than the subject's actual motor response by decoding the ERP activities arising mainly from the posterior parietal cortex (PPC), which are related to the processing of visuomotor transmission. Taken together, these results suggest that a collaborative BCI can effectively fuse the brain activities of a group of people to improve the overall performance of natural human behavior.
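
    The way majority voting lifts group accuracy above single-user accuracy can be reproduced qualitatively with a few lines of simulation, assuming independent voters at the reported 66% single-user accuracy. This is a toy model, not the paper's EEG analysis; real subjects' errors are correlated, so it only illustrates the trend.

```python
import numpy as np

rng = np.random.default_rng(42)
p_single = 0.66                 # single-user accuracy reported above
n_trials = 10000

def group_accuracy(n_subjects):
    # Each subject independently classifies the trial correctly with p_single.
    votes = rng.random((n_trials, n_subjects)) < p_single
    n_correct = votes.sum(axis=1)
    correct = n_correct > n_subjects / 2            # simple majority wins
    ties = n_correct * 2 == n_subjects
    correct |= ties & (rng.random(n_trials) < 0.5)  # break ties at random
    return correct.mean()

for n in (1, 5, 10, 15, 20):
    print(n, round(group_accuracy(n), 3))
```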

  18. Additive Manufacturing and High-Performance Computing: a Disruptive Latent Technology

    Science.gov (United States)

    Goodwin, Bruce

    2015-03-01

    This presentation will discuss the relationship between recent advances in Additive Manufacturing (AM) technology, High-Performance Computing (HPC) simulation and design capabilities, and related advances in Uncertainty Quantification (UQ), and then examines their impacts upon national and international security. The presentation surveys how AM accelerates the fabrication process, while HPC combined with UQ provides a fast track for the engineering design cycle. The combination of AM and HPC/UQ almost eliminates the iterative engineering design and prototype cycle, thereby dramatically reducing the cost of production and time-to-market. These methods thereby present significant benefits for US national interests, both civilian and military, in an age of austerity. Finally, considering cyber security issues and the advent of the "cloud," these disruptive, currently latent technologies may well enable proliferation and so challenge both nuclear and non-nuclear aspects of international security.

  19. Educational Technology Research Journals: "International Journal of Computer-Supported Collaborative Learning," 2006-2014

    Science.gov (United States)

    Howland, Shiloh M. J.; Martin, M. Troy; Bodily, Robert; Faulconer, Christian; West, Richard E.

    2015-01-01

    The authors analyzed all research articles from the first issue of the "International Journal of Computer-Supported Collaborative Learning" in 2006 until the second issue of 2014. They determined the research methodologies, most frequently used author-supplied keywords as well as two- and three-word phrases, and most frequently published…

  20. Foreword 3rd International Conference on Affective Computing and Intelligent Interaction - ACII 2009

    NARCIS (Netherlands)

    Cohn, Jeffrey; Cohn, Jeffrey; Nijholt, Antinus; Pantic, Maja

    2009-01-01

    It is a pleasure and an honor to have organized the Third International Conference on Affective Computing and Intelligent Interaction (ACII). The conference will be held from 10th – 12th September 2009 in Amsterdam, The Netherlands. The conference series is the premier forum for presenting research

  1. Long-term internal thoracic artery bypass graft patency and geometry assessed by multidetector computed tomography

    DEFF Research Database (Denmark)

    Zacho, Mette; Damgaard, Sune; Lilleoer, Nikolaj Thomas

    2012-01-01

    The left internal thoracic artery (LITA) undergoes vascular remodelling when used for coronary artery bypass grafting. In this study we tested the hypothesis that the extent of the LITA remodelling late after coronary artery bypass grafting assessed by multidetector computed tomography is related...

  2. International Association for the Study of Lung Cancer Computed Tomography Screening Workshop 2011 Report

    NARCIS (Netherlands)

    Field, John K.; Smith, Robert A.; Aberle, Denise R.; Oudkerk, Matthijs; Baldwin, David R.; Yankelevitz, David; Pedersen, Jesper Holst; Swanson, Scott James; Travis, William D.; Wisbuba, Ignacio I.; Noguchi, Masayuki; Mulshine, Jim L.

    The International Association for the Study of Lung Cancer (IASLC) Board of Directors convened a computed tomography (CT) Screening Task Force to develop an IASLC position statement, after the National Cancer Institute press statement from the National Lung Screening Trial showed that lung cancer

  3. FY 1995 Blue Book: High Performance Computing and Communications: Technology for the National Information Infrastructure

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — The Federal High Performance Computing and Communications HPCC Program was created to accelerate the development of future generations of high performance computers...

  4. The contribution of high-performance computing and modelling for industrial development

    CSIR Research Space (South Africa)

    Sithole, Happy

    2017-10-01

    Full Text Available High-Performance Computing and Modelling for Industrial Development, Dr Happy Sithole and Dr Onno Ubbink. Strategic context: high-performance computing (HPC) combined with machine learning and artificial intelligence present opportunities to non...

  5. Enabling Efficient Climate Science Workflows in High Performance Computing Environments

    Science.gov (United States)

    Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.

    2015-12-01

    A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks presents a myriad of challenges when running on a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project we highlight a stack of tools our team utilizes and has developed to ensure that large-scale simulation and analysis work is commonplace, providing operations that assist in everything from generation/procurement of data (HTAR/Globus) to automating publication of results to portals like the Earth Systems Grid Federation (ESGF), all while executing everything in between in a scalable environment in a task-parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases to which they have been applied.
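
    The "task-parallel way (MPI)" the authors mention typically looks like a scatter/gather over a list of inputs. The fragment below is a generic mpi4py illustration; the file names and the stand-in analysis are hypothetical, not CASCADE tooling.

```python
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

if rank == 0:
    tasks = [f"sim_output_{i}.nc" for i in range(4 * size)]  # hypothetical files
    chunks = [tasks[i::size] for i in range(size)]           # round-robin split
else:
    chunks = None

my_tasks = comm.scatter(chunks, root=0)
my_results = [(t, len(t)) for t in my_tasks]   # stand-in for a real analysis
all_results = comm.gather(my_results, root=0)
if rank == 0:
    print(sum(len(r) for r in all_results), "results ready to publish")
```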

  6. Workplace Performance of Hotel and Restaurant Management Interns of West Visayas State University, Calinog Iloilo, Philippines

    Directory of Open Access Journals (Sweden)

    Raymund B. Moreno, Ma. Nellie L. Mapa

    2014-02-01

    Full Text Available This study aimed to examine the workplace performance of Hotel and Restaurant Management practicum students of West Visayas State University – Calinog Campus, as perceived by the interns themselves and by supervisors in the different hospitality establishments in Iloilo City where students were deployed for on-the-job training under the industry-site internship/practicum program for academic year 2013-2014. The results of the study serve as the basis for a proposed education or training program to enhance the workplace performance of Hotel and Restaurant Management students. The study surveyed 35 practicum students and 23 supervisors in different hospitality establishments in Iloilo City using a survey questionnaire based on the Competency Standards for hospitality-related courses of the Commission on Higher Education (CHED Memorandum Order No. 30, series of 2006). Findings revealed that both supervisor and student respondents agreed on the satisfactory performance of the interns in terms of Higher Order Thinking Skills and gave a very satisfactory rating on Personal Qualities. With regard to Basic Skills (e.g., numerical computation, oral and written communication), the students rated themselves Very Satisfactory while supervisors gave a Satisfactory rating. The same Very Satisfactory rating was achieved on Professional Competencies by both respondents. Of the five common competencies, the Interpersonal and Technological Skills of the interns were rated Very Satisfactory by both respondents, and both gave a Satisfactory rating on Information Skills. For Resources Skills, the interns rated themselves Very Satisfactory while the supervisors rated them Satisfactory. It is recommended to continue to review and upgrade industry training in the curriculum and consistently redefine the curricular focus to meet the

  7. 6th International Conference on Innovations in Bio-Inspired Computing and Applications

    CERN Document Server

    Abraham, Ajith; Krömer, Pavel; Pant, Millie; Muda, Azah

    2016-01-01

    This volume contains the papers presented during the 6th International Conference on Innovations in Bio-Inspired Computing and Applications (IBICA 2015), which was held in Kochi, India during December 16-18, 2015. The 51 papers presented in this volume were carefully reviewed and selected. IBICA 2015 was organized to discuss the state of the art and to address various issues in the growing research field of bio-inspired computing, which is currently one of the most exciting research areas and continuously demonstrates exceptional strength in solving complex real-life problems. The volume will be a valuable reference for researchers, students and practitioners in the computational intelligence field.

  8. Performance of the internal audit department under ERP systems: empirical evidence from Taiwanese firms

    Science.gov (United States)

    Tsai, Wen-Hsien; Chen, Hui-Chiao; Chang, Jui-Chu; Leu, Jun-Der; Chao Chen, Der; Purbokusumo, Yuyun

    2015-10-01

    In this study, the performance of the internal audit department (IAD) and its contribution to a company under enterprise resource planning (ERP) systems were examined. It is anticipated that this will provide insight into the factors perceived to be crucial to a company's effectiveness. A theoretical framework was developed and tested using a sample of Taiwanese companies. Using mail survey procedures, we elicited perceptions from key internal auditors about the ERP system and auditing software, as well as their opinions concerning the IAD's effectiveness and its contribution within a company. Data were analysed using partial least squares (PLS) regression to test the hypotheses. Drawing upon a sample of Taiwanese firms, the study suggests that a firm can improve the performance of the IAD through an enterprise-wide integrated, effective ERP system and appropriate auditing software. At the same time, the performance of the IAD can also contribute significantly to the company. The results also show that investments in computer-assisted auditing techniques (CAATs) are crucial, given their strong effect on the performance of the IAD and on the contributions CAATs can make to a company.
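
    The study's PLS analysis is presumably a path model over latent survey constructs; purely as a loose illustration, the sketch below fits a plain PLS regression between simulated composite scores with scikit-learn. The construct names, sample size, and data are assumptions, not the authors' material.

        # Illustrative PLS regression on simulated survey composites (not the study's data).
        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(1)
        n = 210  # hypothetical number of responding firms

        # Predictor composites: ERP integration, ERP effectiveness, auditing-software fit.
        X = rng.normal(size=(n, 3))
        # Outcome composite: perceived IAD performance, loosely driven by the predictors.
        y = X @ np.array([0.5, 0.3, 0.4]) + rng.normal(scale=0.5, size=n)

        pls = PLSRegression(n_components=2)
        pls.fit(X, y)
        print("R^2 =", pls.score(X, y))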

  9. High Performance Computing Facility Operational Assessment, CY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Bland, Arthur S Buddy [ORNL; Boudwin, Kathlyn J. [ORNL; Hack, James J [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL; Hudson, Douglas L [ORNL

    2012-02-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in the physical sciences and in areas of the biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources; of these, we report in this review the 300 that are consistent with the guidance provided. Scientific achievements by OLCF users span all scales, from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions among all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of the ligno-cellulosic material used in bioethanol production. This year, Jaguar was used for billion-cell CFD calculations to develop shock wave compression turbomachinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbomachinery to learn what efficiencies the traditional steady-flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation

  10. Determinants of International Outsourcing and Performance: A Relational Exchange Perspective

    Directory of Open Access Journals (Sweden)

    Carl E. Dresden

    2013-07-01

    Full Text Available Prescriptive discussions regarding international outsourcing abound, but few models of the antecedents leading to such sourcing have previously been proposed. This empirical study utilizes the relational exchange framework proposed by Dwyer, Schurr and Oh (1987) and builds a conceptual model of the firm's use of international outsourcing within the context of relational exchange. The model concentrates on the aspects of relational exchange that are crucial antecedents to successful international outsourcing and empirically demonstrates how both economic and social factors exert their influence. The resulting model demonstrates how cooperation, rather than competition, leads to successful international outsourcing.

  11. Fenestration system energy performance research, implementation, and international harmonization

    Energy Technology Data Exchange (ETDEWEB)

    McGowan, Raymond F [National Fenestration Rating Council, Greenbelt, MD (United States)

    2014-12-23

    The research conducted by the NFRC and its contractors adds significantly to the understanding of several areas of investigation. NFRC enables manufacturers to rate fenestration energy performance to comply with building energy codes, participate in ENERGY STAR, and compete fairly. NFRC continuously seeks to improve its ratings and to simplify the rating process. Several research projects investigated rating improvement potential, such as: • Complex Product VT Rating Research • Window 6 and Therm 6 Validation Research Project. Conclusions from these research projects led to important changes and increased confidence in the existing NFRC rating process. Conclusions from the Window 6/Therm 6 project will enable window manufacturers to rate an expanded array of products and improve existing ratings. Some research led to an improved new rating method called the Component Modeling Approach (CMA). A primary goal of the CMA was simplification of the commercial energy rating process to increase participation and make the commercial industry more competitive and code compliant. The project below contributed towards CMA development: • Component Modeling Approach Condensation Resistance Research. NFRC continues to implement the Component Modeling Approach program. The program includes the CMA software tool, CMAST, and several procedural documents to govern the certification process. This significant accomplishment was a response to the commercial fenestration industry's need for a simplification of the present NFRC energy rating method (known as site-built). To date, most commercial fenestration is self-rated by a variety of techniques. The CMA enables commercial fenestration manufacturers to rate according to NFRC 100/200, as most commercial energy codes require. International Harmonization: NFRC achieved significant international harmonization success by continuing its licensing agreements with the Australian Fenestration

  12. Internal Fiber Structure of a High-Performing, Additively Manufactured Injection Molding Insert

    DEFF Research Database (Denmark)

    Hofstätter, Thomas; Baier, Sina; Trinderup, Camilla H.

    A standard mold is equipped with additively manufactured inserts in a rectangular shape produced with vat photopolymerization. While the lifetime compared to conventional materials such as brass, steel, and aluminum is reduced, the prototyping and design phase can be shortened significantly...... by using flexible and cost-effective additive manufacturing technologies. Higher production volumes still exceed the capability of additively manufactured inserts, which are overruled by the stronger performance of less flexible but mechanically advanced materials. In this contribution, the internal...... structure of a high-performing, fiber-reinforced injection molding insert has been analyzed. The insert reached a statistically proven and reproducible lifetime of 4,500 shots, which significantly outperforms any other previously published additively manufactured inserts. Computed tomography, tensile tests...

  13. Internal Flow Simulation of Enhanced Performance Solid Rocket Booster for the Space Transportation System

    Science.gov (United States)

    Ahmad, Rashid A.; McCool, Alex (Technical Monitor)

    2001-01-01

    An enhanced performance solid rocket booster concept for the Space Shuttle system has been proposed. The concept booster will have strong commonality with the existing, proven, reliable four-segment Space Shuttle Reusable Solid Rocket Motors (RSRM), with individual component designs (nozzle, insulator, etc.) optimized for a five-segment configuration. Increased performance is desirable to further enhance safety/reliability and/or increase payload capability. The performance increase will be achieved by adding a fifth propellant segment to the current four-segment booster and opening the throat to accommodate the increased mass flow while maintaining current pressure levels. One development concept under consideration is the static test of a "standard" RSRM with a fifth propellant segment inserted and appropriate minimum motor modifications. Feasibility studies are being conducted to assess the potential for any significant departure in component performance/loading from the well-characterized RSRM. An area of concern is the aft motor (submerged nozzle inlet, aft dome, etc.), where the altered internal flow resulting from the performance-enhancing features (25% increase in mass flow rate, higher Mach numbers, modified subsonic nozzle contour) may result in increased component erosion and char. To assess this issue and to define the minimum design changes required to successfully static test a fifth-segment RSRM engineering test motor, internal flow studies have been initiated. Internal aero-thermal environments were quantified in terms of conventional convective heating and discrete-phase alumina particle impact/concentration and accretion calculations via Computational Fluid Dynamics (CFD) simulation. Two sets of comparative CFD simulations of the RSRM and the five-segment (IBM) concept motor were conducted with the commercial CFD code FLUENT. The first simulation involved a two-dimensional axisymmetric model of the full motor, initial grain RSRM. The second set of analyses

  14. Comparing the performance of SIMD computers by running large air pollution models

    DEFF Research Database (Denmark)

    Brown, J.; Hansen, Per Christian; Wasniewski, J.

    1996-01-01

    To compare the performance and use of three massively parallel SIMD computers, we implemented a large air pollution model on these computers. Using a realistic large-scale model, we gained detailed insight into the performance of the computers involved when used to solve large-scale scientific...... problems that involve several types of numerical computations. The computers used in our study are the Connection Machines CM-200 and CM-5, and the MasPar MP-2216...

  15. Proceedings of the 2011 International Conference on Informatics, Cybernetics, and Computer Engineering

    CERN Document Server

    2012-01-01

    The volume includes a set of selected papers extended and revised from the International Conference on Informatics, Cybernetics, and Computer Engineering. An information system (IS) is any combination of information technology and people's activities using that technology to support operations, management, and decision making. Computer engineering is a discipline that integrates several fields of electrical engineering and computer science required to develop computer systems. ICCE 2011 Volume 2 aims to provide a forum for researchers, educators, engineers, and government officials involved in the general areas of information systems and software engineering to disseminate their latest research results and exchange views on the future research directions of these fields. 81 high-quality papers are included in the volume. Each paper has been peer-reviewed by at least two program committee members and selected by the volume editor.

  16. 6th International Conference on Practical Applications of Computational Biology & Bioinformatics

    CERN Document Server

    Luscombe, Nicholas; Fdez-Riverola, Florentino; Rodríguez, Juan; Practical Applications of Computational Biology & Bioinformatics

    2012-01-01

    The growth in the Bioinformatics and Computational Biology fields over the last few years has been remarkable. The analysis of the datasets produced by Next Generation Sequencing needs new algorithms and approaches from fields such as Databases, Statistics, Data Mining, Machine Learning, Optimization, Computer Science and Artificial Intelligence. Systems Biology has also been emerging as an alternative to the reductionist view that dominated biological research in recent decades. This book presents the results of the 6th International Conference on Practical Applications of Computational Biology & Bioinformatics, held at the University of Salamanca, Spain, 28-30 March 2012, which brought together interdisciplinary scientists with strong backgrounds in the biological and computational sciences.

  17. HEPLIB '91: International users meeting on the support and environments of high energy physics computing

    International Nuclear Information System (INIS)

    Johnstad, H.

    1991-01-01

    The purpose of this meeting is to discuss the current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software, Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, data base systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, data base systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards

  18. Proceedings of the 2011 International Conference on Informatics, Cybernetics, and Computer Engineering

    CERN Document Server

    2012-01-01

    The volume includes a set of selected papers extended and revised from the International Conference on Informatics, Cybernetics, and Computer Engineering. Intelligent control is a class of control techniques that use various AI computing approaches such as neural networks, Bayesian probability, fuzzy logic, machine learning, evolutionary computation and genetic algorithms. Intelligent control can be divided into the following major sub-domains: neural network control, Bayesian control, fuzzy (logic) control, neuro-fuzzy control, expert systems, genetic control, and intelligent agents (cognitive/conscious control). New control techniques are created continuously as new models of intelligent behavior are created and computational methods are developed to support them. Networks may be classified according to a wide variety of characteristics, such as the medium used to transport the data, communications protocol used, scale, topology, and organizational scope. ICCE 2011 Volume 1 aims to provide a forum for researchers, educators, engi...

  19. 13th IEEE/ACIS International Conference on Computer and Information Science

    CERN Document Server

    2015-01-01

    This edited book presents scientific results of the 13th IEEE/ACIS International Conference on Computer and Information Science (ICIS 2014), which was held on June 4-6, 2014 in Taiyuan, China. The aim of this conference was to bring together researchers and scientists, businessmen and entrepreneurs, teachers, engineers, computer users, and students to discuss the numerous fields of computer science, to share their experiences and exchange new ideas and information in a meaningful way, to present research results on all aspects (theory, applications and tools) of computer and information science, and to discuss the practical challenges encountered along the way and the solutions adopted to solve them. The conference organizers selected the best papers from those accepted for presentation at the conference. The papers were chosen based on review scores submitted by members of the program committee and underwent further rigorous rounds of review. This publication captures 14 of the conference's most promis...

  20. High performance computing in power and energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Khaitan, Siddhartha Kumar [Iowa State Univ., Ames, IA (United States); Gupta, Anshul (eds.) [IBM Watson Research Center, Yorktown Heights, NY (United States)

    2013-07-01

    The twin challenge of meeting global energy demands in the face of growing economies and populations while restricting greenhouse gas emissions is one of the most daunting that humanity has ever faced. Smart electrical generation and distribution infrastructure will play a crucial role in meeting these challenges. We need to develop the capability to handle the large volumes of data generated by power system components such as PMUs, DFRs and other data acquisition devices, as well as the capacity to process these data at high resolution via multi-scale and multi-period simulations, cascading and security analysis, and the interaction between hybrid systems (electric, transport, gas, oil, coal, etc.), in order to obtain meaningful information in real time and ensure a secure, reliable and stable power grid. Advanced research on the development and implementation of market-ready, leading-edge, high-speed enabling technologies and algorithms for solving real-time, dynamic, resource-critical problems will be required for dynamic security analysis targeted towards successful implementation of Smart Grid initiatives. This book aims to bring together some of the latest research developments, as well as thoughts on future research directions, for high performance computing applications in electric power systems planning, operations, security, markets, and grid integration of alternate sources of energy.

  1. The Future of Software Engineering for High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Pope, G [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-07-16

    DOE ASCR requested that, from May through mid-July 2015, a study group identify issues and recommend solutions, from a software engineering perspective, for transitioning into the next generation of High Performance Computing. The approach used was to ask some of the DOE complex experts who will be responsible for doing this work to contribute to the study group. The technique used was to solicit elevator speeches: short, concise write-ups composed as if the author were a speaker with only a few minutes to convince a decision maker of their top issues. Pages 2-18 contain the original texts of the contributed elevator speeches and end notes identifying the 20 contributors. The study group also ranked the importance of each topic, and those scores are displayed with each topic heading. A perfect score (and highest priority) is three, two is medium priority, and one is lowest priority. The highest-scoring topic areas were software engineering and testing resources; the lowest-scoring area was compliance with DOE standards. The following two paragraphs are an elevator speech summarizing the contributed elevator speeches. Each sentence or phrase in the summary is hyperlinked to its source via a numeral embedded in the text. A one-line risk statement has also been added to each topic to allow future risk tracking and mitigation.

  2. Current configuration and performance of the TFTR computer system

    International Nuclear Information System (INIS)

    Sauthoff, N.R.; Barnes, D.J.; Daniels, R.; Davis, S.; Reid, A.; Snyder, T.; Oliaro, G.; Stark, W.; Thompson, J.R. Jr.

    1986-01-01

    Developments in the TFTR (Tokamak Fusion Test Reactor) computer support system since its startup phases are described. The early emphasis on tokamak process control has been augmented by improved physics data handling, both on-line and off-line. Data acquisition volume and rate have been increased, and data are transmitted automatically to a new VAX-based off-line data reduction system. The number of interface points has increased dramatically, as has the number of man-machine interfaces. Graphics system performance has been accelerated by the introduction of parallelism, and new features such as shadowing and device independence have been added. To support multicycle operation for neutral beam conditioning and independence, the program control system has been generalized. A status and alarm system, including calculated variables, is in the installation phase. System reliability has been enhanced by both the redesign of weaker components and the installation of a system status monitor. Development productivity has been enhanced by the addition of tools

  3. Lightweight Provenance Service for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Dong; Chen, Yong; Carns, Philip; Jenkins, John; Ross, Robert

    2017-09-09

    Provenance describes detailed information about the history of a piece of data, containing the relationships among elements such as users, processes, jobs, and workflows that contribute to the existence of data. Provenance is key to supporting many data management functionalities that are increasingly important in operations such as identifying data sources, parameters, or assumptions behind a given result; auditing data usage; or understanding details about how inputs are transformed into outputs. Despite its importance, however, provenance support is largely underdeveloped in highly parallel architectures and systems. One major challenge is the demanding requirements of providing provenance service in situ. The need to remain lightweight and to be always on often conflicts with the need to be transparent and offer an accurate catalog of details regarding the applications and systems. To tackle this challenge, we introduce a lightweight provenance service, called LPS, for high-performance computing (HPC) systems. LPS leverages a kernel instrument mechanism to achieve transparency and introduces representative execution and flexible granularity to capture comprehensive provenance with controllable overhead. Extensive evaluations and use cases have confirmed its efficiency and usability. We believe that LPS can be integrated into current and future HPC systems to support a variety of data management needs.
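
    LPS itself captures provenance through kernel instrumentation; purely to illustrate the shape of the data involved, the sketch below models a minimal provenance record relating agents, activities, and entities, loosely in the spirit of the W3C PROV data model. All names and paths are illustrative assumptions, not the LPS implementation.

        # Minimal provenance-record sketch (illustrative; not the LPS implementation).
        import time
        from dataclasses import dataclass, field

        @dataclass(frozen=True)
        class ProvRecord:
            agent: str      # e.g., user or job id that ran the process
            activity: str   # e.g., process/executable name
            action: str     # e.g., "read" or "write"
            entity: str     # e.g., file path produced or consumed
            timestamp: float = field(default_factory=time.time)

        log = []
        def record(agent, activity, action, entity):
            log.append(ProvRecord(agent, activity, action, entity))

        record("job-1842", "climate_post", "read", "/scratch/run42/output.nc")
        record("job-1842", "climate_post", "write", "/scratch/run42/summary.csv")

        # Lineage query: which inputs contributed to this job's outputs?
        inputs = [r.entity for r in log if r.action == "read" and r.agent == "job-1842"]
        print(inputs)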

  4. DEISA2: supporting and developing a European high-performance computing ecosystem

    International Nuclear Information System (INIS)

    Lederer, H

    2008-01-01

    The DEISA Consortium has deployed and operated the Distributed European Infrastructure for Supercomputing Applications. Through the EU FP7 DEISA2 project (funded for three years as of May 2008), the consortium is continuing to support and enhance the distributed high-performance computing infrastructure and its activities and services relevant for applications enabling, operation, and technologies, as these are indispensable for the effective support of computational sciences for high-performance computing (HPC). The service-provisioning model will be extended from one that supports single projects to one supporting virtual European communities. Collaborative activities will also be carried out with new European and other international initiatives. Of strategic importance is cooperation with the PRACE project, which is preparing for the installation of a limited number of leadership-class Tier-0 supercomputers in Europe. The key role and aim of DEISA will be to deliver a turnkey operational solution for a persistent European HPC ecosystem that will integrate national Tier-1 centers and the new Tier-0 centers

  5. Building a High Performance Computing Infrastructure for Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Belov, S; Kaplin, V; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2011-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers, hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of the Russian Academy of Sciences, including the Budker Institute of Nuclear Physics (BINP), the Institute of Computational Technologies (ICT), and the Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, several computing facilities are currently hosted by NSC institutes, each optimized for a particular set of tasks; the largest of these are the NSU Supercomputer Center, the Siberian Supercomputer Center (ICM and MG), and the Grid Computing Facility of BINP. Recently, a dedicated optical network with an initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share the computing resources among the research communities of the participating institutes, thus providing a common platform for building the computing infrastructure for various scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technologies based on the XEN and KVM platforms. The solution implemented was tested thoroughly within the computing environment of the KEDR detector experiment, which is being carried out at BINP, and is foreseen to be applied to the use cases of other HEP experiments in the near future.

  6. Proceedings of the 2013 International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering - M and C 2013

    International Nuclear Information System (INIS)

    2013-01-01

    The Mathematics and Computation Division of the American Nuclear Society (ANS) and the Idaho Section of the ANS hosted the 2013 International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering (M and C 2013). These proceedings contain over 250 full papers with topics ranging from reactor physics; radiation transport; materials science; nuclear fuels; core performance and optimization; reactor systems and safety; fluid dynamics; medical applications; analytical and numerical methods; algorithms for advanced architectures; and validation, verification, and uncertainty quantification

  7. Proceedings of the 2013 International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering - M and C 2013

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2013-07-01

    The Mathematics and Computation Division of the American Nuclear Society (ANS) and the Idaho Section of the ANS hosted the 2013 International Conference on Mathematics and Computational Methods Applied to Nuclear Science and Engineering (M and C 2013). These proceedings contain over 250 full papers with topics ranging from reactor physics; radiation transport; materials science; nuclear fuels; core performance and optimization; reactor systems and safety; fluid dynamics; medical applications; analytical and numerical methods; algorithms for advanced architectures; and validation, verification, and uncertainty quantification.

  8. Big Data and High-Performance Computing in Global Seismology

    Science.gov (United States)

    Bozdag, Ebru; Lefebvre, Matthieu; Lei, Wenjie; Peter, Daniel; Smith, James; Komatitsch, Dimitri; Tromp, Jeroen

    2014-05-01

    Much of our knowledge of Earth's interior is based on seismic observations and measurements. Adjoint methods provide an efficient way of incorporating 3D full wave propagation in iterative seismic inversions to enhance tomographic images and thus our understanding of processes taking place inside the Earth. Our aim is to take adjoint tomography, which has been successfully applied to regional and continental scale problems, further, to image the entire planet. This is one of the extreme imaging challenges in seismology, mainly due to the intense computational requirements and the vast amount of high-quality seismic data that can potentially be assimilated. We have started low-resolution inversions (T > 30 s and T > 60 s for body and surface waves, respectively) with a limited data set (253 carefully selected earthquakes and seismic data from permanent and temporary networks) on Oak Ridge National Laboratory's Cray XK7 "Titan" system. Recent improvements in our 3D global wave propagation solvers, such as a GPU version of the SPECFEM3D_GLOBE package, will enable us to perform higher-resolution (T > 9 s) and longer-duration (~180 min) simulations to take advantage of high-frequency body waves and major-arc surface waves, thereby improving the imbalanced ray coverage that results from the uneven global distribution of sources and receivers. Our ultimate goal is to use all earthquakes in the global CMT catalogue within the magnitude range of interest and data from all available seismic networks. To take full advantage of computational resources, we need a solid framework to manage big data sets during numerical simulations, pre-processing (i.e., data requests and quality checks, processing data, window selection, etc.) and post-processing (i.e., pre-conditioning and smoothing kernels, etc.). We address the bottlenecks in our global seismic workflow, which mainly come from heavy I/O traffic during simulations and the pre- and post-processing stages, by defining new data
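
    As a toy illustration of the band-limiting step mentioned above (e.g., restricting records to periods T > 30 s), the sketch below bandpass-filters a synthetic trace with SciPy. The sampling rate, corner periods, and the random trace are assumptions; production workflows use dedicated seismological tooling rather than this minimal filter.

        # Bandpass a synthetic seismogram to the 30-250 s period band (illustrative).
        import numpy as np
        from scipy.signal import butter, filtfilt

        fs = 1.0                          # 1 Hz sampling rate (hypothetical)
        trace = np.random.randn(3600)     # stand-in for an observed record

        # Periods of 30-250 s correspond to frequencies of 1/250-1/30 Hz.
        b, a = butter(4, [1.0 / 250.0, 1.0 / 30.0], btype="band", fs=fs)
        filtered = filtfilt(b, a, trace)  # zero-phase filtering
        print(filtered.shape)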

  9. Evaluating brain-computer interface performance using color in the P300 checkerboard speller.

    Science.gov (United States)

    Ryan, D B; Townsend, G; Gates, N A; Colwell, K; Sellers, E W

    2017-10-01

    Current Brain-Computer Interface (BCI) systems typically flash an array of items from grey to white (GW). The objective of this study was to evaluate BCI performance using uniquely colored stimuli. In addition to the GW stimuli, the current study tested two types of color stimuli (grey to color [GC] and color intensification [CI]). The main hypotheses were that, in a checkerboard paradigm, unique color stimuli would: (1) increase BCI performance over the standard GW paradigm; (2) elicit larger event-related potentials (ERPs); and (3) improve offline performance with an electrode selection algorithm (i.e., Jumpwise). Online results (n=36) showed that GC provides higher accuracy and information transfer rate than the CI and GW conditions. Waveform analysis showed that GC produced higher-amplitude ERPs than CI and GW. Information transfer rate was improved by the Jumpwise-selected channel locations in all conditions. Unique color stimuli (GC) improved BCI performance and enhanced ERPs. Jumpwise-selected electrode locations improved offline performance. These results show that, in a checkerboard paradigm, unique color stimuli increase BCI performance, are preferred by participants, and are important to the design of end-user applications; thus, they could lead to an increase in end-user performance and acceptance of BCI technology. Copyright © 2017 International Federation of Clinical Neurophysiology. All rights reserved.
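
    The abstract reports accuracy and information transfer rate (ITR) without formulas; ITR in P300 spellers is commonly computed with the Wolpaw formula, sketched below. The matrix size, accuracy, and selection rate in the example are hypothetical values, not the study's results.

        # Wolpaw information transfer rate (bits/min) for an N-item speller.
        import math

        def bits_per_selection(n, p):
            """Bits conveyed by one selection among n items at accuracy p."""
            if p >= 1.0:
                return math.log2(n)
            if p <= 0.0:
                return 0.0  # degenerate case, clipped for simplicity
            return (math.log2(n) + p * math.log2(p)
                    + (1 - p) * math.log2((1 - p) / (n - 1)))

        n_items, accuracy, selections_per_min = 36, 0.90, 4.0  # hypothetical values
        itr = bits_per_selection(n_items, accuracy) * selections_per_min
        print("ITR = %.2f bits/min" % itr)  # ~16.75 bits/min for these values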

  10. Performance studies of four-dimensional cone beam computed tomography

    International Nuclear Information System (INIS)

    Qi Zhihua; Chen Guanghong

    2011-01-01

    Four-dimensional cone beam computed tomography (4DCBCT) has been proposed to characterize the breathing motion of tumors before radiotherapy treatment. However, when the acquired cone beam projection data are retrospectively gated into several respiratory phases, the data available to reconstruct each phase are under-sampled, causing streaking artifacts in the reconstructed images. To solve the under-sampling problem and improve image quality in 4DCBCT, various methods have been developed. This paper presents performance studies of three 4DCBCT methods based on different reconstruction algorithms. The aims of this paper are to study (1) the relationship between the accuracy of the extracted motion trajectories and the data acquisition time of a 4DCBCT scan and (2) the relationship between the accuracy of the extracted motion trajectories and the number of phase bins used to sort projection data. These aims are examined for three different 4DCBCT methods: conventional filtered backprojection reconstruction (FBP), FBP with McKinnon-Bates correction (MB), and prior image constrained compressed sensing (PICCS) reconstruction. A hybrid phantom consisting of realistic chest anatomy and a moving elliptical object with known 3D motion trajectories was constructed by superimposing the analytical projection data of the moving object onto the simulated projection data from a chest CT volume dataset. CBCT scans with gantry rotation times from 1 to 4 min were simulated, and the generated projection data were sorted into 5, 10 and 20 phase bins before the different methods were used to reconstruct 4D images. The motion trajectories of the moving object were extracted using a fast free-form deformable registration algorithm. The root mean square errors (RMSE) of the extracted motion trajectories were evaluated for all simulated cases to quantitatively study performance. The results demonstrate (1) longer acquisition times result in more accurate motion delineation
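
    To make the retrospective-gating step concrete, the sketch below assigns simulated projection timestamps to respiratory phase bins and computes the RMSE of a recovered trajectory against ground truth. The breathing period, scan time, and trajectories are all assumed values, not the paper's phantom parameters.

        # Retrospective phase binning of projections and trajectory RMSE (illustrative).
        import numpy as np

        period = 4.0                            # s, assumed breathing period
        times = np.linspace(0.0, 240.0, 2400)   # projection timestamps, 4 min scan
        phase = (times % period) / period       # fraction of breathing cycle in [0, 1)

        n_bins = 10
        bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
        print(np.bincount(bins, minlength=n_bins))  # projections available per phase

        # RMSE between an extracted and a ground-truth motion trajectory (mm).
        truth = 10.0 * np.sin(2 * np.pi * times / period)
        extracted = truth + np.random.default_rng(0).normal(scale=0.5, size=times.size)
        rmse = np.sqrt(np.mean((extracted - truth) ** 2))
        print("RMSE = %.2f mm" % rmse)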

  11. The Effect of Computer Anxiety on Computer Users' Skill with Internal Locus of Control as a Moderating Variable

    Directory of Open Access Journals (Sweden)

    Fadjar Harimurti

    2017-03-01

    Full Text Available This study aims (1) to analyze the effect of computer anxiety in using accounting software on end users' skill and (2) to analyze whether internal locus of control moderates that effect. The population of the study is all students of the economics faculty majoring in accounting at UNISRI Surakarta. The study uses 46 students as a sample who met the purposive criterion of taking the computer accounting course in the odd semester of academic year 2015/2016. Hypotheses were tested using regression analysis with absolute residuals. The results show that (1) computer anxiety in using accounting software negatively affects end users' skill (students of the accounting program in the economics faculty of UNISRI Surakarta) and (2) internal locus of control moderates the effect of computer anxiety in using accounting software on end-user skill (students of the accounting program in the economics faculty of UNISRI Surakarta).
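
    The "regression analysis with absolute residuals" appears to refer to the absolute-difference moderation test; as a rough sketch under that assumption, the code below regresses skill on standardized anxiety, locus of control, and |z_anxiety - z_locus| with statsmodels. The data are simulated; only the sample size echoes the abstract.

        # Absolute-difference moderation test on simulated data (illustrative assumption).
        import numpy as np
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 46  # matches the reported sample size; the data here are simulated
        anxiety = rng.standard_normal(n)   # standardized computer-anxiety score
        locus = rng.standard_normal(n)     # standardized internal locus of control
        skill = -0.4 * anxiety + 0.3 * locus + rng.standard_normal(n)

        X = sm.add_constant(np.column_stack([anxiety, locus, np.abs(anxiety - locus)]))
        fit = sm.OLS(skill, X).fit()
        print(fit.params)  # the coefficient on the |difference| term tests moderation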

  12. Virtual international experiences in veterinary medicine: an evaluation of students' attitudes toward computer-based learning.

    Science.gov (United States)

    French, Brigitte C; Hird, David W; Romano, Patrick S; Hayes, Rick H; Nijhof, Ard M; Jongejan, Frans; Mellor, Dominic J; Singer, Randall S; Fine, Amanda E; Gay, John M; Davis, Radford G; Conrad, Patricia A

    2007-01-01

    While many studies have evaluated whether or not factual information can be effectively communicated using computer-aided tools, none has focused on establishing and changing students' attitudes toward international animal-health issues. The study reported here was designed to assess whether educational modules on an interactive computer CD elicited a change in veterinary students' interest in and attitudes toward international animal-health issues. Volunteer veterinary students at seven universities (first-year students at three universities, second-year at one, third-year at one, and fourth-year at two) were randomly assigned either an International Animal Health (IAH) CD or a control CD, ParasitoLog (PL). Participants completed a pre-CD survey to establish baseline information on interest in and attitudes toward both computers and international animal-health issues. Four weeks later, a post-CD questionnaire was distributed. On the initial survey, most students expressed an interest in working in the field of veterinary medicine in another country. Responses to the three pre-CD questions relating to attitudes toward the globalization of veterinary medicine, interest in foreign animal disease, and inclusion of a core course on international health issues in the veterinary curriculum were all positive, with average values above 3 (on a five-point scale where 5 represented strong agreement or interest). Almost all students considered it beneficial to learn about animal-health issues in other countries. After students reviewed the IAH CD, we found a decrease at four universities, an increase at one university, and no change at the remaining two universities in students' interest in working in some area of international veterinary medicine. However, none of the differences was statistically significant.

  13. Department of Energy research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-08-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models, the execution of which is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex, and consequently it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  14. A performance model for the communication in fast multipole methods on high-performance computing platforms

    KAUST Repository

    Ibeid, Huda

    2016-03-04

    Exascale systems are predicted to have approximately 1 billion cores, assuming gigahertz cores. Limitations on affordable network topologies for distributed memory systems of such massive scale bring new challenges to the currently dominant parallel programming model. Currently, there are many efforts to evaluate the hardware and software bottlenecks of exascale designs. It is therefore of interest to model application performance and to understand what changes need to be made to ensure extrapolated scalability. The fast multipole method (FMM) was originally developed for accelerating N-body problems in astrophysics and molecular dynamics but has recently been extended to a wider range of problems. Its high arithmetic intensity combined with its linear complexity and asynchronous communication patterns make it a promising algorithm for exascale systems. In this paper, we discuss the challenges for FMM on current parallel computers and future exascale architectures, with a focus on internode communication. We focus on the communication part only; the efficiency of the computational kernels is beyond the scope of the present study. We develop a performance model that considers the communication patterns of the FMM and observe a good match between our model and the actual communication time on four high-performance computing (HPC) systems, when latency, bandwidth, network topology, and multicore penalties are all taken into account. To our knowledge, this is the first formal characterization of internode communication in FMM that validates the model against actual measurements of communication time. The ultimate communication model is predictive in an absolute sense; however, on complex systems, this objective is often out of reach or of a difficulty out of proportion to its benefit when there exists a simpler model that is inexpensive and sufficient to guide coding decisions leading to improved scaling. The current model provides such guidance.
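
    The paper's model accounts for latency, bandwidth, topology, and multicore penalties; as a baseline only, communication models of this kind typically build on the latency-bandwidth (postal) model T(m) = alpha + m/B, sketched below. The hardware constants and message sizes are assumed values, not the paper's measured parameters.

        # Latency-bandwidth (postal) communication model: T = alpha + bytes/bandwidth.
        ALPHA = 2e-6        # assumed per-message latency, seconds
        BANDWIDTH = 10e9    # assumed link bandwidth, bytes/second

        def postal_time(n_bytes, alpha=ALPHA, bandwidth=BANDWIDTH):
            return alpha + n_bytes / bandwidth

        # Hypothetical per-rank exchange: (message count, bytes per message) pairs,
        # e.g., local-essential-tree data sent to neighbor ranks at each level.
        exchanges = [(26, 64_000), (8, 512_000)]
        total = sum(count * postal_time(size) for count, size in exchanges)
        print("modeled communication time: %.3f ms" % (1e3 * total))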

  15. 4th International Conference on Innovations in Computer Science and Engineering

    CERN Document Server

    Sayal, Rishi; Rawat, Sandeep

    2017-01-01

    The book is a collection of high-quality peer-reviewed research papers presented at the Fourth International Conference on Innovations in Computer Science and Engineering (ICICSE 2016) held at Guru Nanak Institutions, Hyderabad, India during 22 – 23 July 2016. The book discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. Researchers from academic and industry present their original work and exchange ideas, information, techniques and applications in the field of data science and analytics, artificial intelligence and expert systems, mobility, cloud computing, network security, and emerging technologies.

  16. 2nd International Doctoral Symposium on Applied Computation and Security Systems

    CERN Document Server

    Cortesi, Agostino; Saeed, Khalid; Chaki, Nabendu

    2016-01-01

    The book contains the extended version of the works that have been presented and discussed in the Second International Doctoral Symposium on Applied Computation and Security Systems (ACSS 2015) held during May 23-25, 2015 in Kolkata, India. The symposium has been jointly organized by the AGH University of Science & Technology, Cracow, Poland; Ca’ Foscari University, Venice, Italy and University of Calcutta, India. The book is divided into volumes and presents dissertation works in the areas of Image Processing, Biometrics-based Authentication, Soft Computing, Data Mining, Next Generation Networking and Network Security, Remote Healthcare, Communications, Embedded Systems, Software Engineering and Service Engineering.

  17. 3rd International Conference on Innovations in Computer Science and Engineering

    CERN Document Server

    Sayal, Rishi; Rawat, Sandeep

    2016-01-01

    The book is a collection of high-quality peer-reviewed research papers presented at the third International Conference on Innovations in Computer Science and Engineering (ICICSE 2015) held at Guru Nanak Institutions, Hyderabad, India during 7 – 8 August 2015. The book discusses a wide variety of industrial, engineering and scientific applications of the emerging techniques. Researchers from academic and industry present their original work and exchange ideas, information, techniques and applications in the field of Communication, Computing, and Data Science and Analytics.

  18. 3rd International Conference on Computer Science, Applied Mathematics and Applications

    CERN Document Server

    Nguyen, Ngoc; Do, Tien

    2015-01-01

    This volume contains the extended versions of papers presented at the 3rd International Conference on Computer Science, Applied Mathematics and Applications (ICCSAMA 2015), held on 11-13 May 2015 in Metz, France. The book contains five parts: (1) mathematical programming and optimization: theory, methods and software; (2) operational research and decision making; (3) machine learning, data security, and bioinformatics; (4) knowledge information systems; (5) software engineering. All chapters in the book discuss theoretical and algorithmic as well as practical issues connected with computational methods and optimization methods for knowledge engineering and machine learning techniques.

  19. First International Symposium on Applied Computing and Information Technology (ACIT 2013)

    CERN Document Server

    Applied Computing and Information Technology

    2014-01-01

    This book presents the selected results of the 1st International Symposium on Applied Computing and Information Technology (ACIT 2013), held on August 31 – September 4, 2013 in Matsue City, Japan, which brought together researchers, scientists, engineers, industry practitioners, and students to discuss all aspects of Applied Computing and Information Technology and its practical challenges. This book includes the best 12 papers presented at the conference, which were chosen based on review scores submitted by members of the program committee and underwent further rigorous rounds of review.

  20. Proceedings of the international conference on advances in computer and communication technology

    International Nuclear Information System (INIS)

    Bakal, J.W.; Kunte, A.S.; Walinjkar, P.B.; Karnani, N.K.

    2012-02-01

    A nation's development is coupled with the advancement and adoption of new technologies. During the past decade, advancements in computer and communication technologies have grown manifold. For the growth of any country it is necessary to keep pace with the latest innovations in technology. The International Conference on Advances in Computer and Communication Technology, organised by the Institution of Electronics and Telecommunication Engineers, Mumbai Centre, is an attempt to provide a platform for scientists, engineering students, educators and experts to share their knowledge and discuss their efforts in the field of R and D. The papers relevant to INIS are indexed separately