WorldWideScience

Sample records for high computational demands

  1. Ubiquitous Green Computing Techniques for High Demand Applications in Smart Environments

    Directory of Open Access Journals (Sweden)

    Jose M. Moya

    2012-08-01

    Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, impose constantly increasing computational demands in order to process data and offer services to users. The nature of these applications implies the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities are the ones presenting the higher economic and environmental impact due to their very high power consumption; this latter problem has so far been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. Although seemingly non-optimal, these allocation policies reduce the energy consumed by the whole infrastructure and the total execution time.

  2. Ubiquitous green computing techniques for high demand applications in Smart environments.

    Science.gov (United States)

    Zapater, Marina; Sanchez, Cesar; Ayala, Jose L; Moya, Jose M; Risco-Martín, José L

    2012-01-01

    Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, impose constantly increasing computational demands in order to process data and offer services to users. The nature of these applications implies the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities are the ones presenting the higher economic and environmental impact due to their very high power consumption; this latter problem has so far been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. Although seemingly non-optimal, these allocation policies reduce the energy consumed by the whole infrastructure and the total execution time.

  3. Architectural Considerations for Highly Scalable Computing to Support On-demand Video Analytics

    Science.gov (United States)

    2017-04-19

    …research were used to implement a distributed on-demand video analytics system that was prototyped for the use of forensics investigators in law enforcement. The system was tested in the wild using video files as well as a commercial Video Management System supporting more than 100 surveillance … Keywords: …demand video intelligence; intelligent video system; video analytics platform. I. INTRODUCTION: Video analytics systems have been of tremendous interest …

  4. Future demands highly integrated solutions

    Energy Technology Data Exchange (ETDEWEB)

    Mangler, Andreas [Rutronik Elektronische Bauelemente GmbH, Ispringen (Germany). Strategic Marketing

    2010-07-01

    The future energy supply, with a high number of decentralized power plants, depends on the use of innovative system technology. This is a precondition for well-functioning grid and power management across all voltage levels. (orig.)

  5. Conceptual Framework and Computational Research of Hierarchical Residential Household Water Demand

    Directory of Open Access Journals (Sweden)

    Baodeng Hou

    2018-05-01

    Although household water consumption does not account for a large proportion of total water consumption amidst socioeconomic development, it has shown a steadily increasing trend due to population growth and improved urbanization standards. As such, mastering the mechanisms of household water demand, scientifically predicting trends of household water demand, and implementing reasonable control measures are key focuses of current urban water management. Based on the categorization and characteristic analysis of household water, this paper used Maslow’s Hierarchy of Needs to establish a level-and-grade theory of household water demand, whereby household water is classified into three levels (rigid water demand, flexible water demand, and luxury water demand) and three grades (basic water demand, reasonable water demand, and representational water demand). An in-depth analysis was then carried out on the factors that influence the computation of household water demand, whereby equations for different household water categories were established and computations for different levels of household water were proposed. Finally, observational experiments on household water consumption were designed, and observation and simulation computations were performed on three typical households in order to verify the scientific soundness and rationality of the household water demand computation. The research findings contribute to the enhancement and development of water demand prediction theories, and they are of high theoretical and practical significance for scientifically predicting future household water demand and fine-tuning the management of urban water resources.
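
    The abstract mentions per-category equations but does not give them; the sketch below only illustrates, in Python, how a household demand figure could be composed from the three levels named above. All quotas and modulating factors are hypothetical placeholders, not values from the paper.

        # Hypothetical per-capita daily quotas (liters/person/day) for the three
        # demand levels named in the abstract; real coefficients would come from
        # the paper's observational experiments, not from this sketch.
        QUOTAS = {
            "rigid":    85.0,   # drinking, cooking, basic hygiene
            "flexible": 45.0,   # laundry, cleaning, garden watering
            "luxury":   20.0,   # pools, car washing, decorative use
        }

        def household_demand(members: int, income_factor: float = 1.0,
                             appliance_factor: float = 1.0) -> dict:
            """Compose a household's daily water demand (liters) by level.

            Rigid demand scales only with household size; flexible and luxury
            demand are additionally modulated by illustrative socio-economic
            factors (appliance ownership, income).
            """
            rigid = members * QUOTAS["rigid"]
            flexible = members * QUOTAS["flexible"] * appliance_factor
            luxury = members * QUOTAS["luxury"] * income_factor
            return {"rigid": rigid, "flexible": flexible, "luxury": luxury,
                    "total": rigid + flexible + luxury}

        print(household_demand(members=3, income_factor=1.2, appliance_factor=0.9))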

  6. Water demand forecasting: review of soft computing methods.

    Science.gov (United States)

    Ghalehkhondabi, Iman; Ardjmand, Ehsan; Young, William A; Weckman, Gary R

    2017-07-01

    Demand forecasting plays a vital role in resource management for governments and private companies. Considering the scarcity of water and its inherent constraints, demand management and forecasting in this domain are critically important. Several soft computing techniques have been developed over the last few decades for water demand forecasting. This study focuses on soft computing methods of water consumption forecasting published between 2005 and 2015. These methods include artificial neural networks (ANNs), fuzzy and neuro-fuzzy models, support vector machines, metaheuristics, and system dynamics. Furthermore, while ANNs have been superior in many short-term forecasting cases, it is still very difficult to pick a single method as the overall best. According to the literature, various methods and their hybrids are applied to water demand forecasting. However, it seems soft computing has a lot more to contribute to water demand forecasting. These contribution areas include, but are not limited to, various ANN architectures, unsupervised methods, deep learning, various metaheuristics, and ensemble methods. Moreover, it is found that soft computing methods are mainly used for short-term demand forecasting.
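
    As a generic illustration of one of the reviewed method families (ANNs) applied to short-term forecasting, the sketch below fits a small multilayer perceptron on a synthetic daily demand series using lagged observations as inputs. It is not any specific model from the review; the data, lag depth and network size are arbitrary.

        # Minimal ANN (multilayer perceptron) forecasting next-day demand from the
        # previous week of observations. The series is synthetic; a real study
        # would use metered consumption and exogenous drivers (weather, calendar).
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        days = np.arange(3 * 365)
        demand = 100 + 15 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 3, days.size)

        LAGS = 7  # previous week of observations as inputs
        X = np.column_stack([demand[i:i - LAGS] for i in range(LAGS)])
        y = demand[LAGS:]

        split = -90  # hold out the last 90 days
        model = MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000, random_state=0)
        model.fit(X[:split], y[:split])

        pred = model.predict(X[split:])
        mape = np.mean(np.abs((y[split:] - pred) / y[split:])) * 100
        print(f"hold-out MAPE: {mape:.1f}%")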

  7. Delivering Training for Highly Demanding Information Systems

    Science.gov (United States)

    Norton, Andrew Lawrence; Coulson-Thomas, Yvette May; Coulson-Thomas, Colin Joseph; Ashurst, Colin

    2012-01-01

    Purpose: There is a lack of research covering the training requirements of organisations implementing highly demanding information systems (HDISs). The aim of this paper is to help in the understanding of appropriate training requirements for such systems. Design/methodology/approach: This research investigates the training delivery within a…

  8. Agent assisted interactive algorithm for computationally demanding multiobjective optimization problems

    OpenAIRE

    Ojalehto, Vesa; Podkopaev, Dmitry; Miettinen, Kaisa

    2015-01-01

    We generalize the applicability of interactive methods for solving computationally demanding, that is, time-consuming, multiobjective optimization problems. For this purpose we propose a new agent assisted interactive algorithm. It employs a computationally inexpensive surrogate problem and four different agents that intelligently update the surrogate based on the preferences specified by a decision maker. In this way, we decrease the waiting times imposed on the decision maker du...

  9. Lightweight on-demand computing with Elasticluster and Nordugrid ARC

    CERN Document Server

    Pedersen, Maiken; The ATLAS collaboration; Filipcic, Andrej

    2018-01-01

    The cloud computing paradigm allows scientists to elastically grow or shrink computing resources as requirements demand, so that resources only need to be paid for when necessary. The challenge of integrating cloud computing into distributed computing frameworks used by HEP experiments has led to many different solutions in the past years, however none of these solutions offer a complete, fully integrated cloud resource out of the box. This paper describes how to offer such a resource using stripped-down minimal versions of existing distributed computing software components combined with off-the-shelf cloud tools. The basis of the cloud resource is Elasticluster, and the glue to join to the HEP computing infrastructure is provided by the NorduGrid ARC middleware and the ARC Control Tower. These latter two components are stripped down to bare minimum edge services, removing the need for administering complex grid middleware, yet still provide the complete job and data management required to fully exploit the c...

  10. Multi-Locality Based Local and Symbiotic Computing for Interactively fast On-Demand Weather Forecasting for Small Regions, Short Durations, and Very High-Resolutions

    OpenAIRE

    Fjukstad, Bård

    2014-01-01

    Papers 1, 3 and 4 are not available in Munin: 1: Bård Fjukstad, Tor-Magne Stien Hagen, Daniel Stødle, Phuong Hoai Ha, John Markus Bjørndalen, and Otto Anshus: ‘Interactive Weather Simulation and Visualization on a Display Wall with Many-Core Compute Nodes’, in K. Jónasson (ed.): PARA 2010, Part I, LNCS 7133, pp. 142–151, 2012, © Springer-Verlag Berlin Heidelberg 3: Bård Fjukstad, John Markus Bjørndalen and Otto Anshus: ‘Accurate Weather Forecasting Through Locality Based Collaborative Computi...

  11. High energy physics and cloud computing

    International Nuclear Information System (INIS)

    Cheng Yaodong; Liu Baoxu; Sun Gongxing; Chen Gang

    2011-01-01

    High Energy Physics (HEP) has been a strong promoter of computing technology, for example the WWW (World Wide Web) and grid computing. In the new era of cloud computing, HEP still has a strong demand, and major international high energy physics laboratories have launched a number of projects to research cloud computing technologies and applications. This paper describes the current developments in cloud computing and its applications in high energy physics. Some ongoing projects in the institutes of high energy physics, Chinese Academy of Sciences, including cloud storage, virtual computing clusters, and the BESIII elastic cloud, are also described briefly. (authors)

  12. Design of massively parallel hardware multi-processors for highly-demanding embedded applications

    NARCIS (Netherlands)

    Jozwiak, L.; Jan, Y.

    2013-01-01

    Many new embedded applications require complex computations to be performed to tight schedules, while at the same time demanding low energy consumption and low cost. For implementation of these highly-demanding applications, highly-optimized application-specific multi-processor system-on-a-chip

  13. Computing in high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Watase, Yoshiyuki

    1991-09-15

    The increasingly important role played by computing and computers in high energy physics is displayed in the 'Computing in High Energy Physics' series of conferences, bringing together experts in different aspects of computing - physicists, computer scientists, and vendors.

  14. Balancing exploration, uncertainty and computational demands in many objective reservoir optimization

    Science.gov (United States)

    Zatarain Salazar, Jazmin; Reed, Patrick M.; Quinn, Julianne D.; Giuliani, Matteo; Castelletti, Andrea

    2017-11-01

    Reservoir operations are central to our ability to manage river basin systems serving conflicting multi-sectoral demands under increasingly uncertain futures. These challenges motivate the need for new solution strategies capable of effectively and efficiently discovering the multi-sectoral tradeoffs that are inherent to alternative reservoir operation policies. Evolutionary many-objective direct policy search (EMODPS) is gaining importance in this context due to its capability of addressing multiple objectives and its flexibility in incorporating multiple sources of uncertainties. This simulation-optimization framework has high potential for addressing the complexities of water resources management, and it can benefit from current advances in parallel computing and meta-heuristics. This study contributes a diagnostic assessment of state-of-the-art parallel strategies for the auto-adaptive Borg Multi Objective Evolutionary Algorithm (MOEA) to support EMODPS. Our analysis focuses on the Lower Susquehanna River Basin (LSRB) system where multiple sectoral demands from hydropower production, urban water supply, recreation and environmental flows need to be balanced. Using EMODPS with different parallel configurations of the Borg MOEA, we optimize operating policies over different size ensembles of synthetic streamflows and evaporation rates. As we increase the ensemble size, we increase the statistical fidelity of our objective function evaluations at the cost of higher computational demands. This study demonstrates how to overcome the mathematical and computational barriers associated with capturing uncertainties in stochastic multiobjective reservoir control optimization, where parallel algorithmic search serves to reduce the wall-clock time in discovering high quality representations of key operational tradeoffs. Our results show that emerging self-adaptive parallelization schemes exploiting cooperative search populations are crucial. Such strategies provide a
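
    A toy illustration (not the Borg MOEA, EMODPS, or the LSRB model) of the fidelity-versus-cost trade-off the abstract describes: the expected value of an operating-policy objective is estimated over ensembles of synthetic inflows of increasing size, so the standard error of the estimate shrinks roughly as 1/sqrt(n) while simulation cost grows linearly with n. The inflow distribution and the stand-in policy are assumptions made purely for illustration.

        # Ensemble-averaged evaluation of a toy operating-policy objective.
        import numpy as np

        rng = np.random.default_rng(42)

        def policy_cost(inflow: np.ndarray) -> float:
            """Toy objective: squared deviation of releases from a demand target."""
            release = np.clip(inflow, 0.0, 80.0)      # stand-in for a release policy
            return float(np.mean((release - 70.0) ** 2))

        def evaluate(n_members: int, horizon: int = 365) -> tuple[float, float]:
            """Ensemble-average objective and its standard error for n_members."""
            costs = [policy_cost(rng.gamma(shape=2.0, scale=35.0, size=horizon))
                     for _ in range(n_members)]
            return float(np.mean(costs)), float(np.std(costs) / np.sqrt(n_members))

        for n in (10, 50, 200, 1000):
            mean, se = evaluate(n)
            print(f"ensemble size {n:5d}: objective = {mean:7.1f} +/- {se:.2f} "
                  f"(simulation cost grows linearly with n)")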

  15. Computing in high energy physics

    International Nuclear Information System (INIS)

    Watase, Yoshiyuki

    1991-01-01

    The increasingly important role played by computing and computers in high energy physics is displayed in the 'Computing in High Energy Physics' series of conferences, bringing together experts in different aspects of computing - physicists, computer scientists, and vendors

  16. The impact of object size and precision demands on fatigue during computer mouse use

    DEFF Research Database (Denmark)

    Aasa, Ulrika; Jensen, Bente Rona; Sandfeld, Jesper

    2011-01-01

    …use demands were of influence. Also, we investigated performance (number of rectangles painted), and whether perceived fatigue was paralleled by local muscle fatigue or tissue oxygenation. Ten women performed the task for three conditions (crossover design). At condition 1, rectangles were 45 × 25 mm … ratio was 1:8. The results showed increased self-reported fatigue over time, with the observed increase greater for the eyes, but no change in physiological responses. Condition 2 resulted in higher performance and increased eye fatigue. Perceived fatigue in the muscles or physiological responses did not differ between conditions. In conclusion, computer work tasks imposing high visual and motor demands, and with high performance, seemed to have an influence on eye fatigue.

  17. Demand side management scheme in smart grid with cloud computing approach using stochastic dynamic programming

    Directory of Open Access Journals (Sweden)

    S. Sofana Reka

    2016-09-01

    This paper proposes a cloud computing framework in a smart grid environment by creating a small integrated energy hub supporting real-time computing for handling large volumes of data. A stochastic programming approach is developed within the cloud computing scheme for effective demand side management (DSM) in the smart grid. Simulation results are obtained using a GUI interface and the Gurobi optimizer in Matlab in order to reduce the electricity demand by creating energy networks in a smart hub approach.

  18. Dynamic Placement of Virtual Machines with Both Deterministic and Stochastic Demands for Green Cloud Computing

    Directory of Open Access Journals (Sweden)

    Wenying Yue

    2014-01-01

    Cloud computing has become a significant commercial infrastructure offering utility-oriented IT services to users worldwide. However, data centers hosting cloud applications consume huge amounts of energy, leading to high operational cost and greenhouse gas emissions. Therefore, green cloud computing solutions are needed not only to achieve high-level service performance but also to minimize energy consumption. This paper studies the dynamic placement of virtual machines (VMs) with deterministic and stochastic demands. In order to ensure a quick response to VM requests and improve energy efficiency, a two-phase optimization strategy is proposed, in which VMs are deployed at runtime and consolidated onto servers periodically. Based on an improved multidimensional space partition model, a modified energy-efficient algorithm with balanced resource utilization (MEAGLE) and a live migration algorithm based on the basic set (LMABBS) are, respectively, developed for each phase. Experimental results show that under different stochastic demand variations, MEAGLE guarantees the availability of stochastic resources with a defined probability and reduces the number of required servers by 2.49% to 20.40% compared with the benchmark algorithms. Also, the difference between the LMABBS solution and the Gurobi solution is fairly small, but LMABBS significantly excels in computational efficiency.
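
    The abstract does not detail MEAGLE or LMABBS. As a simple stand-in for the runtime placement phase it describes, the sketch below packs VMs first-fit-decreasing on a single resource dimension, reserving mean-plus-k-sigma headroom for each VM's stochastic demand so that overload stays unlikely. Capacities, demands and k are made up for illustration.

        # First-fit-decreasing placement with a stochastic-demand headroom rule.
        # This is an illustration only, not the paper's MEAGLE/LMABBS algorithms.
        from dataclasses import dataclass, field

        K_SIGMA = 2.0            # headroom multiplier for stochastic demand
        SERVER_CAPACITY = 100.0  # single-dimension capacity (e.g. % CPU)

        @dataclass
        class VM:
            name: str
            det: float     # deterministic demand
            mu: float      # mean of stochastic demand
            sigma: float   # std. dev. of stochastic demand

            @property
            def effective(self) -> float:
                return self.det + self.mu + K_SIGMA * self.sigma

        @dataclass
        class Server:
            used: float = 0.0
            vms: list = field(default_factory=list)

        def place(vms: list) -> list:
            servers = []
            for vm in sorted(vms, key=lambda v: v.effective, reverse=True):
                target = next((s for s in servers
                               if s.used + vm.effective <= SERVER_CAPACITY), None)
                if target is None:
                    target = Server()
                    servers.append(target)
                target.used += vm.effective
                target.vms.append(vm.name)
            return servers

        if __name__ == "__main__":
            demo = [VM(f"vm{i}", det=20, mu=10, sigma=3 + i % 4) for i in range(8)]
            for i, s in enumerate(place(demo)):
                print(f"server {i}: load {s.used:.1f} -> {s.vms}")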

  19. COMPUTING THE VOCABULARY DEMANDS OF L2 READING

    Directory of Open Access Journals (Sweden)

    Tom Cobb

    2007-02-01

    Linguistic computing can make two important contributions to second language (L2) reading instruction. One is to resolve longstanding research issues that are based on an insufficiency of data for the researcher, and the other is to resolve related pedagogical problems based on an insufficiency of input for the learner. The research section of the paper addresses the question of whether reading alone can give learners enough vocabulary to read. When the computer’s ability to process large amounts of both learner and linguistic data is applied to this question, it becomes clear that, for the vast majority of L2 learners, free or wide reading alone is not a sufficient source of vocabulary knowledge for reading. But computer processing also points to solutions to this problem. Through its ability to reorganize and link documents, the networked computer can increase the supply of vocabulary input that is available to the learner. The development section of the paper elaborates a principled role for computing in L2 reading pedagogy, with examples, in two broad areas: computer-based text design and computational enrichment of undesigned texts.
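
    A small Python example of the kind of lexical-coverage computation this line of research relies on: the share of a text's running words covered by the most frequent N entries of a word list. The list and sample text are tiny placeholders; real profiles use large frequency-based family lists and full-length texts.

        # Percentage of a text's tokens covered by the top-N entries of a
        # (placeholder) frequency-ranked word list.
        import re

        FREQ_LIST = ["the", "be", "to", "of", "and", "a", "in", "that", "have", "it",
                     "learner", "read", "word", "know"]          # placeholder ranking

        def coverage(text: str, band_size: int) -> float:
            """Percentage of tokens covered by the first `band_size` list entries."""
            tokens = re.findall(r"[a-z']+", text.lower())
            band = set(FREQ_LIST[:band_size])
            covered = sum(1 for t in tokens if t in band)
            return 100.0 * covered / len(tokens)

        sample = "The learner must know most of the words to read the text."
        for band in (5, 10, len(FREQ_LIST)):
            print(f"first {band:2d} list entries cover {coverage(sample, band):5.1f}% of tokens")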

  20. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  1. High-End Scientific Computing

    Science.gov (United States)

    EPA uses high-end scientific computing, geospatial services and remote sensing/imagery analysis to support EPA's mission. The Center for Environmental Computing (CEC) assists the Agency's program offices and regions to meet staff needs in these areas.

  2. Computing in high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Sarah; Devenish, Robin [Nuclear Physics Laboratory, Oxford University (United Kingdom)

    1989-07-15

    Computing in high energy physics has changed over the years from being something one did on a slide-rule, through early computers, then a necessary evil to the position today where computers permeate all aspects of the subject from control of the apparatus to theoretical lattice gauge calculations. The state of the art, as well as new trends and hopes, were reflected in this year's 'Computing In High Energy Physics' conference held in the dreamy setting of Oxford's spires. The conference aimed to give a comprehensive overview, entailing a heavy schedule of 35 plenary talks plus 48 contributed papers in two afternoons of parallel sessions. In addition to high energy physics computing, a number of papers were given by experts in computing science, in line with the conference's aim – 'to bring together high energy physicists and computer scientists'.

  3. Computing in high energy physics

    International Nuclear Information System (INIS)

    Smith, Sarah; Devenish, Robin

    1989-01-01

    Computing in high energy physics has changed over the years from being something one did on a slide-rule, through early computers, then a necessary evil to the position today where computers permeate all aspects of the subject from control of the apparatus to theoretical lattice gauge calculations. The state of the art, as well as new trends and hopes, were reflected in this year's 'Computing In High Energy Physics' conference held in the dreamy setting of Oxford's spires. The conference aimed to give a comprehensive overview, entailing a heavy schedule of 35 plenary talks plus 48 contributed papers in two afternoons of parallel sessions. In addition to high energy physics computing, a number of papers were given by experts in computing science, in line with the conference's aim – 'to bring together high energy physicists and computer scientists'

  4. High Performance Computing Multicast

    Science.gov (United States)

    2012-02-01

    …“A History of the Virtual Synchrony Replication Model,” in Replication: Theory and Practice, Charron-Bost, B., Pedone, F., and Schiper, A. (Eds.) … (abbreviation list:) … Performance Computing; IP / IPv4: Internet Protocol (version 4.0); IPMC: Internet Protocol MultiCast; LAN: Local Area Network; MCMD: Dr. Multicast; MPI …

  5. INSPIRED High School Computing Academies

    Science.gov (United States)

    Doerschuk, Peggy; Liu, Jiangjiang; Mann, Judith

    2011-01-01

    If we are to attract more women and minorities to computing we must engage students at an early age. As part of its mission to increase participation of women and underrepresented minorities in computing, the Increasing Student Participation in Research Development Program (INSPIRED) conducts computing academies for high school students. The…

  6. Computing in high energy physics

    International Nuclear Information System (INIS)

    Hertzberger, L.O.; Hoogland, W.

    1986-01-01

    This book deals with advanced computing applications in physics, and in particular in high energy physics environments. The main subjects covered are networking, vector and parallel processing, and embedded systems. Also examined are topics such as operating systems, future computer architectures and commercial computer products. The book presents solutions that are foreseen as coping, in the future, with computing problems in experimental and theoretical High Energy Physics. In the experimental environment the large amounts of data to be processed offer special problems on-line as well as off-line. For on-line data reduction, embedded special-purpose computers, which are often used for trigger applications, are applied. For off-line processing, parallel computers such as emulator farms and the cosmic cube may be employed. The analysis of these topics is therefore a main feature of this volume

  7. Dimensioning storage and computing clusters for efficient High Throughput Computing

    CERN Multimedia

    CERN. Geneva

    2012-01-01

    Scientific experiments are producing huge amounts of data, and they continue increasing the size of their datasets and the total volume of data. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of Scientific Data Centres has shifted from coping efficiently with PetaByte-scale storage to delivering quality data processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both the online (data acceptance) and the offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent bottlenecks, overloads and undesired slowness that lead to loss of CPU cycles and batch job failures. In this paper we point out relevant features for running a successful s...

  8. Effect of aging on performance, muscle activation and perceived stress during mentally demanding computer tasks

    DEFF Research Database (Denmark)

    Alkjaer, Tine; Pilegaard, Marianne; Bakke, Merete

    2005-01-01

    OBJECTIVES: This study examined the effects of age on performance, muscle activation, and perceived stress during computer tasks with different levels of mental demand. METHODS: Fifteen young and thirteen elderly women performed two computer tasks [color word test and reference task] with different levels of mental demand but similar physical demands. The performance (clicking frequency, percentage of correct answers, and response time for correct answers) and electromyography from the forearm, shoulder, and neck muscles were recorded. Visual analogue scales were used to measure the participants' perception of the stress and difficulty related to the tasks. RESULTS: Performance decreased significantly in both groups during the color word test in comparison with performance on the reference task. However, the performance reduction was more pronounced in the elderly group than in the young group…

  9. Effective Management of High-Use/High-Demand Space Using Restaurant-Style Pagers

    Science.gov (United States)

    Gonzalez, Adriana

    2012-01-01

    The library landscape is changing at a fast pace, with an increase in the demand for study space, including quiet, individualized study space; open group study space; and enclosed group study space. In large academic libraries, managing limited high-demand resources is crucial and is partially being driven by the greater emphasis on group…

  10. The world energy demand in 2007: How high oil prices impact the global energy demand? June 9, 2008

    International Nuclear Information System (INIS)

    2008-01-01

    How do high oil prices impact global energy demand? The growth of energy demand continued to accelerate in 2007 despite soaring prices, reaching 2.8% (+0.3 points compared to 2006). This evolution results from two diverging trends: a decline in energy consumption in most OECD countries, except North America, and a strong increase in emerging countries. Within the OECD, two contrasting trends can be reported that partially compensate each other: the reduction of energy consumption in Japan (-0.8%) and in Europe (-1.2%), particularly significant in the EU-15 (-1.9%), and the increase of energy consumption in North America (+2%). Globally, overall OECD consumption continued to increase slightly (+0.5%), while electricity increased faster (2.1%) and fuels remained stable. Elsewhere, energy demand growth remained very dynamic (+5% for total demand, 8% for electricity alone), driven by China (+7.3%). World oil demand increased by only 1%, but demand has focused even more on captive end uses, transport and petrochemistry. World gasoline and diesel demand increased by around 5.7% in 2007 and represented 53% of total oil products demand (51% in 2006). While gasoline and diesel consumption remained quasi-stable within OECD countries, growth has been extremely strong in the emerging countries, despite booming oil prices. There are mainly two factors explaining this evolution, in which both oil demand and oil prices increased: weak price elasticity of demand in the transport and petrochemistry sectors, and the disconnection of domestic fuel prices in major emerging countries (China, India, Latin America) from world oil market prices. Another striking point is that world crude oil and condensate production remained almost stable in 2007, hence the entire demand growth was supported by destocking. During the same period, OPEC production decreased by 1%, mainly due to the production decrease in Saudi Arabia, that is probably more

  11. Computer controlled high voltage system

    Energy Technology Data Exchange (ETDEWEB)

    Kunov, B; Georgiev, G; Dimitrov, L [and others]

    1996-12-31

    A multichannel computer controlled high-voltage power supply system is developed. The basic technical parameters of the system are: output voltage 100-3000 V; output current 0-3 mA; maximum number of channels in one crate 78. 3 refs.

  12. ICT Solutions for Highly-Customized Water Demand Management Strategies

    Science.gov (United States)

    Giuliani, M.; Cominola, A.; Castelletti, A.; Fraternali, P.; Guardiola, J.; Barba, J.; Pulido-Velazquez, M.; Rizzoli, A. E.

    2016-12-01

    The recent deployment of smart metering networks is opening new opportunities for advancing the design of residential water demand management strategies (WDMS) relying on an improved understanding of water consumers' behaviors. Recent applications showed that retrieving information on users' consumption behaviors, along with their explanatory and/or causal factors, is key to spotting potential areas where water-saving efforts should be targeted and to designing user-tailored WDMS. In this study, we explore the potential of ICT-based solutions in supporting the design and implementation of highly customized WDMS. On one side, the collection of consumption data at high spatial and temporal resolutions requires big data analytics and machine learning techniques to extract typical consumption features from the metered population of water users. On the other side, ICT solutions and gamification can be used as effective means for facilitating both users' engagement and the collection of socio-psychographic user information. The latter allows interpreting and improving the extracted profiles, ultimately supporting the customization of WDMS, such as awareness campaigns or personalized recommendations. Our approach is implemented in the SmartH2O platform and demonstrated in a pilot application in Valencia, Spain. Results show how the analysis of the smart-metered consumption data, combined with the information retrieved from an ICT gamified web user portal, successfully identifies the typical consumption profiles of the metered users and supports the design of alternative WDMS targeting the different user profiles.

  13. A High Performance VLSI Computer Architecture For Computer Graphics

    Science.gov (United States)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

    A VLSI computer architecture, consisting of multiple processors, is presented in this paper to satisfy modern computer graphics demands, e.g. high resolution, realistic animation, and real-time display. All processors share a global memory which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy the specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e. object domain and space domain, to fully utilize the data-independency characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With the current high density interconnection (MI) technology, it is feasible to implement a 64-processor system to achieve 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  14. Audiovisual integration of speech falters under high attention demands.

    Science.gov (United States)

    Alsius, Agnès; Navarra, Jordi; Campbell, Ruth; Soto-Faraco, Salvador

    2005-05-10

    One of the most commonly cited examples of human multisensory integration occurs during exposure to natural speech, when the vocal and the visual aspects of the signal are integrated in a unitary percept. Audiovisual association of facial gestures and vocal sounds has been demonstrated in nonhuman primates and in prelinguistic children, arguing for a general basis for this capacity. One critical question, however, concerns the role of attention in such multisensory integration. Although both behavioral and neurophysiological studies have converged on a preattentive conceptualization of audiovisual speech integration, this mechanism has rarely been measured under conditions of high attentional load, when the observers' attention resources are depleted. We tested the extent to which audiovisual integration was modulated by the amount of available attentional resources by measuring the observers' susceptibility to the classic McGurk illusion in a dual-task paradigm. The proportion of visually influenced responses was severely, and selectively, reduced if participants were concurrently performing an unrelated visual or auditory task. In contrast with the assumption that crossmodal speech integration is automatic, our results suggest that these multisensory binding processes are subject to attentional demands.

  15. Dimensioning storage and computing clusters for efficient high throughput computing

    International Nuclear Information System (INIS)

    Accion, E; Bria, A; Bernabeu, G; Caubet, M; Delfino, M; Espinal, X; Merino, G; Lopez, F; Martinez, F; Planas, E

    2012-01-01

    Scientific experiments are producing huge amounts of data, and the size of their datasets and total volume of data continue increasing. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of scientific data centers has shifted from efficiently coping with PetaByte-scale storage to delivering quality data processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both online (data acceptance) and offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent bottlenecks, overloads and undesired slowness that lead to loss of CPU cycles and batch job failures. In this paper we point out relevant features for running a successful data storage and processing service in an intensive HTC environment.
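
    A back-of-envelope check of the sort of dimensioning question raised here: how much sustained storage bandwidth a batch farm needs so that CPUs are not left idle waiting for input. All figures are invented for illustration and are not taken from the paper.

        # Aggregate storage bandwidth needed so that a batch farm is not starved.
        job_cpu_seconds = 4 * 3600        # average CPU time per job
        input_gb_per_job = 20.0           # average input data read per job
        cores = 10_000                    # size of the computing cluster

        jobs_in_flight = cores                              # one single-core job per core
        farm_throughput = jobs_in_flight / job_cpu_seconds  # jobs completed per second
        required_bandwidth = farm_throughput * input_gb_per_job  # GB/s from storage

        print(f"jobs/hour        : {farm_throughput * 3600:,.0f}")
        print(f"storage bandwidth: {required_bandwidth:.1f} GB/s sustained "
              f"to avoid idle CPU cycles")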

  16. High resolution heat atlases for demand and supply mapping

    Directory of Open Access Journals (Sweden)

    Bernd Möller

    2014-02-01

    Full Text Available Significant reductions of heat demand, low-carbon and renewable energy sources, and district heating are key elements in 100% renewable energy systems. Appraisal of district heating along with energy efficient buildings and individual heat supply requires a geographical representation of heat demand, energy efficiency and energy supply. The present paper describes a Heat Atlas built around a spatial database using geographical information systems (GIS. The present atlas allows for per-building calculations of potentials and costs of energy savings, connectivity to existing district heat, and current heat supply and demand. For the entire building mass a conclusive link is established between the built environment and its heat supply. The expansion of district heating; the interconnection of distributed district heating systems; or the question whether to invest in ultra-efficient buildings with individual supply, or in collective heating using renewable energy for heating the current building stock, can be based on improved data.

  17. High resolution heat atlases for demand and supply mapping

    DEFF Research Database (Denmark)

    Möller, Bernd; Nielsen, Steffen

    2014-01-01

    Significant reductions of heat demand, low-carbon and renewable energy sources, and district heating are key elements in 100% renewable energy systems. Appraisal of district heating along with energy efficient buildings and individual heat supply requires a geographical representation of heat demand, energy efficiency and energy supply. The present paper describes a Heat Atlas built around a spatial database using geographical information systems (GIS). The present atlas allows for per-building calculations of potentials and costs of energy savings, connectivity to existing district heat, and current heat supply and demand. For the entire building mass a conclusive link is established between the built environment and its heat supply. The expansion of district heating; the interconnection of distributed district heating systems; or the question whether to invest in ultra-efficient buildings…

  18. High speed computer assisted tomography

    International Nuclear Information System (INIS)

    Maydan, D.; Shepp, L.A.

    1980-01-01

    X-ray generation and detection apparatus for use in a computer assisted tomography system which permits relatively high speed scanning. A large x-ray tube having a circular anode (3) surrounds the patient area. A movable electron gun (8) orbits adjacent to the anode. The anode directs into the patient area x-rays which are delimited into a fan beam by a pair of collimating rings (21). After passing through the patient, x-rays are detected by an array (22) of movable detectors. Detector subarrays (23) are synchronously movable out of the x-ray plane to permit the passage of the fan beam

  19. Analysis of Future Vehicle Energy Demand in China Based on a Gompertz Function Method and Computable General Equilibrium Model

    Directory of Open Access Journals (Sweden)

    Tian Wu

    2014-11-01

    This paper presents a model for the projection of Chinese vehicle stocks and road vehicle energy demand through 2050 based on low-, medium-, and high-growth scenarios. To derive a gross domestic product (GDP)-dependent Gompertz function, Chinese GDP is estimated using a recursive dynamic Computable General Equilibrium (CGE) model. The Gompertz function is estimated using historical data on vehicle development trends in North America, the Pacific Rim and Europe to overcome the problem of insufficient long-running data on Chinese vehicle ownership. Results indicate that the projected vehicle stock for 2050 is 300, 455 and 463 million for the low-, medium-, and high-growth scenarios, respectively. Furthermore, the growth in China’s vehicle stock will increase beyond the inflection point of the Gompertz curve by 2020, but will not reach the saturation point during the period 2014–2050. Of the major road vehicle categories, cars are the largest energy consumers, followed by trucks and buses. Growth in Chinese vehicle demand is primarily determined by per capita GDP. Vehicle saturation levels solely influence the shape of the Gompertz curve, and population growth weakly affects vehicle demand. The projected total energy consumption of road vehicles in 2050 is 380, 575 and 586 million tonnes of oil equivalent for the respective scenarios.
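
    The abstract names a GDP-dependent Gompertz function but does not give its form or parameter values; for reference, the formulation commonly used in the vehicle-ownership literature (and presumably the one estimated here) is, with generic symbols:

        V_t = \gamma \, \exp\!\left( \alpha \, e^{\beta g_t} \right), \qquad \alpha < 0, \; \beta < 0,

    where V_t is vehicle ownership per capita in year t, g_t is per-capita GDP (here supplied by the CGE model), \gamma is the saturation level of ownership, and \alpha, \beta are negative shape parameters estimated from the North American, Pacific Rim and European historical data. The total vehicle stock is then V_t multiplied by the projected population.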

  20. High Speed Mobility Through On-Demand Aviation

    Science.gov (United States)

    Moore, Mark D.; Goodrich, Ken; Viken, Jeff; Smith, Jeremy; Fredericks, Bill; Trani, Toni; Barraclough, Jonathan; German, Brian; Patterson, Michael

    2013-01-01

    …automobiles. Community Noise: Hub and smaller GA airports are facing increasing noise restrictions, and while commercial airliners have dramatically decreased their community noise footprint over the past 30 years, GA aircraft noise has essentially remained the same and, moreover, is located in closer proximity to neighborhoods and businesses. Operating Costs: GA operating costs have risen dramatically due to average fuel costs of over $6 per gallon, which has constrained the market over the past decade and resulted in more than 50% lower sales and 35% fewer yearly operations. Infusion of autonomy and electric propulsion technologies can accomplish not only a transformation of the GA market, but also provide a technology enablement bridge for both larger aircraft and the emerging civil Unmanned Aerial Systems (UAS) markets. The NASA Advanced General Aviation Transport Experiments (AGATE) project successfully used a similar approach to enable the introduction of primary composite structures and flat panel displays in the 1990s, establishing both the technology and certification standardization to permit quick adoption through partnerships with industry, academia, and the Federal Aviation Administration (FAA). Regional and airliner markets are experiencing constant pressure to achieve decreasing levels of community emissions and noise, while lowering operating costs and improving safety. But to what degree can these new technology frontiers impact aircraft safety, the environment, operations, cost, and performance? Are the benefits transformational enough to fundamentally alter aircraft competitiveness and productivity to permit much greater aviation use for high-speed and On-Demand Mobility (ODM)? These questions were asked in a Zip aviation system study named after the Zip Car, an emerging car-sharing business model. Zip Aviation investigates the potential to enable new emergent markets for aviation that offer "more flexibility than the existing transportation solutions

  1. An Interactive Computer Tool for Teaching About Desalination and Managing Water Demand in the US

    Science.gov (United States)

    Ziolkowska, J. R.; Reyes, R.

    2016-12-01

    This paper presents an interactive tool to geospatially and temporally analyze desalination developments and trends in the US in the time span 1950-2013, its current contribution to satisfying water demands, and its future potential. The computer tool is open access and can be used by any user with an Internet connection, thus facilitating interactive learning about water resources. The tool can also be used by stakeholders and policy makers for decision-making support and for designing sustainable water management strategies. Desalination technology has been acknowledged as a solution for sustainable water demand management stemming from many sectors, including municipalities, industry, agriculture, power generation, and other users. Desalination has been applied successfully in the US and many countries around the world since the 1950s. As of 2013, around 1,336 desalination plants were operating in the US alone, with a daily production capacity of 2 BGD (billion gallons per day) (GWI, 2013). Despite a steady increase in the number of new desalination plants and growing production capacity, in many regions the costs of desalination are still prohibitive. At the same time, the technology offers a tremendous potential for ‘enormous supply expansion that exceeds all likely demands’ (Chowdhury et al., 2013). The model and tool are based on data from Global Water Intelligence (GWI, 2013). The analysis shows that more than 90% of all the plants in the US are small-scale plants with a capacity below 4.31 MGD. Most of the plants (and especially the larger plants) are located on the US East Coast, as well as in California, Texas, Oklahoma, and Florida. The models and the tool provide information about the economic feasibility of potential new desalination plants based on access to feed water, energy sources, water demand, and the experiences of other plants in the region.

  2. On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers

    Science.gov (United States)

    Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.

    2017-10-01

    This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters over institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute's computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side with virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.

  3. Computer simulation at high pressure

    International Nuclear Information System (INIS)

    Alder, B.J.

    1977-11-01

    The use of either the Monte Carlo or molecular dynamics method to generate equation-of-state data for various materials at high pressure is discussed. Particular emphasis is given to phase diagrams, such as the generation of various types of critical lines for mixtures, melting, structural and electronic transitions in solids, and two-phase ionic fluid systems of astrophysical interest, as well as a brief aside on possible eutectic behavior in the interior of the earth. The application of the molecular dynamics method to predict transport coefficients and the neutron scattering function is then discussed with a view to what special features high pressure brings out. Lastly, an analysis by these computational methods of the measured intensity and frequency spectrum of depolarized light, and also of the deviation of dielectric measurements from the constancy of the Clausius-Mossotti function, is given that leads to predictions of how the electronic structure of an atom distorts with pressure.

  4. A compound Poisson EOQ model for perishable items with intermittent high and low demand periods

    NARCIS (Netherlands)

    Boxma, O.J.; Perry, D.; Stadje, W.; Zacks, S.

    2012-01-01

    We consider a stochastic EOQ-type model, with demand operating in a two-state random environment. This environment alternates between exponentially distributed periods of high demand and generally distributed periods of low demand. The inventory level starts at some level q, and decreases according
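
    The abstract is truncated before the inventory dynamics are fully specified. Purely as an illustration of the demand environment it describes, the sketch below simulates demand arriving as a compound Poisson process whose rate alternates between exponentially distributed high-demand periods and (here, arbitrarily, uniformly distributed) low-demand periods. Rates, jump-size distribution and the starting level q are illustrative, and the paper's EOQ control policy is not reproduced.

        # Compound Poisson demand in an alternating high/low random environment.
        import numpy as np

        rng = np.random.default_rng(1)
        RATE = {"high": 5.0, "low": 0.5}         # Poisson demand rates per regime
        MEAN_HIGH, LOW_RANGE = 2.0, (1.0, 6.0)   # regime-length distributions

        def simulate(q0: float = 50.0, horizon: float = 100.0) -> float:
            """Return the inventory level (may go negative) after `horizon` time units."""
            level, t, regime = q0, 0.0, "high"
            while t < horizon:
                length = (rng.exponential(MEAN_HIGH) if regime == "high"
                          else rng.uniform(*LOW_RANGE))
                length = min(length, horizon - t)
                n_demands = rng.poisson(RATE[regime] * length)
                level -= rng.exponential(1.0, n_demands).sum()   # i.i.d. demand sizes
                t += length
                regime = "low" if regime == "high" else "high"
            return level

        print(f"inventory after horizon: {simulate():.1f}")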

  5. High Performance Spaceflight Computing (HPSC)

    Data.gov (United States)

    National Aeronautics and Space Administration — Space-based computing has not kept up with the needs of current and future NASA missions. We are developing a next-generation flight computing system that addresses...

  6. High energy physics and grid computing

    International Nuclear Information System (INIS)

    Yu Chuansong

    2004-01-01

    The status of the new-generation computing environment for high energy physics experiments is briefly introduced in this paper. The development of high energy physics experiments and their new computing requirements are presented. The blueprint of the new-generation computing environment for the LHC experiments, the history of Grid computing, the R and D status of high energy physics grid computing technology, and the network bandwidth needed by the high energy physics grid and its development are described. Finally, grid computing research in the Chinese high energy physics community is introduced. (authors)

  7. High performance computing in Windows Azure cloud

    OpenAIRE

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility and lower maintenance costs have essentially contributed to the growing popularity of cloud computing in all spheres of life, especially in business. In fact, cloud computing offers even more than this. With the use of virtual computing clusters, a runtime environment for high performance computing can also be efficiently implemented in a cloud. There are many advantages but also some disadvantages of cloud computing, some ...

  8. Electricity demand profile with high penetration of heat pumps in Nordic area

    DEFF Research Database (Denmark)

    Liu, Zhaoxi; Wu, Qiuwei; Nielsen, Arne Hejde

    2013-01-01

    This paper presents the heat pump (HP) demand profile under high HP penetration in the Nordic area, in the context of achieving a carbon-neutral power system. The calculation method in the European Standard EN 14825 was used to estimate the HP electricity demand profile. The study results show that there will be high power demand from HPs and that the selection of supplemental heating for heat pumps has a big impact on the peak electrical heating load. The study gives an estimate of the scale of the electricity demand with high penetration of heat pumps in the Nordic area.

  9. GPU-based high-performance computing for radiation therapy

    International Nuclear Information System (INIS)

    Jia, Xun; Jiang, Steve B; Ziegenhein, Peter

    2014-01-01

    Recent developments in radiation therapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid developments. A tremendous amount of study has been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we will first give a brief introduction to the GPU hardware structure and programming model. We will then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms will also be presented. (topical review)

  10. The effect of preferred music on mood and performance in a high-cognitive demand occupation.

    Science.gov (United States)

    Lesiuk, Teresa

    2010-01-01

    Mild positive affect has been shown in the psychological literature to improve cognitive skills of creative problem-solving and systematic thinking. Individual preferred music listening offers opportunity for improved positive affect. The purpose of this study was to examine the effect of preferred music listening on state-mood and cognitive performance in a high-cognitive demand occupation. Twenty-four professional computer information systems developers (CISD) from a North American IT company participated in a 3-week study with a music/no music/music weekly design. During the music weeks, participants listened to their preferred music "when they wanted, as they wanted." Self-reports of State Positive Affect, State Negative Affect, and Cognitive Performance were measured throughout the 3 weeks. Results indicate a statistically significant improvement in both state-mood and cognitive performance scores. "High-cognitive demand" is a relative term given that challenges presented to individuals may occur on a cognitive continuum from need for focus and selective attention to systematic analysis and creative problem-solving. The findings and recommendations have important implications for music therapists in their knowledge of the effect of music on emotion and cognition, and, as well, have important implications for music therapy consultation to organizations.

  11. More customers embrace Dell standards-based computing for even the most demanding applications-Growing demand among HPCC customers for Dell in Europe

    CERN Multimedia

    2003-01-01

    Dell Computers has signed agreements with several high-profile customers in Europe to provide high performance computing cluster (HPCC) solutions. One customer is a consortium of 4 universities involved in research at the Collider Detector Facility at Fermilab (1 page).

  12. Demands made on high-purity copper for special purposes

    International Nuclear Information System (INIS)

    Roettges, D.

    1977-01-01

    The properties (electrical resistivity, residual impurities) of high-purity copper produced on a technical scale are reported as well as its practical applications. The paper discusses a high-oxygen copper (SV) with low residual resistivity at low temperatures and an oxygen-free (hydrogen-stable) copper (BE electronic) with low gas content. The SV quality has been specially developed for use as stabilizer in superconductors while the BE quality is used in high and ultrahigh vacuum. (GSC) [de]

  13. Demand Response in Low Voltage Distribution Networks with High PV Penetration

    DEFF Research Database (Denmark)

    Nainar, Karthikeyan; Pokhrel, Basanta Raj; Pillai, Jayakrishnan Radhakrishna

    2017-01-01

    In this paper, the application of demand response to accommodate maximum PV power in a low-voltage distribution network is discussed. A centralized control based on a model predictive control method is proposed for the computation of optimal demand response on an hourly basis. The proposed method uses PV … the required flexibility from the electricity market through an aggregator. The optimum demand response enables consumption of maximum renewable energy within the network constraints. Simulation studies are conducted using Matlab and DigSilent PowerFactory software on a Danish low-voltage distribution system…
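
    A hedged sketch of the kind of hourly optimization such a centralized controller might solve: a linear program that places a fixed amount of flexible energy so that local PV consumption is maximized while a feeder limit is respected. The profiles, limits, and the scipy-based formulation are assumptions for illustration, not the paper's controller; in a model predictive control setting this solve would be repeated each hour over a receding horizon with updated PV forecasts.

        # LP: maximize locally consumed PV by scheduling flexible load, subject to
        # a feeder capacity limit and a fixed daily flexible-energy requirement.
        import numpy as np
        from scipy.optimize import linprog

        T = 24
        hours = np.arange(T)
        pv = np.maximum(0.0, 60 * np.sin(np.pi * (hours - 6) / 12))   # kW, daylight bell
        base = 30 + 10 * np.cos(2 * np.pi * (hours - 19) / 24)        # kW, evening peak
        FEEDER_LIMIT = 70.0       # kW
        FLEX_ENERGY = 120.0       # kWh of flexible demand to place during the day
        FLEX_MAX = 15.0           # kW cap on flexible power in any hour

        # Decision variables: x[0:T] flexible load, s[T:2T] PV consumed locally.
        c = np.concatenate([np.zeros(T), -np.ones(T)])        # maximize sum(s)
        A_ub = np.hstack([-np.eye(T), np.eye(T)])             # s_t - x_t <= base_t
        b_ub = base
        A_eq = np.concatenate([np.ones(T), np.zeros(T)])[None, :]
        b_eq = [FLEX_ENERGY]                                  # place all flexible energy
        bounds = ([(0.0, min(FLEX_MAX, FEEDER_LIMIT - b)) for b in base] +
                  [(0.0, p) for p in pv])

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds, method="highs")
        x = res.x[:T]
        print("flexible load schedule (kW):", np.round(x, 1))
        print(f"PV consumed locally: {-res.fun:.0f} kWh of {pv.sum():.0f} kWh available")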

  14. Predicting Short-Term Electricity Demand by Combining the Advantages of ARMA and XGBoost in Fog Computing Environment

    Directory of Open Access Journals (Sweden)

    Chuanbin Li

    2018-01-01

    With the rapid development of IoT, the disadvantages of the Cloud framework have been exposed, such as high latency, network congestion, and low reliability. Therefore, the Fog Computing framework has emerged, with an extended Fog Layer between the Cloud and terminals. In order to address real-time prediction of electricity demand, we propose an approach based on XGBoost and ARMA in a Fog Computing environment. By taking advantage of the Fog Computing framework, we first propose a prototype-based clustering algorithm to divide enterprise users into several categories based on their total electricity consumption; we then propose a model selection approach by analyzing users’ historical records of electricity consumption and identifying the most important features. Generally speaking, if the historical records pass the stationarity and white-noise tests, ARMA is used to model the user’s electricity consumption as a time series; otherwise, if the records do not pass the tests and some discrete features, such as weather and whether it is a weekend, are the most important, XGBoost is used. The experimental results show that our proposed approach, which combines the advantages of ARMA and XGBoost, is more accurate than the classical models.
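
    A sketch of the selection rule as read from the abstract, under stated assumptions: statsmodels and xgboost as libraries, 0.05 significance thresholds, and "passes the stationarity and white-noise tests" interpreted as "stationary and showing significant autocorrelation". It is not the authors' implementation.

        # Choose ARMA when the history is stationary and autocorrelated (not white
        # noise); otherwise fall back to XGBoost on calendar-style features.
        # Interpretation of the test criteria is an assumption, see the lead-in.
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.stattools import adfuller
        from statsmodels.stats.diagnostic import acorr_ljungbox
        from statsmodels.tsa.arima.model import ARIMA
        from xgboost import XGBRegressor

        def choose_and_fit(history: pd.Series, features: pd.DataFrame):
            """Return (name, fitted model) following the ARMA-vs-XGBoost rule."""
            stationary = adfuller(history)[1] < 0.05                   # ADF p-value
            not_white = acorr_ljungbox(history, lags=[10])["lb_pvalue"].iloc[0] < 0.05
            if stationary and not_white:
                return "ARMA", ARIMA(history, order=(2, 0, 1)).fit()
            model = XGBRegressor(n_estimators=200, max_depth=4)
            model.fit(features, history)
            return "XGBoost", model

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            idx = pd.date_range("2018-01-01", periods=365, freq="D")
            load = pd.Series(50 + 5 * np.sin(2 * np.pi * np.arange(365) / 7)
                             + rng.normal(0, 1, 365), index=idx)
            feats = pd.DataFrame({"weekend": (idx.dayofweek >= 5).astype(int),
                                  "dayofyear": idx.dayofyear}, index=idx)
            name, fitted = choose_and_fit(load, feats)
            print("selected model:", name)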

  15. A Primer on High-Throughput Computing for Genomic Selection

    Directory of Open Access Journals (Sweden)

    Xiao-Lin Wu

    2011-02-01

    Full Text Available High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl and R, are also very useful to devise pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on the central processors, performing general purpose computation on a graphics processing unit (GPU) provides a new-generation approach to massive parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin – Madison, which can be leveraged for genomic selection, in terms of central processing unit (CPU) capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general purpose computing environments will further expand our capability to meet increasing computing demands posed by unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of
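
    As a toy illustration of the batch-processing idea above (not tied to any particular scheduler), the Python sketch below farms out one placeholder evaluation per trait to a local process pool, the same pattern that cluster middleware such as HTCondor applies across many machines; the trait names, marker counts and the evaluation itself are made up.

      # Toy high-throughput batch: run one genomic evaluation per trait in parallel,
      # instead of sequentially, to raise throughput. A worker pool stands in for a cluster.
      from concurrent.futures import ProcessPoolExecutor
      import numpy as np

      def evaluate_trait(trait: str, seed: int) -> tuple:
          # Placeholder for a real single-trait genomic prediction (e.g., GBLUP or a Bayesian model).
          rng = np.random.default_rng(seed)
          markers = rng.normal(size=(500, 2000))           # 500 candidates x 2000 SNPs (toy sizes)
          effects = rng.normal(scale=0.01, size=2000)
          gebv = markers @ effects                         # genomic estimated breeding values
          return trait, float(gebv.mean())

      if __name__ == "__main__":
          traits = ["milk_yield", "fertility", "longevity", "feed_efficiency"]
          with ProcessPoolExecutor() as pool:
              for trait, mean_gebv in pool.map(evaluate_trait, traits, range(len(traits))):
                  print(f"{trait}: mean GEBV = {mean_gebv:+.4f}")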

  16. High-Performance Computing Paradigm and Infrastructure

    CERN Document Server

    Yang, Laurence T

    2006-01-01

    With hyperthreading in Intel processors, hypertransport links in next generation AMD processors, multi-core silicon in today's high-end microprocessors from IBM and emerging grid computing, parallel and distributed computers have moved into the mainstream

  17. High energy physics computing in Japan

    International Nuclear Information System (INIS)

    Watase, Yoshiyuki

    1989-01-01

    A brief overview of the computing provision for high energy physics in Japan is presented. Most of the computing power for high energy physics is concentrated in KEK. Here there are two large scale systems: one providing a general computing service including vector processing and the other dedicated to TRISTAN experiments. Each university group has a smaller sized mainframe or VAX system to facilitate both their local computing needs and the remote use of the KEK computers through a network. The large computer system for the TRISTAN experiments is described. An overview of a prospective future large facility is also given. (orig.)

  18. On the Demand for High-beta Stocks

    DEFF Research Database (Denmark)

    Christoffersen, Susan E. K.; Simutin, Mikhail

    2017-01-01

    Prior studies have documented that pension plan sponsors often monitor a fund’s performance relative to a benchmark. We use a first-difference approach to show that in an effort to beat benchmarks, fund managers controlling large pension assets tend to increase their exposure to high-beta stocks...

  19. Demand outlook for sulfur and high-sulfur petroleum coke

    Energy Technology Data Exchange (ETDEWEB)

    Koshkarov, V.Ya.; Danil' yan, P.G.; Feotov, V.E.; Gimaev, R.N.; Koshkarova, M.E.; Sadykova, S.R.; Vodovichenko, N.S.

    1980-01-01

    The feasibility of using sulfur and high-sulfur petroleum coke fines in pyrometallurgical processes and also in the chemical and coal-tar chemical industry is examined. Results of industrial tests on briquetting fines of petroleum coke with a petroleum binder are presented. The feasibility of using the obtained briquets in shaft furnace smelting of oxidized nickel ores, production of anode stock, and also in the chemical industry are demonstrated.

  20. Meeting the energy demand of high load density areas

    International Nuclear Information System (INIS)

    Rillo, Carlos O.

    1997-01-01

    Due to the high cost of land and in some places, unavailability of land, the existing standard substation of Meralco (Manila Electric Company) can no longer be used in many places of Metro Manila. To cope with this problem, the GIS (Gas Insulated System) substation is now being resorted to. There are various schemes of developing a GIS substation, each fitted to certain particular conditions. Cost implications and design considerations were also briefly discussed. (author)

  1. Career Technical Education: Keeping Adult Learners Competitive for High-Demand Jobs

    Science.gov (United States)

    National Association of State Directors of Career Technical Education Consortium, 2011

    2011-01-01

    In today's turbulent economy, how can adult workers best position themselves to secure jobs in high-demand fields where they are more likely to remain competitive and earn more? Further, how can employers up-skill current employees so that they meet increasingly complex job demands? Research indicates that Career Technical Education (CTE) aligned…

  2. High-demand jobs: age-related diversity in work ability?

    NARCIS (Netherlands)

    Sluiter, Judith K.

    2006-01-01

    High-demand jobs include 'specific' job demands that are not preventable with state of the art ergonomics knowledge and may overburden the bodily capacities, safety or health of workers. An interesting question is whether the age of the worker is an important factor in explanations of diversity in

  3. High-performance scientific computing in the cloud

    Science.gov (United States)

    Jorissen, Kevin; Vila, Fernando; Rehr, John

    2011-03-01

    Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.

  4. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relative low power of reconfigurable hardware–in the form Field Programmable Gate Arrays (FPGAs)–in High Performance Computing (HPC) applications. It presents the latest developments in this field from applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community.  The book includes:  Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation e.g. computational fluid dynamics and seismic modeling, cryptanalysis, astronomical N-body simulation, and circuit simulation.     Seven architecture chapters which...

  5. High accuracy ion optics computing

    International Nuclear Information System (INIS)

    Amos, R.J.; Evans, G.A.; Smith, R.

    1986-01-01

    Computer simulation of focused ion beams for surface analysis of materials by SIMS, or for microfabrication by ion beam lithography, plays an important role in the design of low energy ion beam transport and optical systems. Many computer packages currently available are limited in their applications, being inaccurate or inappropriate for a number of practical purposes. This work describes an efficient and accurate computer programme which has been developed and tested for use on medium-sized machines. The programme is written in Algol 68 and models the behaviour of a beam of charged particles through an electrostatic system. A variable grid finite difference method is used with a unique data structure, to calculate the electric potential in an axially symmetric region, for arbitrary shaped boundaries. Emphasis has been placed upon finding an economic method of solving the resulting set of sparse linear equations in the calculation of the electric field and several of these are described. Applications include individual ion lenses, extraction optics for ions in surface analytical instruments and the design of columns for ion beam lithography. Computational results have been compared with analytical calculations and with some data obtained from individual einzel lenses. (author)
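
    The abstract's variable-grid finite-difference scheme is beyond a short example, but the underlying idea, relaxing the discretised Laplace equation with electrode potentials held fixed, can be shown on a uniform 2-D grid; the geometry and voltages below are hypothetical.

      # Jacobi relaxation of Laplace's equation on a uniform 2-D grid: a crude stand-in
      # for the variable-grid, axially symmetric solver described in the abstract.
      import numpy as np

      n = 81
      phi = np.zeros((n, n))
      fixed = np.zeros((n, n), dtype=bool)

      fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True   # grounded outer boundary
      phi[30:50, 20], fixed[30:50, 20] = 1000.0, True                  # electrode at +1 kV
      phi[30:50, 60], fixed[30:50, 60] = -1000.0, True                 # electrode at -1 kV

      for _ in range(20000):
          new = 0.25 * (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
                        np.roll(phi, 1, 1) + np.roll(phi, -1, 1))
          new[fixed] = phi[fixed]                    # electrodes and boundary keep their set values
          if np.max(np.abs(new - phi)) < 1e-3:       # crude convergence test
              phi = new
              break
          phi = new

      Erow, Ecol = np.gradient(-phi)                 # field components from the converged potential
      print("max |E| on grid:", float(np.hypot(Erow, Ecol).max()))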

  6. Residential Consumer-Centric Demand-Side Management Based on Energy Disaggregation-Piloting Constrained Swarm Intelligence: Towards Edge Computing

    Science.gov (United States)

    Hu, Yu-Chen

    2018-01-01

    The emergence of smart Internet of Things (IoT) devices has highly favored the realization of smart homes in a down-stream sector of a smart grid. The underlying objective of Demand Response (DR) schemes is to actively engage customers to modify their energy consumption on domestic appliances in response to pricing signals. Domestic appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption intelligently. Besides, for residential customers implementing DR, maintaining a balance between energy consumption cost and users’ comfort satisfaction is a challenge. Hence, in this paper, a constrained Particle Swarm Optimization (PSO)-based residential consumer-centric load-scheduling method is proposed. The method can be further featured with edge computing. In contrast with cloud computing, edge computing—a method of optimizing cloud computing technologies by driving computing capabilities at the IoT edge of the Internet as one of the emerging trends in engineering technology—addresses bandwidth-intensive contents and latency-sensitive applications required among sensors and central data centers through data analytics at or near the source of data. A non-intrusive load-monitoring technique proposed previously is utilized for automatic determination of physical characteristics of power-intensive home appliances from users’ life patterns. The swarm intelligence, constrained PSO, is used to minimize the energy consumption cost while considering users’ comfort satisfaction for DR implementation. The residential consumer-centric load-scheduling method proposed in this paper is evaluated under real-time pricing with inclining block rates and is demonstrated in a case study. The experimentation reported in this paper shows the proposed residential consumer-centric load-scheduling method can re-shape loads by home appliances in response to DR signals. Moreover, a phenomenal reduction in peak power consumption is achieved.

  7. Residential Consumer-Centric Demand-Side Management Based on Energy Disaggregation-Piloting Constrained Swarm Intelligence: Towards Edge Computing

    Directory of Open Access Journals (Sweden)

    Yu-Hsiu Lin

    2018-04-01

    Full Text Available The emergence of smart Internet of Things (IoT) devices has highly favored the realization of smart homes in a down-stream sector of a smart grid. The underlying objective of Demand Response (DR) schemes is to actively engage customers to modify their energy consumption on domestic appliances in response to pricing signals. Domestic appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption intelligently. Besides, for residential customers implementing DR, maintaining a balance between energy consumption cost and users’ comfort satisfaction is a challenge. Hence, in this paper, a constrained Particle Swarm Optimization (PSO)-based residential consumer-centric load-scheduling method is proposed. The method can be further featured with edge computing. In contrast with cloud computing, edge computing—a method of optimizing cloud computing technologies by driving computing capabilities at the IoT edge of the Internet as one of the emerging trends in engineering technology—addresses bandwidth-intensive contents and latency-sensitive applications required among sensors and central data centers through data analytics at or near the source of data. A non-intrusive load-monitoring technique proposed previously is utilized for automatic determination of physical characteristics of power-intensive home appliances from users’ life patterns. The swarm intelligence, constrained PSO, is used to minimize the energy consumption cost while considering users’ comfort satisfaction for DR implementation. The residential consumer-centric load-scheduling method proposed in this paper is evaluated under real-time pricing with inclining block rates and is demonstrated in a case study. The experimentation reported in this paper shows the proposed residential consumer-centric load-scheduling method can re-shape loads by home appliances in response to DR signals. Moreover, a phenomenal reduction in peak power

  8. Residential Consumer-Centric Demand-Side Management Based on Energy Disaggregation-Piloting Constrained Swarm Intelligence: Towards Edge Computing.

    Science.gov (United States)

    Lin, Yu-Hsiu; Hu, Yu-Chen

    2018-04-27

    The emergence of smart Internet of Things (IoT) devices has highly favored the realization of smart homes in a down-stream sector of a smart grid. The underlying objective of Demand Response (DR) schemes is to actively engage customers to modify their energy consumption on domestic appliances in response to pricing signals. Domestic appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption intelligently. Besides, for residential customers implementing DR, maintaining a balance between energy consumption cost and users’ comfort satisfaction is a challenge. Hence, in this paper, a constrained Particle Swarm Optimization (PSO)-based residential consumer-centric load-scheduling method is proposed. The method can be further featured with edge computing. In contrast with cloud computing, edge computing—a method of optimizing cloud computing technologies by driving computing capabilities at the IoT edge of the Internet as one of the emerging trends in engineering technology—addresses bandwidth-intensive contents and latency-sensitive applications required among sensors and central data centers through data analytics at or near the source of data. A non-intrusive load-monitoring technique proposed previously is utilized for automatic determination of physical characteristics of power-intensive home appliances from users’ life patterns. The swarm intelligence, constrained PSO, is used to minimize the energy consumption cost while considering users’ comfort satisfaction for DR implementation. The residential consumer-centric load-scheduling method proposed in this paper is evaluated under real-time pricing with inclining block rates and is demonstrated in a case study. The experimentation reported in this paper shows the proposed residential consumer-centric load-scheduling method can re-shape loads by home appliances in response to DR signals. Moreover, a phenomenal reduction in peak power consumption is achieved.
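
    A compact illustration of the constrained-PSO scheduling idea (a generic global-best PSO, not the authors' formulation): start hours of a few shiftable appliances are optimised against an assumed real-time price while each start is kept inside a hypothetical user-comfort window.

      # Toy constrained-PSO scheduler: choose start hours for shiftable appliances to
      # minimise energy cost under a real-time price, keeping each start inside a
      # user-comfort window. Appliance data and prices are illustrative assumptions.
      import numpy as np

      price = 0.10 + 0.15 * np.exp(-((np.arange(24) - 18.0) ** 2) / 8.0)   # $/kWh, evening peak (toy)
      appliances = [  # (name, power kW, run hours, earliest start, latest start)
          ("washer", 0.5, 2, 8, 20),
          ("dryer", 1.2, 1, 9, 21),
          ("ev_charger", 3.3, 4, 0, 19),
      ]
      lo = np.array([a[3] for a in appliances], float)
      hi = np.array([a[4] for a in appliances], float)

      def cost(starts):
          starts = np.clip(np.rint(starts), lo, hi).astype(int)
          return sum(kw * price[s:s + dur].sum()
                     for (name, kw, dur, *_), s in zip(appliances, starts))

      # Standard global-best PSO with positions clipped to the comfort windows.
      rng = np.random.default_rng(1)
      n_particles, iters, w, c1, c2 = 30, 200, 0.7, 1.5, 1.5
      x = rng.uniform(lo, hi, size=(n_particles, len(appliances)))
      v = np.zeros_like(x)
      pbest, pbest_cost = x.copy(), np.array([cost(p) for p in x])
      gbest = pbest[pbest_cost.argmin()].copy()

      for _ in range(iters):
          r1, r2 = rng.random(x.shape), rng.random(x.shape)
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
          x = np.clip(x + v, lo, hi)                      # enforce the comfort-window constraint
          costs = np.array([cost(p) for p in x])
          improved = costs < pbest_cost
          pbest[improved], pbest_cost[improved] = x[improved], costs[improved]
          gbest = pbest[pbest_cost.argmin()].copy()

      print("best start hours:", np.clip(np.rint(gbest), lo, hi).astype(int),
            "cost:", round(cost(gbest), 3))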

  9. High-level language computer architecture

    CERN Document Server

    Chu, Yaohan

    1975-01-01

    High-Level Language Computer Architecture offers a tutorial on high-level language computer architecture, including von Neumann architecture and syntax-oriented architecture as well as direct and indirect execution architecture. Design concepts of Japanese-language data processing systems are discussed, along with the architecture of stack machines and the SYMBOL computer system. The conceptual design of a direct high-level language processor is also described.Comprised of seven chapters, this book first presents a classification of high-level language computer architecture according to the pr

  10. GLIF – striving towards a high-performance on-demand network

    CERN Multimedia

    Kristina Gunne

    2010-01-01

    If you were passing through the Mezzanine in the Main Building a couple of weeks ago, you probably noticed the large tiled panel display showing an ultra-high resolution visualization model of dark matter, developed by Cosmogrid. The display was one of the highlights of the 10th Annual Global Lambda Grid Workshop demo session, together with the first ever transfer of over 35 Gbit/second from one PC to another between the SARA Computing Centre in Amsterdam and CERN.   GLIF display. The transfer of such large amounts of data at this speed has been made possible thanks to the GLIF community's vision of a new computing paradigm, in which the central architectural element is an end-to-end path built on optical network wavelengths (so called lambdas). You may think of this as an on-demand private highway for data transfer: by using it you avoid the normal internet exchange points and “traffic jams”. GLIF is a virtual international organization managed as a cooperative activity, wi...

  11. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community

    Science.gov (United States)

    2012-01-01

    Background A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Results Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Conclusions Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the

  12. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community.

    Science.gov (United States)

    Krampis, Konstantinos; Booth, Tim; Chapman, Brad; Tiwari, Bela; Bicak, Mesude; Field, Dawn; Nelson, Karen E

    2012-03-19

    A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly

  13. High-performance computing — an overview

    Science.gov (United States)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  14. Persistent high job demands and reactivity to mental stress predict future ambulatory blood pressure.

    Science.gov (United States)

    Steptoe, A; Cropley, M

    2000-05-01

    To test the hypothesis that work stress (persistent high job demands over 1 year) in combination with high reactivity to mental stress predict ambulatory blood pressure. Assessment of cardiovascular responses to standardized behavioural tasks, job demands, and ambulatory blood pressure over a working day and evening after 12 months. We studied 81 school teachers (26 men, 55 women), 36 of whom experienced persistent high job demands over 1 year, while 45 reported lower job demands. Participants were divided on the basis of high and low job demands, and high and low systolic pressure reactions to an uncontrollable stress task. Blood pressure and concurrent physical activity were monitored using ambulatory apparatus from 0900 to 2230 h on a working day. Cardiovascular stress reactivity was associated with waist/hip ratio. Systolic and diastolic pressure during the working day were greater in high job demand participants who were stress reactive than in other groups, after adjustment for age, baseline blood pressure, body mass index and negative affectivity. The difference was not accounted for by variations in physical activity. Cardiovascular stress reactivity and sustained psychosocial stress may act in concert to increase cardiovascular risk in susceptible individuals.

  15. Worktime demands and work-family interference: Does worktime control buffer the adverse effects of high demands?

    NARCIS (Netherlands)

    Geurts, S.A.E.; Beckers, D.G.J.; Taris, T.W.; Kompier, M.A.J.; Smulders, P.G.W.

    2009-01-01

    This study examined whether worktime control buffered the impact of worktime demands on work-family interference (WFI), using data from 2,377 workers from various sectors of industry in The Netherlands. We distinguished among three types of worktime demands: time spent on work according to one's

  16. High School Computer Science Education Paves the Way for Higher Education: The Israeli Case

    Science.gov (United States)

    Armoni, Michal; Gal-Ezer, Judith

    2014-01-01

    The gap between enrollments in higher education computing programs and the high-tech industry's demands is widely reported, and is especially prominent for women. Increasing the availability of computer science education in high school is one of the strategies suggested in order to address this gap. We look at the connection between exposure to…

  17. Computer proficiency questionnaire: assessing low and high computer proficient seniors.

    Science.gov (United States)

    Boot, Walter R; Charness, Neil; Czaja, Sara J; Sharit, Joseph; Rogers, Wendy A; Fisk, Arthur D; Mitzner, Tracy; Lee, Chin Chin; Nair, Sankaran

    2015-06-01

    Computers and the Internet have the potential to enrich the lives of seniors and aid in the performance of important tasks required for independent living. A prerequisite for reaping these benefits is having the skills needed to use these systems, which is highly dependent on proper training. One prerequisite for efficient and effective training is being able to gauge current levels of proficiency. We developed a new measure (the Computer Proficiency Questionnaire, or CPQ) to measure computer proficiency in the domains of computer basics, printing, communication, Internet, calendaring software, and multimedia use. Our aim was to develop a measure appropriate for individuals with a wide range of proficiencies from noncomputer users to extremely skilled users. To assess the reliability and validity of the CPQ, a diverse sample of older adults, including 276 older adults with no or minimal computer experience, was recruited and asked to complete the CPQ. The CPQ demonstrated excellent reliability (Cronbach's α = .98), with subscale reliabilities ranging from .86 to .97. Age, computer use, and general technology use all predicted CPQ scores. Factor analysis revealed three main factors of proficiency related to Internet and e-mail use; communication and calendaring; and computer basics. Based on our findings, we also developed a short-form CPQ (CPQ-12) with similar properties but 21 fewer questions. The CPQ and CPQ-12 are useful tools to gauge computer proficiency for training and research purposes, even among low computer proficient older adults.
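
    For readers unfamiliar with the reliability statistic quoted above, Cronbach's alpha for a k-item scale is the standard internal-consistency measure

      \alpha \;=\; \frac{k}{k-1}\left(1 \;-\; \frac{\sum_{i=1}^{k}\sigma^{2}_{Y_i}}{\sigma^{2}_{X}}\right),

    where \sigma^{2}_{Y_i} is the variance of item i and \sigma^{2}_{X} is the variance of the total score; values near 1, as reported for the CPQ, indicate high internal consistency.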

  18. Debugging a high performance computing program

    Science.gov (United States)

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
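
    The grouping step described in this abstract can be pictured with a small Python sketch; the thread identifiers and call addresses are hypothetical stand-ins for what the real tool collects from a running parallel job.

      # Group threads by the addresses of their calling instructions; small outlier groups
      # are likely the defective threads worth inspecting first.
      from collections import defaultdict

      # Hypothetical snapshot: thread id -> tuple of return addresses up the call stack.
      stacks = {
          0: (0x4005a0, 0x4012f4, 0x402a10),
          1: (0x4005a0, 0x4012f4, 0x402a10),
          2: (0x4005a0, 0x4012f4, 0x402a10),
          3: (0x4005a0, 0x4013c8, 0x402a10),   # diverges: stuck in a different call path
      }

      groups = defaultdict(list)
      for tid, addrs in stacks.items():
          groups[addrs].append(tid)

      for addrs, tids in sorted(groups.items(), key=lambda kv: len(kv[1])):
          trace = " -> ".join(hex(a) for a in addrs)
          print(f"{len(tids)} thread(s) at {trace}: {tids}")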

  19. Demand side resource operation on the Irish power system with high wind power penetration

    International Nuclear Information System (INIS)

    Keane, A.; Tuohy, A.; Meibom, P.; Denny, E.; Flynn, D.; Mullane, A.; O'Malley, M.

    2011-01-01

    The utilisation of demand side resources is set to increase over the coming years with the advent of advanced metering infrastructure, home area networks and the promotion of increased energy efficiency. Demand side resources are proposed as an energy resource that, through aggregation, can form part of the power system plant mix and contribute to the flexible operation of a power system. A model for demand side resources is proposed here that captures its key characteristics for commitment and dispatch calculations. The model is tested on the all island Irish power system, and the operation of the model is simulated over one year in both a stochastic and deterministic mode, to illustrate the impact of wind and load uncertainty. The results illustrate that demand side resources can contribute to the efficient, flexible operation of systems with high penetrations of wind by replacing some of the functions of conventional peaking plant. Demand side resources are also shown to be capable of improving the reliability of the system, with reserve capability identified as a key requirement in this respect. - Highlights: → Demand side resource model presented for use in unit commitment and dispatch calculations. → Benefits of demand side aggregation demonstrated specifically as a peaking unit and provider of reserve. → Potential to displace or defer construction of conventional peaking units.

  20. High Job Demands, Still Engaged and Not Burned Out? The Role of Job Crafting.

    Science.gov (United States)

    Hakanen, Jari J; Seppälä, Piia; Peeters, Maria C W

    2017-08-01

    Traditionally, employee well-being has been considered as resulting from decent working conditions arranged by the organization. Much less is known about whether employees themselves can make self-initiated changes to their work, i.e., craft their jobs, in order to stay well, even in highly demanding work situations. The aim of this study was to use the job demands-resources (JD-R model) to investigate whether job crafting buffers the negative impacts of four types of job demands (workload, emotional dissonance, work contents, and physical demands) on burnout and work engagement. A questionnaire study was designed to examine the buffering role of job crafting among 470 Finnish dentists. All in all, 11 out of 16 possible interaction effects of job demands and job crafting on employee well-being were significant. Job crafting particularly buffered the negative effects of job demands on burnout (7/8 significant interactions) and to a somewhat lesser extent also on work engagement (4/8 significant interactions). Applying job crafting techniques appeared to be particularly effective in mitigating the negative effects of quantitative workload (4/4 significant interactions). By demonstrating that job crafting can also buffer the negative impacts of high job demands on employee well-being, this study contributed to the JD-R model as it suggests that job crafting may even be possible under high work demands, and not only in resourceful jobs, as most previous studies have indicated. In addition to the top-down initiatives for improving employee well-being, bottom-up approaches such as job crafting may also be efficient in preventing burnout and enhancing work engagement.

  1. Transactive Demand Side Management Programs in Smart Grids with High Penetration of EVs

    Directory of Open Access Journals (Sweden)

    Poria Hasanpor Divshali

    2017-10-01

    Full Text Available Due to environmental concerns, economic issues, and emerging new loads, such as electric vehicles (EVs), the importance of demand side management (DSM) programs has increased in recent years. DSM programs using a dynamic real-time pricing (RTP) method can help to adaptively control the electricity consumption. However, the existing RTP methods, particularly when they consider the EVs and the power system constraints, have many limitations, such as computational complexity and the need for centralized control. Therefore, a new transactive DSM program is proposed in this paper using an imperfect competition model with high EV penetration levels. In particular, a heuristic two-stage iterative method, considering the influence of decisions made independently by customers to minimize their own costs, is developed to find the market equilibrium quickly in a distributed manner. Simulations in the IEEE 37-bus system with 1141 customers and 670 EVs are performed to demonstrate the effectiveness of the proposed method. The results show that the proposed method can better manage the EVs and elastic appliances than the existing methods in terms of power constraints and cost. Also, the proposed method can solve the optimization problem quickly enough to run in real time.
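
    The paper's two-stage heuristic is not reproduced here, but the general shape of a distributed, price-mediated iteration can be sketched with a simple tatonnement loop in Python: an aggregator adjusts a price signal while each customer, with made-up willingness-to-pay values and load limits, responds independently.

      # Generic price-adjustment (tatonnement) loop: NOT the paper's two-stage heuristic,
      # just an illustration of customers responding independently to a shared price signal.
      import numpy as np

      rng = np.random.default_rng(3)
      n_customers, capacity = 50, 120.0                  # feeder capacity in kW (toy value)
      willingness = rng.uniform(0.2, 1.0, n_customers)   # marginal willingness to pay ($/kWh)
      max_load = rng.uniform(2.0, 6.0, n_customers)      # per-customer maximum flexible load (kW)

      price, step = 0.10, 0.002
      for it in range(500):
          # Each customer scales back its flexible load as the price approaches its willingness.
          loads = max_load * np.clip(1.0 - price / willingness, 0.0, 1.0)
          excess = loads.sum() - capacity
          if abs(excess) < 0.5:                          # close enough to the network limit
              break
          price = max(0.0, price + step * excess)        # aggregator nudges the price up or down

      print(f"iterations: {it}, clearing price ~ {price:.3f} $/kWh, demand ~ {loads.sum():.1f} kW")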

  2. Federal High End Computing (HEC) Information Portal

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This portal provides information about opportunities to engage in U.S. Federal government high performance computing activities, including supercomputer use,...

  3. Embedded High Performance Scalable Computing Systems

    National Research Council Canada - National Science Library

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, A Lockheed Martin Company and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  4. A theoretical model for oxygen transport in skeletal muscle under conditions of high oxygen demand.

    Science.gov (United States)

    McGuire, B J; Secomb, T W

    2001-11-01

    Oxygen transport from capillaries to exercising skeletal muscle is studied by use of a Krogh-type cylinder model. The goal is to predict oxygen consumption under conditions of high demand, on the basis of a consideration of transport processes occurring at the microvascular level. Effects of the decline in oxygen content of blood flowing along capillaries, intravascular resistance to oxygen diffusion, and myoglobin-facilitated diffusion are included. Parameter values are based on human skeletal muscle. The dependence of oxygen consumption on oxygen demand, perfusion, and capillary density are examined. When demand is moderate, the tissue is well oxygenated and consumption is slightly less than demand. When demand is high, capillary oxygen content declines rapidly with axial distance and radial oxygen transport is limited by diffusion resistance within the capillary and the tissue. Under these conditions, much of the tissue is hypoxic, consumption is substantially less than demand, and consumption is strongly dependent on capillary density. Predicted consumption rates are comparable with experimentally observed maximal rates of oxygen consumption.
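
    For orientation, the classical Krogh-Erlang solution that such cylinder models build on gives, for a capillary of radius R_c supplying a tissue cylinder of radius R_t with uniform consumption M and Krogh diffusion coefficient K = D\alpha, the tissue oxygen tension

      P(r) \;=\; P_c \;+\; \frac{M}{4K}\left(r^{2}-R_{c}^{2}\right) \;-\; \frac{M R_{t}^{2}}{2K}\,\ln\frac{r}{R_{c}}, \qquad R_{c}\le r\le R_{t},

    assuming zero radial flux at R_t; the model in this abstract layers axial depletion along the capillary, intravascular diffusion resistance and myoglobin-facilitated diffusion on top of this radial picture.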

  5. High-resolution stochastic integrated thermal–electrical domestic demand model

    International Nuclear Information System (INIS)

    McKenna, Eoghan; Thomson, Murray

    2016-01-01

    Highlights: • A major new version of CREST’s demand model is presented. • Simulates electrical and thermal domestic demands at high-resolution. • Integrated structure captures appropriate time-coincidence of variables. • Suitable for low-voltage network and urban energy analyses. • Open-source development in Excel VBA freely available for download. - Abstract: This paper describes the extension of CREST’s existing electrical domestic demand model into an integrated thermal–electrical demand model. The principal novelty of the model is its integrated structure such that the timing of thermal and electrical output variables is appropriately correlated. The model has been developed primarily for low-voltage network analysis and the model’s ability to account for demand diversity is of critical importance for this application. The model, however, can also serve as a basis for modelling domestic energy demands within the broader field of urban energy systems analysis. The new model includes the previously published components associated with electrical demand and generation (appliances, lighting, and photovoltaics) and integrates these with an updated occupancy model, a solar thermal collector model, and new thermal models including a low-order building thermal model, domestic hot water consumption, thermostat and timer controls and gas boilers. The paper reviews the state-of-the-art in high-resolution domestic demand modelling, describes the model, and compares its output with three independent validation datasets. The integrated model remains an open-source development in Excel VBA and is freely available to download for users to configure and extend, or to incorporate into other models.

  6. A primer on high-throughput computing for genomic selection.

    Science.gov (United States)

    Wu, Xiao-Lin; Beissinger, Timothy M; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J M; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel

    2011-01-01

    High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high-throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long, and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful to devise pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on the central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massive parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin-Madison, which can be leveraged for genomic selection, in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet increasing computing demands posed by unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized

  7. Grid computing in high energy physics

    CERN Document Server

    Avery, P

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software r...

  8. Computing in high-energy physics

    International Nuclear Information System (INIS)

    Mount, Richard P.

    2016-01-01

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Lastly, I describe recent developments aimed at improving the overall coherence of high-energy physics software

  9. Computing in high-energy physics

    Science.gov (United States)

    Mount, Richard P.

    2016-04-01

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Finally, I describe recent developments aimed at improving the overall coherence of high-energy physics software.

  10. AHPCRC - Army High Performance Computing Research Center

    Science.gov (United States)

    2010-01-01

    Of particular interest is the ability of a distributed jamming network (DJN) to jam signals in all or part of a sensor or communications network... (Army High Performance Computing Research Center, www.ahpcrc.org)

  11. DURIP: High Performance Computing in Biomathematics Applications

    Science.gov (United States)

    2017-05-10

    The goal of this award was to enhance the capabilities of the Department of Applied Mathematics and Statistics (AMS) at the University of California, Santa Cruz (UCSC) to conduct research and research-related education in areas of high performance computing in biomathematics applications.

  12. Performance Evaluation of Residential Demand Response Based on a Modified Fuzzy VIKOR and Scalable Computing Method

    Directory of Open Access Journals (Sweden)

    Jun Dong

    2018-04-01

    Full Text Available For better utilizing renewable energy resources and improving the sustainability of power systems, demand response is widely applied in China, especially in recent decades. Considering the massive potential flexible resources in the residential sector, demand response programs are able to achieve significant benefits. This paper proposes an effective performance evaluation framework for such programs aimed at residential customers. In general, the evaluation process will face multiple criteria and some uncertain factors. Therefore, we combine the multi-criteria decision making concept and fuzzy set theory to accomplish the model establishment. By introducing trapezoidal fuzzy numbers into the VlseKriterijumska Optimizacija I Kompromisno Resenje (VIKOR) method, the evaluation model can effectively deal with the subjection and fuzziness of experts’ opinions. Furthermore, we ameliorate the criteria weight determination procedure of traditional models by combining the fuzzy Analytic Hierarchy Process and the Shannon entropy method, which can incorporate objective information and subjective judgments. Finally, the proposed evaluation framework is verified by the empirical analysis of five demand response projects in Chinese residential areas. The results give a valid performance ranking of the five alternatives and indicate that more attention should be paid to the criteria affiliated with technology level and economic benefits. In addition, a series of sensitivity analyses are conducted to examine the validity and effectiveness of the established evaluation framework and results. The study improves the traditional multi-criteria decision making method VIKOR by introducing trapezoidal fuzzy numbers and a combination weighting technique, which can provide an effective means for performance evaluation of residential demand response programs in a fuzzy environment.
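
    A crisp (non-fuzzy) skeleton of the ranking machinery named above, Shannon-entropy weights followed by VIKOR, is sketched below in Python; the five alternatives, four benefit criteria and their scores are invented, and the paper's trapezoidal fuzzy numbers and fuzzy-AHP weights are not reproduced.

      # Crisp skeleton of the evaluation pipeline: Shannon-entropy criterion weights
      # followed by VIKOR ranking. Decision matrix values are made-up numbers.
      import numpy as np

      X = np.array([[7.0, 0.62, 8.1, 6.5],     # rows: DR projects, cols: criterion scores
                    [6.2, 0.71, 7.4, 7.0],
                    [8.0, 0.55, 6.9, 6.1],
                    [5.9, 0.66, 7.8, 7.4],
                    [7.4, 0.60, 7.1, 6.8]])
      m, n = X.shape

      # Shannon entropy weights (the objective part; the paper blends these with fuzzy-AHP weights).
      P = X / X.sum(axis=0)
      e = -(P * np.log(P)).sum(axis=0) / np.log(m)
      w = (1 - e) / (1 - e).sum()

      # VIKOR (all criteria treated as benefit criteria here).
      f_best, f_worst = X.max(axis=0), X.min(axis=0)
      D = w * (f_best - X) / (f_best - f_worst)
      S, R = D.sum(axis=1), D.max(axis=1)
      v = 0.5                                   # weight of the "group utility" strategy
      Q = v * (S - S.min()) / (S.max() - S.min()) + (1 - v) * (R - R.min()) / (R.max() - R.min())

      print("entropy weights:", np.round(w, 3))
      print("ranking (best first):", np.argsort(Q) + 1)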

  13. Information Literacy Skills Training: A Factor in Student Satisfaction with Access to High Demand Material

    Science.gov (United States)

    Perrett, Valerie

    2010-01-01

    In a survey of Business and Government, Law and Information Sciences students carried out at the University of Canberra, results showed that in-curricula information literacy skills training had a greater impact on students' satisfaction with access to high demand material than the purchase of additional copies of books. This paper will discuss…

  14. Rural Dilemmas in School-to-Work Transition: Low Skill Jobs, High Social Demands.

    Science.gov (United States)

    Danzig, Arnold

    1996-01-01

    Thirty-three employers in rural Arizona were interviewed concerning employer expectations, workplace opportunities, authority patterns, rewards, and social interaction at work regarding entry level workers directly out of high school. Available work was low skill with few rewards, yet demanded strong social skills and work ethic. Discusses…

  15. Planning and Enacting Mathematical Tasks of High Cognitive Demand in the Primary Classroom

    Science.gov (United States)

    Georgius, Kelly

    2013-01-01

    This study offers an examination of two primary-grades teachers as they learn to transfer knowledge from professional development into their classrooms. I engaged in planning sessions with each teacher to help plan tasks of high cognitive demand, including anticipating and planning for classroom discourse that would occur around the task. A…

  16. Employees facing high job demands: How to keep them fit, satisfied, and intrinsically motivated?

    NARCIS (Netherlands)

    Van Yperen, N.W.; Nagao, DH

    2002-01-01

    The purpose of the present research was to determine why some employees faced with high job demands feel fatigued, dissatisfied, and unmotivated, whereas others feel fatigued but satisfied and intrinsically motivated. It is argued and demonstrated that two job conditions, namely job control and job

  17. GRID computing for experimental high energy physics

    International Nuclear Information System (INIS)

    Moloney, G.R.; Martin, L.; Seviour, E.; Taylor, G.N.; Moorhead, G.F.

    2002-01-01

    Full text: The Large Hadron Collider (LHC), to be completed at the CERN laboratory in 2006, will generate 11 petabytes of data per year. The processing of this large data stream requires a large, distributed computing infrastructure. A recent innovation in high performance distributed computing, the GRID, has been identified as an important tool in data analysis for the LHC. GRID computing has actual and potential application in many fields which require computationally intensive analysis of large, shared data sets. The Australian experimental High Energy Physics community has formed partnerships with the High Performance Computing community to establish a GRID node at the University of Melbourne. Through Australian membership of the ATLAS experiment at the LHC, Australian researchers have an opportunity to be involved in the European DataGRID project. This presentation will include an introduction to the GRID, and its application to experimental High Energy Physics. We will present the results of our studies, including participation in the first LHC data challenge

  18. Demand side resource operation on the Irish power system with high wind power penetration

    DEFF Research Database (Denmark)

    Keane, A.; Tuohy, A.; Meibom, Peter

    2011-01-01

    ...part of the power system plant mix and contribute to the flexible operation of a power system. A model for demand side resources is proposed here that captures its key characteristics for commitment and dispatch calculations. The model is tested on the all island Irish power system, and the operation of the model is simulated over one year in both a stochastic and deterministic mode, to illustrate the impact of wind and load uncertainty. The results illustrate that demand side resources can contribute to the efficient, flexible operation of systems with high penetrations of wind by replacing some of the functions of conventional peaking plant. Demand side resources are also shown to be capable of improving the reliability of the system, with reserve capability identified as a key requirement in this respect.

  19. High-performance computing for airborne applications

    International Nuclear Information System (INIS)

    Quinn, Heather M.; Manuzatto, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-01-01

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  20. A high-resolution stochastic model of domestic activity patterns and electricity demand

    International Nuclear Information System (INIS)

    Widen, Joakim; Waeckelgard, Ewa

    2010-01-01

    Realistic time-resolved data on occupant behaviour, presence and energy use are important inputs to various types of simulations, including performance of small-scale energy systems and buildings' indoor climate, use of lighting and energy demand. This paper presents a modelling framework for stochastic generation of high-resolution series of such data. The model generates both synthetic activity sequences of individual household members, including occupancy states, and domestic electricity demand based on these patterns. The activity-generating model, based on non-homogeneous Markov chains that are tuned to an extensive empirical time-use data set, creates a realistic spread of activities over time, down to a 1-min resolution. A detailed validation against measurements shows that modelled power demand data for individual households as well as aggregate demand for an arbitrary number of households are highly realistic in terms of end-use composition, annual and diurnal variations, diversity between households, short time-scale fluctuations and load coincidence. An important aim with the model development has been to maintain a sound balance between complexity and output quality. Although the model yields a high-quality output, the proposed model structure is uncomplicated in comparison to other available domestic load models.
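
    The non-homogeneous Markov idea can be illustrated with a deliberately small two-state sketch in Python; the minute-by-minute transition probabilities and the mapping from activity to load are illustrative guesses, not the calibrated time-use matrices behind the model.

      # Non-homogeneous two-state Markov chain for minute-resolution occupancy, with a toy
      # mapping from "active" minutes to electricity demand.
      import numpy as np

      rng = np.random.default_rng(7)
      minutes = np.arange(24 * 60)
      hour = minutes // 60

      # Probability of becoming active in the next minute, higher in the morning and evening.
      p_on = np.where((hour >= 6) & (hour <= 8), 0.02,
              np.where((hour >= 17) & (hour <= 22), 0.03, 0.002))
      p_off = np.where((hour >= 23) | (hour < 6), 0.05, 0.01)   # probability of going inactive

      active = np.zeros(minutes.size, dtype=bool)
      state = False
      for t in minutes:
          state = (rng.random() < p_on[t]) if not state else (rng.random() >= p_off[t])
          active[t] = state

      base_load = 0.08                                           # kW standby and refrigeration
      demand_kw = base_load + active * (0.3 + rng.gamma(2.0, 0.25, minutes.size))
      print(f"active share: {active.mean():.0%}, mean demand: {demand_kw.mean():.2f} kW")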

  1. Matching Behavior as a Tradeoff Between Reward Maximization and Demands on Neural Computation [version 2; referees: 2 approved]

    Directory of Open Access Journals (Sweden)

    Jan Kubanek

    2015-10-01

    Full Text Available When faced with a choice, humans and animals commonly distribute their behavior in proportion to the frequency of payoff of each option. Such behavior is referred to as matching and has been captured by the matching law. However, matching is not a general law of economic choice. Matching in its strict sense seems to be specifically observed in tasks whose properties make matching an optimal or a near-optimal strategy. We engaged monkeys in a foraging task in which matching was not the optimal strategy. Over-matching the proportions of the mean offered reward magnitudes would yield more reward than matching, yet, surprisingly, the animals almost exactly matched them. To gain insight into this phenomenon, we modeled the animals' decision-making using a mechanistic model. The model accounted for the animals' macroscopic and microscopic choice behavior. When the models' three parameters were not constrained to mimic the monkeys' behavior, the model over-matched the reward proportions and in doing so, harvested substantially more reward than the monkeys. This optimized model revealed a marked bottleneck in the monkeys' choice function that compares the value of the two options. The model featured a very steep value comparison function relative to that of the monkeys. The steepness of the value comparison function had a profound effect on the earned reward and on the level of matching. We implemented this value comparison function through responses of simulated biological neurons. We found that due to the presence of neural noise, steepening the value comparison requires an exponential increase in the number of value-coding neurons. Matching may be a compromise between harvesting satisfactory reward and the high demands placed by neural noise on optimal neural computation.
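
    For reference, the matching law discussed above is usually written, for response rates B_1, B_2 and obtained reinforcement (reward) rates R_1, R_2, as

      \frac{B_1}{B_1+B_2} \;=\; \frac{R_1}{R_1+R_2}, \qquad\text{or, in generalized form,}\qquad \log\frac{B_1}{B_2} \;=\; s\,\log\frac{R_1}{R_2} \;+\; \log b,

    where the sensitivity s equals 1 under strict matching, exceeds 1 under the over-matching that the unconstrained model in this study produces, and the bias term b captures any fixed preference for one option.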

  2. A Statist Political Economy and High Demand for Education in South Korea

    Directory of Open Access Journals (Sweden)

    Ki Su Kim

    1999-06-01

    Full Text Available In the 1998 academic year, 84 percent of South Korea's high school "leavers" entered a university or college while almost all children went up to high schools. That is to say, South Korea is now moving into a new age of universal higher education. Even so, competition for university entrance remains intense. What is interesting here is South Koreans' unusually high demand for education. In this article, I criticize the existing cultural and socio-economic interpretations of the phenomenon. Instead, I explore a new interpretation by critically referring to the recent political economy debate on South Korea's state-society/market relationship. In my interpretation, the unusually high demand for education is largely due to the powerful South Korean state's losing flexibility in the management of its "developmental" policies. For this, I blame the traditional "personalist ethic" which still prevails as the

  3. Preliminary energy demand studies for Ireland: base case and high case for 1980, 1985 and 1990

    Energy Technology Data Exchange (ETDEWEB)

    Henry, E W

    1981-01-01

    The framework of the Base Case and the High Case for 1990 for Ireland, related to the demand modules of the medium-term European Communities (EC) Energy Model, is described. The modules are: Multi-national Macro-economic Module (EURECA); National Input-Output Model (EXPLOR); and National Energy Demand Model (EDM). The final results of the EXPLOR and EDM are described; one set related to the Base Case and the other related to the High Case. The forecast or projection is termed Base Case because oil prices are assumed to increase with general price inflation, at the same rate. The other forecast is termed High Case because oil prices are assumed to increase at 5% per year more rapidly than general price inflation. The EXPLOR-EDM methodology is described. The lack of data on energy price elasticities for Ireland is noted. A comparison of the Base Case with the High Case is made. (MCW)

  4. High-performance computing in seismology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  5. High performance computing in linear control

    International Nuclear Information System (INIS)

    Datta, B.N.

    1993-01-01

    Remarkable progress has been made in both theory and applications of all important areas of control. The theory is rich and very sophisticated. Some beautiful applications of control theory are presently being made in aerospace, biomedical engineering, industrial engineering, robotics, economics, power systems, etc. Unfortunately, the same assessment of progress does not hold in general for computations in control theory. Control theory is lagging behind other areas of science and engineering in this respect. Nowadays there is a revolution going on in the world of high performance scientific computing. Many powerful computers with vector and parallel processing have been built and have become available in recent years. These supercomputers offer very high speed in computations. Highly efficient software, based on powerful algorithms, has been developed for use on these advanced computers, and has also contributed to increased performance. While workers in many areas of science and engineering have taken great advantage of these hardware and software developments, control scientists and engineers, unfortunately, have not been able to take much advantage of these developments

  6. High performance parallel computers for science

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1989-01-01

    This paper reports that Fermilab's Advanced Computer Program (ACP) has been developing cost-effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 Mflops (peak), 10 MByte single-board computer. These are plugged into a 16-port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256-node, 5-GFlop system is under construction

  7. Grid Computing in High Energy Physics

    International Nuclear Information System (INIS)

    Avery, Paul

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software resources, regardless of location); (4) collaboration (providing tools that allow members full and fair access to all collaboration resources and enable distributed teams to work effectively, irrespective of location); and (5) education, training and outreach (providing resources and mechanisms for training students and for communicating important information to the public). It is believed that computing infrastructures based on Data Grids and optical networks can meet these challenges and can offer data intensive enterprises in high energy physics and elsewhere a comprehensive, scalable framework for collaboration and resource sharing. A number of Data Grid projects have been underway since 1999. Interestingly, the most exciting and far ranging of these projects are led by collaborations of high energy physicists, computer scientists and scientists from other disciplines in support of experiments with massive, near-term data needs. I review progress in this

  8. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  9. Implementation and evaluation of nonparametric regression procedures for sensitivity analysis of computationally demanding models

    International Nuclear Information System (INIS)

    Storlie, Curtis B.; Swiler, Laura P.; Helton, Jon C.; Sallaberry, Cedric J.

    2009-01-01

    The analysis of many physical and engineering problems involves running complex computational models (simulation models, computer codes). With problems of this type, it is important to understand the relationships between the input variables (whose values are often imprecisely known) and the output. The goal of sensitivity analysis (SA) is to study this relationship and identify the most significant factors or variables affecting the results of the model. In this presentation, an improvement on existing methods for SA of complex computer models is described for use when the model is too computationally expensive for a standard Monte-Carlo analysis. In these situations, a meta-model or surrogate model can be used to estimate the necessary sensitivity index for each input. A sensitivity index is a measure of the variance in the response that is due to the uncertainty in an input. Most existing approaches to this problem either do not work well with a large number of input variables and/or they ignore the error involved in estimating a sensitivity index. Here, a new approach to sensitivity index estimation using meta-models and bootstrap confidence intervals is described that provides solutions to these drawbacks. Further, an efficient yet effective approach to incorporate this methodology into an actual SA is presented. Several simulated and real examples illustrate the utility of this approach. This framework can be extended to uncertainty analysis as well.
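
    The general recipe described above can be sketched as follows; the snippet assumes NumPy and scikit-learn are available, uses a random forest as the surrogate and a toy stand-in for the expensive simulator, and estimates first-order sensitivity indices with bootstrap confidence intervals. It illustrates the idea only; the paper's actual meta-models and interval construction differ.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor   # scikit-learn is assumed

    def expensive_model(x):
        # Stand-in for a costly simulator (an Ishigami-like test function).
        return np.sin(x[:, 0]) + 7 * np.sin(x[:, 1]) ** 2 + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0])

    rng = np.random.default_rng(1)
    n, d = 300, 3
    X = rng.uniform(-np.pi, np.pi, size=(n, d))   # the only "expensive" runs we pay for
    y = expensive_model(X)

    def first_order_index(surrogate, i, n_outer=100, n_inner=200):
        """S_i = Var( E[Y | X_i] ) / Var(Y), estimated entirely on the cheap surrogate."""
        cond_mean = np.empty(n_outer)
        for k, xi in enumerate(rng.uniform(-np.pi, np.pi, n_outer)):
            Z = rng.uniform(-np.pi, np.pi, size=(n_inner, d))
            Z[:, i] = xi
            cond_mean[k] = surrogate.predict(Z).mean()
        var_y = surrogate.predict(rng.uniform(-np.pi, np.pi, size=(2000, d))).var()
        return cond_mean.var() / var_y

    # Bootstrap the training runs to attach a confidence interval to each index.
    boot = {i: [] for i in range(d)}
    for _ in range(20):                                   # small B, illustration only
        idx = rng.integers(0, n, n)
        surrogate = RandomForestRegressor(n_estimators=50, random_state=0).fit(X[idx], y[idx])
        for i in range(d):
            boot[i].append(first_order_index(surrogate, i))
    for i in range(d):
        lo, hi = np.percentile(boot[i], [2.5, 97.5])
        print(f"S_{i+1}: mean={np.mean(boot[i]):.2f}  95% CI=({lo:.2f}, {hi:.2f})")
    ```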

  10. Computers and the Future of Skill Demand. Educational Research and Innovation Series

    Science.gov (United States)

    Elliott, Stuart W.

    2017-01-01

    Computer scientists are working on reproducing all human skills using artificial intelligence, machine learning and robotics. Unsurprisingly then, many people worry that these advances will dramatically change work skills in the years ahead and perhaps leave many workers unemployable. This report develops a new approach to understanding these…

  11. High-Degree Neurons Feed Cortical Computations.

    Directory of Open Access Journals (Sweden)

    Nicholas M Timme

    2016-05-01

    Full Text Available Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This preparation and recording method combination produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. These are the first results to show that the extent to
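
    As a concrete reference point for one of the measures named above, the sketch below estimates transfer entropy between two binary spike trains with a one-bin history; it is a minimal plug-in estimator on synthetic data, not the authors' analysis pipeline.

    ```python
    import numpy as np

    def transfer_entropy(x, y):
        """Transfer entropy x -> y (in bits) for binary spike trains, history length 1."""
        x, y = np.asarray(x, int), np.asarray(y, int)
        joint = np.zeros((2, 2, 2))                 # counts over (y_next, y_past, x_past)
        for yn, yp, xp in zip(y[1:], y[:-1], x[:-1]):
            joint[yn, yp, xp] += 1
        joint /= joint.sum()
        te = 0.0
        for yn in (0, 1):
            for yp in (0, 1):
                for xp in (0, 1):
                    p = joint[yn, yp, xp]
                    if p == 0:
                        continue
                    p_ypxp = joint[:, yp, xp].sum()   # p(y_past, x_past)
                    p_yp = joint[:, yp, :].sum()      # p(y_past)
                    p_ynyp = joint[yn, yp, :].sum()   # p(y_next, y_past)
                    te += p * np.log2((p / p_ypxp) / (p_ynyp / p_yp))
        return te

    rng = np.random.default_rng(0)
    x = (rng.random(10000) < 0.2).astype(int)        # synthetic presynaptic spike train
    y = np.roll(x, 1)                                # y copies x with a one-bin delay
    y[rng.random(10000) < 0.05] ^= 1                 # plus a little noise
    print(f"TE(x->y) = {transfer_entropy(x, y):.3f} bits")
    print(f"TE(y->x) = {transfer_entropy(y, x):.3f} bits")
    ```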

  12. The relationship between demand and need for orthodontic treatment in high school students in Bangkok.

    Science.gov (United States)

    Atisook, Pitraporn; Chuacharoen, Rattiya

    2014-07-01

    Orthodontic service is limited in Thailand and cannot meet the demand of the population. (1) To assess the need for orthodontic treatment (OT) using the Index of Orthodontic Treatment Need (IOTN) and to analyze the relationship between demand and need for OT, and (2) to compare the demand and need for OT between genders. A cross-sectional study was conducted on 450 students aged 12 to 14 years old in three government high schools in Bangkok. A constructed questionnaire was used to assess demand for OT. Clinical examination was done by two orthodontists to determine the need for OT using the IOTN. RESULTS: Most of the students (74.0%) wished to have OT, while only one-third (37.5%) had severe need and one-third (34.4%) had moderate need for OT as judged by the DHC of the IOTN. The AC of the IOTN indicated that most students (55.8%) had mild or no need for OT. Females (79%) demanded OT more than males (66%, p-value = 0.033), but the need was similar in both sexes. Most functional factors had strong relationships with the demand for OT, except lower teeth bite on palate, but none was found to be associated with need for OT. All of the aesthetic factors had strong relationships with demand for OT. There were significant relationships with need in five categories: 1) crooked, crowded, or spacing teeth; 2) worried when speaking or smiling; 3) had been suggested for OT; 4) breath smell and halitosis; and 5) wanted to put on braces to be like other people or for fashionable reasons. Most of the students requested OT, but females had significantly higher demand for OT than males. Most of the sample needed to have OT. The aesthetic factors that had strong relationships with the need for OT were: 1) crooked, crowded, or spacing teeth; 2) worried when speaking or smiling; 3) had been suggested for OT; 4) breath smell and halitosis; and 5) wanted to put on braces to be like other people or for fashionable reasons.

  13. Large Scale Computing and Storage Requirements for High Energy Physics

    International Nuclear Information System (INIS)

    Gerber, Richard A.; Wasserman, Harvey

    2010-01-01

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes

  14. Large Scale Computing and Storage Requirements for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years.

  15. CHEP95: Computing in high energy physics. Abstracts

    International Nuclear Information System (INIS)

    1995-01-01

    These proceedings cover the technical papers on computation in High Energy Physics, including computer codes, computer devices, control systems, simulations, data acquisition systems. New approaches on computer architectures are also discussed

  16. High-Precision Computation and Mathematical Physics

    International Nuclear Information System (INIS)

    Bailey, David H.; Borwein, Jonathan M.

    2008-01-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, Ising theory, quantum field theory and experimental mathematics. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
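
    A small illustration of why such packages matter, using the Python mpmath library (assumed available): the truncated theta-type sum below agrees with sqrt(100*pi) to more than 400 decimal places, so the discrepancy is invisible in double precision and only appears at high working precision. The specific identity is a standard experimental-mathematics example, not one drawn from the paper.

    ```python
    from mpmath import mp, mpf, exp, sqrt, pi   # mpmath is assumed installed

    mp.dps = 460                                 # work with ~460 significant digits
    k = mpf(100)
    # Sum of exp(-n^2/100) over all integers n; terms beyond |n| ~ 340 fall
    # below the working precision and can be dropped.
    s = 1 + 2 * sum(exp(-n**2 / k) for n in range(1, 400))
    diff = s - sqrt(pi * k)
    print(diff)   # nonzero only around the 428th decimal place
    ```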

  17. Computing Air Demand Using the Takagi–Sugeno Model for Dam Outlets

    Directory of Open Access Journals (Sweden)

    Mohammad Zounemat-Kermani

    2013-09-01

    Full Text Available An adaptive neuro-fuzzy inference system (ANFIS) was developed using the subtractive clustering technique to study the air demand in low-level outlet works. The ANFIS model was employed to calculate vent air discharge at different gate openings for an embankment dam. A hybrid learning algorithm obtained by combining back-propagation and least-squares estimation was adopted to identify linear and non-linear parameters in the ANFIS model. Empirical relationships based on the experimental information obtained from physical models were applied to 108 experimental data points to obtain more reliable evaluations. The feed-forward Levenberg-Marquardt neural network (LMNN) and multiple linear regression (MLR) models were also built using the same data to compare model performances with each other. The results indicated that the fuzzy rule-based model performed better than the LMNN and MLR models in terms of the established simulation performance criteria: the root mean square error, the Nash–Sutcliffe efficiency, the correlation coefficient, and the bias.
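
    The performance criteria listed above are standard and easy to compute; a minimal sketch is given below with made-up discharge values, noting that the paper may use the opposite sign convention for the bias.

    ```python
    import numpy as np

    def rmse(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return np.sqrt(np.mean((sim - obs) ** 2))

    def nash_sutcliffe(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

    def bias(obs, sim):
        obs, sim = np.asarray(obs, float), np.asarray(sim, float)
        return np.mean(sim - obs)

    # Hypothetical vent air discharge values (m^3/s), for illustration only.
    observed  = [4.1, 5.3, 6.8, 8.0, 9.6]
    simulated = [4.4, 5.1, 6.5, 8.3, 9.2]
    print(f"RMSE = {rmse(observed, simulated):.3f}")
    print(f"NSE  = {nash_sutcliffe(observed, simulated):.3f}")
    print(f"r    = {np.corrcoef(observed, simulated)[0, 1]:.3f}")
    print(f"bias = {bias(observed, simulated):.3f}")
    ```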

  18. Electricity demand and spot price forecasting using evolutionary computation combined with chaotic nonlinear dynamic model

    International Nuclear Information System (INIS)

    Unsihuay-Vila, C.; Zambroni de Souza, A.C.; Marangon-Lima, J.W.; Balestrassi, P.P.

    2010-01-01

    This paper proposes a new hybrid approach based on nonlinear chaotic dynamics and evolutionary strategy to forecast electricity loads and prices. The main idea is to develop a new training or identification stage in a predictor based on nonlinear chaotic dynamics. In the training stage, five optimal parameters of the chaos-based predictor are searched for through an optimization model based on evolutionary strategy. The objective function of the optimization model is the minimization of the mismatch between the predictor's multi-step-ahead forecasts and the observed data, as is done in identification problems. The first contribution of this paper is that the proposed approach is capable of capturing the complex dynamics of the demand and price time series considered, resulting in more accurate forecasting. The second contribution is that the proposed approach runs in an on-line manner, i.e., the selection of the optimal parameter set and the prediction are executed automatically and can be used for prediction in real time; this is an advantage in comparison with other models, where the choice of input parameters is carried out off-line, following qualitative/experience-based recipes. A case study of load and price forecasting is presented using data from New England, Alberta, and Spain. A comparison with other methods such as autoregressive integrated moving average (ARIMA) and artificial neural network (ANN) is shown. The results show that the proposed approach provides more accurate and effective forecasting than ARIMA and ANN methods. (author)
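
    The training stage described above can be caricatured with a (1+1) evolution strategy searching predictor parameters to minimize the multi-step-ahead forecast mismatch; the predictor below is a deliberately simple placeholder rather than the paper's chaos-based model, and all constants are invented.

    ```python
    import numpy as np

    def forecast_mismatch(params, series, horizon=24):
        """Multi-step-ahead forecast error for a toy AR(2)-like predictor.
        The paper's chaos-based predictor has five parameters; this stand-in has three."""
        a, b, c = params
        pred = list(series[:2])
        for _ in range(2, len(series)):
            nxt = a * pred[-1] + b * pred[-2] + c
            if not np.isfinite(nxt) or abs(nxt) > 1e9:
                return np.inf                      # diverging parameter sets are discarded
            pred.append(nxt)
        return float(np.mean((np.array(pred[-horizon:]) - series[-horizon:]) ** 2))

    def one_plus_one_es(series, n_iter=2000, sigma=0.05, seed=0):
        """(1+1) evolution strategy: mutate the parent, keep the child if it improves."""
        rng = np.random.default_rng(seed)
        parent = rng.normal(0.0, 0.5, 3)
        best = forecast_mismatch(parent, series)
        for _ in range(n_iter):
            child = parent + rng.normal(0.0, sigma, 3)
            f = forecast_mismatch(child, series)
            if f < best:
                parent, best = child, f
        return parent, best

    # Toy hourly "load" series: daily cycle plus noise (entirely synthetic).
    t = np.arange(24 * 30)
    series = 100 + 20 * np.sin(2 * np.pi * t / 24) + np.random.default_rng(1).normal(0, 2, t.size)
    params, err = one_plus_one_es(series)
    print("best parameters:", np.round(params, 3), " mismatch:", round(err, 2))
    ```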

  19. PRIDE and "Database on Demand" as valuable tools for computational proteomics.

    Science.gov (United States)

    Vizcaíno, Juan Antonio; Reisinger, Florian; Côté, Richard; Martens, Lennart

    2011-01-01

    The Proteomics Identifications Database (PRIDE, http://www.ebi.ac.uk/pride ) provides users with the ability to explore and compare mass spectrometry-based proteomics experiments that reveal details of the protein expression found in a broad range of taxonomic groups, tissues, and disease states. A PRIDE experiment typically includes identifications of proteins, peptides, and protein modifications. Additionally, many of the submitted experiments also include the mass spectra that provide the evidence for these identifications. Finally, one of the strongest advantages of PRIDE in comparison with other proteomics repositories is the amount of metadata it contains, a key point to put the above-mentioned data in biological and/or technical context. Several informatics tools have been developed in support of the PRIDE database. The most recent one is called "Database on Demand" (DoD), which allows custom sequence databases to be built in order to optimize the results from search engines. We describe the use of DoD in this chapter. Additionally, in order to show the potential of PRIDE as a source for data mining, we also explore complex queries using federated BioMart queries to integrate PRIDE data with other resources, such as Ensembl, Reactome, or UniProt.

  20. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  1. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  2. Monitoring SLAC High Performance UNIX Computing Systems

    International Nuclear Information System (INIS)

    Lettsome, Annette K.

    2005-01-01

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process taken in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
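
    A minimal sketch of the approach described above might look like the following, assuming the usual gmond XML dump on TCP port 8649 and the MySQL Connector/Python package; host names, credentials and the table layout are placeholders, and the actual SLAC scripts surely differ in detail.

    ```python
    """Pull the XML that gmond publishes and append each metric sample to a MySQL
    table so history survives beyond the round-robin database."""
    import socket
    import xml.etree.ElementTree as ET
    import mysql.connector  # MySQL Connector/Python is assumed to be installed

    def read_gmond_xml(host="localhost", port=8649):
        chunks = []
        with socket.create_connection((host, port)) as s:
            while True:
                data = s.recv(65536)
                if not data:
                    break
                chunks.append(data)
        return b"".join(chunks)

    def store_metrics(xml_bytes, conn):
        cur = conn.cursor()
        root = ET.fromstring(xml_bytes)
        for host in root.iter("HOST"):
            for metric in host.iter("METRIC"):
                cur.execute(
                    "INSERT INTO ganglia_metrics (host, metric, value, units, reported)"
                    " VALUES (%s, %s, %s, %s, FROM_UNIXTIME(%s))",
                    (host.get("NAME"), metric.get("NAME"), metric.get("VAL"),
                     metric.get("UNITS"), int(host.get("REPORTED"))))
        conn.commit()

    if __name__ == "__main__":
        conn = mysql.connector.connect(user="ganglia", password="secret",
                                       host="localhost", database="monitoring")
        store_metrics(read_gmond_xml(), conn)
        conn.close()
    ```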

  3. High Performance Computing Operations Review Report

    Energy Technology Data Exchange (ETDEWEB)

    Cupps, Kimberly C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-19

    The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.

  4. Neck pain and postural balance among workers with high postural demands - a cross-sectional study

    DEFF Research Database (Denmark)

    Jørgensen, Marie B.; Skotte, Jørgen H.; Holtermann, Andreas

    2011-01-01

    Neck pain is related to impaired postural balance among patients and is highly prevalent among workers with high postural demands, for example, cleaners. We therefore hypothesised that cleaners with neck pain suffer from postural dysfunction. This cross-sectional study tested whether cleaners with neck pain have an impaired postural balance compared with cleaners without neck pain. Postural balance of 194 cleaners with (n = 85) and without (n = 109) neck pain was studied using three different tests. Success or failure to maintain the standing position for 30 s in unilateral stance was recorded, and postural balance was also quantified as the CEA; cleaners with neck pain were compared with cleaners without neck/low back pain.

  5. Developing Computer Assisted Media of Pneumatic System Learning Oriented to Industrial Demands

    Directory of Open Access Journals (Sweden)

    Wahyu Dwi Kurniawan

    2017-04-01

    Full Text Available This study aimed to develop learning media of pneumatic systems based on computer-assisted learning as an effort to improve the competence of students at the Department of Mechanical Engineering, Faculty of Engineering UNESA. The development method referred to the 4D model design of Thiagarajan, comprising the steps of define, design, develop, and disseminate. The results showed that the average expert validation score (3.54) was in the good category, indicating that the learning application is acceptable. A limited test showed effective results, namely: (a) analysis of the learning data was good (3.64), as indicated by students’ enthusiasm in the learning process; (b) teaching and learning activities were categorized as good, with students actively involved in learning, and the most dominant activity was doing tasks while discussing; (c) learning objectives were achieved both individually and classically; (d) the students showed a positive response, expressed by their interest, excitement, and motivation to follow the learning process.

  6. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools

  7. Grid computing in high-energy physics

    International Nuclear Information System (INIS)

    Bischof, R.; Kuhn, D.; Kneringer, E.

    2003-01-01

    Full text: Future high energy physics experiments are characterized by an enormous amount of data delivered by the large detectors presently under construction, e.g. at the Large Hadron Collider, and by a large number of scientists (several thousands) requiring simultaneous access to the resulting experimental data. Since it seems unrealistic to provide the necessary computing and storage resources at one single place (e.g. CERN), the concept of grid computing, i.e. the use of distributed resources, will be chosen. The DataGrid project (under the leadership of CERN) develops, based on the Globus toolkit, the software necessary for computation and analysis of shared large-scale databases in a grid structure. The high energy physics group Innsbruck participates with several resources in the DataGrid test bed. In this presentation our experience as grid users and resource provider is summarized. In cooperation with the local IT-center (ZID) we installed a flexible grid system which uses PCs (at the moment 162) in students' labs during nights, weekends and holidays, which is especially used to compare different systems (local resource managers, other grid software e.g. from the Nordugrid project) and to supply a test bed for the future Austrian Grid (AGrid). (author)

  8. Overcoming job demands to deliver high quality care in a hospital setting across Europe: The role of teamwork and positivity

    OpenAIRE

    Montgomery Anthony; Panagopoulou Efharis; Costa Patricia

    2014-01-01

    Health care professionals deal on a daily basis with several job demands – emotional, cognitive, organizational and physical. They must also ensure high quality care to their patients. The aim of this study is to analyse the impact of job demands on quality of care and to investigate team (backup behaviors) and individual (positivity ratio) processes that help to shield that impact. Data was collected from 2,890 doctors and nurses in 9 European countries by means of questionnaires. Job demand...

  9. The path toward HEP High Performance Computing

    International Nuclear Information System (INIS)

    Apostolakis, John; Brun, René; Gheata, Andrei; Wenzel, Sandro; Carminati, Federico

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and it has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in terms of memory consumption compared to the single-threaded version, together with sub-optimal handling of event processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grained parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit

  10. A computational study of high entropy alloys

    Science.gov (United States)

    Wang, Yang; Gao, Michael; Widom, Michael; Hawk, Jeff

    2013-03-01

    As a new class of advanced materials, high-entropy alloys (HEAs) exhibit a wide variety of excellent materials properties, including high strength, reasonable ductility with appreciable work-hardening, corrosion and oxidation resistance, wear resistance, and outstanding diffusion-barrier performance, especially at elevated and high temperatures. In this talk, we will explain our computational approach to the study of HEAs that employs the Korringa-Kohn-Rostoker coherent potential approximation (KKR-CPA) method. The KKR-CPA method uses Green's function technique within the framework of multiple scattering theory and is uniquely designed for the theoretical investigation of random alloys from the first principles. The application of the KKR-CPA method will be discussed as it pertains to the study of structural and mechanical properties of HEAs. In particular, computational results will be presented for AlxCoCrCuFeNi (x = 0, 0.3, 0.5, 0.8, 1.0, 1.3, 2.0, 2.8, and 3.0), and these results will be compared with experimental information from the literature.

  11. Computer simulation of high energy displacement cascades

    International Nuclear Information System (INIS)

    Heinisch, H.L.

    1990-01-01

    A methodology developed for modeling many aspects of high energy displacement cascades with molecular level computer simulations is reviewed. The initial damage state is modeled in the binary collision approximation (using the MARLOWE computer code), and the subsequent disposition of the defects within a cascade is modeled with a Monte Carlo annealing simulation (the ALSOME code). There are few adjustable parameters, and none are set to physically unreasonable values. The basic configurations of the simulated high energy cascades in copper, i.e., the number, size and shape of damage regions, compare well with observations, as do the measured numbers of residual defects and the fractions of freely migrating defects. The success of these simulations is somewhat remarkable, given the relatively simple models of defects and their interactions that are employed. The reason for this success is that the behavior of the defects is very strongly influenced by their initial spatial distributions, which the binary collision approximation adequately models. The MARLOWE/ALSOME system, with input from molecular dynamics and experiments, provides a framework for investigating the influence of high energy cascades on microstructure evolution. (author)

  12. High-resolution computer-aided moire

    Science.gov (United States)

    Sciammarella, Cesar A.; Bhat, Gopalakrishna K.

    1991-12-01

    This paper presents a high resolution computer assisted moire technique for the measurement of displacements and strains at the microscopic level. The detection of micro-displacements using a moire grid and the problem associated with the recovery of displacement field from the sampled values of the grid intensity are discussed. A two dimensional Fourier transform method for the extraction of displacements from the image of the moire grid is outlined. An example of application of the technique to the measurement of strains and stresses in the vicinity of the crack tip in a compact tension specimen is given.
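
    The Fourier-transform route to displacements can be sketched compactly: band-pass one carrier harmonic in the 2-D spectrum, invert, and convert the phase difference between deformed and reference grids into displacement. The synthetic grid, 8-pixel pitch and imposed displacement field below are invented for illustration and are not the paper's method in detail.

    ```python
    import numpy as np

    pitch = 8.0                                     # grating pitch in pixels (made up)
    ny, nx = 256, 256
    y, x = np.mgrid[0:ny, 0:nx]

    u_true = 0.8 * np.sin(2 * np.pi * y / ny)       # imposed horizontal displacement field
    reference = 0.5 + 0.5 * np.cos(2 * np.pi * x / pitch)
    deformed  = 0.5 + 0.5 * np.cos(2 * np.pi * (x + u_true) / pitch)

    def carrier_phase(img, pitch):
        """Isolate the +1 carrier harmonic in the 2-D spectrum and return its phase."""
        spec = np.fft.fft2(img)
        fy, fx = np.meshgrid(np.fft.fftfreq(img.shape[0]),
                             np.fft.fftfreq(img.shape[1]), indexing="ij")
        mask = (np.abs(fx - 1.0 / pitch) < 0.5 / pitch) & (np.abs(fy) < 0.5 / pitch)
        return np.angle(np.fft.ifft2(spec * mask))

    dphi = np.angle(np.exp(1j * (carrier_phase(deformed, pitch) - carrier_phase(reference, pitch))))
    u_est = dphi * pitch / (2 * np.pi)              # displacement in pixels
    print("max reconstruction error (pixels):", float(np.max(np.abs(u_est - u_true))))
    ```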

  13. Research on Demand for Bus Transport and Transport Habits of High School Students in Žilina Region

    Directory of Open Access Journals (Sweden)

    Konečný Vladimír

    2017-11-01

    Full Text Available The paper deals with the analysis of demand for bus transport, examining determinants of demand and the practices of high school students based on a survey of their transport habits in the Žilina Region. Transport habits of students are individual and variable in time. This group of passengers is dependent on public passenger transport services for travelling to school. A significant part of the demand for public passenger transport is also formed by this group of passengers. Knowledge of students' transport habits may help in adapting the supply and quality of transport services, which may subsequently stabilize demand for public passenger transport.

  14. Evaluation of high-performance computing software

    Energy Technology Data Exchange (ETDEWEB)

    Browne, S.; Dongarra, J. [Univ. of Tennessee, Knoxville, TN (United States); Rowan, T. [Oak Ridge National Lab., TN (United States)

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations that are difficult to find elsewhere.

  15. Surprise responses in the human brain demonstrate statistical learning under high concurrent cognitive demand

    Science.gov (United States)

    Garrido, Marta Isabel; Teng, Chee Leong James; Taylor, Jeremy Alexander; Rowe, Elise Genevieve; Mattingley, Jason Brett

    2016-06-01

    The ability to learn about regularities in the environment and to make predictions about future events is fundamental for adaptive behaviour. We have previously shown that people can implicitly encode statistical regularities and detect violations therein, as reflected in neuronal responses to unpredictable events that carry a unique prediction error signature. In the real world, however, learning about regularities will often occur in the context of competing cognitive demands. Here we asked whether learning of statistical regularities is modulated by concurrent cognitive load. We compared electroencephalographic metrics associated with responses to pure-tone sounds with frequencies sampled from narrow or wide Gaussian distributions. We showed that outliers evoked a larger response than those in the centre of the stimulus distribution (i.e., an effect of surprise) and that this difference was greater for physically identical outliers in the narrow than in the broad distribution. These results demonstrate an early neurophysiological marker of the brain's ability to implicitly encode complex statistical structure in the environment. Moreover, we manipulated concurrent cognitive load by having participants perform a visual working memory task while listening to these streams of sounds. We again observed greater prediction error responses in the narrower distribution under both low and high cognitive load. Furthermore, there was no reliable reduction in prediction error magnitude under high relative to low cognitive load. Our findings suggest that statistical learning is not a capacity-limited process, and that it proceeds automatically even when cognitive resources are taxed by concurrent demands.
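
    The size of the surprise, or prediction error, expected for a given outlier can be illustrated by evaluating its negative log-likelihood under a narrow versus a wide Gaussian; the frequencies and standard deviations below are made up and do not correspond to the study's stimuli.

    ```python
    import numpy as np

    # Surprise of a tone can be quantified as -log p(frequency) under the distribution
    # the stream is drawn from: a physically identical outlier is more surprising
    # (carries a larger prediction error) when the distribution is narrow.
    def surprise(f, mean, sd):
        return -np.log(np.exp(-(f - mean) ** 2 / (2 * sd ** 2)) / (sd * np.sqrt(2 * np.pi)))

    outlier = 1200.0   # Hz, hypothetical outlier around a 1000-Hz mean
    for label, sd in [("narrow", 50.0), ("wide", 150.0)]:
        print(f"{label:6s} (sd={sd:5.0f} Hz): surprise = {surprise(outlier, 1000.0, sd):.2f} nats")
    ```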

  16. On-demand high-capacity ride-sharing via dynamic trip-vehicle assignment.

    Science.gov (United States)

    Alonso-Mora, Javier; Samaranayake, Samitha; Wallar, Alex; Frazzoli, Emilio; Rus, Daniela

    2017-01-17

    Ride-sharing services are transforming urban mobility by providing timely and convenient transportation to anybody, anywhere, and anytime. These services present enormous potential for positive societal impacts with respect to pollution, energy consumption, congestion, etc. Current mathematical models, however, do not fully address the potential of ride-sharing. Recently, a large-scale study highlighted some of the benefits of car pooling but was limited to static routes with two riders per vehicle (optimally) or three (with heuristics). We present a more general mathematical model for real-time high-capacity ride-sharing that (i) scales to large numbers of passengers and trips and (ii) dynamically generates optimal routes with respect to online demand and vehicle locations. The algorithm starts from a greedy assignment and improves it through a constrained optimization, quickly returning solutions of good quality and converging to the optimal assignment over time. We quantify experimentally the tradeoff between fleet size, capacity, waiting time, travel delay, and operational costs for low- to medium-capacity vehicles, such as taxis and van shuttles. The algorithm is validated with ∼3 million rides extracted from the New York City taxicab public dataset. Our experimental study considers ride-sharing with rider capacity of up to 10 simultaneous passengers per vehicle. The algorithm applies to fleets of autonomous vehicles and also incorporates rebalancing of idling vehicles to areas of high demand. This framework is general and can be used for many real-time multivehicle, multitask assignment problems.
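
    The greedy first stage mentioned above can be sketched in a few lines: assign each request to the feasible vehicle that adds the least travel, subject to a capacity limit. The constrained re-optimization, routing and rebalancing steps of the full method are omitted, and the coordinates below are arbitrary.

    ```python
    import math

    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    def greedy_assign(requests, vehicles, capacity=2):
        """Assign each request to the closest vehicle that still has spare capacity."""
        loads = {v_id: 0 for v_id in vehicles}
        assignment = {}
        for r_id, pickup in requests.items():
            best, best_cost = None, float("inf")
            for v_id, pos in vehicles.items():
                if loads[v_id] >= capacity:
                    continue                      # vehicle already full
                cost = dist(pos, pickup)          # added travel to reach the rider
                if cost < best_cost:
                    best, best_cost = v_id, cost
            if best is not None:
                assignment[r_id] = best
                loads[best] += 1
                vehicles[best] = pickup           # vehicle moves toward the pickup
        return assignment

    vehicles = {"taxi_1": (0.0, 0.0), "taxi_2": (5.0, 5.0)}
    requests = {"r1": (1.0, 0.5), "r2": (4.5, 5.5), "r3": (0.5, 1.5), "r4": (6.0, 4.0)}
    print(greedy_assign(requests, vehicles))
    ```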

  17. High End Computing Technologies for Earth Science Applications: Trends, Challenges, and Innovations

    Science.gov (United States)

    Parks, John (Technical Monitor); Biswas, Rupak; Yan, Jerry C.; Brooks, Walter F.; Sterling, Thomas L.

    2003-01-01

    Earth science applications of the future will stress the capabilities of even the highest performance supercomputers in the areas of raw compute power, mass storage management, and software environments. These NASA mission critical problems demand usable multi-petaflops and exabyte-scale systems to fully realize their science goals. With an exciting vision of the technologies needed, NASA has established a comprehensive program of advanced research in computer architecture, software tools, and device technology to ensure that, in partnership with US industry, it can meet these demanding requirements with reliable, cost effective, and usable ultra-scale systems. NASA will exploit, explore, and influence emerging high end computing architectures and technologies to accelerate the next generation of engineering, operations, and discovery processes for NASA Enterprises. This article captures this vision and describes the concepts, accomplishments, and the potential payoff of the key thrusts that will help meet the computational challenges in Earth science applications.

  18. Computer code validation by high temperature chemistry

    International Nuclear Information System (INIS)

    Alexander, C.A.; Ogden, J.S.

    1988-01-01

    At least five of the computer codes utilized in analysis of severe fuel damage-type events are directly dependent upon or can be verified by high temperature chemistry. These codes are ORIGEN, CORSOR, CORCON, VICTORIA, and VANESA. With the exception of CORCON and VANESA, it is necessary that verification experiments be performed on real irradiated fuel. For ORIGEN, the familiar Knudsen effusion cell is the best choice, and a small piece of known mass and known burn-up is selected and volatilized completely into the mass spectrometer. The mass spectrometer is used in the integral mode to integrate the entire signal from preselected radionuclides, and from this integrated signal the total mass of the respective nuclides can be determined. For CORSOR and VICTORIA, experiments are required in which flowing high-pressure hydrogen/steam passes over the irradiated fuel and then enters the mass spectrometer. For these experiments, a high-pressure, high-temperature molecular beam inlet must be employed. Finally, in support of VANESA-CORCON, the very highest-temperature and molten fuels must be contained and analyzed. Results from all types of experiments will be discussed and their applicability to present and future code development will also be covered

  19. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Science.gov (United States)

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  20. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Barker, Ashley D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bland, Arthur S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Gary, Jeff D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Hack, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; McNally, Stephen T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Rogers, James H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Smith, Brian E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Straatsma, T. P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Sukumar, Sreenivas Rangan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Thach, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Tichenor, Suzy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Wells, Jack C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility

    2016-03-01

    California to date. The Titan system provides the largest extant heterogeneous architecture for computing and computational science. Usage is high, delivering on the promise of a system well-suited for capability simulations for science. This success is due in part to innovations in tracking and reporting the activity on the compute nodes, and using this information to further enable and optimize applications, extending and balancing workload across the entire node. The OLCF continues to invest in innovative processes, tools, and resources necessary to meet continuing user demand. The facility’s leadership in data analysis and workflows was featured at the Department of Energy (DOE) booth at SC15, for the second year in a row, highlighting work with researchers from the National Library of Medicine coupled with unique computational and data resources serving experimental and observational data across facilities. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. Building on the exemplary year of 2014, as shown by the 2014 Operational Assessment Report (OAR) review committee response in Appendix A, this OAR delineates the policies, procedures, and innovations implemented by the OLCF to continue delivering a multi-petaflop resource for cutting-edge research. This report covers CY 2015, which, unless otherwise specified, denotes January 1, 2015, through December 31, 2015.

  1. High Resolution Map of Water Supply and Demand for North East United States

    Science.gov (United States)

    Ehsani, N.; Vorosmarty, C. J.; Fekete, B. M.

    2012-12-01

    Accurate estimates of water supply and demand are crucial elements in water resources management and modeling. As part of our NSF-funded EaSM effort to build a Northeast Regional Earth System Model (NE-RESM) as a framework to improve our understanding and capacity to forecast the implications of planning decisions on the region's environment, ecosystem services, energy and economic systems through the 21st century, we are producing a high resolution map (3' x 3' lat/long) of estimated water supply and use for the northeast region of the United States. Focusing on water demand, results from this study enable us to quantify how demand sources affect the hydrology and thermal-chemical water pollution across the region. To generate this 3-minute resolution map, in which each grid cell has a specific estimated monthly domestic, agricultural, thermoelectric and industrial water use, Estimated Use of Water in the United States in 2005 (Kenny et al., 2009) is being coupled to high resolution land cover and land use, irrigation, power plant and population data sets. In addition to water demands, we tried to improve estimates of water supply from the WBM model by improving the way it controls discharge from reservoirs. Reservoirs are key characteristics of the modern hydrologic system, with a particular impact on altering the natural stream flow, thermal characteristics, and biogeochemical fluxes of rivers. Depending on dam characteristics, watershed characteristics and the purpose of building a dam, each reservoir has a specific optimum operating rule. This means that literally 84,000 dams in the National Inventory of Dams potentially follow 84,000 different sets of rules for storing and releasing water which must somehow be accounted for in our modeling exercise. In reality, there is no comprehensive observational dataset depicting these operating rules. Thus, we will simulate these rules. Our perspective is not to find the optimum operating rule per se but to find
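
    A generic operating rule of the kind that has to stand in for the unknown rules of tens of thousands of dams can be sketched as below: meet downstream demand when storage allows, release a fraction of the excess, and spill anything above capacity. All numbers are illustrative and the rule is not the one used in WBM.

    ```python
    import numpy as np

    def simulate_reservoir(inflow, capacity, demand, release_frac=0.2, storage0=None):
        """Simple monthly rule: satisfy demand if possible, release a fraction of the
        excess storage, spill above capacity. Purely illustrative, not calibrated."""
        storage = capacity * 0.5 if storage0 is None else storage0
        releases = []
        for q_in in inflow:
            storage += q_in
            release = min(storage, demand + release_frac * max(storage - demand, 0.0))
            storage -= release
            if storage > capacity:                # spill whatever the reservoir cannot hold
                release += storage - capacity
                storage = capacity
            releases.append(release)
        return np.array(releases)

    inflow = np.array([30, 45, 80, 120, 90, 60, 40, 25, 20, 22, 28, 35], float)  # hm^3/month
    print(simulate_reservoir(inflow, capacity=200.0, demand=40.0))
    ```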

  2. Computer simulations of high pressure systems

    International Nuclear Information System (INIS)

    Wilkins, M.L.

    1977-01-01

    Numerical methods are capable of solving very difficult problems in solid mechanics and gas dynamics. In the design of engineering structures, critical decisions are possible if the behavior of materials is correctly described in the calculation. Problems of current interest require accurate analysis of stress-strain fields that range from very small elastic displacement to very large plastic deformation. A finite difference program is described that solves problems over this range and in two and three space-dimensions and time. A series of experiments and calculations serve to establish confidence in the plasticity formulation. The program can be used to design high pressure systems where plastic flow occurs. The purpose is to identify material properties, strength and elongation, that meet the operating requirements. An objective is to be able to perform destructive testing on a computer rather than on the engineering structure. Examples of topical interest are given

  3. EBR-II high-ramp transients under computer control

    International Nuclear Information System (INIS)

    Forrester, R.J.; Larson, H.A.; Christensen, L.J.; Booty, W.F.; Dean, E.M.

    1983-01-01

    During reactor run 122, EBR-II was subjected to 13 computer-controlled overpower transients at ramps of 4 MWt/s to qualify the facility and fuel for transient testing of LMFBR oxide fuels as part of the EBR-II operational-reliability-testing (ORT) program. A computer-controlled automatic control-rod drive system (ACRDS), designed by EBR-II personnel, permitted automatic control on demand power during the transients
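
    The idea of computer control on demand power during a ramp can be illustrated schematically: the demanded power ramps at 4 MW/s and a simple proportional rod controller chases it. The plant response, gains, limits and power levels below are invented and bear no relation to EBR-II's actual ACRDS dynamics.

    ```python
    import numpy as np

    dt = 0.01                       # s
    ramp_rate = 4.0                 # MW/s, as in the run-122 transients
    p_start, p_end = 60.0, 100.0    # MW, hypothetical start and end powers
    gain, tau = 0.8, 0.5            # controller gain and effective time constant (invented)

    power, t, log = p_start, 0.0, []
    while t < 15.0:
        demand = min(p_end, p_start + ramp_rate * t)               # demanded power
        rate = np.clip(gain * (demand - power) / tau, -6.0, 6.0)   # rod-limited power rate
        power += rate * dt
        log.append((t, demand, power))
        t += dt

    for t, d, p in log[::200]:                                     # sample every 2 s
        print(f"t={t:5.2f} s  demand={d:6.1f} MW  power={p:6.1f} MW")
    ```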

  4. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems the book allows a comparison of the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook to assess the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  5. Does good leadership buffer effects of high emotional demands at work on risk of antidepressant treatment?

    DEFF Research Database (Denmark)

    Madsen, Ida E H; Hanson, Linda L Magnusson; Rugulies, Reiner Ernst

    2014-01-01

    Emotionally demanding work has been associated with increased risk of common mental disorders. Because emotional demands may not be preventable in certain occupations, the identification of workplace factors that can modify this association is vital. This article examines whether effects of emotional demands on antidepressant treatment, as an indicator of common mental disorders, are buffered by good leadership.

  6. The path toward HEP High Performance Computing

    CERN Document Server

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With the LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on th...

  7. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  8. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  9. A High Performance COTS Based Computer Architecture

    Science.gov (United States)

    Patte, Mathieu; Grimoldi, Raoul; Trautner, Roland

    2014-08-01

    Using Commercial Off The Shelf (COTS) electronic components for space applications is a long-standing idea. Indeed, the difference in processing performance and energy efficiency between radiation-hardened components and COTS components is so large that COTS components are very attractive for use in mass- and power-constrained systems. However, using COTS components in space is not straightforward, as one must account for the effects of the space environment on the behavior of the COTS components. In the frame of the ESA-funded activity called High Performance COTS Based Computer, Airbus Defense and Space and its subcontractor OHB CGS have developed and prototyped a versatile COTS-based architecture for high performance processing. The rest of the paper is organized as follows: in the first section we recapitulate the interests and constraints of using COTS components for space applications; then we briefly describe existing fault mitigation architectures and present our solution for fault mitigation based on a component called the SmartIO; in the last part of the paper we describe the prototyping activities executed during the HiP CBC project.

  10. High performance computing in power and energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Khaitan, Siddhartha Kumar [Iowa State Univ., Ames, IA (United States); Gupta, Anshul (eds.) [IBM Watson Research Center, Yorktown Heights, NY (United States)

    2013-07-01

    The twin challenge of meeting global energy demands in the face of growing economies and populations while restricting greenhouse gas emissions is one of the most daunting ones that humanity has ever faced. Smart electrical generation and distribution infrastructure will play a crucial role in meeting these challenges. We will need the capability to handle the large volumes of data generated by power system components like PMUs, DFRs and other data acquisition devices, as well as the capacity to process these data at high resolution via multi-scale and multi-period simulations, cascading and security analysis, interaction between hybrid systems (electric, transport, gas, oil, coal, etc.) and so on, in order to extract meaningful information in real time and ensure a secure, reliable and stable power system grid. Advanced research on the development and implementation of market-ready, leading-edge, high-speed enabling technologies and algorithms for solving real-time, dynamic, resource-critical problems will be required for dynamic security analysis targeted towards successful implementation of Smart Grid initiatives. This book aims to bring together some of the latest research developments as well as thoughts on the future research directions of high performance computing applications in electric power systems planning, operations, security, markets, and grid integration of alternate sources of energy.

  11. Personal computers in high energy physics

    International Nuclear Information System (INIS)

    Quarrie, D.R.

    1987-01-01

    The role of personal computers within HEP is expanding as their capabilities increase and their cost decreases. Already they offer greater flexibility than many low-cost graphics terminals for a comparable cost and in addition they can significantly increase the productivity of physicists and programmers. This talk will discuss existing uses for personal computers and explore possible future directions for their integration into the overall computing environment. (orig.)

  12. Power systems balancing with high penetration renewables: The potential of demand response in Hawaii

    International Nuclear Information System (INIS)

    Critz, D. Karl; Busche, Sarah; Connors, Stephen

    2013-01-01

    Highlights: • Demand response for Oahu results in system cost savings. • Demand response improves thermal power plant operations. • Increased use of wind generation is possible with demand response. • The WILMAR model is used to simulate various levels and prices of demand response. - Abstract: The State of Hawaii’s Clean Energy policies call for 40% of the state’s electricity to be supplied by renewable sources by 2030. A recent study focusing on the island of Oahu showed that meeting large amounts of the island’s electricity needs with wind and solar introduced significant operational challenges, especially when renewable generation varies from forecasts. This paper focuses on the potential of demand response in balancing supply and demand on an hourly basis. Using the WILMAR model, various levels and prices of demand response were simulated. Results indicate that demand response has the potential to smooth overall power system operation, with production cost savings arising from both improved thermal power plant operations and increased wind production. Demand response program design and cost structure are then discussed, drawing from industry experience in direct load control programs
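
    WILMAR is a stochastic scheduling model, so the snippet below is only a toy illustration of the balancing idea the abstract describes: shifting a capped amount of flexible demand out of hours with high net load (demand minus wind) into hours with low net load. The hourly series, the 5% flexibility cap and the greedy heuristic are all invented for illustration.

        # Toy demand-response illustration: shift flexible load from high net-load
        # hours to low net-load hours (not the WILMAR model; all numbers invented).
        import numpy as np

        rng = np.random.default_rng(1)
        hours = 24
        demand = 900 + 150 * np.sin(np.linspace(0, 2 * np.pi, hours))   # MW
        wind = rng.uniform(0, 400, hours)                                # MW
        net_load = demand - wind

        flexible = 0.05 * demand            # at most 5% of each hour's load can move
        shiftable = 0.25 * flexible.sum()   # assume a quarter of it is actually offered

        # Greedy heuristic: remove load in the most stressed hours, add it back
        # in the least stressed hours, keeping total energy unchanged.
        order = np.argsort(net_load)
        adjusted = net_load.copy()
        take = shiftable
        for h in order[::-1]:               # highest net load first
            dec = min(flexible[h], take)
            adjusted[h] -= dec
            take -= dec
        give = shiftable - take
        for h in order:                     # lowest net load first
            inc = min(flexible[h], give)
            adjusted[h] += inc
            give -= inc

        print("peak net load before: %.0f MW, after: %.0f MW" % (net_load.max(), adjusted.max()))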

  13. High performance computing in science and engineering Garching/Munich 2016

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, Siegfried; Bode, Arndt; Bruechle, Helmut; Brehm, Matthias (eds.)

    2016-11-01

    Computer simulations are the well-established third pillar of the natural sciences, along with theory and experimentation. High-performance computing in particular is growing fast and constantly demands more and more powerful machines. To keep pace with this development, in spring 2015 the Leibniz Supercomputing Centre installed the high-performance computing system SuperMUC Phase 2, only three years after the inauguration of its sibling SuperMUC Phase 1, more than doubling the available compute capability. This book covers the time frame from June 2014 until June 2016. Readers will find many examples of outstanding research in the more than 130 projects covered in this book, each of which used at least 4 million core-hours on SuperMUC. The largest scientific communities using SuperMUC in the last two years were computational fluid dynamics, chemistry and materials sciences, astrophysics, and life sciences.

  14. NINJA: Java for High Performance Numerical Computing

    Directory of Open Access Journals (Sweden)

    José E. Moreira

    2002-01-01

    Full Text Available When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java now can be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.

  15. High resolution computed tomography of positron emitters

    International Nuclear Information System (INIS)

    Derenzo, S.E.; Budinger, T.F.; Cahoon, J.L.; Huesman, R.H.; Jackson, H.G.

    1976-10-01

    High resolution computed transaxial radionuclide tomography has been performed on phantoms containing positron-emitting isotopes. The imaging system consisted of two opposing groups of eight NaI(Tl) crystals 8 mm x 30 mm x 50 mm deep and the phantoms were rotated to measure coincident events along 8960 projection integrals as they would be measured by a 280-crystal ring system now under construction. The spatial resolution in the reconstructed images is 7.5 mm FWHM at the center of the ring and approximately 11 mm FWHM at a radius of 10 cm. We present measurements of imaging and background rates under various operating conditions. Based on these measurements, the full 280-crystal system will image 10,000 events per sec with 400 μCi in a section 1 cm thick and 20 cm in diameter. We show that 1.5 million events are sufficient to reliably image 3.5-mm hot spots with 14-mm center-to-center spacing and isolated 9-mm diameter cold spots in phantoms 15 to 20 cm in diameter

  16. Concept for high speed computer printer

    Science.gov (United States)

    Stephens, J. W.

    1970-01-01

    Printer uses Kerr cell as light shutter for controlling the print on photosensitive paper. Applied to output data transfer, the information transfer rate of graphic computer printers could be increased to speeds approaching the data transfer rate of computer central processors /5000 to 10,000 lines per minute/.

  17. Do traditional male role norms modify the association between high emotional demands in work, and sickness absence?

    DEFF Research Database (Denmark)

    Labriola, Merete; Hansen, Claus D.; Lund, Thomas

    2011-01-01

    Objectives: Ambulance workers are exposed to high levels of emotional demands, which could affect sickness absence. Being a male dominated occupation, it is hypothesised that ambulance workers adhere to more traditional male role norms than men in other occupations. The aim is to investigate if adherence to traditional male role norms modifies the effect of emotional demands on sickness absence/presenteeism. Methods: Data derive from MARS (Men, accidents, risk and safety), a two-wave panel study of ambulance workers and fire fighters in Denmark (n = 2585). Information was collected from... Results: The analysis showed that participants with a high MRNI-score were more affected by emotional demands in terms of their mental health than participants with a lower MRNI-score. Conclusions: The study confirms the association between emotional demands and absenteeism, and furthermore showed that the effect...

  18. Greenhouse gas emissions from high demand, natural gas-intensive energy scenarios

    International Nuclear Information System (INIS)

    Victor, D.G.

    1990-01-01

    Since coal and oil emit 70% and 30% more CO2 per unit of energy than natural gas (methane), fuel switching to natural gas is an obvious pathway to lower CO2 emissions and reduced theorized greenhouse warming. However, methane is, itself, a strong greenhouse gas so the CO2 advantages of natural gas may be offset by leaks in the natural gas recovery and supply system. Simple models of atmospheric CO2 and methane are used to test this hypothesis for several natural gas-intensive energy scenarios, including the work of Ausubel et al (1988). It is found that the methane leaks are significant and may increase the total 'greenhouse effect' from natural gas-intensive energy scenarios by 10%. Furthermore, because methane is short-lived in the atmosphere, leaking methane from natural gas-intensive, high energy growth scenarios effectively recharges the concentration of atmospheric methane continuously. For such scenarios, the problem of methane leaks is even more serious. A second objective is to explore some high demand scenarios that describe the role of methane leaks in the greenhouse tradeoff between gas and coal as energy sources. It is found that the uncertainty in the methane leaks from the natural gas system are large enough to consume the CO2 advantages from using natural gas instead of coal for 20% of the market share. (author)
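
    As a back-of-envelope illustration of the leak trade-off described above (not the atmospheric models used in the paper), the sketch below compares CO2-equivalent emissions per unit of delivered energy for coal and for natural gas with a given leak fraction; the emission factor, heating value and methane GWP are assumed, illustrative values.

        # Back-of-envelope CO2-equivalent comparison of coal vs. leaky natural gas
        # (illustrative constants; not the atmospheric models used in the paper).
        GAS_CO2_PER_GJ = 56.0                    # kg CO2 per GJ from burning natural gas (assumed)
        COAL_CO2_PER_GJ = 1.7 * GAS_CO2_PER_GJ   # abstract: coal emits ~70% more CO2 per unit energy
        CH4_ENERGY = 0.050                       # GJ per kg of methane (assumed heating value)
        CH4_GWP = 25.0                           # 100-year global warming potential of methane (assumed)

        def gas_co2eq_per_gj(leak_fraction):
            """CO2-equivalent per GJ delivered when a fraction of produced gas leaks."""
            burned_kg = 1.0 / CH4_ENERGY                     # kg of gas burned per GJ delivered
            leaked_kg = burned_kg * leak_fraction / (1.0 - leak_fraction)
            return GAS_CO2_PER_GJ + leaked_kg * CH4_GWP

        for leak in (0.00, 0.01, 0.03, 0.05):
            print(f"leak {leak:4.0%}: gas {gas_co2eq_per_gj(leak):6.1f} "
                  f"vs coal {COAL_CO2_PER_GJ:6.1f} kg CO2-eq/GJ")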

  19. Neck pain and postural balance among workers with high postural demands - a cross-sectional study

    Science.gov (United States)

    2011-01-01

    Background Neck pain is related to impaired postural balance among patients and is highly prevalent among workers with high postural demands, for example, cleaners. We therefore hypothesised, that cleaners with neck pain suffer from postural dysfunction. This cross-sectional study tested if cleaners with neck pain have an impaired postural balance compared with cleaners without neck pain. Methods Postural balance of 194 cleaners with (n = 85) and without (N = 109) neck pain was studied using three different tests. Success or failure to maintain the standing position for 30 s in unilateral stance was recorded. Participants were asked to stand on a force platform for 30 s in the Romberg position with eyes open and closed. The centre of pressure of the sway was calculated, and separated into a slow (rambling) and fast (trembling) component. Subsequently, the 95% confidence ellipse area (CEA) was calculated. Furthermore a perturbation test was performed. Results More cleaners with neck pain (81%) failed the unilateral stance compared with cleaners without neck pain (61%) (p neck pain in comparison with cleaners without neck pain in the Romberg position with eyes closed, but not with eyes open. Conclusions Postural balance is impaired among cleaners with neck pain and the current study suggests a particular role of the slow component of postural sway. Furthermore, the unilateral stance test is a simple test to illustrate functional impairment among cleaners with concurrent neck and low back pain. Trial registration ISRCTN96241850 PMID:21806796

  20. Evaluation of high temperature gas reactor for demanding cogeneration load follow

    International Nuclear Information System (INIS)

    Yan, Xing L.; Sato, Hiroyuki; Tachibana, Yukio; Kunitomi, Kazuhiko; Hino, Ryutaro

    2012-01-01

    Modular nuclear reactor systems are being developed around the world for new missions, among which is cogeneration for industries and remote areas. Like its existing fossil energy counterparts in these markets, a nuclear plant would need to demonstrate the feasibility of load follow, including (1) the reliability to generate power and heat simultaneously and alone and (2) the flexibility to vary cogeneration rates concurrently with demand changes. This article reports the results of JAEA's evaluation of the high temperature gas reactor (HTGR) to perform these duties. The evaluation results in a plant design based on the materials and design codes developed with JAEA's operating test reactor and from additional equipment validation programs. The 600 MWt-HTGR plant generates electricity efficiently by gas turbine and 900 °C heat by a topping heater. The heater couples via a heat transport loop to an industrial facility that consumes the high temperature heat to yield heat products such as hydrogen fuel, steel, or chemicals. Original control methods are proposed to automate transition between the load duties. Equipment challenges are addressed for severe operation conditions. Performance limits of cogeneration load following are quantified from the plant system simulation for a range of bounding events including a loss of either load and a rapid peaking of electricity. (author)

  1. Lightweight Provenance Service for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Dong; Chen, Yong; Carns, Philip; Jenkins, John; Ross, Robert

    2017-09-09

    Provenance describes detailed information about the history of a piece of data, containing the relationships among elements such as users, processes, jobs, and workflows that contribute to the existence of data. Provenance is key to supporting many data management functionalities that are increasingly important in operations such as identifying data sources, parameters, or assumptions behind a given result; auditing data usage; or understanding details about how inputs are transformed into outputs. Despite its importance, however, provenance support is largely underdeveloped in highly parallel architectures and systems. One major challenge is the demanding requirements of providing provenance service in situ. The need to remain lightweight and to be always on often conflicts with the need to be transparent and offer an accurate catalog of details regarding the applications and systems. To tackle this challenge, we introduce a lightweight provenance service, called LPS, for high-performance computing (HPC) systems. LPS leverages a kernel instrument mechanism to achieve transparency and introduces representative execution and flexible granularity to capture comprehensive provenance with controllable overhead. Extensive evaluations and use cases have confirmed its efficiency and usability. We believe that LPS can be integrated into current and future HPC systems to support a variety of data management needs.

  2. Biomedical Requirements for High Productivity Computing Systems

    Science.gov (United States)

    2005-04-01

    differences in heart muscle structure between normal and brittle-boned mice suffering from osteogenesis imperfecta (OI) because of a deficiency in the protein... reached. In a typical comparative modeling exercise one would use a heuristic algorithm to determine possible sequences of interest, then the Smith... example exercise, require a description of the cellular events that create demands for oxygen. Having cellular level equations together with

  3. Computational Thinking and Practice - A Generic Approach to Computing in Danish High Schools

    DEFF Research Database (Denmark)

    Caspersen, Michael E.; Nowack, Palle

    2014-01-01

    Internationally, there is a growing awareness on the necessity of providing relevant computing education in schools, particularly high schools. We present a new and generic approach to Computing in Danish High Schools based on a conceptual framework derived from ideas related to computational thi...

  4. Technical Note: Method of Morris effectively reduces the computational demands of global sensitivity analysis for distributed watershed models

    Directory of Open Access Journals (Sweden)

    J. D. Herman

    2013-07-01

    Full Text Available The increase in spatially distributed hydrologic modeling warrants a corresponding increase in diagnostic methods capable of analyzing complex models with large numbers of parameters. Sobol' sensitivity analysis has proven to be a valuable tool for diagnostic analyses of hydrologic models. However, for many spatially distributed models, the Sobol' method requires a prohibitive number of model evaluations to reliably decompose output variance across the full set of parameters. We investigate the potential of the method of Morris, a screening-based sensitivity approach, to provide results sufficiently similar to those of the Sobol' method at a greatly reduced computational expense. The methods are benchmarked on the Hydrology Laboratory Research Distributed Hydrologic Model (HL-RDHM) over a six-month period in the Blue River watershed, Oklahoma, USA. The Sobol' method required over six million model evaluations to ensure reliable sensitivity indices, corresponding to more than 30 000 computing hours and roughly 180 gigabytes of storage space. We find that the method of Morris is able to correctly screen the most and least sensitive parameters with 300 times fewer model evaluations, requiring only 100 computing hours and 1 gigabyte of storage space. The method of Morris proves to be a promising diagnostic approach for global sensitivity analysis of highly parameterized, spatially distributed hydrologic models.
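
    As a minimal illustration of the elementary-effects idea behind the method of Morris (not the HL-RDHM benchmarking setup), the sketch below screens a toy three-parameter function by perturbing one factor at a time along random trajectories; the toy function and sampling details are assumptions.

        # Minimal Morris elementary-effects screening on a toy function
        # (illustrative only; not the HL-RDHM experiment described above).
        import numpy as np

        def model(x):
            # Toy model: parameter 0 matters a lot, 1 a little, 2 not at all.
            return 10.0 * x[0] + 2.0 * x[1] ** 2 + 0.0 * x[2]

        def morris_elementary_effects(model, n_params, n_trajectories=50, delta=0.1, seed=0):
            rng = np.random.default_rng(seed)
            effects = [[] for _ in range(n_params)]
            for _ in range(n_trajectories):
                x = rng.uniform(0.0, 1.0 - delta, size=n_params)   # base point in [0, 1-delta]
                y = model(x)
                for i in rng.permutation(n_params):                # perturb one factor at a time
                    x_new = x.copy()
                    x_new[i] += delta
                    y_new = model(x_new)
                    effects[i].append((y_new - y) / delta)         # elementary effect of factor i
                    x, y = x_new, y_new
            # mu* (mean absolute effect) ranks factor importance; sigma flags interactions.
            mu_star = [np.mean(np.abs(e)) for e in effects]
            sigma = [np.std(e) for e in effects]
            return mu_star, sigma

        mu_star, sigma = morris_elementary_effects(model, n_params=3)
        print("mu* per parameter:", [f"{m:.2f}" for m in mu_star])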

  5. DOE research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  6. Peregrine System | High-Performance Computing | NREL

    Science.gov (United States)

    Peregrine has several classes of nodes that users access. Login Nodes: Peregrine has four login nodes, each of which has Intel E5 processors. In addition to the /scratch file systems, the /mss file system is mounted on all login nodes. Compute Nodes: Peregrine has 2592 compute nodes.

  7. Speed and path control for conflict-free flight in high air traffic demand in terminal airspace

    Science.gov (United States)

    Rezaei, Ali

    To accommodate the growing air traffic demand, flights will need to be planned and navigated with a much higher level of precision than today's aircraft flight paths. The Next Generation Air Transportation System (NextGen) stands to benefit significantly in safety and efficiency from such movement of aircraft along precisely defined paths. Air Traffic Operations (ATO) relying on such precision--the Precision Air Traffic Operations or PATO--are the foundation of the high throughput capacity envisioned for future airports. In PATO, the preferred method is to manage the air traffic by assigning a speed profile to each aircraft in a given fleet in a given airspace (in practice known as speed control). In this research, an algorithm has been developed, set in the context of a Hybrid Control System (HCS) model, that determines whether a speed control solution exists for a given fleet of aircraft in a given airspace and, if so, computes this solution as a collective speed profile that assures separation if executed without deviation. Uncertainties such as weather are not considered, but the algorithm can be modified to include them. The algorithm first computes all feasible sequences (i.e., all sequences that allow the given fleet of aircraft to reach their destinations without violating the FAA's separation requirement) by looking at all pairs of aircraft. Then, the most likely sequence is determined and the speed control solution is constructed by backward trajectory generation, starting with the aircraft last out and proceeding to the first out. This computation can be done for different sequences in parallel, which helps to reduce the computation time. If such a solution does not exist, the algorithm calculates a minimal path modification (known as path control) that will allow separation-compliant speed control. We also prove that the algorithm modifies the path without creating a new separation violation. The new path will be generated by adding new
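
    The dissertation's algorithm is not spelled out in the abstract, so the following is only a toy sketch of the first step it mentions, enumerating sequences that satisfy a pairwise separation requirement; the arrival windows, separation value and greedy scheduling check are illustrative assumptions.

        # Toy sketch of enumerating separation-feasible aircraft sequences
        # (illustrative only; not the HCS-based algorithm described above).
        from itertools import permutations

        # Earliest/latest feasible arrival times (minutes) for a small fleet (assumed data).
        aircraft = {
            "AC1": (10.0, 14.0),
            "AC2": (11.0, 16.0),
            "AC3": (12.0, 18.0),
        }
        MIN_SEPARATION = 2.0   # required spacing at the runway threshold, minutes

        def sequence_is_feasible(order):
            """Greedily schedule each aircraft as early as possible in the given
            order and check that its arrival window and the separation hold."""
            t_prev = None
            for name in order:
                earliest, latest = aircraft[name]
                t = earliest if t_prev is None else max(earliest, t_prev + MIN_SEPARATION)
                if t > latest:            # cannot meet its window in this sequence
                    return False
                t_prev = t
            return True

        feasible = [order for order in permutations(aircraft) if sequence_is_feasible(order)]
        print("feasible sequences:", feasible)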

  8. Inkjet metrology: high-accuracy mass measurements of microdroplets produced by a drop-on-demand dispenser.

    Science.gov (United States)

    Verkouteren, R Michael; Verkouteren, Jennifer R

    2009-10-15

    We describe gravimetric methods for measuring the mass of droplets generated by a drop-on-demand (DOD) microdispenser. Droplets are deposited, either continuously at a known frequency or as a burst of known number, into a cylinder positioned on a submicrogram balance. Mass measurements are acquired precisely by computer, and results are corrected for evaporation. Capabilities are demonstrated using isobutyl alcohol droplets. For ejection rates greater than 100 Hz, the repeatability of droplet mass measurements was 0.2%, while the combined relative standard uncertainty (u(c)) was 0.9%. When bursts of droplets were dispensed, the limit of quantitation was 72 microg (1490 droplets) with u(c) = 1.0%. Individual droplet size in a burst was evaluated by high-speed videography. Diameters were consistent from the tenth droplet onward, and the mass of an individual droplet was best estimated by the average droplet mass with a combined uncertainty of about 1%. Diameters of the first several droplets were anomalous, but their contribution was accounted for when dispensing bursts. Above the limits of quantitation, the gravimetric methods provided statistically equivalent results and permit detailed study of operational factors that influence droplet mass during dispensing, including the development of reliable microassays and standard materials using DOD technologies.
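
    As a simplified numerical illustration of the gravimetric idea (continuous dispensing at a known frequency with a correction for evaporation), the sketch below fits the balance reading against time and adds back an independently measured evaporation rate; all numbers are invented and the published uncertainty analysis is not reproduced.

        # Simplified gravimetric droplet-mass estimate with evaporation correction
        # (invented numbers; illustrates the idea, not the published procedure).
        import numpy as np

        DROP_RATE_HZ = 500.0          # drop-on-demand ejection frequency
        TRUE_DROP_NG = 60.0           # "unknown" droplet mass used to synthesize data
        EVAP_UG_PER_S = 0.8           # evaporation rate measured before dispensing

        t = np.arange(0.0, 120.0, 1.0)                       # balance readings every second
        rng = np.random.default_rng(3)
        mass_ug = (TRUE_DROP_NG * 1e-3 * DROP_RATE_HZ - EVAP_UG_PER_S) * t \
                  + rng.normal(0.0, 0.5, t.size)             # net accumulation + balance noise

        net_rate = np.polyfit(t, mass_ug, 1)[0]              # slope of mass vs. time, ug/s
        droplet_mass_ng = (net_rate + EVAP_UG_PER_S) / DROP_RATE_HZ * 1e3
        print(f"estimated droplet mass: {droplet_mass_ng:.1f} ng")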

  9. Bringing together high energy physicist and computer scientist

    International Nuclear Information System (INIS)

    Bock, R.K.

    1989-01-01

    The Oxford Conference on Computing in High Energy Physics approached the physics and computing issues with the question, ''Can computer science help?'' always in mind. This summary is a personal recollection of what I considered to be the highlights of the conference: the parts which contributed to my own learning experience. It can be used as a general introduction to the following papers, or as a brief overview of the current states of computer science within high energy physics. (orig.)

  10. Integrated computer network high-speed parallel interface

    International Nuclear Information System (INIS)

    Frank, R.B.

    1979-03-01

    As the number and variety of computers within Los Alamos Scientific Laboratory's Central Computer Facility grows, the need for a standard, high-speed intercomputer interface has become more apparent. This report details the development of a High-Speed Parallel Interface from conceptual through implementation stages to meet current and future needs for large-scale network computing within the Integrated Computer Network. 4 figures

  11. Bringing Computational Thinking into the High School Science and Math Classroom

    Science.gov (United States)

    Trouille, Laura; Beheshti, E.; Horn, M.; Jona, K.; Kalogera, V.; Weintrop, D.; Wilensky, U.; University CT-STEM Project, Northwestern; University CenterTalent Development, Northwestern

    2013-01-01

    Computational thinking (for example, the thought processes involved in developing algorithmic solutions to problems that can then be automated for computation) has revolutionized the way we do science. The Next Generation Science Standards require that teachers support their students’ development of computational thinking and computational modeling skills. As a result, there is a very high demand among teachers for quality materials. Astronomy provides an abundance of opportunities to support student development of computational thinking skills. Our group has taken advantage of this to create a series of astronomy-based computational thinking lesson plans for use in typical physics, astronomy, and math high school classrooms. This project is funded by the NSF Computing Education for the 21st Century grant and is jointly led by Northwestern University’s Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA), the Computer Science department, the Learning Sciences department, and the Office of STEM Education Partnerships (OSEP). I will also briefly present the online ‘Astro Adventures’ courses for middle and high school students I have developed through NU’s Center for Talent Development. The online courses take advantage of many of the amazing online astronomy enrichment materials available to the public, including a range of hands-on activities and the ability to take images with the Global Telescope Network. The course culminates with an independent computational research project.

  12. In demand

    Energy Technology Data Exchange (ETDEWEB)

    Coleman, B. [Bridgestone Ltd. (United Kingdom)

    2005-11-01

    The paper explains how good relationships can help alleviate potential tyre shortages. Demand for large dump truck tyres (largely for China) has increased by 50% within 12 months. Bridgestone's manufacturing plants are operating at maximum capacity. The company supplies tyres to all vehicles at Scottish Coal's opencast coal mines. Its Tyre Management System (TMS) supplied free of charge to customers helps maximise tyre life and minimise downtime from data on pressure, tread and general conditions fed into the hand-held TMS computer. 3 photos.

  13. High-Throughput Computing on High-Performance Platforms: A Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Matteo, Turilli [Rutgers University; Angius, Alessio [Rutgers University; Oral, H Sarp [ORNL; De, K [University of Texas at Arlington; Klimentov, A [Brookhaven National Laboratory (BNL); Wells, Jack C. [ORNL; Jha, S [Rutgers University

    2017-10-01

    The computing systems used by LHC experiments have historically consisted of the federation of hundreds to thousands of distributed resources, ranging from small to mid-size. In spite of the impressive scale of the existing distributed computing solutions, the federation of small to mid-size resources will be insufficient to meet projected future demands. This paper is a case study of how the ATLAS experiment has embraced Titan, a DOE leadership facility, in conjunction with traditional distributed high-throughput computing to reach sustained production scales of approximately 52M core-hours a year. The three main contributions of this paper are: (i) a critical evaluation of design and operational considerations to support the sustained, scalable and production usage of Titan; (ii) a preliminary characterization of a next generation executor for PanDA to support new workloads and advanced execution modes; and (iii) early lessons for how current and future experimental and observational systems can be integrated with production supercomputers and other platforms in a general and extensible manner.

  14. HIGH PERFORMANCE PHOTOGRAMMETRIC PROCESSING ON COMPUTER CLUSTERS

    Directory of Open Access Journals (Sweden)

    V. N. Adrov

    2012-07-01

    Full Text Available Most CPU-consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent bits as input and produce independent bits as output. The independence of bits comes from the nature of such algorithms, since images, stereopairs or small image blocks can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human intervention. Photogrammetric workstations can perform tie point measurements, DTM calculations, orthophoto construction, mosaicing and many other service operations in parallel using distributed calculations. Distributed calculations save time, reducing several days of calculation to several hours. Modern trends in computer technology show an increase in the number of CPU cores in workstations, speed increases in local networks and, as a result, dropping prices for supercomputers or computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in DPWs is usually targeted at interactive work with a limited number of CPU cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be the limited LAN throughput and storage performance, since the processing of huge amounts of large raster images is needed.
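
    As a minimal illustration of the embarrassingly parallel pattern the paper relies on (independent image blocks processed on independent cores), the sketch below farms per-tile work out to worker processes with Python's multiprocessing; the per-tile "processing" is a placeholder assumption, not a real photogrammetric operation.

        # Minimal sketch of farming independent image tiles out to worker processes
        # (the per-tile work is a placeholder; real photogrammetric steps go there).
        from multiprocessing import Pool
        import numpy as np

        def process_tile(args):
            tile_id, tile = args
            # Placeholder for real work (tie-point matching, DTM cell, orthophoto patch...).
            return tile_id, float(tile.mean())

        def split_into_tiles(image, tile_size=256):
            h, w = image.shape
            for i in range(0, h, tile_size):
                for j in range(0, w, tile_size):
                    yield (i, j), image[i:i + tile_size, j:j + tile_size]

        if __name__ == "__main__":
            rng = np.random.default_rng(0)
            image = rng.integers(0, 255, size=(2048, 2048), dtype=np.uint8)
            with Pool() as pool:                      # one worker per available core
                results = pool.map(process_tile, split_into_tiles(image))
            print(f"processed {len(results)} tiles")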

  15. Association between job strain (high demand-low control) and cardiovascular disease risk factors among petrochemical industry workers

    Directory of Open Access Journals (Sweden)

    Siamak Poorabdian

    2013-08-01

    Full Text Available Objective: One of the practical models for assessment of stressful working conditions due to job strain is the "job demand and control" or Karasek's job strain model. This model explains how adverse physical and psychological effects, including cardiovascular disease risk factors, can be established due to high work demand. The aim was to investigate how certain cardiovascular risk factors, including body mass index (BMI), heart rate, blood pressure, serum total cholesterol levels, and cigarette smoking, are associated with job demand and control in workers. Materials and Methods: In this cohort study, 500 subjects completed "job demand and control" questionnaires. The factor analysis method was used in order to specify the most important "job demand and control" questions. Health check-up records of the workers were used to extract data about cardiovascular disease risk factors. Ultimately, hypothesis testing, based on Eta, was used to assess the relationship between the separated working groups and cardiovascular risk factors (hypertension and serum total cholesterol level). Results: A significant relationship was found between the job demand-control model and cardiovascular risk factors. In terms of chi-squared test results, the highest value was found for heart rate (Chi2 = 145.078). The corresponding results for smoking and BMI were Chi2 = 85.652 and Chi2 = 30.941, respectively. Subsequently, the Eta result for total cholesterol was 0.469, followed by hypertension, equaling 0.684. Moreover, there was a significant difference between cardiovascular risk factors and job demand-control profiles among the different working groups, including the operational group, repairing group and servicing group. Conclusion: Job control and demand are significantly related to heart disease risk factors including hypertension, hyperlipidemia, and cigarette smoking.

  16. Inclusive vision for high performance computing at the CSIR

    CSIR Research Space (South Africa)

    Gazendam, A

    2006-02-01

    Full Text Available and computationally intensive applications. A number of different technologies and standards were identified as core to the open and distributed high-performance infrastructure envisaged...

  17. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  18. A high performance scientific cloud computing environment for materials simulations

    OpenAIRE

    Jorissen, Kevin; Vila, Fernando D.; Rehr, John J.

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including...

  19. Interactive Computer Lessons for Introductory Economics: Guided Inquiry-From Supply and Demand to Women in the Economy.

    Science.gov (United States)

    Miller, John; Weil, Gordon

    1986-01-01

    The interactive feature of computers is used to incorporate a guided inquiry method of learning introductory economics, extending the Computer Assisted Instruction (CAI) method beyond drills. (Author/JDH)

  20. Worksite interventions for preventing physical deterioration among employees in job-groups with high physical work demands

    DEFF Research Database (Denmark)

    Holtermann, Andreas; Jørgensen, Marie B; Gram, Bibi

    2010-01-01

    BACKGROUND: A mismatch between individual physical capacities and physical work demands enhances the risk for musculoskeletal disorders, poor work ability and sickness absence, termed physical deterioration. However, effective intervention strategies for preventing physical deterioration in job groups with high physical demands remain to be established. This paper describes the background, design and conceptual model of the FINALE programme, a framework for health-promoting interventions at 4 Danish job groups (i.e. cleaners, health-care workers, construction workers and industrial workers) characterized by high physical work demands, musculoskeletal disorders, poor work ability and sickness absence. METHODS/DESIGN: A novel approach of the FINALE programme is that the interventions, i.e. 3 randomized controlled trials (RCT) and 1 exploratory case-control study, are tailored to the physical work...

  1. Low cost highly available digital control computer

    International Nuclear Information System (INIS)

    Silvers, M.W.

    1986-01-01

    When designing digital controllers for critical plant control it is important to provide several features. Among these are reliability, availability, maintainability, environmental protection, and low cost. An examination of several applications has led to a design that can be produced for approximately $20,000 (1000 control points). This design is compatible with modern concepts in distributed and hierarchical control. The canonical controller element is a dual-redundant self-checking computer that communicates with a cross-strapped, electrically isolated input/output system. The input/output subsystem comprises multiple intelligent input/output cards. These cards accept commands from the primary processor which are validated, executed, and acknowledged. Each card may be hot-replaced to facilitate sparing. The implementation of the dual-redundant computer architecture is discussed. Called the FS-86, this computer can be used for a variety of applications. It has most recently found application in the upgrade of San Francisco's Bay Area Rapid Transit (BART) train control currently in progress and has been proposed for feedwater control in a boiling water reactor

  2. Underreporting on the MMPI-2-RF in a high-demand police officer selection context: an illustration.

    Science.gov (United States)

    Detrick, Paul; Chibnall, John T

    2014-09-01

    Positive response distortion is common in the high-demand context of employment selection. This study examined positive response distortion, in the form of underreporting, on the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF). Police officer job applicants completed the MMPI-2-RF under high-demand and low-demand conditions, once during the preemployment psychological evaluation and once without contingencies after completing the police academy. Demand-related score elevations were evident on the Uncommon Virtues (L-r) and Adjustment Validity (K-r) scales. Underreporting was evident on the Higher-Order scales Emotional/Internalizing Dysfunction and Behavioral/Externalizing Dysfunction; 5 of 9 Restructured Clinical scales; 6 of 9 Internalizing scales; 3 of 4 Externalizing scales; and 3 of 5 Personality Psychopathology 5 scales. Regression analyses indicated that L-r predicted demand-related underreporting on behavioral/externalizing scales, and K-r predicted underreporting on emotional/internalizing scales. Select scales of the MMPI-2-RF are differentially associated with different types of underreporting among police officer applicants. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  3. An infrastructure with a unified control plane to integrate IP into optical metro networks to provide flexible and intelligent bandwidth on demand for cloud computing

    Science.gov (United States)

    Yang, Wei; Hall, Trevor

    2012-12-01

    The Internet is entering an era of cloud computing to provide more cost-effective, eco-friendly and reliable services to consumer and business users, and the nature of Internet traffic will undergo a fundamental transformation. Consequently, the current Internet will no longer suffice for serving cloud traffic in metro areas. This work proposes an infrastructure with a unified control plane that integrates simple packet aggregation technology with optical express through the interoperation between IP routers and electrical traffic controllers in optical metro networks. The proposed infrastructure provides flexible, intelligent, and eco-friendly bandwidth on demand for cloud computing in metro areas.

  4. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high performance computing are analyzed by bibliometric approaches. This study aims at providing computational physicists utilizing high-performance computing, as well as policy planners, with useful bibliometric results for an assessment of research activities. In order to achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles from the Scopus database from Elsevier covering the time period 2004-2013. We extracted the author rank in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.
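
    As a small illustration of how a co-authorship network can be assembled from article metadata (not the Scopus workflow used in the study), the sketch below builds a weighted graph whose edges link co-authors and ranks authors by their number of distinct co-authors; the input records are invented.

        # Toy co-authorship network built with networkx (invented records;
        # illustrates the idea, not the Scopus-based analysis in the paper).
        from itertools import combinations
        import networkx as nx

        papers = [                                   # each record: list of author names
            ["Kim, A", "Lee, B", "Park, C"],
            ["Kim, A", "Park, C"],
            ["Lee, B", "Chung, D"],
            ["Park, C", "Chung, D", "Kim, A"],
        ]

        G = nx.Graph()
        for authors in papers:
            for a, b in combinations(sorted(set(authors)), 2):
                if G.has_edge(a, b):
                    G[a][b]["weight"] += 1           # count joint papers on the edge
                else:
                    G.add_edge(a, b, weight=1)

        # Rank authors by number of distinct co-authors (degree).
        ranking = sorted(G.degree, key=lambda kv: kv[1], reverse=True)
        for author, degree in ranking:
            print(f"{author}: {degree} co-authors")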

  5. High performance computations using dynamical nucleation theory

    International Nuclear Information System (INIS)

    Windus, T L; Crosby, L D; Kathmann, S M

    2008-01-01

    Chemists continue to explore the use of very large computations to perform simulations that describe the molecular level physics of critical challenges in science. In this paper, we describe the Dynamical Nucleation Theory Monte Carlo (DNTMC) model - a model for determining molecular scale nucleation rate constants - and its parallel capabilities. The potential for bottlenecks and the challenges to running on future petascale or larger resources are delineated. A 'master-slave' solution is proposed to scale to the petascale and will be developed in the NWChem software. In addition, mathematical and data analysis challenges are described
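
    As a generic sketch of the 'master-slave' task-farming pattern proposed here (not the NWChem implementation), the mpi4py fragment below has rank 0 hand out work items on demand and collect results; the per-item 'work' is a placeholder assumption.

        # Generic master-worker ("master-slave") task farm with mpi4py; run with e.g.
        # `mpiexec -n 4 python farm.py`. A sketch of the pattern only; the real DNTMC
        # work units would be Monte Carlo sub-simulations inside NWChem.
        from mpi4py import MPI

        TAG_WORK, TAG_DONE = 1, 2
        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        def do_work(item):
            return item * item                      # placeholder for a real work unit

        if rank == 0:                               # master
            items = list(range(100))
            results = []
            active = 0
            for worker in range(1, size):           # prime every worker (or dismiss it)
                if items:
                    comm.send(items.pop(), dest=worker, tag=TAG_WORK)
                    active += 1
                else:
                    comm.send(None, dest=worker, tag=TAG_WORK)
            while active > 0:
                status = MPI.Status()
                results.append(comm.recv(source=MPI.ANY_SOURCE, tag=TAG_DONE, status=status))
                nxt = items.pop() if items else None        # None tells the worker to stop
                comm.send(nxt, dest=status.Get_source(), tag=TAG_WORK)
                if nxt is None:
                    active -= 1
            print("master collected", len(results), "results")
        else:                                       # worker
            while True:
                item = comm.recv(source=0, tag=TAG_WORK)
                if item is None:
                    break
                comm.send(do_work(item), dest=0, tag=TAG_DONE)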

  6. Social Demand of New Generation Information Network: Introduction to High Spectral Density Optical Communication Technology

    Science.gov (United States)

    Kamiya, Takeshi; Miyazaki, Tetsuya; Kubota, Fumito

    In this section, the current situation of traffic growth and the penetration of broadband services are described first. Then the social demand, technical issues, and research trends for future information networks in the United States, Europe, and Japan are described. Finally, the detailed structure of this book is introduced.

  7. Hazard rate for a two-channel protective system subject to a high demand rate

    International Nuclear Information System (INIS)

    Oliveira, L.F.; Youngblood, R.; Melo, P.F.F.

    1989-01-01

    A basic figure of merit associated with a protective system for an industrial plant is the number of accidents expected to occur in the plant within a given period of time, with the system installed. By definition, in a plant equipped with a protective system, an accident can only happen if an initiating event (a demand) occurs while the protective system is unavailable, that is, while it is in one of its possible failed states. This means that the hazard rate or accident frequency depends on the demand rate and on the unavailability of the protective systems. It has long been recognized that the demand rate influences the unavailability of the protective system, and practical expressions incorporating that effect have been developed for single-channel (Lees, 1982) and multi-channel (Kumamoto and Henley 1978) protective systems. The effect has also been incorporated into a Markovian treatment of a plant protection system (Papazoglou and Cho, 1985). In a previous paper (Oliveira and Netto, 1987) a Markovian approach was used to derive analytical expressions for the evaluation of the plant hazard rate for a single-channel protective system, properly accounting for the effects of the demand and the repair rates. In this paper the authors present an extension of that model to the case of a plant equipped with a two-channel protective system
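
    As a much-simplified numerical illustration of the Markovian approach (not the two-channel model derived in the paper), the sketch below computes the steady-state unavailability of a single repairable channel and the resulting accident frequency for a given demand rate; the rates are illustrative and the coupling between demand rate and unavailability that the paper emphasizes is ignored.

        # Simplistic single-channel illustration: steady-state unavailability and
        # accident (hazard) rate, ignoring the demand/unavailability coupling that
        # the paper actually models. All rates are illustrative assumptions.
        import numpy as np

        lam = 1e-3      # channel failure rate, per hour
        mu = 1e-1       # repair rate, per hour
        delta = 1e-2    # demand (initiating event) rate, per hour

        # Two-state Markov chain: 0 = channel available, 1 = channel failed.
        Q = np.array([[-lam,  lam],
                      [  mu,  -mu]])

        # Stationary distribution: pi Q = 0 with pi summing to one.
        A = np.vstack([Q.T, np.ones(2)])
        b = np.array([0.0, 0.0, 1.0])
        pi, *_ = np.linalg.lstsq(A, b, rcond=None)

        unavailability = pi[1]
        hazard_rate = delta * unavailability        # accidents per hour (demands while failed)
        print(f"unavailability {unavailability:.3e}, hazard rate {hazard_rate:.3e} per hour")
        # Analytic check for this simple chain: lam / (lam + mu)
        print(f"analytic unavailability {lam / (lam + mu):.3e}")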

  8. Do high job demands increase intrinsic motivation or fatigue or both? The role of job control and job social support

    NARCIS (Netherlands)

    Van Yperen, N.W.; Hagedoorn, M.

    2003-01-01

    Examined whether job control and job social support reduce signs of fatigue and enhance intrinsic motivation among employees facing high job demands. 555 nurses (mean age 35.5 yrs) working at specialized units for patients with different levels of mental deficiency completed surveys regarding: (1)

  9. A Heterogeneous High-Performance System for Computational and Computer Science

    Science.gov (United States)

    2016-11-15

    expand the research infrastructure at the institution but also to enhance the high-performance computing training provided to both undergraduate and... cloud computing, supercomputing, and the availability of cheap memory and storage led to enormous amounts of data to be sifted through in forensic... High-Performance Computing (HPC) tools that can be integrated with existing curricula and support our research to modernize and dramatically advance

  10. Software Systems for High-performance Quantum Computing

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; Britt, Keith A [ORNL

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for quantum programming and execution, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  11. From the Web to the Grid and beyond computing paradigms driven by high-energy physics

    CERN Document Server

    Carminati, Federico; Galli-Carminati, Giuliana

    2012-01-01

    Born after World War II, large-scale experimental high-energy physics (HEP) has found itself limited ever since by available accelerator, detector and computing technologies. Accordingly, HEP has made significant contributions to the development of these fields, more often than not driving their innovations. The invention of the World Wide Web at CERN is merely the best-known example out of many. This book is the first comprehensive account to trace the history of this pioneering spirit in the field of computing technologies. It covers everything up to and including the present-day handling of the huge demands imposed upon grid and distributed computing by full-scale LHC operations - operations which have for years involved many thousands of collaborating members worldwide and accordingly provide the original and natural testbed for grid computing concepts. This book takes the reader on a guided tour encompassing all relevant topics, including programming languages, software engineering, large databases, the ...

  12. High Performance Computing in Science and Engineering '08 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2009-01-01

    The discussions and plans on all scientific, advisory, and political levels to realize an even larger “European Supercomputer” in Germany, where the hardware costs alone will be hundreds of millions Euro – much more than in the past – are getting closer to realization. As part of the strategy, the three national supercomputing centres HLRS (Stuttgart), NIC/JSC (Jülich) and LRZ (Munich) have formed the Gauss Centre for Supercomputing (GCS) as a new virtual organization enabled by an agreement between the Federal Ministry of Education and Research (BMBF) and the state ministries for research of Baden-Württemberg, Bayern, and Nordrhein-Westfalen. Already today, the GCS provides the most powerful high-performance computing infrastructure in Europe. Through GCS, HLRS participates in the European project PRACE (Partnership for Advanced Computing in Europe) and extends its reach to all European member countries. These activities align well with the activities of HLRS in the European HPC infrastructur...

  13. Technology Push, Demand Pull And The Shaping Of Technological Paradigms - Patterns In The Development Of Computing Technology

    NARCIS (Netherlands)

    J.C.M. van den Ende (Jan); W.A. Dolfsma (Wilfred)

    2002-01-01

    An assumption generally subscribed to in evolutionary economics is that new technological paradigms arise from advances in science and developments in technological knowledge. Demand only influences the selection among competing paradigms, and the course of the paradigm after its inception. In

  14. High Available COTS Based Computer for Space

    Science.gov (United States)

    Hartmann, J.; Magistrati, Giorgio

    2015-09-01

    The availability and reliability factors of a system are central requirements of a target application. From a simple fuel injection system used in cars up to the flight control system of an autonomously navigating spacecraft, each application defines its specific availability factor under the target application boundary conditions. Increasing quality requirements on data processing systems used in space flight applications are calling for new architectures to fulfill the availability and reliability demands as well as the increase in required data processing power. In contrast to the increased quality requirements, simplification and the use of COTS components to decrease costs, while keeping interface compatibility with currently used system standards, are clear customer needs. Data processing system design is mostly dominated by strict fulfillment of the customer requirements, and reuse of available computer systems was not always possible because of obsolescence of EEE-Parts, insufficient IO capabilities, or the fact that available data processing systems did not provide the required scalability and performance.

  15. RISC Processors and High Performance Computing

    Science.gov (United States)

    Bailey, David H.; Saini, Subhash; Craw, James M. (Technical Monitor)

    1995-01-01

    This tutorial will discuss the top five RISC microprocessors and the parallel systems in which they are used. It will provide a unique cross-machine comparison not available elsewhere. The effective performance of these processors will be compared by citing standard benchmarks in the context of real applications. The latest NAS Parallel Benchmarks, both absolute performance and performance per dollar, will be listed. The next generation of the NPB will be described. The tutorial will conclude with a discussion of future directions in the field. Technology Transfer Considerations: All of these computer systems are commercially available internationally. Information about these processors is available in the public domain, mostly from the vendors themselves. The NAS Parallel Benchmarks and their results have been previously approved numerous times for public release, beginning back in 1991.

  16. Sizing Hydrogen Energy Storage in Consideration of Demand Response in Highly Renewable Generation Power Systems

    Directory of Open Access Journals (Sweden)

    Mubbashir Ali

    2018-05-01

    Full Text Available From an environmental perspective, the increased penetration of wind and solar generation in power systems is remarkable. However, as intermittent renewable generation grows briskly, electrical grids are experiencing significant discrepancies between supply and demand as a result of limited system flexibility. This paper investigates the optimal sizing and control of a hydrogen energy storage system for increased utilization of renewable generation. Using a Finnish case study, a mathematical model is presented to investigate the optimal storage capacity in a renewable power system. In addition, the impact of demand response for domestic storage space heating on the optimal sizing of energy storage is discussed. Finally, sensitivity analyses are conducted to observe the impact of a small share of controllable baseload production, as well as of oversizing the renewable generation, on the required hydrogen storage size.
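
    The paper formulates sizing as an optimization; purely to illustrate the flavour of the problem, the sketch below sizes a hydrogen store by simulating charge from renewable surplus and discharge during deficit hours and taking the peak state of charge as the required capacity. The time series, conversion efficiencies and greedy dispatch rule are all assumptions.

        # Toy hydrogen-storage sizing by simulation (invented data and a greedy
        # dispatch rule; not the optimization model used in the paper).
        import numpy as np

        rng = np.random.default_rng(7)
        hours = 24 * 14
        load = 1000 + 200 * np.sin(np.arange(hours) * 2 * np.pi / 24)    # MW
        renewables = rng.uniform(300, 1800, hours)                        # MW, volatile

        ETA_IN, ETA_OUT = 0.7, 0.5     # assumed electrolyser / fuel-cell efficiencies
        soc, max_soc, unserved = 0.0, 0.0, 0.0

        for d, g in zip(load, renewables):
            if g >= d:
                soc += (g - d) * ETA_IN            # store surplus as hydrogen (MWh_H2)
            else:
                need_h2 = (d - g) / ETA_OUT        # hydrogen needed to cover the deficit
                used = min(soc, need_h2)
                soc -= used
                unserved += (need_h2 - used) * ETA_OUT
            max_soc = max(max_soc, soc)

        print(f"required storage capacity ~ {max_soc:.0f} MWh of hydrogen")
        print(f"unserved energy over the period: {unserved:.0f} MWh")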

  17. A high turndown, ultra low emission low swirl burner for natural gas, on-demand water heaters

    Energy Technology Data Exchange (ETDEWEB)

    Rapp, Vi H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cheng, Robert K. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Therkelsen, Peter L. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-06-13

    Previous research has shown that on-demand water heaters are, on average, approximately 37% more efficient than storage water heaters. However, approximately 98% of water heaters in the U.S. use storage water heaters while the remaining 2% are on-demand. A major market barrier to deployment of on-demand water heaters is their high retail cost, which is due in part to their reliance on multi-stage burner banks that require complex electronic controls. This project aims to research and develop a cost-effective, efficient, ultra-low emission burner for next generation natural gas on-demand water heaters in residential and commercial buildings. To meet these requirements, researchers at the Lawrence Berkeley National Laboratory (LBNL) are adapting and testing the low-swirl burner (LSB) technology for commercially available on-demand water heaters. In this report, a low-swirl burner is researched, developed, and evaluated to meet targeted on-demand water heater performance metrics. Performance metrics for a new LSB design are identified by characterizing performance of current on-demand water heaters using published literature and technical specifications, and through experimental evaluations that measure fuel consumption and emissions output over a range of operating conditions. Next, target metrics and design criteria for the LSB are used to create six 3D printed prototypes for preliminary investigations. Prototype designs that proved the most promising were fabricated out of metal and tested further to evaluate the LSB’s full performance potential. After conducting a full performance evaluation on two designs, we found that one LSB design is capable of meeting or exceeding almost all the target performance metrics for on-demand water heaters. Specifically, this LSB demonstrated flame stability when operating from 4.07 kBTU/hr up to 204 kBTU/hr (50:1 turndown), compliance with SCAQMD Rule 1146.2 (14 ng/J or 20 ppm NOX @ 3% O2), and lower CO emissions than state
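
    As a quick arithmetic check of the turndown ratio quoted above:

        \frac{204\ \mathrm{kBTU/hr}}{4.07\ \mathrm{kBTU/hr}} \approx 50.1

    which is consistent with the reported 50:1 turndown.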

  18. Parallel Computing:. Some Activities in High Energy Physics

    Science.gov (United States)

    Willers, Ian

    This paper examines some activities in High Energy Physics that utilise parallel computing. The topic includes all computing from the proposed SIMD front end detectors, the farming applications, high-powered RISC processors and the large machines in the computer centers. We start by looking at the motivation behind using parallelism for general purpose computing. The developments around farming are then described from its simplest form to the more complex system in Fermilab. Finally, there is a list of some developments that are happening close to the experiments.

  19. A high performance scientific cloud computing environment for materials simulations

    Science.gov (United States)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.
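
    The SCC toolset itself is not reproduced in this record; as a generic, hypothetical sketch of the "automatic creation of virtual clusters" idea (the AMI ID, key name and instance type below are placeholders, not values from the paper), a small EC2 cluster could be requested with boto3 as follows:

        # Hypothetical sketch: request a small virtual cluster on EC2 with boto3.
        # ImageId, KeyName and InstanceType are placeholders, not values from the SCC paper.
        import boto3

        def launch_cluster(n_nodes: int,
                           ami: str = "ami-xxxxxxxx",
                           instance_type: str = "c5.xlarge",
                           key_name: str = "my-key") -> list:
            ec2 = boto3.client("ec2")
            resp = ec2.run_instances(
                ImageId=ami,
                InstanceType=instance_type,
                MinCount=n_nodes,
                MaxCount=n_nodes,
                KeyName=key_name,
            )
            # Return instance IDs so separate tooling can configure MPI across the nodes.
            return [inst["InstanceId"] for inst in resp["Instances"]]

        if __name__ == "__main__":
            print(launch_cluster(4))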

  20. The Principals and Practice of Distributed High Throughput Computing

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The potential of Distributed Processing Systems to deliver computing capabilities with qualities ranging from high availability and reliability to easy expansion in functionality and capacity was recognized and formalized in the 1970's. For more than three decades these principles of Distributed Computing have guided the development of the HTCondor resource and job management system. The widely adopted suite of software tools offered by HTCondor is based on novel distributed computing technologies and is driven by the evolving needs of High Throughput scientific applications. We will review the principles that underpin our work, the distributed computing frameworks and technologies we developed, and the lessons we learned from delivering effective and dependable software tools in an ever-changing landscape of computing technologies and needs that range today from a desktop computer to tens of thousands of cores offered by commercial clouds. About the speaker Miron Livny received a B.Sc. degree in Physics and Mat...

  1. Agglomeration Economies and the High-Tech Computer

    OpenAIRE

    Wallace, Nancy E.; Walls, Donald

    2004-01-01

    This paper considers the effects of agglomeration on the production decisions of firms in the high-tech computer cluster. We build upon an alternative definition of the high-tech computer cluster developed by Bardhan et al. (2003) and we exploit a new data source, the National Establishment Time-Series (NETS) Database, to analyze the spatial distribution of firms in this industry. An essential contribution of this research is the recognition that high-tech firms are heterogeneous collections ...

  2. Computer-aided engineering in High Energy Physics

    International Nuclear Information System (INIS)

    Bachy, G.; Hauviller, C.; Messerli, R.; Mottier, M.

    1988-01-01

    Computing, standard tool for a long time in the High Energy Physics community, is being slowly introduced at CERN in the mechanical engineering field. The first major application was structural analysis followed by Computer-Aided Design (CAD). Development work is now progressing towards Computer-Aided Engineering around a powerful data base. This paper gives examples of the power of this approach applied to engineering for accelerators and detectors

  3. Computational tools for high-throughput discovery in biology

    OpenAIRE

    Jones, Neil Christopher

    2007-01-01

    High throughput data acquisition technology has inarguably transformed the landscape of the life sciences, in part by making possible---and necessary---the computational disciplines of bioinformatics and biomedical informatics. These fields focus primarily on developing tools for analyzing data and generating hypotheses about objects in nature, and it is in this context that we address three pressing problems in the fields of the computational life sciences which each require computing capaci...

  4. Quantum Accelerators for High-Performance Computing Systems

    OpenAIRE

    Britt, Keith A.; Mohiyaddin, Fahd A.; Humble, Travis S.

    2017-01-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantu...

  5. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  6. Embedded computing technology for highly-demanding cyber-physical systems

    NARCIS (Netherlands)

    Jóźwiak, L.

    2015-01-01

    The recent spectacular progress in the microelectronic, information, communication, material and sensor technologies created a big stimulus towards development of much more sophisticated, coherent and fit to use, smart communicating cyber-physical systems (CPS). The huge and rapidly developing

  7. High temperature estimation through computer vision

    International Nuclear Information System (INIS)

    Segovia de los R, J.A.

    1996-01-01

    Among the purposes of pattern recognition is to conceive and analyze classification algorithms applied to representations of images, sounds or signals of any kind. In a thermal plasma reactor process, conventional devices or methods cannot be employed to measure the very high temperatures involved. The goal of this work was to determine these temperatures in an indirect way. (Author)

  8. High Performance Computing in Science and Engineering '14

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2015-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS). The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and   engineers. The book comes with a wealth of color illustrations and tables of results.  

  9. High Demand, Core Geosciences, and Meeting the Challenges through Online Approaches

    Science.gov (United States)

    Keane, Christopher; Leahy, P. Patrick; Houlton, Heather; Wilson, Carolyn

    2014-05-01

    As the geosciences have evolved over the last several decades, so too has undergraduate geoscience education, both in curriculum and in educational experience. In the United States, we have been experiencing very strong growth in geoscience enrollments, as well as in employment demand, for the last 7 years. That growth has been largely fueled by all aspects of the energy boom in the US, both on the energy production side and on the environmental management side. Interestingly, the portfolios of experiences and knowledge required are strongly congruent, as evidenced by results of the American Geosciences Institute's National Geoscience Exit Survey. Likewise, the demand for new geoscientists in the US is outstripping even the nearly unprecedented growth in enrollments and degrees, which draws attention to the geosciences' inability to effectively reach the fastest growing segments of the U.S. college population - underrepresented minorities. We will also examine the results of the AGI Survey on Geoscience Online Learning and consider how those results can be reconciled with Peter Smith's "Middle Third" theory of "wasted talent" arising from spatial, economic, and social dislocation. In particular, the geosciences are late to the online learning game in the United States, and most faculty engaged in such activities are "lone wolves" in their departments, operating with little knowledge of the support structures that exist for such development. The most cited barrier to faculty engaging in online learning is the assertion that laboratory and field experiences will be lost, and many therefore resist the medium. However, the survey shows that faculty are discovering novel approaches to address these issues, many of which can help geoscience programs in the United States meet the expanding demand for geoscience degrees.

  10. Recycling of water of high pressure cleaning of pipes. Phase 1. Quality demands and economical aspects

    International Nuclear Information System (INIS)

    Van Weers, A.W.; Zwaard, J.

    1999-01-01

    In accordance with regulation 6.1 of the current licence under the Surface Water Pollution Law (abbreviated WVO in Dutch) of October 10, 1997, ECN carried out the first phase of a study on the title subject with respect to pipes used in oil and gas exploration. In the present situation, water from the so-called pipe cleaner is discharged via a sea pipe after precipitation and membrane filtration. In addition to the quality demands and economic aspects, attention is paid to a number of environmental aspects

  11. Computation of nonlinear water waves with a high-order Boussinesq model

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Madsen, Per A.; Bingham, Harry

    2005-01-01

    Computational highlights from a recently developed high-order Boussinesq model are shown. The model is capable of treating fully nonlinear waves (up to the breaking point) out to dimensionless depths of (wavenumber times depth) kh \approx 25. Cases considered include the study of short-crested waves in shallow/deep water, resulting in hexagonal/rectangular surface patterns; crescent waves, resulting from unstable perturbations of plane progressive waves; and highly-nonlinear wave-structure interactions. The emphasis is on physically demanding problems, and in each case qualitative and (when...
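
    For context on the dimensionless depth quoted above, kh combines the wavenumber k = 2\pi/\lambda with the water depth h:

        kh = \frac{2\pi h}{\lambda}

    so kh \approx 25 corresponds to a depth of roughly four wavelengths, far beyond the kh \approx \pi conventionally taken as the deep-water limit.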

  12. The Computer Industry. High Technology Industries: Profiles and Outlooks.

    Science.gov (United States)

    International Trade Administration (DOC), Washington, DC.

    A series of meetings was held to assess future problems in United States high technology, particularly in the fields of robotics, computers, semiconductors, and telecommunications. This report, which focuses on the computer industry, includes a profile of this industry and the papers presented by industry speakers during the meetings. The profile…

  13. An Introduction to Computing: Content for a High School Course.

    Science.gov (United States)

    Rogers, Jean B.

    A general outline of the topics that might be covered in a computers and computing course for high school students is provided. Topics are listed in the order in which they should be taught, and the relative amount of time to be spent on each topic is suggested. Seven units are included in the course outline: (1) general introduction, (2) using…

  14. Improvements in high energy computed tomography

    International Nuclear Information System (INIS)

    Burstein, P.; Krieger, A.; Annis, M.

    1984-01-01

    In computerized axial tomography employed with large relatively dense objects such as a solid fuel rocket engine, using high energy x-rays, such as a 15 MeV source, a collimator is employed with an acceptance angle substantially less than 1°, in a preferred embodiment 7 minutes of a degree. In a preferred embodiment, the collimator may be located between the object and the detector, although in other embodiments, a pre-collimator may also be used, that is, between the x-ray source and the object being illuminated. (author)
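
    As a worked conversion of the acceptance angle mentioned above:

        7' = \tfrac{7}{60}^{\circ} \approx 0.117^{\circ} \approx 2.0\ \mathrm{mrad}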

  15. A high level language for a high performance computer

    Science.gov (United States)

    Perrott, R. H.

    1978-01-01

    The proposed computational aerodynamic facility will join the ranks of the supercomputers due to its architecture and increased execution speed. At present, the languages used to program these supercomputers have been modifications of programming languages which were designed many years ago for sequential machines. A new programming language should be developed based on the techniques which have proved valuable for sequential programming languages and incorporating the algorithmic techniques required for these supercomputers. The design objectives for such a language are outlined.

  16. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1991-03-15

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour.

  17. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour

  18. Corrective economic dispatch and operational cycles for probabilistic unit commitment with demand response and high wind power

    International Nuclear Information System (INIS)

    Azizipanah-Abarghooee, Rasoul; Golestaneh, Faranak; Gooi, Hoay Beng; Lin, Jeremy; Bavafa, Farhad; Terzija, Vladimir

    2016-01-01

    Highlights: • Suggesting a new UC mixing a probabilistic security and incentive demand response. • Investigating the effects of uncertainty on UC using chance-constraint programming. • Proposing an efficient spinning reserve satisfaction based on a new ED correction. • Presenting a new operational cycles way to convert binary variable to discrete one. - Abstract: We propose a probabilistic unit commitment problem with incentive-based demand response and high level of wind power. Our novel formulation provides an optimal allocation of up/down spinning reserve. A more efficient unit commitment algorithm based on operational cycles is developed. A multi-period elastic residual demand economic model based on the self- and cross-price elasticities and customers’ benefit function is used. In the proposed scheme, the probability of residual demand falling within the up/down spinning reserve imposed by n − 1 security criterion is considered as a stochastic constraint. A chance-constrained method, with a new iterative economic dispatch correction, wind power curtailment, and commitment of cheaper units, is applied to guarantee that the probability of loss of load is lower than a pre-defined risk level. The developed architecture builds upon an improved Jaya algorithm to generate feasible, robust and optimal solutions corresponding to the operational cost. The proposed framework is applied to a small test system with 10 units and also to the IEEE 118-bus system to illustrate its advantages in efficient scheduling of generation in the power systems.
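
    A hedged sketch of the kind of chance constraint described above (the exact formulation in the paper may differ, and the symbols below are assumed): with \hat{D}_t the forecast residual demand, \tilde{D}_t its uncertain realization, R^{up}_t and R^{dn}_t the scheduled up/down spinning reserve, and \varepsilon the pre-defined risk level,

        \Pr\!\left(-R^{dn}_t \le \tilde{D}_t - \hat{D}_t \le R^{up}_t\right) \ge 1 - \varepsilon

    which bounds the probability of loss of load by \varepsilon in every period t.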

  19. High resolution computed tomography of auditory ossicles

    International Nuclear Information System (INIS)

    Isono, M.; Murata, K.; Ohta, F.; Yoshida, A.; Ishida, O.; Kinki Univ., Osaka

    1990-01-01

    Auditory ossicular sections were scanned at section thicknesses (mm)/section interspaces (mm) of 1.5/1.5 (61 patients), 1.0/1.0 (13 patients) or 1.5/1.0 (33 patients). At any type of section thickness/interspace, the malleal and incudal structures were observed with almost equal frequency. The region of the incudostapedial joint and each component part of the stapes were shown more frequently at a section interspace of 1.0 mm than at 1.5 mm. The visualization frequency of each auditory ossicular component on two or more serial sections was investigated. At a section thickness/section interspace of 1.5/1.5, the visualization rates were low except for large components such as the head of the malleus and the body of the incus, but at a slice interspace of 1.0 mm, they were high for most components of the auditory ossicles. (orig.)

  20. Improving the quality of pork and pork products for the consumer : development of innovative, integrated, and sustainable food production chains of high quality pork products matching consumer demands

    NARCIS (Netherlands)

    Heimann, B.; Christensen, M.; Rosendal Rasmussen, S.; Bonneau, M.; Grunert, K.G.; Arnau, J.; Trienekens, J.H.; Oksbjerg, N.; Greef, de K.H.; Petersen, B.

    2012-01-01

    Improving the quality of pork and pork products for the consumer: development of innovative, integrated, and sustainable food production chains of high quality pork products matching consumer demands.

  1. Analog computing

    CERN Document Server

    Ulmann, Bernd

    2013-01-01

    This book is a comprehensive introduction to analog computing. As most textbooks about this powerful computing paradigm date back to the 1960s and 1970s, it fills a void and forges a bridge from the early days of analog computing to future applications. The idea of analog computing is not new. In fact, this computing paradigm is nearly forgotten, although it offers a path to both high-speed and low-power computing, which are in even more demand now than they were back in the heyday of electronic analog computers.

  2. Exploring Tradeoffs in Demand-Side and Supply-Side Management of Urban Water Resources Using Agent-Based Modeling and Evolutionary Computation

    Directory of Open Access Journals (Sweden)

    Lufthansa Kanta

    2015-11-01

    Full Text Available Urban water supply systems may be managed through supply-side and demand-side strategies, which focus on water source expansion and demand reductions, respectively. Supply-side strategies bear infrastructure and energy costs, while demand-side strategies bear costs of implementation and inconvenience to consumers. To evaluate the performance of demand-side strategies, the participation and water use adaptations of consumers should be simulated. In this study, a Complex Adaptive Systems (CAS) framework is developed to simulate consumer agents that change their consumption to affect the withdrawal from the water supply system, which, in turn, influences operational policies and long-term resource planning. Agent-based models are encoded to represent consumers and a policy maker agent and are coupled with water resources system simulation models. The CAS framework is coupled with an evolutionary computation-based multi-objective methodology to explore tradeoffs in cost, inconvenience to consumers, and environmental impacts for both supply-side and demand-side strategies. Decisions are identified to specify storage levels in a reservoir that trigger: (1) increases in the volume of water pumped through inter-basin transfers from an external reservoir; and (2) drought stages, which restrict the volume of water that is allowed for residential outdoor uses. The proposed methodology is demonstrated for the Arlington, Texas, water supply system to identify non-dominated strategies for an historic drought decade. Results demonstrate that pumping costs associated with maximizing environmental reliability exceed pumping costs associated with minimizing restrictions on consumer water use.
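
    As an illustration only (the thresholds and transfer volumes below are hypothetical, not values from the study), the storage-triggered decision rule described above can be sketched as:

        # Hypothetical sketch of a reservoir-storage trigger rule: lower storage levels
        # trigger larger inter-basin transfers and stricter outdoor-use drought stages.
        def operating_policy(storage_fraction: float) -> dict:
            """Map current reservoir storage (fraction of capacity, 0..1) to decisions."""
            if storage_fraction > 0.75:
                return {"transfer_mgd": 0.0, "drought_stage": 0}   # normal operations
            elif storage_fraction > 0.50:
                return {"transfer_mgd": 10.0, "drought_stage": 1}  # voluntary restrictions
            elif storage_fraction > 0.30:
                return {"transfer_mgd": 25.0, "drought_stage": 2}  # twice-weekly watering
            else:
                return {"transfer_mgd": 40.0, "drought_stage": 3}  # outdoor watering ban

        # A multi-objective search would tune these thresholds and transfer volumes.
        print(operating_policy(0.42))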

  3. Software Applications on the Peregrine System | High-Performance Computing

    Science.gov (United States)

    Catalog of software applications available on the Peregrine system, including the General Algebraic Modeling System (GAMS), a high-level modeling system for mathematical programming (statistics and analysis); the Gurobi Optimizer, a solver for mathematical programming (statistics and analysis); LAMMPS, a chemistry and materials code; a chemistry package for computing reactivities and vibrational, electronic and NMR spectra; and the R statistical computing environment (statistics and analysis).

  4. The comparison of high and standard definition computed ...

    African Journals Online (AJOL)

    The comparison of high and standard definition computed tomography techniques regarding coronary artery imaging. A Aykut, D Bumin, Y Omer, K Mustafa, C Meltem, C Orhan, U Nisa, O Hikmet, D Hakan, K Mert ...

  5. Enabling High-Performance Computing as a Service

    KAUST Repository

    AbdelBaky, Moustafa; Parashar, Manish; Kim, Hyunjoo; Jordan, Kirk E.; Sachdeva, Vipin; Sexton, James; Jamjoom, Hani; Shae, Zon-Yin; Pencheva, Gergina; Tavakoli, Reza; Wheeler, Mary F.

    2012-01-01

    With the right software infrastructure, clouds can provide scientists with as-a-service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a

  6. High contrast computed tomography with synchrotron radiation

    Science.gov (United States)

    Itai, Yuji; Takeda, Tohoru; Akatsuka, Takao; Maeda, Tomokazu; Hyodo, Kazuyuki; Uchida, Akira; Yuasa, Tetsuya; Kazama, Masahiro; Wu, Jin; Ando, Masami

    1995-02-01

    This article describes a new monochromatic x-ray CT system using synchrotron radiation with applications in biomedical diagnosis which is currently under development. The system is designed to provide clear images and to detect contrast materials at low concentration for the quantitative functional evaluation of organs in correspondence with their anatomical structures. In this system, with x-ray energy changing from 30 to 52 keV, images can be obtained to detect various contrast materials (iodine, barium, and gadolinium), and K-edge energy subtraction is applied. Herein, the features of the new system designed to enhance the advantages of SR are reported. With the introduction of a double-crystal monochromator, the high-order x-ray contamination is eliminated. The newly designed CCD detector with a wide dynamic range of 60 000:1 has a spatial resolution of 200 μm. The resulting image quality, which is expected to show improved contrast and spatial resolution, is currently under investigation.

  7. Quantum Accelerators for High-performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system to manage these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  8. Integration of highly probabilistic sources into optical quantum architectures: perpetual quantum computation

    International Nuclear Information System (INIS)

    Devitt, Simon J; Stephens, Ashley M; Munro, William J; Nemoto, Kae

    2011-01-01

    In this paper, we introduce a design for an optical topological cluster state computer constructed exclusively from a single quantum component. Unlike previous efforts we eliminate the need for on demand, high fidelity photon sources and detectors and replace them with the same device utilized to create photon/photon entanglement. This introduces highly probabilistic elements into the optical architecture while maintaining complete specificity of the structure and operation for a large-scale computer. Photons in this system are continually recycled back into the preparation network, allowing for an arbitrarily deep three-dimensional cluster to be prepared using a comparatively small number of photonic qubits and consequently the elimination of high-frequency, deterministic photon sources.

  9. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  10. After Installation: Ubiquitous Computing and High School Science in Three Experienced, High-Technology Schools

    Science.gov (United States)

    Drayton, Brian; Falk, Joni K.; Stroud, Rena; Hobbs, Kathryn; Hammerman, James

    2010-01-01

    There are few studies of the impact of ubiquitous computing on high school science, and the majority of studies of ubiquitous computing report only on the early stages of implementation. The present study presents data on 3 high schools with carefully elaborated ubiquitous computing systems that have gone through at least one "obsolescence cycle"…

  11. High burnup models in computer code fair

    Energy Technology Data Exchange (ETDEWEB)

    Dutta, B K; Swami Prasad, P; Kushwaha, H S; Mahajan, S C; Kakodar, A [Bhabha Atomic Research Centre, Bombay (India)

    1997-08-01

    An advanced fuel analysis code FAIR has been developed for analyzing the behavior of fuel rods of water cooled reactors under severe power transients and high burnups. The code is capable of analyzing fuel pins of both collapsible clad, as in PHWR and free standing clad as in LWR. The main emphasis in the development of this code is on evaluating the fuel performance at extended burnups and modelling of the fuel rods for advanced fuel cycles. For this purpose, a number of suitable models have been incorporated in FAIR. For modelling the fission gas release, three different models are implemented, namely Physically based mechanistic model, the standard ANS 5.4 model and the Halden model. Similarly the pellet thermal conductivity can be modelled by the MATPRO equation, the SIMFUEL relation or the Halden equation. The flux distribution across the pellet is modelled by using the model RADAR. For modelling pellet clad interaction (PCMI)/ stress corrosion cracking (SCC) induced failure of sheath, necessary routines are provided in FAIR. The validation of the code FAIR is based on the analysis of fuel rods of EPRI project ``Light water reactor fuel rod modelling code evaluation`` and also the analytical simulation of threshold power ramp criteria of fuel rods of pressurized heavy water reactors. In the present work, a study is carried out by analysing three CRP-FUMEX rods to show the effect of various combinations of fission gas release models and pellet conductivity models, on the fuel analysis parameters. The satisfactory performance of FAIR may be concluded through these case studies. (author). 12 refs, 5 figs.

  12. High burnup models in computer code fair

    International Nuclear Information System (INIS)

    Dutta, B.K.; Swami Prasad, P.; Kushwaha, H.S.; Mahajan, S.C.; Kakodar, A.

    1997-01-01

    An advanced fuel analysis code FAIR has been developed for analyzing the behavior of fuel rods of water cooled reactors under severe power transients and high burnups. The code is capable of analyzing fuel pins of both collapsible clad, as in PHWR and free standing clad as in LWR. The main emphasis in the development of this code is on evaluating the fuel performance at extended burnups and modelling of the fuel rods for advanced fuel cycles. For this purpose, a number of suitable models have been incorporated in FAIR. For modelling the fission gas release, three different models are implemented, namely Physically based mechanistic model, the standard ANS 5.4 model and the Halden model. Similarly the pellet thermal conductivity can be modelled by the MATPRO equation, the SIMFUEL relation or the Halden equation. The flux distribution across the pellet is modelled by using the model RADAR. For modelling pellet clad interaction (PCMI)/ stress corrosion cracking (SCC) induced failure of sheath, necessary routines are provided in FAIR. The validation of the code FAIR is based on the analysis of fuel rods of EPRI project ''Light water reactor fuel rod modelling code evaluation'' and also the analytical simulation of threshold power ramp criteria of fuel rods of pressurized heavy water reactors. In the present work, a study is carried out by analysing three CRP-FUMEX rods to show the effect of various combinations of fission gas release models and pellet conductivity models, on the fuel analysis parameters. The satisfactory performance of FAIR may be concluded through these case studies. (author). 12 refs, 5 figs

  13. High Performance Computing Modernization Program Kerberos Throughput Test Report

    Science.gov (United States)

    2017-10-26

    Naval Research Laboratory, Washington, DC 20375-5320. Report NRL/MR/5524--17-9751: High Performance Computing Modernization Program Kerberos Throughput Test Report, by Daniel G. Gdula* and

  14. From the web to the grid and beyond. Computing paradigms driven by high energy physics

    International Nuclear Information System (INIS)

    Brun, Rene; Carminati, Federico; Galli Carminati, Giuliana

    2012-01-01

    Born after World War II, large-scale experimental high-energy physics (HEP) has found itself limited ever since by available accelerator, detector and computing technologies. Accordingly, HEP has made significant contributions to the development of these fields, more often than not driving their innovations. The invention of the World Wide Web at CERN is merely the best-known example out of many. This book is the first comprehensive account to trace the history of this pioneering spirit in the field of computing technologies. It covers everything up to and including the present-day handling of the huge demands imposed upon grid and distributed computing by full-scale LHC operations - operations which have for years involved many thousands of collaborating members worldwide and accordingly provide the original and natural testbed for grid computing concepts. This book takes the reader on a guided tour encompassing all relevant topics, including programming languages, software engineering, large databases, the Web, and grid- and cloud computing. The important issue of intellectual property regulations for distributed software engineering and computing is also addressed. Aptly, the book closes with a visionary chapter of what may lie ahead. Approachable and requiring only basic understanding of physics and computer sciences, this book is intended for both education and research. (orig.)

  15. From the web to the grid and beyond. Computing paradigms driven by high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Brun, Rene; Carminati, Federico [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Galli Carminati, Giuliana (eds.) [Hopitaux Universitaire de Geneve, Chene-Bourg (Switzerland). Unite de la Psychiatrie du Developpement Mental

    2012-07-01

    Born after World War II, large-scale experimental high-energy physics (HEP) has found itself limited ever since by available accelerator, detector and computing technologies. Accordingly, HEP has made significant contributions to the development of these fields, more often than not driving their innovations. The invention of the World Wide Web at CERN is merely the best-known example out of many. This book is the first comprehensive account to trace the history of this pioneering spirit in the field of computing technologies. It covers everything up to and including the present-day handling of the huge demands imposed upon grid and distributed computing by full-scale LHC operations - operations which have for years involved many thousands of collaborating members worldwide and accordingly provide the original and natural testbed for grid computing concepts. This book takes the reader on a guided tour encompassing all relevant topics, including programming languages, software engineering, large databases, the Web, and grid- and cloud computing. The important issue of intellectual property regulations for distributed software engineering and computing is also addressed. Aptly, the book closes with a visionary chapter of what may lie ahead. Approachable and requiring only basic understanding of physics and computer sciences, this book is intended for both education and research. (orig.)

  16. Energy efficiency models and optimization algorithm to enhance on-demand resource delivery in a cloud computing environment / Thusoyaone Joseph Moemi

    OpenAIRE

    Moemi, Thusoyaone Joseph

    2013-01-01

    Online hosted services are what is referred to as cloud computing. Access to these services is via the internet. It shifts the traditional IT resource ownership model to renting. Thus, the high cost of infrastructure cannot prevent the less privileged from experiencing the benefits that this new paradigm brings. Therefore, cloud computing provides flexible services to cloud users in the form of software, platform and infrastructure as services. The goal behind cloud computing is to provi...

  17. Metacognitive Load--Useful, or Extraneous Concept? Metacognitive and Self-Regulatory Demands in Computer-Based Learning

    Science.gov (United States)

    Schwonke, Rolf

    2015-01-01

    Instructional design theories such as the "cognitive load theory" (CLT) or the "cognitive theory of multimedia learning" (CTML) explain learning difficulties in (computer-based) learning usually as a result of design deficiencies that hinder effective schema construction. However, learners often struggle even in well-designed…

  18. Parallel computing for event reconstruction in high-energy physics

    International Nuclear Information System (INIS)

    Wolbers, S.

    1993-01-01

    Parallel computing has been recognized as a solution to large computing problems. In High Energy Physics offline event reconstruction of detector data is a very large computing problem that has been solved with parallel computing techniques. A review of the parallel programming package CPS (Cooperative Processes Software) developed and used at Fermilab for offline reconstruction of Terabytes of data requiring the delivery of hundreds of Vax-Years per experiment is given. The Fermilab UNIX farms, consisting of 180 Silicon Graphics workstations and 144 IBM RS6000 workstations, are used to provide the computing power for the experiments. Fermilab has had a long history of providing production parallel computing starting with the ACP (Advanced Computer Project) Farms in 1986. The Fermilab UNIX Farms have been in production for over 2 years with 24 hour/day service to experimental user groups. Additional tools for management, control and monitoring these large systems will be described. Possible future directions for parallel computing in High Energy Physics will be given
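
    The farming pattern described above (independent events dispatched to worker processes) can be illustrated with a generic sketch; this is not the CPS package itself, and the reconstruction function is a stand-in:

        # Generic illustration of event "farming": independent events are distributed
        # across worker processes and the results are collected centrally.
        from multiprocessing import Pool

        def reconstruct(event: dict) -> dict:
            # Stand-in for a real reconstruction algorithm.
            return {"id": event["id"], "n_tracks": len(event["hits"]) // 3}

        if __name__ == "__main__":
            events = [{"id": i, "hits": list(range(3 * i))} for i in range(100)]
            with Pool(processes=8) as pool:
                results = pool.map(reconstruct, events)
            print(results[:3])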

  19. A hybrid optical switch architecture to integrate IP into optical networks to provide flexible and intelligent bandwidth on demand for cloud computing

    Science.gov (United States)

    Yang, Wei; Hall, Trevor J.

    2013-12-01

    The Internet is entering an era of cloud computing to provide more cost effective, eco-friendly and reliable services to consumer and business users. As a consequence, the nature of the Internet traffic has been fundamentally transformed from a pure packet-based pattern to today's predominantly flow-based pattern. Cloud computing has also brought about an unprecedented growth in the Internet traffic. In this paper, a hybrid optical switch architecture is presented to deal with the flow-based Internet traffic, aiming to offer flexible and intelligent bandwidth on demand to improve fiber capacity utilization. The hybrid optical switch is capable of integrating IP into optical networks for cloud-based traffic with predictable performance, for which the delay performance of the electronic module in the hybrid optical switch architecture is evaluated through simulation.

  20. CRITICAL ISSUES IN HIGH END COMPUTING - FINAL REPORT

    Energy Technology Data Exchange (ETDEWEB)

    Corones, James [Krell Institute

    2013-09-23

    High-End computing (HEC) has been a driver for advances in science and engineering for the past four decades. Increasingly HEC has become a significant element in the national security, economic vitality, and competitiveness of the United States. Advances in HEC provide results that cut across traditional disciplinary and organizational boundaries. This program provides opportunities to share information about HEC systems and computational techniques across multiple disciplines and organizations through conferences and exhibitions of HEC advances held in Washington DC so that mission agency staff, scientists, and industry can come together with White House, Congressional and Legislative staff in an environment conducive to the sharing of technical information, accomplishments, goals, and plans. A common thread across this series of conferences is the understanding of computational science and applied mathematics techniques across a diverse set of application areas of interest to the Nation. The specific objectives of this program are: Program Objective 1. To provide opportunities to share information about advances in high-end computing systems and computational techniques between mission critical agencies, agency laboratories, academics, and industry. Program Objective 2. To gather pertinent data, address specific topics of wide interest to mission critical agencies. Program Objective 3. To promote a continuing discussion of critical issues in high-end computing. Program Objective 4. To provide a venue where a multidisciplinary scientific audience can discuss the difficulties applying computational science techniques to specific problems and can specify future research that, if successful, will eliminate these problems.

  1. Case study of supply induced demand: the case of provision of imaging scans (computed tomography and magnetic resonance) at Unimed-Manaus

    Energy Technology Data Exchange (ETDEWEB)

    Andrade, Edson de Oliveira; Andrade, Elizabeth Nogueira de, E-mail: dredsonandrade@gmail.co [Universidade Federal do Amazonas (UFAM), Manaus, AM (Brazil); Gallo, Jose Hiran [Universidade do Porto (U.Porto) (Portugal)

    2011-03-15

    Objective: to present the experience of a health plan operator (Unimed-Manaus) in Manaus, Amazonas, Brazil, with the accreditation of imaging services and the demand induced by the supply of new services (Roemer's Law). Methods: this is a retrospective work studying a time series covering the period from January 1998 to June 2004, in which the computed tomography and the magnetic resonance imaging services were implemented as part of the services offered by that health plan operator. Statistical analysis consisted of a descriptive and an inferential part, with the latter using a mean parametric test (Student T-test and ANOVA) and the Pearson correlation test. A 5% alpha and a 95% confidence interval were adopted. Results: at Unimed-Manaus, the supply of new imaging services, by itself, was identified as capable of generating an increased service demand, thus characterizing the phenomenon described by Roemer. Conclusion: the results underscore the need to be aware of the fact that the supply of new health services could bring about their increased use without a real demand. (author)

  2. Case study of supply induced demand: the case of provision of imaging scans (computed tomography and magnetic resonance) at Unimed-Manaus

    International Nuclear Information System (INIS)

    Andrade, Edson de Oliveira; Andrade, Elizabeth Nogueira de; Gallo, Jose Hiran

    2011-01-01

    Objective: to present the experience of a health plan operator (Unimed-Manaus) in Manaus, Amazonas, Brazil, with the accreditation of imaging services and the demand induced by the supply of new services (Roemer's Law). Methods: this is a retrospective work studying a time series covering the period from January 1998 to June 2004, in which the computed tomography and the magnetic resonance imaging services were implemented as part of the services offered by that health plan operator. Statistical analysis consisted of a descriptive and an inferential part, with the latter using a mean parametric test (Student T-test and ANOVA) and the Pearson correlation test. A 5% alpha and a 95% confidence interval were adopted. Results: at Unimed-Manaus, the supply of new imaging services, by itself, was identified as capable of generating an increased service demand, thus characterizing the phenomenon described by Roemer. Conclusion: the results underscore the need to be aware of the fact that the supply of new health services could bring about their increased use without a real demand. (author)

  3. Case study of supply induced demand: the case of provision of imaging scans (computed tomography and magnetic resonance) at Unimed-Manaus.

    Science.gov (United States)

    Andrade, Edson de Oliveira; Andrade, Elizabeth Nogueira de; Gallo, José Hiran

    2011-01-01

    To present the experience of a health plan operator (Unimed-Manaus) in Manaus, Amazonas, Brazil, with the accreditation of imaging services and the demand induced by the supply of new services (Roemer's Law). This is a retrospective work studying a time series covering the period from January 1998 to June 2004, in which the computed tomography and the magnetic resonance imaging services were implemented as part of the services offered by that health plan operator. Statistical analysis consisted of a descriptive and an inferential part, with the latter using a mean parametric test (Student T-test and ANOVA) and the Pearson correlation test. A 5% alpha and a 95% confidence interval were adopted. At Unimed-Manaus, the supply of new imaging services, by itself, was identified as capable of generating an increased service demand, thus characterizing the phenomenon described by Roemer. The results underscore the need to be aware of the fact that the supply of new health services could bring about their increased use without a real demand.
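
    As a hedged illustration of the statistical analysis named in the records above (a mean-comparison test and a Pearson correlation at a 5% alpha), using made-up monthly exam counts rather than the Unimed-Manaus data:

        # Illustrative only: synthetic monthly exam counts before/after accrediting an
        # in-network imaging service, analysed with a t-test and Pearson correlation.
        from scipy import stats

        before = [112, 120, 118, 125, 130, 128, 135, 133, 140, 138, 142, 145]
        after = [180, 185, 190, 188, 195, 200, 205, 210, 208, 215, 220, 225]

        t_stat, p_value = stats.ttest_ind(before, after)    # difference in means
        months = list(range(len(before) + len(after)))
        r, p_corr = stats.pearsonr(months, before + after)  # trend over time

        alpha = 0.05
        print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value < alpha}")
        print(f"Pearson r = {r:.2f}, p = {p_corr:.4f}")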

  4. Functional work breaks in a high-demanding work environment: an experimental field study.

    Science.gov (United States)

    Scholz, André; Ghadiri, Argang; Singh, Usha; Wendsche, Johannes; Peters, Theo; Schneider, Stefan

    2018-02-01

    Work breaks are known to have positive effects on employees' health, performance and safety. Using a sample of twelve employees working in a stressful and cognitively demanding working environment, this experimental field study examined how different types of work breaks (boxing, deep relaxation and usual breaks) affect participants' mood, cognitive performance and neurophysiological state compared to a control condition without any break. In a repeated measures experimental design, cognitive performance was assessed using an auditory oddball test and a Movement Detection Test. Brain cortical activity was recorded using electroencephalography. Individual's mood was analysed using a profile of mood state. Although neurophysiological data showed improved relaxation of cortical state after boxing (vs. 'no break' and 'deep relaxation'), neither performance nor mood assessment showed similar results. It remains questionable whether there is a universal work break type that has beneficial effects for all individuals. Practitioner Summary: Research on work breaks and their positive effects on employees' health and performance often disregards break activities. This experimental field study in a stressful working environment investigated the effect of different work break activities. A universal work break type that is beneficial for this workplace could not be identified.

  5. How do Air Traffic Controllers Use Automation and Tools Differently During High Demand Situations?

    Science.gov (United States)

    Kraut, Joshua M.; Mercer, Joey; Morey, Susan; Homola, Jeffrey; Gomez, Ashley; Prevot, Thomas

    2013-01-01

    In a human-in-the-loop simulation, two air traffic controllers managed identical airspace while burdened with higher than average workload, and while using advanced tools and automation designed to assist with scheduling aircraft on multiple arrival flows to a single meter fix. This paper compares the strategies employed by each controller, and investigates how the controllers' strategies change while managing their airspace under more normal workload conditions and a higher workload condition. Each controller engaged in different methods of maneuvering aircraft to arrive on schedule, and adapted their strategies to cope with the increased workload in different ways. Based on the conclusions three suggestions are made: that quickly providing air traffic controllers with recommendations and information to assist with maneuvering and scheduling aircraft when burdened with increased workload will improve the air traffic controller's effectiveness, that the tools should adapt to the strategy currently employed by a controller, and that training should emphasize which traffic management strategies are most effective given specific airspace demands.

  6. 3rd International Conference on High Performance Scientific Computing

    CERN Document Server

    Kostina, Ekaterina; Phu, Hoang; Rannacher, Rolf

    2008-01-01

    This proceedings volume contains a selection of papers presented at the Third International Conference on High Performance Scientific Computing held at the Hanoi Institute of Mathematics, Vietnamese Academy of Science and Technology (VAST), March 6-10, 2006. The conference has been organized by the Hanoi Institute of Mathematics, Interdisciplinary Center for Scientific Computing (IWR), Heidelberg, and its International PhD Program ``Complex Processes: Modeling, Simulation and Optimization'', and Ho Chi Minh City University of Technology. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and applications in practice. Subjects covered are mathematical modelling, numerical simulation, methods for optimization and control, parallel computing, software development, applications of scientific computing in physics, chemistry, biology and mechanics, environmental and hydrology problems, transport, logistics and site loca...

  7. 5th International Conference on High Performance Scientific Computing

    CERN Document Server

    Hoang, Xuan; Rannacher, Rolf; Schlöder, Johannes

    2014-01-01

    This proceedings volume gathers a selection of papers presented at the Fifth International Conference on High Performance Scientific Computing, which took place in Hanoi on March 5-9, 2012. The conference was organized by the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) of Heidelberg University, Ho Chi Minh City University of Technology, and the Vietnam Institute for Advanced Study in Mathematics. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and practical applications. Subjects covered include mathematical modeling; numerical simulation; methods for optimization and control; parallel computing; software development; and applications of scientific computing in physics, mechanics and biomechanics, material science, hydrology, chemistry, biology, biotechnology, medicine, sports, psychology, transport, logistics, com...

  8. 6th International Conference on High Performance Scientific Computing

    CERN Document Server

    Phu, Hoang; Rannacher, Rolf; Schlöder, Johannes

    2017-01-01

    This proceedings volume highlights a selection of papers presented at the Sixth International Conference on High Performance Scientific Computing, which took place in Hanoi, Vietnam on March 16-20, 2015. The conference was jointly organized by the Heidelberg Institute of Theoretical Studies (HITS), the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) at Heidelberg University, and the Vietnam Institute for Advanced Study in Mathematics, Ministry of Education. The contributions cover a broad, interdisciplinary spectrum of scientific computing and showcase recent advances in theory, methods, and practical applications. Subjects covered include numerical simulation, methods for optimization and control, parallel computing, and software development, as well as the applications of scientific computing in physics, mechanics, biomechanics and robotics, material science, hydrology, biotechnology, medicine, transport, scheduling, and in...

  9. Money Demand in Latvia

    OpenAIRE

    Ivars Tillers

    2004-01-01

    The econometric analysis of the demand for broad money in Latvia suggests a stable relationship of money demand. The analysis of parameter exogeneity indicates that the equilibrium adjustment is driven solely by the changes in the amount of money. The demand for money in Latvia is characterised by relatively high income elasticity typical for the economy in a monetary expansion phase. Due to stability, close fit of the money demand function and rapid equilibrium adjustment, broad money aggreg...
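
    A hedged sketch of the kind of long-run money demand relation such studies estimate (the exact specification and variables in the paper may differ):

        \ln\!\left(\frac{M_t}{P_t}\right) = \beta_0 + \beta_1 \ln Y_t - \beta_2 i_t + \varepsilon_t

    where relatively high income elasticity corresponds to an estimated \beta_1 noticeably above one.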

  10. Micromagnetics on high-performance workstation and mobile computational platforms

    Science.gov (United States)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms is presented, including multi-core Intel central processing unit, Nvidia desktop graphics processing units, and Nvidia Jetson TK1 Platform. FastMag finite element method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of the embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built as low-power computing clusters.
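
    For context (standard micromagnetics rather than anything specific to FastMag's internals), the dynamics such finite-element simulators integrate is the Landau-Lifshitz-Gilbert equation:

        \frac{\partial \mathbf{M}}{\partial t} = -\gamma\,\mathbf{M} \times \mathbf{H}_{\mathrm{eff}} + \frac{\alpha}{M_s}\,\mathbf{M} \times \frac{\partial \mathbf{M}}{\partial t}

    with \gamma the gyromagnetic ratio, \alpha the Gilbert damping constant, M_s the saturation magnetization, and \mathbf{H}_{\mathrm{eff}} the effective field.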

  11. New Challenges for Computing in High Energy Physics

    International Nuclear Information System (INIS)

    Santoro, Alberto

    2003-01-01

    In view of the new scientific programs established for the LHC (Large Hadron Collider) era, the way to face the technological challenges in computing was to develop a new concept of GRID computing. We show some examples and, in particular, a proposal for high energy physicists in countries like Brazil. Due to the large amount of data and the need for close collaboration, it will be impossible to work in research centers and universities very far from Fermilab or CERN unless a GRID architecture is built. An important effort is being made by the international community to update its computing infrastructure and networks

  12. OMNET - high speed data communications for PDP-11 computers

    International Nuclear Information System (INIS)

    Parkman, C.F.; Lee, J.G.

    1979-12-01

    Omnet is a high speed data communications network designed at CERN for PDP-11 computers. It has grown from a link multiplexor system built for a CII 10070 computer into a full multi-point network, to which some fifty computers are now connected. It provides communications facilities for several large experimental installations as well as many smaller systems and has connections to all parts of the CERN site. The transmission protocol is discussed and brief details are given of the hardware and software used in its implementation. Also described is the gateway interface to the CERN packet switching network, 'Cernet'. (orig.)

  13. Enabling high performance computational science through combinatorial algorithms

    International Nuclear Information System (INIS)

    Boman, Erik G; Bozdag, Doruk; Catalyurek, Umit V; Devine, Karen D; Gebremedhin, Assefaw H; Hovland, Paul D; Pothen, Alex; Strout, Michelle Mills

    2007-01-01

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation

  14. Enabling high performance computational science through combinatorial algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Boman, Erik G [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Bozdag, Doruk [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Catalyurek, Umit V [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Devine, Karen D [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Gebremedhin, Assefaw H [Computer Science and Center for Computational Science, Old Dominion University (United States); Hovland, Paul D [Mathematics and Computer Science Division, Argonne National Laboratory (United States); Pothen, Alex [Computer Science and Center for Computational Science, Old Dominion University (United States); Strout, Michelle Mills [Computer Science, Colorado State University (United States)

    2007-07-15

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation.
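
    As a concrete illustration of the combinatorial kernels mentioned above, the snippet below shows the sequential greedy distance-1 graph coloring that parallel coloring algorithms are usually measured against. It is a minimal Python sketch of the textbook baseline, not the CSCAPES parallel algorithm itself.

      # Greedy distance-1 coloring: each vertex takes the smallest color
      # not already used by one of its colored neighbours.
      def greedy_coloring(adjacency):
          color = {}
          for v in adjacency:
              taken = {color[u] for u in adjacency[v] if u in color}
              c = 0
              while c in taken:
                  c += 1
              color[v] = c
          return color

      # A 4-cycle needs only two colors.
      print(greedy_coloring({0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}))
      # -> {0: 0, 1: 1, 2: 0, 3: 1}

    In automatic differentiation and preconditioning, such a coloring groups matrix columns that can be evaluated together, which is one reason coloring appears among the enabling technologies listed above.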

  15. Experimental high energy physics and modern computer architectures

    International Nuclear Information System (INIS)

    Hoek, J.

    1988-06-01

    The paper examines how experimental High Energy Physics can use modern computer architectures efficiently. In this connection parallel and vector architectures are investigated, and the types available at the moment for general use are discussed. A separate section briefly describes some architectures that are either a combination of both, or exemplify other architectures. In an appendix some directions in which computing seems to be developing in the USA are mentioned. (author)

  16. Proceedings from the conference on high speed computing: High speed computing and national security

    Energy Technology Data Exchange (ETDEWEB)

    Hirons, K.P.; Vigil, M.; Carlson, R. [comps.]

    1997-07-01

    This meeting covered the following topics: technologies/national needs/policies: past, present and future; information warfare; crisis management/massive data systems; risk assessment/vulnerabilities; Internet law/privacy and rights of society; challenges to effective ASCI programmatic use of 100 TFLOPs systems; and new computing technologies.

  17. Evaluation of resource allocation and supply-demand balance in clinical practice with high-cost technologies.

    Science.gov (United States)

    Otsubo, Tetsuya; Imanaka, Yuichi; Lee, Jason; Hayashida, Kenshi

    2011-12-01

    Japan has one of the highest numbers of high-cost medical devices installed relative to its population. While evaluations of the distribution of these devices traditionally involve simple population-based assessments, an indicator that includes the demand for these devices would more accurately reflect the situation. The purpose of this study was to develop an indicator of the supply-demand balance of such devices, using examples of magnetic resonance imaging scanners (MRI) and extracorporeal shockwave lithotripters (ESWL), and to investigate the relationship between this indicator, personnel distribution statuses and operating statuses at the prefectural level. Using data from nation-wide surveys and claims data from 16 hospitals, we developed an indicator based on the ratio of the supplied number of device units to the number of device units in demand for MRI and ESWL. The latter value was based on patient volume and utilization proportion. Correlation analyses were conducted between the supply-demand balances of these devices, personnel distribution and operating statuses. Comparisons between our indicator and conventional population-based indicators revealed that 15% and 30% of prefectures were at risk of underestimating the availability of MRI and ESWL, respectively. The numbers of specialist personnel/device units showed significant, negative correlations with our indicators for both devices. Utilization-based analyses of health care resource placement and utilization status provide a more accurate indication than simple population-based assessments, and can assist decision makers in reviewing gaps between health policy and management. Such an indicator therefore has the potential to be a tool in helping to improve the efficiency of the allocation and placement of such devices. © 2010 Blackwell Publishing Ltd.
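
    To make the indicator concrete, the sketch below computes a supply-demand ratio of the kind the abstract describes. The exact functional form, and the per-unit annual examination capacity used to turn patient volume into "device units in demand", are assumptions made here for illustration; they are not taken from the paper.

      # Minimal Python sketch, assuming demand is converted into "units in
      # demand" via an assumed annual examination capacity per device.
      def supply_demand_ratio(installed_units, patient_volume,
                              utilization_proportion, exams_per_unit_per_year):
          demanded_units = (patient_volume * utilization_proportion) / exams_per_unit_per_year
          return installed_units / demanded_units

      # Hypothetical prefecture: 12 MRI scanners, 90,000 candidate patients,
      # 40% of whom are scanned, one scanner handling about 4,000 exams a year.
      print(round(supply_demand_ratio(12, 90_000, 0.4, 4_000), 2))  # -> 1.33

    A ratio above 1 suggests supply exceeding demand; the paper's point is that such a utilization-based ratio and a simple per-capita count can rank prefectures differently.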

  18. High-performance computing in accelerating structure design and analysis

    International Nuclear Information System (INIS)

    Li Zenghai; Folwell, Nathan; Ge Lixin; Guetz, Adam; Ivanov, Valentin; Kowalski, Marc; Lee, Lie-Quan; Ng, Cho-Kuen; Schussman, Greg; Stingelin, Lukas; Uplenchwar, Ravindra; Wolf, Michael; Xiao, Liling; Ko, Kwok

    2006-01-01

    Future high-energy accelerators such as the Next Linear Collider (NLC) will accelerate multi-bunch beams of high current and low emittance to obtain high luminosity, which put stringent requirements on the accelerating structures for efficiency and beam stability. While numerical modeling has been quite standard in accelerator R and D, designing the NLC accelerating structure required a new simulation capability because of the geometric complexity and level of accuracy involved. Under the US DOE Advanced Computing initiatives (first the Grand Challenge and now SciDAC), SLAC has developed a suite of electromagnetic codes based on unstructured grids and utilizing high-performance computing to provide an advanced tool for modeling structures at accuracies and scales previously not possible. This paper will discuss the code development and computational science research (e.g. domain decomposition, scalable eigensolvers, adaptive mesh refinement) that have enabled the large-scale simulations needed for meeting the computational challenges posed by the NLC as well as projects such as the PEP-II and RIA. Numerical results will be presented to show how high-performance computing has made a qualitative improvement in accelerator structure modeling for these accelerators, either at the component level (single cell optimization), or on the scale of an entire structure (beam heating and long-range wakefields)

  19. Demand Uncertainty

    DEFF Research Database (Denmark)

    Nguyen, Daniel Xuyen

    This paper presents a model of trade that explains why firms wait to export and why many exporters fail. Firms face uncertain demands that are only realized after the firm enters the destination. The model retools the timing of uncertainty resolution found in productivity heterogeneity models. This retooling addresses several shortcomings. First, the imperfect correlation of demands reconciles the sales variation observed in and across destinations. Second, since demands for the firm's output are correlated across destinations, a firm can use previously realized demands to forecast unknown demands in untested destinations. The option to forecast demands causes firms to delay exporting in order to gather more information about foreign demand. Third, since uncertainty is resolved after entry, many firms enter a destination and then exit after learning that they cannot profit. This prediction reconciles...

  20. Design of Demand Driven Return Supply Chain for High-Tech Products

    NARCIS (Netherlands)

    Ashayeri, J.; Tuzkaya, G.

    2010-01-01

    Many high-tech supply chains operate in a context of high process and market uncertainties due to shorter product life cycles. When introducing a new product, a company must manage the cost of supply, including the cost of returns over its short life cycle. The returns distribution looks like a

  1. Analysis of water supply and demand in high mountain cities of Bolivia under growing population and changing climate

    Science.gov (United States)

    Kinouchi, T.; Mendoza, J.; Asaoka, Y.; Fuchs, P.

    2017-12-01

    Water resources in La Paz and El Alto, high mountain capital cities of Bolivia, strongly depend on the surface and subsurface runoff from partially glacierized catchments located in the Cordillera Real, Andes. Due to growing population and changing climate, the balance between water supply from the source catchments and demand for drinking, agriculture, industry and hydropower has become precarious in recent years as evidenced by a serious drought during the 2015-2016 El Nino event. To predict the long-term availability of water resources under changing climate, we developed a semi-distributed glacio-hydrological model that considers various runoff pathways from partially glacierized high-altitude catchments. Two GCM projections (MRI-AGCM and INGV-ECHAM4) were used for the prediction with bias corrected by reanalysis data (ERA-INTERIM) and downscaled to target areas using data monitored at several weather stations. The model was applied to three catchments from which current water resources are supplied and eight additional catchments that will be potentially effective in compensating reduced runoff from the current water resource areas. For predicting the future water demand, a cohort-component method was used for the projection of size and composition of population change, considering natural and social change (birth, death and transfer). As a result, total population is expected to increase from 1.6 million in 2012 to 2.0 million in 2036. The water demand was predicted for given unit water consumption, non-revenue water rate (NWR), and sectorial percentage of water consumption for domestic, industrial and commercial purposes. The results of hydrological simulations and the analysis of water demand indicated that water supply and demand are barely balanced in recent years, while the total runoff from current water resource areas will continue to decrease and unprecedented water shortage is likely to occur since around 2020 toward the middle of 21st century even
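
    The cohort-component projection named in the abstract can be summarized with a generic one-step sketch. The survival, fertility and migration inputs below are placeholders rather than the authors' calibrated values, and dropping the open-ended oldest age group is a simplification made here for brevity.

      # Textbook cohort-component step (Python sketch, not the paper's model):
      # age each cohort by one period, apply survival, add births and net migration.
      def cohort_component_step(pop_by_age, survival, fertility, net_migration):
          births = sum(f * p for f, p in zip(fertility, pop_by_age))
          nxt = [0.0] * len(pop_by_age)
          nxt[0] = births * survival[0] + net_migration[0]
          for a in range(1, len(pop_by_age)):
              nxt[a] = pop_by_age[a - 1] * survival[a] + net_migration[a]
          return nxt

      # Three illustrative age groups (thousands of people).
      print(cohort_component_step([500.0, 400.0, 300.0],
                                  [0.99, 0.98, 0.95],
                                  [0.0, 0.08, 0.0],
                                  [5.0, 3.0, 1.0]))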

  2. Demand for alternative-fuel vehicles when registration taxes are high

    DEFF Research Database (Denmark)

    Mabit, Stefan Lindhard; Fosgerau, Mogens

    2011-01-01

    This paper investigates the potential futures for alternative-fuel vehicles in Denmark, where the vehicle registration tax is very high and large tax rebates can be given. A large stated choice dataset has been collected concerning vehicle choice among conventional, hydrogen, hybrid, bio-diesel, and electric vehicles. We estimate a mixed logit model that improves on previous contributions by controlling for reference dependence and allowing for correlation of random effects. Both improvements are found to be important. An application of the model shows that alternative-fuel vehicles with present technology could obtain fairly high market shares given tax regulations possible in the present high-tax vehicle market...
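
    For readers unfamiliar with the estimator, the standard mixed logit choice probability underlying such an analysis can be written as below; the paper's actual specification further adds reference dependence and correlated random effects, which are not shown in this generic form.

      P_{nj} \;=\; \int \frac{\exp\left(\beta^{\top} x_{nj}\right)}{\sum_{k} \exp\left(\beta^{\top} x_{nk}\right)}\, f(\beta \mid \theta)\, \mathrm{d}\beta

    Here x_{nj} collects the observed attributes of vehicle alternative j for respondent n (fuel type, price, registration tax, and so on), and f(beta | theta) is the mixing distribution of the random coefficients whose parameters theta are estimated.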

  3. Convergence in Food Demand and Delivery: Do Middle-Income Countries Follow High-Income Trends?

    OpenAIRE

    Regmi, Anita; Takeshima, Hiroyuki; Unnevehr, Laurian J.

    2008-01-01

    This study uses food expenditures and food-sales data from 1990 to 2004 to examine whether food-consumption patterns and food-delivery-mechanism trends are converging across 47 high- and middle-income countries. Results point to a high degree of convergence in global food systems. Middle-income countries appear to be following trends in high-income countries. Convergence is apparent in most important food-expenditure categories and in indicators of food-system modernization such as supermarke...

  4. High Performance Computing Software Applications for Space Situational Awareness

    Science.gov (United States)

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was on improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and the physically constrained iterative de-convolution (PCID) image enhancement software tool. Specifically, we have demonstrated an order of magnitude speed-up in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software application improvements that HSAI-SSA has made have had a significant impact on the warfighter and have fundamentally changed the role of high performance computing in SSA.

  5. Optical interconnection networks for high-performance computing systems

    International Nuclear Information System (INIS)

    Biberman, Aleksandr; Bergman, Keren

    2012-01-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. (review article)

  6. Estimation of the Demand for Hospital Care After a Possible High-Magnitude Earthquake in the City of Lima, Peru.

    Science.gov (United States)

    Bambarén, Celso; Uyen, Angela; Rodriguez, Miguel

    2017-02-01

    A model prepared by National Civil Defense (INDECI; Lima, Peru) estimated that a magnitude 8.0 Mw earthquake off the central coast of Peru would result in 51,019 deaths and 686,105 injured in districts of Metropolitan Lima and Callao. Using this information as a base, a study was designed to determine the characteristics of the demand for treatment in public hospitals and to estimate gaps in care in the hours immediately after such an event. A probabilistic model was designed that included the following variables: demand for hospital care; time of arrival at the hospitals; type of medical treatment; reason for hospital admission; and the need for specialized care like hemodialysis, blood transfusions, and surgical procedures. The values for these variables were obtained through a literature search of the databases of the MEDLINE medical bibliography, the Cochrane and SciELO libraries, and Google Scholar for information on earthquakes over the last 30 years of over magnitude 6.0 on the moment magnitude scale. If a high-magnitude earthquake were to occur in Lima, it was estimated that between 23,328 and 178,387 injured would go to hospitals, of which between 4,666 and 121,303 would require inpatient care, while between 18,662 and 57,084 could be treated as outpatients. It was estimated that there would be an average of 8,768 cases of crush syndrome and 54,217 cases of other health problems. Enough blood would be required for 8,761 wounded in the first 24 hours. Furthermore, it was expected that there would be a deficit of hospital beds and operating theaters due to the high demand. Sudden and violent disasters, such as earthquakes, represent significant challenges for health systems and services. This study shows the deficit of preparation and capacity to respond to a possible high-magnitude earthquake. The study also showed there are not enough resources to face mega-disasters, especially in large cities.

  7. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1988-08-01

    Fermilab's Advanced Computer Program (ACP) has been developing highly cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 MFlops (peak), 10 MByte single board computer. These are plugged into a 16-port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256-node, 5 GFlop system is under construction. 10 refs., 7 figs

  8. Analysis of stationary fuel cell dynamic ramping capabilities and ultra capacitor energy storage using high resolution demand data

    Science.gov (United States)

    Meacham, James R.; Jabbari, Faryar; Brouwer, Jacob; Mauzey, Josh L.; Samuelsen, G. Scott

    Current high temperature fuel cell (HTFC) systems used for stationary power applications (in the 200-300 kW size range) have very limited dynamic load following capability or are simply base load devices. Considering the economics of existing electric utility rate structures, there is little incentive to increase HTFC ramping capability beyond 1 kW s^-1 (0.4% s^-1). However, in order to ease concerns about grid instabilities from utility companies and increase market adoption, HTFC systems will have to increase their ramping abilities, and will likely have to incorporate electrical energy storage (EES). Because batteries have low power densities and limited lifetimes in highly cyclic applications, ultra capacitors may be the EES medium of choice. The current analyses show that, because ultra capacitors have a very low energy storage density, their integration with HTFC systems may not be feasible unless the fuel cell has a ramp rate approaching 10 kW s^-1 (4% s^-1) when using a worst-case design analysis. This requirement for fast dynamic load response characteristics can be reduced to 1 kW s^-1 by utilizing high resolution demand data to properly size ultra capacitor systems and through demand management techniques that reduce load volatility.
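
    A back-of-the-envelope sizing argument helps explain why the ramp rate matters so much for the storage. The sketch below uses a worst-case, lossless assumption of our own (a step load change bridged while the fuel cell ramps linearly), not the paper's high-resolution-data method.

      # Energy (kJ) an ultracapacitor bank must supply while a fuel cell
      # ramps linearly at ramp_kw_per_s to cover a sudden load step.
      # The shortfall decays linearly to zero over step_kw / ramp seconds,
      # so the buffered energy is step_kw**2 / (2 * ramp).
      def bridge_energy_kj(step_kw, ramp_kw_per_s):
          return step_kw ** 2 / (2.0 * ramp_kw_per_s)

      # A hypothetical 100 kW load step: ~5000 kJ of buffering at 1 kW/s
      # versus ~500 kJ at 10 kW/s, i.e. a tenfold smaller bank.
      print(bridge_energy_kj(100, 1.0), bridge_energy_kj(100, 10.0))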

  9. High-End Computing Challenges in Aerospace Design and Engineering

    Science.gov (United States)

    Bailey, F. Ronald

    2004-01-01

    High-End Computing (HEC) has had a significant impact on aerospace design and engineering and is poised to have an even greater impact in the future. In this paper we describe four aerospace design and engineering challenges: Digital Flight, Launch Simulation, Rocket Fuel System and Digital Astronaut. The paper discusses the modeling capabilities needed for each challenge and presents projections of future near- and far-term HEC computing requirements. NASA's HEC Project Columbia is described, and programming strategies necessary to achieve high real performance are presented.

  10. High performance computing and communications: FY 1997 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-12-01

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  11. Visualization and Data Analysis for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  12. The ongoing investigation of high performance parallel computing in HEP

    CERN Document Server

    Peach, Kenneth J; Böck, R K; Dobinson, Robert W; Hansroul, M; Norton, Alan Robert; Willers, Ian Malcolm; Baud, J P; Carminati, F; Gagliardi, F; McIntosh, E; Metcalf, M; Robertson, L; CERN. Geneva. Detector Research and Development Committee

    1993-01-01

    Past and current exploitation of parallel computing in High Energy Physics is summarized and a list of R & D projects in this area is presented. The applicability of new parallel hardware and software to physics problems is investigated, in the light of the requirements for computing power of LHC experiments and the current trends in the computer industry. Four main themes are discussed (possibilities for a finer grain of parallelism; fine-grain communication mechanism; usable parallel programming environment; different programming models and architectures, using standard commercial products). Parallel computing technology is potentially of interest for offline and vital for real time applications in LHC. A substantial investment in applications development and evaluation of state of the art hardware and software products is needed. A solid development environment is required at an early stage, before mainline LHC program development begins.

  13. High-order hydrodynamic algorithms for exascale computing

    Energy Technology Data Exchange (ETDEWEB)

    Morgan, Nathaniel Ray [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-02-05

    Hydrodynamic algorithms are at the core of many laboratory missions ranging from simulating ICF implosions to climate modeling. The hydrodynamic algorithms commonly employed at the laboratory and in industry (1) typically lack requisite accuracy for complex multi- material vortical flows and (2) are not well suited for exascale computing due to poor data locality and poor FLOP/memory ratios. Exascale computing requires advances in both computer science and numerical algorithms. We propose to research the second requirement and create a new high-order hydrodynamic algorithm that has superior accuracy, excellent data locality, and excellent FLOP/memory ratios. This proposal will impact a broad range of research areas including numerical theory, discrete mathematics, vorticity evolution, gas dynamics, interface instability evolution, turbulent flows, fluid dynamics and shock driven flows. If successful, the proposed research has the potential to radically transform simulation capabilities and help position the laboratory for computing at the exascale.

  14. Computer Security: SAHARA - Security As High As Reasonably Achievable

    CERN Multimedia

    Stefan Lueders, Computer Security Team

    2015-01-01

    History has shown us time and again that our computer systems, computing services and control systems have digital security deficiencies. Too often we deploy stop-gap solutions and improvised hacks, or we just accept that it is too late to change things.    In my opinion, this blatantly contradicts the professionalism we show in our daily work. Other priorities and time pressure force us to ignore security or to consider it too late to do anything… but we can do better. Just look at how “safety” is dealt with at CERN! “ALARA” (As Low As Reasonably Achievable) is the objective set by the CERN HSE group when considering our individual radiological exposure. Following this paradigm, and shifting it from CERN safety to CERN computer security, would give us “SAHARA”: “Security As High As Reasonably Achievable”. In other words, all possible computer security measures must be applied, so long as ...

  15. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    Science.gov (United States)

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power in the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared using benchmark software, and the metric was Floating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the
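
    The FLOPS figure discussed above is easy to reproduce for a single kernel; the sketch below times a dense matrix multiply and converts it to GFLOP/s. NumPy is assumed here purely for illustration and is not part of the dissertation's HOBBIES code.

      import time
      import numpy as np

      # A dense n x n multiply costs about 2*n**3 floating-point operations,
      # so sustained GFLOP/s = 2*n**3 / (best wall time) / 1e9.
      def measured_gflops(n=1024, repeats=3):
          a = np.random.rand(n, n)
          b = np.random.rand(n, n)
          best = float("inf")
          for _ in range(repeats):
              t0 = time.perf_counter()
              a @ b
              best = min(best, time.perf_counter() - t0)
          return 2.0 * n ** 3 / best / 1e9

      print(f"~{measured_gflops():.1f} GFLOP/s on this machine")

    The dissertation's point is precisely that such a single number hides the memory, disk and network behaviour that dominates parallel CEM runs.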

  16. Study on a High Compression Processing for Video-on-Demand e-learning System

    Science.gov (United States)

    Nomura, Yoshihiko; Matsuda, Ryutaro; Sakamoto, Ryota; Sugiura, Tokuhiro; Matsui, Hirokazu; Kato, Norihiko

    The authors proposed a high-quality, small-capacity lecture-video-file creation system for a distance e-learning system. Examining the features of the lecturing scene, the authors ingeniously employ two kinds of image-capturing equipment with complementary characteristics: one is a digital video camera with a low resolution and a high frame rate, and the other is a digital still camera with a high resolution and a very low frame rate. By managing the two kinds of image-capturing equipment, and by integrating them with image processing, we can produce course materials with a greatly reduced file capacity: the course materials satisfy the requirements both for the temporal resolution to see the lecturer's point-indicating actions and for the high spatial resolution to read the small written letters. As a result of a comparative experiment, the e-lecture using the proposed system was confirmed to be more effective than an ordinary lecture from the viewpoint of educational effect.

  17. High Job Demands, Still Engaged and Not Burned Out? The Role of Job Crafting

    NARCIS (Netherlands)

    Hakanen, Jari J.; Seppälä, Piia; Peeters, Maria C W

    2017-01-01

    Purpose: Traditionally, employee well-being has been considered as resulting from decent working conditions arranged by the organization. Much less is known about whether employees themselves can make self-initiated changes to their work, i.e., craft their jobs, in order to stay well, even in highly

  18. Analysis and Modeling of Social Influence in High Performance Computing Workloads

    KAUST Repository

    Zheng, Shuai

    2011-06-01

    High Performance Computing (HPC) is becoming a common tool in many research areas. Social influence (e.g., project collaboration) among the increasing number of users of HPC systems creates bursty behavior in the underlying workloads. This bursty behavior is increasingly common with the advent of grid computing and cloud computing. Mining this bursty user behavior is important for HPC workload prediction and scheduling, which has a direct impact on overall HPC computing performance. A representative work in this area is the Mixed User Group Model (MUGM), which clusters users according to the resource demand features of their submissions, such as duration time and parallelism. However, MUGM has some difficulties when implemented in a real-world system. First, representing user behaviors by the features of their resource demand is usually difficult. Second, these features are not always available. Third, measuring the similarities among users is not a well-defined problem. In this work, we propose a Social Influence Model (SIM) to identify, analyze, and quantify the level of social influence across HPC users. The advantage of the SIM model is that it finds HPC communities by analyzing user job submission times, thereby avoiding the difficulties of MUGM. An offline algorithm and a fast-converging, computationally-efficient online learning algorithm for identifying social groups are proposed. Both offline and online algorithms are applied to several HPC and grid workloads, including Grid 5000, EGEE 2005 and 2007, and KAUST Supercomputing Lab (KSL) BGP data. From the experimental results, we show the existence of a social graph, which is characterized by a pattern of dominant users and followers. In order to evaluate the effectiveness of the identified user groups, we show that the pattern discovered by the offline algorithm follows a power-law distribution, which is consistent with those observed in mainstream social networks. We finally conclude the thesis and discuss future directions of our work.
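
    A minimal sketch of the submission-time idea follows; it is an illustrative stand-in, not the thesis's actual SIM algorithms. Users whose jobs are repeatedly submitted within a common time window accumulate co-occurrence counts, and thresholding those counts yields candidate social groups whose size distribution can then be checked for the reported power-law pattern.

      from collections import defaultdict

      # jobs: list of (user, submit_time_seconds) pairs, in any order.
      def cooccurrence_counts(jobs, window=3600.0):
          jobs = sorted(jobs, key=lambda j: j[1])
          counts = defaultdict(int)
          start = 0
          for i, (user_i, t_i) in enumerate(jobs):
              # Slide the window so jobs[start:i] all fall within `window` seconds.
              while t_i - jobs[start][1] > window:
                  start += 1
              for user_j, _ in jobs[start:i]:
                  if user_j != user_i:
                      counts[frozenset((user_i, user_j))] += 1
          return counts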

  19. Why Electricity Demand Is Highly Income-Elastic in Spain: A Cross-Country Comparison Based on an Index-Decomposition Analysis

    Directory of Open Access Journals (Sweden)

    Julián Pérez-García

    2017-03-01

    Since 1990, Spain has had one of the highest elasticities of electricity demand in the European Union. We provide an in-depth analysis into the causes of this high elasticity, and we examine how these same causes influence electricity demand in other European countries. To this end, we present an index-decomposition analysis of growth in electricity demand which allows us to identify three key factors in the relationship between gross domestic product (GDP) and electricity demand: (i) structural change; (ii) GDP growth; and (iii) intensity of electricity use. Our findings show that the main differences in electricity demand elasticities across countries and time are accounted for by the fast convergence in residential per capita electricity consumption. This convergence has almost concluded, and we expect the Spanish energy demand elasticity to converge to European standards in the near future.
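
    The decomposition itself can be made explicit. The identity below is the standard starting point for such index-decomposition analyses, and the additive LMDI weights shown are one common choice; the article does not spell out which index formula it uses, so treat this as a generic illustration rather than the authors' exact equations.

      E \;=\; \sum_i Y \, s_i \, I_i, \qquad s_i = \frac{Y_i}{Y}, \qquad I_i = \frac{E_i}{Y_i},

      \Delta E \;=\; \underbrace{\sum_i w_i \ln\frac{Y^{T}}{Y^{0}}}_{\text{GDP growth}}
               \;+\; \underbrace{\sum_i w_i \ln\frac{s_i^{T}}{s_i^{0}}}_{\text{structural change}}
               \;+\; \underbrace{\sum_i w_i \ln\frac{I_i^{T}}{I_i^{0}}}_{\text{intensity of use}},
      \qquad w_i = \frac{E_i^{T} - E_i^{0}}{\ln E_i^{T} - \ln E_i^{0}},

    where Y is GDP, s_i the output share of sector i, I_i its electricity intensity, and the superscripts 0 and T mark the start and end of the period.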

  20. Can Pulsed Electromagnetic Fields Trigger On-Demand Drug Release from High-Tm Magnetoliposomes?

    Directory of Open Access Journals (Sweden)

    Martina Nardoni

    2018-03-01

    Recently, magnetic nanoparticles (MNPs) have been used to trigger drug release from magnetoliposomes through a magneto-nanomechanical approach, where the mechanical actuation of the MNPs is used to enhance the membrane permeability. This result can be effectively achieved with a low-intensity non-thermal alternating magnetic field (AMF), which, however, has found rare clinical application. Therefore, a different modality of generating non-thermal magnetic fields has now been investigated. Specifically, the intermittent signals generated by non-thermal pulsed electromagnetic fields (PEMFS) were used to verify whether, once applied to high-transition temperature magnetoliposomes (high-Tm MLs), they could efficiently trigger the release of a hydrophilic model drug. To this end, hydrophilic MNPs were combined with hydrogenated soybean phosphatidylcholine and cholesterol to design high-Tm MLs. The release of a dye was evaluated under the effect of PEMFs for different times. The MNP motions produced by PEMFs could effectively increase the bilayer permeability without affecting the liposomes' integrity and resulted in nearly 20% release after 3 h of exposure. Therefore, the current contribution provides an exciting proof-of-concept for the ability of PEMFS to trigger drug release, considering that PEMFS already find application in therapy due to their anti-inflammatory effects.

  1. Can Pulsed Electromagnetic Fields Trigger On-Demand Drug Release from High-Tm Magnetoliposomes?

    Science.gov (United States)

    Nardoni, Martina; Della Valle, Elena; Liberti, Micaela; Relucenti, Michela; Casadei, Maria Antonietta; Paolicelli, Patrizia; Apollonio, Francesca; Petralito, Stefania

    2018-03-27

    Recently, magnetic nanoparticles (MNPs) have been used to trigger drug release from magnetoliposomes through a magneto-nanomechanical approach, where the mechanical actuation of the MNPs is used to enhance the membrane permeability. This result can be effectively achieved with a low-intensity non-thermal alternating magnetic field (AMF), which, however, has found rare clinical application. Therefore, a different modality of generating non-thermal magnetic fields has now been investigated. Specifically, the intermittent signals generated by non-thermal pulsed electromagnetic fields (PEMFS) were used to verify whether, once applied to high-transition temperature magnetoliposomes (high-Tm MLs), they could efficiently trigger the release of a hydrophilic model drug. To this end, hydrophilic MNPs were combined with hydrogenated soybean phosphatidylcholine and cholesterol to design high-Tm MLs. The release of a dye was evaluated under the effect of PEMFs for different times. The MNP motions produced by PEMFs could effectively increase the bilayer permeability without affecting the liposomes' integrity and resulted in nearly 20% release after 3 h of exposure. Therefore, the current contribution provides an exciting proof-of-concept for the ability of PEMFS to trigger drug release, considering that PEMFS already find application in therapy due to their anti-inflammatory effects.

  2. Design of demand driven return supply chain for high-tech products

    Directory of Open Access Journals (Sweden)

    Jalal Ashayeri

    2011-10-01

    Purpose: The purpose of this study is to design a responsive network for after-sale services of high-tech products. Design/methodology/approach: The Analytic Hierarchy Process (AHP) and the weighted max-min approach are integrated to solve a fuzzy goal programming model. Findings: Uncertainty is an important characteristic of reverse logistics networks, and the level of uncertainty increases as products’ life cycles shorten. Research limitations/implications: Some of the objective functions of our model are simplified to deal with non-linearities. Practical implications: Designing after-sale services networks for high-tech products is an overwhelming task, especially when the external environment is characterized by high levels of uncertainty and dynamism. This study presents a comprehensive modeling approach to simplify this task. Originality/value: Consideration of multiple objectives is rare in the reverse logistics network design literature. Although the number of multi-objective reverse logistics network design studies has been increasing in recent years, the last two objectives of our model are unique to this research area.
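
    In generic form, the weighted max-min approach mentioned in the methodology turns the fuzzy goals into a single crisp program; the formulation below is the standard one (with AHP supplying the weights) and is given here as an illustration, not as the paper's exact model.

      \max\ \lambda
      \quad \text{s.t.} \quad w_k\,\lambda \;\le\; \mu_k(x), \quad k = 1,\dots,K, \qquad x \in X, \qquad 0 \le \lambda \le 1,

    where mu_k(x) in [0,1] is the membership (achievement) degree of fuzzy goal k, w_k its AHP-derived weight, and X the set of feasible network designs; at the optimum the achievement levels of the binding goals stay proportional to their weights.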

  3. Symbolic computation and its application to high energy physics

    International Nuclear Information System (INIS)

    Hearn, A.C.

    1981-01-01

    It is clear that we are in the middle of an electronic revolution whose effect will be as profound as the industrial revolution. The continuing advances in computing technology will provide us with devices which will make present-day computers appear primitive. In this environment, the algebraic and other non-numerical capabilities of such devices will become increasingly important. These lectures will review the present state of the field of algebraic computation and its potential for problem solving in high energy physics and related areas. We shall begin with a brief description of the available systems and examine the data objects which they consider. As an example of the facilities which these systems can offer, we shall then consider the problem of analytic integration, since this is so fundamental to many of the calculational techniques used by high energy physicists. Finally, we shall study the implications which the current developments in hardware technology hold for scientific problem solving. (orig.)

  4. High performance computing and communications: FY 1996 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-05-16

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

  5. C-ITS as Multidisciplinary Area with High Demand on Telecommunications Solutions

    Directory of Open Access Journals (Sweden)

    Tomas Zelinka

    2014-08-01

    Cooperative Intelligent Transport Systems (C-ITS) concentrate on transportation systems with the goal of improving the usability, efficiency and safety of existing as well as newly constructed transportation infrastructure. These concepts are associated with high societal expectations that C-ITS will play a principal part in resolving continuously growing transportation challenges. C-ITS represents a typical multidisciplinary area where effective cooperation among a wide range of different disciplines is the key condition for success. A possible approach to the treatment of requirements on telecommunication services in C-ITS applications is presented.

  6. High-integrity software, computation and the scientific method

    International Nuclear Information System (INIS)

    Hatton, L.

    2012-01-01

    Computation rightly occupies a central role in modern science. Datasets are enormous and the processing implications of some algorithms are equally staggering. With the continuing difficulties in quantifying the results of complex computations, it is of increasing importance to understand the role of computation in the essentially Popperian scientific method. In this paper, some of the problems with computation, for example the long-term unquantifiable presence of undiscovered defects, problems with programming languages, and process issues, will be explored with numerous examples. One of the aims of the paper is to understand the implications of trying to produce high-integrity software and the limitations which still exist. Unfortunately, Computer Science itself suffers from an inability to be suitably critical of its practices and has operated in a largely measurement-free vacuum since its earliest days. Within computer science itself, this has not been so damaging in that it simply leads to unconstrained creativity and a rapid turnover of new technologies. In the applied sciences, however, which have to depend on computational results, such unquantifiability significantly undermines trust. It is time this particular demon was put to rest. (author)

  7. Enabling High-Performance Computing as a Service

    KAUST Repository

    AbdelBaky, Moustafa

    2012-10-01

    With the right software infrastructure, clouds can provide scientists with as-a-service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a representative HPC application. © 2012 IEEE.

  8. Computer science of the high performance; Informatica del alto rendimiento

    Energy Technology Data Exchange (ETDEWEB)

    Moraleda, A.

    2008-07-01

    High performance computing is taking shape as a powerful accelerator of innovation, drastically reducing the waiting times for access to results and findings in a growing number of processes and activities as complex and important as medicine, genetics, pharmacology, the environment, natural resources management, and the simulation of complex processes in a wide variety of industries. (Author)

  9. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2015-01-01

    A continuation of Contemporary High Performance Computing: From Petascale toward Exascale, this second volume continues the discussion of HPC flagship systems, major application workloads, facilities, and sponsors. The book includes figures and pictures that capture the state of existing systems: pictures of buildings, systems in production, floorplans, and many block diagrams and charts to illustrate system design and performance.

  10. Hot Chips and Hot Interconnects for High End Computing Systems

    Science.gov (United States)

    Saini, Subhash

    2005-01-01

    I will discuss several processors: 1. The Cray proprietary processor used in the Cray X1; 2. The IBM Power 3 and Power 4 used in IBM SP 3 and IBM SP 4 systems; 3. The Intel Itanium and Xeon, used in the SGI Altix systems and clusters respectively; 4. The IBM System-on-a-Chip used in IBM BlueGene/L; 5. The HP Alpha EV68 processor used in the DOE ASCI Q cluster; 6. The SPARC64 V processor, which is used in the Fujitsu PRIMEPOWER HPC2500; 7. An NEC proprietary processor, which is used in the NEC SX-6/7; 8. The Power 4+ processor, which is used in the Hitachi SR11000; 9. An NEC proprietary processor, which is used in the Earth Simulator. The IBM POWER5 and Red Storm Computing Systems will also be discussed. The architectures of these processors will first be presented, followed by interconnection networks and a description of high-end computer systems based on these processors and networks. The performance of various hardware/programming model combinations will then be compared, based on the latest NAS Parallel Benchmark results (MPI, OpenMP/HPF and hybrid (MPI + OpenMP)). The tutorial will conclude with a discussion of general trends in the field of high performance computing (quantum computing, DNA computing, cellular engineering, and neural networks).

  11. Can high psychological job demands, low decision latitude, and high job strain predict disability pensions? A 12-year follow-up of middle-aged Swedish workers.

    Science.gov (United States)

    Canivet, Catarina; Choi, BongKyoo; Karasek, Robert; Moghaddassi, Mahnaz; Staland-Nyman, Carin; Östergren, Per-Olof

    2013-04-01

    The aim of this study was to investigate whether job strain, psychological demands, and decision latitude are independent determinants of disability pension rates over a 12-year follow-up period. We studied 3,181 men and 3,359 women, all middle-aged and working at least 30 h per week, recruited from the general population of Malmö, Sweden, in 1992. The participation rate was 41 %. Baseline data include sociodemographics, the Job Content Questionnaire, lifestyle, and health-related variables. Disability pension information was obtained through record linkage from the National Health Insurance Register. Nearly 20 % of the women and 15 % of the men were granted a disability pension during the follow-up period. The highest quartile of psychological job demands and the lowest quartile of decision latitude were associated with disability pensions when controlling for age, socioeconomic position, and health risk behaviours. In the final model, with adjustment also for health indicators and stress from outside the workplace, the hazard ratios for high strain jobs (i.e. high psychological demands in combination with low decision latitude) were 1.5 in men (95 % CI, 1.04-2.0) and 1.7 in women (95 % CI, 1.3-2.2). Stratifying for health at baseline showed that high strain tended to affect healthy but not unhealthy men, while this pattern was reversed in women. High psychological demands, low decision latitude, and job strain were all confirmed as independent risk factors for subsequent disability pensions. In order to increase chances of individuals remaining in the work force, interventions against these adverse psychosocial factors appear worthwhile.

  12. A transport layer protocol for the future high speed grid computing: SCTP versus fast tcp multihoming

    International Nuclear Information System (INIS)

    Arshad, M.J.; Mian, M.S.

    2010-01-01

    TCP (Transmission Control Protocol) is designed for reliable data transfer on the global Internet today. One of its strong points is its use of a flow control algorithm that allows TCP to adjust its congestion window when network congestion occurs. A number of studies and investigations have confirmed that traditional TCP is not suitable for each and every type of application, for example, bulk data transfer over high speed long distance networks. TCP served well in the era of low-capacity, short-delay networks; however, for numerous reasons it cannot efficiently handle today's growing technologies (such as wide-area Grid computing and optical-fiber networks). This research work surveys the congestion control mechanisms of transport protocols and addresses the different issues involved in transferring huge volumes of data over future high-speed Grid computing and optical-fiber networks. This work also presents simulations to compare the performance of FAST TCP multihoming with SCTP (Stream Control Transmission Protocol) multihoming in high-speed networks. These simulation results show that FAST TCP multihoming achieves bandwidth aggregation efficiently and outperforms SCTP multihoming under similar network conditions. The survey and simulation results presented in this work reveal that multihoming support in FAST TCP provides many benefits, such as redundancy, load-sharing and policy-based routing, which largely improve the overall performance of a network and can meet the increasing demands of future high-speed network infrastructures (such as Grid computing). (author)
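
    For contrast with the delay-based control of FAST TCP studied here, the sketch below reproduces the textbook loss-driven additive-increase/multiplicative-decrease (AIMD) window evolution of traditional TCP. It is a simplified illustration, not an implementation of FAST TCP or SCTP.

      # Evolve a congestion window (in segments) over per-RTT events:
      # 'ack' marks a loss-free round trip, 'loss' a congestion signal.
      def aimd_window(events, cwnd=1.0, ssthresh=64.0, alpha=1.0, beta=0.5):
          trace = []
          for ev in events:
              if ev == "loss":
                  ssthresh = max(cwnd * beta, 1.0)   # multiplicative decrease
                  cwnd = ssthresh
              elif cwnd < ssthresh:
                  cwnd = min(cwnd * 2.0, ssthresh)   # slow start
              else:
                  cwnd += alpha                      # additive increase
              trace.append(cwnd)
          return trace

      print(aimd_window(["ack"] * 8 + ["loss"] + ["ack"] * 4))

    On long fat networks this sawtooth is what keeps traditional loss-based TCP from filling the pipe, which motivates the FAST TCP and SCTP multihoming comparison above.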

  13. Assess the feasibility of the high-speed railway construction in China by measuring the traffic demand elastic

    Science.gov (United States)

    Yu, Nan; Cao, Yu

    2017-05-01

    Traffic demand elasticity is proposed as a new indicator in this study to measure the feasibility of high-speed railway construction in a more intuitive way. Matrix Completion (MC) and a Semi-Supervised Support Vector Machine (S3VM) are used to measure and predict this index on the basis of a satisfaction survey of the 326 inter-city railways in China. It is demonstrated that, instead of calculating the economic benefits brought by the construction of a high-speed railway, this indicator can find the most urgent railways to be improved by directly evaluating the existing railway facilities from the perspective of transportation service improvement requirements.

  14. A Web Based Educational Programming Logic Controller Training Set Based on Vocational High School Students' Demands

    Directory of Open Access Journals (Sweden)

    Abdullah Alper Efe

    2018-01-01

    The purpose of this study was to design and develop a Programming Logic Controller Training Set according to vocational high school students’ educational needs. In this regard, by using the properties of distance education, the proposed system supported “hands-on” PLC programming laboratory exercises in the industrial automation area. The system allowed students to access and control the PLC training set remotely. For this purpose, the researcher designed a web site to facilitate students’ interactivity and support PLC programming. In the training set, an induction motor, a frequency converter and an encoder, controlled by a Siemens Simatic S7-200 PLC with the help of SIMATIC Step 7 programming software, were used to make the system more effective and efficient. Moreover, the training set included an IP camera system for monitoring the devices and the pilot application. By working with this novel remotely accessible training set, students and researchers received a chance to have self-paced learning experiences. Also, the PLC training set offered an effective learning environment for distance education, based on presenting the content on the web and opening it to online users, and provided a safe and economical solution for multiple users in a workplace to enhance the quality of education at a lower overall cost.

  15. Computation of high Reynolds number internal/external flows

    Science.gov (United States)

    Cline, M. C.; Wilmoth, R. G.

    1981-01-01

    A general, user oriented computer program, called VNAP2, was developed to calculate high Reynolds number, internal/ external flows. The VNAP2 program solves the two dimensional, time dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack Scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.

  16. Computation of high Reynolds number internal/external flows

    International Nuclear Information System (INIS)

    Cline, M.C.; Wilmoth, R.G.

    1981-01-01

    A general, user oriented computer program, called VNAP2, has been developed to calculate high Reynolds number, internal/external flows. VNAP2 solves the two-dimensional, time-dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented

  17. 2003 Conference for Computing in High Energy and Nuclear Physics

    International Nuclear Information System (INIS)

    Schalk, T.

    2003-01-01

    The conference was subdivided into the following separate tracks. Electronic presentations and/or videos are provided on the main website link. Sessions: Plenary Talks and Panel Discussion; Grid Architecture, Infrastructure, and Grid Security; HENP Grid Applications, Testbeds, and Demonstrations; HENP Computing Systems and Infrastructure; Monitoring; High Performance Networking; Data Acquisition, Triggers and Controls; First Level Triggers and Trigger Hardware; Lattice Gauge Computing; HENP Software Architecture and Software Engineering; Data Management and Persistency; Data Analysis Environment and Visualization; Simulation and Modeling; and Collaboration Tools and Information Systems

  18. Component-based software for high-performance scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  19. Component-based software for high-performance scientific computing

    International Nuclear Information System (INIS)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly

  20. Nuclear forces and high-performance computing: The perfect match

    International Nuclear Information System (INIS)

    Luu, T; Walker-Loud, A

    2009-01-01

    High-performance computing is now enabling the calculation of certain hadronic interaction parameters directly from Quantum Chromodynamics, the quantum field theory that governs the behavior of quarks and gluons and is ultimately responsible for the nuclear strong force. In this paper we briefly describe the state of the field and show how other aspects of hadronic interactions will be ascertained in the near future. We give estimates of computational requirements needed to obtain these goals, and outline a procedure for incorporating these results into the broader nuclear physics community.

  1. Computational methods for high-energy source shielding

    International Nuclear Information System (INIS)

    Armstrong, T.W.; Cloth, P.; Filges, D.

    1983-01-01

    The computational methods for high-energy radiation transport related to shielding of the SNQ-spallation source are outlined. The basic approach is to couple radiation-transport computer codes which use Monte Carlo methods and discrete ordinates methods. A code system is suggested that incorporates state-of-the-art radiation-transport techniques. The stepwise verification of that system is briefly summarized. The complexity of the resulting code system suggests a more straightforward code specially tailored for thick shield calculations. A short guideline for the future development of such a Monte Carlo code is given

  2. High Electricity Demand in the Northeast U.S.: PJM Reliability Network and Peaking Unit Impacts on Air Quality.

    Science.gov (United States)

    Farkas, Caroline M; Moeller, Michael D; Felder, Frank A; Henderson, Barron H; Carlton, Annmarie G

    2016-08-02

    On high electricity demand days, when air quality is often poor, regional transmission organizations (RTOs), such as PJM Interconnection, ensure reliability of the grid by employing peak-use electric generating units (EGUs). These "peaking units" are exempt from some federal and state air quality rules. We identify RTO assignment and peaking unit classification for EGUs in the Eastern U.S. and estimate air quality for four emission scenarios with the Community Multiscale Air Quality (CMAQ) model during the July 2006 heat wave. Further, we population-weight ambient values as a surrogate for potential population exposure. Emissions from electricity reliability networks negatively impact air quality in their own region and in neighboring geographic areas. Monitored and controlled PJM peaking units are generally located in economically depressed areas and can contribute up to 87% of hourly maximum PM2.5 mass locally. Potential population exposure to peaking unit PM2.5 mass is highest in the model domain's most populated cities. Average daily temperature and national gross domestic product steer peaking unit heat input. Air quality planning that capitalizes on a priori knowledge of local electricity demand and economics may provide a more holistic approach to protect human health within the context of growing energy needs in a changing world.

  3. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    Energy Technology Data Exchange (ETDEWEB)

    Drewmark Communications; Sartor, Dale; Wilson, Mark

    2010-07-01

    High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

  4. Computing trends using graphic processor in high energy physics

    CERN Document Server

    Niculescu, Mihai

    2011-01-01

    One of the main challenges in High Energy Physics is to perform fast analysis of large amounts of experimental and simulated data. At LHC-CERN one p-p event is approximately 1 MB in size. The time taken to analyze the data and obtain fast results depends on high computational power. The main advantage of using GPU (Graphics Processing Unit) programming over traditional CPU programming is that graphics cards provide a lot of computing power at a very low price. Today a huge number of applications (scientific, financial, etc.) have begun to be ported to or developed for GPUs, including Monte Carlo tools and data analysis tools for High Energy Physics. In this paper, we present the current status and trends in HEP computing using GPUs.

  5. Overview of Parallel Platforms for Common High Performance Computing

    Directory of Open Access Journals (Sweden)

    T. Fryza

    2012-04-01

    Full Text Available The paper deals with various parallel platforms used for high performance computing in the signal processing domain. More precisely, the methods exploiting multicore central processing units, such as the Message Passing Interface and OpenMP, are taken into account. The properties of the programming methods are experimentally demonstrated in the application of a fast Fourier transform and a discrete cosine transform, and they are compared with the possibilities of MATLAB's built-in functions and of Texas Instruments digital signal processors with very long instruction word architectures. New FFT and DCT implementations were proposed and tested. The implementation phase was compared with CPU-based computing methods and with the possibilities of the Texas Instruments digital signal processing library on C6747 floating-point DSPs. The optimal combination of computing methods in the signal processing domain and the implementation of new, fast routines are proposed as well.
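
    As a rough illustration of the work-sharing idea behind such multicore comparisons, the sketch below times a batch of independent 1-D FFTs computed serially and then with a Python process pool. It is not the paper's MPI/OpenMP/DSP benchmark; the signal sizes, worker count, and use of NumPy's FFT are arbitrary stand-ins, and any observed speedup depends heavily on problem size and inter-process overhead.

```python
import time
import numpy as np
from multiprocessing import Pool

def fft_block(block):
    # One independent 1-D transform; a stand-in for the per-core work
    # items that MPI/OpenMP would distribute in the paper's benchmarks.
    return np.fft.fft(block)

def run(n_signals=256, n_samples=4096, workers=4):
    data = [np.random.rand(n_samples) for _ in range(n_signals)]

    t0 = time.perf_counter()
    serial = [fft_block(d) for d in data]
    t1 = time.perf_counter()

    with Pool(workers) as pool:
        parallel = pool.map(fft_block, data)
    t2 = time.perf_counter()

    print(f"serial: {t1 - t0:.3f}s   pool({workers} workers): {t2 - t1:.3f}s")

if __name__ == "__main__":
    run()
```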

  6. School of Analytic Computing in Theoretical High-Energy Physics

    CERN Document Server

    2015-01-01

    In recent years, huge progress has been made on computing rates for production processes of direct relevance to experiments at the Large Hadron Collider (LHC). Crucial to that remarkable advance has been our understanding and ability to compute scattering amplitudes and cross sections. The aim of the School is to bring together young theorists working on the phenomenology of LHC physics with those working in more formal areas, and to provide them with the analytic tools to compute amplitudes in gauge theories. The school is addressed to Ph.D. students and post-docs in Theoretical High-Energy Physics. 30 hours of lectures and 4 hours of tutorials will be delivered over the 6 days of the School.

  7. Providing a computing environment for a high energy physics workshop

    International Nuclear Information System (INIS)

    Nicholls, J.

    1991-03-01

    Although computing facilities have been provided at conferences and workshops remote from the host institution for some years, the equipment provided has rarely been capable of providing for much more than simple editing and electronic mail over leased lines. This presentation describes the pioneering effort by the Computing Department/Division at Fermilab in providing a local computing facility with world-wide networking capability for the Physics at Fermilab in the 1990's workshop held in Breckenridge, Colorado, in August 1989, as well as the enhanced facilities provided for the 1990 Summer Study on High Energy Physics at Snowmass, Colorado, in June/July 1990. Issues discussed include type and sizing of the facilities, advance preparations, shipping, on-site support, as well as an evaluation of the value of the facility to the workshop participants

  8. BEAGLE: an application programming interface and high-performance computing library for statistical phylogenetics.

    Science.gov (United States)

    Ayres, Daniel L; Darling, Aaron; Zwickl, Derrick J; Beerli, Peter; Holder, Mark T; Lewis, Paul O; Huelsenbeck, John P; Ronquist, Fredrik; Swofford, David L; Cummings, Michael P; Rambaut, Andrew; Suchard, Marc A

    2012-01-01

    Phylogenetic inference is fundamental to our understanding of most aspects of the origin and evolution of life, and in recent years, there has been a concentration of interest in statistical approaches such as Bayesian inference and maximum likelihood estimation. Yet, for large data sets and realistic or interesting models of evolution, these approaches remain computationally demanding. High-throughput sequencing can yield data for thousands of taxa, but scaling to such problems using serial computing often necessitates the use of nonstatistical or approximate approaches. The recent emergence of graphics processing units (GPUs) provides an opportunity to leverage their excellent floating-point computational performance to accelerate statistical phylogenetic inference. A specialized library for phylogenetic calculation would allow existing software packages to make more effective use of available computer hardware, including GPUs. Adoption of a common library would also make it easier for other emerging computing architectures, such as field programmable gate arrays, to be used in the future. We present BEAGLE, an application programming interface (API) and library for high-performance statistical phylogenetic inference. The API provides a uniform interface for performing phylogenetic likelihood calculations on a variety of compute hardware platforms. The library includes a set of efficient implementations and can currently exploit hardware including GPUs using NVIDIA CUDA, central processing units (CPUs) with Streaming SIMD Extensions and related processor supplementary instruction sets, and multicore CPUs via OpenMP. To demonstrate the advantages of a common API, we have incorporated the library into several popular phylogenetic software packages. The BEAGLE library is free open source software licensed under the Lesser GPL and available from http://beagle-lib.googlecode.com. An example client program is available as public domain software.

  9. Effect of computer mouse gain and visual demand on mouse clicking performance and muscle activation in a young and elderly group of experienced computer users

    DEFF Research Database (Denmark)

    Sandfeld, Jesper; Jensen, Bente R.

    2005-01-01

    and three levels of target size were used. All subjects demonstrated a reduced working speed and hit rate at the highest mouse gain (1:8) when the target size was small. The young group had an optimum at mouse gain 1:4. The elderly group was most sensitive to the combination of high mouse gain and small...

  10. Worksite interventions for preventing physical deterioration among employees in job-groups with high physical work demands: background, design and conceptual model of FINALE

    DEFF Research Database (Denmark)

    Holtermann, Andreas; Jørgensen, Marie B; Gram, Bibi

    2010-01-01

    A mismatch between individual physical capacities and physical work demands enhances the risk for musculoskeletal disorders, poor work ability and sickness absence, termed physical deterioration. However, effective intervention strategies for preventing physical deterioration in job groups with high physical demands remain to be established. This paper describes the background, design and conceptual model of the FINALE programme, a framework for health-promoting interventions at 4 Danish job groups (i.e. cleaners, health-care workers, construction workers and industrial workers) characterized by high physical work demands, musculoskeletal disorders, poor work ability and sickness absence.

  11. Simple, parallel, high-performance virtual machines for extreme computations

    International Nuclear Information System (INIS)

    Chokoufe Nejad, Bijan; Ohl, Thorsten; Reuter, Jurgen

    2014-11-01

    We introduce a high-performance virtual machine (VM) written in a numerically fast language like Fortran or C to evaluate very large expressions. We discuss the general concept of how to perform computations in terms of a VM and present specifically a VM that is able to compute tree-level cross sections for any number of external legs, given the corresponding byte code from the optimal matrix element generator, O'Mega. Furthermore, this approach allows to formulate the parallel computation of a single phase space point in a simple and obvious way. We analyze hereby the scaling behaviour with multiple threads as well as the benefits and drawbacks that are introduced with this method. Our implementation of a VM can run faster than the corresponding native, compiled code for certain processes and compilers, especially for very high multiplicities, and has in general runtimes in the same order of magnitude. By avoiding the tedious compile and link steps, which may fail for source code files of gigabyte sizes, new processes or complex higher order corrections that are currently out of reach could be evaluated with a VM given enough computing power.
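
    The general idea of evaluating an expression from byte code can be shown with a toy stack-based interpreter like the one below. It is purely illustrative: the instruction set, opcode names, and example program are invented for this sketch and bear no relation to the O'Mega byte code or the Fortran/C virtual machine described in the record.

```python
# Toy stack-based virtual machine: evaluates byte code made of tuples such as
# ("push", value), ("load", name), ("add",), ("mul",).  Purely illustrative.

def run_vm(bytecode, env):
    stack = []
    for op, *args in bytecode:
        if op == "push":
            stack.append(args[0])
        elif op == "load":
            stack.append(env[args[0]])
        elif op == "add":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown opcode {op!r}")
    return stack.pop()

# Evaluate x*y + 2*x for x=3, y=5  ->  21
program = [("load", "x"), ("load", "y"), ("mul",),
           ("push", 2), ("load", "x"), ("mul",), ("add",)]
print(run_vm(program, {"x": 3, "y": 5}))
```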

  12. High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science

    International Nuclear Information System (INIS)

    Pop, Florin

    2014-01-01

    Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios like subatomic dimensions, high energy, and lower absolute temperature are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high accuracy analysis, experimental validation, and visualization. High performance computing support offers possibility to make simulations at large scale, in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP) analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in analysis of particles spectrum. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
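
    As a minimal example of the sampling-based methods surveyed above, the sketch below performs plain Monte Carlo integration over a unit hypercube and reports the statistical uncertainty of the estimate. The integrand and sample count are arbitrary; nothing here reproduces an actual HEP workflow.

```python
import numpy as np

def mc_integrate(f, dim, n_samples, seed=0):
    """Plain Monte Carlo estimate of the integral of f over the unit
    hypercube, plus its statistical uncertainty; an illustrative example
    of the sampling methods the record surveys, not a specific HEP code."""
    rng = np.random.default_rng(seed)
    x = rng.random((n_samples, dim))
    y = f(x)
    mean = y.mean()
    err = y.std(ddof=1) / np.sqrt(n_samples)
    return mean, err

# Example integrand: a smooth, peaked function standing in for a squared amplitude.
value, error = mc_integrate(
    lambda x: np.exp(-np.sum((x - 0.5) ** 2, axis=1) / 0.02),
    dim=3, n_samples=200_000)
print(f"estimate = {value:.5f} +/- {error:.5f}")
```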

  13. Achieving high performance in numerical computations on RISC workstations and parallel systems

    Energy Technology Data Exchange (ETDEWEB)

    Goedecker, S. [Max-Planck Inst. for Solid State Research, Stuttgart (Germany); Hoisie, A. [Los Alamos National Lab., NM (United States)

    1997-08-20

    The nominal peak speeds of both serial and parallel computers are rising rapidly. At the same time, however, it is becoming increasingly difficult to extract a significant fraction of this high peak speed from modern computer architectures. In this tutorial the authors give the scientists and engineers involved in numerically demanding calculations and simulations the necessary basic knowledge to write reasonably efficient programs. The basic principles are rather simple and the possible rewards large. Writing a program by taking into account optimization techniques related to the computer architecture can significantly speed up your program, often by factors of 10--100. As such, optimizing a program can for instance be a much better solution than buying a faster computer. If a few basic optimization principles are applied during program development, the additional time needed for obtaining an efficient program is practically negligible. In-depth optimization is usually only needed for a few subroutines or kernels and the effort involved is therefore also acceptable.
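
    One of the classic architecture-aware optimizations such tutorials cover is cache blocking. The sketch below shows the blocking idea on a matrix multiply; in practice one would simply call an optimized BLAS, and the block size here is an arbitrary illustrative choice.

```python
import numpy as np

def matmul_blocked(A, B, block=64):
    """Cache-blocked matrix multiply.  Only illustrates the blocking idea
    behind architecture-aware optimization; a tuned BLAS would be used
    in real codes."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((n, m))
    # Work on sub-blocks small enough to stay resident in cache.
    for i0 in range(0, n, block):
        for j0 in range(0, m, block):
            for k0 in range(0, k, block):
                C[i0:i0 + block, j0:j0 + block] += (
                    A[i0:i0 + block, k0:k0 + block]
                    @ B[k0:k0 + block, j0:j0 + block]
                )
    return C

A = np.random.rand(256, 256)
B = np.random.rand(256, 256)
print(np.allclose(matmul_blocked(A, B), A @ B))   # same result as plain product
```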

  14. Benchmarking high performance computing architectures with CMS’ skeleton framework

    Science.gov (United States)

    Sexton-Kennedy, E.; Gartung, P.; Jones, C. D.

    2017-10-01

    In 2012 CMS evaluated which underlying concurrency technology would be the best to use for its multi-threaded framework. The available technologies were evaluated on the high throughput computing systems dominating the resources in use at that time. A skeleton framework benchmarking suite that emulates the tasks performed within a CMSSW application was used to select Intel’s Threading Building Blocks library, based on the measured overheads in both memory and CPU on the different technologies benchmarked. In 2016 CMS will get access to high performance computing resources that use new many-core architectures; machines such as Cori Phase 1&2, Theta, Mira. Because of this we have revived the 2012 benchmark to test its performance and conclusions on these new architectures. This talk will discuss the results of this exercise.

  15. Unravelling the structure of matter on high-performance computers

    International Nuclear Information System (INIS)

    Kieu, T.D.; McKellar, B.H.J.

    1992-11-01

    The various phenomena and the different forms of matter in nature are believed to be the manifestation of only a handful of fundamental building blocks - the elementary particles - which interact through the four fundamental forces. In the study of the structure of matter at this level one has to consider forces which are not sufficiently weak to be treated as small perturbations to the system, an example of which is the strong force that binds the nucleons together. High-performance computers, both vector and parallel machines, have facilitated the necessary non-perturbative treatments. The principles and the techniques of computer simulations applied to Quantum Chromodynamics are explained; examples include the strong interactions, the calculation of the mass of nucleons and their decay rates. Some commercial and special-purpose high-performance machines for such calculations are also mentioned. 3 refs., 2 tabs

  16. Integrating Embedded Computing Systems into High School and Early Undergraduate Education

    Science.gov (United States)

    Benson, B.; Arfaee, A.; Choon Kim; Kastner, R.; Gupta, R. K.

    2011-01-01

    Early exposure to embedded computing systems is crucial for students to be prepared for the embedded computing demands of today's world. However, exposure to systems knowledge often comes too late in the curriculum to stimulate students' interests and to provide a meaningful difference in how they direct their choice of electives for future…

  17. Multi-Language Programming Environments for High Performance Java Computing

    OpenAIRE

    Vladimir Getov; Paul Gray; Sava Mintchev; Vaidy Sunderam

    1999-01-01

    Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community’s reliance on established scientific packages. As a consequence, programmers of high‐performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java‐to‐C Interface (JCI) tool which provides ...

  18. Aspects of pulmonary histiocytosis X on high resolution computed tomography

    International Nuclear Information System (INIS)

    Costa, N.S.S.; Castro Lessa Angela, M.T. de; Angelo Junior, J.R.L.; Silva, F.M.D.; Kavakama, J.; Carvalho, C.R.R. de; Cerri, G.G.

    1995-01-01

    Pulmonary histiocytosis X is a disease that occurs in young adults and presents with nodules and cysts, mainly in the upper lobes, with consequent pulmonary fibrosis. These pulmonary changes are virtually pathognomonic findings on high resolution computed tomography, which allows estimation of the extent of lung involvement and distinguishes histiocytosis X from other disorders that also produce nodules and cysts. (author). 10 refs, 2 tabs, 6 figs

  19. Intel: High Throughput Computing Collaboration: A CERN openlab / Intel collaboration

    CERN Multimedia

    CERN. Geneva

    2015-01-01

    The Intel/CERN High Throughput Computing Collaboration studies the application of upcoming Intel technologies to the very challenging environment of the LHC trigger and data-acquisition systems. These systems will need to transport and process many terabits of data every second, in some cases with tight latency constraints. Parallelisation and tight integration of accelerators and classical CPU via Intel's OmniPath fabric are the key elements in this project.

  20. High-Performance Java Codes for Computational Fluid Dynamics

    Science.gov (United States)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale computationally-intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  1. High-Precision Computation: Mathematical Physics and Dynamics

    International Nuclear Information System (INIS)

    Bailey, D.H.; Barrio, R.; Borwein, J.M.

    2010-01-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, studies of the fine structure constant, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, experimental mathematics, evaluation of orthogonal polynomials, numerical integration of ODEs, computation of periodic orbits, studies of the splitting of separatrices, detection of strange nonchaotic attractors, Ising theory, quantum field theory, and discrete dynamical systems. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.

  2. High-Precision Computation: Mathematical Physics and Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, D. H.; Barrio, R.; Borwein, J. M.

    2010-04-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, studies of the fine structure constant, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, experimental mathematics, evaluation of orthogonal polynomials, numerical integration of ODEs, computation of periodic orbits, studies of the splitting of separatrices, detection of strange nonchaotic attractors, Ising theory, quantum field theory, and discrete dynamical systems. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
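
    For readers who want to experiment with precision beyond IEEE doubles, the sketch below uses the mpmath Python package (assumed to be installed) as one example of such a high-precision library, evaluating a simple integral to 50 significant digits and comparing it with the closed form.

```python
from mpmath import mp, quad, exp, sqrt, pi, inf

# Work with 50 significant digits instead of the ~16 of IEEE doubles.
mp.dps = 50

# A simple definite integral evaluated to high precision:
#   integral_0^inf exp(-x^2) dx = sqrt(pi)/2
numeric = quad(lambda x: exp(-x**2), [0, inf])
exact = sqrt(pi) / 2

print("numeric:", numeric)
print("exact:  ", exact)
print("difference:", numeric - exact)   # should be tiny at 50-digit precision
```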

  3. Electricity demand in Kazakhstan

    International Nuclear Information System (INIS)

    Atakhanova, Zauresh; Howie, Peter

    2007-01-01

    Properties of electricity demand in transition economies have not been sufficiently well researched mostly due to data limitations. However, information on the properties of electricity demand is necessary for policy makers to evaluate effects of price changes on different consumers and obtain demand forecasts for capacity planning. This study estimates Kazakhstan's aggregate demand for electricity as well as electricity demand in the industrial, service, and residential sectors using regional data. Firstly, our results show that price elasticity of demand in all sectors is low. This fact suggests that there is considerable room for price increases necessary to finance generation and distribution system upgrading. Secondly, we find that income elasticity of demand in the aggregate and all sectoral models is less than unity. Of the three sectors, electricity demand in the residential sector has the lowest income elasticity. This result indicates that policy initiatives to secure affordability of electricity consumption to lower income residential consumers may be required. Finally, our forecast shows that electricity demand may grow at either 3% or 5% per year depending on rates of economic growth and government policy regarding price increases and promotion of efficiency. We find that planned supply increases would be sufficient to cover growing demand only if real electricity prices start to increase toward long-run cost-recovery levels and policy measures are implemented to maintain the current high growth of electricity efficiency
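
    In studies of this kind the price and income elasticities are typically read off as the coefficients of a log-log demand regression. The sketch below fits such a regression on synthetic data only, to show the mechanics; it uses none of the Kazakh regional data and the coefficients are made up.

```python
import numpy as np

# Illustrative log-log demand regression: in ln(Q) = a + e_p*ln(P) + e_y*ln(Y),
# e_p and e_y are the price and income elasticities.  Synthetic data only.
rng = np.random.default_rng(1)
n = 200
log_p = rng.normal(0.0, 0.3, n)                                   # ln(price)
log_y = rng.normal(0.0, 0.5, n)                                   # ln(income)
log_q = 2.0 - 0.25 * log_p + 0.7 * log_y + rng.normal(0.0, 0.1, n)  # ln(quantity)

X = np.column_stack([np.ones(n), log_p, log_y])
coef, *_ = np.linalg.lstsq(X, log_q, rcond=None)
print(f"price elasticity ~ {coef[1]:.2f}, income elasticity ~ {coef[2]:.2f}")
```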

  4. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    Science.gov (United States)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that supports the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enable accurate climate projections. NCI makes available 10+ PB major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), a HPC class 3000 core OpenStack cloud system and several highly connected large scale and high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  5. An Experimental QoE Performance Study for the Efficient Transmission of High Demanding Traffic over an Ad Hoc Network Using BATMAN

    Directory of Open Access Journals (Sweden)

    Ramon Sanchez-Iborra

    2015-01-01

    Full Text Available Multimedia communications are attracting great attention from the research, industry, and end-user communities. The latter are increasingly demanding higher levels of quality and the possibility of consuming multimedia content from a plethora of devices at their disposal. Clearly, the most appealing gadgets are those that communicate wirelessly to access these services. However, current wireless technologies raise severe concerns about supporting extremely demanding services such as real-time multimedia transmissions. This paper evaluates from QoE and QoS perspectives the capability of the ad hoc routing protocol called BATMAN to support Voice over IP and video traffic. To this end, two test-benches were proposed, namely, a real (emulated) testbed and a simulation framework. Additionally, a series of modifications was proposed on both protocols’ parameter settings and video-stream characteristics that contribute to further improving the multimedia quality perceived by the users. The performance of the well-established protocol OLSR is also evaluated in detail to compare it with BATMAN. From the results, a notably high correlation between real experimentation and computer simulation outcomes was observed. It was also found that, with the proper configuration, BATMAN is able to transmit several QCIF video-streams and VoIP calls with high quality. In addition, BATMAN outperforms OLSR in supporting multimedia traffic in both experimental and simulated environments.

  6. What Physicists Should Know About High Performance Computing - Circa 2002

    Science.gov (United States)

    Frederick, Donald

    2002-08-01

    High Performance Computing (HPC) is a dynamic, cross-disciplinary field that traditionally has involved applied mathematicians, computer scientists, and others primarily from the various disciplines that have been major users of HPC resources - physics, chemistry, engineering, with increasing use by those in the life sciences. There is a technological dynamic that is powered by economic as well as by technical innovations and developments. This talk will discuss practical ideas to be considered when developing numerical applications for research purposes. Even with the rapid pace of development in the field, the author believes that these concepts will not become obsolete for a while, and will be of use to scientists who either are considering, or who have already started down the HPC path. These principles will be applied in particular to current parallel HPC systems, but there will also be references of value to desktop users. The talk will cover such topics as: computing hardware basics, single-cpu optimization, compilers, timing, numerical libraries, debugging and profiling tools and the emergence of Computational Grids.

  7. STTR Phase I: Low-Cost, High-Accuracy, Whole-Building Carbon Dioxide Monitoring for Demand Control Ventilation

    Energy Technology Data Exchange (ETDEWEB)

    Hallstrom, Jason; Ni, Zheng Richard

    2018-05-15

    This STTR Phase I project assessed the feasibility of a new CO2 sensing system optimized for low-cost, high-accuracy, whole-building monitoring for use in demand control ventilation. The focus was on the development of a wireless networking platform and associated firmware to provide signal conditioning and conversion, fault- and disruption-tolerant networking, and multi-hop routing at building scales to avoid wiring costs. An early-stage bridge (or “gateway”) to direct digital control services was also explored. Results of the project contributed to an improved understanding of a new electrochemical sensor for monitoring indoor CO2 concentrations, as well as the electronics and networking infrastructure required to deploy those sensors at building scales. New knowledge was acquired concerning the sensor’s accuracy, environmental response, and failure modes, and the acquisition electronics required to achieve accuracy over a wide range of CO2 concentrations. The project demonstrated that the new sensor offers repeatable correspondence with commercial optical sensors, with supporting electronics that offer gain accuracy within 0.5%, and acquisition accuracy within 1.5% across three orders of magnitude variation in generated current. Considering production, installation, and maintenance costs, the technology presents a foundation for achieving whole-building CO2 sensing at a price point below $0.066 / sq-ft – meeting economic feasibility criteria established by the Department of Energy. The technology developed under this award addresses obstacles on the critical path to enabling whole-building CO2 sensing and demand control ventilation in commercial retrofits, small commercial buildings, residential complexes, and other high-potential structures that have been slow to adopt these technologies. It presents an opportunity to significantly reduce energy use throughout the United States.

  8. Computing with high-resolution upwind schemes for hyperbolic equations

    International Nuclear Information System (INIS)

    Chakravarthy, S.R.; Osher, S.; California Univ., Los Angeles)

    1985-01-01

    Computational aspects of modern high-resolution upwind finite-difference schemes for hyperbolic systems of conservation laws are examined. An operational unification is demonstrated for constructing a wide class of flux-difference-split and flux-split schemes based on the design principles underlying total variation diminishing (TVD) schemes. Consideration is also given to TVD scheme design by preprocessing, the extension of preprocessing and postprocessing approaches to general control volumes, the removal of expansion shocks and glitches, relaxation methods for implicit TVD schemes, and a new family of high-accuracy TVD schemes. 21 references
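
    A small example of the flux-limiting machinery behind TVD schemes is given below: a minmod-limited, second-order upwind update for linear advection that transports a square pulse without spurious oscillations. It is a generic textbook sketch, not one of the specific flux-split or flux-difference-split schemes discussed in the record.

```python
import numpy as np

def minmod(a, b):
    """Classic minmod slope limiter used in many TVD schemes."""
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)),
                    0.0)

def tvd_advect(u, c, dx, dt, steps):
    """Minmod-limited, second-order upwind update for u_t + c*u_x = 0 (c > 0),
    with periodic boundaries.  A generic textbook TVD sketch."""
    nu = c * dt / dx
    for _ in range(steps):
        slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)
        # Reconstructed value on the upwind side of each cell face
        u_face = u + 0.5 * (1.0 - nu) * slope
        flux = c * u_face
        u = u - dt / dx * (flux - np.roll(flux, 1))
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)      # square pulse
dx = x[1] - x[0]
u = tvd_advect(u0.copy(), c=1.0, dx=dx, dt=0.4 * dx, steps=250)
# The limited scheme avoids the over/undershoots an unlimited scheme would produce.
print("min/max after transport:", u.min(), u.max())
```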

  9. High spatial resolution CT image reconstruction using parallel computing

    International Nuclear Information System (INIS)

    Yin Yin; Liu Li; Sun Gongxing

    2003-01-01

    Using a PC cluster system with 16 dual-CPU nodes, we accelerate the FBP and OR-OSEM reconstruction of high spatial resolution images (2048 x 2048). Based on the number of projections, we rewrite the reconstruction algorithms in parallel form and dispatch the tasks to each CPU. By parallel computing, the speedup factor is roughly equal to the number of CPUs, reaching about 25 when 25 CPUs are used. This technique is very suitable for real-time high spatial resolution CT image reconstruction. (authors)
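
    The projection-wise task split described above can be sketched with a process pool that backprojects disjoint subsets of the sinogram and sums the partial images. The toy below uses a small grid, random data, and unfiltered backprojection (a real FBP would first filter each projection); it is illustrative only and not the cluster code of the record.

```python
import numpy as np
from multiprocessing import Pool

N = 256                       # toy image size, not the 2048 x 2048 of the record
angles = np.linspace(0.0, np.pi, 180, endpoint=False)

def backproject_chunk(args):
    """Smear a chunk of (angle, projection) pairs back over the image grid.
    Each worker handles a subset of the projections, mirroring the
    projection-wise task split described in the record."""
    chunk_angles, chunk_projs = args
    xs = np.arange(N) - N / 2.0
    X, Y = np.meshgrid(xs, xs)
    img = np.zeros((N, N))
    for theta, proj in zip(chunk_angles, chunk_projs):
        t = X * np.cos(theta) + Y * np.sin(theta) + N / 2.0
        idx = np.clip(t.astype(int), 0, N - 1)
        img += proj[idx]
    return img

if __name__ == "__main__":
    projections = np.random.rand(len(angles), N)   # placeholder sinogram
    n_workers = 4
    angle_chunks = np.array_split(angles, n_workers)
    proj_chunks = np.array_split(projections, n_workers)
    with Pool(n_workers) as pool:
        partials = pool.map(backproject_chunk, list(zip(angle_chunks, proj_chunks)))
    image = sum(partials)                          # combine the partial images
    print("reconstructed image shape:", image.shape)
```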

  10. A Computer Controlled Precision High Pressure Measuring System

    Science.gov (United States)

    Sadana, S.; Yadav, S.; Jha, N.; Gupta, V. K.; Agarwal, R.; Bandyopadhyay, A. K.; Saxena, T. K.

    2011-01-01

    A microcontroller (AT89C51) based electronics has been designed and developed for a high precision calibrator based on a Digiquartz pressure transducer (DQPT) for the measurement of high hydrostatic pressure up to 275 MPa. The input signal from the DQPT is converted into a square wave and multiplied through a frequency multiplier circuit to more than 10 times the input frequency. This input frequency is multiplied by a factor of ten using a phase-locked loop. An octal buffer is used to store the calculated frequency, which in turn is fed to the microcontroller AT89C51 interfaced with a liquid crystal display for the display of the frequency as well as the corresponding pressure in user-friendly units. The electronics developed is interfaced with a computer using RS232 for automatic data acquisition, computation and storage. The data are acquired by a program written in Visual Basic 6.0. This system is interfaced with the PC to make it a computer-controlled system. The system is capable of measuring frequency up to 4 MHz with a resolution of 0.01 Hz and pressure up to 275 MPa with a resolution of 0.001 MPa within a measurement uncertainty of 0.025%. The details of the hardware of the pressure measuring system, the associated electronics, the software and the calibration are discussed in this paper.
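
    A minimal acquisition loop in the spirit of the RS232 interface described above might look like the sketch below. It assumes the pyserial package, a hypothetical port name and line format, and made-up calibration coefficients; none of these reflect the actual instrument or its Visual Basic software.

```python
import serial  # pyserial, assumed installed

# Hypothetical calibration polynomial p(f) = c0 + c1*f + c2*f^2 in MPa.
# These coefficients are invented for illustration only.
CAL = (-1.234e2, 3.456e-3, 7.8e-10)

def freq_to_pressure(freq_hz):
    c0, c1, c2 = CAL
    return c0 + c1 * freq_hz + c2 * freq_hz ** 2

def acquire(port="/dev/ttyUSB0", n_readings=10):
    # Assumed framing: the instrument sends one ASCII frequency value per line.
    with serial.Serial(port, baudrate=9600, timeout=2) as link:
        for _ in range(n_readings):
            line = link.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue
            freq = float(line)
            print(f"f = {freq:.2f} Hz  ->  p = {freq_to_pressure(freq):.3f} MPa")

if __name__ == "__main__":
    acquire()
```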

  11. High performance computing and communications: FY 1995 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-04-01

    The High Performance Computing and Communications (HPCC) Program was formally established following passage of the High Performance Computing Act of 1991 signed on December 9, 1991. Ten federal agencies in collaboration with scientists and managers from US industry, universities, and laboratories have developed the HPCC Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1994 and FY 1995. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency. Although the Department of Education is an official HPCC agency, its current funding and reporting of crosscut activities goes through the Committee on Education and Health Resources, not the HPCC Program. For this reason the Implementation Plan covers nine HPCC agencies.

  12. FPGAs in High Perfomance Computing: Results from Two LDRD Projects.

    Energy Technology Data Exchange (ETDEWEB)

    Underwood, Keith D; Ulmer, Craig D.; Thompson, David; Hemmert, Karl Scott

    2006-11-01

    Field programmable gate arrays (FPGAs) have been used as alternative computational devices for over a decade; however, they have not been used for traditional scientific computing due to their perceived lack of floating-point performance. In recent years, there has been a surge of interest in alternatives to traditional microprocessors for high performance computing. Sandia National Labs began two projects to determine whether FPGAs would be a suitable alternative to microprocessors for high performance scientific computing and, if so, how they should be integrated into the system. We present results that indicate that FPGAs could have a significant impact on future systems. FPGAs have the potential to have order of magnitude levels of performance wins on several key algorithms; however, there are serious questions as to whether the system integration challenge can be met. Furthermore, there remain challenges in FPGA programming and system level reliability when using FPGA devices. Acknowledgment: Arun Rodrigues provided valuable support and assistance in the use of the Structural Simulation Toolkit within an FPGA context. Curtis Janssen and Steve Plimpton provided valuable insights into the workings of two Sandia applications (MPQC and LAMMPS, respectively).

  13. High performance computing in science and engineering '09: transactions of the High Performance Computing Center, Stuttgart (HLRS) 2009

    National Research Council Canada - National Science Library

    Nagel, Wolfgang E; Kröner, Dietmar; Resch, Michael

    2010-01-01

    ...), NIC/JSC (Jülich), and LRZ (Munich). As part of that strategic initiative, in May 2009 NIC/JSC already installed the first phase of the GCS HPC Tier-0 resources, an IBM Blue Gene/P with roughly 300,000 cores, this time in Jülich. With that, the GCS provides the most powerful high-performance computing infrastructure in Europe alread...

  14. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both private and public sectors will require robust technologies to protect its computing infrastructure. The research outcomes from this project try to address all these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also perform testing using HA2lloc in a simulation environment and find that the approach is capable of preventing common memory vulnerabilities.

  15. Computational Fluid Dynamics (CFD) Computations With Zonal Navier-Stokes Flow Solver (ZNSFLOW) Common High Performance Computing Scalable Software Initiative (CHSSI) Software

    National Research Council Canada - National Science Library

    Edge, Harris

    1999-01-01

    ...), computational fluid dynamics (CFD) 6 project. Under the project, a proven zonal Navier-Stokes solver was rewritten for scalable parallel performance on both shared memory and distributed memory high performance computers...

  16. High quantitative job demands and low coworker support as risk factors for neck pain: Results of a prospective cohort study

    NARCIS (Netherlands)

    Ariëns, G.A.M.; Bongers, P.M.; Hoogendoorn, W.E.; Houtman, I.L.D.; Wal, G. van der; Mechelen, W. van

    2001-01-01

    Study Design. A 3-year prospective cohort study among 1334 workers was conducted. Objective. To determine whether the work-related psychosocial factors of quantitative job demands, conflicting job demands, skill discretion, decision authority, supervisor support, coworker support, and job security

  17. High Performance Computing - Power Application Programming Interface Specification.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  18. TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hsien-Hsin S

    2010-05-11

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) System-wide, unified introspection techniques for autonomic systems, (2) Secure information-flow microarchitecture, (3) Memory-centric security architecture, (4) Authentication control and its implications for security, (5) Digital rights management, (6) Microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  19. Highly efficient method for 125I-radiolabeling of biomolecules using inverse-electron-demand Diels-Alder reaction.

    Science.gov (United States)

    Choi, Mi Hee; Shim, Ha Eun; Yun, Seong-Jae; Kim, Hye Rim; Mushtaq, Sajid; Lee, Chang Heon; Park, Sang Hyun; Choi, Dae Seong; Lee, Dong-Eun; Byun, Eui-Baek; Jang, Beom-Su; Jeon, Jongho

    2016-04-19

    In this report, we present a rapid and highly efficient method for radioactive iodine labeling of trans-cyclooctene-conjugated biomolecules using the inverse-electron-demand Diels-Alder reaction. The radioiodination reaction of the tetrazine structure was carried out using the stannylated precursor 2 to give the 125I-labeled azide ([125I]1) with high radiochemical yield (65±8%) and radiochemical purity (>99%). For radiolabeling applications of [125I]1, a trans-cyclooctene-derived cRGD peptide and human serum albumin were prepared. These substrates were reacted with [125I]1 under mild conditions to provide the radiolabeled products [125I]6 and [125I]8, respectively, with excellent radiochemical yields. The biodistribution study of [125I]8 in normal ICR mice showed significantly lower thyroid uptake values than that of 125I-labeled human serum albumin prepared by a traditional radiolabeling method. Therefore [125I]8 will be a useful radiolabeled tracer in various molecular imaging and biological studies. These results clearly demonstrate that [125I]1 can serve as a valuable prosthetic group for radiolabeling of biomolecules. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. High-reliability computing for the smarter planet

    International Nuclear Information System (INIS)

    Quinn, Heather M.; Graham, Paul; Manuzzato, Andrea; Dehon, Andre

    2010-01-01

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability is necessary

  1. High-reliability computing for the smarter planet

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Heather M [Los Alamos National Laboratory; Graham, Paul [Los Alamos National Laboratory; Manuzzato, Andrea [UNIV OF PADOVA; Dehon, Andre [UNIV OF PENN; Carter, Nicholas [INTEL CORPORATION

    2010-01-01

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability is

  2. Department of Energy Mathematical, Information, and Computational Sciences Division: High Performance Computing and Communications Program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-11-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, The DOE Program in HPCC), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW).

  3. Department of Energy: MICS (Mathematical Information, and Computational Sciences Division). High performance computing and communications program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-06-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, “The DOE Program in HPCC”), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW). The information pointed to by the URL is updated frequently, and the interested reader is urged to access the WWW for the latest information.

  4. High-throughput computational search for strengthening precipitates in alloys

    International Nuclear Information System (INIS)

    Kirklin, S.; Saal, James E.; Hegde, Vinay I.; Wolverton, C.

    2016-01-01

    The search for high-strength alloys and precipitation hardened systems has largely been accomplished through Edisonian trial and error experimentation. Here, we present a novel strategy using high-throughput computational approaches to search for promising precipitate/alloy systems. We perform density functional theory (DFT) calculations of an extremely large space of ∼200,000 potential compounds in search of effective strengthening precipitates for a variety of different alloy matrices, e.g., Fe, Al, Mg, Ni, Co, and Ti. Our search strategy involves screening phases that are likely to produce coherent precipitates (based on small lattice mismatch) and are composed of relatively common alloying elements. When combined with the Open Quantum Materials Database (OQMD), we can computationally screen for precipitates that either have a stable two-phase equilibrium with the host matrix, or are likely to precipitate as metastable phases. Our search produces (for the structure types considered) nearly all currently known high-strength precipitates in a variety of fcc, bcc, and hcp matrices, thus giving us confidence in the strategy. In addition, we predict a number of new, currently-unknown precipitate systems that should be explored experimentally as promising high-strength alloy chemistries.
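
    The screening logic described above can be caricatured with a simple filter that keeps candidate phases having a small lattice mismatch with the host matrix and composed of common alloying elements. The candidate list, lattice parameters, and thresholds below are illustrative placeholders, not values taken from the OQMD or the paper.

```python
# Illustrative precipitate-screening filter in the spirit of the record:
# keep candidates whose lattice parameter is close to the host matrix
# (small mismatch favours coherency) and whose elements are common alloying
# additions.  All numbers and entries are approximate, made-up examples.

COMMON_ELEMENTS = {"Al", "Cu", "Mg", "Si", "Zn", "Ni", "Fe", "Ti", "Zr", "Sc"}

candidates = [
    {"formula": "Al3Sc", "elements": {"Al", "Sc"}, "a": 4.10},   # Angstrom, approximate
    {"formula": "Al3Zr", "elements": {"Al", "Zr"}, "a": 4.09},
    {"formula": "Al3U",  "elements": {"Al", "U"},  "a": 4.29},
]

def screen(candidates, a_host, max_mismatch=0.05):
    hits = []
    for c in candidates:
        mismatch = abs(c["a"] - a_host) / a_host
        if mismatch <= max_mismatch and c["elements"] <= COMMON_ELEMENTS:
            hits.append((c["formula"], round(mismatch, 4)))
    return hits

# Host: fcc Al with lattice parameter ~4.05 Angstrom (approximate).
print(screen(candidates, a_host=4.05))
```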

  5. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity has been lower as the Run 1 samples are being completed and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office: Data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of 2011 data being processed at the sites. MC production and processing was more in demand, with a peak of over 750 million GEN-SIM events in a single month (Figure 1). The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week, with peaks close to 1.2 PB (Figure 2). Figure 3 shows the volume of data moved between CMS sites in the last six months. Tape utilisation was a focus for the operation teams, with frequent deletion campaigns moving deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  6. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    Science.gov (United States)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and Data-intensive Science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress so far in harmonising the underlying data collections for future interdisciplinary research across these large-volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  7. Textbook Factor Demand Curves.

    Science.gov (United States)

    Davis, Joe C.

    1994-01-01

    Maintains that teachers and textbook graphics follow the same basic pattern in illustrating changes in demand curves when product prices increase. Asserts that the use of computer graphics will enable teachers to be more precise in their graphic presentation of price elasticity. (CFR)

  8. A Crafts-Oriented Approach to Computing in High School: Introducing Computational Concepts, Practices, and Perspectives with Electronic Textiles

    Science.gov (United States)

    Kafai, Yasmin B.; Lee, Eunkyoung; Searle, Kristin; Fields, Deborah; Kaplan, Eliot; Lui, Debora

    2014-01-01

    In this article, we examine the use of electronic textiles (e-textiles) for introducing key computational concepts and practices while broadening perceptions about computing. The starting point of our work was the design and implementation of a curriculum module using the LilyPad Arduino in a pre-AP high school computer science class. To…

  9. Building highly available control system applications with Advanced Telecom Computing Architecture and open standards

    International Nuclear Information System (INIS)

    Kazakov, Artem; Furukawa, Kazuro

    2010-01-01

    Requirements for modern and future control systems for large projects like the International Linear Collider demand high availability of control system components. Recently the telecom industry produced an important open hardware specification, the Advanced Telecom Computing Architecture (ATCA). This specification is aimed at better reliability, availability and serviceability. Since its first market appearance in 2004, the ATCA platform has shown tremendous growth, proved to be stable, and is well represented by a number of vendors. ATCA is an industry standard for highly available systems. On the other hand, the Service Availability Forum (SAF), a consortium of leading communications and computing companies, describes the interaction between hardware and software. SAF defines a set of specifications such as the Hardware Platform Interface and the Application Interface Specification. The SAF specifications provide an extensive description of highly available systems, services and their interfaces. Although originally aimed at telecom applications, these specifications can be used for accelerator controls software as well. This study describes the benefits of using these specifications and their possible adaptation to accelerator control systems. It is demonstrated how the EPICS Redundant IOC was extended using the Hardware Platform Interface specification, which made it possible to exploit the benefits of the ATCA platform.

  10. Computational Thermodynamics and Kinetics-Based ICME Framework for High-Temperature Shape Memory Alloys

    Science.gov (United States)

    Arróyave, Raymundo; Talapatra, Anjana; Johnson, Luke; Singh, Navdeep; Ma, Ji; Karaman, Ibrahim

    2015-11-01

    Over the last decade, considerable interest in the development of High-Temperature Shape Memory Alloys (HTSMAs) for solid-state actuation has increased dramatically as key applications in the aerospace and automotive industry demand actuation temperatures well above those of conventional SMAs. Most of the research to date has focused on establishing the (forward) connections between chemistry, processing, (micro)structure, properties, and performance. Much less work has been dedicated to the development of frameworks capable of addressing the inverse problem of establishing necessary chemistry and processing schedules to achieve specific performance goals. Integrated Computational Materials Engineering (ICME) has emerged as a powerful framework to address this problem, although it has yet to be applied to the development of HTSMAs. In this paper, the contributions of computational thermodynamics and kinetics to ICME of HTSMAs are described. Some representative examples of the use of computational thermodynamics and kinetics to understand the phase stability and microstructural evolution in HTSMAs are discussed. Some very recent efforts at combining both to assist in the design of HTSMAs and limitations to the full implementation of ICME frameworks for HTSMA development are presented.

  11. High degree utilization of computers for design of nuclear power plants

    International Nuclear Information System (INIS)

    Masui, Takao; Sawada, Takashi

    1992-01-01

    Nuclear power plants are huge systems in which various technologies are combined, and very high safety is demanded. In their design it is therefore necessary to fully grasp the behaviour of the plants and to confirm their safety by accurate design evaluations that assume various operational conditions, and the most advanced computers of each era have been utilized as the indispensable tool for these analyses and evaluations. Computer use in design covers the fields of design, analysis and evaluation, as well as the support of design work; computers are also utilized in operation control. The paper explains the utilization of computers for the core design, hydrothermal design, core structure design, safety analysis and structural analysis of PWR plants, for the nuclear design, safety analysis and heat flow analysis of FBR plants, for the support of design, and for operation control. (K.I.)

  12. Diagnosis of cholesteatoma by high resolution computed tomography

    International Nuclear Information System (INIS)

    Kakitsubata, Yousuke; Kakitsubata, Sachiko; Ogata, Noboru; Asada, Keiko; Watanabe, Katsushi; Tohno, Tetsuya; Makino, Kohji

    1988-01-01

    Three normal volunteers and 57 patients with cholesteatoma were examined by high resolution computed tomography. Serial sections were made through the temporal bone at a nasally inclined position of 30 degrees to the orbitomeatal line (semiaxial plane; SAP). The findings of the temporal bone structures in the normal subjects were evaluated in the SAP and in the axial plane (OM). Although both planes gave good visualization, the SAP showed both the eustachian tube and the tympanic cavity in one slice. In cholesteatoma, soft tissue masses in the tympanic cavity, mastoid air cells and eustachian tube were demonstrated clearly on SAP. (author)

  13. Could High Mental Demands at Work Offset the Adverse Association Between Social Isolation and Cognitive Functioning? Results of the Population-Based LIFE-Adult-Study.

    Science.gov (United States)

    Rodriguez, Francisca S; Schroeter, Matthias L; Witte, A Veronica; Engel, Christoph; Löffler, Markus; Thiery, Joachim; Villringer, Arno; Luck, Tobias; Riedel-Heller, Steffi G

    2017-11-01

    The study investigated whether high mental demands at work, which have been shown to promote good cognitive functioning in old age, could offset the adverse association between social isolation and cognitive functioning. Based on data from the population-based LIFE-Adult-Study, the association between cognitive functioning (Verbal Fluency Test, Trail Making Test B) and social isolation (Lubben Social Network Scale) as well as mental demands at work (O*NET database) was analyzed via linear regression analyses adjusted for age, sex, education, and sampling weights. Cognitive functioning was significantly lower in socially isolated individuals and in individuals working in jobs with low mental demands, even in old age after retirement and even after taking the educational level into account. An interaction effect suggested stronger effects of mental demands at work in socially isolated than in non-isolated individuals. The findings suggest that working in jobs with high mental demands could offset the adverse association between social isolation and cognitive functioning. Further research should evaluate how interventions that target social isolation and enhance mentally demanding activities promote good cognitive functioning in old age. Copyright © 2017 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.
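
    A minimal sketch of the kind of weighted regression with an isolation-by-demands interaction described above. Column names, the file name, and the data are hypothetical; the study used LIFE-Adult variables and O*NET demand scores.

        # Sketch only: weighted linear regression with an interaction term, in the spirit of
        # the analysis summarized above. All names below are placeholders, not the study's data.
        import pandas as pd
        import statsmodels.formula.api as smf

        df = pd.read_csv("life_adult_subset.csv")  # hypothetical extract of the survey data

        model = smf.wls(
            "verbal_fluency ~ isolated * high_mental_demands + age + C(sex) + C(education)",
            data=df,
            weights=df["sampling_weight"],
        ).fit()
        # The isolated:high_mental_demands coefficient is the interaction of interest.
        print(model.summary())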

  14. How Does the Presence of High Need for Recovery Affect the Association Between Perceived High Chronic Exposure to Stressful Work Demands and Work Productivity Loss?

    Science.gov (United States)

    Dewa, Carolyn S; Nieuwenhuijsen, Karen; Sluiter, Judith K

    2016-06-01

    Employers have increasingly been interested in decreasing work stress. However, little attention has been given to recovery from the exertion experienced during work. This paper addresses the question: how does the presence of high need for recovery (HNFR) affect the association between perceived high chronic exposure to stressful work demands (PHCE) and work productivity loss (WPL)? Data were from a population-based survey of 2219 Ontario workers. The Work Limitations Questionnaire was used to measure WPL. The relationship between HNFR and WPL was examined using four multiple regression models. Our results indicate that HNFR affects the association between PHCE and WPL. They also suggest that PHCE alone significantly increases the risk of WPL, and that HNFR as well as PHCE could be important factors for workplaces to target to increase worker productivity.

  15. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    Science.gov (United States)

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The infrastructure was initially deployed natively (i.e., with direct installations on a bare-metal server) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address such issues, herein, we describe recent innovations using containerization techniques with XNAT/DAX which are used to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software

  16. Quantitative analysis of cholesteatoma using high resolution computed tomography

    International Nuclear Information System (INIS)

    Kikuchi, Shigeru; Yamasoba, Tatsuya; Iinuma, Toshitaka.

    1992-01-01

    Seventy-three cases of adult cholesteatoma, including 52 cases of pars flaccida type cholesteatoma and 21 of pars tensa type cholesteatoma, were examined using high resolution computed tomography, in both axial (lateral semicircular canal plane) and coronal sections (cochlear, vestibular and antral planes). These cases were classified into two subtypes according to whether the cholesteatoma extended into the antrum. Sixty cases of chronic otitis media with central perforation (COM) were also examined as controls. Various locations in the middle ear cavity were measured and their sizes compared among pars flaccida type cholesteatoma, pars tensa type cholesteatoma and COM. The width of the attic was significantly larger in both pars flaccida type and pars tensa type cholesteatoma than in COM. With pars flaccida type cholesteatoma there was a significantly larger distance between the malleus and the lateral wall of the attic than with COM. In contrast, the distance between the malleus and the medial wall of the attic was significantly larger with pars tensa type cholesteatoma than with COM. With cholesteatoma extending into the antrum, regardless of the type of cholesteatoma, the following distances were significantly larger than with COM: the width and height of the aditus ad antrum, and the width, height and anterior-posterior diameter of the antrum. However, these distances were not significantly different between cholesteatoma without extension into the antrum and COM. The hitherto qualitative impressions of bone destruction in cholesteatoma were thus quantitatively verified in detail using high resolution computed tomography. (author)

  17. High resolution muon computed tomography at neutrino beam facilities

    International Nuclear Information System (INIS)

    Suerfu, B.; Tully, C.G.

    2016-01-01

    X-ray computed tomography (CT) has an indispensable role in constructing 3D images of objects made from light materials. However, limited by absorption coefficients, X-rays cannot deeply penetrate materials such as copper and lead. Here we show via simulation that muon beams can provide high resolution tomographic images of dense objects and of structures within the interior of dense objects. The effects of resolution broadening from multiple scattering diminish with increasing muon momentum. As the momentum of the muon increases, the contrast of the image goes down and therefore requires higher resolution in the muon spectrometer to resolve the image. The variance of the measured muon momentum reaches a minimum and then increases with increasing muon momentum. The impact of the increase in variance is to require a higher integrated muon flux to reduce fluctuations. The flux requirements and level of contrast needed for high resolution muon computed tomography are well matched to the muons produced in the pion decay pipe at a neutrino beam facility and what can be achieved for momentum resolution in a muon spectrometer. Such an imaging system can be applied in archaeology, art history, engineering, material identification and whenever there is a need to image inside a transportable object constructed of dense materials

  18. A High-Resolution Spatially Explicit Monte-Carlo Simulation Approach to Commercial and Residential Electricity and Water Demand Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Morton, April M [ORNL; McManamay, Ryan A [ORNL; Nagle, Nicholas N [ORNL; Piburn, Jesse O [ORNL; Stewart, Robert N [ORNL; Surendran Nair, Sujithkumar [ORNL

    2016-01-01

    As urban areas continue to grow and evolve in a world of increasing environmental awareness, the need for high-resolution, spatially explicit estimates of energy and water demand has become increasingly important. Though current modeling efforts mark significant progress in the effort to better understand the spatial distribution of energy and water consumption, many are provided at a coarse spatial resolution or rely on techniques that depend on detailed region-specific data sources that are not publicly available for many parts of the U.S. Furthermore, many existing methods do not account for errors in input data sources and may therefore not accurately reflect inherent uncertainties in model outputs. We propose an alternative and more flexible Monte-Carlo simulation approach to high-resolution residential and commercial electricity and water consumption modeling that relies primarily on publicly available data sources. The method's flexible data requirements and statistical framework ensure that the model is both applicable to a wide range of regions and reflective of uncertainties in model results. Keywords: Energy Modeling, Water Modeling, Monte-Carlo Simulation, Uncertainty Quantification.
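
    The core idea — propagating uncertainty in publicly available inputs through a demand model by Monte-Carlo sampling — can be conveyed with a toy sketch. All distributions and numbers below are illustrative placeholders, not the paper's model.

        # Toy Monte-Carlo sketch: propagate uncertainty in inputs (households per cell and
        # per-household use) into a demand estimate with uncertainty bounds per spatial cell.
        import numpy as np

        rng = np.random.default_rng(0)
        n_draws = 10_000
        cells = 5  # spatial units

        households = rng.poisson(lam=[120, 300, 80, 450, 60], size=(n_draws, cells))
        kwh_per_household = rng.normal(loc=900, scale=120, size=(n_draws, cells)).clip(min=0)

        demand = households * kwh_per_household              # kWh per cell, per draw
        mean = demand.mean(axis=0)
        lo, hi = np.percentile(demand, [2.5, 97.5], axis=0)  # 95% uncertainty interval

        for i, (m, l, h) in enumerate(zip(mean, lo, hi)):
            print(f"cell {i}: {m:,.0f} kWh  (95% interval {l:,.0f}-{h:,.0f})")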

  19. High resolution computed tomography of the post partum pituitary gland

    International Nuclear Information System (INIS)

    Hinshaw, D.B.; Hasso, A.N.; Thompson, J.R.; Davidson, B.J.

    1984-01-01

    Eight volunteer post partum female patients were examined with high resolution computed tomography during the week immediately after delivery. All patients received high dose (40-70 gm) intravenous iodine contrast administration. The scans were examined for pituitary gland height, shape and homogeneity. All of the patients had enlarged glands by the traditional standards (i.e. gland height of 8 mm or greater). The diaphragma sellae in every case bulged upward with a convex domed appearance. The glands were generally inhomogeneous. One gland had a 4 mm focal well defined area of decreased attenuation. Two patients who were studied again months later had glands which had returned to "normal" size. The enlarged, upwardly convex pituitary gland appears to be typical and normal for the recently post partum period. (orig.)

  20. FPGA based compute nodes for high level triggering in PANDA

    International Nuclear Information System (INIS)

    Kuehn, W; Gilardi, C; Kirschner, D; Lang, J; Lange, S; Liu, M; Perez, T; Yang, S; Schmitt, L; Jin, D; Li, L; Liu, Z; Lu, Y; Wang, Q; Wei, S; Xu, H; Zhao, D; Korcyl, K; Otwinowski, J T; Salabura, P

    2008-01-01

    PANDA is a new universal detector for antiproton physics at the HESR facility at FAIR/GSI. The PANDA data acquisition system has to handle interaction rates of the order of 10⁷/s and data rates of several 100 Gb/s. FPGA based compute nodes with multi-Gb/s bandwidth capability using the ATCA architecture are designed to handle tasks such as event building, feature extraction and high level trigger processing. Data connectivity is provided via optical links as well as multiple Gb Ethernet ports. The boards will support trigger algorithms such as pattern recognition for RICH detectors, EM shower analysis, fast tracking algorithms and global event characterization. Besides VHDL, high level C-like hardware description languages will be considered to implement the firmware.

  1. QSPIN: A High Level Java API for Quantum Computing Experimentation

    Science.gov (United States)

    Barth, Tim

    2017-01-01

    QSPIN is a high level Java language API for experimentation in QC models used in the calculation of Ising spin glass ground states and related quadratic unconstrained binary optimization (QUBO) problems. The Java API is intended to facilitate research in advanced QC algorithms such as hybrid quantum-classical solvers, automatic selection of constraint and optimization parameters, and techniques for the correction and mitigation of model and solution errors. QSPIN includes high level solver objects tailored to the D-Wave quantum annealing architecture that implement hybrid quantum-classical algorithms [Booth et al.] for solving large problems on small quantum devices, elimination of variables via roof duality, and classical computing optimization methods such as GPU accelerated simulated annealing and tabu search for comparison. A test suite of documented NP-complete applications ranging from graph coloring, covering, and partitioning to integer programming and scheduling is provided to demonstrate current capabilities.
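
    For context, the classical baseline such a toolkit compares against can be sketched in a few lines: a plain simulated-annealing solver for a tiny QUBO instance. This is not the QSPIN API; the matrix and cooling schedule below are arbitrary examples.

        # Generic classical simulated annealing for a small QUBO: minimize x^T Q x over binary x.
        import math
        import random

        Q = [[-1.0, 2.0, 0.0],
             [ 0.0, -1.0, 2.0],
             [ 0.0, 0.0, -1.0]]   # hypothetical QUBO matrix
        n = len(Q)

        def energy(x):
            return sum(Q[i][j] * x[i] * x[j] for i in range(n) for j in range(n))

        def anneal(steps=5000, t0=2.0, t1=0.01, seed=1):
            random.seed(seed)
            x = [random.randint(0, 1) for _ in range(n)]
            e = energy(x)
            best, best_e = x[:], e
            for k in range(steps):
                t = t0 * (t1 / t0) ** (k / steps)   # geometric cooling schedule
                i = random.randrange(n)
                x[i] ^= 1                           # propose a single bit flip
                e_new = energy(x)
                if e_new <= e or random.random() < math.exp((e - e_new) / t):
                    e = e_new                       # accept the move
                    if e < best_e:
                        best, best_e = x[:], e
                else:
                    x[i] ^= 1                       # reject: undo the flip
            return best, best_e

        print(anneal())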

  2. Ground-glass opacity: High-resolution computed tomography and 64-multi-slice computed tomography findings comparison

    International Nuclear Information System (INIS)

    Sergiacomi, Gianluigi; Ciccio, Carmelo; Boi, Luca; Velari, Luca; Crusco, Sonia; Orlacchio, Antonio; Simonetti, Giovanni

    2010-01-01

    Objective: To comparatively evaluate ground-glass opacity using the conventional high-resolution computed tomography technique and volumetric computed tomography with a 64-row multi-slice scanner, verifying the advantages of the volumetric acquisition and post-processing techniques allowed by the 64-row CT scanner. Methods: Thirty-four patients, in whom a ground-glass opacity pattern had been identified on previous high-resolution computed tomography during clinical-radiological follow-up of their lung disease, were studied by means of 64-row multi-slice computed tomography. A comparative evaluation of image quality was done for both CT modalities. Results: Good inter-observer agreement (k value 0.78-0.90) was found in the detection of ground-glass opacity with both the high-resolution computed tomography technique and the volumetric computed tomography acquisition, with a moderate increase in intra-observer agreement (k value 0.46) using volumetric computed tomography rather than high-resolution computed tomography. Conclusions: In our experience, volumetric computed tomography with a 64-row scanner shows good accuracy in the detection of ground-glass opacity, providing better spatial and temporal resolution and more advanced post-processing techniques than high-resolution computed tomography.

  3. Definition, modeling and simulation of a grid computing system for high throughput computing

    CERN Document Server

    Caron, E; Tsaregorodtsev, A Yu

    2006-01-01

    In this paper, we study and compare grid and global computing systems and outline the benefits of having a hybrid system called DIRAC. To evaluate the DIRAC scheduling for high throughput computing, a new model is presented and a simulator was developed for many clusters of heterogeneous nodes belonging to a local network. These clusters are assumed to be connected to each other through a global network and each cluster is managed via a local scheduler which is shared by many users. We validate our simulator by comparing the experimental and analytical results of an M/M/4 queuing system. Next, we do the comparison with a real batch system and we obtain an average error of 10.5% for the response time and 12% for the makespan. We conclude that the simulator is realistic and describes well the behaviour of a large-scale system. Thus we can study the scheduling of our system called DIRAC in a high throughput context. We justify our decentralized, adaptive and opportunistic approach in comparison to a centralize...
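
    The analytical M/M/4 reference used to validate such a simulator follows from standard queueing formulas (Erlang C); a short sketch with example arrival and service rates, not the paper's values:

        # Analytical M/M/c metrics of the kind used to validate the simulator against an M/M/4 queue.
        from math import factorial

        def mmc_metrics(lam, mu, c):
            """Mean response time and queueing probability for an M/M/c queue."""
            rho = lam / (c * mu)
            assert rho < 1, "queue is unstable"
            a = lam / mu
            p0 = 1.0 / (sum(a**k / factorial(k) for k in range(c))
                        + a**c / (factorial(c) * (1 - rho)))
            erlang_c = (a**c / (factorial(c) * (1 - rho))) * p0   # P(wait > 0)
            wq = erlang_c / (c * mu - lam)                        # mean waiting time
            return wq + 1.0 / mu, erlang_c                        # mean response time, P(wait)

        print(mmc_metrics(lam=3.0, mu=1.0, c=4))  # e.g. 3 jobs/s arriving, 4 servers at 1 job/s each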

  4. Real-time Tsunami Inundation Prediction Using High Performance Computers

    Science.gov (United States)

    Oishi, Y.; Imamura, F.; Sugawara, D.

    2014-12-01

    Recently, off-shore tsunami observation stations based on cabled ocean bottom pressure gauges are actively being deployed, especially in Japan. These cabled systems are designed to provide real-time tsunami data before tsunamis reach coastlines, for disaster mitigation purposes. To receive real benefits from these observations, real-time analysis techniques that make effective use of the data are necessary. A representative study was made by Tsushima et al. (2009), who proposed a method to provide instant tsunami source prediction based on acquired tsunami waveform data. As time passes, the prediction is improved by using updated waveform data. After a tsunami source is predicted, tsunami waveforms are synthesized from pre-computed tsunami Green functions of linear long wave equations. Tsushima et al. (2014) updated the method by combining the tsunami waveform inversion with an instant inversion of coseismic crustal deformation and improved the prediction accuracy and speed in the early stages. For disaster mitigation purposes, real-time predictions of tsunami inundation are also important. In this study, we discuss the possibility of real-time tsunami inundation predictions, which require faster-than-real-time tsunami inundation simulation in addition to instant tsunami source analysis. Although the computational cost of solving the non-linear shallow water equations for inundation predictions is large, such simulations have become executable through the recent developments of high performance computing technologies. We conducted parallel computations of tsunami inundation and achieved 6.0 TFLOPS by using 19,000 CPU cores. We employed a leap-frog finite difference method with nested staggered grids whose resolutions range from 405 m to 5 m. The resolution ratio of each nested domain was 1/3. The total number of grid points was 13 million, and the time step was 0.1 seconds. Tsunami sources of the 2011 Tohoku-oki earthquake were tested. The inundation prediction up to 2 hours after the
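
    The numerical core described above can be conveyed with a minimal one-dimensional linear shallow-water solver on a staggered grid with leap-frog-style time stepping. The real model is two-dimensional, nonlinear, and uses nested grids (405 m down to 5 m); the toy below uses made-up parameters.

        # Minimal 1-D linear shallow-water toy on a staggered grid (not the authors' code).
        import numpy as np

        g, h0 = 9.81, 4000.0               # gravity, uniform depth (m)
        dx, nx = 5000.0, 400               # grid spacing (m) and number of cells
        dt = 0.5 * dx / np.sqrt(g * h0)    # CFL-limited time step

        eta = np.exp(-((np.arange(nx) - nx / 2) * dx / 5e4) ** 2)  # initial sea-surface hump (m)
        u = np.zeros(nx + 1)               # velocities live on cell faces (reflective walls)

        for _ in range(1000):
            # momentum: u_t = -g * eta_x  (interior faces)
            u[1:-1] -= g * dt * (eta[1:] - eta[:-1]) / dx
            # continuity: eta_t = -h0 * u_x
            eta -= h0 * dt * (u[1:] - u[:-1]) / dx

        print(f"max surface elevation after {1000 * dt:.0f} s: {eta.max():.3f} m")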

  5. Investigation of Vocational High-School Students' Computer Anxiety

    Science.gov (United States)

    Tuncer, Murat; Dogan, Yunus; Tanas, Ramazan

    2013-01-01

    With the advent of computer technologies, we increasingly encounter them in every field of life. The fact that computer technology is so closely interwoven with daily life makes it necessary to investigate the psychological attitudes towards computers of those who work with them. As this study is limited to…

  6. High performance computing network for cloud environment using simulators

    OpenAIRE

    Singh, N. Ajith; Hemalatha, M.

    2012-01-01

    Cloud computing is the next generation of computing. Adopting cloud computing is like signing up for a new kind of website: the GUI that controls the cloud directly controls the hardware resources and your application. The difficult part of cloud computing is deploying it in a real environment. It is difficult to know the exact cost and resource requirements until the service is actually bought, or whether it will support the existing application that is available on traditional...

  7. Proceedings of the workshop on high resolution computed microtomography (CMT)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-02-01

    The purpose of the workshop was to determine the status of the field, to define instrumental and computational requirements, and to establish minimum specifications required by possible users. The most important message sent by implementers was the reminder that CMT is a tool. It solves a wide spectrum of scientific problems and is complementary to other microscopy techniques, with certain important advantages that the other methods do not have. High-resolution CMT can be used non-invasively and non-destructively to study a variety of hierarchical three-dimensional microstructures, which in turn control body function. X-ray computed microtomography can also be used at the frontiers of physics, in the study of granular systems, for example. With high-resolution CMT, for example, three-dimensional pore geometries and topologies of soils and rocks can be obtained readily and implemented directly in transport models. In turn, these geometries can be used to calculate fundamental physical properties, such as permeability and electrical conductivity, from first principles. Clearly, use of the high-resolution CMT technique will contribute tremendously to the advancement of current R and D technologies in the production, transport, storage, and utilization of oil and natural gas. It can also be applied to problems related to environmental pollution, particularly to spills and seepage of hazardous chemicals into the Earth's subsurface. Applications to energy and environmental problems will be far-ranging and may soon extend to disciplines such as materials science--where the method can be used in the manufacture of porous ceramics, filament-resin composites, and microelectronics components--and to biomedicine, where it could be used to design biocompatible materials such as artificial bones, contact lenses, or medication-releasing implants. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  8. Computation of High-Frequency Waves with Random Uncertainty

    KAUST Repository

    Malenova, Gabriela

    2016-01-06

    We consider the forward propagation of uncertainty in high-frequency waves, described by the second order wave equation with highly oscillatory initial data. The main sources of uncertainty are the wave speed and/or the initial phase and amplitude, described by a finite number of random variables with known joint probability distribution. We propose a stochastic spectral asymptotic method [1] for computing the statistics of uncertain output quantities of interest (QoIs), which are often linear or nonlinear functionals of the wave solution and its spatial/temporal derivatives. The numerical scheme combines two techniques: a high-frequency method based on Gaussian beams [2, 3] and a sparse stochastic collocation method [4]. The fast spectral convergence of the proposed method depends crucially on the presence of high stochastic regularity of the QoI independent of the wave frequency. In general, high-frequency wave solutions to parametric hyperbolic equations are highly oscillatory and non-smooth in both physical and stochastic spaces. Consequently, the stochastic regularity of the QoI, which is a functional of the wave solution, may in principle be low and depend on frequency. In the present work, we provide theoretical arguments and numerical evidence that physically motivated QoIs based on local averages of |u^ε|² are smooth, with derivatives in the stochastic space uniformly bounded in ε, where u^ε and ε denote the highly oscillatory wave solution and the short wavelength, respectively. This observable-related regularity makes the proposed approach more efficient than current asymptotic approaches based on Monte Carlo sampling techniques.
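
    The key point — that collocation in the random variable converges quickly when the QoI is a smooth function of it, even though the underlying field is highly oscillatory — can be seen in a toy one-dimensional (non-sparse) example. The QoI below is a stand-in for a local average of |u^ε|², not the paper's wave solver.

        # Toy illustration: approximate E[Q(Y)] for a smooth QoI by Gauss-Legendre collocation
        # in the random variable and compare with plain Monte Carlo.
        import numpy as np

        eps = 1e-3                           # short wavelength

        def qoi(y):
            """Local average of cos^2(x/eps + y): smooth in y despite the oscillatory integrand."""
            x = np.linspace(0.0, 1.0, 20001)
            return float(np.mean(np.cos(x / eps + y) ** 2))

        # Gauss-Legendre collocation for Y ~ Uniform(-1, 1): only 5 solver calls
        nodes, weights = np.polynomial.legendre.leggauss(5)
        e_colloc = 0.5 * sum(w * qoi(y) for y, w in zip(nodes, weights))

        # Monte Carlo reference: 2000 solver calls
        rng = np.random.default_rng(0)
        e_mc = np.mean([qoi(y) for y in rng.uniform(-1, 1, 2000)])

        print(e_colloc, e_mc)   # both close to 0.5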

  9. A highly efficient parallel algorithm for solving the neutron diffusion nodal equations on shared-memory computers

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Kirk, B.L.

    1990-01-01

    Modern parallel computer architectures offer an enormous potential for reducing CPU and wall-clock execution times of large-scale computations commonly performed in various applications in science and engineering. Recently, several authors have reported their efforts in developing and implementing parallel algorithms for solving the neutron diffusion equation on a variety of shared- and distributed-memory parallel computers. Testing of these algorithms for a variety of two- and three-dimensional meshes showed significant speedup of the computation. Even for very large problems (i.e., three-dimensional fine meshes) executed concurrently on a few nodes in serial (nonvector) mode, however, the measured computational efficiency is very low (40 to 86%). In this paper, the authors present a highly efficient (∼85 to 99.9%) algorithm for solving the two-dimensional nodal diffusion equations on the Sequent Balance 8000 parallel computer. Also presented is a model for the performance, represented by the efficiency, as a function of problem size and the number of participating processors. The model is validated through several tests and then extrapolated to larger problems and more processors to predict the performance of the algorithm in more computationally demanding situations
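
    The kind of performance model mentioned — efficiency as a function of problem size and processor count — can be sketched generically as serial work that divides across processors plus an overhead that does not. The formula and constants below are illustrative, not the paper's fitted model.

        # Generic fixed-overhead efficiency model: E(N, p) = T_serial / (p * T_parallel).
        def efficiency(n_nodes, p, t_node=1.0e-4, t_overhead=5.0e-3):
            """n_nodes: problem size (mesh nodes); p: processors.
            t_node: compute time per node; t_overhead: per-step synchronization cost.
            All constants are illustrative placeholders."""
            t_serial = n_nodes * t_node
            t_parallel = t_serial / p + t_overhead
            return t_serial / (p * t_parallel)

        for p in (1, 2, 4, 8):
            print(p, round(efficiency(n_nodes=10_000, p=p), 3))  # efficiency falls slowly with p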

  10. Computational Fluid Dynamics Analysis of High Injection Pressure Blended Biodiesel

    Science.gov (United States)

    Khalid, Amir; Jaat, Norrizam; Faisal Hushim, Mohd; Manshoor, Bukhari; Zaman, Izzuddin; Sapit, Azwan; Razali, Azahari

    2017-08-01

    Biodiesel has great potential as a substitute for petroleum fuel for the purpose of achieving clean energy production and emission reduction. Among the methods that can control the combustion properties, controlling the fuel injection conditions is one of the most successful. The purpose of this study is to investigate the effect of high injection pressure of biodiesel blends on spray characteristics using Computational Fluid Dynamics (CFD). Injection pressures of 220 MPa, 250 MPa and 280 MPa were examined. The ambient temperature was held at 1050 K and the ambient pressure at 8 MPa in order to simulate the effect of boost pressure or a turbocharger during the combustion process. Computational Fluid Dynamics was used to investigate the spray characteristics of the biodiesel blends, such as spray penetration length, spray angle and fuel-air mixture formation. The results show that, as the injection pressure increases, a wider spray angle is produced by both the biodiesel blends and diesel fuel. The injection pressure strongly affects the mixture formation and the characteristics of the fuel spray; the longer spray penetration length thus promotes fuel-air mixing.

  11. Computational aspects in high intensity ultrasonic surgery planning.

    Science.gov (United States)

    Pulkkinen, A; Hynynen, K

    2010-01-01

    Therapeutic ultrasound treatment planning is discussed and computational aspects regarding it are reviewed. Nonlinear ultrasound simulations were solved with a combined frequency domain Rayleigh and KZK model. Ultrasonic simulations were combined with thermal simulations and were used to compute heating of muscle tissue in vivo for four different focused ultrasound transducers. The simulations were compared with measurements and good agreement was found for large F-number transducers. However, at F# 1.9 the simulated rate of temperature rise was approximately a factor of 2 higher than the measured ones. The power levels used with the F# 1 transducer were too low to show any nonlinearity. The simulations were used to investigate the importance of nonlinearities generated in the coupling water, and also the importance of including skin in the simulations. Ignoring either of these in the model would lead to larger errors. Most notably, the nonlinearities generated in the water can enhance the focal temperature by more than 100%. The simulations also demonstrated that pulsed high power sonications may provide an opportunity to significantly (up to a factor of 3) reduce the treatment time. In conclusion, nonlinear propagation can play an important role in shaping the energy distribution during a focused ultrasound treatment and it should not be ignored in planning. However, the current simulation methods are accurate only with relatively large F-numbers and better models need to be developed for sharply focused transducers. Copyright 2009 Elsevier Ltd. All rights reserved.

  12. Performance management of high performance computing for medical image processing in Amazon Web Services

    Science.gov (United States)

    Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha

    2016-03-01

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
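
    The cost/benefit trade-off mentioned at the end — local serial execution versus a pay-per-use cloud run — can be sketched with generic placeholder formulae and prices; these are not the formulae derived in the paper.

        # Illustrative break-even comparison between local serial execution and a cloud run.
        def local_time(n_jobs, t_job_hr):
            return n_jobs * t_job_hr                               # serial wall-clock hours

        def cloud_time_and_cost(n_jobs, t_job_hr, n_nodes, price_per_node_hr, overhead_hr=0.25):
            wall = (n_jobs / n_nodes) * t_job_hr + overhead_hr     # embarrassingly parallel jobs
            cost = n_nodes * wall * price_per_node_hr
            return wall, cost

        n_jobs, t_job = 500, 0.5                                   # e.g. 500 subjects, 30 min each
        print("local :", local_time(n_jobs, t_job), "h, no marginal cost")
        for nodes in (10, 50, 100):
            wall, cost = cloud_time_and_cost(n_jobs, t_job, nodes, price_per_node_hr=0.20)
            print(f"cloud : {nodes:3d} nodes -> {wall:6.2f} h, ${cost:6.2f}")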

  13. Performance Management of High Performance Computing for Medical Image Processing in Amazon Web Services.

    Science.gov (United States)

    Bao, Shunxing; Damon, Stephen M; Landman, Bennett A; Gokhale, Aniruddha

    2016-02-27

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.

  14. Development and application of computer network for working out of researches on high energy physics

    International Nuclear Information System (INIS)

    Boos, Eh.G.; Tashimov, M.A.

    2001-01-01

    The computer network of the Physical and Technological Institute of the Ministry of Science and Education of the Republic of Kazakhstan (FTI of MSE RK) joins a number of research institutions, leading universities and other enterprises of Almaty city. At present more than 350 computers are connected to this network, and the speed of the satellite channel has been increased to 192 kbit/s for reception. The university segments of the network are separated into an individual domain. New software for the analysis and processing of experimental data has been implemented, and other measures have been carried out as well. However, the increasing volume of information exchange between nuclear physics centres demands further development of the network. To meet users' demands for information exchange in the coming years, the paper considers the following measures: (1) increasing the satellite channel speed to 1-2 Mbit/s by replacing the existing SDM-100 modem with a faster one; the Kedr-M station and the CISCO-2501 router now in use allow such speeds to be provided; (2) converting the Institute's local network to the new Fast Ethernet technology, permitting the data transmission speed to be increased to 100 Mbit/s with full continuity with the existing Ethernet; (3) installing a proxy server (firewall) at the network support node, which makes it possible to offload the satellite channel and to localize the network segment connected with Internet-based learning without harming the educational process. In the framework of cooperation with the German accelerator centre DESY, data on about two hundred thousand deep inelastic interactions of electrons with protons measured at the ZEUS detector have been obtained with the help of this network. Data on about ten thousand events simulated at the OPAL installation have been received as well. Besides, the computer network is used for operative information exchange and

  15. On energy demand

    International Nuclear Information System (INIS)

    Haefele, W.

    1977-01-01

    Since the energy crisis, a number of energy plans have been proposed, and almost all of these envisage some kind of energy demand adaptations or conservation measures, hoping thus to escape the anticipated problems of energy supply. However, there seems to be no clear explanation of the basis on which our foreseeable future energy problems could be eased. And in fact, a first attempt at a more exact definition of energy demand and its interaction with other objectives, such as economic ones, shows that it is a highly complex concept which we still hardly understand. The article explains in some detail why it is so difficult to understand energy demand

  16. Trends in high-performance computing for engineering calculations.

    Science.gov (United States)

    Giles, M B; Reguly, I

    2014-08-13

    High-performance computing has evolved remarkably over the past 20 years, and that progress is likely to continue. However, in recent years, this progress has been achieved through greatly increased hardware complexity with the rise of multicore and manycore processors, and this is affecting the ability of application developers to achieve the full potential of these systems. This article outlines the key developments on the hardware side, both in the recent past and in the near future, with a focus on two key issues: energy efficiency and the cost of moving data. It then discusses the much slower evolution of system software, and the implications of all of this for application developers. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  17. Power/energy use cases for high performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kelly, Suzanne M. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hammond, Steven [National Renewable Energy Lab. (NREL), Golden, CO (United States); Elmore, Ryan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Munch, Kristin [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    Power and Energy have been identified as a first order challenge for future extreme scale high performance computing (HPC) systems. In practice the breakthroughs will need to be provided by the hardware vendors. But to make the best use of the solutions in an HPC environment, it will likely require periodic tuning by facility operators and software components. This document describes the actions and interactions needed to maximize power resources. It strives to cover the entire operational space that an HPC system occupies. The descriptions are presented as formal use cases, as documented in the Unified Modeling Language Specification [1]. The document is intended to provide a common understanding to the HPC community of the necessary management and control capabilities. Assuming a common understanding can be achieved, the next step will be to develop a set of Application Programming Interfaces (APIs) that hardware vendors and software developers could utilize to steer power consumption.

  18. High resolution computed tomography of the middle ear

    International Nuclear Information System (INIS)

    Ikeda, Katsuhisa; Sakurai, Tokio; Saijo, Shigeru; Kobayashi, Toshimitsu

    1983-01-01

    High resolution computed tomography was performed in 57 cases with various middle ear diseases (chronic otitis media, otitis media with effusion, acute otitis media and atelectasis). Although further improvement in detectability is necessary in order to discriminate each type of soft tissue lesion, CT is the most useful method currently available for detecting the small structures and soft tissue lesions of the middle ear. In particular, the lesions at the tympanic isthmus and tympanic fold could be detected clearly only by CT. In acute otitis media, lesions usually started in the attic and spread to the mastoid air cells. In otitis media with effusion, the soft tissue shadow was observed in the attic and mastoid air cells. CT is valuable in the diagnosis, evaluation of treatment and prognosis, and analysis of pathophysiology of middle ear diseases. (author)

  19. Scalability of DL_POLY on High Performance Computing Platform

    Directory of Open Access Journals (Sweden)

    Mabule Samuel Mabakane

    2017-12-01

    Full Text Available This paper presents a case study on the scalability of several versions of the molecular dynamics code DL_POLY performed on South Africa's Centre for High Performance Computing e1350 IBM Linux cluster, Sun system and Lengau supercomputers. Within this study, different problem sizes were designed and the same chosen systems were employed in order to test the performance of DL_POLY using weak and strong scaling. It was found that the speed-up results for the small systems were better than for the large systems on both the Ethernet and InfiniBand networks. However, simulations of large systems in DL_POLY performed well using the InfiniBand network on the Lengau cluster compared with the e1350 and Sun supercomputers.

  20. Electromagnetic Modeling of Human Body Using High Performance Computing

    Science.gov (United States)

    Ng, Cho-Kuen; Beall, Mark; Ge, Lixin; Kim, Sanghoek; Klaas, Ottmar; Poon, Ada

    Realistic simulation of electromagnetic wave propagation in the actual human body can expedite the investigation of wirelessly powering implanted devices from external sources. The parallel electromagnetics code suite ACE3P, developed at SLAC National Accelerator Laboratory, is based on the finite element method for high fidelity accelerator simulation and can be enhanced to model electromagnetic wave propagation in the human body. Starting with a CAD model of a human phantom that is characterized by a number of tissues, a finite element mesh representing the complex geometries of the individual tissues is built for simulation. Employing an optimal power source with a specific pattern of field distribution, the propagation and focusing of electromagnetic waves in the phantom have been demonstrated. Substantial speedup of the simulation is achieved by using multiple compute cores on supercomputers.

  1. Paracoccidioidomycosis: High-resolution computed tomography-pathologic correlation

    International Nuclear Information System (INIS)

    Marchiori, Edson; Valiante, Paulo Marcos; Mano, Claudia Mauro; Zanetti, Glaucia; Escuissato, Dante L.; Souza, Arthur Soares; Capone, Domenico

    2011-01-01

    Objective: The purpose of this study was to describe the high-resolution computed tomography (HRCT) features of pulmonary paracoccidioidomycosis and to correlate them with pathologic findings. Methods: The study included 23 adult patients with pulmonary paracoccidioidomycosis. All patients had undergone HRCT, and the images were retrospectively analyzed by two chest radiologists, who reached decisions by consensus. An experienced lung pathologist reviewed all pathological specimens. The HRCT findings were correlated with histopathologic data. Results: The predominant HRCT findings included areas of ground-glass opacities, nodules, interlobular septal thickening, airspace consolidation, cavitation, and fibrosis. The main pathological features consisted of alveolar and interlobular septal inflammatory infiltration, granulomas, alveolar exudate, cavitation secondary to necrosis, and fibrosis. Conclusion: Paracoccidioidomycosis can present different tomography patterns, which can involve both the interstitium and the airspace. These abnormalities can be pathologically correlated with inflammatory infiltration, granulomatous reaction, and fibrosis.

  2. High-order computer-assisted estimates of topological entropy

    Science.gov (United States)

    Grote, Johannes

    The concept of Taylor Models is introduced, which offers highly accurate C⁰-estimates for the enclosures of functional dependencies, combining high-order Taylor polynomial approximation of functions and rigorous estimates of the truncation error, performed using verified interval arithmetic. The focus of this work is on the application of Taylor Models in algorithms for strongly nonlinear dynamical systems. A method to obtain sharp rigorous enclosures of Poincare maps for certain types of flows and surfaces is developed and numerical examples are presented. Differential algebraic techniques allow the efficient and accurate computation of polynomial approximations for invariant curves of certain planar maps around hyperbolic fixed points. Subsequently we introduce a procedure to extend these polynomial curves to verified Taylor Model enclosures of local invariant manifolds with C⁰-errors of size 10⁻¹⁰-10⁻¹⁴, and proceed to generate the global invariant manifold tangle up to comparable accuracy through iteration in Taylor Model arithmetic. Knowledge of the global manifold structure up to finite iterations of the local manifold pieces enables us to find all homoclinic and heteroclinic intersections in the generated manifold tangle. Combined with the mapping properties of the homoclinic points and their ordering we are able to construct a subshift of finite type as a topological factor of the original planar system to obtain rigorous lower bounds for its topological entropy. This construction is fully automatic and yields homoclinic tangles with several hundred homoclinic points. As an example rigorous lower bounds for the topological entropy of the Henon map are computed, which to the best knowledge of the authors yield the largest such estimates published so far.
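
    The "polynomial plus rigorous remainder" idea behind Taylor Models can be conveyed with a deliberately simple one-dimensional sketch; the real machinery is high order, multivariate, and uses directed rounding, none of which is reproduced here.

        # Enclose exp(x) on [-0.1, 0.1] by its degree-3 Taylor polynomial plus an interval
        # bound on the Lagrange remainder (toy illustration only; ignores rounding control).
        import math

        def enclose_exp(x_lo, x_hi, degree=3):
            r = max(abs(x_lo), abs(x_hi))
            # |R_n(x)| <= e^r * r^(n+1) / (n+1)!  on [-r, r]
            rem = math.exp(r) * r ** (degree + 1) / math.factorial(degree + 1)
            def poly(x):
                return sum(x ** k / math.factorial(k) for k in range(degree + 1))
            # The truncated series is monotone on this small interval, so endpoints bound its range
            p_lo, p_hi = sorted((poly(x_lo), poly(x_hi)))
            return p_lo - rem, p_hi + rem

        lo, hi = enclose_exp(-0.1, 0.1)
        print(lo <= math.exp(-0.1) <= math.exp(0.1) <= hi)   # True: an enclosure, up to rounding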

  3. Development of superconductor electronics technology for high-end computing

    Energy Technology Data Exchange (ETDEWEB)

    Silver, A [Jet Propulsion Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109-8099 (United States); Kleinsasser, A [Jet Propulsion Laboratory, 4800 Oak Grove Drive, Pasadena, CA 91109-8099 (United States); Kerber, G [Northrop Grumman Space Technology, One Space Park, Redondo Beach, CA 90278 (United States); Herr, Q [Northrop Grumman Space Technology, One Space Park, Redondo Beach, CA 90278 (United States); Dorojevets, M [Department of Electrical and Computer Engineering, SUNY-Stony Brook, NY 11794-2350 (United States); Bunyk, P [Northrop Grumman Space Technology, One Space Park, Redondo Beach, CA 90278 (United States); Abelson, L [Northrop Grumman Space Technology, One Space Park, Redondo Beach, CA 90278 (United States)

    2003-12-01

    This paper describes our programme to develop and demonstrate ultra-high performance single flux quantum (SFQ) VLSI technology that will enable superconducting digital processors for petaFLOPS-scale computing. In the hybrid technology, multi-threaded architecture, the computational engine to power a petaFLOPS machine at affordable power will consist of 4096 SFQ multi-chip processors, with 50 to 100 GHz clock frequency and associated cryogenic RAM. We present the superconducting technology requirements, progress to date and our plan to meet these requirements. We improved SFQ Nb VLSI by two generations, to a 8 kA cm⁻², 1.25 μm junction process, incorporated new CAD tools into our methodology, demonstrated methods for recycling the bias current and data communication at speeds up to 60 Gb s⁻¹, both on and between chips through passive transmission lines. FLUX-1 is the most ambitious project implemented in SFQ technology to date, a prototype general-purpose 8 bit microprocessor chip. We are testing the FLUX-1 chip (5K gates, 20 GHz clock) and designing a 32 bit floating-point SFQ multiplier with vector-register memory. We report correct operation of the complete stripline-connected gate library with large bias margins, as well as several larger functional units used in FLUX-1. The next stage will be an SFQ multi-processor machine. Important challenges include further reducing chip supply current and on-chip power dissipation, developing at least 64 kbit, sub-nanosecond cryogenic RAM chips, developing thermally and electrically efficient high data rate cryogenic-to-ambient input/output technology and improving Nb VLSI to increase gate density.

  4. Development of superconductor electronics technology for high-end computing

    International Nuclear Information System (INIS)

    Silver, A; Kleinsasser, A; Kerber, G; Herr, Q; Dorojevets, M; Bunyk, P; Abelson, L

    2003-01-01

    This paper describes our programme to develop and demonstrate ultra-high performance single flux quantum (SFQ) VLSI technology that will enable superconducting digital processors for petaFLOPS-scale computing. In the hybrid technology, multi-threaded architecture, the computational engine to power a petaFLOPS machine at affordable power will consist of 4096 SFQ multi-chip processors, with 50 to 100 GHz clock frequency and associated cryogenic RAM. We present the superconducting technology requirements, progress to date and our plan to meet these requirements. We improved SFQ Nb VLSI by two generations, to an 8 kA cm^-2, 1.25 μm junction process, incorporated new CAD tools into our methodology, and demonstrated methods for recycling the bias current and data communication at speeds up to 60 Gb s^-1, both on and between chips through passive transmission lines. FLUX-1 is the most ambitious project implemented in SFQ technology to date, a prototype general-purpose 8 bit microprocessor chip. We are testing the FLUX-1 chip (5K gates, 20 GHz clock) and designing a 32 bit floating-point SFQ multiplier with vector-register memory. We report correct operation of the complete stripline-connected gate library with large bias margins, as well as several larger functional units used in FLUX-1. The next stage will be an SFQ multi-processor machine. Important challenges include further reducing chip supply current and on-chip power dissipation, developing at least 64 kbit, sub-nanosecond cryogenic RAM chips, developing thermally and electrically efficient high data rate cryogenic-to-ambient input/output technology and improving Nb VLSI to increase gate density.

  5. Concept and computation of radiation dose at high energies

    International Nuclear Information System (INIS)

    Sarkar, P.K.

    2010-01-01

    Computational dosimetry, a subdiscipline of computational physics devoted to radiation metrology, is determination of absorbed dose and other dose related quantities by numbers. Computations are done separately both for external and internal dosimetry. The methodology used in external beam dosimetry is necessarily a combination of experimental radiation dosimetry and theoretical dose computation since it is not feasible to plan any physical dose measurements from inside a living human body

  6. COMPUTING

    CERN Multimedia

    M. Kasemann and P. McBride; edited by M-C. Sawley, with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini and M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  7. Computer Simulation Studies of Ion Channels at High Temperatures

    Science.gov (United States)

    Song, Hyun Deok

    The gramicidin channel is the smallest known biological ion channel, and it exhibits cation selectivity. Recently, Dr. John Cuppoletti's group at the University of Cincinnati showed that the gramicidin channel can function at high temperatures (360 ˜ 380K) with significant currents. This finding may have significant implications for fuel cell technology. In this thesis, we have examined the gramicidin channel at 300K, 330K, and 360K by computer simulation. We have investigated how the temperature affects the current and differences in magnitude of free energy between the two gramicidin forms, the helical dimer (HD) and the double helix (DH). A slight decrease of the free energy barrier inside the gramicidin channel and increased diffusion at high temperatures result in an increase of current. An applied external field of 0.2V/nm along the membrane normal results in directly observable ion transport across the channels at high temperatures for both HD and DH forms. We found that higher temperatures also affect the probability distribution of hydrogen bonds, the bending angle, the distance between dimers, and the size of the pore radius for the helical dimer structure. These findings may be related to the gating of the gramicidin channel. Methanococcus jannaschii (MJ) is a methane-producing thermophile, which was discovered at a depth of 2600m in a Pacific Ocean vent in 1983. It has the ability to thrive at high temperatures and high pressures, which are unfavorable for most life forms. There have been some experiments to study its stability under extreme conditions, but still the origin of the stability of MJ is not exactly known. MJ0305 is the chloride channel protein from the thermophile MJ. After generating a structure of MJ0305 by homology modeling based on the Ecoli ClC templates, we examined the thermal stability, and the network stability from the change of network entropy calculated from the adjacency matrices of the protein. High temperatures increase the

  8. FY 1995 Blue Book: High Performance Computing and Communications: Technology for the National Information Infrastructure

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — The Federal High Performance Computing and Communications HPCC Program was created to accelerate the development of future generations of high performance computers...

  9. Evaluation of flexible demand-side load-following reserves in power systems with high wind generation penetration

    NARCIS (Netherlands)

    Paterakis, N.G.; Catalao, J.P.S.; Ntomaris, A.V.; Erdinc, O.

    2015-01-01

    In this study, a two-stage stochastic programming joint energy and reserve day-ahead market structure is proposed in order to procure the load-following reserves required to cope with wind power production uncertainty. Reserves can be procured from both the generation and the demand side. Responsive

  10. Asian oil demand

    International Nuclear Information System (INIS)

    Fesharaki, F.

    2005-01-01

    This conference presentation examined global oil market development and the role of Asian demand. It discussed plateau change versus cyclical movement in the global oil market; supply and demand issues of OPEC and non-OPEC oil; if high oil prices reduce demand; and the Asian oil picture in the global context. Asian oil demand has accounted for about 50 per cent of the global incremental oil market growth. The presentation provided data charts in graphical format on global and Asia-Pacific incremental oil demand from 1990-2005; Asia oil demand growth for selected nations; real GDP growth in selected Asian countries; and, Asia-Pacific oil production and net import requirements. It also included charts in petroleum product demand for Asia-Pacific, China, India, Japan, and South Korea. Other data charts included key indicators for China's petroleum sector; China crude production and net oil import requirements; China's imports and the share of the Middle East; China's oil exports and imports; China's crude imports by source for 2004; China's imports of main oil products for 2004; India's refining capacity; India's product balance for net-imports and net-exports; and India's trade pattern of oil products. tabs., figs

  11. Is the effect of job strain on myocardial infarction risk due to interaction between high psychological demands and low decision latitude?

    DEFF Research Database (Denmark)

    Hallqvist, J; Diderichsen, Finn; Theorell, T

    1998-01-01

    The objectives are to examine if the excess risk of myocardial infarction from exposure to job strain is due to interaction between high demands and low control and to analyse what role such an interaction has regarding socioeconomic differences in risk of myocardial infarction. The material...

  12. On-Demand Single Photons with High Extraction Efficiency and Near-Unity Indistinguishability from a Resonantly Driven Quantum Dot in a Micropillar

    DEFF Research Database (Denmark)

    Ding, Xing; He, Yu; Duan, Z.-C.

    2016-01-01

    Scalable photonic quantum technologies require on-demand single-photon sources with simultaneously high levels of purity, indistinguishability, and efficiency. These key features, however, have only been demonstrated separately in previous experiments. Here, by s-shell pulsed resonant excitation ...

  13. Highly parallel machines and future of scientific computing

    International Nuclear Information System (INIS)

    Singh, G.S.

    1992-01-01

    The computing requirements of large-scale scientific computing have always been ahead of what state-of-the-art hardware, in the form of the supercomputers of the day, could supply, and for any single-processor system the limit to growth in computing power was recognized some years ago. Now, with the advent of parallel computing systems, the availability of machines with the required computing power seems a reality. In this paper the author tries to visualize the future of large-scale scientific computing in the penultimate decade of the present century. The author summarizes trends in parallel computers and emphasizes the need for a better programming environment and software tools for optimal performance, and concludes the paper with a critique of parallel architectures, software tools and algorithms. (author). 10 refs., 2 tabs

  14. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  15. A computational study of highly viscous impinging jets

    International Nuclear Information System (INIS)

    Silva, M.W.

    1998-11-01

    Two commercially-available computational fluid dynamics codes, FIDAP (Fluent, Inc., Lebanon, NH) and FLOW-3D (Flow Science, Inc., Los Alamos, NM), were used to simulate the landing region of jets of highly viscous fluids impinging on flat surfaces. The volume-of-fluid method was combined with finite difference and finite element approaches to predict the jet behavior. Several computational models with varying degrees of physical realism were developed, and the results were compared with experimental observations. In experiments, the jet exhibited several complex behaviors. As soon as it exited the nozzle, the jet began to neck down and become narrower. When it impacted the solid surface, the jet developed an instability near the impact point and buckled to the side. This buckling became a spiraling motion, and the jet spiraled about the impact point. As the jet spiraled around, a cone-shaped pile was built up, which eventually became unstable and slumped to the side. While all of these behaviors were occurring, air bubbles, or voids, were being entrapped in the fluid pool. The results obtained from the FLOW-3D models more closely matched the behavior of real jets than the results obtained from the FIDAP models. Most of the FLOW-3D models predicted all of the significant jet behaviors observed in experiments: necking, buckling, spiraling, slumping, and void entrapment. All of the FIDAP models predicted that the jet would buckle relatively far from the point of impact, whereas the experimentally observed jet behavior indicates that the jets buckle much nearer the impact point. Furthermore, it was shown that FIDAP is incapable of incorporating heat transfer effects into the model, making it unsuitable for this work.

  16. A computational study of highly viscous impinging jets

    Energy Technology Data Exchange (ETDEWEB)

    Silva, M.W. [Univ. of Texas, Austin, TX (United States). Dept. of Mechanical Engineering

    1998-11-01

    Two commercially-available computational fluid dynamics codes, FIDAP (Fluent, Inc., Lebanon, NH) and FLOW-3D (Flow Science, Inc., Los Alamos, NM), were used to simulate the landing region of jets of highly viscous fluids impinging on flat surfaces. The volume-of-fluid method was combined with finite difference and finite element approaches to predict the jet behavior. Several computational models with varying degrees of physical realism were developed, and the results were compared with experimental observations. In experiments, the jet exhibited several complex behaviors. As soon as it exited the nozzle, the jet began to neck down and become narrower. When it impacted the solid surface, the jet developed an instability near the impact point and buckled to the side. This buckling became a spiraling motion, and the jet spiraled about the impact point. As the jet spiraled around, a cone-shaped pile was built up, which eventually became unstable and slumped to the side. While all of these behaviors were occurring, air bubbles, or voids, were being entrapped in the fluid pool. The results obtained from the FLOW-3D models more closely matched the behavior of real jets than the results obtained from the FIDAP models. Most of the FLOW-3D models predicted all of the significant jet behaviors observed in experiments: necking, buckling, spiraling, slumping, and void entrapment. All of the FIDAP models predicted that the jet would buckle relatively far from the point of impact, whereas the experimentally observed jet behavior indicates that the jets buckle much nearer the impact point. Furthermore, it was shown that FIDAP is incapable of incorporating heat transfer effects into the model, making it unsuitable for this work.

  17. High threshold distributed quantum computing with three-qubit nodes

    International Nuclear Information System (INIS)

    Li Ying; Benjamin, Simon C

    2012-01-01

    In the distributed quantum computing paradigm, well-controlled few-qubit ‘nodes’ are networked together by connections which are relatively noisy and failure prone. A practical scheme must offer high tolerance to errors while requiring only simple (i.e. few-qubit) nodes. Here we show that relatively modest, three-qubit nodes can support advanced purification techniques and so offer robust scalability: the infidelity in the entanglement channel may be permitted to approach 10% if the infidelity in local operations is of order 0.1%. Our tolerance of network noise is therefore an order of magnitude beyond prior schemes, and our architecture remains robust even in the presence of considerable decoherence rates (memory errors). We compare the performance with that of schemes involving nodes of lower and higher complexity. Ion traps, and NV-centres in diamond, are two highly relevant emerging technologies: they possess the requisite properties of good local control, rapid and reliable readout, and methods for entanglement-at-a-distance. (paper)

  18. Pulmonary leukemic involvement: high-resolution computed tomography evaluation

    International Nuclear Information System (INIS)

    Oliveira, Ana Paola de; Marchiori, Edson; Souza Junior, Arthur Soares

    2004-01-01

    Objective: To evaluate the role of high-resolution computed tomography (HRCT) in patients with leukemia and pulmonary symptoms, to establish the main patterns and to correlate them with the etiology. Materials and Methods: This is a retrospective study of the HRCT of 15 patients with leukemia and pulmonary symptoms. The examinations were performed using a spatial high-resolution protocol and were analyzed by two independent radiologists. Results: The main HRCT patterns found were ground-glass opacity (n=11), consolidation (n=9), airspace nodules (n=3), septal thickening (n=3), tree-in-bud pattern (n=3), and pleural effusion (n=3). Pulmonary infection was the most common finding seen in 12 patients: bacterial pneumonia (n=6), fungal infection (n = 4), pulmonary tuberculosis (n=1) and viral infection (n=1). Leukemic pleural infiltration (n=1), lymphoma (n=1) and pulmonary hemorrhage (n=1) were detected in the other three patients. Conclusion: HRCT is an important tool that may suggest the cause of lung involvement, its extension and in some cases to guide invasive procedures in patients with leukemia. (author)

  19. Computer-aided control of high-quality cast iron

    Directory of Open Access Journals (Sweden)

    S. Pietrowski

    2008-04-01

    The study discusses the possibility of controlling high-quality grey cast iron and ductile iron using the author's own computer programs. The programs have been developed with the help of algorithms based on statistical relationships that are said to exist between the characteristic parameters of DTA curves and properties like Rp0,2, Rm, A5 and HB. It has been proved that the spheroidisation and inoculation treatment of cast iron significantly changes the characteristic parameters of DTA curves, thus enabling control of the correctness and effectiveness of these operations, along with the related changes in the microstructure and mechanical properties of the cast iron. Moreover, some examples of statistical relationships existing between the typical properties of ductile iron and its control process are given for melts both consistent and inconsistent with the adopted technology. A test stand for control of the high-quality cast iron and the respective melts is also schematically depicted.

  20. Automated high speed volume computed tomography for inline quality control

    International Nuclear Information System (INIS)

    Hanke, R.; Kugel, A.; Troup, P.

    2004-01-01

    The increasing complexity of innovative products, as well as growing requirements on quality and reliability, calls for more detailed knowledge of the internal structures of manufactured components, preferably by 100% inspection rather than just by sampling tests. First-step solutions, such as radioscopic inline inspection machines equipped with automated data evaluation software, have become state of the art on the production floor in recent years. However, these machines provide only ordinary two-dimensional information and deliver no volume data, e.g. to evaluate the exact position or shape of detected defects. One way to solve this problem is the application of X-ray computed tomography (CT). Compared to the performance of first-generation medical scanners (scanning times of many hours), modern volume CT machines for industrial applications today need about 5 minutes for a full object scan, depending on the object size. Of course, this is still too long to introduce this powerful method into inline production quality control. In order to gain acceptance, the scanning time, including subsequent data evaluation, must be decreased significantly and adapted to the manufacturing cycle times. This presentation demonstrates the new technical set-up, reconstruction results and the methods for high-speed volume data evaluation of a new fully automated high-speed CT scanner with cycle times below one minute for an object size of less than 15 cm. This will directly create new opportunities in the design and construction of more complex objects. (author)

  1. COMPUTER APPROACHES TO WHEAT HIGH-THROUGHPUT PHENOTYPING

    Directory of Open Access Journals (Sweden)

    Afonnikov D.

    2012-08-01

    The growing need for rapid and accurate approaches to large-scale assessment of phenotypic characters in plants becomes more and more obvious in studies looking into the relationships between genotype and phenotype. This need is due to the advent of high-throughput methods for the analysis of genomes. Nowadays, any genetic experiment involves data on thousands or tens of thousands of plants. Traditional ways of assessing most phenotypic characteristics (those relying on the eye, the touch or the ruler) are of little use on samples of such sizes. Modern approaches seek to take advantage of automated phenotyping, which enables much more rapid data acquisition, higher accuracy in the assessment of phenotypic features, measurement of new parameters of these features and the exclusion of human subjectivity from the process. Additionally, automation allows measurement data to be rapidly loaded into computer databases, which reduces data processing time. In this work, we present the WheatPGE information system designed to solve the problem of integrating genotypic and phenotypic data and parameters of the environment, as well as to analyze the relationships between genotype and phenotype in wheat. The system is used to consolidate miscellaneous data on a plant for storing and processing various morphological traits and genotypes of wheat plants as well as data on various environmental factors. The system is available at www.wheatdb.org. Its potential in genetic experiments has been demonstrated in high-throughput phenotyping of wheat leaf pubescence.

  2. Demand response in energy markets

    International Nuclear Information System (INIS)

    Skytte, K.; Birk Mortensen, J.

    2004-11-01

    Improving the ability of energy demand to respond to wholesale prices during critical periods of the spot market can reduce the total costs of reliably meeting demand, and the level and volatility of the prices. This fact has led to a growing interest in the short-run demand response. There has especially been a growing interest in the electricity market where peak-load periods with high spot prices and occasional local blackouts have recently been seen. Market concentration at the supply side can result in even higher peak-load prices. Demand response by shifting demand from peak to base-load periods can counteract market power in the peak load. However, demand response has so far been modest since the current short-term price elasticity seems to be small. This is also the case for related markets, for example, green certificates where the demand is determined as a percentage of the power demand, or for heat and natural gas markets. This raises a number of interesting research issues: 1) Demand response in different energy markets, 2) Estimation of price elasticity and flexibility, 3) Stimulation of demand response, 4) Regulation, policy and modelling aspects, 5) Demand response and market power at the supply side, 6) Energy security of supply, 7) Demand response in forward, spot, ancillary service, balance and capacity markets, 8) Demand response in deviated markets, e.g., emission, futures, and green certificate markets, 9) Value of increased demand response, 10) Flexible households. (BA)

  3. FPGA Compute Acceleration for High-Throughput Data Processing in High-Energy Physics Experiments

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    The upgrades of the four large experiments of the LHC at CERN in the coming years will result in a huge increase in data bandwidth for each experiment, which needs to be processed very efficiently. For example, the LHCb experiment will upgrade its detector in 2019/2020 to a 'triggerless' readout scheme, where all of the readout electronics and several sub-detector parts will be replaced. The new readout electronics will be able to read out the detector at 40 MHz. This increases the data bandwidth from the detector down to the event filter farm to 40 TBit/s, which must be processed to select the interesting proton-proton collisions for later storage. Designing the architecture of such a computing farm, which must process this amount of data as efficiently as possible, is a challenging task, and several compute accelerator technologies are being considered. In the high performance computing sector more and more FPGA compute accelerators are being used to improve the compute performance and reduce the...

  4. Big Data and High-Performance Computing in Global Seismology

    Science.gov (United States)

    Bozdag, Ebru; Lefebvre, Matthieu; Lei, Wenjie; Peter, Daniel; Smith, James; Komatitsch, Dimitri; Tromp, Jeroen

    2014-05-01

    Much of our knowledge of Earth's interior is based on seismic observations and measurements. Adjoint methods provide an efficient way of incorporating 3D full wave propagation in iterative seismic inversions to enhance tomographic images and thus our understanding of processes taking place inside the Earth. Our aim is to take adjoint tomography, which has been successfully applied to regional and continental scale problems, further to image the entire planet. This is one of the extreme imaging challenges in seismology, mainly due to the intense computational requirements and vast amount of high-quality seismic data that can potentially be assimilated. We have started low-resolution inversions (T > 30 s and T > 60 s for body and surface waves, respectively) with a limited data set (253 carefully selected earthquakes and seismic data from permanent and temporary networks) on Oak Ridge National Laboratory's Cray XK7 "Titan" system. Recent improvements in our 3D global wave propagation solvers, such as a GPU version of the SPECFEM3D_GLOBE package, will enable us to perform higher-resolution (T > 9 s) and longer duration (~180 m) simulations to take advantage of high-frequency body waves and major-arc surface waves, thereby improving the imbalanced ray coverage that results from the uneven global distribution of sources and receivers. Our ultimate goal is to use all earthquakes in the global CMT catalogue within the magnitude range of our interest and data from all available seismic networks. To take full advantage of computational resources, we need a solid framework to manage big data sets during numerical simulations, pre-processing (i.e., data requests and quality checks, processing data, window selection, etc.) and post-processing (i.e., pre-conditioning and smoothing kernels, etc.). We address the bottlenecks in our global seismic workflow, which mainly come from heavy I/O traffic during simulations and the pre- and post-processing stages, by defining new data

  5. Building a High Performance Computing Infrastructure for Novosibirsk Scientific Center

    International Nuclear Information System (INIS)

    Adakin, A; Chubarov, D; Nikultsev, V; Belov, S; Kaplin, V; Sukharev, A; Zaytsev, A; Kalyuzhny, V; Kuchin, N; Lomakin, S

    2011-01-01

    Novosibirsk Scientific Center (NSC), also known worldwide as Akademgorodok, is one of the largest Russian scientific centers hosting Novosibirsk State University (NSU) and more than 35 research organizations of the Siberian Branch of Russian Academy of Sciences including Budker Institute of Nuclear Physics (BINP), Institute of Computational Technologies (ICT), and Institute of Computational Mathematics and Mathematical Geophysics (ICM and MG). Since each institute has specific requirements on the architecture of the computing farms involved in its research field, currently we've got several computing facilities hosted by NSC institutes, each optimized for the particular set of tasks, of which the largest are the NSU Supercomputer Center, Siberian Supercomputer Center (ICM and MG), and a Grid Computing Facility of BINP. Recently a dedicated optical network with the initial bandwidth of 10 Gbps connecting these three facilities was built in order to make it possible to share the computing resources among the research communities of participating institutes, thus providing a common platform for building the computing infrastructure for various scientific projects. Unification of the computing infrastructure is achieved by extensive use of virtualization technologies based on XEN and KVM platforms. The solution implemented was tested thoroughly within the computing environment of KEDR detector experiment which is being carried out at BINP, and foreseen to be applied to the use cases of other HEP experiments in the upcoming future.

  6. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL

    2011-08-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaboration with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are detailed in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) will delineate the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and

  7. On the impact of quantum computing technology on future developments in high-performance scientific computing

    OpenAIRE

    Möller, Matthias; Vuik, Cornelis

    2017-01-01

    Quantum computing technologies have become a hot topic in academia and industry receiving much attention and financial support from all sides. Building a quantum computer that can be used practically is in itself an outstanding challenge that has become the ‘new race to the moon’. Next to researchers and vendors of future computing technologies, national authorities are showing strong interest in maturing this technology due to its known potential to break many of today’s encryption technique...

  8. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  9. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  10. Computer-aided safety systems of industrial high energy objects

    International Nuclear Information System (INIS)

    Topolsky, N.G.; Gordeev, S.G.

    1995-01-01

    Modern facilities in the fuel, energy and chemical industries are characterized by high power consumption; by the presence of large quantities of combustible and explosive substances used in technological processes; by extensive supply systems for liquid and gaseous feed reagents, lubricants and coolants, processing products, and production wastes; by extensive ventilation and pneumatic transport; and by complex control systems for energy, material and information flows. Such facilities have extensive infrastructures, including a significant number of engineering buildings intended for the storage, transportation and processing of combustible liquids, gaseous fuels and solid materials. Examples of such facilities are nuclear and thermal power stations, chemical plants, machine-building factories, iron and steel works, etc. Many of the tasks and functions that make up the fire safety problem for these facilities can be accomplished only by developing special Computer-Aided Fire Safety Systems (CAFSS). The CAFSS for these facilities are intended to reduce the hazard of disastrous accidents that either cause fires or are caused by them. The tasks of fire prevention and rescue work at large-scale industrial facilities are analyzed within the framework of the proposed concept. A functional structure for the CAFSS, with a list of its main subsystems, is proposed

  11. Pulmonary high-resolution computed tomography findings in nephropathia epidemica

    Energy Technology Data Exchange (ETDEWEB)

    Paakkala, Antti, E-mail: antti.paakkala@pshp.fi [Medical Imaging Centre, Tampere University Hospital, 33521 Tampere (Finland); Jaervenpaeae, Ritva, E-mail: ritva.jarvenpaa@pshp.fi [Medical Imaging Centre, Tampere University Hospital, 33521 Tampere (Finland); Maekelae, Satu, E-mail: satu.marjo.makela@uta.fi [Department of Internal Medicine, Tampere University Hospital, 33521 Tampere (Finland); Medical School, University of Tampere, 33521 Tampere (Finland); Huhtala, Heini, E-mail: heini.huhtala@uta.fi [School of Public Health, University of Tampere, 33521 Tampere (Finland); Mustonen, Jukka, E-mail: jukka.mustonen@uta.fi [Department of Internal Medicine, Tampere University Hospital, 33521 Tampere (Finland); Medical School, University of Tampere, 33521 Tampere (Finland)

    2012-08-15

    Purpose: To evaluate lung high-resolution computed tomography (HRCT) findings in patients with Puumala hantavirus-induced nephropathia epidemica (NE), and to determine if these findings correspond to chest radiograph findings. Materials and methods: HRCT findings and clinical course were studied in 13 hospital-treated NE patients. Chest radiograph findings were studied in 12 of them. Results: Twelve patients (92%) showed lung parenchymal abnormalities in HRCT, while only 8 had changes in their chest radiography. Atelectasis, pleural effusion, intralobular and interlobular septal thickening were the most common HRCT findings. Ground-glass opacification (GGO) was seen in 4 and hilar and mediastinal lymphadenopathy in 3 patients. Atelectasis and pleural effusion were also mostly seen in chest radiographs, other findings only in HRCT. Conclusion: Almost every NE patient showed lung parenchymal abnormalities in HRCT. The most common findings of lung involvement in NE can be defined as accumulation of pleural fluid and atelectasis and intralobular and interlobular septal thickening, most profusely in the lower parts of the lung. As a novel finding, lymphadenopathy was seen in a minority, probably related to capillary leakage and overall fluid overload. Pleural effusion is not the prominent feature in other viral pneumonias, whereas intralobular and interlobular septal thickening are characteristic of other viral pulmonary infections as well. Lung parenchymal findings in HRCT can thus be taken not to be disease-specific in NE and HRCT is useful only for scientific purposes.

  12. High performance computing environment for multidimensional image analysis.

    Science.gov (United States)

    Rao, A Ravishankar; Cecchi, Guillermo A; Magnasco, Marcelo

    2007-07-10

    The processing of images acquired through microscopy is a challenging task due to the large size of datasets (several gigabytes) and the fast turnaround time required. If the throughput of the image processing stage is significantly increased, it can have a major impact on microscopy applications. We present a high performance computing (HPC) solution to this problem. This involves decomposing the spatial 3D image into segments that are assigned to unique processors, and matched to the 3D torus architecture of the IBM Blue Gene/L machine. Communication between segments is restricted to the nearest neighbors. When running on a 2 GHz Intel CPU, the task of 3D median filtering on a typical 256 megabyte dataset takes two and a half hours, whereas by using 1024 nodes of Blue Gene, this task can be performed in 18.8 seconds, a 478x speedup. Our parallel solution dramatically improves the performance of image processing, feature extraction and 3D reconstruction tasks. This increased throughput permits biologists to conduct unprecedented large scale experiments with massive datasets.
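    The decomposition idea can be mimicked serially: split the volume into blocks, filter each block together with a one-voxel halo borrowed from its neighbours, and copy back only the block interiors so that the stitched result matches a global 3D median filter. The NumPy/SciPy sketch below is an illustration only (the function name is mine, not the authors' Blue Gene/L code); in the parallel version each block would live on its own processor and the halos would be exchanged with nearest neighbours.

        import numpy as np
        from scipy.ndimage import median_filter

        def blocked_median_filter(volume, block=32, size=3):
            # Filter each block with a halo of size//2 voxels from its neighbours,
            # then copy back only the interior so block seams are invisible.
            halo = size // 2
            out = np.empty_like(volume)
            nz, ny, nx = volume.shape
            for z in range(0, nz, block):
                for y in range(0, ny, block):
                    for x in range(0, nx, block):
                        z0, y0, x0 = max(z - halo, 0), max(y - halo, 0), max(x - halo, 0)
                        z1 = min(z + block + halo, nz)
                        y1 = min(y + block + halo, ny)
                        x1 = min(x + block + halo, nx)
                        sub = median_filter(volume[z0:z1, y0:y1, x0:x1], size=size)
                        ze, ye, xe = min(z + block, nz), min(y + block, ny), min(x + block, nx)
                        out[z:ze, y:ye, x:xe] = sub[z - z0:ze - z0, y - y0:ye - y0, x - x0:xe - x0]
            return out

        vol = np.random.rand(64, 64, 64).astype(np.float32)
        assert np.array_equal(blocked_median_filter(vol), median_filter(vol, size=3))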

  13. The Future of Software Engineering for High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Pope, G [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-07-16

    DOE ASCR requested that from May through mid-July 2015 a study group identify issues and recommend solutions from a software engineering perspective transitioning into the next generation of High Performance Computing. The approach used was to ask some of the DOE complex experts who will be responsible for doing this work to contribute to the study group. The technique used was to solicit elevator speeches: a short and concise write up done as if the author was a speaker with only a few minutes to convince a decision maker of their top issues. Pages 2-18 contain the original texts of the contributed elevator speeches and end notes identifying the 20 contributors. The study group also ranked the importance of each topic, and those scores are displayed with each topic heading. A perfect score (and highest priority) is three, two is medium priority, and one is lowest priority. The highest scoring topic areas were software engineering and testing resources; the lowest scoring area was compliance to DOE standards. The following two paragraphs are an elevator speech summarizing the contributed elevator speeches. Each sentence or phrase in the summary is hyperlinked to its source via a numeral embedded in the text. A risk one liner has also been added to each topic to allow future risk tracking and mitigation.

  14. A checkpoint compression study for high-performance computing systems

    Energy Technology Data Exchange (ETDEWEB)

    Ibtesham, Dewan [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science; Ferreira, Kurt B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States). Scalable System Software Dept.; Arnold, Dorian [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Computer Science

    2015-02-17

    As high-performance computing systems continue to increase in size and complexity, higher failure rates and increased overheads for checkpoint/restart (CR) protocols have raised concerns about the practical viability of CR protocols for future systems. Previously, compression has proven to be a viable approach for reducing checkpoint data volumes and, thereby, reducing CR protocol overhead and improving application performance. In this article, we further explore compression-based CR optimization by examining its baseline performance and scaling properties, evaluating whether improved compression algorithms might lead to even better application performance, and comparing checkpoint compression against and alongside other software- and hardware-based optimizations. The highlights of our results are: (1) compression is a very viable CR optimization; (2) generic, text-based compression algorithms appear to perform near optimally for checkpoint data compression and faster compression algorithms will not lead to better application performance; (3) compression-based optimizations fare well against and alongside other software-based optimizations; and (4) while hardware-based optimizations outperform software-based ones, they are not as cost effective.
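    A minimal sketch of the idea behind finding (2), using Python's generic zlib compressor on synthetic "checkpoint" buffers; this is illustrative only (real CR libraries serialize full application state), and the ratios depend entirely on how regular the state is.

        import zlib
        import numpy as np

        def checkpoint_compression_ratio(state: np.ndarray, level: int = 6) -> float:
            # Ratio of raw checkpoint size to its zlib-compressed size.
            raw = state.tobytes()
            return len(raw) / len(zlib.compress(raw, level))

        # Sparse or highly regular state compresses far better than noisy state.
        sparse = np.zeros(1_000_000, dtype=np.float64)
        sparse[::1000] = np.random.rand(1000)
        noisy = np.random.rand(1_000_000)
        print(f"sparse checkpoint: {checkpoint_compression_ratio(sparse):.1f}x")
        print(f"noisy  checkpoint: {checkpoint_compression_ratio(noisy):.2f}x")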

  15. Enabling Efficient Climate Science Workflows in High Performance Computing Environments

    Science.gov (United States)

    Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.

    2015-12-01

    A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provide a myriad of challenges when running on a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project we highlight a stack of tools our team utilizes and has developed to ensure that large scale simulation and analysis work are commonplace and provide operations that assist in everything from generation/procurement of data (HTAR/Globus) to automating publication of results to portals like the Earth Systems Grid Federation (ESGF), all while executing everything in between in a scalable environment in a task parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases they have been applied to.
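    A minimal sketch of the task-parallel (MPI) analysis pattern mentioned above, using mpi4py; the file names and the per-file analysis function are placeholders of my own, not CASCADE code.

        from mpi4py import MPI

        def analyze(path):
            # Placeholder for a real per-file diagnostic (e.g., an extreme-value statistic).
            return path, len(path)

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        tasks = [f"sim_output_{i:04d}.nc" for i in range(100)]   # hypothetical file names
        local = [analyze(t) for t in tasks[rank::size]]          # round-robin task split
        gathered = comm.gather(local, root=0)

        if rank == 0:
            results = [r for chunk in gathered for r in chunk]
            print(f"analyzed {len(results)} files on {size} ranks")

    Run with, for example, "mpirun -n 4 python analyze.py"; each rank claims every size-th task, so adding ranks shrinks the per-rank workload without any changes to the analysis code.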

  16. Multi-Language Programming Environments for High Performance Java Computing

    Directory of Open Access Journals (Sweden)

    Vladimir Getov

    1999-01-01

    Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community's reliance on established scientific packages. As a consequence, programmers of high-performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java-to-C Interface (JCI) tool, which provides application programmers wishing to use Java with immediate accessibility to existing scientific packages. The JCI tool also facilitates rapid development and reuse of existing code. These benefits are provided at minimal cost to the programmer. While beneficial to the programmer, the additional advantages of mixed-language programming in terms of application performance and portability are addressed in detail within the context of this paper. In addition, we discuss how the JCI tool complements other ongoing projects such as IBM's High-Performance Compiler for Java (HPCJ) and IceT's metacomputing environment.

  17. Acquisition of ICU data: concepts and demands.

    Science.gov (United States)

    Imhoff, M

    1992-12-01

    As the issue of data overload is a problem in critical care today, it is of utmost importance to improve acquisition, storage, integration, and presentation of medical data, which appears only feasible with the help of bedside computers. The data originates from four major sources: (1) the bedside medical devices, (2) the local area network (LAN) of the ICU, (3) the hospital information system (HIS) and (4) manual input. All sources differ markedly in quality and quantity of data and in the demands of the interfaces between source of data and patient database. The demands for data acquisition from bedside medical devices, ICU-LAN and HIS concentrate on technical problems, such as computational power, storage capacity, real-time processing, interfacing with different devices and networks and the unmistakable assignment of data to the individual patient. The main problem of manual data acquisition is the definition and configuration of the user interface that must allow the inexperienced user to interact with the computer intuitively. Emphasis must be put on the construction of a pleasant, logical and easy-to-handle graphical user interface (GUI). Short response times will require high graphical processing capacity. Moreover, high computational resources are necessary in the future for additional interfacing devices such as speech recognition and 3D-GUI. Therefore, in an ICU environment the demands for computational power are enormous. These problems are complicated by the urgent need for friendly and easy-to-handle user interfaces. Both facts place ICU bedside computing at the vanguard of present and future workstation development leaving no room for solutions based on traditional concepts of personal computers.(ABSTRACT TRUNCATED AT 250 WORDS)

  18. Computational Study of Nonequilibrium Chemistry in High Temperature Flows

    Science.gov (United States)

    Doraiswamy, Sriram

    Recent experimental measurements in the reflected shock tunnel CUBRC LENS-I facility raise questions about our ability to correctly model the recombination processes in high enthalpy flows. In the carbon dioxide flow, the computed shock standoff distance over the Mars Science Laboratory (MSL) shape was less than half of the experimental result. For the oxygen flows, both pressure and heat transfer data on the double cone geometry were not correctly predicted. The objective of this work is to investigate possible reasons for these discrepancies. This process involves systematically addressing different factors that could possibly explain the differences. These factors include vibrational modeling, role of electronic states and chemistry-vibrational coupling in high enthalpy flows. A state-specific vibrational model for CO2, CO, O2 and O system is devised by taking into account the first few vibrational states of each species. All vibrational states with energies at or below 1 eV are included in the present work. Of the three modes of vibration in CO2 , the antisymmetric mode is considered separately from the symmetric stretching mode and the doubly degenerate bending modes. The symmetric and the bending modes are grouped together since the energy transfer rates between the two modes are very large due to Fermi resonance. The symmetric and bending modes are assumed to be in equilibrium with the translational and rotational modes. The kinetic rates for the vibrational-translation energy exchange reactions, and the intermolecular and intramolecular vibrational-vibrational energy exchange reactions are based on experimental data to the maximum extent possible. Extrapolation methods are employed when necessary. This vibrational model is then coupled with an axisymmetric computational fluid dynamics code to study the expansion of CO2 in a nozzle. The potential role of low lying electronic states is also investigated. Carbon dioxide has a single excited state just below

  19. Highly reliable computer network for real time system

    International Nuclear Information System (INIS)

    Mohammed, F.A.; Omar, A.A.; Ayad, N.M.A.; Madkour, M.A.I.; Ibrahim, M.K.

    1988-01-01

    Many computer networks have been studied, with different trends regarding the network architecture and the various protocols that govern data transfers and guarantee reliable communication among all participants. A hierarchical network structure has been proposed to provide a simple and inexpensive way to realize a reliable real-time computer network. In such an architecture, all computers in the same level are connected to a common serial channel through intelligent nodes that collectively control data transfers over the serial channel. This level of the computer network can be considered a local area computer network (LACN) that can be used in a nuclear power plant control system, since such a system has geographically dispersed subsystems. Network expansion would be straightforward, using the common channel for each added computer (HOST). All the nodes are designed around a microprocessor chip to provide the required intelligence. The node can be divided into two sections, namely a common section that interfaces with the serial data channel and a private section that interfaces with the host computer. The latter part would naturally tend to have some variations in hardware details to match the requirements of individual host computers. fig 7

  20. Computer Science in High School Graduation Requirements. ECS Education Trends

    Science.gov (United States)

    Zinth, Jennifer Dounay

    2015-01-01

    Computer science and coding skills are widely recognized as a valuable asset in the current and projected job market. The Bureau of Labor Statistics projects 37.5 percent growth from 2012 to 2022 in the "computer systems design and related services" industry--from 1,620,300 jobs in 2012 to an estimated 2,229,000 jobs in 2022. Yet some…

  1. Using a Computer Animation to Teach High School Molecular Biology

    Science.gov (United States)

    Rotbain, Yosi; Marbach-Ad, Gili; Stavy, Ruth

    2008-01-01

    We present an active way to use a computer animation in secondary molecular genetics class. For this purpose we developed an activity booklet that helps students to work interactively with a computer animation which deals with abstract concepts and processes in molecular biology. The achievements of the experimental group were compared with those…

  2. Multimodal Information Presentation for High-Load Human Computer Interaction

    NARCIS (Netherlands)

    Cao, Y.

    2011-01-01

    This dissertation addresses multimodal information presentation in human computer interaction. Information presentation refers to the manner in which computer systems/interfaces present information to human users. More specifically, the focus of our work is not on which information to present, but

  3. A ground-up approach to High Throughput Cloud Computing in High-Energy Physics

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00245123; Ganis, Gerardo; Bagnasco, Stefano

    The thesis explores various practical approaches in making existing High Throughput computing applications common in High Energy Physics work on cloud-provided resources, as well as opening the possibility for running new applications. The work is divided into two parts: firstly we describe the work done at the computing facility hosted by INFN Torino to entirely convert former Grid resources into cloud ones, eventually running Grid use cases on top along with many others in a more flexible way. Integration and conversion problems are duly described. The second part covers the development of solutions for automatizing the orchestration of cloud workers based on the load of a batch queue and the development of HEP applications based on ROOT's PROOF that can adapt at runtime to a changing number of workers.

  4. High throughput computing: a solution for scientific analysis

    Science.gov (United States)

    O'Donnell, M.

    2011-01-01

    Public land management agencies continually face resource management problems that are exacerbated by climate warming, land-use change, and other human activities. As the U.S. Geological Survey (USGS) Fort Collins Science Center (FORT) works with managers in U.S. Department of the Interior (DOI) agencies and other federal, state, and private entities, researchers are finding that the science needed to address these complex ecological questions across time and space produces substantial amounts of data. The additional data and the volume of computations needed to analyze it require expanded computing resources well beyond single- or even multiple-computer workstations. To meet this need for greater computational capacity, FORT investigated how to resolve the many computational shortfalls previously encountered when analyzing data for such projects. Our objectives included finding a solution that would:

  5. On the impact of quantum computing technology on future developments in high-performance scientific computing

    NARCIS (Netherlands)

    Möller, M.; Vuik, C.

    2017-01-01

    Quantum computing technologies have become a hot topic in academia and industry receiving much attention and financial support from all sides. Building a quantum computer that can be used practically is in itself an outstanding challenge that has become the ‘new race to the moon’. Next to

  6. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D.; Schiess, Adrian B.; Howell, Jamie; Baca, Michael J.; Partridge, L. Donald; Finnegan, Patrick Sean; Wolfley, Steven L.; Dagel, Daryl James; Spahn, Olga Blum; Harper, Jason C.; Pohl, Kenneth Roy; Mickel, Patrick R.; Lohn, Andrew; Marinella, Matthew

    2013-10-01

    The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing > 10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.
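    The quoted figure of a roughly 10^12 advantage follows directly from the numbers stated above; a quick illustrative check of the arithmetic:

        # Brain: ~1e16 op/s at 20 W in 1200 cm^3; machine: ~1e15 op/s at 3 MW in 1500 m^3.
        brain = 1e16 / (20 * 1200)              # op/s per W per cm^3
        machine = 1e15 / (3e6 * 1500 * 1e6)     # 1 m^3 = 1e6 cm^3
        print(f"brain:   {brain:.2e} op/s/W/cm^3")
        print(f"machine: {machine:.2e} op/s/W/cm^3")
        print(f"ratio:   {brain / machine:.1e}")  # ~2e12, i.e. of order 10^12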

  7. Security Services Lifecycle Management in on-demand infrastructure services

    NARCIS (Netherlands)

    Demchenko, Y.; de Laat, C.; Lopez, D.R.; García-Espín, J.A.; Qiu, J.; Zhao, G.; Rong, C.

    2010-01-01

    Modern e-Science and high technology industry require high-performance and complicated network and computer infrastructure to support distributed collaborating groups of researchers and applications that should be provisioned on-demand. The effective use and management of the dynamically provisioned

  8. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  9. Path Not Found: Disparities in Access to Computer Science Courses in California High Schools

    Science.gov (United States)

    Martin, Alexis; McAlear, Frieda; Scott, Allison

    2015-01-01

    "Path Not Found: Disparities in Access to Computer Science Courses in California High Schools" exposes one of the foundational causes of underrepresentation in computing: disparities in access to computer science courses in California's public high schools. This report provides new, detailed data on these disparities by student body…

  10. AELAS: Automatic ELAStic property derivations via high-throughput first-principles computation

    Science.gov (United States)

    Zhang, S. H.; Zhang, R. F.

    2017-11-01

    The elastic properties are fundamental and important for crystalline materials as they relate to other mechanical properties, various thermodynamic qualities as well as some critical physical properties. However, a complete set of experimentally determined elastic properties is only available for a small subset of known materials, and an automatic scheme for the derivation of elastic properties that is adapted to high-throughput computation is much in demand. In this paper, we present the AELAS code, an automated program for calculating second-order elastic constants of both two-dimensional and three-dimensional single crystal materials with any symmetry, which is designed mainly for high-throughput first-principles computation. Other derivations of general elastic properties such as Young's, bulk and shear moduli as well as Poisson's ratio of polycrystal materials, Pugh ratio, Cauchy pressure, elastic anisotropy and elastic stability criterion, are also implemented in this code. The implementation of the code has been critically validated by extensive evaluations and tests on a broad class of materials including two-dimensional and three-dimensional materials, proving its efficiency and capability for high-throughput screening of specific materials with targeted mechanical properties. Program Files doi:http://dx.doi.org/10.17632/f8fwg4j9tw.1 Licensing provisions: BSD 3-Clause Programming language: Fortran Nature of problem: To automate the calculations of second-order elastic constants and the derivations of other elastic properties for two-dimensional and three-dimensional materials with any symmetry via high-throughput first-principles computation. Solution method: The space-group number is first determined by the SPGLIB code [1] and the structure is then redefined to a unit cell in the IEEE format [2]. Secondly, based on the determined space group number, a set of distortion modes is automatically specified and the distorted structure files are generated
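
    For context, the polycrystalline quantities listed above are normally derived from the single-crystal elastic constants C_ij through standard averaging relations. The Voigt bounds below are textbook formulas given only as a reminder of what such a derivation involves; they are not an excerpt from AELAS, which may, for instance, also apply the Reuss and Hill averages:

        B_V = \tfrac{1}{9}\left[ C_{11}+C_{22}+C_{33} + 2\,(C_{12}+C_{13}+C_{23}) \right]
        G_V = \tfrac{1}{15}\left[ C_{11}+C_{22}+C_{33} - (C_{12}+C_{13}+C_{23}) + 3\,(C_{44}+C_{55}+C_{66}) \right]
        E = \frac{9 B G}{3B+G}, \qquad \nu = \frac{3B-2G}{2(3B+G)}, \qquad \text{Pugh ratio} = \frac{B}{G}, \qquad \text{Cauchy pressure (cubic)} = C_{12}-C_{44}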

  11. Department of Energy research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-08-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models, the execution of which is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex, and consequently it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  12. FY 1992 Blue Book: Grand Challenges: High Performance Computing and Communications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  13. FY 1993 Blue Book: Grand Challenges 1993: High Performance Computing and Communications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  14. International Conference: Computer-Aided Design of High-Temperature Materials

    National Research Council Canada - National Science Library

    Kalia, Rajiv

    1998-01-01

    .... The conference was attended by experimental and computational materials scientists, and experts in high performance computing and communications from universities, government laboratories, and industries in the U.S., Europe, and Japan...

  15. The contribution of high-performance computing and modelling for industrial development

    CSIR Research Space (South Africa)

    Sithole, Happy

    2017-10-01

    Full Text Available High-Performance Computing and Modelling for Industrial Development, Dr Happy Sithole and Dr Onno Ubbink. Strategic context: High-performance computing (HPC) combined with machine learning and artificial intelligence present opportunities to non...

  16. Computation of order and volume fill rates for a base stock inventory control system with heterogeneous demand to investigate which customer class gets the best service

    DEFF Research Database (Denmark)

    Larsen, Christian

    We consider a base stock inventory control system serving two customer classes whose demands are generated by two independent compound renewal processes. We show how to derive order and volume fill rates of each class. Based on assumptions about first order stochastic dominance we prove when one customer class will get the best service.
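
    To make the two service measures concrete: the order fill rate is the fraction of customer orders met in full from stock on hand, while the volume fill rate is the fraction of demanded units met immediately. The sketch below estimates both by simulation under simplifying assumptions that are not from the paper (compound Poisson demand with uniform order sizes as a stand-in for compound renewal demand, a constant lead time, and full backordering); all parameter values are arbitrary.

        import random

        def simulate_base_stock(S=30, lead_time=5.0, horizon=50000.0, seed=1):
            """Toy base stock system serving two customer classes A and B.

            Returns {class: (order fill rate, volume fill rate)}.
            """
            rng = random.Random(seed)
            rate = {"A": 1.0, "B": 0.5}        # class arrival rates
            max_size = {"A": 3, "B": 6}        # order sizes uniform on 1..max_size
            net = S                            # net inventory (on hand minus backorders)
            pipeline = []                      # outstanding replenishments: (arrival time, qty)
            orders = {"A": 0, "B": 0}
            filled_orders = {"A": 0, "B": 0}
            units = {"A": 0, "B": 0}
            filled_units = {"A": 0, "B": 0}
            t, total_rate = 0.0, sum(rate.values())
            while t < horizon:
                t += rng.expovariate(total_rate)
                cls = "A" if rng.random() < rate["A"] / total_rate else "B"
                size = rng.randint(1, max_size[cls])
                # receive every replenishment that has arrived by now
                net += sum(q for (ta, q) in pipeline if ta <= t)
                pipeline = [(ta, q) for (ta, q) in pipeline if ta > t]
                # units served immediately from on-hand stock; the rest is backordered
                served = min(max(net, 0), size)
                net -= size
                orders[cls] += 1
                units[cls] += size
                filled_orders[cls] += served == size
                filled_units[cls] += served
                # base stock policy: reorder exactly what was demanded
                pipeline.append((t + lead_time, size))
            return {c: (filled_orders[c] / orders[c], filled_units[c] / units[c])
                    for c in orders}

        print(simulate_base_stock())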

  17. High-speed packet switching network to link computers

    CERN Document Server

    Gerard, F M

    1980-01-01

    Virtually all of the experiments conducted at CERN use minicomputers today; some simply acquire data and store results on magnetic tape while others actually control experiments and help to process the resulting data. Currently there are more than two hundred minicomputers being used in the laboratory. In order to provide the minicomputer users with access to facilities available on mainframes and also to provide intercommunication between various experimental minicomputers, CERN opted for a packet switching network back in 1975. It was decided to use Modcomp II computers as switching nodes. The only software to be taken was a communications-oriented operating system called Maxcom. Today eight Modcomp II 16-bit computers plus six newer Classic minicomputers from Modular Computer Services have been purchased for the CERNET data communications networks. The current configuration comprises 11 nodes connecting more than 40 user machines to one another and to the laboratory's central computing facility. (0 refs).

  18. Distributed metadata in a high performance computing environment

    Science.gov (United States)

    Bent, John M.; Faibish, Sorin; Zhang, Zhenhua; Liu, Xuezhao; Tang, Haiying

    2017-07-11

    A computer-executable method, system, and computer program product for managing metadata in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store, the computer-executable method, system, and computer program product comprising receiving a request for metadata associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the metadata is associated with a key-value, determining which of the one or more burst buffers stores the requested metadata, and upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
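
    The central step of the claim, determining which burst buffer stores the requested metadata, is not spelled out in the abstract. One common way to realize such a mapping is to hash the key over the set of burst buffers; the sketch below shows that generic idea, not the patented method, and every name in it is invented for illustration.

        import hashlib

        def owning_burst_buffer(key, burst_buffers):
            """Map a metadata key to the burst buffer responsible for it.

            Illustrative scheme: hash the key and reduce it modulo the number of
            burst buffers; a production system would more likely use consistent
            hashing so that adding or removing a buffer moves only a few keys.
            """
            digest = hashlib.sha1(key.encode()).hexdigest()
            return burst_buffers[int(digest, 16) % len(burst_buffers)]

        buffers = ["bb-0", "bb-1", "bb-2"]
        print(owning_burst_buffer("/data/block/000042", buffers))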

  19. Two-dimensional computer simulation of high intensity proton beams

    CERN Document Server

    Lapostolle, Pierre M

    1972-01-01

    A computer program has been developed which simulates the two- dimensional transverse behaviour of a proton beam in a focusing channel. The model is represented by an assembly of a few thousand 'superparticles' acted upon by their own self-consistent electric field and an external focusing force. The evolution of the system is computed stepwise in time by successively solving Poisson's equation and Newton's law of motion. Fast Fourier transform techniques are used for speed in the solution of Poisson's equation, while extensive area weighting is utilized for the accurate evaluation of electric field components. A computer experiment has been performed on the CERN CDC 6600 computer to study the nonlinear behaviour of an intense beam in phase space, showing under certain circumstances a filamentation due to space charge and an apparent emittance growth. (14 refs).
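
    The loop described here (area-weighted charge deposition, an FFT-based Poisson solve, then Newton's law for the particle push) is the classic particle-in-cell cycle. The sketch below is a minimal present-day rendering of one such step under simplifying assumptions (periodic square box, normalized units, external focusing omitted); it is not the original CERN program.

        import numpy as np

        def pic_step(x, y, vx, vy, qm, grid_n, box, dt):
            """One step of a minimal 2D particle-in-cell update (periodic box)."""
            h = box / grid_n
            gx, gy = x / h, y / h
            i0, j0 = np.floor(gx).astype(int) % grid_n, np.floor(gy).astype(int) % grid_n
            fx, fy = gx - np.floor(gx), gy - np.floor(gy)
            i1, j1 = (i0 + 1) % grid_n, (j0 + 1) % grid_n
            # cloud-in-cell (area-weighting) charge deposition
            rho = np.zeros((grid_n, grid_n))
            np.add.at(rho, (i0, j0), (1 - fx) * (1 - fy))
            np.add.at(rho, (i1, j0), fx * (1 - fy))
            np.add.at(rho, (i0, j1), (1 - fx) * fy)
            np.add.at(rho, (i1, j1), fx * fy)
            # FFT-based Poisson solve of laplacian(phi) = -rho on the periodic grid
            k = 2 * np.pi * np.fft.fftfreq(grid_n, d=h)
            kx, ky = np.meshgrid(k, k, indexing="ij")
            k2 = kx**2 + ky**2
            k2[0, 0] = 1.0                      # avoid division by zero
            phi_hat = np.fft.fft2(rho) / k2
            phi_hat[0, 0] = 0.0                 # enforce a zero-mean potential
            phi = np.real(np.fft.ifft2(phi_hat))
            # E = -grad(phi) by central differences, gathered with the same area weights
            Ex = -(np.roll(phi, -1, 0) - np.roll(phi, 1, 0)) / (2 * h)
            Ey = -(np.roll(phi, -1, 1) - np.roll(phi, 1, 1)) / (2 * h)
            Epx = ((1 - fx) * (1 - fy) * Ex[i0, j0] + fx * (1 - fy) * Ex[i1, j0]
                   + (1 - fx) * fy * Ex[i0, j1] + fx * fy * Ex[i1, j1])
            Epy = ((1 - fx) * (1 - fy) * Ey[i0, j0] + fx * (1 - fy) * Ey[i1, j0]
                   + (1 - fx) * fy * Ey[i0, j1] + fx * fy * Ey[i1, j1])
            # Newton's law: kick, then drift with periodic wrap-around
            vx, vy = vx + qm * Epx * dt, vy + qm * Epy * dt
            return (x + vx * dt) % box, (y + vy * dt) % box, vx, vy

        rng = np.random.default_rng(0)
        x, y = rng.uniform(0, 1.0, 10_000), rng.uniform(0, 1.0, 10_000)
        vx, vy = np.zeros(10_000), np.zeros(10_000)
        x, y, vx, vy = pic_step(x, y, vx, vy, qm=1e-3, grid_n=64, box=1.0, dt=1e-3)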

  20. Benchmark Numerical Toolkits for High Performance Computing, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Computational codes in physics and engineering often use implicit solution algorithms that require linear algebra tools such as Ax=b solvers, eigenvalue,...

  1. Multicore Challenges and Benefits for High Performance Scientific Computing

    Directory of Open Access Journals (Sweden)

    Ida M.B. Nielsen

    2008-01-01

    Full Text Available Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
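
    The hybrid model mentioned above distributes blocks of work across processes (message passing) while each process uses threads within its multicore chip. As a loose stand-in for that idea, the sketch below splits a matrix product into row blocks handled by a pool of worker processes, with each block's BLAS call playing the role of the intra-node threading; it illustrates the decomposition only and is not the authors' code.

        import numpy as np
        from concurrent.futures import ProcessPoolExecutor

        def multiply_block(args):
            """Compute one row block of C = A @ B (one worker's share of the product)."""
            a_block, b = args
            return a_block @ b          # the BLAS call may itself use several threads

        def parallel_matmul(a, b, workers=4):
            """Row-block decomposition of a matrix product over a pool of workers."""
            blocks = np.array_split(a, workers, axis=0)
            with ProcessPoolExecutor(max_workers=workers) as pool:
                parts = list(pool.map(multiply_block, [(blk, b) for blk in blocks]))
            return np.vstack(parts)

        if __name__ == "__main__":
            a, b = np.random.rand(512, 256), np.random.rand(256, 384)
            assert np.allclose(parallel_matmul(a, b), a @ b)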

  2. Diamond High Assurance Security Program: Trusted Computing Exemplar

    Science.gov (United States)

    2002-09-01

    computing component, the Embedded MicroKernel Prototype. A third-party evaluation of the component will be initiated during development (e.g., once...target technologies and larger projects is a topic for future research. Trusted Computing Reference Component – The Embedded MicroKernel Prototype We...Kernel The primary security function of the Embedded MicroKernel will be to enforce process and data-domain separation, while providing primitive

  3. Soft Computing Techniques for the Protein Folding Problem on High Performance Computing Architectures.

    Science.gov (United States)

    Llanes, Antonio; Muñoz, Andrés; Bueno-Crespo, Andrés; García-Valverde, Teresa; Sánchez, Antonia; Arcas-Túnez, Francisco; Pérez-Sánchez, Horacio; Cecilia, José M

    2016-01-01

    The protein-folding problem has been extensively studied during the last fifty years. Understanding the dynamics of the global shape of a protein and its influence on biological function can help us to discover new and more effective drugs for diseases of pharmacological relevance. Different computational approaches have been developed by different researchers in order to predict the three-dimensional arrangement of the atoms of proteins from their sequences. However, the computational complexity of this problem makes the search for new models, novel algorithmic strategies and hardware platforms that provide solutions in a reasonable time frame mandatory. In this review we present past and current trends regarding protein folding simulations from both perspectives: hardware and software. Of particular interest to us are both the use of inexact solutions to this computationally hard problem and the hardware platforms that have been used for running this kind of Soft Computing technique.

  4. Implementation of the Principal Component Analysis onto High-Performance Computer Facilities for Hyperspectral Dimensionality Reduction: Results and Comparisons

    Directory of Open Access Journals (Sweden)

    Ernestina Martel

    2018-06-01

    Full Text Available Dimensionality reduction represents a critical preprocessing step in order to increase the efficiency and the performance of many hyperspectral imaging algorithms. However, dimensionality reduction algorithms such as Principal Component Analysis (PCA) are computationally demanding, which makes their implementation on high-performance computer architectures advisable for applications under strict latency constraints. This work presents the implementation of the PCA algorithm on two different high-performance devices, namely an NVIDIA Graphics Processing Unit (GPU) and a Kalray manycore, uncovering a highly valuable set of tips and tricks in order to take full advantage of the inherent parallelism of these high-performance computing platforms, and hence reducing the time that is required to process a given hyperspectral image. Moreover, the results obtained with different hyperspectral images have been compared with those obtained with a field programmable gate array (FPGA)-based implementation of the PCA algorithm that has recently been published, providing, for the first time in the literature, a comprehensive analysis that highlights the pros and cons of each option.
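
    Whatever the target device, the PCA step being accelerated reduces to mean removal, a band-by-band covariance, an eigendecomposition and a projection. The NumPy sketch below shows that baseline computation on a synthetic cube; the GPU, manycore and FPGA versions compared in the paper differ in how these operations are mapped to hardware, not in the underlying mathematics.

        import numpy as np

        def pca_reduce(cube, n_components=10):
            """Project a hyperspectral cube (rows x cols x bands) onto its
            leading principal components."""
            rows, cols, bands = cube.shape
            x = cube.reshape(-1, bands).astype(np.float64)
            x -= x.mean(axis=0)                        # band-wise mean removal
            cov = (x.T @ x) / (x.shape[0] - 1)         # bands x bands covariance
            _, eigvecs = np.linalg.eigh(cov)           # eigenvalues in ascending order
            top = eigvecs[:, ::-1][:, :n_components]   # leading eigenvectors
            return (x @ top).reshape(rows, cols, n_components)

        cube = np.random.rand(64, 64, 120)             # synthetic stand-in for a real image
        print(pca_reduce(cube, 5).shape)               # (64, 64, 5)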

  5. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  6. HIGH-PERFORMANCE COMPUTING FOR THE STUDY OF EARTH AND ENVIRONMENTAL SCIENCE MATERIALS USING SYNCHROTRON X-RAY COMPUTED MICROTOMOGRAPHY

    International Nuclear Information System (INIS)

    FENG, H.; JONES, K.W.; MCGUIGAN, M.; SMITH, G.J.; SPILETIC, J.

    2001-01-01

    Synchrotron x-ray computed microtomography (CMT) is a non-destructive method for examination of rock, soil, and other types of samples studied in the earth and environmental sciences. The high x-ray intensities of the synchrotron source make possible the acquisition of tomographic volumes at a high rate that requires the application of high-performance computing techniques for data reconstruction to produce the three-dimensional volumes, for their visualization, and for data analysis. These problems are exacerbated by the need to share information between collaborators at widely separated locations over both local and wide-area networks. A summary of the CMT technique and examples of applications are given here together with a discussion of the applications of high-performance computing methods to improve the experimental techniques and analysis of the data

  7. HIGH-PERFORMANCE COMPUTING FOR THE STUDY OF EARTH AND ENVIRONMENTAL SCIENCE MATERIALS USING SYNCHROTRON X-RAY COMPUTED MICROTOMOGRAPHY.

    Energy Technology Data Exchange (ETDEWEB)

    FENG,H.; JONES,K.W.; MCGUIGAN,M.; SMITH,G.J.; SPILETIC,J.

    2001-10-12

    Synchrotron x-ray computed microtomography (CMT) is a non-destructive method for examination of rock, soil, and other types of samples studied in the earth and environmental sciences. The high x-ray intensities of the synchrotron source make possible the acquisition of tomographic volumes at a high rate that requires the application of high-performance computing techniques for data reconstruction to produce the three-dimensional volumes, for their visualization, and for data analysis. These problems are exacerbated by the need to share information between collaborators at widely separated locations over both local and wide-area networks. A summary of the CMT technique and examples of applications are given here together with a discussion of the applications of high-performance computing methods to improve the experimental techniques and analysis of the data.

  8. Usage of super high speed computer for clarification of complex phenomena

    International Nuclear Information System (INIS)

    Sekiguchi, Tomotsugu; Sato, Mitsuhisa; Nakata, Hideki; Tatebe, Osami; Takagi, Hiromitsu

    1999-01-01

    This study aims to construct an efficient application environment for super-high-speed computers, suited to parallel and distributed systems and easily portable to different computer systems and processor counts, by conducting research and development on the super-high-speed computer application technology required to elucidate complicated phenomena in the nuclear power field through computational science methods. In order to realize such an environment, the Electrotechnical Laboratory has developed Ninf, a network numerical information library. The Ninf system can supply a global network infrastructure for worldwide high-performance computing over wide-area distributed networks. (G.K.)

  9. Oil supply and demand

    Energy Technology Data Exchange (ETDEWEB)

    Babusiaux, D

    2004-07-01

    Following the military intervention in Iraq, it is taking longer than expected for Iraqi exports to make a comeback on the market. Demand is sustained by economic growth in China and in the United States. OPEC is modulating production to prevent inventory build-up. Prices have stayed high despite increased production by non-OPEC countries, especially Russia. (author)

  10. Oil supply and demand

    Energy Technology Data Exchange (ETDEWEB)

    Rech, O

    2006-07-01

    The year 2004 saw a change in the oil market paradigm that was confirmed in 2005. Despite a calmer geopolitical context, prices continued to rise vigorously. Driven by world demand, they remain high as a result of the saturation of production and refining capacity. The market is still seeking its new equilibrium. (author)

  11. Oil supply and demand

    International Nuclear Information System (INIS)

    Rech, O.

    2006-01-01

    The year 2004 saw a change in the oil market paradigm that was confirmed in 2005. Despite a calmer geopolitical context, prices continued to rise vigorously. Driven by world demand, they remain high as a result of the saturation of production and refining capacity. The market is still seeking its new equilibrium. (author)

  12. Oil supply and demand

    International Nuclear Information System (INIS)

    Babusiaux, D.

    2004-01-01

    Following the military intervention in Iraq, it is taking longer than expected for Iraqi exports to make a comeback on the market. Demand is sustained by economic growth in China and in the United States. OPEC is modulating production to prevent inventory build-up. Prices have stayed high despite increased production by non-OPEC countries, especially Russia. (author)

  13. Performance Measurements in a High Throughput Computing Environment

    CERN Document Server

    AUTHOR|(CDS)2145966; Gribaudo, Marco

    The IT infrastructures of companies and research centres are implementing new technologies to satisfy the increasing need of computing resources for big data analysis. In this context, resource profiling plays a crucial role in identifying areas where the improvement of the utilisation efficiency is needed. In order to deal with the profiling and optimisation of computing resources, two complementary approaches can be adopted: the measurement-based approach and the model-based approach. The measurement-based approach gathers and analyses performance metrics executing benchmark applications on computing resources. Instead, the model-based approach implies the design and implementation of a model as an abstraction of the real system, selecting only those aspects relevant to the study. This Thesis originates from a project carried out by the author within the CERN IT department. CERN is an international scientific laboratory that conducts fundamental researches in the domain of elementary particle physics. The p...

  14. A PROFICIENT MODEL FOR HIGH END SECURITY IN CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    R. Bala Chandar

    2014-01-01

    Full Text Available Cloud computing is an inspiring technology because of its ability to ensure scalable services and to reduce the burden of local hardware and software management while increasing flexibility and scalability. A key trait of cloud services is the remote processing of data. Even though this technology offers many services, there are a few concerns: data stored on the server side can be tampered with, data owners lose control over their data, and cloud computing does not enforce the access controls on outsourced data desired by the data owner. To handle these issues, we propose a new model that ensures data correctness for assurance of stored data, distributed accountability for authentication, and efficient access control of outsourced data for authorization. This model strengthens the correctness of data, helps to achieve cloud data integrity, supports data owners in keeping control of their own data through tracking, and improves the access control of outsourced data.

  15. High-pressure fluid phase equilibria phenomenology and computation

    CERN Document Server

    Deiters, Ulrich K

    2012-01-01

    The book begins with an overview of the phase diagrams of fluid mixtures (fluid = liquid, gas, or supercritical state), which can show an astonishing variety when elevated pressures are taken into account; phenomena like retrograde condensation (single and double) and azeotropy (normal and double) are discussed. It then gives an introduction to the relevant thermodynamic equations for fluid mixtures, including some that are rarely found in modern textbooks, and shows how they can be used to compute phase diagrams and related properties. This chapter gives a consistent and axiomatic approach to fluid thermodynamics; it avoids using activity coefficients. Further chapters are dedicated to solid-fluid phase equilibria and global phase diagrams (systematic search for phase diagram classes). The appendix contains numerical algorithms needed for the computations. The book thus enables the reader to create or improve computer programs for the calculation of fluid phase diagrams. introduces phase diagram class...

  16. High resolution computed tomographic features of pulmonary alveolar microlithiasis

    International Nuclear Information System (INIS)

    Deniz, Omer; Ors, Fatih; Tozkoparan, Ergun; Ozcan, Ayhan; Gumus, Seyfettin; Bozlar, Ugur; Bilgic, Hayati; Ekiz, Kudret; Demirci, Necmettin

    2005-01-01

    Background: Pulmonary alveolar microlithiasis (PAM) is a rare, chronic lung disease with unknown etiology and with a nonuniform clinical course. Nonuniformity of the clinical course might be related to the degree of pulmonary parenchymal alterations, which can be revealed with high resolution computed tomography (HRCT). However, the HRCT findings of PAM have not been fully described in the current literature. Aim: The aim of this study was to interpret and help describe the HRCT findings of PAM and to investigate a correlation between the profusion of micro nodules (MN) and pulmonary parenchymal alterations in patients with PAM. Material and methods: Ten male patients with PAM (mean age: 22 ± 3.2 years) were included in the study. HRCT images were assessed for patterns, distribution, and profusion of pulmonary abnormalities. Dividing the lungs into three zones, the profusion of abnormalities was assessed. A profusion score (1-4) was given and the scores of each zone were then summed to obtain a global profusion score for HRCT ranging from 0 to 12. Also, a parenchymal alteration score (PAS) was defined with respect to the profusion of abnormalities. Chest X-rays were also scored. Results: All ten patients with PAM had findings of interstitial lung disease in varying degrees on their HRCTs. The HRCT findings of patients with PAM were as follows: MN, parenchymal bands (PB), ground glass opacity (GGO) and subpleural interstitial thickening (SPIT) in 10 patients; interlobular septal thickening (ILST) in 9 patients; paraseptal emphysema (PSA) in 8 patients; centrilobular emphysema (CLA) in 7 patients; bronchiectasis (BE) and confluent micro nodules (CMN) in 6 patients; peribronchovascular interstitial thickening (PBIT) in 5 patients; panacinar emphysema (PANAA) in 3 patients; pleural calcification (PC) in 2 patients. A significant correlation was found between MN scores and PAS (r = 0.68, p = 0.031), MN scores and GGO scores (r = 0.69, p = 0.027) and MN scores and CLA scores (r = 0.67, p = 0

  17. High Resolution Computed Tomography in Asthma 

    Directory of Open Access Journals (Sweden)

    Nabil Maradny

    2012-03-01

    Full Text Available Objectives: High-resolution computed tomography (HRCT) can detect the structural abnormalities in asthma. This study attempts to correlate these abnormalities with clinical and pulmonary function test (PFT) data. Methods: Consecutive stable asthma patients attending Mubarak Al Kabeer Hospital, Kuwait, were subjected to HRCT during a six-month period from July 2004 to December 2004, after initial evaluation and PFT. Results: Of the 28 cases, sixteen (57.1%) had moderate, 6 (21.4%) had mild and 6 (21.4%) had severe persistent asthma. Thirteen (46.4%) patients had asthma for 1 to 5 years and 12 (42.9%) had asthma for >10 years. Bronchial wall thickening (57.1%), bronchiectasis (28.6%), mucoid impaction (17.9%), mosaic attenuation (10.7%), air trapping (78.6%) and plate-like atelectasis (21.4%) were noted. Bronchial wall thickening (p=0.044) and bronchiectasis (p=0.063) were most prevalent in males. Ten (35.7%) patients exhibited mild, 9 (32.1%) had moderate and 3 (10.7%) had severe air trapping. The difference in Hounsfield units between expiratory and inspiratory slices (air trapping), when correlated with percent-predicted FEV1 in the right upper (r=0.25; p=0.30), left upper (r=0.20; p=0.41), right mid (r=0.15; p=0.53), left mid (r=-0.04; p=0.60), right lower (r=0.04; p=0.86) and left lower zones (r=-0.13; p=0.58), showed no relation. The same values, when correlated as above with the percent-predicted FEF 25-75, did not show any significant association. The presence of air trapping was compared with sex (p=0.640), nationality (p=1.000), disease duration (p=1.000) and severity of symptoms (p=0.581). Conclusion: Abnormal HRCT findings are common in asthma; however, air trapping, when present, was not related to the duration or severity of the illness or to the FEV1.

  18. Scalability of DL_POLY on High Performance Computing Platform

    CSIR Research Space (South Africa)

    Mabakane, Mabule S

    2017-12-01

    Full Text Available SACJ 29(3) December... when using many processors within the compute nodes of the supercomputer. The type of the processors of compute nodes and their memory also play an important role in the overall performance of the parallel application running on a supercomputer. DL...

  19. Computation of order and volume fill rates for a base stock inventory control system with heterogeneous demand to investigate which customer class gets the best service

    OpenAIRE

    Larsen, Christian

    2006-01-01

    We consider a base stock inventory control system serving two customer classes whose demands are generated by two independent compound renewal processes. We show how to derive order and volume fill rates of each class. Based on assumptions about first order stochastic dominance we prove when one customer class will get the best service. That theoretical result is validated through a series of numerical experiments which also reveal that it is quite robust.

  20. Failure detection in high-performance clusters and computers using chaotic map computations

    Science.gov (United States)

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10.sup.18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
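
    The underlying idea is that healthy components iterating the same chaotic map from the same seed reproduce the trajectory exactly, while a faulty component introduces a tiny error that the map amplifies exponentially. The sketch below illustrates this with the logistic map and a simple trajectory comparison; it is a toy rendering of the concept, not the patented implementation, and the map parameter, tolerance and seed are arbitrary choices.

        def logistic_trajectory(x0, steps, r=3.99):
            """Iterate the chaotic logistic map x -> r * x * (1 - x)."""
            x, out = x0, []
            for _ in range(steps):
                x = r * x * (1.0 - x)
                out.append(x)
            return out

        def detect_failure(trajectories, tol=1e-9):
            """Flag nodes whose trajectory drifts from the reference node's trajectory."""
            reference = trajectories[0]
            return [node for node, traj in enumerate(trajectories[1:], start=1)
                    if any(abs(a - b) > tol for a, b in zip(reference, traj))]

        seed = 0.123456789
        good = logistic_trajectory(seed, 200)
        bad = logistic_trajectory(seed + 1e-15, 200)   # simulated faulty component
        print(detect_failure([good, good, bad]))       # -> [2]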

  1. Digital imaging of root traits (DIRT): a high-throughput computing and collaboration platform for field-based root phenomics.

    Science.gov (United States)

    Das, Abhiram; Schneider, Hannah; Burridge, James; Ascanio, Ana Karine Martinez; Wojciechowski, Tobias; Topp, Christopher N; Lynch, Jonathan P; Weitz, Joshua S; Bucksch, Alexander

    2015-01-01

    Plant root systems are key drivers of plant function and yield. They are also under-explored targets to meet global food and energy demands. Many new technologies have been developed to characterize crop root system architecture (CRSA). These technologies have the potential to accelerate the progress in understanding the genetic control and environmental response of CRSA. Putting this potential into practice requires new methods and algorithms to analyze CRSA in digital images. Most prior approaches have solely focused on the estimation of root traits from images, yet no integrated platform exists that allows easy and intuitive access to trait extraction and analysis methods from images combined with storage solutions linked to metadata. Automated high-throughput phenotyping methods are increasingly used in laboratory-based efforts to link plant genotype with phenotype, whereas similar field-based studies remain predominantly manual low-throughput. Here, we present an open-source phenomics platform "DIRT", as a means to integrate scalable supercomputing architectures into field experiments and analysis pipelines. DIRT is an online platform that enables researchers to store images of plant roots, measure dicot and monocot root traits under field conditions, and share data and results within collaborative teams and the broader community. The DIRT platform seamlessly connects end-users with large-scale compute "commons" enabling the estimation and analysis of root phenotypes from field experiments of unprecedented size. DIRT is an automated high-throughput computing and collaboration platform for field based crop root phenomics. The platform is accessible at http://www.dirt.iplantcollaborative.org/ and hosted on the iPlant cyber-infrastructure using high-throughput grid computing resources of the Texas Advanced Computing Center (TACC). DIRT is a high volume central depository and high-throughput RSA trait computation platform for plant scientists working on crop roots

  2. Effective computing algorithm for maintenance optimization of highly reliable systems

    Czech Academy of Sciences Publication Activity Database

    Briš, R.; Byczanski, Petr

    2013-01-01

    Roč. 109, č. 1 (2013), s. 77-85 ISSN 0951-8320 R&D Projects: GA MŠk(CZ) ED1.1.00/02.0070 Institutional support: RVO:68145535 Keywords : exact computing * maintenance * optimization * unavailability Subject RIV: BA - General Mathematics Impact factor: 2.048, year: 2013 http://www.sciencedirect.com/science/article/pii/S0951832012001639

  3. High speed switching for computer and communication networks

    NARCIS (Netherlands)

    Dorren, H.J.S.

    2014-01-01

    The role of data centers and computers is vital for the future of our data-centric society. Historically, the performance of data centers has increased by a factor of 100-1000 every ten years, and as a result the capacity of the data-center communication network has to scale accordingly. This

  4. Running Batch Jobs on Peregrine | High-Performance Computing | NREL

    Science.gov (United States)

    and run your application. Users typically create or edit job scripts using a text editor such as vi. Using Resource Feature to Request Different Node Types: Peregrine has several types of compute nodes, which differ in the amount of memory and number of processor cores. The majority of the nodes have 24

  5. Running Interactive Jobs on Peregrine | High-Performance Computing | NREL

    Science.gov (United States)

    shell prompt, which allows users to execute commands and scripts as they would on the login nodes. Login performed on the compute nodes rather than on login nodes. This page provides instructions and examples of , start GUIs etc. and the commands will execute on that node instead of on the login node. The -V option

  6. Simulating elastic light scattering using high performance computing methods

    NARCIS (Netherlands)

    Hoekstra, A.G.; Sloot, P.M.A.; Verbraeck, A.; Kerckhoffs, E.J.H.

    1993-01-01

    The Coupled Dipole method, as originally formulated by Purcell and Pennypacker, is a very powerful method to simulate the Elastic Light Scattering from arbitrary particles. This method, which is a particle simulation model for Computational Electromagnetics, has one major drawback: if the size of the

  7. High performance computer code for molecular dynamics simulations

    International Nuclear Information System (INIS)

    Levay, I.; Toekesi, K.

    2007-01-01

    Complete text of publication follows. Molecular Dynamics (MD) simulation is a widely used technique for modeling complicated physical phenomena. Since 2005 we have been developing an MD simulation code for PC computers. The computer code is written in the C++ object-oriented programming language. The aim of our work is twofold: a) to develop a fast computer code for the study of the random walk of guest atoms in a Be crystal, and b) three-dimensional (3D) visualization of the particles' motion. In this case we mimic the motion of the guest atoms in the crystal (diffusion-type motion) and the motion of atoms in the crystal lattice (crystal deformation). Nowadays it is common to use graphics devices for intensive computational problems. There are several ways to use this extreme processing performance, but programming these devices has never been as easy as it is now. The CUDA (Compute Unified Device Architecture) introduced by the nVidia Corporation in 2007 is very useful for every processor-hungry application. A unified-architecture GPU includes 96-128 or more stream processors, so the raw calculation performance is 576(!) GFLOPS. It is ten times faster than the fastest dual-core CPU [Fig. 1]. Our improved MD simulation software uses this new technology, which speeds up our software; the code runs 10 times faster in the critical calculation code segment. Although the GPU is a very powerful tool, it has a strongly parallel structure. This means that we have to create an algorithm that works on several processors without deadlock. Our code currently uses 256 threads and shared and constant on-chip memory instead of global memory, which is about 100 times slower. It is possible to implement the total algorithm on the GPU, therefore we do not need to download and upload the data in every iteration. For maximal throughput, every thread runs with the same instructions

  8. Worksite interventions for preventing physical deterioration among employees in job-groups with high physical work demands: Background, design and conceptual model of FINALE

    Directory of Open Access Journals (Sweden)

    Mortensen Ole S

    2010-03-01

    Full Text Available Abstract Background A mismatch between individual physical capacities and physical work demands enhance the risk for musculoskeletal disorders, poor work ability and sickness absence, termed physical deterioration. However, effective intervention strategies for preventing physical deterioration in job groups with high physical demands remains to be established. This paper describes the background, design and conceptual model of the FINALE programme, a framework for health promoting interventions at 4 Danish job groups (i.e. cleaners, health-care workers, construction workers and industrial workers) characterized by high physical work demands, musculoskeletal disorders, poor work ability and sickness absence. Methods/Design A novel approach of the FINALE programme is that the interventions, i.e. 3 randomized controlled trials (RCT) and 1 exploratory case-control study are tailored to the physical work demands, physical capacities and health profile of workers in each job-group. The RCT among cleaners, characterized by repetitive work tasks and musculoskeletal disorders, aims at making the cleaners less susceptible to musculoskeletal disorders by physical coordination training or cognitive behavioral theory based training (CBTr). Because health-care workers are reported to have high prevalence of overweight and heavy lifts, the aim of the RCT is long-term weight-loss by combined physical exercise training, CBTr and diet. Construction work, characterized by heavy lifting, pushing and pulling, the RCT aims at improving physical capacity and promoting musculoskeletal and cardiovascular health. At the industrial work-place characterized by repetitive work tasks, the intervention aims at reducing physical exertion and musculoskeletal disorders by combined physical exercise training, CBTr and participatory ergonomics. The overall aim of the FINALE programme is to improve the safety margin between individual resources (i.e. physical capacities, and

  9. Worksite interventions for preventing physical deterioration among employees in job-groups with high physical work demands: background, design and conceptual model of FINALE.

    Science.gov (United States)

    Holtermann, Andreas; Jørgensen, Marie B; Gram, Bibi; Christensen, Jeanette R; Faber, Anne; Overgaard, Kristian; Ektor-Andersen, John; Mortensen, Ole S; Sjøgaard, Gisela; Søgaard, Karen

    2010-03-09

    A mismatch between individual physical capacities and physical work demands enhance the risk for musculoskeletal disorders, poor work ability and sickness absence, termed physical deterioration. However, effective intervention strategies for preventing physical deterioration in job groups with high physical demands remains to be established. This paper describes the background, design and conceptual model of the FINALE programme, a framework for health promoting interventions at 4 Danish job groups (i.e. cleaners, health-care workers, construction workers and industrial workers) characterized by high physical work demands, musculoskeletal disorders, poor work ability and sickness absence. A novel approach of the FINALE programme is that the interventions, i.e. 3 randomized controlled trials (RCT) and 1 exploratory case-control study are tailored to the physical work demands, physical capacities and health profile of workers in each job-group. The RCT among cleaners, characterized by repetitive work tasks and musculoskeletal disorders, aims at making the cleaners less susceptible to musculoskeletal disorders by physical coordination training or cognitive behavioral theory based training (CBTr). Because health-care workers are reported to have high prevalence of overweight and heavy lifts, the aim of the RCT is long-term weight-loss by combined physical exercise training, CBTr and diet. Construction work, characterized by heavy lifting, pushing and pulling, the RCT aims at improving physical capacity and promoting musculoskeletal and cardiovascular health. At the industrial work-place characterized by repetitive work tasks, the intervention aims at reducing physical exertion and musculoskeletal disorders by combined physical exercise training, CBTr and participatory ergonomics. The overall aim of the FINALE programme is to improve the safety margin between individual resources (i.e. physical capacities, and cognitive and behavioral skills) and physical work demands

  10. Using NCLab-karel to improve computational thinking skill of junior high school students

    Science.gov (United States)

    Kusnendar, J.; Prabawa, H. W.

    2018-05-01

    The increasing human interaction with technology and the increasingly complex development of the digital world make computer science education an interesting theme to study. Previous studies on Computer Literacy and Competency reveal that Indonesian teachers in general have fairly high computational skill, but their use of that skill is limited to a few applications. This results in limited and minimal computer-related learning for the students. On the other hand, computer science education is considered unrelated to real-world solutions. This paper addresses the utilization of NCLab-Karel in shaping computational thinking in students, a way of thinking that is believed to help students learn about technology. Implementation shows that Karel is able to increase student interest in studying computational material, especially algorithms. Observations made during the learning process also indicate the growth and development of a computational mindset in students.
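
    For readers unfamiliar with Karel-style environments: exercises are built from a handful of robot primitives such as move and turn_left. The sketch below is a generic Python stand-in for that style of exercise; NCLab's own Karel dialect has its own commands and environment, so none of the names here are taken from it.

        class Karel:
            """A tiny Karel-style robot on a grid."""
            HEADINGS = [(1, 0), (0, 1), (-1, 0), (0, -1)]   # east, north, west, south

            def __init__(self, x=0, y=0, heading=0):
                self.x, self.y, self.heading = x, y, heading

            def move(self):
                dx, dy = self.HEADINGS[self.heading]
                self.x, self.y = self.x + dx, self.y + dy

            def turn_left(self):
                self.heading = (self.heading + 1) % 4

        # A typical beginner task: walk the robot around a 3 x 3 square.
        robot = Karel()
        for _ in range(4):
            for _ in range(3):
                robot.move()
            robot.turn_left()
        print((robot.x, robot.y))   # back at (0, 0)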

  11. Serum brain-derived neurotrophic factor and interleukin-6 response to high-volume mechanically demanding exercise.

    Science.gov (United States)

    Verbickas, Vaidas; Kamandulis, Sigitas; Snieckus, Audrius; Venckunas, Tomas; Baranauskiene, Neringa; Brazaitis, Marius; Satkunskiene, Danguole; Unikauskas, Alvydas; Skurvydas, Albertas

    2018-01-01

    The aim of this study was to follow circulating brain-derived neurotrophic factor (BDNF) and interleukin-6 (IL-6) levels in response to severe muscle-damaging exercise. Young healthy men (N = 10) performed a bout of mechanically demanding stretch-shortening cycle exercise consisting of 200 drop jumps. Voluntary and electrically induced knee extension torque, serum BDNF levels, and IL-6 levels were measured before and for up to 7 days after exercise. Muscle force decreased by up to 40% and did not recover by 24 hours after exercise. Serum BDNF was decreased 1 hour and 24 hours after exercise, whereas IL-6 increased immediately and 1 hour after but recovered to baseline by 24 hours after exercise. IL-6 and 100-Hz stimulation torque were correlated (r = -0.64, P exercise. In response to acute, severe muscle-damaging exercise, serum BDNF levels decrease, whereas IL-6 levels increase and are associated with peripheral fatigue. Muscle Nerve 57: E46-E51, 2018. © 2017 Wiley Periodicals, Inc.

  12. Lightweight high-performance 1-4 meter class spaceborne mirrors: emerging technology for demanding spaceborne requirements

    Science.gov (United States)

    Hull, Tony; Hartmann, Peter; Clarkson, Andrew R.; Barentine, John M.; Jedamzik, Ralf; Westerhoff, Thomas

    2010-07-01

    Pending critical spaceborne requirements, including coronagraphic detection of exoplanets, demand exceptionally smooth mirror surfaces, aggressive lightweighting, and low-risk, cost-effective optical manufacturing methods. Simultaneous developments at Schott for the production of aggressively lightweighted (>90%) Zerodur® mirror blanks, and at L-3 Brashear for producing ultra-smooth surfaces on Zerodur®, are described. New L-3 techniques for large-mirror optical fabrication include Computer Controlled Optical Surfacing (CCOS) pioneered at L-3 Tinsley, and the world's largest MRF machine in place at L-3 Brashear. We propose that exceptional mirrors for the most critical spaceborne applications can now be produced with the technologies described.

  13. High performance stream computing for particle beam transport simulations

    International Nuclear Information System (INIS)

    Appleby, R; Bailey, D; Higham, J; Salt, M

    2008-01-01

    Understanding modern particle accelerators requires simulating charged particle transport through the machine elements. These simulations can be very time consuming due to the large number of particles and the need to consider many turns of a circular machine. Stream computing offers an attractive way to dramatically improve the performance of such simulations by calculating the simultaneous transport of many particles using dedicated hardware. Modern Graphics Processing Units (GPUs) are powerful and affordable stream computing devices. The results of simulations of particle transport through the booster-to-storage-ring transfer line of the DIAMOND synchrotron light source using an NVidia GeForce 7900 GPU are compared to the standard transport code MAD. It is found that particle transport calculations are suitable for stream processing and large performance increases are possible. The accuracy and potential speed gains are compared and the prospects for future work in the area are discussed
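
    The reason beam transport maps so well onto stream hardware is that, to first order, each machine element is a small transfer matrix and a whole bunch of particles can be pushed with a single matrix product. The sketch below shows that idea in one transverse plane with a toy drift and thin-quadrupole cell; it is a simplification for illustration, not the GPU code or the MAD model used in the paper, and all parameter values are arbitrary.

        import numpy as np

        def drift(length):
            """2x2 transfer matrix of a field-free drift."""
            return np.array([[1.0, length], [0.0, 1.0]])

        def thin_quad(focal_length):
            """2x2 transfer matrix of a thin quadrupole (focusing if focal_length > 0)."""
            return np.array([[1.0, 0.0], [-1.0 / focal_length, 1.0]])

        # Each column of `beam` is one particle (x, x'); transport all of them at once.
        rng = np.random.default_rng(0)
        beam = rng.normal(scale=[[1e-3], [1e-4]], size=(2, 100_000))
        cell = thin_quad(2.0) @ drift(1.5) @ thin_quad(-2.0) @ drift(1.5)
        beam_out = cell @ beam          # one matrix product moves the whole bunch
        print(beam_out.std(axis=1))     # r.m.s. size and divergence at the cell exit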

  14. Computational model of lightness perception in high dynamic range imaging

    Science.gov (United States)

    Krawczyk, Grzegorz; Myszkowski, Karol; Seidel, Hans-Peter

    2006-02-01

    An anchoring theory of lightness perception by Gilchrist et al. [1999] explains many characteristics of the human visual system, such as lightness constancy and its spectacular failures, which are important in the perception of images. The principal concept of this theory is the perception of complex scenes in terms of groups of consistent areas (frameworks). Such areas, following the gestalt theorists, are defined by the regions of common illumination. The key aspect of image perception is the estimation of lightness within each framework through anchoring to the luminance perceived as white, followed by the computation of the global lightness. In this paper we provide a computational model for automatic decomposition of HDR images into frameworks. We derive a tone mapping operator which predicts lightness perception of real-world scenes and aims at its accurate reproduction on low dynamic range displays. Furthermore, such a decomposition into frameworks opens new grounds for local image analysis in view of human perception.
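
    As a very loose illustration of the anchoring idea, and emphatically not the authors' operator (which decomposes the image into frameworks of consistent illumination rather than by simple quantiles), one can split pixels into a few log-luminance groups, take a bright percentile of each group as the luminance perceived as white, and express lightness relative to that anchor:

        import numpy as np

        def anchored_lightness(luminance, n_frameworks=3):
            """Crude anchoring sketch: per-framework lightness as a log ratio to
            the framework's own 'white' anchor (its 95th luminance percentile)."""
            log_lum = np.log10(np.maximum(luminance, 1e-6))
            edges = np.quantile(log_lum, np.linspace(0.0, 1.0, n_frameworks + 1))
            lightness = np.zeros_like(log_lum)
            for lo, hi in zip(edges[:-1], edges[1:]):
                mask = (log_lum >= lo) & (log_lum <= hi)
                if mask.any():
                    anchor = np.percentile(log_lum[mask], 95)   # perceived white
                    lightness[mask] = log_lum[mask] - anchor    # 0 at the anchor
            return lightness

        hdr = np.random.lognormal(mean=0.0, sigma=2.0, size=(256, 256))  # synthetic HDR luminance
        light = anchored_lightness(hdr)
        print(light.min(), light.max())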

  15. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    CERN Document Server

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2014-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost- efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  16. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    Science.gov (United States)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost- efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  17. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    International Nuclear Information System (INIS)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Muzaffar, Shahzad; Knight, Robert

    2015-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost- efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG). (paper)

  18. Toward a Computational Neuropsychology of High-Level Vision.

    Science.gov (United States)

    1984-08-20

    known as visual agnosia (also called "mindblindness"); this patient failed to recognize her nurses, got lost frequently when travelling familiar routes... visual agnosia are not blind: these patients can compare two shapes reliably when both are visible, but they cannot... visually recognize what an object is (although many can recognize objects by touch). This sort of agnosia has been well-documented in the literature (see

  19. WinHPC System Configuration | High-Performance Computing | NREL

    Science.gov (United States)

    ), login node (WinHPC02) and worker/compute nodes. The head node acts as the file, DNS, and license server . The login node is where the users connect to access the cluster. Node 03 has dual Intel Xeon E5530 2008 R2 HPC Edition. The login node, WinHPC02, is where users login to access the system. This is where

  20. Architecture and Programming Models for High Performance Intensive Computation

    Science.gov (United States)

    2016-06-29

    commands from the data processing center to the sensors is needed. It has been noted that the ubiquity of mobile communication devices offers the...commands from a Processing Facility by way of mobile Relay Stations. The activity of each component of this model other than the Merge module can be...evaluation of the initial system implementation. Gao also was in charge of the development of Fresh Breeze architecture backend on new many-core computers

  1. An experimental platform for triaxial high-pressure/high-temperature testing of rocks using computed tomography

    Science.gov (United States)

    Glatz, Guenther; Lapene, Alexandre; Castanier, Louis M.; Kovscek, Anthony R.

    2018-04-01

    A conventional high-pressure/high-temperature experimental apparatus for combined geomechanical and flow-through testing of rocks is not X-ray compatible. Additionally, current X-ray transparent systems for computed tomography (CT) of cm-sized samples are limited to design temperatures below 180 °C. We describe a novel, high-temperature (>400 °C), high-pressure (>2000 psi/>13.8 MPa confining, >10 000 psi/>68.9 MPa vertical load) triaxial core holder suitable for X-ray CT scanning. The new triaxial system permits time-lapse imaging to capture the role of effective stress on fluid distribution and porous medium mechanics. System capabilities are demonstrated using ultimate compressive strength (UCS) tests of Castlegate sandstone. In this case, flooding the porous medium with a radio-opaque gas such as krypton before and after the UCS test improves the discrimination of rock features such as fractures. The results of high-temperature tests are also presented. A Uintah Basin sample of immature oil shale is heated from room temperature to 459 °C under uniaxial compression. The sample contains kerogen that pyrolyzes as temperature rises, releasing hydrocarbons. Imaging reveals the formation of stress bands as well as the evolution and connectivity of the fracture network within the sample as a function of time.

  2. Voluntary medical male circumcision: matching demand and supply with quality and efficiency in a high-volume campaign in Iringa Region, Tanzania.

    Science.gov (United States)

    Mahler, Hally R; Kileo, Baldwin; Curran, Kelly; Plotkin, Marya; Adamu, Tigistu; Hellar, Augustino; Koshuma, Sifuni; Nyabenda, Simeon; Machaku, Michael; Lukobo-Durrell, Mainza; Castor, Delivette; Njeuhmeli, Emmanuel; Fimbo, Bennett

    2011-11-01

    The government of Tanzania has adopted voluntary medical male circumcision (VMMC) as an important component of its national HIV prevention strategy and is scaling up VMMC in eight regions nationwide, with the goal of reaching 2.8 million uncircumcised men by 2015. In a 2010 campaign lasting six weeks, five health facilities in Tanzania's Iringa Region performed 10,352 VMMCs, which exceeded the campaign's target by 72%, with an adverse event (AE) rate of 1%. HIV testing was almost universal during the campaign. Through the adoption of approaches designed to improve clinical efficiency-including the use of the forceps-guided surgical method, the use of multiple beds in an assembly line by surgical teams, and task shifting and task sharing-the campaign matched the supply of VMMC services with demand. Community mobilization and bringing client preparation tasks (such as counseling, testing, and client scheduling) out of the facility and into the community helped to generate demand. This case study suggests that a campaign approach can be used to provide high-volume quality VMMC services without compromising client safety, and provides a model for matching supply and demand for VMMC services in other settings.

  3. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    Science.gov (United States)

    Hadjidoukas, P. E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-03-01

    We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.
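
    The transitional sampler at the core of such UQ frameworks can be illustrated compactly. The sketch below is a minimal Python illustration of the TMCMC idea (annealing the likelihood exponent from prior to posterior while reweighting, resampling, and perturbing a particle population); it is not the Π4U implementation, and the toy Gaussian likelihood, the fixed exponent increment, and all names are assumptions made for illustration only.

      import numpy as np

      def log_likelihood(theta):
          # Toy target: standard normal likelihood in 2-D (stands in for an
          # expensive physical-model evaluation).
          return -0.5 * np.sum(theta**2, axis=-1)

      def tmcmc(n_samples=2000, dim=2, rng=np.random.default_rng(0)):
          # Start from a broad uniform prior and anneal the likelihood
          # exponent p from 0 (prior) to 1 (posterior) in stages.
          theta = rng.uniform(-5.0, 5.0, size=(n_samples, dim))
          logL = log_likelihood(theta)
          p = 0.0
          while p < 1.0:
              # Fixed small exponent increment here; the real algorithm
              # adapts it from the coefficient of variation of the weights.
              dp = min(0.2, 1.0 - p)
              w = np.exp(dp * (logL - logL.max()))
              w /= w.sum()
              # Resample the population according to the importance weights.
              idx = rng.choice(n_samples, size=n_samples, p=w)
              theta, logL = theta[idx], logL[idx]
              # Perturb each particle with a Metropolis step using a scaled
              # sample-covariance proposal.
              cov = 0.04 * np.cov(theta.T) + 1e-12 * np.eye(dim)
              prop = theta + rng.multivariate_normal(np.zeros(dim), cov, size=n_samples)
              logL_prop = log_likelihood(prop)
              accept = np.log(rng.uniform(size=n_samples)) < (p + dp) * (logL_prop - logL)
              theta[accept], logL[accept] = prop[accept], logL_prop[accept]
              p += dp
          return theta

      posterior_samples = tmcmc()
      print(posterior_samples.mean(axis=0), posterior_samples.std(axis=0))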

  4. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    International Nuclear Information System (INIS)

    Hadjidoukas, P.E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-01-01

    We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.

  5. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    Energy Technology Data Exchange (ETDEWEB)

    Hadjidoukas, P.E.; Angelikopoulos, P. [Computational Science and Engineering Laboratory, ETH Zürich, CH-8092 (Switzerland); Papadimitriou, C. [Department of Mechanical Engineering, University of Thessaly, GR-38334 Volos (Greece); Koumoutsakos, P., E-mail: petros@ethz.ch [Computational Science and Engineering Laboratory, ETH Zürich, CH-8092 (Switzerland)

    2015-03-01

    We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.

  6. Study of application technology of ultra-high speed computer to the elucidation of complex phenomena

    International Nuclear Information System (INIS)

    Sekiguchi, Tomotsugu

    1996-01-01

    As a first step toward applying ultra-high-speed computers to the elucidation of complex phenomena, the basic design of a numerical information library for decentralized computer networks is described. The system makes it possible to build an efficient application environment for ultra-high-speed computers that scales across different computing systems. We named the system Ninf (Network Information Library for High Performance Computing). The library technology is summarized as follows: library use in a distributed environment, numeric constants, retrieval of values, a library of special functions, a computing library, the Ninf library interface, the Ninf remote library, and registration. With this system, users can run programs that concentrate numerical-analysis expertise, with high precision, reliability, and speed. (S.Y.)
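
    The remote-library idea sketched in this record, invoking a numerical routine that executes on a remote high-performance host as if it were a local call, can be illustrated with Python's standard xmlrpc module. This is only an assumed analogy for the concept: Ninf's actual interface, protocol, and registration mechanism are not shown, and the naive solver below merely stands in for a tuned numerical kernel on the remote machine.

      # Server side: expose a numerical routine over the network.
      from xmlrpc.server import SimpleXMLRPCServer

      def solve_linear_system(a_rows, b):
          # Stand-in for a remote numerical kernel: naive Gaussian
          # elimination on a small dense system (lists of floats so the
          # arguments marshal cleanly over XML-RPC).
          a = [list(row) for row in a_rows]
          x = list(b)
          n = len(x)
          for i in range(n):
              pivot = a[i][i]
              for j in range(i + 1, n):
                  f = a[j][i] / pivot
                  for k in range(i, n):
                      a[j][k] -= f * a[i][k]
                  x[j] -= f * x[i]
          for i in reversed(range(n)):
              x[i] = (x[i] - sum(a[i][k] * x[k] for k in range(i + 1, n))) / a[i][i]
          return x

      if __name__ == "__main__":
          server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
          server.register_function(solve_linear_system)
          server.serve_forever()

      # Client side (run in a separate process): the call looks local.
      #   from xmlrpc.client import ServerProxy
      #   remote = ServerProxy("http://localhost:8000")
      #   print(remote.solve_linear_system([[2.0, 1.0], [1.0, 3.0]], [3.0, 5.0]))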

  7. Using High Performance Computing to Support Water Resource Planning

    Energy Technology Data Exchange (ETDEWEB)

    Groves, David G. [RAND Corporation, Santa Monica, CA (United States); Lembert, Robert J. [RAND Corporation, Santa Monica, CA (United States); May, Deborah W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Leek, James R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Syme, James [RAND Corporation, Santa Monica, CA (United States)

    2015-10-22

    In recent years, decision support modeling has embraced deliberation-with-analysis, an iterative process in which decisionmakers come together with experts to evaluate a complex problem and alternative solutions in a scientifically rigorous and transparent manner. Simulation modeling supports decisionmaking throughout this process; visualizations enable decisionmakers to assess how proposed strategies stand up over time in uncertain conditions. But running these simulation models on standard computers can be slow. This, in turn, can slow the entire decisionmaking process, interrupting valuable interaction between decisionmakers and analytics.

  8. Can high psychological job demands, low decision latitude, and high job strain predict disability pensions? A 12-year follow-up of middle-aged Swedish workers.

    OpenAIRE

    Canivet, Catarina; Choi, Bongkyoo; Karasek, Robert; Moghaddassi, Mahnaz; Staland-Nyman, Carin; Östergren, Per-Olof

    2013-01-01

    OBJECTIVES: The aim of this study was to investigate whether job strain, psychological demands, and decision latitude are independent determinants of disability pension rates over a 12-year follow-up period. METHODS: We studied 3,181 men and 3,359 women, all middle-aged and working at least 30 h per week, recruited from the general population of Malmö, Sweden, in 1992. The participation rate was 41 %. Baseline data include sociodemographics, the Job Content Questionnaire, lifestyle, a...

  9. CUDA/GPU Technology : Parallel Programming For High Performance Scientific Computing

    OpenAIRE

    YUHENDRA; KUZE, Hiroaki; JOSAPHAT, Tetuko Sri Sumantyo

    2009-01-01

    Graphics processing units (GPUs), originally designed for computer video cards, have emerged as the most powerful chip in a high-performance workstation. In terms of high-performance computation capabilities, GPUs deliver much more powerful performance than conventional CPUs by means of parallel processing. In 2007, the birth of the Compute Unified Device Architecture (CUDA) and CUDA-enabled GPUs by NVIDIA Corporation brought a revolution in the general-purpose GPU a...
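
    The data-parallel kernel model that CUDA exposes can be sketched from Python through the Numba compiler. This is an assumption for illustration: the record discusses CUDA C on NVIDIA GPUs rather than Numba, and the vector-add kernel below is a generic example, not code from the cited work. It also requires a CUDA-capable GPU visible to Numba to run.

      import numpy as np
      from numba import cuda

      @cuda.jit
      def vector_add(a, b, out):
          # Each CUDA thread handles one array element.
          i = cuda.grid(1)
          if i < out.shape[0]:
              out[i] = a[i] + b[i]

      n = 1_000_000
      a = np.random.rand(n).astype(np.float32)
      b = np.random.rand(n).astype(np.float32)

      # Explicit host-to-device transfers, kernel launch, and copy back.
      d_a = cuda.to_device(a)
      d_b = cuda.to_device(b)
      d_out = cuda.device_array_like(a)

      threads_per_block = 256
      blocks = (n + threads_per_block - 1) // threads_per_block
      vector_add[blocks, threads_per_block](d_a, d_b, d_out)

      out = d_out.copy_to_host()
      print(np.allclose(out, a + b))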

  10. High Performance Computing Facility Operational Assessment, CY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Bland, Arthur S Buddy [ORNL; Boudwin, Kathlyn J. [ORNL; Hack, James J [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL; Hudson, Douglas L [ORNL

    2012-02-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these, we report in this review the 300 that are consistent with the guidance provided. Scientific achievements by OLCF users cut across all length scales, from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to run billion-cell CFD calculations to develop shock-wave-compression turbomachinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbomachinery to learn what efficiencies the traditional steady-flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation

  11. Calculation of demands for nuclear fuels and fuel cycle services. Description of computer model and strategies developed by Working Group 1

    International Nuclear Information System (INIS)

    Working Group 1 examined a range of reactor deployment strategies and fuel cycle options, in order to estimate the range of nuclear fuel requirements and fuel cycle service needs which would result. The computer model, its verification in comparison with other models, the strategies to be examined through use of the model, and the range of results obtained are described.

  12. Exploring the relationships between high involvement work system practices, work demands and emotional exhaustion : A multi-level study.

    NARCIS (Netherlands)

    Oppenauer, V.; van de Voorde, F.C.

    2018-01-01

    This study explores the impact of enacted high involvement work systems (HIWS) practices on employee emotional exhaustion. This study hypothesized that work overload and job responsibility mediate the relationship between HIWS practices (ability, motivation, opportunity and work design HIWS

  13. Calibration of high-resolution electronic autocollimators with demanded low uncertainties using single reading head angle encoders

    International Nuclear Information System (INIS)

    Yandayan, Tanfer; Akgoz, S Asli; Asar, Muharrem

    2014-01-01

    Calibration of high-resolution electronic autocollimators is carried out at TUBITAK UME using an angle comparator to ensure direct traceability to the SI unit of plane angle, the radian (rad). The device is a specially designed air-bearing rotary table fitted with a commercially available angular encoder utilizing a single reading head. It is shown that high-resolution electronic autocollimators with a large measurement range (e.g. ±1000 arcsec) can be calibrated with an expanded uncertainty of 0.035 arcsec (k = 2) in conventional dimensional laboratory conditions by applying a good measurement strategy for single-reading-head angle encoders and taking simple but smart precautions. A description of the angle comparator is presented with various test results derived using different high-precision autocollimators, and a detailed uncertainty budget is given for the calibration of a high-resolution electronic autocollimator. (paper)
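
    As a reminder of what an expanded uncertainty with coverage factor k = 2 means, the short sketch below combines a handful of standard-uncertainty components in quadrature and scales the result by k. The component names and values are invented for illustration and are not the uncertainty budget reported in this record.

      import math

      # Hypothetical standard-uncertainty components for an autocollimator
      # calibration, all in arcseconds (illustrative values only).
      components = {
          "angle_comparator_reference": 0.010,
          "autocollimator_repeatability": 0.008,
          "temperature_and_drift": 0.006,
          "alignment_and_geometry": 0.007,
      }

      # Combine in quadrature, then apply the coverage factor.
      combined = math.sqrt(sum(u**2 for u in components.values()))
      k = 2  # coverage factor for roughly 95 % coverage
      expanded = k * combined

      print(f"combined standard uncertainty: {combined:.3f} arcsec")
      print(f"expanded uncertainty (k=2):    {expanded:.3f} arcsec")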

  14. Computer Science in High School Graduation Requirements. ECS Education Trends (Updated)

    Science.gov (United States)

    Zinth, Jennifer

    2016-01-01

    Allowing high school students to fulfill a math or science high school graduation requirement via a computer science credit may encourage more students to pursue computer science coursework. This Education Trends report is an update to the original report released in April 2015 and explores state policies that allow or require districts to apply…

  15. VLab: A Science Gateway for Distributed First Principles Calculations in Heterogeneous High Performance Computing Systems

    Science.gov (United States)

    da Silveira, Pedro Rodrigo Castro

    2014-01-01

    This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures: the Virtual Laboratory for Earth and Planetary Materials (VLab). VLab was developed to leverage the aggregated computational power of grid systems to solve…

  16. The Relationship between Utilization of Computer Games and Spatial Abilities among High School Students

    Science.gov (United States)

    Motamedi, Vahid; Yaghoubi, Razeyah Mohagheghyan

    2015-01-01

    This study aimed at investigating the relationship between computer game use and spatial abilities among high school students. The sample consisted of 300 high school male students selected through multi-stage cluster sampling. Data gathering tools consisted of a researcher-made questionnaire (to collect information on computer game usage) and the…

  17. Fluid/Structure Interaction Studies of Aircraft Using High Fidelity Equations on Parallel Computers

    Science.gov (United States)

    Guruswamy, Guru; VanDalsem, William (Technical Monitor)

    1994-01-01

    Aeroelasticity, which involves strong coupling of fluids, structures, and controls, is an important element in designing an aircraft. Computational aeroelasticity using low-fidelity methods, such as the linear aerodynamic flow equations coupled with the modal structural equations, is well advanced. Though these low-fidelity approaches are computationally less intensive, they are not adequate for the analysis of modern aircraft such as the High Speed Civil Transport (HSCT) and the Advanced Subsonic Transport (AST), which can experience complex flow/structure interactions. The HSCT can experience vortex-induced aeroelastic oscillations, whereas the AST can experience transonic-buffet-associated structural oscillations. Both aircraft may experience a dip in flutter speed in the transonic regime. For accurate aeroelastic computations in these complex fluid/structure interaction situations, high-fidelity equations such as the Navier-Stokes equations for fluids and finite elements for structures are needed. Computations using these high-fidelity equations require large computational resources, both in memory and speed. Current conventional supercomputers have reached their limitations in both memory and speed. As a result, parallel computers have evolved to overcome the limitations of conventional computers. This paper addresses the transition that is taking place in computational aeroelasticity from conventional computers to parallel computers, and the special techniques needed to take advantage of the architecture of new parallel computers. Results are illustrated from computations made on the iPSC/860 and IBM SP2 computers using the ENSAERO code, which directly couples the Euler/Navier-Stokes flow equations with high-resolution finite-element structural equations.
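
    The coupled fluid/structure time stepping described above can be caricatured with a toy partitioned scheme: a one-degree-of-freedom structure advanced in time under a load that depends on its own motion. This is a conceptual sketch only; the stand-in functions below are assumptions for illustration and do not represent the Navier-Stokes and finite-element solvers coupled by the ENSAERO code.

      # Toy "fluid" load: an aerodynamic stiffness/damping force acting on
      # the structure's displacement and velocity (stands in for a flow solve).
      def fluid_load(x, v, q=0.8):
          return -q * x - 0.1 * q * v

      # Toy "structure": a single mass-spring-damper advanced one step with
      # explicit time integration (stands in for a finite-element solve).
      def structure_step(x, v, load, dt=1e-3, m=1.0, c=0.02, k=4.0):
          a = (load - c * v - k * x) / m
          return x + dt * v, v + dt * a

      # Staggered (loosely coupled) time loop: the fluid load computed from
      # the previous structural state drives the next structural step.
      x, v = 0.1, 0.0
      history = []
      for step in range(5000):
          load = fluid_load(x, v)
          x, v = structure_step(x, v, load)
          history.append(x)

      print("final displacement:", history[-1])
      print("peak displacement: ", max(abs(h) for h in history))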

  18. Harnessing the power of demand

    Energy Technology Data Exchange (ETDEWEB)

    Sheffrin, Anjali; Yoshimura, Henry; LaPlante, David; Neenan, Bernard

    2008-03-15

    Demand response can provide a series of economic services to the market and also provide "insurance value" under low-likelihood but high-impact circumstances in which grid reliability is enhanced. Here is how ISOs and RTOs are fostering demand response within wholesale electricity markets. (author)

  19. Mixed-Language High-Performance Computing for Plasma Simulations

    Directory of Open Access Journals (Sweden)

    Quanming Lu

    2003-01-01

    Full Text Available Java is receiving increasing attention as the most popular platform for distributed computing. However, programmers are still reluctant to embrace Java as a tool for writing scientific and engineering applications due to its still-noticeable performance drawbacks compared with other programming languages such as Fortran or C. In this paper, we present a hybrid Java/Fortran implementation of a parallel particle-in-cell (PIC) algorithm for plasma simulations. In our approach, the time-consuming components of this application are designed and implemented as Fortran subroutines, while less calculation-intensive components, usually involved in building the user interface, are written in Java. The two types of software modules have been glued together using the Java Native Interface (JNI). Our mixed-language PIC code was tested and its performance compared with pure Java and Fortran versions of the same algorithm on a Sun E6500 SMP system and a Linux cluster of Pentium III machines.
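
    The division of labor described here, compute-heavy kernels in a compiled language with a higher-level language as glue, can be illustrated without JNI by letting NumPy's compiled kernels play the role of the Fortran subroutines and plain Python the role of the Java front end. This is an analogy under stated assumptions, not the paper's Java/Fortran/JNI setup, and the one-dimensional particle push below is a toy stand-in for the PIC algorithm's hot loop.

      import numpy as np

      # "Compiled" hot loop: a vectorized particle push delegated to NumPy's
      # compiled kernels (the role played by Fortran subroutines in the paper).
      def push_particles(x, v, e_field, dt, length):
          grid = np.linspace(0.0, length, e_field.size)
          v = v + dt * np.interp(x, grid, e_field)   # gather field at particles
          x = (x + dt * v) % length                  # advance on a periodic domain
          return x, v

      # High-level "glue" code: setup, time loop, and diagnostics (the role
      # played by Java in the paper).
      rng = np.random.default_rng(1)
      length, n_particles, dt = 1.0, 10_000, 1e-3
      x = rng.uniform(0.0, length, n_particles)
      v = rng.normal(0.0, 0.05, n_particles)
      e_field = np.sin(2 * np.pi * np.linspace(0.0, length, 64))

      for step in range(1000):
          x, v = push_particles(x, v, e_field, dt, length)

      print("mean kinetic energy:", 0.5 * np.mean(v**2))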

  20. High resolution computed tomography of chronic otitis media

    International Nuclear Information System (INIS)

    Shirahata, Yuichi; Tachibana, Toshiro; Fukami, Masaya; Onishi, Toshiro; Doi, Osamu

    1986-01-01

    Seventy-six patients with chronic otitis media were examined by CT. Using three dried skulls, the epitympanum was packed with a piece of paraffin containing 2% iodine and studied with a CT scanner (Toshiba 60A-30) to clarify whether the paraffin could produce a soft tissue density on CT similar to that of cholesteatoma in the middle ear. The results showed that computed tomography was excellent in demonstrating a soft tissue mass in the middle ear in inflammatory disease. When middle ear infection with granulation tissue or cholesteatoma was present, however, the resulting soft tissue masses could not be distinguished from one another. CT scanning was useful for accurately locating bone destruction in the middle ear as well as in the ossicles. (author)