WorldWideScience

Sample records for high computational demand

  1. Ubiquitous Green Computing Techniques for High Demand Applications in Smart Environments

    Directory of Open Access Journals (Sweden)

    Jose M. Moya

    2012-08-01

    Full Text Available Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, impose constantly increasing computational demands in order to process data and offer services to users. The nature of these applications implies the use of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time.

  2. Ubiquitous green computing techniques for high demand applications in Smart environments.

    Science.gov (United States)

    Zapater, Marina; Sanchez, Cesar; Ayala, Jose L; Moya, Jose M; Risco-Martín, José L

    2012-01-01

    Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, impose constantly increasing computational demands in order to process data and offer services to users. The nature of these applications implies the use of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time.
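
    The two records above describe the same redistribution technique only at the level of its goal. As a purely illustrative aid, the sketch below shows what an application-aware, energy-minimizing assignment could look like in its simplest greedy form; the node capacities, per-unit power figures and task demands are invented for the example and are not taken from the paper.

```python
# Hypothetical greedy energy-aware assignment: low-demand tasks are moved from
# a high-performance facility to idle low/medium-resource WSN nodes whenever
# that lowers the estimated energy cost of running the task.

from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    capacity: float        # free compute units on this node
    power_per_unit: float  # energy cost (J) per compute unit, illustrative
    assigned: list = field(default_factory=list)

def assign(tasks, hpc, wsn_nodes):
    """Greedily place each task on the feasible node with the lowest energy cost."""
    for task_name, demand in sorted(tasks, key=lambda t: t[1]):
        candidates = [n for n in wsn_nodes + [hpc] if n.capacity >= demand]
        # fall back to the HPC facility if no WSN node can hold the task
        best = min(candidates or [hpc], key=lambda n: n.power_per_unit * demand)
        best.capacity -= demand
        best.assigned.append(task_name)
    return [hpc] + wsn_nodes

if __name__ == "__main__":
    hpc = Node("hpc-facility", capacity=100.0, power_per_unit=5.0)
    wsn = [Node("wsn-node-1", 4.0, 1.2), Node("wsn-node-2", 6.0, 1.0)]
    tasks = [("aggregate", 2.0), ("filter", 1.5), ("render", 40.0)]
    for node in assign(tasks, hpc, wsn):
        print(f"{node.name}: {node.assigned}")
```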

  3. Developing on-demand secure high-performance computing services for biomedical data analytics.

    Science.gov (United States)

    Robison, Nicholas; Anderson, Nick

    2013-01-01

    We propose a technical and process model to support biomedical researchers requiring on-demand high performance computing on potentially sensitive medical datasets. Our approach describes the use of cost-effective, secure and scalable techniques for processing medical information via protected and encrypted computing clusters within a model High Performance Computing (HPC) environment. The process model supports an investigator defined data analytics platform capable of accepting secure data migration from local clinical research data silos into a dedicated analytic environment, and secure environment cleanup upon completion. We define metrics to support the evaluation of this pilot model through performance and stability tests, and describe evaluation of its suitability towards enabling rapid deployment by individual investigators.

  4. Computational Imaging in Demanding Conditions

    Science.gov (United States)

    2015-11-18

    Performing organization: University of California, Santa Cruz, 1156 High Street, Santa Cruz, CA 95064. ...addressed recently in computational photography is that of producing a good picture of a poorly lit scene. The consensus approach for solving this...

  5. Computational Imaging in Demanding Conditions

    Science.gov (United States)

    2015-11-18

    ...addressed recently in computational photography is that of producing a good picture of a poorly lit scene. The consensus approach for solving this... state-of-the-art methods for both flash/no-flash denoising and deblurring. Also see the related project page. Nonlocal Image Editing: We...

  6. Gravitational demand on the neck musculature during tablet computer use.

    Science.gov (United States)

    Vasavada, Anita N; Nevins, Derek D; Monda, Steven M; Hughes, Ellis; Lin, David C

    2015-01-01

    Tablet computer use requires substantial head and neck flexion, which is a risk factor for neck pain. The goal of this study was to evaluate the biomechanics of the head-neck system during seated tablet computer use under a variety of conditions. A physiologically relevant variable, gravitational demand (the ratio of gravitational moment due to the weight of the head to maximal muscle moment capacity), was estimated using a musculoskeletal model incorporating subject-specific size and intervertebral postures from radiographs. Gravitational demand in postures adopted during tablet computer use was 3-5 times that of the neutral posture, with the lowest demand when the tablet was in a high propped position. Moreover, the estimated gravitational demand could be correlated to head and neck postural measures (0.48 …). These results provide quantitative data about mechanical requirements on the neck musculature during tablet computer use and are important for developing ergonomics guidelines. Practitioner Summary: Flexed head and neck postures occur during tablet computer use and are implicated in neck pain. The mechanical demand on the neck muscles was estimated to increase 3-5 times during seated tablet computer use versus seated neutral posture, with the lowest demand in a high propped tablet position but few differences in other conditions.
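
    The gravitational demand defined in this record is a simple ratio, so it can be illustrated with a few lines of code. The head mass, moment arms and extensor strength below are rough illustrative values, not the subject-specific model data used in the study; they are chosen only so that the flexed-to-neutral ratio lands in the 3-5x range reported above.

```python
def gravitational_demand(head_mass_kg, moment_arm_m, max_muscle_moment_nm):
    """Ratio of the gravitational moment about the neck to the maximal muscle moment."""
    g = 9.81  # gravitational acceleration, m/s^2
    gravitational_moment = head_mass_kg * g * moment_arm_m  # N*m
    return gravitational_moment / max_muscle_moment_nm

# Illustrative values only: a ~4.5 kg head whose centre of mass sits ~2 cm
# anterior to the joint axis in neutral posture and ~6 cm in a flexed
# tablet-reading posture, against an assumed 30 N*m extensor moment capacity.
neutral = gravitational_demand(4.5, 0.02, 30.0)
flexed = gravitational_demand(4.5, 0.06, 30.0)
print(f"neutral: {neutral:.2f}, flexed: {flexed:.2f}, ratio: {flexed / neutral:.1f}x")
```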

  7. Controlling Energy Demand in Mobile Computing Systems

    CERN Document Server

    Ellis, Carla

    2007-01-01

    This lecture provides an introduction to the problem of managing the energy demand of mobile devices. Reducing energy consumption, primarily with the goal of extending the lifetime of battery-powered devices, has emerged as a fundamental challenge in mobile computing and wireless communication. The focus of this lecture is on a systems approach where software techniques exploit state-of-the-art architectural features rather than relying only upon advances in lower-power circuitry or the slow improvements in battery technology to solve the problem. Fortunately, there are many opportunities to i

  8. Parallel high-performance grid computing: capabilities and opportunities of a novel demanding service and business class allowing highest resource efficiency.

    Science.gov (United States)

    Kepper, Nick; Ettig, Ramona; Dickmann, Frank; Stehr, Rene; Grosveld, Frank G; Wedemann, Gero; Knoch, Tobias A

    2010-01-01

    Especially in the life-science and health-care sectors, huge IT requirements arise from the large and complex systems that have to be analysed and simulated. Grid infrastructures play a rapidly increasing role here for research, diagnostics, and treatment, since they provide the necessary large-scale resources efficiently. Whereas grids were first used for huge number crunching of trivially parallelizable problems, parallel high-performance computing is increasingly required. Here, we show for the prime example of molecular dynamics simulations how the presence of large grid clusters including very fast network interconnects within grid infrastructures now allows efficient parallel high-performance grid computing, and thus combines the benefits of dedicated supercomputing centres and grid infrastructures. The demands of this service class are the highest, since the user group has very heterogeneous requirements: i) two to many thousands of CPUs, ii) different memory architectures, iii) huge storage capabilities, and iv) fast communication via network interconnects are all needed in different combinations and must be considered in a highly dedicated manner to reach the highest performance efficiency. Beyond this, advanced and dedicated i) interaction with users, ii) management of jobs, iii) accounting, and iv) billing not only combine classic with parallel high-performance grid usage, but more importantly can also increase the efficiency of IT resource providers. Consequently, the mere "yes-we-can" becomes a huge opportunity for, e.g., the life-science and health-care sectors as well as for grid infrastructures, by reaching a higher level of resource efficiency.

  9. Water demand forecasting: review of soft computing methods.

    Science.gov (United States)

    Ghalehkhondabi, Iman; Ardjmand, Ehsan; Young, William A; Weckman, Gary R

    2017-07-01

    Demand forecasting plays a vital role in resource management for governments and private companies. Considering the scarcity of water and its inherent constraints, demand management and forecasting in this domain are critically important. Several soft computing techniques have been developed over the last few decades for water demand forecasting. This study focuses on soft computing methods of water consumption forecasting published between 2005 and 2015. These methods include artificial neural networks (ANNs), fuzzy and neuro-fuzzy models, support vector machines, metaheuristics, and system dynamics. Furthermore, it is discussed that although ANNs have been superior in many short-term forecasting cases, it is still very difficult to pick a single method as the overall best. According to the literature, various methods and their hybrids are applied to water demand forecasting. However, it seems soft computing has a lot more to contribute to water demand forecasting. These contribution areas include, but are not limited to, various ANN architectures, unsupervised methods, deep learning, various metaheuristics, and ensemble methods. Moreover, it is found that soft computing methods are mainly used for short-term demand forecasting.
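
    As a concrete illustration of the ANN-based short-term forecasting the review surveys, the sketch below fits a small multilayer perceptron to a synthetic daily demand series using lagged consumption as inputs. It assumes NumPy and scikit-learn are available; the series, lag depth and network size are arbitrary choices for the example, not settings recommended by the review.

```python
# A minimal sketch of short-term water demand forecasting with a small ANN.
# The synthetic weekly-seasonal daily series stands in for real metered data.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
days = np.arange(730)
demand = 100 + 10 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 2, days.size)

# Build lagged features: predict today's demand from the previous 7 days.
lags = 7
X = np.column_stack([demand[i:-(lags - i)] for i in range(lags)])
y = demand[lags:]

split = 600
model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:split], y[:split])

mae = np.mean(np.abs(model.predict(X[split:]) - y[split:]))
print(f"one-day-ahead MAE on held-out data: {mae:.2f}")
```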

  10. Delivering Training for Highly Demanding Information Systems

    Science.gov (United States)

    Norton, Andrew Lawrence; Coulson-Thomas, Yvette May; Coulson-Thomas, Colin Joseph; Ashurst, Colin

    2012-01-01

    Purpose: There is a lack of research covering the training requirements of organisations implementing highly demanding information systems (HDISs). The aim of this paper is to help in the understanding of appropriate training requirements for such systems. Design/methodology/approach: This research investigates the training delivery within a…

  11. Resource Optimization Based on Demand in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Ramakrishnan Ramanathan

    2014-10-01

    Full Text Available Cloud computing gives the opportunity to dynamically scale the computing resources for an application. A cloud consists of a large number of resources, called a resource pool, which are shared among cloud consumers using virtualization technology; the virtualization technologies engaged in the cloud environment provide resource consolidation and management. The cloud comprises physical and virtual resources. From the cloud provider's perspective, performance depends on predicting the dynamic nature of users, user demands and application demands; from the cloud consumer's perspective, the job should be completed on time with minimum cost and limited resources. Finding an optimum resource allocation is difficult in huge systems such as clusters, data centres and grids. In this study we present two types of resource allocation schemes, Commitment Allocation (CA) and Over Commitment Allocation (OCA), at the physical and virtual resource levels. These resource allocation schemes help to identify virtual resource utilization and physical resource availability.
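
    The abstract names the two schemes but does not spell them out; the following minimal sketch illustrates one plausible reading of the difference between Commitment Allocation and Over Commitment Allocation at the admission-control level. The host capacity, VM requests and over-commitment ratio are illustrative assumptions, not values from the paper.

```python
# Commitment Allocation (CA) reserves the full requested capacity for every
# virtual machine, while Over-Commitment Allocation (OCA) admits requests up to
# an over-commitment ratio, betting that actual utilisation stays lower.

def can_admit_ca(requests_ghz, physical_ghz):
    return sum(requests_ghz) <= physical_ghz

def can_admit_oca(requests_ghz, physical_ghz, ratio=1.5):
    return sum(requests_ghz) <= physical_ghz * ratio

host_cpu = 32.0                           # GHz of physical CPU on one host
vm_requests = [8.0, 8.0, 8.0, 8.0, 8.0]   # five VMs asking for 8 GHz each

print("CA admits all five VMs: ", can_admit_ca(vm_requests, host_cpu))   # False
print("OCA admits all five VMs:", can_admit_oca(vm_requests, host_cpu))  # True

# With OCA the host stays safe only while the VMs' *actual* utilisation fits:
actual_use = [4.0, 5.0, 3.5, 6.0, 4.5]
print("Actual utilisation fits physical host:", sum(actual_use) <= host_cpu)
```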

  12. Radiation therapy calculations using an on-demand virtual cluster via cloud computing

    CERN Document Server

    Keyes, Roy W; Arnold, Dorian; Luan, Shuang

    2010-01-01

    Computer hardware costs are the limiting factor in producing highly accurate radiation dose calculations on convenient time scales. Because of this, large-scale, full Monte Carlo simulations and other resource intensive algorithms are often considered infeasible for clinical settings. The emerging cloud computing paradigm promises to fundamentally alter the economics of such calculations by providing relatively cheap, on-demand, pay-as-you-go computing resources over the Internet. We believe that cloud computing will usher in a new era, in which very large scale calculations will be routinely performed by clinics and researchers using cloud-based resources. In this research, several proof-of-concept radiation therapy calculations were successfully performed on a cloud-based virtual Monte Carlo cluster. Performance evaluations were made of a distributed processing framework developed specifically for this project. The expected 1/n performance was observed with some caveats. The economics of cloud-based virtual...
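
    The "expected 1/n performance" and pay-as-you-go economics mentioned above can be illustrated with a back-of-the-envelope calculation. The serial runtime and hourly node price below are invented numbers, not figures from the paper; the rounding of billed node-hours hints at the kind of caveats the authors allude to.

```python
# Ideal 1/n scaling of a Monte Carlo run on an on-demand virtual cluster, and
# the corresponding pay-as-you-go cost under per-node-hour billing.

import math

def cluster_estimate(serial_hours, n_nodes, price_per_node_hour):
    """Return ideal wall-clock time and the on-demand cost for n nodes."""
    wall_clock = serial_hours / n_nodes             # assumes perfect scaling
    billed_hours = math.ceil(wall_clock) * n_nodes  # billed per started node-hour
    return wall_clock, billed_hours * price_per_node_hour

serial_hours = 48.0   # hypothetical single-node Monte Carlo runtime
for n in (1, 8, 32, 64):
    t, cost = cluster_estimate(serial_hours, n, price_per_node_hour=0.10)
    print(f"{n:3d} nodes: {t:5.2f} h wall clock, ~${cost:.2f}")
```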

  13. Demand side management scheme in smart grid with cloud computing approach using stochastic dynamic programming

    Directory of Open Access Journals (Sweden)

    S. Sofana Reka

    2016-09-01

    Full Text Available This paper proposes a cloud computing framework in a smart grid environment by creating a small integrated energy hub that supports real-time computing for handling huge volumes of data. A stochastic programming model is developed with the cloud computing scheme for effective demand side management (DSM) in the smart grid. Simulation results are obtained using a GUI interface and the Gurobi optimizer in Matlab in order to reduce the electricity demand by creating energy networks in a smart hub approach.
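
    The record describes the approach only in outline. As an illustration of the underlying idea, the sketch below solves a toy expected-cost dynamic programme that schedules a deferrable appliance over uncertain hourly prices; the scenarios, probabilities and appliance parameters are invented, and the paper's own formulation (solved with Gurobi inside a cloud framework) is considerably richer.

```python
# Toy demand-side-management DP: run a deferrable appliance for a required
# number of one-hour cycles so that expected electricity cost is minimised
# over two equally likely price scenarios.

scenarios = [
    [0.30, 0.28, 0.20, 0.12, 0.10, 0.15],   # $/kWh, scenario A
    [0.25, 0.32, 0.26, 0.18, 0.14, 0.11],   # $/kWh, scenario B
]
probs = [0.5, 0.5]
horizon = len(scenarios[0])
cycles_needed = 2          # the appliance must run in 2 of the 6 hours
load_kwh = 1.5             # energy drawn per one-hour cycle

# Expected price per hour (prices are assumed unaffected by our decisions).
exp_price = [sum(p * s[t] for p, s in zip(probs, scenarios)) for t in range(horizon)]

# value[t][k] = minimal expected cost from hour t with k cycles still to run.
INF = float("inf")
value = [[INF] * (cycles_needed + 1) for _ in range(horizon + 1)]
value[horizon][0] = 0.0
for t in range(horizon - 1, -1, -1):
    for k in range(cycles_needed + 1):
        wait = value[t + 1][k]
        run = exp_price[t] * load_kwh + value[t + 1][k - 1] if k > 0 else INF
        value[t][k] = min(wait, run)

# Recover one optimal schedule by replaying the decisions.
schedule, k = [], cycles_needed
for t in range(horizon):
    run_cost = exp_price[t] * load_kwh + value[t + 1][k - 1] if k > 0 else INF
    if k > 0 and run_cost <= value[t + 1][k]:
        schedule.append(t)
        k -= 1
print("run appliance in hours:", schedule,
      "expected cost:", round(value[0][cycles_needed], 3))
```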

  14. COMPUTING THE VOCABULARY DEMANDS OF L2 READING

    Directory of Open Access Journals (Sweden)

    Tom Cobb

    2007-02-01

    Full Text Available Linguistic computing can make two important contributions to second language (L2) reading instruction. One is to resolve longstanding research issues that are based on an insufficiency of data for the researcher, and the other is to resolve related pedagogical problems based on insufficiency of input for the learner. The research section of the paper addresses the question of whether reading alone can give learners enough vocabulary to read. When the computer’s ability to process large amounts of both learner and linguistic data is applied to this question, it becomes clear that, for the vast majority of L2 learners, free or wide reading alone is not a sufficient source of vocabulary knowledge for reading. But computer processing also points to solutions to this problem. Through its ability to reorganize and link documents, the networked computer can increase the supply of vocabulary input that is available to the learner. The development section of the paper elaborates a principled role for computing in L2 reading pedagogy, with examples, in two broad areas: computer-based text design and computational enrichment of undesigned texts.
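
    The kind of computation the paper relies on, profiling how much of a text is covered by a given stock of known vocabulary, is easy to sketch. The toy reference corpus, text and vocabulary size below are illustrative only; real lexical profiling uses large frequency lists and word families rather than raw word forms.

```python
# Minimal lexical-coverage sketch: what share of the running words in a text
# falls within the "known" vocabulary, here approximated by the k most
# frequent word types of a (tiny, illustrative) reference corpus.

from collections import Counter
import re

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

reference_corpus = """the cat sat on the mat and the dog sat on the rug
                      the children read the book and the book was good"""
text_to_read = "the children and the cat read a good book on the mat"

k = 8
counts = Counter(tokens(reference_corpus))
ranked = sorted(counts.items(), key=lambda item: (-item[1], item[0]))
known = {word for word, _ in ranked[:k]}

text_tokens = tokens(text_to_read)
covered = sum(1 for w in text_tokens if w in known)
print("known vocabulary:", sorted(known))
print(f"coverage: {covered}/{len(text_tokens)} = {covered / len(text_tokens):.0%}")
```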

  15. The impact of object size and precision demands on fatigue during computer mouse use

    DEFF Research Database (Denmark)

    Aasa, Ulrika; Jensen, Bente Rona; Sandfeld, Jesper;

    2011-01-01

    Prolonged computer use, especially if fatigue ensues, is associated with visual and musculoskeletal symptoms. The aim was to determine the time-course of perceived fatigue in the wrist, forearm, shoulder and eyes during a 60-min mouse task (painting rectangles), and whether object size and/or mouse......, square paint cursor size 1.3 × 1.3 mm, and mouse–pointer movement ratio 1:26. At condition 2, the same cursor size and mouse–pointer movement ratio was used, but rectangles were smaller. At condition 3, the smaller rectangles were used, but the cursor size was also smaller and mouse–pointer movement...... not differ between conditions. In conclusion, computer work tasks imposing high visual and motor demands, and with high performance, seemed to have an influence on eye fatigue....

  16. Effect of aging on performance, muscle activation and perceived stress during mentally demanding computer tasks

    DEFF Research Database (Denmark)

    Alkjaer, Tine; Pilegaard, Marianne; Bakke, Merete

    2005-01-01

    OBJECTIVES: This study examined the effects of age on performance, muscle activation, and perceived stress during computer tasks with different levels of mental demand. METHODS: Fifteen young and thirteen elderly women performed two computer tasks [color word test and reference task] with different...... demands affect young and elderly women differently. Thus the mentally demanding computer task had a more pronounced effect on the elderly than on the young. In contrast to the results in the reference task, the same level of muscle activity for most muscles and the same level of self-reported difficulty...... levels of mental demand but similar physical demands. The performance (clicking frequency, percentage of correct answers, and response time for correct answers) and electromyography from the forearm, shoulder, and neck muscles were recorded. Visual analogue scales were used to measure the participants...

  17. Funding Public Computing Centers: Balancing Broadband Availability and Expected Demand

    Science.gov (United States)

    Jayakar, Krishna; Park, Eun-A

    2012-01-01

    The National Broadband Plan (NBP) recently announced by the Federal Communication Commission visualizes a significantly enhanced commitment to public computing centers (PCCs) as an element of the Commission's plans for promoting broadband availability. In parallel, the National Telecommunications and Information Administration (NTIA) has…

  18. Classroom computing using on demand desktop streaming / by Douglas Brinkley.

    OpenAIRE

    Brinkley, Douglas

    2010-01-01

    Two of the most popular choices for classroom computing are laptop PCs and thin-client devices. Deciding between the two is often a difficult decision because both platforms have their respective advantages. Modern laptops give excellent performance because of their powerful processors and large amounts of memory. Thin-clients reduce maintenance costs through centralized configuration management. The Naval Postgraduate School is achieving the advantages of both platforms by employing a new te...

  19. Refactoring Android Java Code for On-Demand Computation Offloading

    OpenAIRE

    Zhang, Ying; Huang, Gang; Liu, Xuanzhe; Zhang, Wei; Zhang, Wei; Mei, Hong; Yang, Shunxiang

    2012-01-01

    Computation offloading is a promising way to improve the performance as well as reduce the battery energy consumption of a smartphone application by executing some parts of the application on a remote server. Supporting such a capability is not easy for smartphone app developers, for 1) correctness: some code, e.g. that for GPS, gravity and other sensors, can only run on the smartphone, so the developers have to identify which parts of the application cannot be offload...

  20. High-throughput computing in the sciences.

    Science.gov (United States)

    Morgan, Mark; Grimshaw, Andrew

    2009-01-01

    While it is true that the modern computer is many orders of magnitude faster than that of yesteryear, this tremendous growth in CPU clock rates is now over. Unfortunately, however, the growth in demand for computational power has not abated; whereas researchers a decade ago could simply wait for computers to get faster, today the only solution to the growing need for more powerful computational resources lies in the exploitation of parallelism. Software parallelization falls generally into two broad categories--"true parallel" and high-throughput computing. This chapter focuses on the latter of these two types of parallelism. With high-throughput computing, users can run many copies of their software at the same time across many different computers. This technique for achieving parallelism is powerful in its ability to provide high degrees of parallelism, yet simple in its conceptual implementation. This chapter covers various patterns of high-throughput computing usage and the skills and techniques necessary to take full advantage of them. By utilizing numerous examples and sample codes and scripts, we hope to provide the reader not only with a deeper understanding of the principles behind high-throughput computing, but also with a set of tools and references that will prove invaluable as she explores software parallelism with her own software applications and research.
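
    The pattern described in this chapter summary, many independent copies of the same program run concurrently with no communication between them, can be sketched locally with a process pool; on a real HTC system each run would instead be a submitted job. The toy Monte Carlo task and the number of runs are illustrative.

```python
# Local stand-in for high-throughput computing: 32 independent runs of the
# same analysis over different inputs, executed in parallel worker processes.

from concurrent.futures import ProcessPoolExecutor
import random

def simulate(seed):
    """One independent 'copy of the software': a toy Monte Carlo estimate of pi."""
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(100_000))
    return 4 * hits / 100_000

if __name__ == "__main__":
    seeds = range(32)  # 32 independent runs, no communication between them
    with ProcessPoolExecutor() as pool:
        estimates = list(pool.map(simulate, seeds))
    print("mean of independent estimates:", sum(estimates) / len(estimates))
```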

  1. The economic impact of uncertain tourism demand in Hawaii: risk in a computable general equilibrium model

    OpenAIRE

    2009-01-01

    This thesis estimates the economic impact of uncertain tourism demand in Hawaii. It does this by incorporating risk into a Computable General Equilibrium (CGE) model. CGE models have been used to investigate a wide range of policy issues. To date, none have investigated how uncertainty regarding future tourism demand impacts on an economy. The context in which this research is set is the US State of Hawaii. The economy of Hawaii is heavily dependent on tourism as a source of income and a...

  2. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  3. High Performance Computing Today

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, Jack; Meuer,Hans; Simon,Horst D.; Strohmaier,Erich

    2000-04-01

    In the last 50 years, the field of scientific computing has seen a rapid change of vendors, architectures, technologies and the usage of systems. Despite all these changes, the evolution of performance on a large scale seems to be a very steady and continuous process. Moore's Law is often cited in this context. If the authors plot the peak performance of the various computers of the last 5 decades that could have been called the supercomputers of their time (Figure 1), they indeed see how well this law holds for almost the complete lifespan of modern computing. On average they see an increase in performance of two orders of magnitude every decade.
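
    The quoted growth rate is easy to sanity-check: two orders of magnitude per decade corresponds to a doubling time of roughly a year and a half, in line with the usual statement of Moore's Law.

```python
# Implied doubling time for a 100x performance increase per decade.

import math

growth_per_decade = 100.0
doubling_time_years = 10.0 / math.log2(growth_per_decade)
print(f"doubling time: {doubling_time_years:.2f} years "
      f"({doubling_time_years * 12:.0f} months)")   # ~1.5 years, ~18 months
```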

  4. Effective Management of High-Use/High-Demand Space Using Restaurant-Style Pagers

    Science.gov (United States)

    Gonzalez, Adriana

    2012-01-01

    The library landscape is changing at a fast pace, with an increase in the demand for study space, including quiet, individualized study space; open group study space; and enclosed group study space. In large academic libraries, managing limited high-demand resources is crucial and is partially being driven by the greater emphasis on group…

  5. A high-throughput bioinformatics distributed computing platform

    OpenAIRE

    Keane, Thomas M; Page, Andrew J.; McInerney, James O; Naughton, Thomas J.

    2005-01-01

    In the past number of years the demand for high performance computing has greatly increased in the area of bioinformatics. The huge increase in size of many genomic databases has meant that many common tasks in bioinformatics are not possible to complete in a reasonable amount of time on a single processor. Recently distributed computing has emerged as an inexpensive alternative to dedicated parallel computing. We have developed a general-purpose distributed computing platform ...

  6. High assurance services computing

    CERN Document Server

    2009-01-01

    Covers service-oriented technologies in different domains including high assurance systems. Assists software engineers from industry and government laboratories who develop mission-critical software, and simultaneously provides academia with a practitioner's outlook on the problems of high-assurance software development.

  7. Effects of touch target location on performance and physical demands of computer touchscreen use.

    Science.gov (United States)

    Kang, Hwayeong; Shin, Gwanseob

    2017-05-01

    Touchscreen interfaces for computers are known to cause greater physical stress compared to traditional computer interfaces. The objective of this study was to evaluate how physical demands and task performance of a tap gesture on a computer touchscreen vary between target locations and display positions. Twenty-three healthy participants conducted reach-tap-return trials with touch targets at fifteen locations in three display positions. Mean completion time, touch accuracy and electromyography of the shoulder and neck extensor muscles were compared between the target locations and display positions. The results demonstrated that participants completed the trial 12%-27% faster with 13%-39% less muscle activity when interacting with targets in the lower area of the display compared to when tapping upper targets (p …). These results indicate that touch target location affects both task performance and the physical demands of computer touchscreen interface use.

  8. ICT Solutions for Highly-Customized Water Demand Management Strategies

    Science.gov (United States)

    Giuliani, M.; Cominola, A.; Castelletti, A.; Fraternali, P.; Guardiola, J.; Barba, J.; Pulido-Velazquez, M.; Rizzoli, A. E.

    2016-12-01

    The recent deployment of smart metering networks is opening new opportunities for advancing the design of residential water demand management strategies (WDMS) relying on improved understanding of water consumers' behaviors. Recent applications showed that retrieving information on users' consumption behaviors, along with their explanatory and/or causal factors, is key to spotting potential areas where water-saving efforts should be targeted, and to designing user-tailored WDMS. In this study, we explore the potential of ICT-based solutions in supporting the design and implementation of highly customized WDMS. On one side, the collection of consumption data at high spatial and temporal resolutions requires big data analytics and machine learning techniques to extract typical consumption features from the metered population of water users. On the other side, ICT solutions and gamification can be used as effective means for facilitating both users' engagement and the collection of socio-psychographic users' information. The latter allows interpreting and improving the extracted profiles, ultimately supporting the customization of WDMS, such as awareness campaigns or personalized recommendations. Our approach is implemented in the SmartH2O platform and demonstrated in a pilot application in Valencia, Spain. Results show how the analysis of the smart metered consumption data, combined with the information retrieved from an ICT gamified web user portal, successfully identifies the typical consumption profiles of the metered users and supports the design of alternative WDMS targeting the different users' profiles.

  9. Musculoskeletal demands of progressions for the longswing on high bar.

    Science.gov (United States)

    Irwin, Gareth; Kerwin, David G

    2007-09-01

    Kinetic analyses of the chalked bar longswing on high bar and its associated progressions were used to explain musculoskeletal contributions during the performance of these skills. Data on four international male gymnasts performing three series of chalked bar longswings and eight progressions were recorded. Customized body segment inertia parameters, two-dimensional kinematics (50 Hz), and bar forces (1000 Hz) were used as input to inverse dynamic modelling. The analysis focused on the relative contributions of the knees, hips, and shoulders with root mean squared differences between the chalked bar longswing and the progressions being used to rank the progressions. Seventy per cent of the total work occurred between 200 degrees and 240 degrees of angular rotation in the longswing, 67% of which was contributed by the shoulders. The shoulders were also dominant in all progressions, with the largest such contribution occurring in the looped bar longswing with "no action". The least similar progression was the looped bar pendulum swing, while the most similar was the chalked bar bent knee longswing. This study provides a useful means for ranking progressions based on their kinetic similarity to the chalked bar longswing and builds on earlier research in identifying that progressions can be classified into those similar in physical demand (kinetics) and those similar in geometry (kinematics).

  10. Virtual slides: high-quality demand, physical limitations, and affordability.

    Science.gov (United States)

    Glatz-Krieger, Katharina; Glatz, Dieter; Mihatsch, Michael J

    2003-10-01

    Virtual slides (VSs) have been around since the beginning of telepathology. As recently as a couple of years ago, only single small images could be acquired, and their distribution was limited to e-mail at best. Today, whole slides can be acquired, covering an area up to 100,000 times larger than that possible only a few years ago. Moreover, advanced Internet and world-wide web technologies enable delivery of those images to a broad audience. Despite considerable advances in technology, few good examples of VSs for public use can be found on the web. One of the reasons for this is a lack of sophisticated and integrated commercial solutions covering the needs from acquisition to delivery at reasonable cost. This article describes physical and technical limitations of the VS technology to clarify the demands on a VS acquisition system. A new type of web-based VS viewer (vMic; http://alf3.urz.unibas.ch/vmic/) open to public use is introduced, allowing anyone to set up a VS system with high usability at low cost.

  11. Factors Affecting Computer Anxiety in High School Computer Science Students.

    Science.gov (United States)

    Hayek, Linda M.; Stephens, Larry

    1989-01-01

    Examines factors related to computer anxiety measured by the Computer Anxiety Index (CAIN). Achievement in two programing courses was inversely related to computer anxiety. Students who had a home computer and had computer experience before high school had lower computer anxiety than those who had not. Lists 14 references. (YP)

  12. Soft Computing Based Procurement Planning of Time-variable Demand in Manufacturing Systems

    Institute of Scientific and Technical Information of China (English)

    Kai Leung Yung; Wai Hung Ip; Ding-Wei Wang

    2007-01-01

    Procurement planning with discrete time-varying demand is an important problem in Enterprise Resource Planning (ERP). It can be described using the non-analytic mathematical programming model proposed in this paper. To solve the model we propose a fuzzy decision embedded genetic algorithm. The algorithm adopts an order strategy selection to simplify the original real-valued optimization problem into binary ones. Then, a fuzzy decision quantification method is used to quantify experience from planning experts, so that decision rules can easily be embedded in the computation of genetic operations. This approach is applied to a purchase planning problem in a practical machine tool works, where satisfactory results have been achieved.
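
    The abstract gives only the outline of the algorithm. The sketch below shows a plain genetic algorithm over a binary "order in this period or not" chromosome for a time-varying demand profile, which is one simple reading of the binary simplification described; the fuzzy quantification of expert rules is not reproduced, and the demand series, cost figures and GA parameters are invented for the example.

```python
# Toy GA for procurement planning: a binary chromosome marks the periods in
# which an order is placed; each order covers demand exactly until the next
# planned order, and fitness is ordering cost plus inventory holding cost.

import random

demand = [40, 0, 100, 0, 90, 10, 60, 30]   # time-varying demand per period
ORDER_COST, HOLD_COST = 120.0, 1.0         # per order / per unit per period

def cost(chromosome):
    if chromosome[0] == 0:                 # must order before the first demand
        return float("inf")
    total, stock = 0.0, 0
    orders = [t for t, bit in enumerate(chromosome) if bit]
    for t, d in enumerate(demand):
        if chromosome[t]:                  # order exactly what is needed until
            nxt = next((o for o in orders if o > t), len(demand))
            stock += sum(demand[t:nxt])    # ...the next planned order
            total += ORDER_COST
        stock -= d
        total += HOLD_COST * stock         # holding cost on end-of-period stock
    return total

def evolve(generations=200, pop_size=40, rng=random.Random(1)):
    pop = [[rng.randint(0, 1) for _ in demand] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]     # keep the cheaper half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(demand))
            child = a[:cut] + b[cut:]      # one-point crossover
            if rng.random() < 0.2:         # occasional bit-flip mutation
                i = rng.randrange(len(demand))
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    best = min(pop, key=cost)
    return best, cost(best)

plan, best_cost = evolve()
print("order in periods:", [t for t, bit in enumerate(plan) if bit],
      "total cost:", best_cost)
```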

  13. High resolution heat atlases for demand and supply mapping

    Directory of Open Access Journals (Sweden)

    Bernd Möller

    2014-02-01

    Full Text Available Significant reductions of heat demand, low-carbon and renewable energy sources, and district heating are key elements in 100% renewable energy systems. Appraisal of district heating along with energy efficient buildings and individual heat supply requires a geographical representation of heat demand, energy efficiency and energy supply. The present paper describes a Heat Atlas built around a spatial database using geographical information systems (GIS). The present atlas allows for per-building calculations of potentials and costs of energy savings, connectivity to existing district heat, and current heat supply and demand. For the entire building mass a conclusive link is established between the built environment and its heat supply. The expansion of district heating; the interconnection of distributed district heating systems; or the question whether to invest in ultra-efficient buildings with individual supply, or in collective heating using renewable energy for heating the current building stock, can be based on improved data.

  14. High resolution heat atlases for demand and supply mapping

    DEFF Research Database (Denmark)

    Möller, Bernd; Nielsen, Steffen

    2014-01-01

    Significant reductions of heat demand, low-carbon and renewable energy sources, and district heating are key elements in 100% renewable energy systems. Appraisal of district heating along with energy efficient buildings and individual heat supply requires a geographical representation of heat dem...

  15. An Interactive Computer Tool for Teaching About Desalination and Managing Water Demand in the US

    Science.gov (United States)

    Ziolkowska, J. R.; Reyes, R.

    2016-12-01

    This paper presents an interactive tool to geospatially and temporally analyze desalination developments and trends in the US in the time span 1950-2013, its current contribution to satisfying water demands and its future potentials. The computer tool is open access and can be used by any user with an Internet connection, thus facilitating interactive learning about water resources. The tool can also be used by stakeholders and policy makers for decision-making support and for designing sustainable water management strategies. Desalination technology has been acknowledged as a solution for sustainable water demand management stemming from many sectors, including municipalities, industry, agriculture, power generation, and other users. Desalination has been applied successfully in the US and many countries around the world since the 1950s. As of 2013, around 1,336 desalination plants were operating in the US alone, with a daily production capacity of 2 BGD (billion gallons per day) (GWI, 2013). Despite a steady increase in the number of new desalination plants and growing production capacity, in many regions, the costs of desalination are still prohibitive. At the same time, the technology offers a tremendous potential for `enormous supply expansion that exceeds all likely demands' (Chowdhury et al., 2013). The model and tool are based on data from Global Water Intelligence (GWI, 2013). The analysis shows that more than 90% of all the plants in the US are small-scale plants with the capacity below 4.31 MGD. Most of the plants (and especially larger plants) are located on the US East Coast, as well as in California, Texas, Oklahoma, and Florida. The models and the tool provide information about economic feasibility of potential new desalination plants based on the access to feed water, energy sources, water demand, and experiences of other plants in that region.

  16. High Speed Mobility Through On-Demand Aviation

    Science.gov (United States)

    Moore, Mark D.; Goodrich, Ken; Viken, Jeff; Smith, Jeremy; Fredericks, Bill; Trani, Toni; Barraclough, Jonathan; German, Brian; Patterson, Michael

    2013-01-01

    automobiles. • Community Noise: Hub and smaller GA airports are facing increasing noise restrictions, and while commercial airliners have dramatically decreased their community noise footprint over the past 30 years, GA aircraft noise has essentially remained the same, and moreover, is located in closer proximity to neighborhoods and businesses. • Operating Costs: GA operating costs have risen dramatically due to average fuel costs of over $6 per gallon, which has constrained the market over the past decade and resulted in more than 50% lower sales and 35% fewer yearly operations. Infusion of autonomy and electric propulsion technologies can accomplish not only a transformation of the GA market, but also provide a technology enablement bridge for both larger aircraft and the emerging civil Unmanned Aerial Systems (UAS) markets. The NASA Advanced General Aviation Transport Experiments (AGATE) project successfully used a similar approach to enable the introduction of primary composite structures and flat panel displays in the 1990s, establishing both the technology and certification standardization to permit quick adoption through partnerships with industry, academia, and the Federal Aviation Administration (FAA). Regional and airliner markets are experiencing constant pressure to achieve decreasing levels of community emissions and noise, while lowering operating costs and improving safety. But to what degree can these new technology frontiers impact aircraft safety, the environment, operations, cost, and performance? Are the benefits transformational enough to fundamentally alter aircraft competitiveness and productivity to permit much greater aviation use for high speed and On-Demand Mobility (ODM)? These questions were asked in a Zip aviation system study named after the Zip Car, an emerging car-sharing business model. Zip Aviation investigates the potential to enable new emergent markets for aviation that offer "more flexibility than the existing transportation solutions

  17. Computer-Based Attention-Demanding Testing Unveils Severe Neglect in Apparently Intact Patients

    Science.gov (United States)

    Bonato, M.; Priftis, K.; Umiltà, C.; Zorzi, M.

    2013-01-01

    We tested a group of ten post-acute right-hemisphere damaged patients. Patients had no neglect according to paper-and-pencil cancellation tasks. They were administered computer-based single- and dual-tasks, requiring them to orally name the position of appearance (e.g. left vs. right) of briefly-presented lateralized targets. Patients omitted a substantial number of contralesional targets (≈ 40%) under the single-task condition. When required to perform a concurrent task which recruited additional attentional resources (dual-tasks), patients’ awareness of the contralesional hemispace was severely affected, with less than one third of contralesional targets detected (≈ 70% of omissions). In contrast, performance for ipsilesional (right-sided) targets was close to ceiling, showing that the deficit unveiled by computer-based testing selectively affected the contralesional hemispace. We conclude that computer-based, attention-demanding tasks are strikingly more sensitive than cancellation tasks in detecting neglect, because they are relatively immune to compensatory strategies that are often deployed by post-acute patients. PMID:22713418

  18. Electricity demand profile with high penetration of heat pumps in Nordic area

    DEFF Research Database (Denmark)

    Liu, Zhaoxi; Wu, Qiuwei; Nielsen, Arne Hejde

    2013-01-01

    This paper presents the heat pump (HP) demand profile with high HP penetration in the Nordic area, in order to achieve carbon neutrality of the power system. The calculation method in the European Standard EN 14825 was used to estimate the HP electricity demand profile. The study results show there will be high power demand from HPs and that the selection of supplemental heating for heat pumps has a big impact on the peak electrical power load of heating. The study in this paper gives an estimate of the scale of the electricity demand with high penetration of heat pumps in the Nordic area.
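
    The dependence of the HP electricity profile on supplemental heating, which the study highlights, can be illustrated with a toy calculation: electrical demand is the heat load divided by a temperature-dependent COP, plus any load above the heat pump's capacity served by electric resistance heating at COP 1. The COP curve, capacity and heat-load model below are illustrative assumptions, not the EN 14825 bin tables used in the study.

```python
# Toy heat-pump electricity demand: HP covers load up to its capacity at a
# temperature-dependent COP; the remainder goes to electric resistance heating.

def hp_electric_demand_kw(heat_load_kw, outdoor_temp_c, hp_capacity_kw=6.0):
    cop = max(1.8, 3.5 + 0.08 * outdoor_temp_c)    # crude linear COP model
    hp_heat = min(heat_load_kw, hp_capacity_kw)
    supplemental = heat_load_kw - hp_heat          # electric resistance, COP 1
    return hp_heat / cop + supplemental

# Simple outdoor-temperature-driven heat load: 0.4 kW per degree below 17 C.
for temp in (10, 0, -10, -20):
    load = max(0.0, 0.4 * (17 - temp))
    print(f"{temp:4d} C: heat load {load:4.1f} kW -> electricity "
          f"{hp_electric_demand_kw(load, temp):5.1f} kW")
```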

  19. The effect of preferred music on mood and performance in a high-cognitive demand occupation.

    Science.gov (United States)

    Lesiuk, Teresa

    2010-01-01

    Mild positive affect has been shown in the psychological literature to improve cognitive skills of creative problem-solving and systematic thinking. Individual preferred music listening offers opportunity for improved positive affect. The purpose of this study was to examine the effect of preferred music listening on state-mood and cognitive performance in a high-cognitive demand occupation. Twenty-four professional computer information systems developers (CISD) from a North American IT company participated in a 3-week study with a music/no music/music weekly design. During the music weeks, participants listened to their preferred music "when they wanted, as they wanted." Self-reports of State Positive Affect, State Negative Affect, and Cognitive Performance were measured throughout the 3 weeks. Results indicate a statistically significant improvement in both state-mood and cognitive performance scores. "High-cognitive demand" is a relative term given that challenges presented to individuals may occur on a cognitive continuum from need for focus and selective attention to systematic analysis and creative problem-solving. The findings and recommendations have important implications for music therapists in their knowledge of the effect of music on emotion and cognition, and, as well, have important implications for music therapy consultation to organizations.

  20. High-demand jobs: age-related diversity in work ability?

    Science.gov (United States)

    Sluiter, Judith K

    2006-07-01

    High-demand jobs include 'specific' job demands that are not preventable with state of the art ergonomics knowledge and may overburden the bodily capacities, safety or health of workers. An interesting question is whether the age of the worker is an important factor in explanations of diversity in work ability in the context of high-demand jobs. In this paper, the work ability of ageing workers is addressed according to aspects of diversity in specific job demands and the research methods that are needed to shed light upon the relevant associated questions. From the international literature, a body of evidence was elicited concerning rates of chronological ageing in distinct bodily systems and functions. Intra-age-cohort differences in capacities and work ability, however, require (not yet existing) valid estimates of functional age or biological age indices for the specific populations of workers in high-demand jobs. Many studies have drawn on the highly demanding work of fire-fighters, ambulance workers, police officers, medical specialists, pilots/astronauts and submarine officers. Specific job demands in these jobs can be physical, mental or psychosocial in origin but may cause combined task-level loadings. Therefore, the assessment of single demands probably will not reveal enough relevant information about work ability in high-demand jobs and there will be a call for more integrated measures. Existing studies have used a variety of methodologies to address parts of the issue: task analyses for quantifying physical work demands, observations of psychological and physiological parameters, measures of psychosocial work demands and health complaints. Specific details about the work ability of ageing workers in high-demand jobs are scarce. In general, specific demands are more likely to overtax the capacities of older workers than those of younger workers in high-demand jobs, implying greater repercussions for health, although these effects also vary considerably

  1. More customers embrace Dell standards-based computing for even the most demanding applications-Growing demand among HPCC customers for Dell in Europe

    CERN Multimedia

    2003-01-01

    Dell Computers has signed agreements with several high-profile customers in Europe to provide high performance computing cluster (HPCC) solutions. One customer is a consortium of 4 universities involved in research at the Collider Detector Facility at Fermilab (1 page).

  2. Operational characterisation of requirements and early validation environment for high demanding space systems

    Science.gov (United States)

    Barro, E.; Delbufalo, A.; Rossi, F.

    1993-01-01

    The definition of some modern highly demanding space systems requires a different approach to system definition and design from that adopted for traditional missions. System functionality is strongly coupled to the operational analysis, aimed at characterizing the dynamic interactions of the flight element with its surrounding environment and its ground control segment. Unambiguous functional, operational and performance requirements are to be defined for the system, thus also improving the successive development stages. This paper proposes a Petri Net based methodology, with two related prototype applications (to ARISTOTELES orbit control and to Hermes telemetry generation), for the operational analysis of space systems through the dynamic modeling of their functions. It also proposes a related computer-aided environment (ISIDE) able to make the dynamic model work, thus enabling an early validation of the system functional representation, and to provide a structured system requirements database: the shared knowledge base interconnecting static and dynamic applications, fully traceable with the models and interfaceable with the external world.

  3. Functional Outcomes in High-function-demand patients after total knee arthroplasty.

    Science.gov (United States)

    Lozano Calderón, Santiago A; Shen, Jianhua; Doumato, Diana F; Zelicof, Steven

    2012-05-01

    Total knee arthroplasty is a safe last-resort treatment for osteoarthritis that has excellent results in low-function-demand elderly patients. Current implants offer the same results in high-function-demand patients. However, supportive data do not exist. One-year Krackow Activity Scores (KAS) of 552 patients from 2 prospective studies were used to retrospectively determine low- and high-function-demand populations. Low function demand was defined as a KAS between 1 and 9 points, and high function demand was defined as a KAS between 10 and 18 points. Patients were assessed preoperatively and at 6 weeks, 3 months, and 1 and 2 years postoperatively per the Knee Society Score-function domain, KAS, SF-36, range of motion, and pain. Comparability between groups was tested for demographics and comorbidities. Both groups showed significant improvement in function, range of motion, and pain 2 years postoperatively. High-function-demand patients had comparable improvement in function compared with low-function-demand patients. Excellent function can be achieved in high-function-demand patients.

  4. Staying Well and Engaged When Demands Are High: The Role of Psychological Detachment

    Science.gov (United States)

    Sonnentag, Sabine; Binnewies, Carmen; Mojza, Eva J.

    2010-01-01

    The authors of this study examined the relation between job demands and psychological detachment from work during off-job time (i.e., mentally switching off) with psychological well-being and work engagement. They hypothesized that high job demands and low levels of psychological detachment predict poor well-being and low work engagement. They…

  5. Career Technical Education: Keeping Adult Learners Competitive for High-Demand Jobs

    Science.gov (United States)

    National Association of State Directors of Career Technical Education Consortium, 2011

    2011-01-01

    In today's turbulent economy, how can adult workers best position themselves to secure jobs in high-demand fields where they are more likely to remain competitive and earn more? Further, how can employers up-skill current employees so that they meet increasingly complex job demands? Research indicates that Career Technical Education (CTE) aligned…

  6. On the Demand for High-Beta Stocks

    DEFF Research Database (Denmark)

    Christoffersen, Susan E. K.; Simutin, Mikhail

    2017-01-01

    Prior studies have documented that pension plan sponsors often monitor a fund’s performance relative to a benchmark. We use a first-difference approach to show that in an effort to beat benchmarks, fund managers controlling large pension assets tend to increase their exposure to high-beta stocks......, while aiming to maintain tracking errors around the benchmark. The findings support theoretical conjectures that benchmarking can lead managers to tilt their portfolio toward high-beta stocks and away from low-beta stocks, which can reinforce observed pricing anomalies....

  7. High Energy Physics Experiments In Grid Computing Networks

    Directory of Open Access Journals (Sweden)

    Andrzej Olszewski

    2008-01-01

    Full Text Available The demand for computing resources used for detector simulations and data analysis in High Energy Physics (HEP) experiments is constantly increasing due to the development of studies of rare physics processes in particle interactions. The latest generation of experiments at the newly built LHC accelerator at CERN in Geneva is planning to use computing networks for their data processing needs. A Worldwide LHC Computing Grid (WLCG) organization has been created to develop a Grid with properties matching the needs of these experiments. In this paper we present the use of Grid computing by HEP experiments and describe activities at the participating computing centers with the case of the Academic Computing Center, ACK Cyfronet AGH, Kraków, Poland.

  8. GPU-based high-performance computing for radiation therapy.

    Science.gov (United States)

    Jia, Xun; Ziegenhein, Peter; Jiang, Steve B

    2014-02-21

    Recent developments in radiotherapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid developments. A tremendous amount of study has been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we will first give a brief introduction to the GPU hardware structure and programming model. We will then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms will also be presented.

  9. Effect of computer mouse gain and visual demand on mouse clicking performance and muscle activation in a young and elderly group of experienced computer users

    DEFF Research Database (Denmark)

    Sandfeld, Jesper; Jensen, Bente R.

    2005-01-01

    The present study evaluated the specific effects of motor demand and visual demands on the ability to control motor output in terms of performance and muscle activation. Young and elderly subjects performed multidirectional pointing tasks with the computer mouse. Three levels of mouse gain...... was only to a minor degree influenced by mouse gain (and target sizes) indicating that stability of the forearm/hand is of significance during computer mouse control. The study has implications for ergonomists, pointing device manufacturers and software developers....

  10. Reach a New Threshold of Freedom and Control with Dell's Flexible Computing Solution: On-Demand Desktop Streaming

    Science.gov (United States)

    Technology & Learning, 2008

    2008-01-01

    When it comes to IT, there has always been an important link between data center control and client flexibility. As computing power increases, so do the potentially crippling threats to security, productivity and financial stability. This article talks about Dell's On-Demand Desktop Streaming solution which is designed to centralize complete…

  11. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community

    Directory of Open Access Journals (Sweden)

    Krampis Konstantinos

    2012-03-01

    Full Text Available Abstract Background A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Results Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance are also publicly available for download and use by researchers with access to private clouds. Conclusions Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds
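
    The provisioning step the record describes, starting an on-demand instance from a published machine image and releasing it afterwards, looks roughly like the sketch below when done programmatically against EC2. It assumes the boto3 library and configured AWS credentials; the AMI ID, instance type and key pair name are placeholders rather than the project's actual image.

```python
# Minimal on-demand provisioning sketch against Amazon EC2 using boto3.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-00000000000000000",  # placeholder: substitute the desired VM image
    InstanceType="m5.large",          # placeholder size for an analysis node
    KeyName="my-keypair",             # placeholder SSH key pair
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print("launched", instance_id)

# ...run the bioinformatics workload over SSH, then release the resources:
ec2.terminate_instances(InstanceIds=[instance_id])
```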

  12. GLIF – striving towards a high-performance on-demand network

    CERN Document Server

    Kristina Gunne

    2010-01-01

    If you were passing through the Mezzanine in the Main Building a couple of weeks ago, you probably noticed the large tiled panel display showing an ultra-high resolution visualization model of dark matter, developed by Cosmogrid. The display was one of the highlights of the 10th Annual Global Lambda Grid Workshop demo session, together with the first ever transfer of over 35 Gbit/second from one PC to another between the SARA Computing Centre in Amsterdam and CERN.   GLIF display. The transfer of such large amounts of data at this speed has been made possible thanks to the GLIF community's vision of a new computing paradigm, in which the central architectural element is an end-to-end path built on optical network wavelengths (so called lambdas). You may think of this as an on-demand private highway for data transfer: by using it you avoid the normal internet exchange points and “traffic jams”. GLIF is a virtual international organization managed as a cooperative activity, wi...

  14. High Energy Computed Tomographic Inspection of Munitions

    Science.gov (United States)

    2016-11-01

    Technical Report AREIS-TR-16006 (AD-E403 815), "High Energy Computed Tomographic Inspection of Munitions," final report, November 2016. High-energy computed tomography (CT) enables inspections of munitions that could not otherwise be accomplished by other nondestructive testing methods. Subject terms: radiography, high energy, computed tomography (CT).

  15. The Impact of High Speed Machining on Computing and Automation

    Institute of Scientific and Technical Information of China (English)

    KKB Hon; BT Hang Tuah Baharudin

    2006-01-01

    Machine tool technologies, especially Computer Numerical Control (CNC) High Speed Machining (HSM), have emerged as effective mechanisms for Rapid Tooling and Manufacturing applications. These new technologies are attractive for competitive manufacturing because of their technical advantages, i.e. a significant reduction in lead-time, high product accuracy, and good surface finish. However, HSM not only stimulates advancements in cutting tools and materials, it also demands increasingly sophisticated CAD/CAM software, and powerful CNC controllers that require more support technologies. This paper explores the computational requirement and impact of HSM on CNC controllers, wear detection, look-ahead programming, simulation, and tool management.

  16. A Primer on High-Throughput Computing for Genomic Selection

    Directory of Open Access Journals (Sweden)

    Xiao-Lin eWu

    2011-02-01

    Full Text Available High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl and R, are also very useful to devise pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on the central processors, performing general purpose computation on a graphics processing unit (GPU) provides a new-generation approach to massive parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin – Madison, which can be leveraged for genomic selection, in terms of central processing unit (CPU) capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general purpose computing environments will further expand our capability to meet increasing computing demands posed by unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of
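
    As a toy illustration of the batch-processing idea described above, the sketch below (Python; the trait list, file names and the fit_and_predict routine are placeholders, not the paper's pipeline) runs independent per-trait evaluations side by side on a multi-core node. On a real cluster the same structure would be expressed as independent batch jobs submitted to a scheduler such as HTCondor.

        # Minimal sketch of throughput-oriented batch processing for genomic
        # prediction: each trait is evaluated independently, so evaluations can
        # run concurrently instead of sequentially.
        from concurrent.futures import ProcessPoolExecutor

        TRAITS = ["milk_yield", "fertility", "longevity"]  # hypothetical trait list

        def fit_and_predict(trait):
            # Placeholder for the real work: train a marker-based model on the
            # reference data for this trait and score the selection candidates.
            # Here we only return the name of the output file we would produce.
            return trait, f"predictions_{trait}.csv"

        if __name__ == "__main__":
            with ProcessPoolExecutor() as pool:
                for trait, outfile in pool.map(fit_and_predict, TRAITS):
                    print(f"{trait}: would write {outfile}")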

  17. Computing support for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Avery, P.; Yelton, J. [Univ. of Florida, Gainesville, FL (United States)

    1996-12-01

    This computing proposal (Task S) is submitted separately but in support of the High Energy Experiment (CLEO, Fermilab, CMS) and Theory tasks. The authors have built a very strong computing base at Florida over the past 8 years. In fact, computing has been one of the main contributions to their experimental collaborations, involving not just computing capacity for running Monte Carlos and data reduction, but participation in many computing initiatives, industrial partnerships, computing committees and collaborations. These facts justify the submission of a separate computing proposal.

  18. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware–in the form of Field Programmable Gate Arrays (FPGAs)–in High Performance Computing (HPC) applications. It presents the latest developments in this field from applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community.  The book includes:  Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g., computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation.     Seven architecture chapters which...

  19. High-level language computer architecture

    CERN Document Server

    Chu, Yaohan

    1975-01-01

    High-Level Language Computer Architecture offers a tutorial on high-level language computer architecture, including von Neumann architecture and syntax-oriented architecture as well as direct and indirect execution architecture. Design concepts of Japanese-language data processing systems are discussed, along with the architecture of stack machines and the SYMBOL computer system. The conceptual design of a direct high-level language processor is also described.Comprised of seven chapters, this book first presents a classification of high-level language computer architecture according to the pr

  20. High-Productivity Computing in Computational Physics Education

    Science.gov (United States)

    Tel-Zur, Guy

    2011-03-01

    We describe the development of a new course in Computational Physics at the Ben-Gurion University. This elective course for 3rd year undergraduates and MSc. students is taught during one semester. Computational Physics is by now well accepted as the Third Pillar of Science. This paper's claim is that modern Computational Physics education should also deal with High-Productivity Computing. The traditional approach of teaching Computational Physics emphasizes ``Correctness'' and then ``Accuracy''; we also add ``Performance.'' Along with topics in Mathematical Methods and case studies in Physics, the course devotes a significant amount of time to ``Mini-Courses'' in topics such as: High-Throughput Computing - Condor, Parallel Programming - MPI and OpenMP, How to build a Beowulf, Visualization, and Grid and Cloud Computing. The course intends to teach neither new physics nor new mathematics; rather, it is focused on an integrated approach for solving problems, starting from the physics problem, the corresponding mathematical solution, the numerical scheme, writing an efficient computer code, and finally analysis and visualization.

  1. High School Computer Science Education Paves the Way for Higher Education: The Israeli Case

    Science.gov (United States)

    Armoni, Michal; Gal-Ezer, Judith

    2014-01-01

    The gap between enrollments in higher education computing programs and the high-tech industry's demands is widely reported, and is especially prominent for women. Increasing the availability of computer science education in high school is one of the strategies suggested in order to address this gap. We look at the connection between exposure to…

  2. High School Computer Science Education Paves the Way for Higher Education: The Israeli Case

    Science.gov (United States)

    Armoni, Michal; Gal-Ezer, Judith

    2014-01-01

    The gap between enrollments in higher education computing programs and the high-tech industry's demands is widely reported, and is especially prominent for women. Increasing the availability of computer science education in high school is one of the strategies suggested in order to address this gap. We look at the connection between exposure…

  3. High School Computer Science Education Paves the Way for Higher Education: The Israeli Case

    Science.gov (United States)

    Armoni, Michal; Gal-Ezer, Judith

    2014-01-01

    The gap between enrollments in higher education computing programs and the high-tech industry's demands is widely reported, and is especially prominent for women. Increasing the availability of computer science education in high school is one of the strategies suggested in order to address this gap. We look at the connection between exposure to…

  4. High-performance scientific computing

    CERN Document Server

    Berry, Michael W; Gallopoulos, Efstratios

    2012-01-01

    This book presents the state of the art in parallel numerical algorithms, applications, architectures, and system software. The book examines various solutions for issues of concurrency, scale, energy efficiency, and programmability, which are discussed in the context of a diverse range of applications. Features: includes contributions from an international selection of world-class authorities; examines parallel algorithm-architecture interaction through issues of computational capacity-based codesign and automatic restructuring of programs using compilation techniques; reviews emerging applic

  5. Optimum air-demand ratio for maximum aeration efficiency in high-head gated circular conduits.

    Science.gov (United States)

    Ozkan, Fahri; Tuna, M Cihat; Baylar, Ahmet; Ozturk, Mualla

    2014-01-01

    Oxygen is an important component of water quality and its ability to sustain life. Water aeration is the process of introducing air into a body of water to increase its oxygen saturation. Water aeration can be accomplished in a variety of ways, for instance, closed-conduit aeration. High-speed flow in a closed conduit involves air-water mixture flow. The air flow results from the subatmospheric pressure downstream of the gate. The air entrained by the high-speed flow is supplied by the air vent. The air entrained into the flow in the form of a large number of bubbles accelerates oxygen transfer and hence also increases aeration efficiency. In the present work, the optimum air-demand ratio for maximum aeration efficiency in high-head gated circular conduits was studied experimentally. Results showed that aeration efficiency increased with the air-demand ratio to a certain point and then aeration efficiency did not change with a further increase of the air-demand ratio. Thus, there was an optimum value for the air-demand ratio, depending on the Froude number, which provides maximum aeration efficiency. Furthermore, a design formula for aeration efficiency was presented relating aeration efficiency to the air-demand ratio and Froude number.

  6. Introduction to High Performance Scientific Computing

    OpenAIRE

    2016-01-01

    The field of high performance scientific computing lies at the crossroads of a number of disciplines and skill sets, and correspondingly, being successful at using high performance computing in science requires at least elementary knowledge of, and skills in, all these areas. Computations stem from an application context, so some acquaintance with physics and engineering sciences is desirable. Then, problems in these application areas are typically translated into linear algebraic, ...

  7. China's High Performance Computer Standard Commission Established

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    China's High Performance Computer Standard Commission was established on March 28, 2007, under the guidance of the Science and Technology Bureau of the Ministry of Information Industry. It will prepare relevant professional standards for high performance computers in order to break the monopoly held in this field by foreign manufacturers and vendors.

  8. High visual demand following theta burst stimulation modulates the effect on visual cortex excitability.

    Science.gov (United States)

    Brückner, Sabrina; Kammer, Thomas

    2015-01-01

    Modulatory effects of repetitive transcranial magnetic stimulation (TMS) depend on the activity of the stimulated cortical area before, during, and even after application. In the present study, we investigated the effects of theta burst stimulation (TBS) on visual cortex excitability using phosphene thresholds (PTs). In a between-group design, either continuous or intermittent TBS was applied at 100% of individual PT intensity. We varied visual demand following stimulation in the form of high demand (acuity task) or low demand (looking at the wall). No change of PTs was observed directly after TBS. We found increased PTs only if subjects had high visual demand following continuous TBS. With low visual demand following stimulation, no change of PT was observed. Intermittent TBS had no effect on visual cortex excitability at all. Since other studies showed increased PTs following continuous TBS using subthreshold intensities, our results highlight the importance of stimulation intensity when applying TBS to the visual cortex. Furthermore, the state of the neurons in the stimulated cortex area not only before but also following TBS has an important influence on the effects of stimulation, making it necessary to scrupulously control for activity during the whole experimental session in a study.

  9. 77 FR 19076 - High Density Traffic Airports; Notice of Determination Regarding Low Demand Periods at Ronald...

    Science.gov (United States)

    2012-03-30

    … 33 FR 17896 (Dec. 3, 1968). In 1985, the FAA issued part 93 subpart S (the ``Buy/Sell Rule'') [50 FR 52195 (Dec. 20, 1985)]. As … the 0600 hour is not a low demand period [76 FR 58393 (Sept. …)]. Federal Aviation Administration, 14 CFR Part 93, High Density Traffic Airports; Notice of…

  10. Employees facing high job demands: How to keep them fit, satisfied, and intrinsically motivated?

    NARCIS (Netherlands)

    Van Yperen, N.W.; Nagao, DH

    2002-01-01

    The purpose of the present research was to determine why some employees faced with high job demands feel fatigued, dissatisfied, and unmotivated, whereas others feel fatigued but satisfied and intrinsically motivated. It is argued and demonstrated that two job conditions, namely job control and job

  11. Employees facing high job demands: How to keep them fit, satisfied, and intrinsically motivated?

    NARCIS (Netherlands)

    Van Yperen, N.W.; Nagao, DH

    2002-01-01

    The purpose of the present research was to determine why some employees faced with high job demands feel fatigued, dissatisfied, and unmotivated, whereas others feel fatigued but satisfied and intrinsically motivated. It is argued and demonstrated that two job conditions, namely job control and job

  12. In vitro effects on mobile polyethylene insert under highly demanding daily activities: stair climbing

    National Research Council Canada - National Science Library

    Jaber, Sami Abdel; Taddei, Paola; Tozzi, Silvia; Sudanese, Alessandra; Affatato, Saverio

    2015-01-01

    …One set of the same total knee prosthesis (TKP), equal in design and size, was tested on a three-plus-one knee joint simulator for two million cycles using a highly demanding daily load waveform, replicating a stair-climbing movement...

  13. Computer proficiency questionnaire: assessing low and high computer proficient seniors.

    Science.gov (United States)

    Boot, Walter R; Charness, Neil; Czaja, Sara J; Sharit, Joseph; Rogers, Wendy A; Fisk, Arthur D; Mitzner, Tracy; Lee, Chin Chin; Nair, Sankaran

    2015-06-01

    Computers and the Internet have the potential to enrich the lives of seniors and aid in the performance of important tasks required for independent living. A prerequisite for reaping these benefits is having the skills needed to use these systems, which is highly dependent on proper training. One prerequisite for efficient and effective training is being able to gauge current levels of proficiency. We developed a new measure (the Computer Proficiency Questionnaire, or CPQ) to measure computer proficiency in the domains of computer basics, printing, communication, Internet, calendaring software, and multimedia use. Our aim was to develop a measure appropriate for individuals with a wide range of proficiencies from noncomputer users to extremely skilled users. To assess the reliability and validity of the CPQ, a diverse sample of older adults, including 276 older adults with no or minimal computer experience, was recruited and asked to complete the CPQ. The CPQ demonstrated excellent reliability (Cronbach's α = .98), with subscale reliabilities ranging from .86 to .97. Age, computer use, and general technology use all predicted CPQ scores. Factor analysis revealed three main factors of proficiency related to Internet and e-mail use; communication and calendaring; and computer basics. Based on our findings, we also developed a short-form CPQ (CPQ-12) with similar properties but 21 fewer questions. The CPQ and CPQ-12 are useful tools to gauge computer proficiency for training and research purposes, even among low computer proficient older adults.

  14. PRCA:A highly efficient computing architecture

    Institute of Scientific and Technical Information of China (English)

    Luo Xingguo

    2014-01-01

    Applications can only reach 8%–15% utilization on modern computer systems. There are many obstacles to improving system efficiency. The root cause is the conflict between the fixed general computer architecture and the variable requirements of applications. Proactive reconfigurable computing architecture (PRCA) is proposed to improve computing efficiency. PRCA dynamically constructs an efficient computing architecture for a specific application via reconfigurable technology by perceiving the requirements, workload and utilization of computing resources. The proactive decision support system (PDSS), hybrid reconfigurable computing array (HRCA) and reconfigurable interconnect (RIC) are intensively researched as the key technologies. The principles of PRCA have been verified with four applications on a test bed. It is shown that PRCA is feasible and highly efficient.

  15. NASA High-End Computing Program Website

    Science.gov (United States)

    Cohen, Jarrett S.

    2008-01-01

    If you are a NASA-sponsored scientist or engineer, computing time is available to you at the High-End Computing (HEC) Program's NASA Advanced Supercomputing (NAS) Facility and NASA Center for Computational Sciences (NCCS). The Science Mission Directorate will select from requests submitted to the e-Books online system for awards beginning on May 1. Current projects set to expire on April 30 must have a request in e-Books to be considered for renewal.

  16. NASA High-End Computing Program Website

    Science.gov (United States)

    Cohen, Jarrett S.

    2008-01-01

    If you are a NASA-sponsored scientist or engineer, computing time is available to you at the High-End Computing (HEC) Program's NASA Advanced Supercomputing (NAS) Facility and NASA Center for Computational Sciences (NCCS). The Science Mission Directorate will select from requests submitted to the e-Books online system for awards beginning on May 1. Current projects set to expire on April 30 must have a request in e-Books to be considered for renewal.

  17. Matching Behavior as a Tradeoff Between Reward Maximization and Demands on Neural Computation [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Jan Kubanek

    2015-10-01

    Full Text Available When faced with a choice, humans and animals commonly distribute their behavior in proportion to the frequency of payoff of each option. Such behavior is referred to as matching and has been captured by the matching law. However, matching is not a general law of economic choice. Matching in its strict sense seems to be specifically observed in tasks whose properties make matching an optimal or a near-optimal strategy. We engaged monkeys in a foraging task in which matching was not the optimal strategy. Over-matching the proportions of the mean offered reward magnitudes would yield more reward than matching, yet, surprisingly, the animals almost exactly matched them. To gain insight into this phenomenon, we modeled the animals' decision-making using a mechanistic model. The model accounted for the animals' macroscopic and microscopic choice behavior. When the model's three parameters were not constrained to mimic the monkeys' behavior, the model over-matched the reward proportions and in doing so, harvested substantially more reward than the monkeys. This optimized model revealed a marked bottleneck in the monkeys' choice function that compares the value of the two options. The model featured a very steep value comparison function relative to that of the monkeys. The steepness of the value comparison function had a profound effect on the earned reward and on the level of matching. We implemented this value comparison function through responses of simulated biological neurons. We found that due to the presence of neural noise, steepening the value comparison requires an exponential increase in the number of value-coding neurons. Matching may be a compromise between harvesting satisfactory reward and the high demands placed by neural noise on optimal neural computation.
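
    The role that the abstract assigns to the steepness of the value comparison can be illustrated with a minimal sketch (Python; the logistic choice rule and the example reward magnitudes are assumptions for illustration, not the authors' fitted model): a shallow comparison yields choice fractions close to strict matching, while a very steep comparison pushes choices toward the richer option, i.e. over-matching.

        import numpy as np

        def choice_fraction(v1, v2, beta):
            """Probability of choosing option 1 under a logistic value comparison
            with steepness beta (larger beta = steeper comparison)."""
            return 1.0 / (1.0 + np.exp(-beta * (v1 - v2)))

        v1, v2 = 0.7, 0.3            # assumed mean reward magnitudes of the two options
        matching = v1 / (v1 + v2)    # strict matching allocates 70% of choices to option 1

        for beta in (2.0, 5.0, 20.0):
            p1 = choice_fraction(v1, v2, beta)
            print(f"beta={beta:5.1f}  choice fraction={p1:.2f}  (matching target={matching:.2f})")

    With beta = 2 the choice fraction sits near the matching target (~0.69 vs 0.70); with beta = 20 it approaches 1.0, a strongly over-matched allocation.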

  18. High-resolution Behavioral Economic Analysis of Cigarette Demand to Inform Tax Policy

    Science.gov (United States)

    MacKillop, James; Few, Lauren R.; Murphy, James G.; Wier, Lauren M.; Acker, John; Murphy, Cara; Stojek, Monika; Carrigan, Maureen; Chaloupka, Frank

    2012-01-01

    Aims Novel methods in behavioral economics permit the systematic assessment of the relationship between cigarette consumption and price. Toward informing tax policy, the goals of this study were to conduct a high-resolution analysis of cigarette demand in a large sample of adult smokers and to use the data to estimate the effects of tax increases in ten U.S. States. Design In-person descriptive survey assessment. Setting Academic departments at three universities. Participants Adult daily smokers (i.e., 5+ cigarettes/day; 18+ years old; ≥8th grade education); N = 1056. Measurements Estimated cigarette demand, demographics, expired carbon monoxide. Findings The cigarette demand curve exhibited highly variable levels of price sensitivity, especially in the form of ‘left-digit effects’ (i.e., very high price sensitivity as pack prices transitioned from one whole number to the next; e.g., $5.80-$6/pack). A $1 tax increase in the ten states was projected to reduce the economic burden of smoking by an average of $531M (range: $93.6M-$976.5M) and increase gross tax revenue by an average of 162% (range: 114%- 247%). Conclusions Tobacco price sensitivity is nonlinear across the demand curve and in particular for pack-level left-digit price transitions. Tax increases in U.S. states with similar price and tax rates to the sample are projected to result in substantial decreases in smoking-related costs and substantial increases in tax revenues. PMID:22845784
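
    One widely used way to summarize consumption-versus-price data of this kind is the exponential demand equation of Hursh and Silberberg (2008); the sketch below (Python; the data points are made up, and this is not necessarily the exact analysis used in the study) fits that equation to a hypothetical demand curve.

        import numpy as np
        from scipy.optimize import curve_fit

        def exponential_demand(price, q0, alpha, k=2.0):
            """Hursh & Silberberg (2008): log10 Q = log10 Q0 + k*(exp(-alpha*Q0*price) - 1)."""
            return np.log10(q0) + k * (np.exp(-alpha * q0 * price) - 1.0)

        # Hypothetical cigarettes-per-day consumption at increasing pack prices (USD).
        price = np.array([0.5, 1, 2, 4, 6, 8, 10, 15])
        consumption = np.array([20, 19, 17, 12, 8, 5, 3, 1], dtype=float)

        popt, _ = curve_fit(exponential_demand, price, np.log10(consumption), p0=(20.0, 0.01))
        q0_hat, alpha_hat = popt
        print(f"intensity Q0 ~ {q0_hat:.1f} cigarettes/day, sensitivity alpha ~ {alpha_hat:.4f}")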

  19. A Statist Political Economy and High Demand for Education in South Korea

    Directory of Open Access Journals (Sweden)

    Ki Su Kim

    1999-06-01

    Full Text Available In the 1998 academic year, 84 percent of South Korea's high school "leavers" entered a university or college, while almost all children went on to high school. That is to say, South Korea is now moving into a new age of universal higher education. Even so, competition for university entrance remains intense. What is interesting here is South Koreans' unusually high demand for education. In this article, I criticize the existing cultural and socio-economic interpretations of the phenomenon. Instead, I explore a new interpretation by critically referring to the recent political economy debate on South Korea's state-society/market relationship. In my interpretation, the unusually high demand for education is largely due to the powerful South Korean state's losing flexibility in the management of its "developmental" policies. For this, I blame the traditional "personalist ethic" which still prevails as the

  20. Debugging a high performance computing program

    Science.gov (United States)

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  1. A primer on high-throughput computing for genomic selection.

    Science.gov (United States)

    Wu, Xiao-Lin; Beissinger, Timothy M; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J M; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel

    2011-01-01

    High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long, and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful to devise pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on the central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massive parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin-Madison, which can be leveraged for genomic selection, in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet increasing computing demands posed by unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized

  2. Federal High End Computing (HEC) Information Portal

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This portal provides information about opportunities to engage in U.S. Federal government high performance computing activities, including supercomputer use,...

  3. High School Physics and the Affordable Computer.

    Science.gov (United States)

    Harvey, Norman L.

    1978-01-01

    Explains how the computer was used in a high school physics course offering the Project Physics program and an individualized-study PSSC physics program. Evaluates the capabilities and limitations of a $600 microcomputer system. (GA)

  4. Using high frequency consumption data to identify demand response potential for solar energy integration

    Science.gov (United States)

    Jin, L.; Borgeson, S.; Fredman, D.; Hans, L.; Spurlock, A.; Todd, A.

    2015-12-01

    California's renewable portfolio standard (2012) requires the state to get 33% of its electricity from renewable sources by 2020. An increased share of variable renewable sources such as solar and wind in the California electricity system may require more grid flexibility to ensure reliable power services. Such grid flexibility can potentially be provided by changes in end-use electricity consumption in response to grid conditions (demand response). In the solar case, residential consumption in the late afternoon can be used as reserve capacity to balance the drop in solar generation. This study presents our initial attempt to identify, from a behavior perspective, residential demand response potential in relation to solar ramp events using a data-driven approach. Based on hourly residential energy consumption data, we derive representative daily load shapes focusing on discretionary consumption with an innovative clustering analysis technique. We aggregate the representative load shapes into behavior groups in terms of the timing and rhythm of energy use in the context of solar ramp events. Households of different behavior groups that are active during hours with high solar ramp rates are identified for capturing demand response potential. Insights into the nature and predictability of response to demand-response programs are provided.
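
    A minimal sketch of this kind of load-shape clustering (Python with scikit-learn; the array of hourly readings is synthetic and the study's own clustering technique may differ): normalize each household-day to a unit-sum "shape", group the shapes with k-means, and inspect which cluster centers concentrate consumption in the late-afternoon solar ramp hours.

        import numpy as np
        from sklearn.cluster import KMeans

        # Hypothetical matrix: one row per household-day, 24 hourly kWh readings.
        rng = np.random.default_rng(0)
        loads = rng.gamma(shape=2.0, scale=0.5, size=(1000, 24))

        # Normalize each day to its total so clustering sees the shape, not the level.
        shapes = loads / loads.sum(axis=1, keepdims=True)

        km = KMeans(n_clusters=6, n_init=10, random_state=0).fit(shapes)

        # Flag representative shapes concentrated in the late-afternoon ramp window (hours 16-19).
        ramp_hours = slice(16, 20)
        for i, center in enumerate(km.cluster_centers_):
            share = center[ramp_hours].sum()
            print(f"cluster {i}: {np.sum(km.labels_ == i)} days, {share:.0%} of use in ramp hours")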

  5. Computing Air Demand Using the Takagi–Sugeno Model for Dam Outlets

    Directory of Open Access Journals (Sweden)

    Mohammad Zounemat-Kermani

    2013-09-01

    Full Text Available An adaptive neuro-fuzzy inference system (ANFIS) was developed using the subtractive clustering technique to study the air demand in low-level outlet works. The ANFIS model was employed to calculate vent air discharge at different gate openings for an embankment dam. A hybrid learning algorithm obtained by combining back-propagation and least-squares estimation was adopted to identify the linear and non-linear parameters in the ANFIS model. Empirical relationships based on the experimental information obtained from physical models were applied to 108 experimental data points to obtain more reliable evaluations. The feed-forward Levenberg-Marquardt neural network (LMNN) and multiple linear regression (MLR) models were also built using the same data to compare model performances with each other. The results indicated that the fuzzy rule-based model performed better than the LMNN and MLR models in terms of the established simulation performance criteria, namely the root mean square error, the Nash–Sutcliffe efficiency, the correlation coefficient and the bias.
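
    To make the Takagi–Sugeno idea concrete, the sketch below (Python/NumPy) evaluates a tiny first-order TS fuzzy system with Gaussian membership functions; the rule centers, widths and consequent coefficients are invented for illustration and are not the parameters identified by the ANFIS model in the paper.

        import numpy as np

        # Illustrative first-order Takagi-Sugeno rules on a single input (gate opening):
        # IF x is about c_i THEN y_i = a_i * x + b_i, combined by normalized firing strengths.
        centers = np.array([0.2, 0.5, 0.8])      # assumed rule centers (gate opening fraction)
        sigmas  = np.array([0.15, 0.15, 0.15])   # assumed membership widths
        a       = np.array([0.10, 0.60, 1.20])   # assumed consequent slopes
        b       = np.array([0.00, -0.15, -0.50]) # assumed consequent intercepts

        def ts_output(x):
            w = np.exp(-0.5 * ((x - centers) / sigmas) ** 2)   # Gaussian firing strengths
            y_rules = a * x + b                                # linear rule consequents
            return np.sum(w * y_rules) / np.sum(w)             # weighted (defuzzified) output

        for x in (0.25, 0.5, 0.75):
            print(f"gate opening {x:.2f} -> predicted air demand {ts_output(x):.3f}")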

  6. Estimating the Spatial Distribution of Groundwater Demand In the Texas High Plains

    OpenAIRE

    Zhao, Shiliang; Wang, Chenggang; James P. Bordovsky; Sheng, Zhuping; Gastelum, Jesus R.

    2011-01-01

    Developing groundwater management plans requires a good understanding of the interdependence of groundwater hydrology and producer water use behavior. While state-of-the-art groundwater models require water demand data at highly disaggregated levels, the lack of producer water use data has held up the progress to meet that need. This paper proposes an econometric framework that links county-level crop acreage data to well-level hydrologic data to produce heterogeneous patterns of crop choice ...

  7. Estimating the Spatial Distribution of Groundwater Demand In the Texas High Plains

    OpenAIRE

    Zhao, Shiliang; Wang, Chenggang; James P. Bordovsky; Sheng, Zhuping; Gastelum, Jesus R.

    2011-01-01

    Developing groundwater management plans requires a good understanding of the interdependence of groundwater hydrology and producer water use behavior. While state-of-the-art groundwater models require water demand data at highly disaggregated levels, the lack of producer water use data has held up the progress to meet that need. This paper proposes an econometric framework that links county-level crop acreage data to well-level hydrologic data to produce heterogeneous patterns of crop choice ...

  8. Grid computing in high energy physics

    CERN Document Server

    Avery, P

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software r...

  9. Dawning4000A high performance computer

    Institute of Scientific and Technical Information of China (English)

    SUN Ninghui; MENG Dan

    2007-01-01

    Dawning4000A is an AMD Opteron-based Linux cluster with 11.2 Tflops peak performance and 8.06 Tflops Linpack performance. It was developed for the Shanghai Supercomputer Center (SSC) as one of the computing power stations of the China National Grid (CNGrid) project. The Massively Cluster Computer (MCC) architecture is proposed to add value to the industry-standard system. Several grid-enabling components were developed to support the running environment of the CNGrid. It is an achievement for a high performance computer built with a low-cost approach.

  10. High-performance computing for airborne applications

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Heather M [Los Alamos National Laboratory; Manuzzato, Andrea [Los Alamos National Laboratory; Fairbanks, Tom [Los Alamos National Laboratory; Dallmann, Nicholas [Los Alamos National Laboratory; Desgeorges, Rose [Los Alamos National Laboratory

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  11. Personnel and patient scheduling in the high demanded hospital services: a case study in the physiotherapy service.

    Science.gov (United States)

    Ogulata, S Noyan; Koyuncu, Melik; Karaskas, Esra

    2008-06-01

    High demand but limited staff within some services of a hospital requires proper scheduling of staff and patients. In this study, a hierarchical mathematical model is proposed to generate weekly staff schedules. Due to the computational difficulty of this scheduling problem, the entire model is broken down into three manageable hierarchical stages: (1) selection of patients, (2) assignment of patients to the staff, and (3) scheduling of patients throughout a day. The developed models were tested on data collected in the College of Medicine Research Hospital at Cukurova University using the GAMS and MPL optimization packages. From the results of the case study, the presented hierarchical model provided a schedule that maximizes the number of selected patients, balances the workload of the physiotherapists, and minimizes the waiting time of patients on their treatment day.
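
    A hedged sketch of the first stage (patient selection) as a small integer program (Python with the PuLP modeling library; the patient durations and the single capacity constraint are made up, and the paper's actual formulation has further stages and constraints as described above).

        from pulp import LpProblem, LpMaximize, LpVariable, LpBinary, lpSum, value

        # Hypothetical treatment durations (minutes) and total physiotherapist capacity.
        durations = {"p1": 30, "p2": 45, "p3": 20, "p4": 60, "p5": 30}
        capacity = 120

        prob = LpProblem("patient_selection", LpMaximize)
        x = {p: LpVariable(f"select_{p}", cat=LpBinary) for p in durations}

        prob += lpSum(x.values())                                          # maximize patients served
        prob += lpSum(durations[p] * x[p] for p in durations) <= capacity  # capacity limit

        prob.solve()
        selected = [p for p in durations if value(x[p]) == 1]
        print("selected patients:", selected)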

  12. High-performance computing and networking as tools for accurate emission computed tomography reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Passeri, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Universita` di Firenze (Italy); Formiconi, A.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Universita` di Firenze (Italy); De Cristofaro, M.T.E.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Universita` di Firenze (Italy); Pupi, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Universita` di Firenze (Italy); Meldolesi, U. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Universita` di Firenze (Italy)

    1997-04-01

    It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms which are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported on the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64 x 64) slices could be reconstructed from a set of 90 (64 x 64) projections with ten iterations of the conjugate gradients algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols without particular problems for routine use. The achievable speed-up factors allow the assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden for the compensation effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods.
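
    The reconstruction step itself rests on a standard iterative solver; below is a minimal, generic conjugate-gradient sketch (Python/NumPy) applied to the normal equations of a small linear system-matrix model. The matrices here are random stand-ins, not a SPET system model, and the parallel decomposition used on the Cray T3D is not shown.

        import numpy as np

        def conjugate_gradient(A, b, n_iter=10):
            """Solve A x = b for symmetric positive-definite A with n_iter CG iterations."""
            x = np.zeros_like(b)
            r = b - A @ x
            p = r.copy()
            rs_old = r @ r
            for _ in range(n_iter):
                Ap = A @ p
                alpha = rs_old / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                p = r + (rs_new / rs_old) * p
                rs_old = rs_new
            return x

        # Toy stand-in for a system matrix H (projections x image) and measured projections y.
        rng = np.random.default_rng(1)
        H = rng.random((90, 64))
        y = rng.random(90)

        # Least-squares reconstruction via the (regularized) normal equations H^T H x = H^T y.
        x_rec = conjugate_gradient(H.T @ H + 1e-3 * np.eye(64), H.T @ y, n_iter=10)
        print("reconstructed image vector, first 5 values:", np.round(x_rec[:5], 3))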

  13. Demand Forecasting: DLA’S Aviation Supply Chain High Value Products

    Science.gov (United States)

    2015-04-09

    seems to dominate the pattern of product demand from FY10 to FY13. However, as discussed above, the outlier demand value in FY13 could have zapped out...seems to dominate the pattern of product demand, and on the other, an extreme outlier value that could zap out some of the FY14 product demand. We

  14. Career Clusters: Forecasting Demand for High School through College Jobs, 2008-2018. Executive Summary

    Science.gov (United States)

    Carnevale, Anthony P.; Smith, Nicole; Stone, James R., III; Kotamraju, Pradeep; Steuernagel, Bruce; Green, Kimberly A.

    2011-01-01

    Going directly from high school to college is not possible for everyone. Many who go to college will not do so straight out of high school, and many more need to work to pay for college. Good jobs for people without college degrees certainly still exist, although they are on a steady decline as computers and related technology take over routine…

  15. High-performance computing in seismology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  16. Linear algebra on high-performance computers

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, J.J.; Sorensen, D.C.

    1986-01-01

    This paper surveys work recently done at Argonne National Laboratory in an attempt to discover ways to construct numerical software for high-performance computers. The numerical algorithms are taken from several areas of numerical linear algebra. We discuss certain architectural features of advanced-computer architectures that will affect the design of algorithms. The technique of restructuring algorithms in terms of certain modules is reviewed. This technique has proved successful in obtaining a high level of transportability without severe loss of performance on a wide variety of both vector and parallel computers. The module technique is demonstrably effective for dense linear algebra problems. However, in the case of sparse and structured problems it may be difficult to identify general modules that will be as effective. New algorithms have been devised for certain problems in this category. We present examples in three important areas: banded systems, sparse QR factorization, and symmetric eigenvalue problems.

  17. Large Scale Computing and Storage Requirements for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years.

  18. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  19. High-performance computers for unmanned vehicles

    Science.gov (United States)

    Toms, David; Ettinger, Gil J.

    2005-10-01

    The present trend of increasing functionality onboard unmanned vehicles is made possible by rapid advances in high-performance computers (HPCs). An HPC is characterized by very high computational capability (100s of billions of operations per second) contained in lightweight, rugged, low-power packages. HPCs are critical to the processing of sensor data onboard these vehicles. Operations such as radar image formation, target tracking, target recognition, signal intelligence signature collection and analysis, electro-optic image compression, and onboard data exploitation are provided by these machines. The net effect of an HPC is to minimize communication bandwidth requirements and maximize mission flexibility. This paper focuses on new and emerging technologies in the HPC market. Emerging capabilities include new lightweight, low-power computing systems: multi-mission computing (using a common computer to support several sensors); onboard data exploitation; and large image data storage capacities. These new capabilities will enable an entirely new generation of deployed capabilities at reduced cost. New software tools and architectures available to unmanned vehicle developers will enable them to rapidly develop optimum solutions with maximum productivity and return on investment. These new technologies effectively open the trade space for unmanned vehicle designers.

  20. High-Degree Neurons Feed Cortical Computations.

    Directory of Open Access Journals (Sweden)

    Nicholas M Timme

    2016-05-01

    Full Text Available Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This preparation and recording method combination produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. These are the first results to show that the extent to
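
    A minimal sketch of the transfer-entropy quantity mentioned above (Python; history length of one time bin, plug-in probability estimates, binary spike trains generated at random — the study's actual estimator, delays and significance testing are not reproduced).

        import numpy as np
        from collections import Counter

        def transfer_entropy(x, y):
            """Plug-in estimate of TE(X -> Y) in bits with history length 1."""
            n = len(y) - 1
            joint = Counter(zip(y[1:], y[:-1], x[:-1]))    # counts of (y_t+1, y_t, x_t)
            c_yz, c_z, c_zx = Counter(), Counter(), Counter()
            for (y1, y0, x0), c in joint.items():
                c_yz[(y1, y0)] += c
                c_z[y0] += c
                c_zx[(y0, x0)] += c
            te = 0.0
            for (y1, y0, x0), c in joint.items():
                # p(y1|y0,x0) / p(y1|y0) expressed with counts (the n's cancel)
                te += (c / n) * np.log2((c * c_z[y0]) / (c_zx[(y0, x0)] * c_yz[(y1, y0)]))
            return te

        rng = np.random.default_rng(0)
        x = rng.integers(0, 2, 10000)                    # "sender" spike train
        y = np.roll(x, 1) ^ (rng.random(10000) < 0.1)    # noisy, delayed copy of x
        print(f"TE(X->Y) = {transfer_entropy(x, y):.3f} bits")
        print(f"TE(Y->X) = {transfer_entropy(y, x):.3f} bits")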

  1. High performance computing and communications panel report

    Energy Technology Data Exchange (ETDEWEB)

    1992-12-01

    In FY92, a presidential initiative entitled High Performance Computing and Communications (HPCC) was launched, aimed at securing U.S. preeminence in high performance computing and related communication technologies. The stated goal of the initiative is threefold: extend U.S. technological leadership in high performance computing and computer communications; provide wide dissemination and application of the technologies; and spur gains in U.S. productivity and industrial competitiveness, all within the context of the mission needs of federal agencies. Because of the importance of the HPCC program to the national well-being, especially its potential implications for industrial competitiveness, the Assistant to the President for Science and Technology has asked that the President's Council of Advisors on Science and Technology (PCAST) establish a panel to advise PCAST on the strengths and weaknesses of the HPCC program. The report presents a program analysis based on strategy, balance, management, and vision. Both constructive recommendations for program improvement and positive reinforcement of successful program elements are contained within the report.

  2. High-Precision Computation and Mathematical Physics

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.; Borwein, Jonathan M.

    2008-11-03

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, Ising theory, quantum field theory and experimental mathematics. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
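
    As a small illustration of the kind of high-precision facility being surveyed (Python with the mpmath package, one such arbitrary-precision library; this is not the software used by the authors), the snippet below evaluates an expression at 50 significant digits, where 64-bit arithmetic cannot resolve the quantity of interest.

        from mpmath import mp, mpf, exp, pi, sqrt, floor

        mp.dps = 50  # work with 50 significant decimal digits

        # Ramanujan's constant exp(pi*sqrt(163)) is famously close to an integer;
        # double precision cannot resolve how close, but 50-digit arithmetic can.
        x = exp(pi * sqrt(mpf(163)))
        print(x)                    # 262537412640768743.99999999999925007...
        print(1 - (x - floor(x)))   # distance to the next integer, about 7.5e-13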

  3. On-demand high-capacity ride-sharing via dynamic trip-vehicle assignment.

    Science.gov (United States)

    Alonso-Mora, Javier; Samaranayake, Samitha; Wallar, Alex; Frazzoli, Emilio; Rus, Daniela

    2017-01-17

    Ride-sharing services are transforming urban mobility by providing timely and convenient transportation to anybody, anywhere, and anytime. These services present enormous potential for positive societal impacts with respect to pollution, energy consumption, congestion, etc. Current mathematical models, however, do not fully address the potential of ride-sharing. Recently, a large-scale study highlighted some of the benefits of car pooling but was limited to static routes with two riders per vehicle (optimally) or three (with heuristics). We present a more general mathematical model for real-time high-capacity ride-sharing that (i) scales to large numbers of passengers and trips and (ii) dynamically generates optimal routes with respect to online demand and vehicle locations. The algorithm starts from a greedy assignment and improves it through a constrained optimization, quickly returning solutions of good quality and converging to the optimal assignment over time. We quantify experimentally the tradeoff between fleet size, capacity, waiting time, travel delay, and operational costs for low- to medium-capacity vehicles, such as taxis and van shuttles. The algorithm is validated with ∼3 million rides extracted from the New York City taxicab public dataset. Our experimental study considers ride-sharing with rider capacity of up to 10 simultaneous passengers per vehicle. The algorithm applies to fleets of autonomous vehicles and also incorporates rebalancing of idling vehicles to areas of high demand. This framework is general and can be used for many real-time multivehicle, multitask assignment problems.
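
    The greedy starting point of such an assignment can be sketched in a few lines (Python; straight-line distances, unit speed and a single rider per request are simplifying assumptions, and the constrained-optimization improvement and rebalancing steps described in the abstract are omitted).

        import math

        def dist(a, b):
            return math.hypot(a[0] - b[0], a[1] - b[1])

        def greedy_assign(requests, vehicles, max_wait=5.0):
            """Assign each pickup request to the free vehicle with the smallest pickup delay."""
            assignment = {}
            free = dict(vehicles)                      # vehicle id -> current position
            for rid, pickup in requests.items():
                best, best_delay = None, math.inf
                for vid, pos in free.items():
                    delay = dist(pos, pickup)          # travel time at unit speed
                    if delay < best_delay:
                        best, best_delay = vid, delay
                if best is not None and best_delay <= max_wait:
                    assignment[rid] = (best, round(best_delay, 2))
                    free.pop(best)                     # one request per vehicle in this toy version
            return assignment

        vehicles = {"v1": (0.0, 0.0), "v2": (5.0, 5.0), "v3": (9.0, 1.0)}
        requests = {"r1": (1.0, 1.0), "r2": (6.0, 4.0), "r3": (8.0, 8.0)}
        print(greedy_assign(requests, vehicles))   # r3 stays unassigned: no vehicle within max_wait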

  4. On-demand high-capacity ride-sharing via dynamic trip-vehicle assignment

    Science.gov (United States)

    Alonso-Mora, Javier; Samaranayake, Samitha; Wallar, Alex; Frazzoli, Emilio; Rus, Daniela

    2017-01-01

    Ride-sharing services are transforming urban mobility by providing timely and convenient transportation to anybody, anywhere, and anytime. These services present enormous potential for positive societal impacts with respect to pollution, energy consumption, congestion, etc. Current mathematical models, however, do not fully address the potential of ride-sharing. Recently, a large-scale study highlighted some of the benefits of car pooling but was limited to static routes with two riders per vehicle (optimally) or three (with heuristics). We present a more general mathematical model for real-time high-capacity ride-sharing that (i) scales to large numbers of passengers and trips and (ii) dynamically generates optimal routes with respect to online demand and vehicle locations. The algorithm starts from a greedy assignment and improves it through a constrained optimization, quickly returning solutions of good quality and converging to the optimal assignment over time. We quantify experimentally the tradeoff between fleet size, capacity, waiting time, travel delay, and operational costs for low- to medium-capacity vehicles, such as taxis and van shuttles. The algorithm is validated with ∼3 million rides extracted from the New York City taxicab public dataset. Our experimental study considers ride-sharing with rider capacity of up to 10 simultaneous passengers per vehicle. The algorithm applies to fleets of autonomous vehicles and also incorporates rebalancing of idling vehicles to areas of high demand. This framework is general and can be used for many real-time multivehicle, multitask assignment problems. PMID:28049820

  5. Surprise responses in the human brain demonstrate statistical learning under high concurrent cognitive demand

    Science.gov (United States)

    Garrido, Marta Isabel; Teng, Chee Leong James; Taylor, Jeremy Alexander; Rowe, Elise Genevieve; Mattingley, Jason Brett

    2016-06-01

    The ability to learn about regularities in the environment and to make predictions about future events is fundamental for adaptive behaviour. We have previously shown that people can implicitly encode statistical regularities and detect violations therein, as reflected in neuronal responses to unpredictable events that carry a unique prediction error signature. In the real world, however, learning about regularities will often occur in the context of competing cognitive demands. Here we asked whether learning of statistical regularities is modulated by concurrent cognitive load. We compared electroencephalographic metrics associated with responses to pure-tone sounds with frequencies sampled from narrow or wide Gaussian distributions. We showed that outliers evoked a larger response than those in the centre of the stimulus distribution (i.e., an effect of surprise) and that this difference was greater for physically identical outliers in the narrow than in the broad distribution. These results demonstrate an early neurophysiological marker of the brain's ability to implicitly encode complex statistical structure in the environment. Moreover, we manipulated concurrent cognitive load by having participants perform a visual working memory task while listening to these streams of sounds. We again observed greater prediction error responses in the narrower distribution under both low and high cognitive load. Furthermore, there was no reliable reduction in prediction error magnitude under high relative to low cognitive load. Our findings suggest that statistical learning is not a capacity-limited process, and that it proceeds automatically even when cognitive resources are taxed by concurrent demands.
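
    The core "surprise" quantity can be illustrated with a short sketch (Python/NumPy; the mean frequency, standard deviations and outlier value are invented, not the study's stimulus parameters): the same physical outlier carries a larger negative log-probability, i.e. is more surprising, under the narrow distribution than under the wide one.

        import numpy as np

        def surprise(x, mu, sigma):
            """Surprise (negative log-probability density, in nats) of x under N(mu, sigma^2)."""
            return 0.5 * np.log(2 * np.pi * sigma**2) + (x - mu) ** 2 / (2 * sigma**2)

        mu = 500.0                  # assumed mean tone frequency (Hz)
        narrow, wide = 50.0, 150.0  # assumed standard deviations of the two stimulus distributions
        outlier = 800.0             # a physically identical outlier tone presented in both streams

        print(f"surprise under narrow distribution: {surprise(outlier, mu, narrow):.2f} nats")
        print(f"surprise under wide distribution:   {surprise(outlier, mu, wide):.2f} nats")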

  6. [Withdrawal of high estrogen containing oral contraceptives and the demand for medical service].

    Science.gov (United States)

    Kamper-Jorgensen, F; Albertsen, J; Almind, G; Andersen, K; Braae, M; Dybkjaer, L; Frolund, F; Granlie, K; Hald, E; Hald, J; Hector, O; Jacobsen, K; Kaltoft, S; Kjaerulff, E; Mabeck, C D; Magnusson, B; Nielsen, A; Novella, P; Olsen, O M; Pedersen, P A; Rasmussen, I; Strunk, K; Traeden, J B; Veje, J O

    1975-03-24

    The results of a survey are presented concerning the effectiveness of mass media publicity with the public. After oral contraceptives containing high levels of estrogen were prohibited in Denmark, a telephone survey of 23 doctors was taken to determine the fluctuation in demand for medical information from patients, and the reason for the fluctuation. The reasons were divided into 3 groups: 1) resulting from mass media publicity, 2) resulting from the unavailability of a particular contraceptive, and 3) other. 3 surveys were conducted of the frequency of demand for information on the high estrogen contraceptives, 1 for each of the 2 weeks after the prohibition and withdrawal of the contraceptives took place, and 1 conducted 1 month after the prohibition. 2-3% of the inquiries received by the doctors concerned the prohibited contraceptives, and half of these could be attributed directly to the mass media publicity. The number of requests in categories 1 and 2 dropped sharply in the 2nd and 3rd surveys, indicating that the mass media publicity and the withdrawal of the contraceptive from the market had only a very immediate effect. It is also shown that the telephone can be used successfully to ascertain the effects of a short-term social phenomenon on the public.

  7. PREFACE: High Performance Computing Symposium 2011

    Science.gov (United States)

    Talon, Suzanne; Mousseau, Normand; Peslherbe, Gilles; Bertrand, François; Gauthier, Pierre; Kadem, Lyes; Moitessier, Nicolas; Rouleau, Guy; Wittig, Rod

    2012-02-01

    HPCS (High Performance Computing Symposium) is a multidisciplinary conference that focuses on research involving High Performance Computing and its application. Attended by Canadian and international experts and renowned researchers in the sciences, all areas of engineering, the applied sciences, medicine and life sciences, mathematics, the humanities and social sciences, it is Canada's pre-eminent forum for HPC. The 25th edition was held in Montréal, at the Université du Québec à Montréal, from 15-17 June and focused on HPC in Medical Science. The conference was preceded by tutorials held at Concordia University, where 56 participants learned about HPC best practices, GPU computing, parallel computing, debugging and a number of high-level languages. 274 participants from six countries attended the main conference, which involved 11 invited and 37 contributed oral presentations, 33 posters, and an exhibit hall with 16 booths from our sponsors. The work that follows is a collection of papers presented at the conference covering HPC topics ranging from computer science to bioinformatics. They are divided here into four sections: HPC in Engineering, Physics and Materials Science, HPC in Medical Science, HPC Enabling to Explore our World and New Algorithms for HPC. We would once more like to thank the participants and invited speakers, the members of the Scientific Committee, the referees who spent time reviewing the papers and our invaluable sponsors. To hear the invited talks and learn about 25 years of HPC development in Canada visit the Symposium website: http://2011.hpcs.ca/lang/en/conference/keynote-speakers/ Enjoy the excellent papers that follow, and we look forward to seeing you in Vancouver for HPCS 2012! Gilles Peslherbe Chair of the Scientific Committee Normand Mousseau Co-Chair of HPCS 2011 Suzanne Talon Chair of the Organizing Committee UQAM Sponsors The PDF also contains photographs from the conference banquet.

  8. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  9. High-Fidelity Down-Conversion Source for Secure Communications Using On-Demand Single Photons

    Science.gov (United States)

    Roberts, Tony

    2015-01-01

    AdvR, Inc., has built an efficient, fully integrated, waveguide-based source of spectrally uncorrelated photon pairs that will accelerate research and development (R&D) in the emerging field of quantum information science. Key to the innovation is the use of submicron periodically poled waveguides to produce counter-propagating photon pairs, which is enabled by AdvR's patented segmented microelectrode poling technique. This novel device will provide a high brightness source of down-conversion pairs with enhanced spectral properties and low attenuation, and it will operate in the visible to the mid-infrared spectral region. A waveguide-based source of spectrally and spatially pure heralded photons will contribute to a wide range of NASA's advanced technology development efforts, including on-demand single photon sources for high-rate space-based secure communications.

  10. A modified method for estimation of chemical oxygen demand for samples having high suspended solids.

    Science.gov (United States)

    Yadvika; Yadav, Asheesh Kumar; Sreekrishnan, T R; Satya, Santosh; Kohli, Sangeeta

    2006-03-01

    Determination of the chemical oxygen demand (COD) of samples with a high suspended solids concentration, such as cattle dung slurry, using the open reflux method of APHA-AWWA-WPCF did not give consistent results. This study presents a modification of the open reflux method (APHA-AWWA-WPCF) to make it suitable for samples with a high percentage of suspended solids. The new method is based on a different technique of sample preparation, modified quantities of reagents and a longer reflux time compared with the existing open reflux method. For samples having solids contents of 14.0 g/l or higher, the modified method was found to give higher COD values with much greater consistency and accuracy than the existing open reflux method.

  11. Varying Overhead Ad Hoc on Demand Vector Routing in Highly Mobile Ad Hoc Network

    Directory of Open Access Journals (Sweden)

    V. Balaji

    2011-01-01

    Full Text Available Problem statement: An inherent feature of mobile ad hoc networks is the frequent change of network topology, leading to stability and reliability problems in the network. Highly dynamic and dense networks have to maintain an acceptable level of service for data packets while limiting the network control overheads. This capability is closely related to how quickly the network protocol control overhead is managed as a function of increased link changes. Dynamically limiting the routing control overheads based on the network topology improves the throughput of the network. Approach: In this study we propose the Varying Overhead Ad hoc On Demand Vector routing protocol (VO-AODV) for highly dynamic mobile ad hoc networks. The proposed VO-AODV routing protocol dynamically modifies the active route time based on the network topology. Results and Conclusion: Simulation results show that the proposed model decreases the control overheads without decreasing the QoS of the network.
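
    The core idea, shortening the lifetime of cached routes as the topology becomes more volatile, can be sketched as below; the scaling rule and the constants are assumptions for illustration, not the published VO-AODV formula.

```python
# Sketch of an adaptive active-route-timeout rule: the more link changes a node
# observes per second, the shorter it keeps cached routes alive. The specific
# scaling law and constants are illustrative assumptions.
def active_route_timeout(link_changes_per_s, base_timeout=3.0,
                         min_timeout=0.5, sensitivity=1.0):
    """Return a route lifetime (seconds) that decreases with topology churn."""
    timeout = base_timeout / (1.0 + sensitivity * link_changes_per_s)
    return max(min_timeout, timeout)

for churn in (0.0, 0.5, 2.0, 10.0):
    print(f"link changes/s = {churn:4.1f} -> timeout = {active_route_timeout(churn):.2f} s")
```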

  12. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools

  13. Monitoring SLAC High Performance UNIX Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC

    2005-12-15

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible problems or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process undertaken in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface.
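
    Ganglia's gmond daemon serves its collected metrics as an XML dump over TCP (port 8649 by default), so a script-driven relational store of the kind described can be sketched as follows; the table layout is an assumption, and SQLite stands in here for the MySQL backend.

```python
# Sketch of a script that pulls gmond's XML metric dump and stores it in a
# relational table. SQLite stands in for the MySQL backend described in the
# report; the table layout is an assumption for illustration.
import socket
import sqlite3
import time
import xml.etree.ElementTree as ET

def fetch_gmond_xml(host="localhost", port=8649):
    """Read the full XML dump that gmond serves on its TCP port."""
    chunks = []
    with socket.create_connection((host, port), timeout=5) as sock:
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

def store_metrics(xml_text, db_path="ganglia.db"):
    """Flatten <HOST>/<METRIC> elements into (timestamp, host, metric, value) rows."""
    root = ET.fromstring(xml_text)
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS metrics
                    (ts REAL, host TEXT, metric TEXT, value TEXT)""")
    now = time.time()
    rows = [(now, h.get("NAME"), m.get("NAME"), m.get("VAL"))
            for h in root.iter("HOST") for m in h.iter("METRIC")]
    conn.executemany("INSERT INTO metrics VALUES (?, ?, ?, ?)", rows)
    conn.commit()
    conn.close()

if __name__ == "__main__":
    store_metrics(fetch_gmond_xml())
```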

  14. Does good leadership buffer effects of high emotional demands at work on risk of antidepressant treatment?

    DEFF Research Database (Denmark)

    Madsen, Ida E H; Hanson, Linda L Magnusson; Rugulies, Reiner Ernst;

    2014-01-01

    Emotionally demanding work has been associated with increased risk of common mental disorders. Because emotional demands may not be preventable in certain occupations, the identification of workplace factors that can modify this association is vital. This article examines whether effects of emotional demands on antidepressant treatment, as an indicator of common mental disorders, are buffered by good leadership.

  15. Does High Emotional Demand with Low Job Control Relate to Suicidal Ideation among Service and Sales Workers in Korea?

    Science.gov (United States)

    Yoon, Jin-Ha; Jeung, Dayee; Chang, Sei-Jin

    2016-07-01

    We examined the relationship of high emotional demands and low job control to suicidal ideation among service and sales workers in Korea. A total of 1,995 service and sales workers participated in this study. Suicidal ideation and levels of emotional demand and job control were assessed by self-reported questionnaire in the 4th Korean National Health and Nutrition Examination Survey. Gender-specific odds ratios (OR) and 95% confidence intervals (95% CI) for suicidal ideation were calculated using logistic regression analysis. The results show that workers who suffered from high emotional demands (OR, 2.07; 95% CI, 1.24-3.45 in men; OR, 1.97; 95% CI, 1.42-2.75 in women) or low job control (OR, 1.96; 95% CI, 1.42-2.75 in men; OR, 1.33; 95% CI, 0.91-1.93 in women) were more likely to experience suicidal ideation after controlling for age, household income, and employment characteristics. The interaction model of emotional demands and job control revealed that workers with high emotional demands and high job control (OR, 1.93; 95% CI, 1.08-3.45 in men; OR, 1.60; 95% CI, 1.06-2.42 in women) and high emotional demands and low job control (OR, 4.60; 95% CI, 1.88-11.29 in men; OR, 2.78; 95% CI, 1.64-4.44 in women) had a higher risk for suicidal ideation compared to those with low emotional demands and high job control after controlling for age, household income, employment characteristics, smoking, alcohol drinking and physical activity habits. These results suggest that high emotional demands in both genders and low job control in men might play a crucial role in developing suicidal ideation among sales and service workers in Korea.
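
    The odds ratios and 95% confidence intervals quoted above are the standard outputs of a logistic regression; the sketch below shows that computation on simulated data (not the KNHANES survey data), with exposure effects chosen arbitrarily.

```python
# Minimal sketch of how odds ratios and 95% CIs of the kind reported above are
# obtained from a logistic regression; the data here are simulated, not the
# KNHANES survey data, and the effect sizes are arbitrary.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
high_demand = rng.integers(0, 2, n)
low_control = rng.integers(0, 2, n)
# Simulate suicidal ideation with higher odds under high demand / low control.
logit = -2.0 + 0.7 * high_demand + 0.3 * low_control
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(np.column_stack([high_demand, low_control]))
fit = sm.Logit(y, X).fit(disp=False)

odds_ratios = np.exp(fit.params[1:])
ci = np.exp(fit.conf_int()[1:])
for name, orr, (lo, hi) in zip(["high emotional demand", "low job control"],
                               odds_ratios, ci):
    print(f"{name}: OR = {orr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```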

  16. Grid Computing

    Indian Academy of Sciences (India)

    2016-05-01

    A computing grid interconnects resources such as high performance computers, scientific databases, and computer-controlled scientific instruments of cooperating organizations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to customers on demand. In this article, we describe the grid computing model and enumerate the major differences between grid and cloud computing.

  17. Computer Controlled High Precision, High Voltage Pulse Generator

    Institute of Scientific and Technical Information of China (English)

    但果; 邹积岩; 丛吉远; 董恩源

    2003-01-01

    A high-precision, high-voltage pulse generator made up of high-power IGBTs and pulse transformers, controlled by a computer, is described. The simple main circuit topology employed in this pulse generator reduces cost while still meeting the special requirements of pulsed electric fields (PEFs) in food processing. The pulse generator utilizes a complex programmable logic device (CPLD) to generate trigger signals. Pulse-frequency, pulse-width and pulse-number are controlled by a computer via an RS232 bus. The high voltage pulse generator is well suited to non-thermal treatment of fluid foods with pulsed electric fields, as its output can be increased and decreased with a step length of 1.
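
    Host-side parameter setting over RS232, as described above, can be sketched with pyserial; the command strings, port name, and baud rate are hypothetical, since the generator's actual protocol is not given in the abstract.

```python
# Sketch of host-side control of pulse-frequency, pulse-width and pulse-number
# over RS232 using pyserial. The command syntax ("FREQ=...", etc.), port name
# and baud rate are hypothetical; the instrument's real protocol is not
# specified in the abstract above.
import serial  # pip install pyserial

def send_pulse_settings(port="/dev/ttyS0", freq_hz=100, width_us=20, count=50):
    with serial.Serial(port, baudrate=9600, timeout=1) as link:
        for cmd in (f"FREQ={freq_hz}", f"WIDTH={width_us}", f"COUNT={count}"):
            link.write((cmd + "\r\n").encode("ascii"))    # one setting per line
            reply = link.readline()                       # e.g. an "OK" echo
            print(cmd, "->", reply.decode("ascii", errors="replace").strip())

if __name__ == "__main__":
    send_pulse_settings()
```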

  18. High End Computing Technologies for Earth Science Applications: Trends, Challenges, and Innovations

    Science.gov (United States)

    Parks, John (Technical Monitor); Biswas, Rupak; Yan, Jerry C.; Brooks, Walter F.; Sterling, Thomas L.

    2003-01-01

    Earth science applications of the future will stress the capabilities of even the highest performance supercomputers in the areas of raw compute power, mass storage management, and software environments. These NASA mission critical problems demand usable multi-petaflops and exabyte-scale systems to fully realize their science goals. With an exciting vision of the technologies needed, NASA has established a comprehensive program of advanced research in computer architecture, software tools, and device technology to ensure that, in partnership with US industry, it can meet these demanding requirements with reliable, cost effective, and usable ultra-scale systems. NASA will exploit, explore, and influence emerging high end computing architectures and technologies to accelerate the next generation of engineering, operations, and discovery processes for NASA Enterprises. This article captures this vision and describes the concepts, accomplishments, and the potential payoff of the key thrusts that will help meet the computational challenges in Earth science applications.

  19. Computer vision for high content screening.

    Science.gov (United States)

    Kraus, Oren Z; Frey, Brendan J

    2016-01-01

    High Content Screening (HCS) technologies that combine automated fluorescence microscopy with high throughput biotechnology have become powerful systems for studying cell biology and drug screening. These systems can produce more than 100 000 images per day, making their success dependent on automated image analysis. In this review, we describe the steps involved in quantifying microscopy images and different approaches for each step. Typically, individual cells are segmented from the background using a segmentation algorithm. Each cell is then quantified by extracting numerical features, such as area and intensity measurements. As these feature representations are typically high dimensional (>500), modern machine learning algorithms are used to classify, cluster and visualize cells in HCS experiments. Machine learning algorithms that learn feature representations, in addition to the classification or clustering task, have recently advanced the state of the art on several benchmarking tasks in the computer vision community. These techniques have also recently been applied to HCS image analysis.
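
    The segment-then-featurize-then-classify pipeline summarized above can be sketched with common open-source libraries; this is a generic illustration on a synthetic image, not any specific method reviewed in the paper.

```python
# Generic sketch of the HCS analysis pipeline described above: segment cells,
# extract per-cell features, then classify them. Uses a synthetic image and
# placeholder phenotype labels; real pipelines are far more elaborate.
import numpy as np
from skimage.filters import threshold_otsu, gaussian
from skimage.measure import label, regionprops
from sklearn.ensemble import RandomForestClassifier

def segment_and_featurize(image):
    """Return one feature vector (area, mean intensity, eccentricity) per cell."""
    smooth = gaussian(image, sigma=2)
    mask = smooth > threshold_otsu(smooth)
    regions = regionprops(label(mask), intensity_image=image)
    return np.array([[r.area, r.mean_intensity, r.eccentricity] for r in regions])

# Synthetic "fluorescence" image with a few bright blobs on a noisy background.
rng = np.random.default_rng(1)
img = rng.normal(0.1, 0.02, (256, 256))
yy, xx = np.ogrid[:256, :256]
for cy, cx in [(60, 60), (150, 90), (200, 200)]:
    img += 0.8 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 8.0 ** 2))

features = segment_and_featurize(img)
labels_train = np.arange(len(features)) % 2          # placeholder phenotype labels
clf = RandomForestClassifier(n_estimators=50).fit(features, labels_train)
print("predicted phenotypes:", clf.predict(features))
```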

  20. The path toward HEP High Performance Computing

    Science.gov (United States)

    Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-06-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of the peak. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a "High Performance" implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and it has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the ROOT and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in terms of memory consumption compared to the single-threaded version, together with sub-optimal handling of event processing tails. Besides this, the low level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine grain parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit best from
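
    The "vectors of particles dispatched to computing resources" idea can be caricatured as below: tracks are grouped into baskets and handed to a worker pool instead of being transported one at a time. The propagation step, basket size, and worker count are placeholders, not Geant-V's actual scheduler.

```python
# Toy sketch of the "baskets of tracks dispatched to workers" idea: particles
# are grouped into vectors and handed to a pool of workers instead of being
# transported one at a time. The "propagate" step is a stand-in computation.
from concurrent.futures import ThreadPoolExecutor
import math

def propagate_basket(basket):
    """Stand-in for vectorised particle transport over one geometry step."""
    return [energy * math.exp(-0.1) for energy in basket]

def schedule(tracks, basket_size=4, workers=4):
    baskets = [tracks[i:i + basket_size] for i in range(0, len(tracks), basket_size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(propagate_basket, baskets))
    return [e for basket in results for e in basket]

print(schedule([float(e) for e in range(1, 17)]))
```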

  1. Computing High Accuracy Power Spectra with Pico

    CERN Document Server

    Fendt, William A

    2007-01-01

    This paper presents the second release of Pico (Parameters for the Impatient COsmologist). Pico is a general purpose machine learning code which we have applied to computing the CMB power spectra and the WMAP likelihood. For this release, we have made improvements to the algorithm as well as the data sets used to train Pico, leading to a significant improvement in accuracy. For the 9 parameter nonflat case presented here Pico can on average compute the TT, TE and EE spectra to better than 1% of cosmic standard deviation for nearly all $\ell$ values over a large region of parameter space. Performing a cosmological parameter analysis of current CMB and large scale structure data, we show that these power spectra give very accurate 1 and 2 dimensional parameter posteriors. We have extended Pico to allow computation of the tensor power spectrum and the matter transfer function. Pico runs about 1500 times faster than CAMB at the default accuracy and about 250,000 times faster at high accuracy. Training Pico can be...
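
    The general idea behind such an emulator, learning a fast mapping from input parameters to precomputed outputs and then evaluating the fit instead of the expensive code, can be sketched as below; the toy "spectrum", the polynomial ridge regression, and the parameter ranges are illustrative assumptions, not Pico's actual interpolation scheme.

```python
# Illustrative sketch of the emulator idea behind Pico: fit a fast regression
# from input parameters to precomputed outputs, then evaluate the fit instead
# of the expensive code. The "spectrum" here is a toy analytic stand-in, and
# polynomial ridge regression is not Pico's actual scheme.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
ells = np.arange(2, 1000)

def slow_model(theta):
    """Toy stand-in for a Boltzmann code: a damped oscillation in ell."""
    amp, peak = theta
    return amp * np.exp(-ells / 800.0) * np.cos(ells / peak) ** 2

# "Training set": parameter samples and their precomputed spectra.
thetas = np.column_stack([rng.uniform(0.8, 1.2, 300), rng.uniform(180, 220, 300)])
spectra = np.array([slow_model(t) for t in thetas])

emulator = make_pipeline(PolynomialFeatures(degree=3), Ridge(alpha=1e-6))
emulator.fit(thetas, spectra)

test = np.array([[1.0, 200.0]])
err = np.max(np.abs(emulator.predict(test)[0] - slow_model(test[0])))
print(f"max absolute emulation error on test point: {err:.3e}")
```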

  2. Demand side resource operation on the Irish power system with high wind power penetration

    DEFF Research Database (Denmark)

    Keane, A.; Tuohy, A.; Meibom, Peter

    2011-01-01

    The utilisation of demand side resources is set to increase over the coming years with the advent of advanced metering infrastructure, home area networks and the promotion of increased energy efficiency. Demand side resources are proposed as an energy resource that, through aggregation, can form ...

  3. 75 FR 63724 - Raisins Produced From Grapes Grown in California; Use of Estimated Trade Demand To Compute Volume...

    Science.gov (United States)

    2010-10-18

    ... published in the Federal Register on August 6, 2010 (75 FR 47490), on the use of an estimated trade demand... Register on August 6, 2010 (75 FR 47490), on the establishment of an estimated trade demand figure to..., 2010 (75 FR 47490), is hereby withdrawn. List of Subjects in 7 CFR Part 989 Grapes,...

  4. High performance computing for beam physics applications

    Science.gov (United States)

    Ryne, R. D.; Habib, S.

    Several countries are now involved in efforts aimed at utilizing accelerator-driven technologies to solve problems of national and international importance. These technologies have both economic and environmental implications. The technologies include waste transmutation, plutonium conversion, neutron production for materials science and biological science research, neutron production for fusion materials testing, fission energy production systems, and tritium production. All of these projects require a high-intensity linear accelerator that operates with extremely low beam loss. This presents a formidable computational challenge: One must design and optimize over a kilometer of complex accelerating structures while taking into account beam loss to an accuracy of 10 parts per billion per meter. Such modeling is essential if one is to have confidence that the accelerator will meet its beam loss requirement, which ultimately affects system reliability, safety and cost. At Los Alamos, the authors are developing a capability to model ultra-low loss accelerators using the CM-5 at the Advanced Computing Laboratory. They are developing PIC, Vlasov/Poisson, and Langevin/Fokker-Planck codes for this purpose. With slight modification, they have also applied their codes to modeling mesoscopic systems and astrophysical systems. In this paper, they will first describe HPC activities in the accelerator community. Then they will discuss the tools they have developed to model classical and quantum evolution equations. Lastly they will describe how these tools have been used to study beam halo in high current, mismatched charged particle beams.

  5. Evaluation of high-performance computing software

    Energy Technology Data Exchange (ETDEWEB)

    Browne, S.; Dongarra, J. [Univ. of Tennessee, Knoxville, TN (United States); Rowan, T. [Oak Ridge National Lab., TN (United States)

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations difficult to find elsewhere.

  7. Ultra-high resolution computed tomography imaging

    Energy Technology Data Exchange (ETDEWEB)

    Paulus, Michael J. (Knoxville, TN); Sari-Sarraf, Hamed (Knoxville, TN); Tobin, Jr., Kenneth William (Harriman, TN); Gleason, Shaun S. (Knoxville, TN); Thomas, Jr., Clarence E. (Knoxville, TN)

    2002-01-01

    A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, having an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180 degrees, and after each incremental rotation, repeating the radiating, acquiring, generating and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the spot focus of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 microns.

  8. Enabling Airspace Integration for High-Density On-Demand Mobility Operations

    Science.gov (United States)

    Mueller, Eric; Kopardekar, Parimal; Goodrich, Kenneth H.

    2017-01-01

    Aviation technologies and concepts have reached a level of maturity that may soon enable an era of on-demand mobility (ODM) fueled by quiet, efficient, and largely automated air taxis. However, successfully bringing such a system to fruition will require introducing orders of magnitude more aircraft to a given airspace volume than can be accommodated by the traditional air traffic control system, among other important technical challenges. The airspace integration problem is further compounded by requirements to set aside appropriate ground infrastructure for take-off and landing areas and ensuring these new aircraft types and their operations do not burden traditional airspace users and air traffic control. This airspace integration challenge may be significantly reduced by extending the concepts and technologies developed to manage small unmanned aircraft systems (UAS) at low altitude, the UAS traffic management (UTM) system, to higher altitudes and new aircraft types, or by equipping ODM aircraft with advanced sensors, algorithms, and interfaces. The precedent of operational freedom inherent in visual flight rules and the technologies developed for large UAS and commercial aircraft automation will contribute to the evolution of an ODM system enabled by UTM. This paper describes the set of air traffic services, normally provided by the traditional air traffic system, that an ODM system would implement to achieve the high densities needed for ODM's economic viability. Finally, the paper proposes a framework for integrating, evaluating, and deploying low-, medium-, and high-density ODM concepts that build on each other to ensure operational and economic feasibility at every step.

  9. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Barker, Ashley D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bland, Arthur S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Gary, Jeff D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Hack, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; McNally, Stephen T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Rogers, James H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Smith, Brian E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Straatsma, T. P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Sukumar, Sreenivas Rangan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Thach, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Tichenor, Suzy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Wells, Jack C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility

    2016-03-01

    California to date. The Titan system provides the largest extant heterogeneous architecture for computing and computational science. Usage is high, delivering on the promise of a system well-suited for capability simulations for science. This success is due in part to innovations in tracking and reporting the activity on the compute nodes, and using this information to further enable and optimize applications, extending and balancing workload across the entire node. The OLCF continues to invest in innovative processes, tools, and resources necessary to meet continuing user demand. The facility’s leadership in data analysis and workflows was featured at the Department of Energy (DOE) booth at SC15, for the second year in a row, highlighting work with researchers from the National Library of Medicine coupled with unique computational and data resources serving experimental and observational data across facilities. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. Building on the exemplary year of 2014, as shown by the 2014 Operational Assessment Report (OAR) review committee response in Appendix A, this OAR delineates the policies, procedures, and innovations implemented by the OLCF to continue delivering a multi-petaflop resource for cutting-edge research. This report covers CY 2015, which, unless otherwise specified, denotes January 1, 2015, through December 31, 2015.

  10. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Science.gov (United States)

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  12. Scalable resource management in high performance computers.

    Energy Technology Data Exchange (ETDEWEB)

    Frachtenberg, E. (Eitan); Petrini, F. (Fabrizio); Fernandez Peinador, J. (Juan); Coll, S. (Salvador)

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly-available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and to distribute the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movements from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12MB on a 64 processor/32 node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with a background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.

  13. Demand Response in Low Voltage Distribution Networks with High PV Penetration

    DEFF Research Database (Denmark)

    Nainar, Karthikeyan; Pokhrel, Basanta Raj; Pillai, Jayakrishnan Radhakrishna

    2017-01-01

    the required flexibility from the electricity market through an aggregator. The optimum demand response enables consumption of maximum renewable energy within the network constraints. Simulation studies are conducted using Matlab and DigSilent Power factory software on a Danish low-voltage distribution system...... generation and load forecasts, network topology and market price signals as inputs, limits of network voltages, line power flows, transformer loading and demand response dynamics as constraints to find the required demand response at each time step. The proposed method can be used by the DSOs to purchase...

  14. Speed and path control for conflict-free flight in high air traffic demand in terminal airspace

    Science.gov (United States)

    Rezaei, Ali

    To accommodate the growing air traffic demand, flights will need to be planned and navigated with a much higher level of precision than today's aircraft flight path. The Next Generation Air Transportation System (NextGen) stands to benefit significantly in safety and efficiency from such movement of aircraft along precisely defined paths. Air Traffic Operations (ATO) relying on such precision--the Precision Air Traffic Operations or PATO--are the foundation of high throughput capacity envisioned for the future airports. In PATO, the preferred method is to manage the air traffic by assigning a speed profile to each aircraft in a given fleet in a given airspace (in practice known as speed control). In this research, an algorithm has been developed, set in the context of a Hybrid Control System (HCS) model, that determines whether a speed control solution exists for a given fleet of aircraft in a given airspace and if so, computes this solution as a collective speed profile that assures separation if executed without deviation. Uncertainties such as weather are not considered but the algorithm can be modified to include uncertainties. The algorithm first computes all feasible sequences (i.e., all sequences that allow the given fleet of aircraft to reach destinations without violating the FAA's separation requirement) by looking at all pairs of aircraft. Then, the most likely sequence is determined and the speed control solution is constructed by a backward trajectory generation, starting with the aircraft last out and proceeding to the first out. This computation can be done for different sequences in parallel which helps to reduce the computation time. If such a solution does not exist, then the algorithm calculates a minimal path modification (known as path control) that will allow separation-compliant speed control. We will also prove that the algorithm will modify the path without creating a new separation violation. The new path will be generated by adding new
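
    The pairwise feasibility test underlying the sequencing step can be sketched as follows: for each ordered pair of aircraft, the crossing times admitted by their speed ranges are checked against a required separation. The constant-speed kinematics, speed ranges, and separation value are simplified assumptions, not the dissertation's model.

```python
# Simplified sketch of the pairwise check used when enumerating feasible
# sequences: two aircraft sharing a merge point must be able to cross it at
# least `min_sep_s` seconds apart for some admissible speeds. The constant-
# speed kinematics and the separation value are assumptions for illustration.
def crossing_window(distance_m, v_min, v_max):
    """Earliest/latest possible time (s) to reach the merge point."""
    return distance_m / v_max, distance_m / v_min

def pair_feasible(lead, trail, min_sep_s=90.0):
    """True if 'trail' can cross at least min_sep_s after 'lead' can."""
    lead_early, _ = crossing_window(*lead)
    _, trail_late = crossing_window(*trail)
    return trail_late - lead_early >= min_sep_s

# (distance to merge point in m, min speed m/s, max speed m/s)
a = (40_000, 140.0, 180.0)
b = (45_000, 140.0, 180.0)
print("A leads B feasible:", pair_feasible(a, b))
print("B leads A feasible:", pair_feasible(b, a))
```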

  15. A High Throughput On-Demand Routing Protocol for Multirate Ad Hoc Wireless Networks

    Science.gov (United States)

    Rahman, Md. Mustafizur; Hong, Choong Seon; Lee, Sungwon

    Routing in wireless ad hoc networks is a challenging issue because it dynamically controls the network topology and determines the network performance. Most of the available protocols are based on single-rate radio networks and they use hop-count as the routing metric. There have been some efforts for multirate radios as well that use transmission-time of a packet as the routing metric. However, neither the hop-count nor the transmission-time may be a sufficient criterion for discovering a high-throughput path in a multirate wireless ad hoc network. Hop-count based routing metrics usually select a low-rate bound path whereas the transmission-time based metrics may select a path with a comparatively large number of hops. The trade-off between transmission time and effective transmission range of a data rate can be another key criterion for finding a high-throughput path in such environments. In this paper, we introduce a novel routing metric based on the efficiency of a data rate that balances the required time and covering distance by a transmission and results in increased throughput. Using the new metric, we propose an on-demand routing protocol for multirate wireless environment, dubbed MR-AODV, to discover high-throughput paths in the network. A key feature of MR-AODV is that it controls the data rate in transmitting both the data and control packets. Rate control during the route discovery phase minimizes the route request (RREQ) avalanche. We use simulations to evaluate the performance of the proposed MR-AODV protocol and results reveal significant improvements in end-to-end throughput and minimization of routing overhead.
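
    A routing metric that trades a rate's airtime against its usable range, in the spirit described above, can be sketched as below; the rate/range table and the efficiency formula are assumptions for illustration, not the exact MR-AODV metric.

```python
# Sketch of a rate-efficiency routing metric in the spirit described above:
# each data rate is scored by range covered per unit airtime, and a path's
# cost is the sum of per-hop airtimes at the chosen rates. The rate/range
# table and the scoring formula are illustrative assumptions.
PACKET_BITS = 8 * 1500

# (data rate in Mbit/s, approximate usable range in metres) -- assumed values.
RATE_RANGE = [(54, 30), (24, 60), (11, 90), (2, 150)]

def airtime(rate_mbps):
    return PACKET_BITS / (rate_mbps * 1e6)           # seconds per packet

def efficiency(rate_mbps, range_m):
    return range_m / airtime(rate_mbps)              # metres covered per second of airtime

def best_rate_for_distance(distance_m):
    """Pick the most efficient rate whose range still covers the hop."""
    feasible = [(r, d) for r, d in RATE_RANGE if d >= distance_m]
    return max(feasible, key=lambda rd: efficiency(*rd))[0]

def path_cost(hop_distances_m):
    """Total airtime along the path when each hop uses its best rate."""
    return sum(airtime(best_rate_for_distance(d)) for d in hop_distances_m)

print("3 short hops :", f"{path_cost([25, 25, 25]) * 1e3:.3f} ms")
print("1 long hop   :", f"{path_cost([140]) * 1e3:.3f} ms")
```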

  16. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book makes it possible to compare the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook to assess the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  17. The path toward HEP High Performance Computing

    CERN Document Server

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results and recent studies have shown that, on modern architectures, it achieves a performance between 10% and 50% of the peak. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and it has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the ROOT and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on th...

  18. Slovak High School Students' Attitudes toward Computers

    Science.gov (United States)

    Kubiatko, Milan; Halakova, Zuzana; Nagyova, Sona; Nagy, Tibor

    2011-01-01

    The pervasive involvement of information and communication technologies and computers in our daily lives influences changes of attitude toward computers. We focused on finding these ecological effects in the differences in computer attitudes as a function of gender and age. A questionnaire with 34 Likert-type items was used in our research. The…

  19. High speed and large scale scientific computing

    CERN Document Server

    Gentzsch, W; Joubert, GR

    2010-01-01

    Over the years parallel technologies have completely transformed main stream computing. This book deals with the issues related to the area of cloud computing and discusses developments in grids, applications and information processing, as well as e-science. It is suitable for computer scientists, IT engineers and IT managers.

  20. High performance computing in power and energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Khaitan, Siddhartha Kumar [Iowa State Univ., Ames, IA (United States); Gupta, Anshul (eds.) [IBM Watson Research Center, Yorktown Heights, NY (United States)

    2013-07-01

    The twin challenge of meeting global energy demands in the face of growing economies and populations and restricting greenhouse gas emissions is one of the most daunting ones that humanity has ever faced. Smart electrical generation and distribution infrastructure will play a crucial role in meeting these challenges. We would need to develop capabilities to handle large volumes of data generated by power system components like PMUs, DFRs and other data acquisition devices, as well as the capacity to process these data at high resolution via multi-scale and multi-period simulations, cascading and security analysis, interaction between hybrid systems (electric, transport, gas, oil, coal, etc.) and so on, to get meaningful information in real time to ensure a secure, reliable and stable power system grid. Advanced research on development and implementation of market-ready leading-edge high-speed enabling technologies and algorithms for solving real-time, dynamic, resource-critical problems will be required for dynamic security analysis targeted towards successful implementation of Smart Grid initiatives. This book aims to bring together some of the latest research developments as well as thoughts on the future research directions of high performance computing applications in electric power systems planning, operations, security, markets, and grid integration of alternate sources of energy, etc.

  1. High performance computing in science and engineering Garching/Munich 2016

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, Siegfried; Bode, Arndt; Bruechle, Helmut; Brehm, Matthias (eds.)

    2016-11-01

    Computer simulations are the well-established third pillar of natural sciences along with theory and experimentation. Particularly high performance computing is growing fast and constantly demands more and more powerful machines. To keep pace with this development, in spring 2015, the Leibniz Supercomputing Centre installed the high performance computing system SuperMUC Phase 2, only three years after the inauguration of its sibling SuperMUC Phase 1. Thereby, the compute capabilities were more than doubled. This book covers the time-frame June 2014 until June 2016. Readers will find many examples of outstanding research in the more than 130 projects that are covered in this book, with each one of these projects using at least 4 million core-hours on SuperMUC. The largest scientific communities using SuperMUC in the last two years were computational fluid dynamics simulations, chemistry and material sciences, astrophysics, and life sciences.

  2. Association between job strain (high demand-low control) and cardiovascular disease risk factors among petrochemical industry workers

    Directory of Open Access Journals (Sweden)

    Siamak Poorabdian

    2013-08-01

    Full Text Available Objective: One of the practical models for assessment of stressful working conditions due to job strain is "job demand and control" or Karasek's job strain model. This model explains how adverse physical and psychological effects, including cardiovascular disease risk factors, can be established due to high work demand. The aim was to investigate how certain cardiovascular risk factors, including body mass index (BMI), heart rate, blood pressure, serum total cholesterol levels, and cigarette smoking, are associated with job demand and control in workers. Materials and Methods: In this cohort study, 500 subjects completed "job demand and control" questionnaires. The factor analysis method was used in order to specify the most important "job demand and control" questions. Health check-up records of the workers were used to extract data about cardiovascular disease risk factors. Ultimately, hypothesis testing, based on Eta, was used to assess the relationship between the separated working groups and cardiovascular risk factors (hypertension and serum total cholesterol level). Results: A significant relationship was found between the job demand-control model and cardiovascular risk factors. In terms of chi-squared test results, the highest value was assessed for heart rate (Chi2 = 145.078). The corresponding results for smoking and BMI were Chi2 = 85.652 and Chi2 = 30.941, respectively. Subsequently, the Eta result for total cholesterol was 0.469, followed by hypertension equaling 0.684. Moreover, there was a significant difference between cardiovascular risk factors and job demand-control profiles among different working groups, including the operational group, repairing group and servicing group. Conclusion: Job control and demand are significantly related to heart disease risk factors including hypertension, hyperlipidemia, and cigarette smoking.

  3. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  4. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  5. Exploring Tradeoffs in Demand-side and Supply-side Management of Urban Water Resources using Agent-based Modeling and Evolutionary Computation

    Science.gov (United States)

    Kanta, L.; Berglund, E. Z.

    2015-12-01

    Urban water supply systems may be managed through supply-side and demand-side strategies, which focus on water source expansion and demand reductions, respectively. Supply-side strategies bear infrastructure and energy costs, while demand-side strategies bear costs of implementation and inconvenience to consumers. To evaluate the performance of demand-side strategies, the participation and water use adaptations of consumers should be simulated. In this study, a Complex Adaptive Systems (CAS) framework is developed to simulate consumer agents that change their consumption to affect the withdrawal from the water supply system, which, in turn influences operational policies and long-term resource planning. Agent-based models are encoded to represent consumers and a policy maker agent and are coupled with water resources system simulation models. The CAS framework is coupled with an evolutionary computation-based multi-objective methodology to explore tradeoffs in cost, inconvenience to consumers, and environmental impacts for both supply-side and demand-side strategies. Decisions are identified to specify storage levels in a reservoir that trigger (1) increases in the volume of water pumped through inter-basin transfers from an external reservoir and (2) drought stages, which restrict the volume of water that is allowed for residential outdoor uses. The proposed methodology is demonstrated for Arlington, Texas, water supply system to identify non-dominated strategies for an historic drought decade. Results demonstrate that pumping costs associated with maximizing environmental reliability exceed pumping costs associated with minimizing restrictions on consumer water use.
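
    The tradeoff exploration described above ultimately keeps only non-dominated strategies; the sketch below shows a minimal Pareto filter over made-up (cost, inconvenience, environmental impact) scores, standing in for the evolutionary search actually used in the study.

```python
# Minimal sketch of the non-dominated filtering that underlies the tradeoff
# analysis described above. The (cost, inconvenience, environmental impact)
# values are made up; the study itself uses an evolutionary search rather
# than exhaustive filtering.
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

strategies = {
    "large transfer, no restrictions": (9.0, 1.0, 7.0),
    "small transfer, stage-1 limits":  (5.0, 3.0, 4.0),
    "no transfer, stage-3 limits":     (2.0, 8.0, 2.0),
    "large transfer, stage-3 limits":  (9.5, 8.0, 6.0),   # dominated strategy
}
front = pareto_front(list(strategies.values()))
for name, objs in strategies.items():
    print(("kept " if objs in front else "drop ") + name, objs)
```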

  6. Underreporting on the MMPI-2-RF in a high-demand police officer selection context: an illustration.

    Science.gov (United States)

    Detrick, Paul; Chibnall, John T

    2014-09-01

    Positive response distortion is common in the high-demand context of employment selection. This study examined positive response distortion, in the form of underreporting, on the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF). Police officer job applicants completed the MMPI-2-RF under high-demand and low-demand conditions, once during the preemployment psychological evaluation and once without contingencies after completing the police academy. Demand-related score elevations were evident on the Uncommon Virtues (L-r) and Adjustment Validity (K-r) scales. Underreporting was evident on the Higher-Order scales Emotional/Internalizing Dysfunction and Behavioral/Externalizing Dysfunction; 5 of 9 Restructured Clinical scales; 6 of 9 Internalizing scales; 3 of 4 Externalizing scales; and 3 of 5 Personality Psychopathology 5 scales. Regression analyses indicated that L-r predicted demand-related underreporting on behavioral/externalizing scales, and K-r predicted underreporting on emotional/internalizing scales. Select scales of the MMPI-2-RF are differentially associated with different types of underreporting among police officer applicants. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  7. Effects of physical and mental task demands on cervical and upper limb muscle activity and physiological responses during computer tasks and recovery periods.

    Science.gov (United States)

    Wang, Yuling; Szeto, Grace P Y; Chan, Chetwyn C H

    2011-11-01

    The present study examined the effects of physical and mental workload during computer tasks on muscle activity and physiological measures. Activity in cervical postural muscles and distal forearm muscles, heart rate and blood pressure were compared among three tasks and rest periods of 15 min each in an experimental study design. Fourteen healthy pain-free adults participated (7 males, mean age = 23.2 ± 3.0 years) and the tasks were: (1) copy-typing ("typing"), (2) typing at progressively faster speed ("pacing"), (3) mental arithmetic plus fast typing ("subtraction"). Typing task was performed first, followed by the other two tasks in a random order. Median muscle activity (50th percentile) was examined in 5-min intervals during each task and each rest period, and statistically significant differences in the "time" factor (within task) and time × task factors was found in bilateral cervical erector spinae and upper trapezius muscles. In contrast, distal forearm muscle activity did not show any significant differences among three tasks. All muscles showed reduced activity to about the baseline level within first 5 min of the rest periods. Heart rate and blood pressure showed significant differences during tasks compared to baseline, and diastolic pressure was significantly higher in the subtraction than pacing task. The results suggest that cervical postural muscles had higher reactivity than forearm muscles to high mental workload tasks, and cervical muscles were also more reactive to tasks with high physical demand compared to high mental workload. Heart rate and blood pressure seemed to respond similarly to high physical and mental workloads.

  8. Lightweight Provenance Service for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Dong; Chen, Yong; Carns, Philip; Jenkins, John; Ross, Robert

    2017-09-09

    Provenance describes detailed information about the history of a piece of data, containing the relationships among elements such as users, processes, jobs, and workflows that contribute to the existence of data. Provenance is key to supporting many data management functionalities that are increasingly important in operations such as identifying data sources, parameters, or assumptions behind a given result; auditing data usage; or understanding details about how inputs are transformed into outputs. Despite its importance, however, provenance support is largely underdeveloped in highly parallel architectures and systems. One major challenge is the demanding requirements of providing provenance service in situ. The need to remain lightweight and to be always on often conflicts with the need to be transparent and offer an accurate catalog of details regarding the applications and systems. To tackle this challenge, we introduce a lightweight provenance service, called LPS, for high-performance computing (HPC) systems. LPS leverages a kernel instrument mechanism to achieve transparency and introduces representative execution and flexible granularity to capture comprehensive provenance with controllable overhead. Extensive evaluations and use cases have confirmed its efficiency and usability. We believe that LPS can be integrated into current and future HPC systems to support a variety of data management needs.

  9. Status and Prospects of Supply/Demand for Polyethylene Made by High-Pressure Polymerization

    Institute of Scientific and Technical Information of China (English)

    Xu Qing

    2000-01-01

    This paper analyzes the market position of LDPE inside and outside China, including domestic and overseas market demand and production capacities. The economic indicators of the LDPE and LLDPE production processes, including technical and economic indicators, production cost, price trends and the properties of the PE products, have been compared. The results show that the world supply of LDPE tends to be geared to demand, and that for a long period of time to come LLDPE and LDPE will continue to play their roles in their respective domains, co-existing on the market and complementing each other.

  10. NINJA: Java for High Performance Numerical Computing

    Directory of Open Access Journals (Sweden)

    José E. Moreira

    2002-01-01

    Full Text Available When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java can now be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.

  11. Efficient High Performance Computing on Heterogeneous Platforms

    NARCIS (Netherlands)

    Shen, J.

    2015-01-01

    Heterogeneous platforms are mixes of different processing units in a compute node (e.g., CPUs+GPUs, CPU+MICs) or a chip package (e.g., APUs). This type of platforms keeps gaining popularity in various computer systems ranging from supercomputers to mobile devices. In this context, improving their

  13. Software Synthesis for High Productivity Exascale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bodik, Rastislav [Univ. of Washington, Seattle, WA (United States)

    2010-09-01

    Over the three years of our project, we accomplished three key milestones. First, we demonstrated how ideas from generative programming and software synthesis can help support the development of bulk-synchronous distributed memory kernels. These ideas are realized in a new language called MSL, a C-like language that combines synthesis features with high-level notations for array manipulation and bulk-synchronous parallelism to simplify the semantic analysis required for synthesis. We also demonstrated that these high-level notations map easily to low-level C code and showed that the performance of this generated code matches that of handwritten Fortran. Second, we introduced the idea of solver-aided domain-specific languages (SDSLs), an emerging class of computer-aided programming systems. SDSLs ease the construction of programs by automating tasks such as verification, debugging, synthesis, and non-deterministic execution. SDSLs are implemented by translating the DSL program into logical constraints. We then developed a symbolic virtual machine called Rosette, which simplifies the construction of such SDSLs and their compilers. We have used Rosette to build SynthCL, a subset of OpenCL that supports synthesis. Third, we developed novel numeric algorithms that move as little data as possible, either between levels of a memory hierarchy or between parallel processors over a network. We achieved progress in three aspects of this problem. First, we determined lower bounds on communication. Second, we compared these lower bounds to widely used versions of these algorithms and noted that these widely used algorithms usually communicate asymptotically more than is necessary. Third, we identified or invented new algorithms for most linear algebra problems that do attain these lower bounds, and demonstrated large speed-ups in theory and practice.

  14. Do high job demands increase intrinsic motivation or fatigue or both? The role of job control and job social support

    NARCIS (Netherlands)

    Van Yperen, N.W.; Hagedoorn, M.

    2003-01-01

    Examined whether job control and job social support reduce signs of fatigue and enhance intrinsic motivation among employees facing high job demands. 555 nurses (mean age 35.5 yrs) working at specialized units for patients with different levels of mental deficiency completed surveys regarding: (1) j

  15. When Actions Speak Too Much Louder than Words: Hand Gestures Disrupt Word Learning when Phonetic Demands Are High

    Science.gov (United States)

    Kelly, Spencer D.; Lee, Angela L.

    2012-01-01

    It is now widely accepted that hand gestures help people understand and learn language. Here, we provide an exception to this general rule: when phonetic demands are high, gesture actually hurts. Native English-speaking adults were instructed on the meaning of novel Japanese word pairs that were phonetically hard for non-native speakers (/ite/ vs.…

  17. Job Resources Boost Work Engagement, Particularly when Job Demands Are High

    Science.gov (United States)

    Bakker, Arnold B.; Hakanen, Jari J.; Demerouti, Evangelia; Xanthopoulou, Despoina

    2007-01-01

    This study of 805 Finnish teachers working in elementary, secondary, and vocational schools tested 2 interaction hypotheses. On the basis of the job demands-resources model, the authors predicted that job resources act as buffers and diminish the negative relationship between pupil misbehavior and work engagement. In addition, using conservation…

  19. Technology Push, Demand Pull And The Shaping Of Technological Paradigms - Patterns In The Development Of Computing Technology

    NARCIS (Netherlands)

    J.C.M. van den Ende (Jan); W.A. Dolfsma (Wilfred)

    2002-01-01

    An assumption generally subscribed to in evolutionary economics is that new technological paradigms arise from advances in science and developments in technological knowledge. Demand only influences the selection among competing paradigms, and the course of the paradigm after its inception. In

  20. High-performance Scientific Computing using Parallel Computing to Improve Performance Optimization Problems

    Directory of Open Access Journals (Sweden)

    Florica Novăcescu

    2011-10-01

    Full Text Available HPC (High Performance Computing) has become essential for the acceleration of innovation and for assisting companies in creating new inventions, better models and more reliable products, as well as obtaining processes and services at low cost. This paper focuses in particular on describing the fields of high-performance scientific computing, parallel computing and parallel computers, and on trends in the HPC field; the material presented reveals important new directions toward the realization of a high-performance computational society. The practical part of the work is an example of using HPC to accelerate the solution of an electrostatic optimization problem with the Parallel Computing Toolbox, which allows computationally and data-intensive problems to be solved using MATLAB and Simulink on multicore and multiprocessor computers.
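    The paper's practical example relies on MATLAB's Parallel Computing Toolbox; the sketch below shows the same pattern as a rough Python analogue: many candidate solutions of an optimization problem are evaluated in parallel across cores and the best one is kept. The objective function and search ranges are illustrative assumptions, not taken from the paper.

    ```python
    # Rough Python analogue of a parallel parameter sweep for an optimization
    # problem. The objective is a stand-in for an expensive simulation
    # (e.g., an electrostatic solve); its form is purely illustrative.
    import multiprocessing as mp
    import random

    def objective(params):
        x, y = params
        return (x - 1.2) ** 2 + (y + 0.7) ** 2 + 0.1 * (x * y) ** 2

    def random_candidate(_):
        # Each worker draws a random candidate and evaluates it.
        x = random.uniform(-5, 5)
        y = random.uniform(-5, 5)
        return objective((x, y)), (x, y)

    if __name__ == "__main__":
        with mp.Pool() as pool:                      # one worker per core
            results = pool.map(random_candidate, range(10_000))
        best_value, best_params = min(results)       # keep the best candidate
        print(f"best objective {best_value:.4f} at {best_params}")
    ```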

  1. Computational Thinking and Practice - A Generic Approach to Computing in Danish High Schools

    DEFF Research Database (Denmark)

    Caspersen, Michael E.; Nowack, Palle

    2014-01-01

    Internationally, there is a growing awareness on the necessity of providing relevant computing education in schools, particularly high schools. We present a new and generic approach to Computing in Danish High Schools based on a conceptual framework derived from ideas related to computational thi...

  2. High Fidelity Adiabatic Quantum Computation via Dynamical Decoupling

    CERN Document Server

    Quiroz, Gregory

    2012-01-01

    We introduce high-order dynamical decoupling strategies for open system adiabatic quantum computation. Our numerical results demonstrate that a judicious choice of high-order dynamical decoupling method, in conjunction with an encoding which allows computation to proceed alongside decoupling, can dramatically enhance the fidelity of adiabatic quantum computation in spite of decoherence.

  3. Do traditional male role norms modify the association between high emotional demands in work, and sickness absence?

    DEFF Research Database (Denmark)

    Labriola, Merete; Hansen, Claus D.; Lund, Thomas

    2011-01-01

    Objectives Ambulance workers are exposed to high levels of emotional demands, which could affect sickness absence. Being a male dominated occupation, it is hypothesised that ambulance workers adhere to more traditional male role norms than men in other occupations. The aim is to investigate if adherence to traditional male role norms modifies the effect of emotional demands on sickness absence/presenteeism. Methods Data derive from MARS (Men, accidents, risk and safety), a two-wave panel study of ambulance workers and fire fighters in Denmark (n = 2585). Information was collected from… …of emotional demands on mental health varies according to adherence to traditional male role norms. The presentation will furthermore include results from prospective analyses on not-yet collected follow-up data on absenteeism taken from a national register.

  4. A high turndown, ultra low emission low swirl burner for natural gas, on-demand water heaters

    Energy Technology Data Exchange (ETDEWEB)

    Rapp, Vi H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cheng, Robert K. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Therkelsen, Peter L. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-06-13

    Previous research has shown that on-demand water heaters are, on average, approximately 37% more efficient than storage water heaters. However, approximately 98% of water heaters in the U.S. use storage water heaters while the remaining 2% are on-demand. A major market barrier to deployment of on-demand water heaters is their high retail cost, which is due in part to their reliance on multi-stage burner banks that require complex electronic controls. This project aims to research and develop a cost-effective, efficient, ultra-low emission burner for next generation natural gas on-demand water heaters in residential and commercial buildings. To meet these requirements, researchers at the Lawrence Berkeley National Laboratory (LBNL) are adapting and testing the low-swirl burner (LSB) technology for commercially available on-demand water heaters. In this report, a low-swirl burner is researched, developed, and evaluated to meet targeted on-demand water heater performance metrics. Performance metrics for a new LSB design are identified by characterizing performance of current on-demand water heaters using published literature and technical specifications, and through experimental evaluations that measure fuel consumption and emissions output over a range of operating conditions. Next, target metrics and design criteria for the LSB are used to create six 3D printed prototypes for preliminary investigations. Prototype designs that proved the most promising were fabricated out of metal and tested further to evaluate the LSB’s full performance potential. After conducting a full performance evaluation on two designs, we found that one LSB design is capable of meeting or exceeding almost all the target performance metrics for on-demand water heaters. Specifically, this LSB demonstrated flame stability when operating from 4.07 kBTU/hr up to 204 kBTU/hr (50:1 turndown), compliance with SCAQMD Rule 1146.2 (14 ng/J or 20 ppm NOX @ 3% O2), and lower CO emissions than state

  5. Demand Uncertainty

    DEFF Research Database (Denmark)

    Nguyen, Daniel Xuyen

    This paper presents a model of trade that explains why firms wait to export and why many exporters fail. Firms face uncertain demands that are only realized after the firm enters the destination. The model retools the timing of uncertainty resolution found in productivity heterogeneity models… the high rate of exit seen in the first years of exporting. Finally, when faced with multiple countries in which to export, some firms will choose to export sequentially in order to slowly learn more about their chances for success in untested markets.

  6. Demand-based urban forest planning using high-resolution remote sensing and AHP

    Science.gov (United States)

    Kolanuvada, Srinivasa Raju; Mariappan, Muneeswaran; Krishnan, Vani

    2016-05-01

    Urban forest planning is important for providing better urban ecosystem services and conserving the natural carbon sinks inside the urban area. In this study, a demand-based urban forest plan was developed for Chennai city using the Analytical Hierarchy Process (AHP) method. Population density, tree cover, air quality index and carbon stocks were the parameters considered in this study. Tree cover and Above Ground Biomass (AGB) layers were prepared at a resolution of 1 m from airborne LiDAR and aerial photos. Ranks and weights were assigned to reflect spatial priority using AHP. The results show that the actual status of the urban forest is not adequate to provide ecosystem services according to spatial priority. From this perspective, we prepared a demand-based plan for improving the urban ecosystem.
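    For readers unfamiliar with AHP, the sketch below shows the standard weight-derivation step: a reciprocal pairwise comparison matrix over the four criteria named above is reduced to a priority vector via its principal eigenvector, with a consistency check. The pairwise judgments are hypothetical placeholders, since the study's actual comparison values are not given in the abstract.

    ```python
    import numpy as np

    # Hypothetical pairwise comparison matrix (Saaty 1-9 scale, reciprocal) for
    # the four criteria: population density, tree cover, air quality index,
    # carbon stocks. The values are illustrative only.
    A = np.array([
        [1.0, 3.0, 2.0, 4.0],
        [1/3, 1.0, 1/2, 2.0],
        [1/2, 2.0, 1.0, 3.0],
        [1/4, 1/2, 1/3, 1.0],
    ])

    # AHP weights = normalised principal right eigenvector of the matrix.
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    weights = w / w.sum()

    # Consistency check: CI = (lambda_max - n) / (n - 1), CR = CI / RI.
    n = A.shape[0]
    ci = (eigvals.real[k] - n) / (n - 1)
    ri = 0.90                     # random consistency index for n = 4 (Saaty)
    cr = ci / ri

    print("criterion weights:", np.round(weights, 3))
    print("consistency ratio:", round(cr, 3))   # < 0.1 is conventionally acceptable
    ```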

  7. Proceedings CSR 2010 Workshop on High Productivity Computations

    CERN Document Server

    Ablayev, Farid; Vasiliev, Alexander; 10.4204/EPTCS.52

    2011-01-01

    This volume contains the proceedings of the Workshop on High Productivity Computations (HPC 2010) which took place on June 21-22 in Kazan, Russia. This workshop was held as a satellite workshop of the 5th International Computer Science Symposium in Russia (CSR 2010). HPC 2010 was intended to organize the discussions about high productivity computing means and models, including but not limited to high performance and quantum information processing.

  8. HIGH PERFORMANCE PHOTOGRAMMETRIC PROCESSING ON COMPUTER CLUSTERS

    Directory of Open Access Journals (Sweden)

    V. N. Adrov

    2012-07-01

    Full Text Available Most CPU-consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent bits as input and produce independent bits as output. The independence of bits comes from the nature of such algorithms, since images, stereopairs or small image blocks can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human interference. Photogrammetric workstations can perform tie point measurements, DTM calculations, orthophoto construction, mosaicing and many other service operations in parallel using distributed calculations. Distributed calculations save time, reducing several days of calculation to several hours. Modern trends in computer technology show an increase in the number of CPU cores in workstations, speed increases in local networks, and, as a result, a drop in the price of supercomputers or computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in a DPW is usually targeted at interactive work with a limited number of CPU cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be the limited LAN throughput and storage performance, since the processing of huge amounts of large raster images is needed.

  9. High Demand, Core Geosciences, and Meeting the Challenges through Online Approaches

    Science.gov (United States)

    Keane, Christopher; Leahy, P. Patrick; Houlton, Heather; Wilson, Carolyn

    2014-05-01

    As the geosciences have evolved over the last several decades, so too has undergraduate geoscience education, both in curriculum and in educational experience. In the United States, we have been experiencing very strong growth in geoscience enrollments, as well as in employment demand, for the last 7 years. That growth has been largely fueled by all aspects of the energy boom in the US, both on the energy production side and on the environmental management side. Interestingly, the portfolio of experiences and knowledge required is strongly congruent, as evidenced by results of the American Geosciences Institute's National Geoscience Exit Survey. Likewise, the demand for new geoscientists in the US is outstripping even the nearly unprecedented growth in enrollments and degrees, which draws attention to the geosciences' inability to effectively reach the fastest-growing segments of the U.S. college population - underrepresented minorities. We will also examine the results of the AGI Survey on Geoscience Online Learning and examine how the results of that survey can be reconciled with Peter Smith's "Middle Third" theory on "wasted talent" caused by spatial, economic, and social dislocation. In particular, the geosciences are late to the online learning game in the United States, and most faculty engaged in such activities are "lone wolves" in their department, operating with little knowledge of the support structures that exist for such development. The most cited barrier to faculty engaging actively in online learning is the assertion that laboratory and field experiences will be lost, and they therefore resist this medium. However, the survey shows that faculty are discovering novel approaches to address these issues, many of which have great application in enabling geoscience programs in the United States to meet the expanding demand for geoscience degrees.

  10. Exploring Tradeoffs in Demand-Side and Supply-Side Management of Urban Water Resources Using Agent-Based Modeling and Evolutionary Computation

    Directory of Open Access Journals (Sweden)

    Lufthansa Kanta

    2015-11-01

    Full Text Available Urban water supply systems may be managed through supply-side and demand-side strategies, which focus on water source expansion and demand reductions, respectively. Supply-side strategies bear infrastructure and energy costs, while demand-side strategies bear costs of implementation and inconvenience to consumers. To evaluate the performance of demand-side strategies, the participation and water use adaptations of consumers should be simulated. In this study, a Complex Adaptive Systems (CAS) framework is developed to simulate consumer agents that change their consumption to affect the withdrawal from the water supply system, which, in turn, influences operational policies and long-term resource planning. Agent-based models are encoded to represent consumers and a policy maker agent and are coupled with water resources system simulation models. The CAS framework is coupled with an evolutionary computation-based multi-objective methodology to explore tradeoffs in cost, inconvenience to consumers, and environmental impacts for both supply-side and demand-side strategies. Decisions are identified to specify storage levels in a reservoir that trigger: (1) increases in the volume of water pumped through inter-basin transfers from an external reservoir; and (2) drought stages, which restrict the volume of water that is allowed for residential outdoor uses. The proposed methodology is demonstrated for the Arlington, Texas, water supply system to identify non-dominated strategies for an historic drought decade. Results demonstrate that pumping costs associated with maximizing environmental reliability exceed pumping costs associated with minimizing restrictions on consumer water use.
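    The coupling described above, in which consumer agents adapt their outdoor use when reservoir storage triggers drought stages, can be illustrated with a very small agent-based sketch. Everything below (capacity, trigger levels, per-agent demands, compliance, the toy hydrology) is a hypothetical placeholder rather than the study's calibrated model.

    ```python
    import random

    RESERVOIR_CAPACITY = 5_000                     # acre-ft, hypothetical
    DROUGHT_TRIGGERS = [(0.4, 0.5), (0.2, 0.8)]    # (storage fraction, outdoor-use cut)

    class Consumer:
        def __init__(self):
            self.indoor = random.uniform(0.2, 0.4)      # acre-ft/yr, hypothetical
            self.outdoor = random.uniform(0.1, 0.3)
            self.compliance = random.uniform(0.6, 1.0)  # willingness to comply

        def demand(self, cut):
            # Outdoor use is reduced by the mandated cut, scaled by compliance.
            return self.indoor + self.outdoor * (1 - cut * self.compliance)

    def drought_cut(storage):
        frac = storage / RESERVOIR_CAPACITY
        cut = 0.0
        for trigger, level in DROUGHT_TRIGGERS:   # deeper stages override lighter ones
            if frac <= trigger:
                cut = level
        return cut

    def simulate(years=15, n_consumers=1000, seed=1):
        random.seed(seed)
        consumers = [Consumer() for _ in range(n_consumers)]
        storage = 0.7 * RESERVOIR_CAPACITY
        for year in range(years):
            inflow = random.uniform(150, 450)     # toy hydrology
            cut = drought_cut(storage)            # policy-maker agent's decision
            withdrawal = sum(c.demand(cut) for c in consumers)
            storage = min(RESERVOIR_CAPACITY, max(0.0, storage + inflow - withdrawal))
            print(f"year {year:2d}: restriction={cut:.0%}, "
                  f"withdrawal={withdrawal:6.0f}, storage={storage:6.0f}")

    if __name__ == "__main__":
        simulate()
    ```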

  11. High-Performance Cloud Computing: A View of Scientific Applications

    CERN Document Server

    Vecchiola, Christian; Buyya, Rajkumar

    2009-01-01

    Scientific computing often requires the availability of a massive number of computers for performing large-scale experiments. Traditionally, these needs have been addressed by using high-performance computing solutions and installed facilities such as clusters and supercomputers, which are difficult to set up, maintain, and operate. Cloud computing provides scientists with a completely new model of utilizing the computing infrastructure. Compute resources, storage resources, as well as applications, can be dynamically provisioned (and integrated within the existing infrastructure) on a pay-per-use basis. These resources can be released when they are no longer needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensures the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers to users the desired QoS. Its flexible and service based infrastructure...

  12. Energy Demand

    NARCIS (Netherlands)

    Stehfest, E. et al.

    2014-01-01

    Key policy issues – How will energy demand evolve particularly in emerging and medium- and low- income economies? – What is the mix of end-use energy carriers to meet future energy demand? – How can energy efficiency contribute to reducing the growth rate of energy demand and mitigate pressures on t

  13. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  14. From the Web to the Grid and beyond computing paradigms driven by high-energy physics

    CERN Document Server

    Carminati, Federico; Galli-Carminati, Giuliana

    2012-01-01

    Born after World War II, large-scale experimental high-energy physics (HEP) has found itself limited ever since by available accelerator, detector and computing technologies. Accordingly, HEP has made significant contributions to the development of these fields, more often than not driving their innovations. The invention of the World Wide Web at CERN is merely the best-known example out of many. This book is the first comprehensive account to trace the history of this pioneering spirit in the field of computing technologies. It covers everything up to and including the present-day handling of the huge demands imposed upon grid and distributed computing by full-scale LHC operations - operations which have for years involved many thousands of collaborating members worldwide and accordingly provide the original and natural testbed for grid computing concepts. This book takes the reader on a guided tour encompassing all relevant topics, including programming languages, software engineering, large databases, the ...

  15. Intro - High Performance Computing for 2015 HPC Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Klitsner, Tom [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    The recent Executive Order creating the National Strategic Computing Initiative (NSCI) recognizes the value of high performance computing for economic competitiveness and scientific discovery and commits to accelerate delivery of exascale computing. The HPC programs at Sandia –the NNSA ASC program and Sandia’s Institutional HPC Program– are focused on ensuring that Sandia has the resources necessary to deliver computation in the national interest.

  16. Condor-COPASI: high-throughput computing for biochemical networks

    OpenAIRE

    Kent Edward; Hoops Stefan; Mendes Pedro

    2012-01-01

    Abstract Background Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary experti...

  17. The Demand Side Management Potential to Balance a Highly Renewable European Power System

    Directory of Open Access Journals (Sweden)

    Alexander Kies

    2016-11-01

    Full Text Available Shares of renewables continue to grow in the European power system. A fully renewable European power system will primarily depend on the renewable power sources of wind and photovoltaics (PV), which are not dispatchable but intermittent and therefore pose a challenge to the balancing of the power system. To overcome this issue, several solutions have been proposed and investigated in the past, including storage, backup power, reinforcement of the transmission grid, and demand side management (DSM). In this paper, we investigate the potential of DSM to balance a simplified, fully renewable European power system. For this purpose, we use ten years of weather and historical load data, a power-flow model and the implementation of demand side management as a storage equivalent, to investigate the impact of DSM on the need for backup energy. We show that DSM has the potential to reduce the need for backup energy in Europe by up to one third and can cover the need for backup up to a renewable share of 67%. Finally, it is demonstrated that the optimal mix of wind and PV is shifted by the utilisation of DSM towards a higher share of PV, from 19% to 36%.
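    The core modelling device named above, treating demand side management as a storage equivalent that absorbs renewable surpluses and serves later deficits, can be sketched in a few lines. The time series, buffer size and units below are hypothetical stand-ins; the study itself uses ten years of European weather and load data together with a power-flow model.

    ```python
    import numpy as np

    # Toy, normalised time series standing in for load and wind+PV infeed.
    rng = np.random.default_rng(0)
    hours = 24 * 14
    load = 1.0 + 0.2 * np.sin(np.arange(hours) * 2 * np.pi / 24)
    renewables = np.clip(rng.normal(1.0, 0.5, hours), 0, None)

    def backup_energy(load, gen, dsm_capacity=0.0):
        """Backup energy needed when a DSM 'storage' of the given capacity
        (in units of average hourly load) absorbs surpluses and serves deficits."""
        buffer = 0.0
        backup = 0.0
        for l, g in zip(load, gen):
            residual = l - g
            if residual > 0:                       # deficit: draw from DSM buffer first
                from_dsm = min(buffer, residual)
                buffer -= from_dsm
                backup += residual - from_dsm
            else:                                  # surplus: charge DSM buffer
                buffer = min(dsm_capacity, buffer - residual)
        return backup

    print("backup without DSM:", round(backup_energy(load, renewables), 1))
    print("backup with DSM   :", round(backup_energy(load, renewables, dsm_capacity=2.0), 1))
    ```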

  18. Progress and Challenges in High Performance Computer Technology

    Institute of Scientific and Technical Information of China (English)

    Xue-Jun Yang; Yong Dou; Qing-Feng Hu

    2006-01-01

    High performance computers provide strategic computing power in the construction of the national economy and defense, and have become one of the symbols of a country's overall strength. Over 30 years, with government support, high performance computer technology has developed rapidly: computing performance has increased nearly three million times and the number of processors has expanded more than a million-fold. To solve the critical issues related to parallel efficiency and scalability, scientific researchers pursued extensive theoretical studies and technical innovations. The paper briefly looks back at the course of building high performance computer systems both at home and abroad, and summarizes the significant breakthroughs of international high performance computer technology. We also review the technological progress of China in the areas of parallel computer architecture, parallel operating systems and resource management, parallel compilers and performance optimization, and environments for parallel programming and network computing. Finally, we examine the challenging issues, the "memory wall", system scalability and the "power wall", and discuss the issues of high productivity computers, which are the trend in building next generation high performance computers.

  19. Architectural Considerations for Highly Scalable Computing to Support On-demand Video Analytics

    Science.gov (United States)

    2017-04-19

    ...be passed on to multiple operating systems of choice (Windows or Linux) in a uniform fashion. This helps in running analytics on multiple OSs. ...common share between Windows and Linux nodes. Another implementation detail is that the algorithm processing part of an analytics must run to... shared storage was made available to Windows nodes as the \\sigmafs\data CIFS share and to Linux nodes as the /sigmafs/data NFS mount point.

  20. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high performance computing are analyzed by bibliometric approaches. This study aims at providing computational physicists utilizing high-performance computing, as well as policy planners, with useful bibliometric results for an assessment of research activities. In order to achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles from the Scopus database from Elsevier covering the time period 2004-2013. We extracted the author rank in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.

  1. Computational Thinking and Practice - A Generic Approach to Computing in Danish High Schools

    DEFF Research Database (Denmark)

    Caspersen, Michael E.; Nowack, Palle

    2014-01-01

    Internationally, there is a growing awareness of the necessity of providing relevant computing education in schools, particularly high schools. We present a new and generic approach to Computing in Danish High Schools based on a conceptual framework derived from ideas related to computational thinking. We present two main theses on which the subject is based, and we present the included knowledge areas and didactical design principles. Finally we summarize the status and future plans for the subject and related development projects.

  2. Proteomic analysis of Ketogulonicigenium vulgare under glutathione reveals high demand for thiamin transport and antioxidant protection.

    Directory of Open Access Journals (Sweden)

    Qian Ma

    Full Text Available Ketogulonicigenium vulgare, though it grows poorly when mono-cultured, has been widely used in the industrial production of the precursor of vitamin C in co-culture with Bacillus megaterium. Various efforts have been made to clarify the synergic pattern of this artificial microbial community and to improve the growth and production ability of K. vulgare, but there is still no sound explanation. In previous research, we found that the addition of reduced glutathione to a K. vulgare monoculture could significantly improve its growth and productivity. By performing SEM and TEM, we observed that after adding GSH to the K. vulgare monoculture, cells became about 4-6 fold elongated and formed intracytoplasmic membranes (ICM). To explore the molecular mechanism and provide insights into the investigation of the synergic pattern of the co-culture system, we conducted a comparative iTRAQ-2-D-LC-MS/MS-based proteomic analysis of K. vulgare grown under reduced glutathione. Principal component analysis of the proteomic data showed that after the addition of glutathione, proteins for thiamin/thiamin pyrophosphate (TPP) transport, glutathione transport and the maintenance of membrane integrity, together with several membrane-bound dehydrogenases, were significantly up-regulated. Besides, several proteins participating in the pentose phosphate pathway and the tricarboxylic acid cycle were also up-regulated. Additionally, proteins combating intracellular reactive oxygen species were up-regulated, which, as found in our former research, similarly occurred in K. vulgare when the co-cultured B. megaterium cells lysed. This study reveals the demand for transmembrane transport of substrates, especially thiamin, and the demand for antioxidant protection of K. vulgare.

  3. Software Systems for High-performance Quantum Computing

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; Britt, Keith A [ORNL

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for quantum programming and execution, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  4. High Performance Computing in Science and Engineering '08 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2009-01-01

    The discussions and plans on all scientific, advisory, and political levels to realize an even larger “European Supercomputer” in Germany, where the hardware costs alone will be hundreds of millions of Euro – much more than in the past – are getting closer to realization. As part of the strategy, the three national supercomputing centres HLRS (Stuttgart), NIC/JSC (Jülich) and LRZ (Munich) have formed the Gauss Centre for Supercomputing (GCS) as a new virtual organization enabled by an agreement between the Federal Ministry of Education and Research (BMBF) and the state ministries for research of Baden-Württemberg, Bayern, and Nordrhein-Westfalen. Already today, the GCS provides the most powerful high-performance computing infrastructure in Europe. Through GCS, HLRS participates in the European project PRACE (Partnership for Advanced Computing in Europe) and extends its reach to all European member countries. These activities align well with the activities of HLRS in the European HPC infrastructur...

  5. Analog computing

    CERN Document Server

    Ulmann, Bernd

    2013-01-01

    This book is a comprehensive introduction to analog computing. As most textbooks about this powerful computing paradigm date back to the 1960s and 1970s, it fills a void and forges a bridge from the early days of analog computing to future applications. The idea of analog computing is not new. In fact, this computing paradigm is nearly forgotten, although it offers a path to both high-speed and low-power computing, which are in even more demand now than they were back in the heyday of electronic analog computers.

  6. High Performance Spaceflight Computing (HPSC) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In 2012, the NASA Game Changing Development Program (GCDP), residing in the NASA Space Technology Mission Directorate (STMD), commissioned a High Performance...

  7. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community

    OpenAIRE

    Krampis Konstantinos; Booth Tim; Chapman Brad; Tiwari Bela; Bicak Mesude; Field Dawn; Nelson Karen E

    2012-01-01

    Abstract Background A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the ...

  8. High Available COTS Based Computer for Space

    Science.gov (United States)

    Hartmann, J.; Magistrati, Giorgio

    2015-09-01

    The availability and reliability factors of a system are central requirements of a target application. From a simple fuel injection system used in cars up to a flight control system of an autonomous navigating spacecraft, each application defines its specific availability factor under the target application boundary conditions. Increasing quality requirements on data processing systems used in space flight applications call for new architectures to fulfill the availability and reliability requirements as well as the increase in the required data processing power. In contrast to the increased quality requirements, simplification and the use of COTS components to decrease costs, while keeping interface compatibility with currently used system standards, are clear customer needs. Data processing system design is mostly dominated by strict fulfillment of the customer requirements, and reuse of available computer systems was not always possible because of obsolescence of EEE parts, insufficient IO capabilities, or the fact that available data processing systems did not provide the required scalability and performance.

  9. CRPC research into linear algebra software for high performance computers

    Energy Technology Data Exchange (ETDEWEB)

    Choi, J.; Walker, D.W. [Oak Ridge National Lab., TN (United States). Mathematical Sciences Section; Dongarra, J.J. [Univ. of Tennessee, Knoxville, TN (United States). Dept. of Computer Science]|[Oak Ridge National Lab., TN (United States). Mathematical Sciences Section; Pozo, R. [Univ. of Tennessee, Knoxville, TN (United States). Dept. of Computer Science; Sorensen, D.C. [Rice Univ., Houston, TX (United States). Dept. of Computational and Applied Mathematics

    1994-12-31

    In this paper the authors look at a number of approaches being investigated in the Center for Research on Parallel Computation (CRPC) to develop linear algebra software for high-performance computers. These approaches are exemplified by the LAPACK, templates, and ARPACK projects. LAPACK is a software library for performing dense and banded linear algebra computations, and was designed to run efficiently on high-performance computers. The authors focus on the design of the distributed-memory version of LAPACK, and on an object-oriented interface to LAPACK.

  10. Benchmarking: More Aspects of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Ravindrudu, Rahul [Iowa State Univ., Ames, IA (United States)

    2004-01-01

    ...pattern for the left-looking factorization. The right-looking algorithm performs better for in-core data, but the left-looking will perform better for out-of-core data due to the reduced I/O operations. Hence the conclusion that out-of-core algorithms will perform better when designed as such from the start. The out-of-core and thread-based computations do not interact in this case, since I/O is not done by the threads. The performance of the thread-based computation does not depend on I/O, as the computations are BLAS routines, which assume all data to be in memory. This is the reason the out-of-core results and the OpenMP thread results were presented separately and no attempt to combine them was made. In general, the modified HPL performs better with larger block sizes, due to less I/O for the out-of-core part and better cache utilization for the thread-based computation.
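    As a point of reference for readers unfamiliar with the out-of-core idea discussed above, the sketch below processes a disk-backed matrix one column block at a time, so that only a bounded amount of data is resident in memory and I/O volume, not arithmetic, dominates the design. It is only an illustration of blocked, disk-backed processing, not the modified HPL code from this work; sizes and the computed quantity are arbitrary.

    ```python
    import os
    import tempfile
    import numpy as np

    n, block = 2048, 256
    path = os.path.join(tempfile.mkdtemp(), "matrix.dat")

    # Create a disk-backed matrix (about 32 MB of float64 data).
    A = np.memmap(path, dtype=np.float64, mode="w+", shape=(n, n))
    A[:] = np.random.default_rng(0).standard_normal((n, n))
    A.flush()

    # Process the matrix block by block: only n * block elements are brought
    # into memory at a time, which is what allows problems larger than RAM.
    col_norms = np.empty(n)
    for j in range(0, n, block):
        cols = np.array(A[:, j:j + block])    # one I/O-sized read into memory
        col_norms[j:j + block] = np.linalg.norm(cols, axis=0)

    print("largest column norm:", col_norms.max())
    ```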

  11. High Job Demands and Low Job Control Increase Nurses' Professional Leaving Intentions: The Role of Care Setting and Profit Orientation.

    Science.gov (United States)

    Wendsche, Johannes; Hacker, Winfried; Wegge, Jürgen; Rudolf, Matthias

    2016-10-01

    We investigated how two types of care setting (home care and nursing home) and type of ownership (for-profit vs. public/non-profit) of geriatric care services interacted in influencing registered nurses' intention to give up their profession. In prior research, employment in for-profit-organizations, high job demands, and low job control were important antecedents of nurses' intent to leave. However, the impact of care setting on these associations was inconclusive. Therefore, we tested a mediated moderation model predicting that adverse work characteristics would drive professional leaving intentions, particularly in for-profit services and in nursing homes. A representative German sample of 304 registered nurses working in 78 different teams participated in our cross-sectional study. As predicted, lower job control and higher job demands were associated with higher professional leaving intentions, and nurses reported higher job demands in public/non-profit care than in for-profit care, and in nursing homes compared to home care. Overall, RNs in nursing homes and home care reported similar intent to leave, but in for-profit settings only, nurses working in nursing homes reported higher professional leaving intentions than did nurses in home care, which was linked to lower job control in the for-profit nursing home setting, supporting mediated moderation. Taken together, our results indicate that the interplay of care setting and type of ownership is important when explaining nurses' intentions to give up their profession. © 2016 Wiley Periodicals, Inc.

  12. Biomedical Requirements for High Productivity Computing Systems

    Science.gov (United States)

    2005-04-01

    ...Virtually all high-level programming is now done in Python, with numerically intensive operations performed by embedded C++ libraries. While Python is not currently used directly for numerically intensive work, it would be quite desirable if... The availability of a higher-performance Python solution is highly desirable, i.e. a Python compiler or a better JIT...

  13. A high performance scientific cloud computing environment for materials simulations

    CERN Document Server

    Jorissen, Kevin; Rehr, John J

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditi...

  14. Evaluation of resource allocation and supply-demand balance in clinical practice with high-cost technologies.

    Science.gov (United States)

    Otsubo, Tetsuya; Imanaka, Yuichi; Lee, Jason; Hayashida, Kenshi

    2011-12-01

    Japan has one of the highest numbers of high-cost medical devices installed relative to its population. While evaluations of the distribution of these devices traditionally involve simple population-based assessments, an indicator that includes the demand for these devices would more accurately reflect the situation. The purpose of this study was to develop an indicator of the supply-demand balance of such devices, using the examples of magnetic resonance imaging scanners (MRI) and extracorporeal shockwave lithotripters (ESWL), and to investigate the relationship between this indicator, personnel distribution statuses and operating statuses at the prefectural level. Using data from nationwide surveys and claims data from 16 hospitals, we developed an indicator based on the ratio of the supplied number of device units to the number of device units in demand for MRI and ESWL. The latter value was based on patient volume and utilization proportion. Correlation analyses were conducted between the supply-demand balances of these devices, personnel distribution and operating statuses. Comparisons between our indicator and conventional population-based indicators revealed that 15% and 30% of prefectures were at risk of underestimating the availability of MRI and ESWL, respectively. The number of specialist personnel per device unit showed significant negative correlations with our indicators for both devices. Utilization-based analyses of health care resource placement and utilization status provide a more accurate indication than simple population-based assessments, and can assist decision makers in reviewing gaps between health policy and management. Such an indicator therefore has the potential to be a tool in helping to improve the efficiency of the allocation and placement of such devices. © 2010 Blackwell Publishing Ltd.
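    The indicator described above is essentially a ratio of installed units to the number of units that observed demand would require. A minimal sketch follows; the throughput figure and the example prefecture values are purely hypothetical assumptions, not the study's data.

    ```python
    # Supply-demand balance indicator: installed units / units in demand,
    # where demand is derived from patient volume and utilization proportion.
    EXAMS_PER_UNIT_PER_YEAR = 4000   # assumed feasible throughput of one MRI unit

    def units_in_demand(patient_volume, utilization_proportion,
                        exams_per_unit=EXAMS_PER_UNIT_PER_YEAR):
        """Device units needed to serve the expected number of examinations."""
        expected_exams = patient_volume * utilization_proportion
        return expected_exams / exams_per_unit

    def supply_demand_balance(installed_units, patient_volume, utilization_proportion):
        """> 1 suggests over-supply relative to demand, < 1 suggests shortage."""
        return installed_units / units_in_demand(patient_volume, utilization_proportion)

    # Hypothetical prefecture: 12 installed MRI units, 300,000 relevant patients,
    # 10% of whom are expected to need an MRI examination in a year.
    print(round(supply_demand_balance(12, 300_000, 0.10), 2))
    ```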

  15. The Principals and Practice of Distributed High Throughput Computing

    CERN Document Server

    CERN. Geneva

    2016-01-01

    The potential of Distributed Processing Systems to deliver computing capabilities with qualities ranging from high availability and reliability to easy expansion in functionality and capacity was recognized and formalized in the 1970s. For more than three decades these principles of Distributed Computing have guided the development of the HTCondor resource and job management system. The widely adopted suite of software tools offered by HTCondor is based on novel distributed computing technologies and is driven by the evolving needs of High Throughput scientific applications. We will review the principles that underpin our work, the distributed computing frameworks and technologies we developed, and the lessons we learned from delivering effective and dependable software tools in an ever-changing landscape of computing technologies and needs that range today from a desktop computer to tens of thousands of cores offered by commercial clouds. About the speaker: Miron Livny received a B.Sc. degree in Physics and Mat...

  16. Computer-Aided Design of Drugs on Emerging Hybrid High Performance Computers

    Science.gov (United States)

    2013-09-01

    ..."Clustering using MapReduce," Workshop on Trends in High-Performance Distributed Computing, Vrije Universiteit, Amsterdam, NL (invited talk)... and middleware packages for polarizable force fields on multi-core and GPU systems, supported by the MapReduce paradigm (NSF MRI #0922657, $451,051)... "High-throughput Molecular Datasets for Scalable Clustering using MapReduce," Workshop on Trends in High-Performance Distributed Computing, Vrije...

  17. Comparing computer experiments for fitting high-order polynomial metamodels

    OpenAIRE

    Johnson, Rachel T.; Montgomery, Douglas C.; Jones, Bradley; Parker, Peter T.

    2010-01-01

    The use of simulation as a modeling and analysis tool is widespread. Simulation is an enabling tool for experimenting virtually in a validated computer environment. Often the underlying function for a computer experiment result has too much curvature to be adequately modeled by a low-order polynomial. In such cases, finding an appropriate experimental design is not easy. We evaluate several computer experiments assuming the modeler is interested in fitting a high-order polynomial to th...

  18. Nuclear Forces and High-Performance Computing: The Perfect Match

    Energy Technology Data Exchange (ETDEWEB)

    Luu, T; Walker-Loud, A

    2009-06-12

    High-performance computing is now enabling the calculation of certain nuclear interaction parameters directly from Quantum Chromodynamics, the quantum field theory that governs the behavior of quarks and gluons and is ultimately responsible for the nuclear strong force. We briefly describe the state of the field and describe how progress in this field will impact the greater nuclear physics community. We give estimates of computational requirements needed to obtain certain milestones and describe the scientific and computational challenges of this field.

  19. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  20. Low latency, high bandwidth data communications between compute nodes in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2010-11-02

    Methods, parallel computers, and computer program products are disclosed for low latency, high bandwidth data communications between compute nodes in a parallel computer. Embodiments include receiving, by an origin direct memory access (`DMA`) engine of an origin compute node, data for transfer to a target compute node; sending, by the origin DMA engine of the origin compute node to a target DMA engine on the target compute node, a request to send (`RTS`) message; transferring, by the origin DMA engine, a predetermined portion of the data to the target compute node using a memory FIFO operation; determining, by the origin DMA engine, whether an acknowledgement of the RTS message has been received from the target DMA engine; if an acknowledgement of the RTS message has not been received, transferring, by the origin DMA engine, another predetermined portion of the data to the target compute node using a memory FIFO operation; and if the acknowledgement of the RTS message has been received by the origin DMA engine, transferring, by the origin DMA engine, any remaining portion of the data to the target compute node using a direct put operation.
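    The control flow spelled out above (send an RTS, stream fixed-size portions through the memory FIFO until the acknowledgement arrives, then move the rest with a single direct put) can be sketched schematically. The sketch below only simulates that decision logic in Python; the chunk size, the simulated acknowledgement timing and the helper names are hypothetical, and the real mechanism is hardware DMA between compute nodes.

    ```python
    from collections import deque

    CHUNK = 4  # predetermined portion size (hypothetical)

    def origin_transfer(data, ack_after_chunks=3):
        """Simulate the origin engine; the target 'acknowledges' the RTS after a
        few FIFO chunks have been injected (a stand-in for network latency)."""
        fifo = deque()            # memory FIFO path (copied, chunked transfers)
        direct_put = None         # bulk transfer once the ACK has arrived
        sent = 0
        chunks_sent = 0
        ack_received = False

        # Send request-to-send, then keep the pipe busy while waiting for the ACK.
        print("origin -> target: RTS")
        while sent < len(data) and not ack_received:
            fifo.append(data[sent:sent + CHUNK])
            sent += CHUNK
            chunks_sent += 1
            ack_received = chunks_sent >= ack_after_chunks  # simulated ACK arrival

        # Any remaining data goes in a single direct put operation.
        if sent < len(data):
            direct_put = data[sent:]

        received = b"".join(fifo) + (direct_put or b"")
        assert received == data
        return len(fifo), direct_put is not None

    fifo_chunks, used_direct_put = origin_transfer(b"example payload for transfer")
    print(f"FIFO chunks: {fifo_chunks}, direct put used: {used_direct_put}")
    ```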

  1. Integration of highly probabilistic sources into optical quantum architectures: perpetual quantum computation

    CERN Document Server

    Devitt, Simon J; Munro, William J; Nemoto, Kae

    2011-01-01

    In this paper we introduce a design for an optical topological cluster state computer constructed exclusively from a single quantum component. Unlike previous efforts, we eliminate the need for on-demand, high-fidelity photon sources and detectors and replace them with the same device utilised to create photon/photon entanglement. This introduces highly probabilistic elements into the optical architecture while maintaining complete specificity of the structure and operation for a large-scale computer. Photons in this system are continually recycled back into the preparation network, allowing an arbitrarily deep 3D cluster to be prepared using a comparatively small number of photonic qubits and consequently the elimination of high-frequency, deterministic photon sources.

  2. Transforming High School Physics with Modeling and Computation

    CERN Document Server

    Aiken, John M

    2013-01-01

    The Engage to Excel (PCAST) report, the National Research Council's Framework for K-12 Science Education, and the Next Generation Science Standards all call for transforming the physics classroom into an environment that teaches students real scientific practices. This work describes the early stages of one such attempt to transform a high school physics classroom. Specifically, a series of model-building and computational modeling exercises were piloted in a ninth grade Physics First classroom. Student use of computation was assessed using a proctored programming assignment, in which the students produced and discussed a computational model of a baseball in motion via a high-level programming environment (VPython). Student views on computation and its link to mechanics were assessed with a written essay and a series of think-aloud interviews. This pilot study shows computation's ability to connect scientific practice to the high school science classroom.
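    The kind of model the assignment asks for, a baseball stepped forward in time, is easy to sketch outside VPython as well. The plain-Python sketch below uses simple Euler updates with a quadratic drag term; the initial speed, launch angle and drag constant are illustrative assumptions, not values from the study.

    ```python
    import math

    g = 9.8             # m/s^2
    dt = 0.01           # time step, s
    drag_coeff = 0.005  # lumped quadratic drag constant (1/m), illustrative

    def simulate(speed=40.0, angle_deg=35.0):
        """Euler-step a baseball until it returns to the ground; return its range."""
        x, y = 0.0, 1.0                        # start 1 m above the ground
        vx = speed * math.cos(math.radians(angle_deg))
        vy = speed * math.sin(math.radians(angle_deg))
        while y > 0.0:
            v = math.hypot(vx, vy)
            ax = -drag_coeff * v * vx          # drag opposes velocity
            ay = -g - drag_coeff * v * vy
            vx += ax * dt
            vy += ay * dt
            x += vx * dt
            y += vy * dt
        return x

    print(f"range with drag: {simulate():.1f} m")
    ```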

  3. High Performance Computing in Science and Engineering '14

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2015-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS). The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and   engineers. The book comes with a wealth of color illustrations and tables of results.  

  4. From the web to the grid and beyond. Computing paradigms driven by high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Brun, Rene; Carminati, Federico [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Galli Carminati, Giuliana (eds.) [Hopitaux Universitaire de Geneve, Chene-Bourg (Switzerland). Unite de la Psychiatrie du Developpement Mental

    2012-07-01

    Born after World War II, large-scale experimental high-energy physics (HEP) has found itself limited ever since by available accelerator, detector and computing technologies. Accordingly, HEP has made significant contributions to the development of these fields, more often than not driving their innovations. The invention of the World Wide Web at CERN is merely the best-known example out of many. This book is the first comprehensive account to trace the history of this pioneering spirit in the field of computing technologies. It covers everything up to and including the present-day handling of the huge demands imposed upon grid and distributed computing by full-scale LHC operations - operations which have for years involved many thousands of collaborating members worldwide and accordingly provide the original and natural testbed for grid computing concepts. This book takes the reader on a guided tour encompassing all relevant topics, including programming languages, software engineering, large databases, the Web, and grid- and cloud computing. The important issue of intellectual property regulations for distributed software engineering and computing is also addressed. Aptly, the book closes with a visionary chapter of what may lie ahead. Approachable and requiring only basic understanding of physics and computer sciences, this book is intended for both education and research. (orig.)

  5. Design of Demand Driven Return Supply Chain for High-Tech Products

    NARCIS (Netherlands)

    Ashayeri, J.; Tuzkaya, G.

    2010-01-01

    Many high-tech supply chains operate in a context of high process and market uncertainties due to shorter product life cycles. When introducing a new product, a company must manage the cost of supply, including the cost of returns over its short life cycle. The returns distribution looks like a negat

  6. Design of Demand Driven Return Supply Chain for High-Tech Products

    NARCIS (Netherlands)

    Ashayeri, J.; Tuzkaya, G.

    2010-01-01

    Many high-tech supply chains operate in a context of high process and market uncertainties due to shorter product life cycles. When introducing a new product, a company must manage the cost of supply, including the cost of returns over its short life cycle. The returns distribution looks like a

  7. Domain Decomposition Based High Performance Parallel Computing

    CERN Document Server

    Raju, Mandhapati P

    2009-01-01

    The study deals with the parallelization of finite element based Navier-Stokes codes using domain decomposition and state-of-the-art sparse direct solvers. There has been significant improvement in the performance of sparse direct solvers, yet parallel sparse direct solvers are not found to exhibit good scalability. Hence, the parallelization of sparse direct solvers is done using domain decomposition techniques. A highly efficient sparse direct solver, PARDISO, is used in this study. The scalability of both Newton and modified Newton algorithms is tested.
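
    As a rough illustration of the Newton versus modified-Newton trade-off mentioned above, the sketch below reuses one Jacobian factorization across several iterations, which is the usual motivation for pairing modified Newton with an expensive direct factorization such as PARDISO. The toy two-equation system, the refactoring interval, and all numbers are illustrative; the study itself works with finite element Navier-Stokes systems.

      import numpy as np
      from scipy.linalg import lu_factor, lu_solve

      # Toy mildly nonlinear system F(u) = A u + 0.1 u^3 - b, with solution u = (1, 1)
      A = np.array([[2.0, 1.0], [1.0, 2.0]])
      b = np.array([3.1, 3.1])

      def residual(u):
          return A @ u + 0.1 * u ** 3 - b

      def jacobian(u):
          return A + np.diag(0.3 * u ** 2)

      def modified_newton(u0, refactor_every=5, tol=1e-10, max_iter=50):
          """Newton iteration that refactors the Jacobian only occasionally."""
          u = np.array(u0, dtype=float)
          lu = piv = None
          for k in range(max_iter):
              F = residual(u)
              if np.linalg.norm(F) < tol:
                  return u, k
              if k % refactor_every == 0:        # expensive step, done rarely
                  lu, piv = lu_factor(jacobian(u))
              u -= lu_solve((lu, piv), F)        # cheap triangular solves
          return u, max_iter

      u, iters = modified_newton([0.0, 0.0])
      print(f"solution {u}, iterations {iters}")   # refactor_every=1 gives plain Newton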

  8. Estimation of the Demand for Hospital Care After a Possible High-Magnitude Earthquake in the City of Lima, Peru.

    Science.gov (United States)

    Bambarén, Celso; Uyen, Angela; Rodriguez, Miguel

    2017-02-01

    A model prepared by the National Civil Defense (INDECI; Lima, Peru) estimated that an earthquake with an intensity of 8.0 Mw in front of the central coast of Peru would result in 51,019 deaths and 686,105 injured in the districts of Metropolitan Lima and Callao. Using this information as a base, a study was designed to determine the characteristics of the demand for treatment in public hospitals and to estimate gaps in care in the hours immediately after such an event. A probabilistic model was designed that included the following variables: demand for hospital care; time of arrival at the hospitals; type of medical treatment; reason for hospital admission; and the need for specialized care like hemodialysis, blood transfusions, and surgical procedures. The values for these variables were obtained through a literature search of the MEDLINE medical bibliographic database, the Cochrane and SciELO libraries, and Google Scholar for information on earthquakes of magnitude 6.0 or greater on the moment magnitude scale over the last 30 years. If a high-magnitude earthquake were to occur in Lima, it was estimated that between 23,328 and 178,387 injured would go to hospitals, of which between 4,666 and 121,303 would require inpatient care, while between 18,662 and 57,084 could be treated as outpatients. It was estimated that there would be an average of 8,768 cases of crush syndrome and 54,217 cases of other health problems. Enough blood would be required for 8,761 wounded in the first 24 hours. Furthermore, it was expected that there would be a deficit of hospital beds and operating theaters due to the high demand. Sudden and violent disasters, such as earthquakes, represent significant challenges for health systems and services. This study shows the deficit of preparation and capacity to respond to a possible high-magnitude earthquake. The study also showed there are not enough resources to face mega-disasters, especially in large cities.
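
    The sketch below illustrates the kind of Monte Carlo propagation such a probabilistic model performs: uncertain inputs are drawn from ranges and the resulting hospital demand is summarized by percentiles. The arrival range comes from the figures quoted above, but the distribution shapes and the admission fraction are illustrative assumptions, not the study's calibrated model.

      import random

      random.seed(1)
      N = 100_000
      inpatient = []

      for _ in range(N):
          # Injured arriving at hospitals, within the range estimated above
          arrivals = random.uniform(23_328, 178_387)
          # Assumed share needing admission (triangular shape is an assumption)
          admit_fraction = random.triangular(0.2, 0.7, 0.4)
          inpatient.append(arrivals * admit_fraction)

      inpatient.sort()
      print("median inpatient demand :", round(inpatient[N // 2]))
      print("95th-percentile demand  :", round(inpatient[int(0.95 * N)]))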

  9. High-Speed Computer-Controlled Switch-Matrix System

    Science.gov (United States)

    Spisz, E.; Cory, B.; Ho, P.; Hoffman, M.

    1985-01-01

    High-speed computer-controlled switch-matrix system developed for communication satellites. Satellite system controlled by onboard computer and all message-routing functions between uplink and downlink beams handled by newly developed switch-matrix system. Message requires only 2-microsecond interconnect period, repeated every millisecond.

  10. Analysis of stationary fuel cell dynamic ramping capabilities and ultra capacitor energy storage using high resolution demand data

    Science.gov (United States)

    Meacham, James R.; Jabbari, Faryar; Brouwer, Jacob; Mauzey, Josh L.; Samuelsen, G. Scott

    Current high temperature fuel cell (HTFC) systems used for stationary power applications (in the 200-300 kW size range) have very limited dynamic load following capability or are simply base load devices. Considering the economics of existing electric utility rate structures, there is little incentive to increase HTFC ramping capability beyond 1 kW s⁻¹ (0.4% s⁻¹). However, in order to ease concerns about grid instabilities from utility companies and increase market adoption, HTFC systems will have to increase their ramping abilities, and will likely have to incorporate electrical energy storage (EES). Because batteries have low power densities and limited lifetimes in highly cyclic applications, ultra capacitors may be the EES medium of choice. The current analyses show that, because ultra capacitors have a very low energy storage density, their integration with HTFC systems may not be feasible unless the fuel cell has a ramp rate approaching 10 kW s⁻¹ (4% s⁻¹) when using a worst-case design analysis. This requirement for fast dynamic load response characteristics can be reduced to 1 kW s⁻¹ by utilizing high resolution demand data to properly size ultra capacitor systems and through demand management techniques that reduce load volatility.
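
    The sizing logic described above can be sketched as follows: a ramp-rate-limited fuel cell tracks a high-resolution demand profile, an ultra capacitor covers the instantaneous mismatch, and the required usable storage is the largest swing of the accumulated mismatch energy. The synthetic demand profile and all numbers are illustrative.

      import math

      dt = 1.0              # s, resolution of the demand data
      ramp_limit = 1.0e3    # W/s, i.e. the 1 kW/s ramp rate discussed above

      # Synthetic 30-minute demand trace: 200 kW base, slow swing, short spike
      demand = [200e3 + 40e3 * math.sin(2 * math.pi * t / 300.0)
                + (20e3 if 600 < t < 660 else 0.0)
                for t in range(1800)]

      fc_power = demand[0]
      energy = 0.0          # J drawn from (+) or pushed into (-) the capacitor
      min_e = max_e = 0.0

      for p_load in demand:
          # Fuel cell moves toward the load, but no faster than the ramp limit
          step = max(-ramp_limit * dt, min(ramp_limit * dt, p_load - fc_power))
          fc_power += step
          energy += (p_load - fc_power) * dt    # capacitor covers the gap
          min_e, max_e = min(min_e, energy), max(max_e, energy)

      print(f"usable capacitor energy needed: {(max_e - min_e) / 3.6e6:.3f} kWh")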

  11. Why Electricity Demand Is Highly Income-Elastic in Spain: A Cross-Country Comparison Based on an Index-Decomposition Analysis

    Directory of Open Access Journals (Sweden)

    Julián Pérez-García

    2017-03-01

    Since 1990, Spain has had one of the highest elasticities of electricity demand in the European Union. We provide an in-depth analysis of the causes of this high elasticity, and we examine how these same causes influence electricity demand in other European countries. To this end, we present an index-decomposition analysis of growth in electricity demand which allows us to identify three key factors in the relationship between gross domestic product (GDP) and electricity demand: (i) structural change; (ii) GDP growth; and (iii) intensity of electricity use. Our findings show that the main differences in electricity demand elasticities across countries and time are accounted for by the fast convergence in residential per capita electricity consumption. This convergence has almost concluded, and we expect the Spanish energy demand elasticity to converge to European standards in the near future.
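
    An index decomposition of the sort described can be sketched with the additive LMDI form, splitting the change in electricity demand into activity (GDP), structure (sectoral GDP shares), and intensity effects. The two-sector numbers below are purely illustrative and do not come from the paper.

      import math

      def logmean(a, b):
          return a if a == b else (a - b) / (math.log(a) - math.log(b))

      # Year 0 and year 1: GDP index, sectoral GDP shares, electricity intensity
      gdp0, gdp1 = 100.0, 110.0
      share0 = {"industry": 0.40, "services": 0.60}
      share1 = {"industry": 0.35, "services": 0.65}
      inten0 = {"industry": 1.20, "services": 0.50}
      inten1 = {"industry": 1.10, "services": 0.55}

      e0 = {s: gdp0 * share0[s] * inten0[s] for s in share0}
      e1 = {s: gdp1 * share1[s] * inten1[s] for s in share1}

      activity = structure = intensity = 0.0
      for s in share0:
          w = logmean(e1[s], e0[s])
          activity  += w * math.log(gdp1 / gdp0)
          structure += w * math.log(share1[s] / share0[s])
          intensity += w * math.log(inten1[s] / inten0[s])

      # The additive LMDI decomposition is exact: the effects sum to the total change
      total = sum(e1.values()) - sum(e0.values())
      print(f"dE = {total:.2f} = activity {activity:.2f} "
            f"+ structure {structure:.2f} + intensity {intensity:.2f}")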

  12. Demand for alternative-fuel vehicles when registration taxes are high

    DEFF Research Database (Denmark)

    Mabit, Stefan Lindhard; Fosgerau, Mogens

    2011-01-01

    This paper investigates the potential futures for alternative-fuel vehicles in Denmark, where the vehicle registration tax is very high and large tax rebates can be given. A large stated choice dataset has been collected concerning vehicle choice among conventional, hydrogen, hybrid, bio-diesel, and electric vehicles. We estimate a mixed logit model that improves on previous contributions by controlling for reference dependence and allowing for correlation of random effects. Both improvements are found to be important. An application of the model shows that alternative-fuel vehicles with present technology could obtain fairly high market shares given tax regulations possible in the present high-tax vehicle market.
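
    For readers unfamiliar with mixed logit, the sketch below shows how choice probabilities are simulated by integrating a random (here normally distributed) price coefficient over draws. The alternatives, attributes, and coefficients are illustrative; the paper's model additionally handles reference dependence and correlated random effects, which are omitted here.

      import numpy as np

      rng = np.random.default_rng(0)

      # Alternative -> (price after tax in 100k DKK, driving range in 100 km)
      alternatives = {
          "conventional": (3.0, 8.0),
          "hybrid":       (3.5, 7.0),
          "electric":     (3.2, 2.5),
      }

      beta_price_mean, beta_price_sd = -1.0, 0.5   # random price coefficient
      beta_range = 0.15                            # fixed range coefficient
      n_draws = 10_000

      probs = dict.fromkeys(alternatives, 0.0)
      for b_price in rng.normal(beta_price_mean, beta_price_sd, n_draws):
          utils = {a: b_price * p + beta_range * r
                   for a, (p, r) in alternatives.items()}
          m = max(utils.values())                  # stabilise the softmax
          expu = {a: np.exp(u - m) for a, u in utils.items()}
          denom = sum(expu.values())
          for a in alternatives:
              probs[a] += expu[a] / denom / n_draws

      print({a: round(p, 3) for a, p in probs.items()})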

  13. Scientific and high-performance computing at FAIR

    Directory of Open Access Journals (Sweden)

    Kisel Ivan

    2015-01-01

    Future FAIR experiments have to deal with very high input rates and large track multiplicities, and must perform full event reconstruction and selection online on a large dedicated computer farm equipped with heterogeneous many-core CPU/GPU compute nodes. Developing efficient and fast algorithms that are optimized for parallel computation is a challenge for the groups of experts dealing with HPC computing. Here we present and discuss the status and perspectives of the data reconstruction and physics analysis software of one of the future FAIR experiments, namely the CBM experiment.

  14. On-demand optical immobilization of Caenorhabditis elegans for high-resolution imaging and microinjection.

    Science.gov (United States)

    Hwang, Hyundoo; Krajniak, Jan; Matsunaga, Yohei; Benian, Guy M; Lu, Hang

    2014-09-21

    This paper describes a novel selective immobilization technique based on optical control of the sol-gel transition of thermoreversible Pluronic gel, which provides a simple, versatile, and biocompatible approach for high-resolution imaging and microinjection of Caenorhabditis elegans.

  15. Resource estimation in high performance medical image computing.

    Science.gov (United States)

    Banalagay, Rueben; Covington, Kelsie Jade; Wilkes, D M; Landman, Bennett A

    2014-10-01

    Medical imaging analysis processes often involve the concatenation of many steps (e.g., multi-stage scripts) to integrate and realize advancements from image acquisition, image processing, and computational analysis. With the dramatic increase in data size for medical imaging studies (e.g., improved resolution, higher throughput acquisition, shared databases), interesting study designs are becoming intractable or impractical on individual workstations and servers. Modern pipeline environments provide control structures to distribute computational load in high performance computing (HPC) environments. However, high performance computing environments are often shared resources, and scheduling computation across these resources necessitates higher level modeling of resource utilization. Submission of 'jobs' requires an estimate of the CPU runtime and memory usage. The resource requirements for medical image processing algorithms are difficult to predict since the requirements can vary greatly between different machines, different execution instances, and different data inputs. Poor resource estimates can lead to wasted resources in high performance environments due to incomplete executions and extended queue wait times. Hence, resource estimation is becoming a major hurdle for medical image processing algorithms to efficiently leverage high performance computing environments. Herein, we present our implementation of a resource estimation system to overcome these difficulties and ultimately provide users with the ability to more efficiently utilize high performance computing resources.
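
    A minimal sketch of the resource-estimation idea: learn CPU time and peak memory from past runs of a pipeline step as a function of input size, then request resources for a new job with a safety margin. The history table, the linear model, and the margin are illustrative assumptions; the system described in the record is considerably richer.

      import numpy as np

      # Past runs of one processing step: (input size in Mvoxels, minutes, GB)
      history = np.array([
          [ 2.0,  11.0,  1.9],
          [ 4.1,  22.5,  3.6],
          [ 8.0,  47.0,  7.1],
          [16.3,  98.0, 14.0],
      ])

      X = np.c_[np.ones(len(history)), history[:, 0]]   # design matrix [1, size]
      runtime_coef, *_ = np.linalg.lstsq(X, history[:, 1], rcond=None)
      memory_coef,  *_ = np.linalg.lstsq(X, history[:, 2], rcond=None)

      def request(size_mvox, margin=1.5):
          """Suggested walltime (min) and memory (GB) request, with a margin."""
          x = np.array([1.0, size_mvox])
          return margin * float(x @ runtime_coef), margin * float(x @ memory_coef)

      walltime, mem = request(12.0)
      print(f"request about {walltime:.0f} min walltime and {mem:.1f} GB memory")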

  16. High performance computing: Clusters, constellations, MPPs, and future directions

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, Jack; Sterling, Thomas; Simon, Horst; Strohmaier, Erich

    2003-06-10

    Last year's paper by Bell and Gray [1] examined past trends in high performance computing and asserted likely future directions based on market forces. While many of the insights drawn from this perspective have merit and suggest elements governing likely future directions for HPC, there are a number of points put forth that we feel require further discussion and, in certain cases, suggest alternative, more likely views. One area of concern relates to the nature and use of key terms to describe and distinguish among classes of high end computing systems, in particular the authors' use of "cluster" to relate to essentially all parallel computers derived through the integration of replicated components. The taxonomy implicit in their previous paper, while arguable and supported by some elements of our community, fails to provide the essential semantic discrimination critical to the effectiveness of descriptive terms as tools in managing the conceptual space of consideration. In this paper, we present a perspective that retains the descriptive richness while providing a unifying framework. A second area of discourse that calls for additional commentary is the likely future path of system evolution that will lead to effective and affordable Petaflops-scale computing, including the future role of computer centers as facilities for supporting high performance computing environments. This paper addresses the key issues of taxonomy, future directions towards Petaflops computing, and the important role of computer centers in the 21st century.

  17. Demand Uncertainty

    DEFF Research Database (Denmark)

    Nguyen, Daniel Xuyen

    This paper presents a model of trade that explains why firms wait to export and why many exporters fail. Firms face uncertain demands that are only realized after the firm enters the destination. The model retools the timing of uncertainty resolution found in productivity heterogeneity models ... in untested destinations. The option to forecast demands causes firms to delay exporting in order to gather more information about foreign demand. Third, since uncertainty is resolved after entry, many firms enter a destination and then exit after learning that they cannot profit. This prediction reconciles ...

  18. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  19. High Performance Computing Assets for Ocean Acoustics Research

    Science.gov (United States)

    2016-11-18

    ... that make them easily parallelizable in the manner that, for example, atmospheric or ocean general circulation models (GCMs) are parallel. Many GCMs ... Enclosed is the Final Report for ONR Grant No. N00014-15-1-2840, entitled "High Performance Computing Assets for Ocean Acoustics Research," Principal ... distribution is unlimited. ONR DURIP Grant Final Report: High Performance Computing Assets for Ocean Acoustics Research, Timothy F. Duda, Applied Ocean ...

  20. Dynamic Resource Management and Job Scheduling for High Performance Computing

    OpenAIRE

    2016-01-01

    Job scheduling and resource management play an essential role in high-performance computing. Supercomputing resources are usually managed by a batch system, which is responsible for the effective mapping of jobs onto resources (i.e., compute nodes). From the system perspective, a batch system must ensure high system utilization and throughput, while from the user perspective it must ensure fast response times and fairness when allocating resources across jobs. Parallel jobs can be divided ...

  1. Compact high performance spectrometers using computational imaging

    Science.gov (United States)

    Morton, Kenneth; Weisberg, Arel

    2016-05-01

    Compressive sensing technology can theoretically be used to develop low cost compact spectrometers with the performance of larger and more expensive systems. Indeed, compressive sensing for spectroscopic systems has been previously demonstrated using coded aperture techniques, wherein a mask is placed between the grating and a charge coupled device (CCD) and multiple measurements are collected with different masks. Although proven effective for some spectroscopic sensing paradigms (e.g. Raman), this approach requires that the signal being measured is static between shots (low noise and minimal signal fluctuation). Many spectroscopic techniques applicable to remote sensing are inherently noisy and thus coded aperture compressed sensing will likely not be effective. This work explores an alternative approach to compressed sensing that allows for reconstruction of a high resolution spectrum in sensing paradigms featuring significant signal fluctuations between measurements. This is accomplished through relatively minor changes to the spectrometer hardware together with custom super-resolution algorithms. Current results indicate that a potential overall reduction in CCD size of up to a factor of 4 can be attained without a loss of resolution. This reduction can result in significant improvements in cost, size, and weight of spectrometers incorporating the technology.

  2. After Installation: Ubiquitous Computing and High School Science in Three Experienced, High-Technology Schools

    Science.gov (United States)

    Drayton, Brian; Falk, Joni K.; Stroud, Rena; Hobbs, Kathryn; Hammerman, James

    2010-01-01

    There are few studies of the impact of ubiquitous computing on high school science, and the majority of studies of ubiquitous computing report only on the early stages of implementation. The present study presents data on 3 high schools with carefully elaborated ubiquitous computing systems that have gone through at least one "obsolescence cycle"…

  3. High removal of chemical and biochemical oxygen demand from tequila vinasses by using physicochemical and biological methods.

    Science.gov (United States)

    Retes-Pruneda, Jose Luis; Davila-Vazquez, Gustavo; Medina-Ramírez, Iliana; Chavez-Vela, Norma Angelica; Lozano-Alvarez, Juan Antonio; Alatriste-Mondragon, Felipe; Jauregui-Rincon, Juan

    2014-08-01

    The goal of this research is to find a more effective treatment for tequila vinasses (TVs) with potential industrial application in order to comply with the Mexican environmental regulations. TVs are characterized by their high content of solids, high values of biochemical oxygen demand (BOD5) and chemical oxygen demand (COD), low pH, and intense colour; thus, disposal of untreated TVs severely impacts the environment. Physicochemical and biological treatments, and a combination of both, were evaluated for the remediation of TVs. The use of alginate for the physicochemical treatment of TVs reduced BOD5 and COD values by 70.6% and 14.2%, respectively. Twenty white-rot fungi (WRF) strains were tested in TV-based solid media. Pleurotus ostreatus 7992 and Trametes trogii 8154 were selected due to their ability to grow on TV-based solid media. Ligninolytic enzyme production was observed in liquid cultures of both fungi. Using the selected WRF for TVs' bioremediation, COD and BOD5 were reduced by 88.7% and 89.7%, respectively. Applying sequential physicochemical and biological treatments, BOD5 and COD were reduced by 91.6% and 93.1%, respectively. The results showed that alginate and the selected WRF have potential for the industrial treatment of TVs.

  4. High Job Demands, Still Engaged and Not Burned Out? The Role of Job Crafting

    NARCIS (Netherlands)

    Hakanen, Jari J.; Seppälä, Piia; Peeters, Maria C W

    2017-01-01

    Purpose: Traditionally, employee well-being has been considered as resulting from decent working conditions arranged by the organization. Much less is known about whether employees themselves can make self-initiated changes to their work, i.e., craft their jobs, in order to stay well, even in highly

  5. Demand forecasting

    OpenAIRE

    Gregor, Belčec

    2011-01-01

    Companies operate in an increasingly challenging environment that requires them to continuously improve all areas of the business process. Demand forecasting is one area in manufacturing companies where we can hope to gain great advantages. Improvements in forecasting can result in cost savings throughout the supply chain, improve the reliability of information and the quality of the service for our customers. In the company Danfoss Trata, d. o. o. we did not have a system for demand forecast...

  6. Changes in chloroplast ultrastructure in some high-alpine plants: adaptation to metabolic demands and climate?

    Science.gov (United States)

    Lütz, C; Engel, L

    2007-01-01

    The cytology of leaf cells from five different high-alpine plants was studied and compared with structures in chloroplasts from the typical high-alpine plant Ranunculus glacialis previously described as having frequent envelope plus stroma protrusions. The plants under investigation ranged from subalpine/alpine Geum montanum through alpine Geum reptans, Poa alpina var. vivipara, and Oxyria digyna to nival Cerastium uniflorum and R. glacialis. The general leaf structure (by light microscopy) and leaf mesophyll cell ultrastructure (by transmission electron microscopy [TEM]) did not show any specialized structures unique to these mountain species. However, chloroplast protrusion formation could be found in G. reptans and, to a greater extent, in O. digyna. The other species exhibited only a low percentage of such chloroplast structural changes. Occurrence of protrusions in samples of G. montanum and O. digyna growing in a mild climate at about 50 m above sea level was drastically reduced. Serial TEM sections of O. digyna cells showed that the protrusions can appear as rather broad and long appendices of plastids, often forming pocketlike structures where mitochondria and microbodies are in close vicinity to the plastid and to each other. It is suggested that some high-alpine plants may form such protrusions to facilitate fast exchange of molecules between cytoplasm and plastid as an adaptation to the short, often unfavorable vegetation period in the Alps, while other species may have developed different types of adaptation that are not expressed in ultrastructural changes of the plastids.

  7. Design of demand driven return supply chain for high-tech products

    Directory of Open Access Journals (Sweden)

    Jalal Ashayeri

    2011-10-01

    Purpose: The purpose of this study is to design a responsive network for after-sale services of high-tech products. Design/methodology/approach: The Analytic Hierarchy Process (AHP) and a weighted max-min approach are integrated to solve a fuzzy goal programming model. Findings: Uncertainty is an important characteristic of reverse logistics networks, and the level of uncertainty increases as product life cycles become shorter. Research limitations/implications: Some of the objective functions of our model are simplified to deal with non-linearities. Practical implications: Designing after-sale service networks for high-tech products is an overwhelming task, especially when the external environment is characterized by high levels of uncertainty and dynamism. This study presents a comprehensive modeling approach to simplify this task. Originality/value: Consideration of multiple objectives is rare in the reverse logistics network design literature. Although the number of multi-objective reverse logistics network design studies has been increasing in recent years, the last two objectives of our model are unique to this research area.
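
    As a rough illustration of the weighted max-min approach mentioned above, the sketch below maximizes an overall satisfaction level so that each fuzzy goal's linear membership is at least its weight times that level. The two goals, tolerances, weights, and capacity constraint are illustrative stand-ins, not the paper's return-network model.

      from scipy.optimize import linprog

      # Decision variables: x1, x2 (two return-service capacities) and lam.
      # Goal 1, cost 4*x1 + 6*x2: aspiration 25, worst acceptable 40, weight 0.6
      #   membership_1 = (40 - cost) / 15      and we require membership_1 >= 0.6 * lam
      # Goal 2, service 3*x1 + 2*x2: aspiration 36, worst acceptable 24, weight 0.4
      #   membership_2 = (service - 24) / 12   and we require membership_2 >= 0.4 * lam

      c = [0.0, 0.0, -1.0]                # maximise lam (linprog minimises)
      A_ub = [
          [4.0, 6.0, 0.6 * 15],           # cost + w1*range*lam <= 40
          [-3.0, -2.0, 0.4 * 12],         # -service + w2*range*lam <= -24
          [1.0, 1.0, 0.0],                # shared capacity limit x1 + x2 <= 12
      ]
      b_ub = [40.0, -24.0, 12.0]
      bounds = [(0, None), (0, None), (0, 1)]

      res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
      x1, x2, lam = res.x
      print(f"x1 = {x1:.2f}, x2 = {x2:.2f}, overall satisfaction = {lam:.2f}")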

  8. A Component Architecture for High-Performance Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, D E; Allan, B A; Armstrong, R; Bertrand, F; Chiu, K; Dahlgren, T L; Damevski, K; Elwasif, W R; Epperly, T W; Govindaraju, M; Katz, D S; Kohl, J A; Krishnan, M; Kumfert, G; Larson, J W; Lefantzi, S; Lewis, M J; Malony, A D; McInnes, L C; Nieplocha, J; Norris, B; Parker, S G; Ray, J; Shende, S; Windus, T L; Zhou, S

    2004-12-14

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  9. A Component Architecture for High-Performance Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, David E; Allan, Benjamin A; Armstrong, Robert C; Bertrand, Felipe; Chiu, Kenneth; Dahlgren, Tamara L; Damevski, Kostadin; Elwasif, Wael R; Epperly, Thomas G; Govindaraju, Madhusudhan; Katz, Daniel S; Kohl, James A; Krishnan, Manoj Kumar; Kumfert, Gary K; Larson, J Walter; Lefantzi, Sophia; Lewis, Michael J; Malony, Allen D; McInnes, Lois C; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G; Ray, Jaideep; Shende, Sameer; Windus, Theresa L; Zhou, Shujia

    2006-07-03

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  10. CRITICAL ISSUES IN HIGH END COMPUTING - FINAL REPORT

    Energy Technology Data Exchange (ETDEWEB)

    Corones, James [Krell Institute

    2013-09-23

    High-End computing (HEC) has been a driver for advances in science and engineering for the past four decades. Increasingly HEC has become a significant element in the national security, economic vitality, and competitiveness of the United States. Advances in HEC provide results that cut across traditional disciplinary and organizational boundaries. This program provides opportunities to share information about HEC systems and computational techniques across multiple disciplines and organizations through conferences and exhibitions of HEC advances held in Washington DC so that mission agency staff, scientists, and industry can come together with White House, Congressional and Legislative staff in an environment conducive to the sharing of technical information, accomplishments, goals, and plans. A common thread across this series of conferences is the understanding of computational science and applied mathematics techniques across a diverse set of application areas of interest to the Nation. The specific objectives of this program are: Program Objective 1. To provide opportunities to share information about advances in high-end computing systems and computational techniques between mission critical agencies, agency laboratories, academics, and industry. Program Objective 2. To gather pertinent data, address specific topics of wide interest to mission critical agencies. Program Objective 3. To promote a continuing discussion of critical issues in high-end computing. Program Objective 4. To provide a venue where a multidisciplinary scientific audience can discuss the difficulties applying computational science techniques to specific problems and can specify future research that, if successful, will eliminate these problems.

  11. High Electricity Demand in the Northeast U.S.: PJM Reliability Network and Peaking Unit Impacts on Air Quality.

    Science.gov (United States)

    Farkas, Caroline M; Moeller, Michael D; Felder, Frank A; Henderson, Barron H; Carlton, Annmarie G

    2016-08-01

    On high electricity demand days, when air quality is often poor, regional transmission organizations (RTOs), such as PJM Interconnection, ensure reliability of the grid by employing peak-use electric generating units (EGUs). These "peaking units" are exempt from some federal and state air quality rules. We identify RTO assignment and peaking unit classification for EGUs in the Eastern U.S. and estimate air quality for four emission scenarios with the Community Multiscale Air Quality (CMAQ) model during the July 2006 heat wave. Further, we population-weight ambient values as a surrogate for potential population exposure. Emissions from electricity reliability networks negatively impact air quality in their own region and in neighboring geographic areas. Monitored and controlled PJM peaking units are generally located in economically depressed areas and can contribute up to 87% of hourly maximum PM2.5 mass locally. Potential population exposure to peaking unit PM2.5 mass is highest in the model domain's most populated cities. Average daily temperature and national gross domestic product steer peaking unit heat input. Air quality planning that capitalizes on a priori knowledge of local electricity demand and economics may provide a more holistic approach to protect human health within the context of growing energy needs in a changing world.

  12. A review of High Performance Computing foundations for scientists

    CERN Document Server

    García-Risueño, Pablo; Ibáñez, Pablo E.

    2012-01-01

    The increase of existing computational capabilities has made simulation emerge as a third discipline of Science, lying midway between experimental and purely theoretical branches [1, 2]. Simulation enables the evaluation of quantities which otherwise would not be accessible, helps to improve experiments and provides new insights on systems which are analysed [3-6]. Knowing the fundamentals of computation can be very useful for scientists, for it can help them to improve the performance of their theoretical models and simulations. This review includes some technical essentials that can be useful to this end, and it is devised as a complement for researchers whose education is focused on scientific issues and not on technological respects. In this document we attempt to discuss the fundamentals of High Performance Computing (HPC) [7] in a way which is easy to understand without much previous background. We sketch the way standard computers and supercomputers work, as well as discuss distributed computing and di...

  13. 5th International Conference on High Performance Scientific Computing

    CERN Document Server

    Hoang, Xuan; Rannacher, Rolf; Schlöder, Johannes

    2014-01-01

    This proceedings volume gathers a selection of papers presented at the Fifth International Conference on High Performance Scientific Computing, which took place in Hanoi on March 5-9, 2012. The conference was organized by the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) of Heidelberg University, Ho Chi Minh City University of Technology, and the Vietnam Institute for Advanced Study in Mathematics. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and practical applications. Subjects covered include mathematical modeling; numerical simulation; methods for optimization and control; parallel computing; software development; and applications of scientific computing in physics, mechanics and biomechanics, material science, hydrology, chemistry, biology, biotechnology, medicine, sports, psychology, transport, logistics, com...

  14. 3rd International Conference on High Performance Scientific Computing

    CERN Document Server

    Kostina, Ekaterina; Phu, Hoang; Rannacher, Rolf

    2008-01-01

    This proceedings volume contains a selection of papers presented at the Third International Conference on High Performance Scientific Computing held at the Hanoi Institute of Mathematics, Vietnamese Academy of Science and Technology (VAST), March 6-10, 2006. The conference has been organized by the Hanoi Institute of Mathematics, Interdisciplinary Center for Scientific Computing (IWR), Heidelberg, and its International PhD Program ``Complex Processes: Modeling, Simulation and Optimization'', and Ho Chi Minh City University of Technology. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and applications in practice. Subjects covered are mathematical modelling, numerical simulation, methods for optimization and control, parallel computing, software development, applications of scientific computing in physics, chemistry, biology and mechanics, environmental and hydrology problems, transport, logistics and site loca...

  15. Studying an Eulerian Computer Model on Different High-performance Computer Platforms and Some Applications

    Science.gov (United States)

    Georgiev, K.; Zlatev, Z.

    2010-11-01

    The Danish Eulerian Model (DEM) is an Eulerian model for studying the transport of air pollutants on a large scale. Originally, the model was developed at the National Environmental Research Institute of Denmark. The model's computational domain covers Europe and neighbouring parts of the Atlantic Ocean, Asia and Africa. If the DEM model is to be applied using fine grids, its discretization leads to a huge computational problem, which implies that such a model must be run only on high-performance computer architectures. The implementation and tuning of such a complex large-scale model on each different computer is a non-trivial task. Here, comparison results are presented from running this model on different kinds of vector computers (CRAY C92A, Fujitsu, etc.), parallel computers with distributed memory (IBM SP, CRAY T3E, Beowulf clusters, Macintosh G4 clusters, etc.), parallel computers with shared memory (SGI Origin, SUN, etc.) and parallel computers with two levels of parallelism (IBM SMP, IBM BlueGene/P, clusters of multiprocessor nodes, etc.). The main idea in the parallel version of DEM is a domain partitioning approach. The effective use of the cache and hierarchical memories of modern computers is discussed, together with the performance, speed-ups and efficiency achieved. The parallel code of DEM, created using the MPI standard library, appears to be highly portable and shows good efficiency and scalability on different kinds of vector and parallel computers. Some important applications of the computer model output are briefly presented.
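
    The domain-partitioning idea can be sketched with a 1-D halo (ghost-cell) exchange using mpi4py, assuming an MPI installation is available (run with, e.g., mpiexec -n 4 python demo.py). The toy upwind advection step below only illustrates the communication pattern; it is not the DEM transport-chemistry scheme.

      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      n_local = 100                          # grid cells owned by this rank
      u = np.zeros(n_local + 2)              # +2 ghost cells at the ends
      if rank == 0:
          u[1:11] = 1.0                      # initial pollutant "puff"

      left = rank - 1 if rank > 0 else MPI.PROC_NULL
      right = rank + 1 if rank < size - 1 else MPI.PROC_NULL
      c = 0.4                                # CFL number for first-order upwind

      for _ in range(200):
          # Exchange boundary cells with the neighbouring subdomains
          comm.Sendrecv(u[n_local:n_local + 1], dest=right,
                        recvbuf=u[0:1], source=left)
          comm.Sendrecv(u[1:2], dest=left,
                        recvbuf=u[n_local + 1:n_local + 2], source=right)
          # Upwind update on the cells this rank owns
          u[1:n_local + 1] -= c * (u[1:n_local + 1] - u[0:n_local])

      total = comm.reduce(float(u[1:n_local + 1].sum()), op=MPI.SUM, root=0)
      if rank == 0:
          print("total pollutant mass:", total)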

  16. Worksite interventions for preventing physical deterioration among employees in job-groups with high physical work demands: background, design and conceptual model of FINALE

    DEFF Research Database (Denmark)

    Holtermann, Andreas; Jørgensen, Marie B; Gram, Bibi;

    2010-01-01

    A mismatch between individual physical capacities and physical work demands enhances the risk of musculoskeletal disorders, poor work ability and sickness absence, termed physical deterioration. However, effective intervention strategies for preventing physical deterioration in job groups with high physical demands remain to be established. This paper describes the background, design and conceptual model of the FINALE programme, a framework for health-promoting interventions at 4 Danish job groups (i.e. cleaners, health-care workers, construction workers and industrial workers) characterized by high physical work demands, musculoskeletal disorders, poor work ability and sickness absence.

  17. Computer Literacy and the Construct Validity of a High-Stakes Computer-Based Writing Assessment

    Science.gov (United States)

    Jin, Yan; Yan, Ming

    2017-01-01

    One major threat to validity in high-stakes testing is construct-irrelevant variance. In this study we explored whether the transition from a paper-and-pencil to a computer-based test mode in a high-stakes test in China, the College English Test, has brought about variance irrelevant to the construct being assessed in this test. Analyses of the…

  18. Radio Synthesis Imaging - A High Performance Computing and Communications Project

    Science.gov (United States)

    Crutcher, Richard M.

    The National Science Foundation has funded a five-year High Performance Computing and Communications project at the National Center for Supercomputing Applications (NCSA) for the direct implementation of several of the computing recommendations of the Astronomy and Astrophysics Survey Committee (the "Bahcall report"). This paper is a summary of the project goals and a progress report. The project will implement a prototype of the next generation of astronomical telescope systems - remotely located telescopes connected by high-speed networks to very high performance, scalable architecture computers and on-line data archives, which are accessed by astronomers over Gbit/sec networks. Specifically, a data link has been installed between the BIMA millimeter-wave synthesis array at Hat Creek, California and NCSA at Urbana, Illinois for real-time transmission of data to NCSA. Data are automatically archived, and may be browsed and retrieved by astronomers using the NCSA Mosaic software. In addition, an on-line digital library of processed images will be established. BIMA data will be processed on a very high performance distributed computing system, with I/O, user interface, and most of the software system running on the NCSA Convex C3880 supercomputer or Silicon Graphics Onyx workstations connected by HiPPI to the high performance, massively parallel Thinking Machines Corporation CM-5. The very computationally intensive algorithms for calibration and imaging of radio synthesis array observations will be optimized for the CM-5 and new algorithms which utilize the massively parallel architecture will be developed. Code running simultaneously on the distributed computers will communicate using the Data Transport Mechanism developed by NCSA. The project will also use the BLANCA Gbit/s testbed network between Urbana and Madison, Wisconsin to connect an Onyx workstation in the University of Wisconsin Astronomy Department to the NCSA CM-5, for development of long

  19. The Role of Computing in High-Energy Physics.

    Science.gov (United States)

    Metcalf, Michael

    1983-01-01

    Examines present and future applications of computers in high-energy physics. Areas considered include high-energy physics laboratories, accelerators, detectors, networking, off-line analysis, software guidelines, event sizes and volumes, graphics applications, event simulation, theoretical studies, and future trends. (JN)

  20. Numerics of High Performance Computers and Benchmark Evaluation of Distributed Memory Computers

    Directory of Open Access Journals (Sweden)

    H. S. Krishna

    2004-07-01

    The internal representation of numerical data and the speed of its manipulation to generate the desired result through efficient utilisation of the central processing unit, memory, and communication links are essential aspects of all high-performance scientific computations. Machine parameters, in particular, reveal the accuracy and error bounds of computation required for performance tuning of codes. This paper reports the diagnosis of machine parameters, measurement of the computing power of several workstations, serial and parallel computers, and a component-wise test procedure for distributed memory computers. Hierarchical memory structure is illustrated by block copying and unrolling techniques. Locality of reference for cache reuse of data is amply demonstrated by fast Fourier transform codes. Cache and register-blocking techniques result in their optimum utilisation, with a consequent gain in throughput during vector-matrix operations. Implementation of these memory management techniques reduces cache inefficiency loss, which is known to be proportional to the number of processors. From the measurement of intrinsic parameters and from an application benchmark test run of a multi-block Euler code on the Linux clusters ANUP16, HPC22 and HPC64, it has been found that ANUP16 is suitable for problems that exhibit fine-grained parallelism. The delivered performance of ANUP16 is of immense utility for developing high-end PC clusters like HPC64 and customised parallel computers, with the added advantage of speed and a high degree of parallelism.
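
    The cache-blocking idea mentioned above can be sketched as follows: operate on sub-blocks small enough to stay resident in cache so that each loaded block is reused many times before eviction. The pure-Python version below only illustrates the loop structure; production kernels would be written in a compiled language or use tuned BLAS.

      def blocked_matmul(A, B, block=32):
          """Blocked (tiled) matrix multiply; block is the tile edge length."""
          n = len(A)
          C = [[0.0] * n for _ in range(n)]
          for ii in range(0, n, block):
              for kk in range(0, n, block):
                  for jj in range(0, n, block):
                      # Multiply one cache-sized tile of A by one tile of B
                      for i in range(ii, min(ii + block, n)):
                          Ai, Ci = A[i], C[i]
                          for k in range(kk, min(kk + block, n)):
                              aik, Bk = Ai[k], B[k]
                              for j in range(jj, min(jj + block, n)):
                                  Ci[j] += aik * Bk[j]
          return C

      # Tiny correctness check against the textbook result
      A = [[1.0, 2.0], [3.0, 4.0]]
      B = [[5.0, 6.0], [7.0, 8.0]]
      print(blocked_matmul(A, B, block=1))   # [[19.0, 22.0], [43.0, 50.0]]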

  1. Condor-COPASI: high-throughput computing for biochemical networks

    Directory of Open Access Journals (Sweden)

    Kent Edward

    2012-07-01

    Background: Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results: We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions: Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage.
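
    The "split into smaller parts and submit to a Condor pool" pattern can be sketched as below: a parameter scan is cut into chunks and one HTCondor submit description is written per chunk. The worker script run_chunk.py is hypothetical and the submit files stick to the common basic fields; this is not Condor-COPASI's own code.

      import itertools
      import pathlib

      k_values = [0.1 * i for i in range(1, 101)]      # 100 scan points
      chunk_size = 20

      def chunks(seq, size):
          it = iter(seq)
          while batch := list(itertools.islice(it, size)):
              yield batch

      for idx, batch in enumerate(chunks(k_values, chunk_size)):
          args = " ".join(f"{k:.3f}" for k in batch)
          submit = "\n".join([
              "executable = run_chunk.py",             # hypothetical worker script
              f"arguments  = {args}",
              f"output     = chunk_{idx}.out",
              f"error      = chunk_{idx}.err",
              f"log        = chunk_{idx}.log",
              "queue",
          ])
          pathlib.Path(f"chunk_{idx}.sub").write_text(submit + "\n")

      print("wrote submit files; submit each with: condor_submit chunk_N.sub")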

  2. Worksite interventions for preventing physical deterioration among employees in job-groups with high physical work demands

    DEFF Research Database (Denmark)

    Holtermann, Andreas; Jørgensen, Marie B; Gram, Bibi

    2010-01-01

    demands, physical capacities and health profile of workers in each job-group. The RCT among cleaners, characterized by repetitive work tasks and musculoskeletal disorders, aims at making the cleaners less susceptible to musculoskeletal disorders through physical coordination training or cognitive behavioral theory-based training (CBTr). Because health-care workers are reported to have a high prevalence of overweight and heavy lifts, the aim of that RCT is long-term weight loss through combined physical exercise training, CBTr and diet. In construction work, characterized by heavy lifting, pushing and pulling, the RCT aims at improving physical capacity and promoting musculoskeletal and cardiovascular health. At the industrial workplace, characterized by repetitive work tasks, the intervention aims at reducing physical exertion and musculoskeletal disorders by combined physical exercise training, CBTr ...

  3. Proceedings from the conference on high speed computing: High speed computing and national security

    Energy Technology Data Exchange (ETDEWEB)

    Hirons, K.P.; Vigil, M.; Carlson, R. [comps.]

    1997-07-01

    This meeting covered the following topics: technologies/national needs/policies: past, present and future; information warfare; crisis management/massive data systems; risk assessment/vulnerabilities; Internet law/privacy and rights of society; challenges to effective ASCI programmatic use of 100 TFLOPs systems; and new computing technologies.

  4. Analysis and Modeling of Social Influence in High Performance Computing Workloads

    KAUST Repository

    Zheng, Shuai

    2011-06-01

    High Performance Computing (HPC) is becoming a common tool in many research areas. Social influence (e.g., project collaboration) among the growing number of users of HPC systems creates bursty behavior in the underlying workloads. This bursty behavior is increasingly common with the advent of grid computing and cloud computing. Mining user bursty behavior is important for HPC workload prediction and scheduling, which has a direct impact on overall HPC computing performance. A representative work in this area is the Mixed User Group Model (MUGM), which clusters users according to the resource demand features of their submissions, such as duration time and parallelism. However, MUGM has some difficulties when implemented in a real-world system. First, representing user behaviors by the features of their resource demand is usually difficult. Second, these features are not always available. Third, measuring the similarities among users is not a well-defined problem. In this work, we propose a Social Influence Model (SIM) to identify, analyze, and quantify the level of social influence across HPC users. The advantage of the SIM model is that it finds HPC communities by analyzing user job submission times, thereby avoiding the difficulties of MUGM. An offline algorithm and a fast-converging, computationally efficient online learning algorithm for identifying social groups are proposed. Both offline and online algorithms are applied to several HPC and grid workloads, including Grid 5000, EGEE 2005 and 2007, and KAUST Supercomputing Lab (KSL) BGP data. From the experimental results, we show the existence of a social graph, which is characterized by a pattern of dominant users and followers. In order to evaluate the effectiveness of the identified user groups, we show that the pattern discovered by the offline algorithm follows a power-law distribution, which is consistent with those observed in mainstream social networks. We finally conclude the thesis and discuss future directions of our work.
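
    The core idea of grouping users purely from job submission times can be sketched as below: bin each user's submission timestamps into an activity profile, link users whose profiles are strongly correlated, and read off the connected components as candidate communities. This toy flood-fill is only an illustration of the idea, not the SIM algorithm or its online variant; the user names and timestamps are made up.

      import numpy as np

      # User -> job submission times (hours since the start of the trace); toy data
      submissions = {
          "alice": [1.1, 1.3, 2.0, 25.5, 26.0],
          "bob":   [1.2, 1.9, 25.7, 26.1, 26.3],
          "carol": [100.2, 101.0, 101.5, 150.3],
          "dave":  [100.5, 101.2, 150.1, 150.8],
      }

      users = list(submissions)
      n_bins = 200
      profiles = np.zeros((len(users), n_bins))
      for i, u in enumerate(users):
          profiles[i], _ = np.histogram(submissions[u], bins=n_bins, range=(0, 200))

      corr = np.corrcoef(profiles)           # similarity of activity profiles
      threshold = 0.5

      groups, seen = [], set()
      for i, u in enumerate(users):
          if u in seen:
              continue
          group, stack = {u}, [i]
          while stack:                       # flood-fill over the similarity graph
              a = stack.pop()
              for b in range(len(users)):
                  if b != a and corr[a, b] > threshold and users[b] not in group:
                      group.add(users[b])
                      stack.append(b)
          seen |= group
          groups.append(sorted(group))

      print(groups)                          # expected: [['alice', 'bob'], ['carol', 'dave']]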

  5. Definition and evaluation of testing scenarios for knee wear simulation under conditions of highly demanding daily activities.

    Science.gov (United States)

    Schwiesau, Jens; Schilling, Carolin; Kaddick, Christian; Utzschneider, Sandra; Jansson, Volkmar; Fritz, Bernhard; Blömer, Wilhelm; Grupp, Thomas M

    2013-05-01

    The objective of our study was the definition of testing scenarios for knee wear simulation under various highly demanding daily activities of patients after total knee arthroplasty. This was mainly based on a review of published data on knee kinematics and kinetics followed by the evaluation of the accuracy and precision of a new experimental setup. We combined tibio-femoral load and kinematic data reported in the literature to develop deep squatting loading profiles for simulator input. A servo-hydraulic knee wear simulator was customised with a capability of a maximum flexion of 120°, a tibio-femoral load of 5000N, an anterior-posterior (AP) shear force of ±1000N and an internal-external (IE) rotational torque of ±50Nm to simulate highly demanding patient activities. During the evaluation of the newly configurated simulator the ability of the test machine to apply the required load and torque profiles and the flexion kinematics in a precise manner was examined by nominal-actual profile comparisons monitored periodically during subsequent knee wear simulation. For the flexion kinematics under displacement control a delayed actuator response of approximately 0.05s was inevitable due to the inertia of masses in movement of the coupled knee wear stations 1-3 during all applied activities. The axial load and IE torque is applied in an effective manner without substantial deviations between nominal and actual load and torque profiles. During the first third of the motion cycle a marked deviation between nominal and actual AP shear load profiles has to be noticed but without any expected measurable effect on the latter wear simulation due to the fact that the load values are well within the peak magnitude of the nominal load amplitude. In conclusion the described testing method will be an important tool to have more realistic knee wear simulations based on load conditions of the knee joint during activities of daily living.

  6. Optical interconnection networks for high-performance computing systems.

    Science.gov (United States)

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  7. Demanding Satisfaction

    Science.gov (United States)

    Oguntoyinbo, Lekan

    2010-01-01

    It was the kind of crisis most universities dread. In November 2006, a group of minority student leaders at Indiana University-Purdue University Indianapolis (IUPUI) threatened to sue the university if administrators did not heed demands that included providing more funding for multicultural student groups. This article discusses how this threat…

  8. High performance computing in power and energy systems

    CERN Document Server

    Khaitan, Siddhartha Kumar

    2012-01-01

    The twin challenge of meeting global energy demands in the face of growing economies and populations and restricting greenhouse gas emissions is one of the most daunting ones that humanity has ever faced. Smart electrical generation and distribution infrastructure will play a crucial role in meeting these challenges. We would  need to develop capabilities to handle large volumes of data generated by the power system components like PMUs, DFRs and other data acquisition devices as well as by the capacity to process these data at high resolution via multi-scale and multi-period simulations, casc

  9. Computer Security: SAHARA - Security As High As Reasonably Achievable

    CERN Multimedia

    Stefan Lueders, Computer Security Team

    2015-01-01

    History has shown us time and again that our computer systems, computing services and control systems have digital security deficiencies. Too often we deploy stop-gap solutions and improvised hacks, or we just accept that it is too late to change things.    In my opinion, this blatantly contradicts the professionalism we show in our daily work. Other priorities and time pressure force us to ignore security or to consider it too late to do anything… but we can do better. Just look at how “safety” is dealt with at CERN! “ALARA” (As Low As Reasonably Achievable) is the objective set by the CERN HSE group when considering our individual radiological exposure. Following this paradigm, and shifting it from CERN safety to CERN computer security, would give us “SAHARA”: “Security As High As Reasonably Achievable”. In other words, all possible computer security measures must be applied, so long as ...

  10. High-order hydrodynamic algorithms for exascale computing

    Energy Technology Data Exchange (ETDEWEB)

    Morgan, Nathaniel Ray [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-02-05

    Hydrodynamic algorithms are at the core of many laboratory missions ranging from simulating ICF implosions to climate modeling. The hydrodynamic algorithms commonly employed at the laboratory and in industry (1) typically lack requisite accuracy for complex multi-material vortical flows and (2) are not well suited for exascale computing due to poor data locality and poor FLOP/memory ratios. Exascale computing requires advances in both computer science and numerical algorithms. We propose to research the second requirement and create a new high-order hydrodynamic algorithm that has superior accuracy, excellent data locality, and excellent FLOP/memory ratios. This proposal will impact a broad range of research areas including numerical theory, discrete mathematics, vorticity evolution, gas dynamics, interface instability evolution, turbulent flows, fluid dynamics and shock driven flows. If successful, the proposed research has the potential to radically transform simulation capabilities and help position the laboratory for computing at the exascale.

  11. Visualization and Data Analysis for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  12. High performance computing and communications: FY 1997 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-12-01

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  13. An Experimental QoE Performance Study for the Efficient Transmission of High Demanding Traffic over an Ad Hoc Network Using BATMAN

    Directory of Open Access Journals (Sweden)

    Ramon Sanchez-Iborra

    2015-01-01

    Full Text Available Multimedia communications are attracting great attention from the research, industry, and end-user communities. The latter are increasingly calling for higher levels of quality and the possibility of consuming multimedia content from a plethora of devices at their disposal. Clearly, the most appealing gadgets are those that communicate wirelessly to access these services. However, current wireless technologies raise severe concerns about supporting extremely demanding services such as real-time multimedia transmissions. This paper evaluates, from QoE and QoS perspectives, the capability of the ad hoc routing protocol BATMAN to support Voice over IP and video traffic. To this end, two test-benches were proposed, namely, a real (emulated) testbed and a simulation framework. Additionally, a series of modifications to both protocols' parameter settings and to the video-stream characteristics is proposed, which contributes to further improving the multimedia quality perceived by users. The performance of the widely used protocol OLSR is also evaluated in detail to compare it with BATMAN. From the results, a notably high correlation between real experimentation and computer simulation outcomes was observed. It was also found that, with the proper configuration, BATMAN is able to transmit several QCIF video-streams and VoIP calls with high quality. In addition, BATMAN outperforms OLSR in supporting multimedia traffic in both experimental and simulated environments.

  14. Challenges of high dam construction to computational mechanics

    Institute of Scientific and Technical Information of China (English)

    ZHANG Chuhan

    2007-01-01

    The current situation and growing prospects of China's hydro-power development and high dam construction are reviewed, with emphasis on key issues for the safety evaluation of large dams and hydro-power plants, especially those associated with the application of state-of-the-art computational mechanics. These include, but are not limited to: stress and stability analysis of dam foundations under external loads; earthquake behavior of dam-foundation-reservoir systems; mechanical properties of mass concrete for dams; high-velocity flow and energy dissipation for high dams; scientific and technical problems of hydro-power plants and underground structures; and newly developed types of dams, namely Roller-Compacted Concrete (RCC) dams and Concrete-Face Rockfill (CFR) dams. Some examples demonstrating successful use of computational mechanics in high dam engineering are given, including seismic nonlinear analysis of arch dam foundations, nonlinear fracture analysis of arch dams under reservoir loads, and failure analysis of arch dam foundations. To make more use of computational mechanics in high dam engineering, it is pointed out that much future research is necessary, covering different computational methods, numerical models and solution schemes, and verification through experimental tests and field measurements.

  15. High performance computing and communications: FY 1996 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-05-16

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

  16. A High-Performance Communication Service for Parallel Servo Computing

    Directory of Open Access Journals (Sweden)

    Cheng Xin

    2010-11-01

    Full Text Available The complexity of algorithms for servo control in multi-dimensional, ultra-precise stage applications has made multi-processor parallel computing technology necessary. Considering the specific communication requirements of parallel servo computing, we propose a communication service scheme based on the VME bus, which provides high-performance data transmission and precise synchronization-trigger support for the processors involved. The communication service is implemented on both the standard VME bus and a user-defined Internal Bus (IB), and can be redefined online. This paper introduces the parallel servo computing architecture and the communication service, describes the structure and implementation details of each module in the service, and finally provides a data transmission model and its analysis. Experimental results show that the communication service can provide high-speed data transmission with sub-nanosecond-level transmission latency error, and synchronous triggering with nanosecond-level synchronization error. Moreover, the performance of the communication service is not affected by an increasing number of processors.

  17. ABOUT THE SUITABILITY OF CLOUDS IN HIGH-PERFORMANCE COMPUTING

    Directory of Open Access Journals (Sweden)

    Harald Richter

    2016-01-01

    Full Text Available Cloud computing has become the ubiquitous computing and storage paradigm. It is also attractive for scientists, because they no longer have to care for their own IT infrastructure, but can outsource it to a Cloud Service Provider of their choice. However, for the case of High-Performance Computing (HPC) in a cloud, as it is needed in simulations or for Big Data analysis, things get more intricate, because HPC codes must stay highly efficient, even when executed by many virtual cores (vCPUs). Older clouds or new standard clouds can fulfil this only under special precautions, which are given in this article. The results can be extrapolated to cloud OSes other than OpenStack and to codes other than OpenFOAM, which were used as examples.

  18. High-performance computing, high-speed networks, and configurable computing environments: progress toward fully distributed computing.

    Science.gov (United States)

    Johnston, W E; Jacobson, V L; Loken, S C; Robertson, D W; Tierney, B L

    1992-01-01

    The next several years will see the maturing of a collection of technologies that will enable fully and transparently distributed computing environments. Networks will be used to configure independent computing, storage, and I/O elements into "virtual systems" that are optimal for solving a particular problem. This environment will make the most powerful computing systems those that are logically assembled from network-based components and will also make those systems available to a widespread audience. Anticipating that the necessary technology and communications infrastructure will be available in the next 3 to 5 years, we are developing and demonstrating prototype applications that test and exercise the currently available elements of this configurable environment. The Lawrence Berkeley Laboratory (LBL) Information and Computing Sciences and Research Medicine Divisions have collaborated with the Pittsburgh Supercomputer Center to demonstrate one distributed application that illuminates the issues and potential of using networks to configure virtual systems. This application allows the interactive visualization of large three-dimensional (3D) scalar fields (voxel data sets) by using a network-based configuration of heterogeneous supercomputers and workstations. The specific test case is visualization of 3D magnetic resonance imaging (MRI) data. The virtual system architecture consists of a Connection Machine-2 (CM-2) that performs surface reconstruction from the voxel data, a Cray Y-MP that renders the resulting geometric data into an image, and a workstation that provides the display of the image and the user interface for specifying the parameters for the geometry generation and 3D viewing. These three elements are configured into a virtual system by using several different network technologies. This paper reviews the current status of the software, hardware, and communications technologies that are needed to enable this configurable environment. These

  19. Democratizing Computer Science Knowledge: Transforming the Face of Computer Science through Public High School Education

    Science.gov (United States)

    Ryoo, Jean J.; Margolis, Jane; Lee, Clifford H.; Sandoval, Cueponcaxochitl D. M.; Goode, Joanna

    2013-01-01

    Despite the fact that computer science (CS) is the driver of technological innovations across all disciplines and aspects of our lives, including participatory media, high school CS too commonly fails to incorporate the perspectives and concerns of low-income students of color. This article describes a partnership program -- Exploring Computer…

  20. Enabling High-Performance Computing as a Service

    KAUST Repository

    AbdelBaky, Moustafa

    2012-10-01

    With the right software infrastructure, clouds can provide scientists with as-a-service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a representative HPC application. © 2012 IEEE.

  1. High Performance Computing tools for the Integrated Tokamak Modelling project

    Energy Technology Data Exchange (ETDEWEB)

    Guillerminet, B., E-mail: bernard.guillerminet@cea.f [Association Euratom-CEA sur la Fusion, IRFM, DSM, CEA Cadarache (France); Plasencia, I. Campos [Instituto de Fisica de Cantabria (IFCA), CSIC, Santander (Spain); Haefele, M. [Universite Louis Pasteur, Strasbourg (France); Iannone, F. [EURATOM/ENEA Fusion Association, Frascati (Italy); Jackson, A. [University of Edinburgh (EPCC) (United Kingdom); Manduchi, G. [EURATOM/ENEA Fusion Association, Padova (Italy); Plociennik, M. [Poznan Supercomputing and Networking Center (PSNC) (Poland); Sonnendrucker, E. [Universite Louis Pasteur, Strasbourg (France); Strand, P. [Chalmers University of Technology (Sweden); Owsiak, M. [Poznan Supercomputing and Networking Center (PSNC) (Poland)

    2010-07-15

    Fusion modelling and simulation are very challenging, and the associated High Performance Computing issues are addressed here. A toolset for job launching and scheduling, data communication and visualization has been developed by the EUFORIA project and used with a plasma edge simulation code.

  2. Artificial Intelligence and the High School Computer Curriculum.

    Science.gov (United States)

    Dillon, Richard W.

    1993-01-01

    Describes a four-part curriculum that can serve as a model for incorporating artificial intelligence (AI) into the high school computer curriculum. The model includes examining questions fundamental to AI, creating and designing an expert system, language processing, and creating programs that integrate machine vision with robotics and…

  3. Seeking Solution: High-Performance Computing for Science. Background Paper.

    Science.gov (United States)

    Congress of the U.S., Washington, DC. Office of Technology Assessment.

    This is the second publication from the Office of Technology Assessment's assessment on information technology and research, which was requested by the House Committee on Science and Technology and the Senate Committee on Commerce, Science, and Transportation. The first background paper, "High Performance Computing & Networking for…

  4. Computer science of the high performance; Informatica del alto rendimiento

    Energy Technology Data Exchange (ETDEWEB)

    Moraleda, A.

    2008-07-01

    High performance computing is taking shape as a powerful accelerator of the innovation process, drastically reducing the waiting times for access to results and findings in a growing number of processes and activities as complex and important as medicine, genetics, pharmacology, the environment, natural resource management or the simulation of complex processes in a wide variety of industries. (Author)

  5. Replica-Based High-Performance Tuple Space Computing

    DEFF Research Database (Denmark)

    Andric, Marina; De Nicola, Rocco; Lluch Lafuente, Alberto

    2015-01-01

    We present the tuple-based coordination language RepliKlaim, which enriches Klaim with primitives for replica-aware coordination. Our overall goal is to offer suitable solutions to the challenging problems of data distribution and locality in large-scale high performance computing. In particular,...

  6. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2015-01-01

    A continuation of Contemporary High Performance Computing: From Petascale toward Exascale, this second volume continues the discussion of HPC flagship systems, major application workloads, facilities, and sponsors. The book includes figures and pictures that capture the state of existing systems: pictures of buildings, systems in production, floorplans, and many block diagrams and charts to illustrate system design and performance.

  7. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    Science.gov (United States)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial proposes to be a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and directions in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered as a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and the highly parallel massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large-scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  8. High metabolic demand in neural tissues: Information and control theory perspectives on the synergism between rate and stability.

    Science.gov (United States)

    Wallace, Rodrick

    2016-11-21

    Evolutionary process has selected for inherently unstable systems in higher animals that can react swiftly to changing patterns of threat or opportunity, for example blood pressure, the immune response, and gene expression. However, these require continual strict regulation: uncontrolled blood pressure is fatal, immune cells can attack 'self' tissues, and improper gene expression triggers developmental disorders. Consciousness in particular demands high rates of metabolic free energy to both operate and regulate the fundamental biological machinery: both the 'stream of consciousness' and the 'riverbanks' that confine it to useful realms are constructed and reconstructed moment-by-moment in response to highly dynamic internal and environmental circumstances. We develop powerful necessary-conditions models for such phenomena based on the Data Rate Theorem, which links control and information theories in the context of inherent instability. The synergism between conscious action and its regulation underlies the ten-fold higher rate of metabolic energy consumption in human neural tissues and implies a close, culturally modulated relation between sleep disorders and certain psychopathologies.
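
    For readers unfamiliar with the Data Rate Theorem invoked here, a standard statement from the control-theory literature (a hedged paraphrase, not a quotation from this paper) bounds the feedback information rate needed to stabilize an inherently unstable linear system:

```latex
% Standard Data Rate Theorem (control-theory literature; hedged paraphrase, not from the paper).
% For a linear plant x_{t+1} = A x_t + B u_t controlled over a channel carrying R bits per step,
% stabilization is possible only if the rate exceeds the total expansion of the unstable modes:
\[
  R \;>\; \sum_{i\,:\,\lvert\lambda_i(A)\rvert \ge 1} \log_2 \lvert \lambda_i(A) \rvert ,
\]
% where the \lambda_i(A) are the eigenvalues of A. Greater inherent instability therefore demands
% a higher information rate -- the rate/stability synergism the abstract builds on.
```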

  9. Hot Chips and Hot Interconnects for High End Computing Systems

    Science.gov (United States)

    Saini, Subhash

    2005-01-01

    I will discuss several processors: 1. the Cray proprietary processor used in the Cray X1; 2. the IBM Power 3 and Power 4 used in IBM SP 3 and SP 4 systems; 3. the Intel Itanium and Xeon, used in SGI Altix systems and clusters respectively; 4. the IBM System-on-a-Chip used in IBM BlueGene/L; 5. the HP Alpha EV68 processor used in the DOE ASCI Q cluster; 6. the SPARC64 V processor, which is used in the Fujitsu PRIMEPOWER HPC2500; 7. an NEC proprietary processor, which is used in the NEC SX-6/7; 8. the Power 4+ processor, which is used in the Hitachi SR11000; 9. the NEC proprietary processor used in the Earth Simulator. The IBM POWER5 and Red Storm computing systems will also be discussed. The architectures of these processors will first be presented, followed by interconnection networks and a description of high-end computer systems based on these processors and networks. The performance of various hardware/programming model combinations will then be compared, based on the latest NAS Parallel Benchmark results (MPI, OpenMP/HPF and hybrid MPI + OpenMP). The tutorial will conclude with a discussion of general trends in the field of high performance computing (quantum computing, DNA computing, cellular engineering, and neural networks).

  10. High resolution computed tomography for peripheral facial nerve paralysis

    Energy Technology Data Exchange (ETDEWEB)

    Koester, O.; Straehler-Pohl, H.J.

    1987-01-01

    High resolution computed tomographic examinations of the petrous bones were performed on 19 patients with confirmed peripheral facial nerve paralysis. High resolution CT provides accurate information regarding the extent, and usually the type, of the pathological process; this can be accurately localised with a view to possible surgical treatment. The examination also differentiates these processes from idiopathic paresis, which showed no radiological changes. Destruction of the petrous bone, without facial nerve symptoms, makes early suitable treatment mandatory.

  11. BEAGLE: an application programming interface and high-performance computing library for statistical phylogenetics.

    Science.gov (United States)

    Ayres, Daniel L; Darling, Aaron; Zwickl, Derrick J; Beerli, Peter; Holder, Mark T; Lewis, Paul O; Huelsenbeck, John P; Ronquist, Fredrik; Swofford, David L; Cummings, Michael P; Rambaut, Andrew; Suchard, Marc A

    2012-01-01

    Phylogenetic inference is fundamental to our understanding of most aspects of the origin and evolution of life, and in recent years, there has been a concentration of interest in statistical approaches such as Bayesian inference and maximum likelihood estimation. Yet, for large data sets and realistic or interesting models of evolution, these approaches remain computationally demanding. High-throughput sequencing can yield data for thousands of taxa, but scaling to such problems using serial computing often necessitates the use of nonstatistical or approximate approaches. The recent emergence of graphics processing units (GPUs) provides an opportunity to leverage their excellent floating-point computational performance to accelerate statistical phylogenetic inference. A specialized library for phylogenetic calculation would allow existing software packages to make more effective use of available computer hardware, including GPUs. Adoption of a common library would also make it easier for other emerging computing architectures, such as field programmable gate arrays, to be used in the future. We present BEAGLE, an application programming interface (API) and library for high-performance statistical phylogenetic inference. The API provides a uniform interface for performing phylogenetic likelihood calculations on a variety of compute hardware platforms. The library includes a set of efficient implementations and can currently exploit hardware including GPUs using NVIDIA CUDA, central processing units (CPUs) with Streaming SIMD Extensions and related processor supplementary instruction sets, and multicore CPUs via OpenMP. To demonstrate the advantages of a common API, we have incorporated the library into several popular phylogenetic software packages. The BEAGLE library is free open source software licensed under the Lesser GPL and available from http://beagle-lib.googlecode.com. An example client program is available as public domain software.
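
    To illustrate the kind of per-site likelihood kernel that BEAGLE-style libraries offload to GPUs and vector units, the sketch below implements a single Felsenstein pruning step in plain NumPy. The two-state model, tree fragment, and function names are illustrative assumptions of mine; this is not the BEAGLE API.

```python
# Toy Felsenstein pruning step: combine the conditional likelihoods of two child
# nodes into the parent's conditionals, given per-branch transition matrices.
# This is the kernel that GPU phylogenetics libraries accelerate; the two-state
# model below is an illustrative assumption, not the BEAGLE API.
import numpy as np

def pruning_step(L_left, L_right, P_left, P_right):
    """L_*: (sites, states) child conditionals; P_*: (states, states) transition matrices."""
    # For each site and parent state i: sum_j P[i, j] * L_child[site, j]
    return (L_left @ P_left.T) * (L_right @ P_right.T)

# Two-state symmetric model, two sites, two leaves with observed states.
P = np.array([[0.9, 0.1],
              [0.1, 0.9]])
leaf_a = np.array([[1.0, 0.0],    # site 1: state 0 observed
                   [1.0, 0.0]])   # site 2: state 0 observed
leaf_b = np.array([[0.0, 1.0],    # site 1: state 1 observed
                   [1.0, 0.0]])   # site 2: state 0 observed

parent = pruning_step(leaf_a, leaf_b, P, P)
site_likelihoods = parent @ np.array([0.5, 0.5])   # uniform root frequencies
print(parent, np.log(site_likelihoods).sum())
```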

  12. Component-based software for high-performance scientific computing

    Science.gov (United States)

    Alexeev, Yuri; Allan, Benjamin A.; Armstrong, Robert C.; Bernholdt, David E.; Dahlgren, Tamara L.; Gannon, Dennis; Janssen, Curtis L.; Kenny, Joseph P.; Krishnan, Manojkumar; Kohl, James A.; Kumfert, Gary; Curfman McInnes, Lois; Nieplocha, Jarek; Parker, Steven G.; Rasmussen, Craig; Windus, Theresa L.

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  13. Achieving high performance in numerical computations on RISC workstations and parallel systems

    Energy Technology Data Exchange (ETDEWEB)

    Goedecker, S. [Max-Planck Inst. for Solid State Research, Stuttgart (Germany); Hoisie, A. [Los Alamos National Lab., NM (United States)

    1997-08-20

    The nominal peak speeds of both serial and parallel computers are rising rapidly. At the same time, however, it is becoming increasingly difficult to extract a significant fraction of this high peak speed from modern computer architectures. In this tutorial the authors give scientists and engineers involved in numerically demanding calculations and simulations the basic knowledge necessary to write reasonably efficient programs. The basic principles are rather simple and the possible rewards large. Writing a program by taking into account optimization techniques related to the computer architecture can significantly speed up your program, often by factors of 10--100. As such, optimizing a program can, for instance, be a much better solution than buying a faster computer. If a few basic optimization principles are applied during program development, the additional time needed for obtaining an efficient program is practically negligible. In-depth optimization is usually only needed for a few subroutines or kernels, and the effort involved is therefore also acceptable.
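
    As a small, hedged illustration of the speedup factors mentioned above (my own toy example, not taken from the tutorial; the exact ratio is machine-dependent), the same reduction is written first as an interpreted element-by-element loop and then as a single vectorized call that runs in optimized compiled code.

```python
# Same computation, two ways: an element-wise interpreted loop versus a single
# vectorized call. The factor-of-10..100 gap quoted in the abstract is typical
# on commodity hardware, but the exact ratio depends on the machine.
import time
import numpy as np

x = np.random.rand(5_000_000)

t0 = time.perf_counter()
s_loop = 0.0
for v in x:                      # interpreted loop, one element at a time
    s_loop += v * v
t1 = time.perf_counter()

s_vec = float(np.dot(x, x))      # vectorized, optimized compiled kernel
t2 = time.perf_counter()

print(f"loop: {t1 - t0:.3f}s  vectorized: {t2 - t1:.3f}s  "
      f"speedup ~ {(t1 - t0) / (t2 - t1):.0f}x  "
      f"results agree: {abs(s_loop - s_vec) < 1e-6 * s_vec}")
```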

  14. Integrating Embedded Computing Systems into High School and Early Undergraduate Education

    Science.gov (United States)

    Benson, B.; Arfaee, A.; Choon Kim; Kastner, R.; Gupta, R. K.

    2011-01-01

    Early exposure to embedded computing systems is crucial for students to be prepared for the embedded computing demands of today's world. However, exposure to systems knowledge often comes too late in the curriculum to stimulate students' interests and to provide a meaningful difference in how they direct their choice of electives for future…

  15. Overview of Parallel Platforms for Common High Performance Computing

    Directory of Open Access Journals (Sweden)

    T. Fryza

    2012-04-01

    Full Text Available The paper deals with various parallel platforms used for high performance computing in the signal processing domain. More precisely, methods exploiting multicore central processing units, such as the Message Passing Interface and OpenMP, are taken into account. The properties of these programming methods are experimentally demonstrated on a fast Fourier transform and a discrete cosine transform, and they are compared with the possibilities of MATLAB's built-in functions and of Texas Instruments digital signal processors with very long instruction word architectures. New FFT and DCT implementations were proposed and tested. The implementations were compared with CPU-based computing methods and with the Texas Instruments digital signal processing library on C6747 floating-point DSPs. An optimal combination of computing methods in the signal processing domain and the implementation of new, fast routines are proposed as well.
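
    As a rough point of reference for the FFT/DCT benchmarks described above, the sketch below times both transforms with the standard scipy.fft routines on a multicore CPU. The transform size, worker counts, and use of scipy are my assumptions for illustration, not a reproduction of the paper's experiments.

```python
# Rough analogue of an FFT/DCT benchmark: time both transforms on a multicore
# CPU using standard library routines. Sizes and worker counts are illustrative.
import time
import numpy as np
from scipy.fft import fft, dct, set_workers

x = np.random.rand(2**22)

def bench(transform, signal, workers):
    t0 = time.perf_counter()
    with set_workers(workers):          # let scipy.fft use several threads
        transform(signal)
    return time.perf_counter() - t0

for workers in (1, 4):
    print(f"workers={workers}: FFT {bench(fft, x, workers):.3f}s, "
          f"DCT-II {bench(dct, x, workers):.3f}s")
```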

  16. School of Analytic Computing in Theoretical High-Energy Physics

    CERN Document Server

    2013-01-01

    In recent years, huge progress has been made on computing rates for production processes of direct relevance to experiments at the Large Hadron Collider (LHC). Crucial to that remarkable advance has been our understanding and ability to compute scattering amplitudes. The aim of the School is to bring together young theorists working on the phenomenology of LHC physics with those working in more formal areas, and to provide them with the analytic tools to compute amplitudes in gauge theories. The school is addressed to Ph.D. students and post-docs in Theoretical High-Energy Physics. 30 hours of lectures will be delivered over the 5 days of the School. A Poster Session will be held, at which students are welcome to present their research topics.

  17. School of Analytic Computing in Theoretical High-Energy Physics

    CERN Document Server

    2015-01-01

    In recent years, huge progress has been made on computing rates for production processes of direct relevance to experiments at the Large Hadron Collider (LHC). Crucial to that remarkable advance has been our understanding and ability to compute scattering amplitudes and cross sections. The aim of the School is to bring together young theorists working on the phenomenology of LHC physics with those working in more formal areas, and to provide them with the analytic tools to compute amplitudes in gauge theories. The school is addressed to Ph.D. students and post-docs in Theoretical High-Energy Physics. 30 hours of lectures and 4 hours of tutorials will be delivered over the 6 days of the School.

  18. Parallel computation of seismic analysis of high arch dam

    Institute of Scientific and Technical Information of China (English)

    Chen Houqun; Ma Huaifa; Tu Jin; Cheng Guangqing; Tang Juzhen

    2008-01-01

    Parallel computation programs are developed for three-dimensional meso-mechanics analysis of fully-graded dam concrete and for seismic response analysis of high arch dams, based on the Parallel Finite Element Program Generator (PFEPG). The computational algorithms for numerical simulation of the meso-structure of concrete specimens were studied. Taking into account damage evolution, static preload, strain-rate effects, and the heterogeneity of the meso-structure of dam concrete, the fracture processes of damage evolution and the configuration of cracks can be directly simulated. The seismic response analysis of arch dams involves all of the following factors: nonlinear contact due to the opening and slipping of the contraction joints, energy dispersion of the far-field foundation, dynamic interactions of the dam-foundation-reservoir system, and the combined effects of seismic action with all static loads. The correctness, reliability and efficiency of the two parallel computational programs are verified with practical illustrations.

  19. Computing trends using graphic processor in high energy physics

    CERN Document Server

    Niculescu, Mihai

    2011-01-01

    One of the main challenges in High Energy Physics is to make fast analysis of the large amounts of experimental and simulated data. At LHC-CERN one p-p event is approximately 1 MB in size. The time taken to analyze the data and obtain fast results depends on high computational power. The main advantage of using GPU (Graphics Processing Unit) programming over traditional CPU programming is that graphics cards bring a lot of computing power at a very low price. Today a huge number of applications (scientific, financial, etc.) have begun to be ported to or developed for GPUs, including Monte Carlo tools and data analysis tools for High Energy Physics. In this paper, we present the current status and trends in HEP computing using GPUs.

  20. The design of linear algebra libraries for high performance computers

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, J.J. [Tennessee Univ., Knoxville, TN (United States). Dept. of Computer Science]|[Oak Ridge National Lab., TN (United States); Walker, D.W. [Oak Ridge National Lab., TN (United States)

    1993-08-01

    This paper discusses the design of linear algebra libraries for high performance computers. Particular emphasis is placed on the development of scalable algorithms for MIMD distributed memory concurrent computers. A brief description of the EISPACK, LINPACK, and LAPACK libraries is given, followed by an outline of ScaLAPACK, which is a distributed memory version of LAPACK currently under development. The importance of block-partitioned algorithms in reducing the frequency of data movement between different levels of hierarchical memory is stressed. The use of such algorithms helps reduce the message startup costs on distributed memory concurrent computers. Other key ideas in our approach are the use of distributed versions of the Level 3 Basic Linear Algebra Subprograms (BLAS) as computational building blocks, and the use of Basic Linear Algebra Communication Subprograms (BLACS) as communication building blocks. Together the distributed BLAS and the BLACS can be used to construct higher-level algorithms, and hide many details of the parallelism from the application developer. The block-cyclic data distribution is described, and adopted as a good way of distributing block-partitioned matrices. Block-partitioned versions of the Cholesky and LU factorizations are presented, and optimization issues associated with the implementation of the LU factorization algorithm on distributed memory concurrent computers are discussed, together with its performance on the Intel Delta system. Finally, approaches to the design of library interfaces are reviewed.
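
    To make the block-cyclic distribution described above concrete, the following sketch (my own illustration, not ScaLAPACK code) maps a global matrix element to the process-grid coordinates that own it, which is the bookkeeping the PBLAS/BLACS layers perform internally.

```python
# Block-cyclic ownership map: global matrix block (bi, bj) is owned by process
# (bi mod Pr, bj mod Pc) on a Pr x Pc process grid. This mirrors the data
# distribution ScaLAPACK assumes; the code itself is an illustrative sketch.

def owner_of_block(bi, bj, pr, pc):
    """Process-grid coordinates owning global block (bi, bj)."""
    return bi % pr, bj % pc

def owner_of_element(i, j, mb, nb, pr, pc):
    """Owner of global element (i, j) when blocks are mb x nb."""
    return owner_of_block(i // mb, j // nb, pr, pc)

# Example: an 8x8 matrix in 2x2 blocks over a 2x2 process grid.
grid = [[owner_of_element(i, j, mb=2, nb=2, pr=2, pc=2) for j in range(8)]
        for i in range(8)]
for row in grid:
    print(row)   # each entry is the (row, col) of the owning process
```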

  1. Federal Plan for High-End Computing. Report of the High-End Computing Revitalization Task Force (HECRTF)

    Science.gov (United States)

    2004-07-01

    and other energy feedstock more efficiently. Signal Transduction Pathways: develop atomic-level computational models and simulations of complex...biomolecules to explain and predict cell signal pathways and their disrupters; yield understanding of the initiation of cancer and other diseases and their...calculations also introduce a requirement for a high degree of internodal connectivity (high bisection bandwidth). These needs cannot be met simply by

  2. On-Demand Mobile Cloud Computing Security Service; 按需供给的移动云计算动态安全服务

    Institute of Scientific and Technical Information of China (English)

    陈小华; 董振江; 金怡爱

    2015-01-01

    This paper presents an on-demand mobile cloud computing security service solution that improves the utilization of security resources in the cloud environment and fully safeguards the security of cloud services. Targeting services delivered through cloud computing in the mobile Internet environment, the solution meets the complex, changing and diverse security needs of different users and services; it can dynamically and in real time sense and analyze the security situation and adjust security enforcement policies, reducing users' security expenditure as far as possible and providing differentiated, efficient security protection mechanisms for mobile Internet services of different users in different security states.

  3. High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science

    Directory of Open Access Journals (Sweden)

    Florin Pop

    2014-01-01

    Full Text Available Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios like subatomic dimensions, high energy, and low absolute temperature are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high-accuracy analysis, experimental validation, and visualization. High performance computing support offers the possibility of making simulations at large scale and in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP), analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
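
    As a minimal example of the Monte Carlo estimation techniques surveyed there (my own toy, not code from the paper or from any HEP toolchain), the sketch below estimates a one-dimensional integral and shows the characteristic 1/sqrt(N) shrinkage of the statistical error.

```python
# Minimal Monte Carlo integration: estimate I = integral_0^1 exp(-x^2) dx and
# watch the statistical error shrink roughly as 1/sqrt(N). Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def mc_estimate(n):
    x = rng.random(n)
    f = np.exp(-x**2)
    return f.mean(), f.std(ddof=1) / np.sqrt(n)   # estimate and its standard error

for n in (10**3, 10**5, 10**7):
    est, err = mc_estimate(n)
    print(f"N={n:>8}: I ~ {est:.6f} +/- {err:.6f}")
```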

  4. High Resolution Muon Computed Tomography at Neutrino Beam Facilities

    CERN Document Server

    Suerfu, Burkhant

    2015-01-01

    X-ray computed tomography (CT) has an indispensable role in constructing 3D images of objects made from light materials. However, limited by absorption coefficients, X-rays cannot deeply penetrate materials such as copper and lead. Here we show via simulation that muon beams can provide high resolution tomographic images of dense objects and of structures within the interior of dense objects. The effects of resolution broadening from multiple scattering diminish with increasing muon momentum. As the momentum of the muon increases, the contrast of the image goes down and therefore requires higher resolution in the muon spectrometer to resolve the image. The variance of the measured muon momentum reaches a minimum and then increases with increasing muon momentum. The impact of the increase in variance is to require a higher integrated muon flux to reduce fluctuations. The flux requirements and level of contrast needed for high resolution muon computed tomography are well matched to the muons produced in the pio...

  5. High performance computing for classic gravitational N-body systems

    CERN Document Server

    Capuzzo-Dolcetta, Roberto

    2009-01-01

    The role of gravity is crucial in astrophysics. It determines the evolution of any system, over an enormous range of time and space scales. Astronomical stellar systems, composed of N interacting bodies, represent examples of self-gravitating systems, usually treatable with the aid of Newtonian gravity except in particular cases. In this note I will briefly discuss some of the open problems in the dynamical study of classic self-gravitating N-body systems, over the astronomical range of N. I will also point out how modern research in this field necessarily requires heavy use of large-scale computations, due to the simultaneous requirements of high precision and high computational speed.
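
    The quadratic cost driving those large-scale computations comes from evaluating all pairwise forces directly. A minimal, non-optimized direct-summation kernel (my illustration; softening and units are arbitrary choices) looks like the following.

```python
# Direct-summation Newtonian accelerations: O(N^2) pairwise interactions, the
# cost that motivates tree codes, special-purpose hardware and parallel N-body
# codes. The softening parameter and units are illustrative choices.
import numpy as np

def accelerations(pos, mass, G=1.0, softening=1e-3):
    """pos: (N, 3) positions, mass: (N,) masses -> (N, 3) accelerations."""
    diff = pos[None, :, :] - pos[:, None, :]          # r_j - r_i, shape (N, N, 3)
    dist2 = (diff**2).sum(-1) + softening**2
    np.fill_diagonal(dist2, np.inf)                   # exclude self-interaction
    inv_r3 = dist2**-1.5
    return G * (diff * (mass[None, :, None] * inv_r3[:, :, None])).sum(axis=1)

rng = np.random.default_rng(1)
p = rng.normal(size=(256, 3))
m = np.ones(256) / 256
print(accelerations(p, m)[:3])
```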

  6. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing operations have been lighter as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and on improvements in data access and flexibility in using resources. Operations Office: data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of 2011 data being processed at the sites. Figure 1: MC production and processing was more in demand, with a peak of over 750 million GEN-SIM events in a single month. Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week, with peaks close to 1.2 PB. Figure 3: The volume of data moved between CMS sites in the last six months. Tape utilisation was a focus for the operations teams, with frequent deletion campaigns moving deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  7. Opportunities and challenges of high-performance computing in chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Guest, M.F.; Kendall, R.A.; Nichols, J.A. [eds.] [and others]

    1995-06-01

    The field of high-performance computing is developing at an extremely rapid pace. Massively parallel computers offering orders of magnitude increase in performance are under development by all the major computer vendors. Many sites now have production facilities that include massively parallel hardware. Molecular modeling methodologies (both quantum and classical) are also advancing at a brisk pace. The transition of molecular modeling software to a massively parallel computing environment offers many exciting opportunities, such as the accurate treatment of larger, more complex molecular systems in routine fashion, and a viable, cost-effective route to study physical, biological, and chemical 'grand challenge' problems that are impractical on traditional vector supercomputers. This will have a broad effect on all areas of basic chemical science at academic research institutions and chemical, petroleum, and pharmaceutical industries in the United States, as well as chemical waste and environmental remediation processes. But, this transition also poses significant challenges: architectural issues (SIMD, MIMD, local memory, global memory, etc.) remain poorly understood and software development tools (compilers, debuggers, performance monitors, etc.) are not well developed. In addition, researchers that understand and wish to pursue the benefits offered by massively parallel computing are often hindered by lack of expertise, hardware, and/or information at their site. A conference and workshop organized to focus on these issues was held at the National Institute of Health, Bethesda, Maryland (February 1993). This report is the culmination of the organized workshop. The main conclusion: a drastic acceleration in the present rate of progress is required for the chemistry community to be positioned to exploit fully the emerging class of Teraflop computers, even allowing for the significant work to date by the community in developing software for parallel architectures.

  8. Intel: High Throughput Computing Collaboration: A CERN openlab / Intel collaboration

    CERN Document Server

    CERN. Geneva

    2015-01-01

    The Intel/CERN High Throughput Computing Collaboration studies the application of upcoming Intel technologies to the very challenging environment of the LHC trigger and data-acquisition systems. These systems will need to transport and process many terabits of data every second, in some cases with tight latency constraints. Parallelisation and tight integration of accelerators and classical CPU via Intel's OmniPath fabric are the key elements in this project.

  9. The role of interpreters in high performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Naumann, Axel; /CERN; Canal, Philippe; /Fermilab

    2008-01-01

    Compiled code is fast, interpreted code is slow. There is not much we can do about it, and it's the reason why interpreter use in high performance computing is usually restricted to job submission. We show where interpreters make sense even in the context of analysis code, and what aspects have to be taken into account to make this combination a success.

  10. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    Energy Technology Data Exchange (ETDEWEB)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1988-08-01

    Fermilab's Advanced Computer Program (ACP) has been developing highly cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN- or C-programmable pipelined 20 MFlops (peak), 10 MByte single board computer. These are plugged into a 16-port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256-node, 5 GFlop system is under construction. 10 refs., 7 figs.

  11. High-Throughput Neuroimaging-Genetics Computational Infrastructure

    Directory of Open Access Journals (Sweden)

    Ivo D Dinov

    2014-04-01

    Full Text Available Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate and disseminate novel scientific methods, computational resources and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval and aggregation. Computational processing involves the necessary software, hardware and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical and phenotypic data and meta-data. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at the University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. A unique feature of this architecture is the Pipeline environment, which integrates the data management, processing, transfer and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating and disseminating complex protocols that utilize diverse suites of software tools and web-services. These pipeline workflows are represented as portable XML objects which transfer the execution instructions and user specifications from the client user machine to remote pipeline servers for distributed computing. Using Alzheimer’s and Parkinson’s data, we provide several examples of translational applications using this infrastructure.

  12. High-throughput neuroimaging-genetics computational infrastructure.

    Science.gov (United States)

    Dinov, Ivo D; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Hobel, Sam; Vespa, Paul; Woo Moon, Seok; Van Horn, John D; Franco, Joseph; Toga, Arthur W

    2014-01-01

    Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining, and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate, and disseminate novel scientific methods, computational resources, and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval, and aggregation. Computational processing involves the necessary software, hardware, and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical, and phenotypic data and meta-data. Data mining refers to the process of automatically extracting data features, characteristics and associations, which are not readily visible by human exploration of the raw dataset. Result interpretation includes scientific visualization, community validation of findings and reproducible findings. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at the University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. In addition, the institute provides a large number of software tools for image and shape analysis, mathematical modeling, genomic sequence processing, and scientific visualization. A unique feature of this architecture is the Pipeline environment, which integrates the data management, processing, transfer, and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize

  13. High-Precision Computation: Mathematical Physics and Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, D. H.; Barrio, R.; Borwein, J. M.

    2010-04-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, studies of the fine structure constant, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, experimental mathematics, evaluation of orthogonal polynomials, numerical integration of ODEs, computation of periodic orbits, studies of the splitting of separatrices, detection of strange nonchaotic attractors, Ising theory, quantum field theory, and discrete dynamical systems. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
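
    As a small illustration of the kind of high-precision facility the survey discusses, the sketch below uses the open-source mpmath package (my choice for the example, not one named in the abstract) to evaluate an integral to 50 significant digits and to contrast a cancellation-prone expression in ordinary double precision with its well-conditioned high-precision counterpart.

```python
# High-precision arithmetic with mpmath (one freely available package of the
# kind surveyed; the specific examples are illustrative, not from the paper).
import math
from mpmath import mp, mpf, quad, exp, expm1

mp.dps = 50                                    # work with 50 significant digits

# A definite integral evaluated at the full working precision.
print(quad(lambda x: exp(-x**2), [0, 1]))      # ~0.74682413281242702539946743613185...

# Cancellation: e^x - 1 for tiny x loses many digits in doubles, none in mpmath.
naive = math.exp(1e-12) - 1.0                  # double precision, catastrophic cancellation
accurate = expm1(mpf("1e-12"))                 # high precision, well conditioned
print(naive)
print(accurate)
```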

  14. Integrating Reconfigurable Hardware-Based Grid for High Performance Computing

    Directory of Open Access Journals (Sweden)

    Julio Dondo Gazzano

    2015-01-01

    Full Text Available FPGAs have shown several characteristics that make them very attractive for high performance computing (HPC). The impressive speed-up factors that they are able to achieve, the reduced power consumption, and the easiness and flexibility of the design process with fast iterations between consecutive versions are examples of benefits obtained with their use. However, there are still some difficulties when using reconfigurable platforms as accelerators that need to be addressed: the need for an in-depth application study to identify potential acceleration, the lack of tools for the deployment of computational problems on distributed hardware platforms, and the low portability of components, among others. This work proposes a complete grid infrastructure for distributed high performance computing based on dynamically reconfigurable FPGAs. Besides, a set of services designed to facilitate application deployment is described. An example application and a comparison with other hardware and software implementations are shown. Experimental results show that the proposed architecture offers encouraging advantages for the deployment of high performance distributed applications, simplifying the development process.

  15. High-Performance Java Codes for Computational Fluid Dynamics

    Science.gov (United States)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale, computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  16. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    Science.gov (United States)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enables accurate climate projections. NCI makes available 10+ PB of major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected, large-scale, high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  17. Forecasting demand for “computer-type” products with the Gompertz model and stochastic simulation; “电脑”型产品需求预测的Gompertz模型与随机模拟

    Institute of Scientific and Technical Information of China (English)

    谢小良

    2011-01-01

    “Computer-type” products include mobile phones, televisions, computers and other goods subject to intangible deterioration (obsolescence). Demand for such products is highly volatile and random, and historical data may be invalid or may not exist at all, so forecasting their demand is usually more difficult than for conventional products. This paper introduces the Gompertz model for forecasting the demand for computer-type products and applies it to forecast computer demand in Changsha in 2010. Using computer-based stochastic simulation, the demand for mobile phones in Changsha in 2010 is also forecast, with good results. The study provides a foundation for further research on inventory control of computer-type products.
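
    A minimal sketch of the forecasting approach described in the abstract: fit a Gompertz growth curve N(t) = K·exp(−b·exp(−c·t)) to a short demand history and extrapolate one period ahead, then perturb the forecast stochastically. The data, parameter values, and noise model below are invented for illustration; they are not the Changsha figures from the paper.

```python
# Fit a Gompertz curve N(t) = K * exp(-b * exp(-c * t)) to a short demand
# history and extrapolate one period ahead. Numbers are invented for
# illustration and are not the Changsha data used in the paper.
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, K, b, c):
    return K * np.exp(-b * np.exp(-c * t))

t = np.arange(8)                                                   # 8 observed periods
demand = np.array([12, 19, 30, 44, 58, 70, 79, 85], dtype=float)   # hypothetical units

params, _ = curve_fit(gompertz, t, demand, p0=(100.0, 3.0, 0.5), maxfev=10000)
K, b, c = params
print(f"K={K:.1f}, b={b:.2f}, c={c:.2f}")
print("forecast for next period:", gompertz(8, *params))

# Simple stochastic simulation around the fitted curve (multiplicative noise),
# echoing the paper's use of random simulation to obtain a demand distribution.
rng = np.random.default_rng(0)
samples = gompertz(8, *params) * rng.lognormal(mean=0.0, sigma=0.05, size=10_000)
print("mean and 90% interval:", samples.mean(), np.percentile(samples, [5, 95]))
```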

  18. An integrated communications demand model

    Science.gov (United States)

    Doubleday, C. F.

    1980-11-01

    A computer model of communications demand is being developed to permit dynamic simulations of the long-term evolution of demand for communications media in the U.K. to be made under alternative assumptions about social, economic and technological trends in British Telecom's business environment. The context and objectives of the project and the potential uses of the model are reviewed, and four key concepts in the demand for communications media, around which the model is being structured, are discussed: (1) the generation of communications demand; (2) substitution between media; (3) technological convergence; and (4) competition. Two outline perspectives on the model itself are given.

  19. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both private and public sectors will require robust technologies to protect its computing infrastructure. The research outcomes from this project try to address all these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also perform testing using HA2lloc in a simulation environment and find that the approach is capable of preventing common memory vulnerabilities.

  20. Computational Thermodynamics and Kinetics-Based ICME Framework for High-Temperature Shape Memory Alloys

    Science.gov (United States)

    Arróyave, Raymundo; Talapatra, Anjana; Johnson, Luke; Singh, Navdeep; Ma, Ji; Karaman, Ibrahim

    2015-11-01

    Over the last decade, considerable interest in the development of High-Temperature Shape Memory Alloys (HTSMAs) for solid-state actuation has increased dramatically as key applications in the aerospace and automotive industry demand actuation temperatures well above those of conventional SMAs. Most of the research to date has focused on establishing the (forward) connections between chemistry, processing, (micro)structure, properties, and performance. Much less work has been dedicated to the development of frameworks capable of addressing the inverse problem of establishing necessary chemistry and processing schedules to achieve specific performance goals. Integrated Computational Materials Engineering (ICME) has emerged as a powerful framework to address this problem, although it has yet to be applied to the development of HTSMAs. In this paper, the contributions of computational thermodynamics and kinetics to ICME of HTSMAs are described. Some representative examples of the use of computational thermodynamics and kinetics to understand the phase stability and microstructural evolution in HTSMAs are discussed. Some very recent efforts at combining both to assist in the design of HTSMAs and limitations to the full implementation of ICME frameworks for HTSMA development are presented.

  1. Demand response - Non-electric storage devices for electric utilities with a high participation of renewable energy sources; Demand response - Nichtelektrische Speicher fuer Elektrizitaetsversorgungssysteme mit hohem Anteil erneuerbarer Energien

    Energy Technology Data Exchange (ETDEWEB)

    Stadler, I.

    2005-10-15

    Electrical power supply is mainly based on fossil and nuclear fuels. Their availability will be dramatically reduced within the next half century. The only possibility to maintain our high living and economic standard which is based on high energy consumption will be a reorganisation of our supplies towards renewable energies such as wind, photovoltaics and biomass. It is generally accepted that the maximum fraction of stochastically available renewable energies that can be integrated into our electricity supplies is 20 to 25 %. Reasons are stability of the electricity network and absence of possibilities to store large amounts of electricity. The author demonstrates that by integrating already existing intrinsic storage capacities on the demand side into the control of electricity supplies there is no longer an upper limit for integration of renewable energies. The following technologies on the demand side are taken into account: storage heating and electrical water heating, ventilation, refrigeration, circulation pumps and compressed air in industry. Furthermore, changes in user behaviour, compressed air energy storage, and transition to flexible electricity usage by integration of thermal storage are discussed. (orig.)

  2. Building highly available control system applications with Advanced Telecom Computing Architecture and open standards

    Science.gov (United States)

    Kazakov, Artem; Furukawa, Kazuro

    2010-11-01

    Requirements for modern and future control systems for large projects like the International Linear Collider demand high availability for control system components. Recently, the telecom industry produced an open hardware specification, the Advanced Telecom Computing Architecture (ATCA), aimed at better reliability, availability and serviceability. Since its first market appearance in 2004, the ATCA platform has shown tremendous growth, proved to be stable, and is well represented by a number of vendors. ATCA is an industry standard for highly available systems. In parallel, the Service Availability Forum (SAF), a consortium of leading communications and computing companies, describes the interaction between hardware and software. SAF defines a set of specifications, such as the Hardware Platform Interface and the Application Interface Specification, that provide an extensive description of highly available systems, services and their interfaces. Originally aimed at telecom applications, these specifications can be used for accelerator controls software as well. This study describes the benefits of using these specifications and their possible adoption for accelerator control systems. It is demonstrated how the EPICS Redundant IOC was extended using the Hardware Platform Interface specification, which made it possible to utilize the benefits of the ATCA platform.

  3. FPGAs in High Perfomance Computing: Results from Two LDRD Projects.

    Energy Technology Data Exchange (ETDEWEB)

    Underwood, Keith D; Ulmer, Craig D.; Thompson, David; Hemmert, Karl Scott

    2006-11-01

    Field programmable gate arrays (FPGAs) have been used as alternative computational devices for over a decade; however, they have not been used for traditional scientific computing due to their perceived lack of floating-point performance. In recent years, there has been a surge of interest in alternatives to traditional microprocessors for high performance computing. Sandia National Labs began two projects to determine whether FPGAs would be a suitable alternative to microprocessors for high performance scientific computing and, if so, how they should be integrated into the system. We present results that indicate that FPGAs could have a significant impact on future systems. FPGAs have the potential to have order of magnitude levels of performance wins on several key algorithms; however, there are serious questions as to whether the system integration challenge can be met. Furthermore, there remain challenges in FPGA programming and system level reliability when using FPGA devices. Acknowledgment: Arun Rodrigues provided valuable support and assistance in the use of the Structural Simulation Toolkit within an FPGA context. Curtis Janssen and Steve Plimpton provided valuable insights into the workings of two Sandia applications (MPQC and LAMMPS, respectively).

  4. High performance computing and communications: FY 1995 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-04-01

    The High Performance Computing and Communications (HPCC) Program was formally established following passage of the High Performance Computing Act of 1991 signed on December 9, 1991. Ten federal agencies in collaboration with scientists and managers from US industry, universities, and laboratories have developed the HPCC Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1994 and FY 1995. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency. Although the Department of Education is an official HPCC agency, its current funding and reporting of crosscut activities goes through the Committee on Education and Health Resources, not the HPCC Program. For this reason the Implementation Plan covers nine HPCC agencies.

  5. A Computer Controlled Precision High Pressure Measuring System

    Science.gov (United States)

    Sadana, S.; Yadav, S.; Jha, N.; Gupta, V. K.; Agarwal, R.; Bandyopadhyay, A. K.; Saxena, T. K.

    2011-01-01

    A microcontroller (AT89C51) based electronics has been designed and developed for a high precision calibrator based on a Digiquartz pressure transducer (DQPT) for the measurement of high hydrostatic pressure up to 275 MPa. The input signal from the DQPT is converted into a square wave and its frequency is multiplied by a factor of ten using a frequency-multiplier circuit built around a phase-locked loop. An octal buffer stores the calculated frequency, which in turn is fed to the AT89C51 microcontroller interfaced with a liquid crystal display for the display of frequency as well as the corresponding pressure in user-friendly units. The electronics is interfaced with a computer over RS232 for automatic data acquisition, computation and storage, with the acquisition software written in Visual Basic 6.0, making the setup a computer-controlled system. The system is capable of measuring frequency up to 4 MHz with a resolution of 0.01 Hz and pressure up to 275 MPa with a resolution of 0.001 MPa within a measurement uncertainty of 0.025%. Details of the hardware of the pressure measuring system, the associated electronics, the software and the calibration are discussed in this paper.
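
    As a rough illustration of the PC-side acquisition described above, the sketch below reads a frequency value over a serial port with pyserial and converts it to pressure through a calibration polynomial. The port name, data framing and calibration coefficients are hypothetical placeholders; the original system used Visual Basic 6.0 rather than Python.

```python
# Sketch of the PC-side acquisition: read a frequency string over RS232 and
# convert it to pressure. Port name, framing and the calibration polynomial
# below are hypothetical placeholders, not values from the paper.
import serial  # pyserial

# Hypothetical quadratic calibration: pressure (MPa) as a function of frequency (Hz)
C0, C1, C2 = -1.234e2, 3.456e-2, 1.789e-9

def frequency_to_pressure(freq_hz: float) -> float:
    return C0 + C1 * freq_hz + C2 * freq_hz ** 2

with serial.Serial("COM1", baudrate=9600, timeout=1.0) as port:
    line = port.readline().decode("ascii", errors="ignore").strip()
    if line:
        freq = float(line)  # assume the instrument sends the frequency in Hz
        print(f"f = {freq:.2f} Hz -> p = {frequency_to_pressure(freq):.3f} MPa")
```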

  6. Scout: high-performance heterogeneous computing made simple

    Energy Technology Data Exchange (ETDEWEB)

    Jablin, James [Los Alamos National Laboratory; Mc Cormick, Patrick [Los Alamos National Laboratory; Herlihy, Maurice [BROWN UNIV.

    2011-01-26

    Researchers must often write their own simulation and analysis software. During this process they simultaneously confront both computational and scientific problems. Current strategies for aiding the generation of performance-oriented programs do not abstract the software development from the science. Furthermore, the problem is becoming increasingly complex and pressing with the continued development of many-core and heterogeneous (CPU-GPU) architectures. To achieve high performance, scientists must expertly navigate both software and hardware. Co-design between computer scientists and research scientists can alleviate but not solve this problem. The science community requires better tools for developing, optimizing, and future-proofing codes, allowing scientists to focus on their research while still achieving high computational performance. Scout is a parallel programming language and extensible compiler framework targeting heterogeneous architectures. It provides the abstraction required to buffer scientists from the constantly-shifting details of hardware while still realizing high performance by encapsulating software and hardware optimization within a compiler framework.

  7. Department of Energy Mathematical, Information, and Computational Sciences Division: High Performance Computing and Communications Program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-11-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, The DOE Program in HPCC), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW).

  8. Department of Energy: MICS (Mathematical Information, and Computational Sciences Division). High performance computing and communications program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-06-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, "The DOE Program in HPCC"), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW). The information pointed to by the URL is updated frequently, and the interested reader is urged to access the WWW for the latest information.

  9. TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hsien-Hsin S

    2010-05-11

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) system-wide, unified introspection techniques for autonomic systems, (2) secure information-flow microarchitecture, (3) memory-centric security architecture, (4) authentication control and its implication for security, (5) digital rights management, and (6) microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  10. Iterative coupling reservoir simulation on high performance computers

    Institute of Scientific and Technical Information of China (English)

    Lu Bo; Wheeler Mary F

    2009-01-01

    In this paper, the iterative coupling approach is proposed for applications to solving multiphase flow equation systems in reservoir simulation, as it provides a more flexible time-stepping strategy than existing approaches. The iterative method decouples the whole equation systems into pressure and saturation/concentration equations, and then solves them in sequence, implicitly and semi-implicitly. At each time step, a series of iterations are computed, which involve solving linearized equations using specific tolerances that are iteration dependent. Following convergence of subproblems, material balance is checked. Convergence of time steps is based on material balance errors. Key components of the iterative method include phase scaling for deriving a pressure equation and use of several advanced numerical techniques. The iterative model is implemented for parallel computing platforms and shows high parallel efficiency and scalability.
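
    To make the control flow of the iterative coupling concrete, the following toy sketch applies the same idea (implicit pressure solve, semi-implicit saturation solve, iterate until a balance check passes) to a two-variable surrogate problem. The equations and coefficients are invented for illustration and are unrelated to the authors' parallel simulator.

```python
# Toy illustration of the iterative (sequentially coupled) time step described
# in the abstract, on a two-variable surrogate "reservoir": pressure p and
# saturation s. This is only a sketch of the control flow.
A, B = 0.8, 0.5        # made-up coupling coefficients

def advance_time_step(p_old, s_old, dt, tol=1e-10, max_iters=50):
    p, s = p_old, s_old
    for it in range(max_iters):
        # implicit "pressure equation" with saturation frozen
        p_new = p_old / (1.0 + A * s * dt)
        # semi-implicit "saturation equation" using the updated pressure
        s_new = (s_old + B * p_new * dt) / (1.0 + B * p_new * dt)
        # surrogate material-balance check: how much the iterate still moves
        err = abs(p_new - p) + abs(s_new - s)
        p, s = p_new, s_new
        if err < tol:
            return p, s, it + 1
    raise RuntimeError("coupling iterations did not converge; reduce dt")

p, s = 10.0, 0.2
for step in range(5):
    p, s, iters = advance_time_step(p, s, dt=0.1)
    print(f"step {step}: p={p:.4f} s={s:.4f} ({iters} coupling iterations)")
```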

  11. High Performance Computing - Power Application Programming Interface Specification.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  12. High-reliability computing for the smarter planet

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Heather M [Los Alamos National Laboratory; Graham, Paul [Los Alamos National Laboratory; Manuzzato, Andrea [UNIV OF PADOVA; Dehon, Andre [UNIV OF PENN; Carter, Nicholas [INTEL CORPORATION

    2010-01-01

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability is

  13. Simultaneous efficient removal of high-strength ammonia nitrogen and chemical oxygen demand from landfill leachate by using an extremely high ammonia nitrogen-resistant strain.

    Science.gov (United States)

    Yu, Dahai; Yang, Jiyu; Fang, Xuexun; Ren, Hejun

    2015-01-01

    Bioaugmentation is a promising technology for pollutant elimination from stressed environments, and it provides an efficient way to address the challenges of traditional biotreatment of wastewater with high-strength ammonia nitrogen (NH4(+)-N). A highly NH4(+)-N-resistant bacterial strain, identified as Bacillus cereus (Jlu BC), was domesticated and isolated from the bacterial consortium in landfill leachate. Jlu BC could survive in a 100 g/L NH4(+)-N environment, indicating an NH4(+)-N tolerance far higher than that of previously reported strains. Jlu BC was employed in a bioaugmented system to remove high-strength NH4(+)-N from landfill leachate, and to increase the removal efficiency, response surface methodology (RSM) was used to optimize the bioaugmentation degradation conditions. At the optimum condition (initial pH 7.33, 4.14 days, initial chemical oxygen demand [COD] concentration of 18,000 mg/L, 3.5 mL of inoculated domesticated bacterial strain, 0.3 mg/mL phosphorus supplement, 30 °C, and 170 rpm), a 94.74 ± 3.8% removal rate of NH4(+)-N was obtained, and the experimental data corresponded well with the removal rate predicted by the RSM model (95.50%). Furthermore, a COD removal rate of 81.94 ± 1.4% was obtained simultaneously. The results presented are promising, and the screened strain would be of great practical importance in mature landfill leachate and other NH4(+)-N-enriched wastewater pollution control.
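
    The essence of the RSM step above is fitting a second-order response surface to measured removal rates and locating its stationary point. The sketch below does this for two synthetic factors (initial pH and incubation time); the data, noise level and peak location are made up and only mimic the shape of the reported optimum.

```python
# Sketch of the response-surface idea: fit a full quadratic model to removal
# rates and solve for the stationary point. Synthetic data for two factors only;
# the real study also varied COD, inoculum size and phosphorus.
import numpy as np

rng = np.random.default_rng(0)
pH = rng.uniform(6.0, 9.0, 30)
days = rng.uniform(2.0, 6.0, 30)
# synthetic removal rate (%) peaking near pH 7.3 and 4.1 days, plus noise
removal = 95 - 8 * (pH - 7.3) ** 2 - 5 * (days - 4.1) ** 2 + rng.normal(0, 1, 30)

# design matrix for a second-order model in two factors
X = np.column_stack([np.ones_like(pH), pH, days, pH * days, pH ** 2, days ** 2])
beta, *_ = np.linalg.lstsq(X, removal, rcond=None)

# stationary point: set the gradient of the fitted quadratic to zero
H = np.array([[2 * beta[4], beta[3]], [beta[3], 2 * beta[5]]])
opt = np.linalg.solve(H, -np.array([beta[1], beta[2]]))
print(f"estimated optimum: pH ~ {opt[0]:.2f}, time ~ {opt[1]:.2f} days")
```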

  14. A Crafts-Oriented Approach to Computing in High School: Introducing Computational Concepts, Practices, and Perspectives with Electronic Textiles

    Science.gov (United States)

    Kafai, Yasmin B.; Lee, Eunkyoung; Searle, Kristin; Fields, Deborah; Kaplan, Eliot; Lui, Debora

    2014-01-01

    In this article, we examine the use of electronic textiles (e-textiles) for introducing key computational concepts and practices while broadening perceptions about computing. The starting point of our work was the design and implementation of a curriculum module using the LilyPad Arduino in a pre-AP high school computer science class. To…

  15. Disturbed hepatic carbohydrate management during high metabolic demand in medium-chain acyl-CoA dehydrogenase (MCAD)-deficient mice

    NARCIS (Netherlands)

    Herrema, H.J.; Derks, T.G.; Dijk, van T.H.; Bloks, V.W.; Gerding, A.; Havinga, R.; Tietge, U.J.; Müller, M.R.; Smit, G.P.; Kuipers, F.; Reijngoud, D.J.

    2008-01-01

    Medium-chain acyl-coenzyme A (CoA) dehydrogenase (MCAD) catalyzes crucial steps in mitochondrial fatty acid oxidation, a process that is of key relevance for maintenance of energy homeostasis, especially during high metabolic demand. To gain insight into the metabolic consequences of MCAD deficiency

  16. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    Science.gov (United States)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and Data-intensive Science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that supports the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress so far to harmonise the underlying data collections for future interdisciplinary research across these large volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), a HPC class 3000 core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  17. Demand-Based Optimization Strategies for Higher Vocational Mathematics Teaching

    Institute of Scientific and Technical Information of China (English)

    金环

    2011-01-01

    Based on an analysis of the current situation, demand-based strategies for optimizing higher vocational mathematics teaching are proposed: carefully design examples to stimulate students' intrinsic needs; distill mathematical problems to guide students' learning needs; improve teaching methods to accommodate students' diverse needs; encourage interdisciplinary crossover to respond to the needs of social development; and adopt comprehensive evaluation to meet students' need for progress.

  18. VLSI IMPLEMENTATION OF FIR FILTER USING COMPUTATIONAL SHARING MULTIPLIER BASED ON HIGH SPEED CARRY SELECT ADDER

    Directory of Open Access Journals (Sweden)

    S. Karunakaran

    2012-01-01

    Recent advances in mobile computing and multimedia applications demand high-performance and low-power VLSI Digital Signal Processing (DSP) systems. One of the most widely used operations in DSP is Finite Impulse Response (FIR) filtering. In the existing method, the FIR filter is designed using an array multiplier, which has higher delay and power dissipation. The proposed method presents a programmable digital FIR filter for high-performance applications. The architecture is based on a computational sharing multiplier (CSHM), which carries out multiplication through add-and-shift operations and targets computation re-use in vector-scalar products. The CSHM multiplier is implemented with a carry select adder, a high-speed adder. The Carry-Select Adder (CSA) is built from a single ripple carry adder and add-one circuits, using a fast all-one-finding circuit and low-delay multiplexers to reduce area and accelerate the speed of the CSA. An 8-tap programmable FIR filter was implemented in the Tanner EDA tool using CMOS 180 nm technology based on the proposed CSHM technique. The filter using the array multiplier requires 6000 transistors, dissipates 3.732 mW, and has a clock cycle of 9 ns, whereas the CSHM-based filter requires 23500 transistors, dissipates 2.627 mW, and has a clock cycle of 4.5 ns. By adopting the proposed method for the design of the FIR filter, the delay is reduced by about 43.2% in comparison with the existing method. The CSHM scheme and circuit-level techniques helped to achieve a high-performance FIR filtering operation.
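
    A purely behavioural way to see the add-and-shift idea behind the computation-sharing multiplier is to express each coefficient multiplication in an FIR filter as shifted partial-product additions. The Python sketch below does exactly that with made-up integer coefficients; it says nothing about the CMOS implementation or its timing.

```python
# Behavioural sketch of an FIR filter whose multiplications are carried out as
# shift-and-add operations on non-negative fixed-point coefficients.
def shift_add_multiply(sample: int, coeff: int) -> int:
    """Multiply two integers using only shifts and adds (coeff >= 0)."""
    acc, bit = 0, 0
    while coeff:
        if coeff & 1:
            acc += sample << bit   # add the appropriately shifted partial product
        coeff >>= 1
        bit += 1
    return acc

def fir_filter(samples, coeffs):
    out = []
    taps = [0] * len(coeffs)
    for x in samples:
        taps = [x] + taps[:-1]     # shift the delay line
        out.append(sum(shift_add_multiply(t, c) for t, c in zip(taps, coeffs)))
    return out

# 8-tap example with made-up integer (fixed-point) coefficients
coeffs = [1, 3, 7, 12, 12, 7, 3, 1]
print(fir_filter([0, 1, 2, 3, 4, 3, 2, 1, 0, 0], coeffs))
```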

  19. Next-generation sequencing: big data meets high performance computing.

    Science.gov (United States)

    Schmidt, Bertil; Hildebrandt, Andreas

    2017-02-02

    The progress of next-generation sequencing has a major impact on medical and genomic research. This high-throughput technology can now produce billions of short DNA or RNA fragments in excess of a few terabytes of data in a single run. This leads to massive datasets used by a wide range of applications including personalized cancer treatment and precision medicine. In addition to the hugely increased throughput, the cost of using high-throughput technologies has been dramatically decreasing. A low sequencing cost of around US$1000 per genome has now rendered large population-scale projects feasible. However, to make effective use of the produced data, the design of big data algorithms and their efficient implementation on modern high performance computing systems is required.

  20. Towards robust dynamical decoupling and high fidelity adiabatic quantum computation

    Science.gov (United States)

    Quiroz, Gregory

    Quantum computation (QC) relies on the ability to implement high-fidelity quantum gate operations and successfully preserve quantum state coherence. One of the most challenging obstacles for reliable QC is overcoming the inevitable interaction between a quantum system and its environment. Unwanted interactions result in decoherence processes that cause quantum states to deviate from a desired evolution, consequently leading to computational errors and loss of coherence. Dynamical decoupling (DD) is one such method, which seeks to attenuate the effects of decoherence by applying strong and expeditious control pulses solely to the system. Provided the pulses are applied over a time duration sufficiently shorter than the correlation time associated with the environment dynamics, DD effectively averages out undesirable interactions and preserves quantum states with a low probability of error, or fidelity loss. In this study various aspects of this approach are studied from sequence construction to applications of DD to protecting QC. First, a comprehensive examination of the error suppression properties of a near-optimal DD approach is given to understand the relationship between error suppression capabilities and the number of required DD control pulses in the case of ideal, instantaneous pulses. While such considerations are instructive for examining DD efficiency, i.e., performance vs the number of control pulses, high-fidelity DD in realizable systems is difficult to achieve due to intrinsic pulse imperfections which further contribute to decoherence. As a second consideration, it is shown how one can overcome this hurdle and achieve robustness and recover high-fidelity DD in the presence of faulty control pulses using Genetic Algorithm optimization and sequence symmetrization. Thirdly, to illustrate the implementation of DD in conjunction with QC, the utilization of DD and quantum error correction codes (QECCs) as a protection method for adiabatic quantum

  1. Chip-to-board interconnects for high-performance computing

    Science.gov (United States)

    Riester, Markus B. K.; Houbertz-Krauss, Ruth; Steenhusen, Sönke

    2013-02-01

    Supercomputing is reaching toward ExaFLOP processing speeds, creating fundamental challenges for the way that computing systems are designed and built. One governing topic is the reduction of the power used for operating the system and the elimination of the excess heat it generates. Current thinking sees optical interconnects on most interconnect levels as a feasible solution to many of the challenges, although there are still limitations to the technical solutions, in particular with regard to manufacturability. This paper explores drivers for enabling optical interconnect technologies to advance into the module and chip level. The introduction of optical links into High Performance Computing (HPC) could be an option to allow scaling the manufacturing technology to large volume manufacturing. This will drive the need for manufacturability of optical interconnects, giving rise to other challenges that add to the realization of this type of interconnection. This paper describes a solution that allows the creation of optical components on the module level, integrating optical chips, laser diodes or PIN diodes as components, much like the well known SMD components used for electrical assembly. The paper shows the main challenges and potential solutions and proposes a fundamental paradigm shift in the manufacturing of 3-dimensional optical links for the level 1 interconnect (chip package).

  2. High-resolution temperature fields to evaluate the response of Italian electricity demand to meteorological variables: an example of climate service for the energy sector

    Science.gov (United States)

    Scapin, Simone; Apadula, Francesco; Brunetti, Michele; Maugeri, Maurizio

    2016-08-01

    The dependence of Italian daily electricity demand on cooling degree-days, heating degree-days and solar radiation is investigated by means of a regression model applied to 12 consecutive 2-year intervals in the 1990-2013 period. The cooling and heating degree-days records used in the model are obtained by (i) estimating, by means of a network of 92 synoptic stations and high-resolution gridded temperature climatologies, a daily effective temperature record for all urbanised grid points of a high-resolution grid covering Italy; (ii) using these records to calculate corresponding grid point degree-days records; and (iii) averaging them to get national degree-days records representative of urban areas. The solar radiation record is obtained with the same averaging approach, with grid point solar radiation estimated from the corresponding daily temperature range. The model is based on deterministic components related to the weekly cyclical pattern of demand and to long-term demand changes and on weather-sensitive components related to cooling degree-days, heating degree-days and solar radiation. It establishes a strong contribution of cooling degree-days to the Italian electricity demand, with values peaking in summer months of the latest years at up to 211 GWh/day (i.e. about 23 % of the corresponding average Italian electricity demand). This contribution shows a strong positive trend over the period considered here: the coefficient of the cooling degree-days term in the regression models increases from the first 2-year period (1990-1991) to the last one (2012-2013) by a factor of 3.5, which is much greater than the increase of the Italian total electricity demand.
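
    A stripped-down version of the degree-day regression described above can be reproduced with synthetic data: build cooling and heating degree-days from a daily temperature series and regress daily demand on them. The base temperatures, demand levels and coefficients below are assumptions for illustration, not the published model, which also includes weekly-cycle, trend and solar-radiation terms.

```python
# Sketch of the degree-day regression idea: compute CDD/HDD from daily effective
# temperature and regress daily electricity demand on them. Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
days = 365
temp = 15 + 10 * np.sin(2 * np.pi * (np.arange(days) - 110) / 365) + rng.normal(0, 2, days)

BASE_COOL, BASE_HEAT = 24.0, 15.0              # assumed base temperatures (deg C)
cdd = np.clip(temp - BASE_COOL, 0, None)       # cooling degree-days
hdd = np.clip(BASE_HEAT - temp, 0, None)       # heating degree-days

# synthetic national demand (GWh/day): baseline + weather-sensitive parts + noise
demand = 800 + 9.0 * cdd + 4.0 * hdd + rng.normal(0, 10, days)

X = np.column_stack([np.ones(days), cdd, hdd])
coef, *_ = np.linalg.lstsq(X, demand, rcond=None)
print(f"baseline ~ {coef[0]:.0f} GWh/day, "
      f"CDD coefficient ~ {coef[1]:.1f}, HDD coefficient ~ {coef[2]:.1f}")
```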

  3. Molecular Dynamics Simulations on High-Performance Reconfigurable Computing Systems.

    Science.gov (United States)

    Chiu, Matt; Herbordt, Martin C

    2010-11-01

    The acceleration of molecular dynamics (MD) simulations using high-performance reconfigurable computing (HPRC) has been much studied. Given the intense competition from multicore and GPUs, there is now a question whether MD on HPRC can be competitive. We concentrate here on the MD kernel computation: determining the short-range force between particle pairs. In one part of the study, we systematically explore the design space of the force pipeline with respect to arithmetic algorithm, arithmetic mode, precision, and various other optimizations. We examine simplifications and find that some have little effect on simulation quality. In the other part, we present the first FPGA study of the filtering of particle pairs with nearly zero mutual force, a standard optimization in MD codes. There are several innovations, including a novel partitioning of the particle space, and new methods for filtering and mapping work onto the pipelines. As a consequence, highly efficient filtering can be implemented with only a small fraction of the FPGA's resources. Overall, we find that, for an Altera Stratix-III EP3ES260, 8 force pipelines running at nearly 200 MHz can fit on the FPGA, and that they can perform at 95% efficiency. This results in an 80-fold per core speed-up for the short-range force, which is likely to make FPGAs highly competitive for MD.
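
    The filtering step discussed above has a simple software analogue: discard particle pairs beyond the cutoff before evaluating the short-range force. The NumPy sketch below applies such a cutoff filter to random particle positions and evaluates a Lennard-Jones force for the surviving pairs; it illustrates the idea only and is unrelated to the FPGA pipeline design.

```python
# Software sketch of pair filtering before the short-range force evaluation.
import numpy as np

rng = np.random.default_rng(7)
pos = rng.uniform(0.0, 10.0, size=(200, 3))     # random particle positions
CUTOFF = 2.5

# all unique pairs (brute force; real codes use cell lists for this step)
i, j = np.triu_indices(len(pos), k=1)
d = pos[i] - pos[j]
r2 = np.einsum('ij,ij->i', d, d)

mask = r2 < CUTOFF ** 2                         # the "filter": keep interacting pairs
i, j, d, r2 = i[mask], j[mask], d[mask], r2[mask]

# Lennard-Jones force magnitude / r for the surviving pairs (sigma = epsilon = 1)
inv_r2 = 1.0 / r2
inv_r6 = inv_r2 ** 3
f_over_r = 24.0 * inv_r2 * inv_r6 * (2.0 * inv_r6 - 1.0)

forces = np.zeros_like(pos)
np.add.at(forces, i, f_over_r[:, None] * d)
np.add.at(forces, j, -f_over_r[:, None] * d)
print(f"{mask.sum()} of {mask.size} pairs survived the cutoff filter")
```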

  4. Optimizing high performance computing workflow for protein functional annotation.

    Science.gov (United States)

    Stanberry, Larissa; Rekepalli, Bhanu; Liu, Yuan; Giblock, Paul; Higdon, Roger; Montague, Elizabeth; Broomall, William; Kolker, Natali; Kolker, Eugene

    2014-09-10

    Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the protein sequence universe is rapidly expanding. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible, whereas a high compute cost limits the utility of existing automated approaches. In this work, we present an improved and optimized automated workflow to enable large-scale protein annotation. The workflow uses high performance computing architectures and a low complexity classification algorithm to assign proteins into existing clusters of orthologous groups of proteins. On the basis of the Position-Specific Iterative Basic Local Alignment Search Tool, the algorithm ensures at least 80% specificity and sensitivity of the resulting classifications. The workflow utilizes highly scalable parallel applications for classification and sequence alignment. Using Extreme Science and Engineering Discovery Environment supercomputers, the workflow processed 1,200,000 newly sequenced bacterial proteins. With the rapid expansion of the protein sequence universe, the proposed workflow will enable scientists to annotate big genome data.

  5. Computational characterization of high temperature composites via METCAN

    Science.gov (United States)

    Brown, H. C.; Chamis, Christos C.

    1991-01-01

    The computer code 'METCAN' (METal matrix Composite ANalyzer) developed at NASA Lewis Research Center can be used to predict the high temperature behavior of metal matrix composites using the room temperature constituent properties. A reference manual that characterizes some common composites is being developed from METCAN generated data. Typical plots found in the manual are shown for graphite/copper. These include plots of stress-strain, elastic and shear moduli, Poisson's ratio, thermal expansion, and thermal conductivity. This manual can be used in the preliminary design of structures and as a guideline for the behavior of other composite systems.

  6. PRaVDA: High Energy Physics towards proton Computed Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Price, T., E-mail: t.price@bham.ac.uk

    2016-07-11

    Proton radiotherapy is an increasingly popular modality for treating cancers of the head and neck, and in paediatrics. To maximise the potential of proton radiotherapy it is essential to know the distribution, and more importantly the proton stopping powers, of the body tissues between the proton beam and the tumour. A stopping power map could be measured directly, and uncertainties in the treatment vastly reduced, if the patient were imaged with protons instead of conventional x-rays. Here we outline the application of technologies developed for High Energy Physics to provide clinical-quality proton Computed Tomography, thereby reducing range uncertainties and enhancing the treatment of cancer.

  7. Computational Proteomics: High-throughput Analysis for Systems Biology

    Energy Technology Data Exchange (ETDEWEB)

    Cannon, William R.; Webb-Robertson, Bobbie-Jo M.

    2007-01-03

    High-throughput (HTP) proteomics is a rapidly developing field that offers the global profiling of proteins from a biological system. The HTP technological advances are fueling a revolution in biology, enabling analyses at the scales of entire systems (e.g., whole cells, tumors, or environmental communities). However, simply identifying the proteins in a cell is insufficient for understanding the underlying complexity and operating mechanisms of the overall system. Systems level investigations are relying more and more on computational analyses, especially in the field of proteomics generating large-scale global data.

  8. High performance computing for three-dimensional agent-based molecular models.

    Science.gov (United States)

    Pérez-Rodríguez, G; Pérez-Pérez, M; Fdez-Riverola, F; Lourenço, A

    2016-07-01

    Agent-based simulations are increasingly popular in exploring and understanding cellular systems, but the natural complexity of these systems and the desire to grasp different modelling levels demand cost-effective simulation strategies and tools. In this context, the present paper introduces novel sequential and distributed approaches for the three-dimensional agent-based simulation of individual molecules in cellular events. These approaches are able to describe the dimensions and position of the molecules with high accuracy and thus, study the critical effect of spatial distribution on cellular events. Moreover, two of the approaches allow multi-thread high performance simulations, distributing the three-dimensional model in a platform independent and computationally efficient way. Evaluation addressed the reproduction of molecular scenarios and different scalability aspects of agent creation and agent interaction. The three approaches simulate common biophysical and biochemical laws faithfully. The distributed approaches show improved performance when dealing with large agent populations while the sequential approach is better suited for small to medium size agent populations. Overall, the main new contribution of the approaches is the ability to simulate three-dimensional agent-based models at the molecular level with reduced implementation effort and moderate-level computational capacity. Since these approaches have a generic design, they have the major potential of being used in any event-driven agent-based tool. Copyright © 2016 Elsevier Inc. All rights reserved.
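
    A minimal flavour of a three-dimensional agent-based molecular model, in the spirit of the approaches above, is sketched below: each molecule is an agent with a position and radius, moves by Brownian steps, and a toy reaction removes one partner when two agents overlap. All parameters are arbitrary and the code is not the authors' framework.

```python
# Minimal three-dimensional agent-based sketch: diffusing molecular agents in a
# unit cube with a toy A + B -> A "reaction" on overlap. Illustrative only.
import numpy as np

rng = np.random.default_rng(42)
N, RADIUS, STEP_SD = 100, 0.05, 0.02
pos = rng.uniform(0, 1, size=(N, 3))          # agents in a unit cube
alive = np.ones(N, dtype=bool)

for step in range(100):
    pos[alive] += rng.normal(0, STEP_SD, size=(alive.sum(), 3))   # Brownian step
    pos = np.clip(pos, 0, 1)                                      # keep agents in the box
    idx = np.flatnonzero(alive)
    # naive O(n^2) collision check between living agents
    for a in range(len(idx)):
        for b in range(a + 1, len(idx)):
            ia, ib = idx[a], idx[b]
            if alive[ia] and alive[ib]:
                if np.linalg.norm(pos[ia] - pos[ib]) < 2 * RADIUS:
                    alive[ib] = False          # toy reaction consumes one agent

print(f"{alive.sum()} agents remain after 100 steps")
```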

  9. SCEC Earthquake System Science Using High Performance Computing

    Science.gov (United States)

    Maechling, P. J.; Jordan, T. H.; Archuleta, R.; Beroza, G.; Bielak, J.; Chen, P.; Cui, Y.; Day, S.; Deelman, E.; Graves, R. W.; Minster, J. B.; Olsen, K. B.

    2008-12-01

    The SCEC Community Modeling Environment (SCEC/CME) collaboration performs basic scientific research using high performance computing with the goal of developing a predictive understanding of earthquake processes and seismic hazards in California. SCEC/CME research areas including dynamic rupture modeling, wave propagation modeling, probabilistic seismic hazard analysis (PSHA), and full 3D tomography. SCEC/CME computational capabilities are organized around the development and application of robust, re- usable, well-validated simulation systems we call computational platforms. The SCEC earthquake system science research program includes a wide range of numerical modeling efforts and we continue to extend our numerical modeling codes to include more realistic physics and to run at higher and higher resolution. During this year, the SCEC/USGS OpenSHA PSHA computational platform was used to calculate PSHA hazard curves and hazard maps using the new UCERF2.0 ERF and new 2008 attenuation relationships. Three SCEC/CME modeling groups ran 1Hz ShakeOut simulations using different codes and computer systems and carefully compared the results. The DynaShake Platform was used to calculate several dynamic rupture-based source descriptions equivalent in magnitude and final surface slip to the ShakeOut 1.2 kinematic source description. A SCEC/CME modeler produced 10Hz synthetic seismograms for the ShakeOut 1.2 scenario rupture by combining 1Hz deterministic simulation results with 10Hz stochastic seismograms. SCEC/CME modelers ran an ensemble of seven ShakeOut-D simulations to investigate the variability of ground motions produced by dynamic rupture-based source descriptions. The CyberShake Platform was used to calculate more than 15 new probabilistic seismic hazard analysis (PSHA) hazard curves using full 3D waveform modeling and the new UCERF2.0 ERF. The SCEC/CME group has also produced significant computer science results this year. Large-scale SCEC/CME high performance codes

  10. Security Services Lifecycle Management in on-demand infrastructure services

    NARCIS (Netherlands)

    Demchenko, Y.; de Laat, C.; Lopez, D.R.; García-Espín, J.A.; Qiu, J.; Zhao, G.; Rong, C.

    2010-01-01

    Modern e-Science and high technology industry require high-performance and complicated network and computer infrastructure to support distributed collaborating groups of researchers and applications that should be provisioned on-demand. The effective use and management of the dynamically provisioned

  12. Matrix element method for high performance computing platforms

    Science.gov (United States)

    Grasseau, G.; Chamont, D.; Beaudette, F.; Bianchini, L.; Davignon, O.; Mastrolorenzo, L.; Ochando, C.; Paganini, P.; Strebler, T.

    2015-12-01

    A lot of effort has been devoted by the ATLAS and CMS teams to improving the quality of LHC event analysis with the Matrix Element Method (MEM). Up to now, very few implementations have tried to face up to the huge computing resources required by this method. We propose here a highly parallel version, combining MPI and OpenCL, which makes MEM exploitation reachable for the whole CMS datasets at a moderate cost. In the article, we describe the status of two software projects under development, one focused on physics and one focused on computing. We also showcase their preliminary performance obtained with classical multi-core processors, CUDA accelerators and MIC co-processors. This lets us extrapolate that, with the help of 6 high-end accelerators, we should be able to reprocess the whole LHC run 1 within 10 days, and that we have a satisfying metric for the upcoming run 2. The future work will consist in finalizing a single merged system including all the physics and all the parallelism infrastructure, thus optimizing the implementation for the best hardware platforms.
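
    One way to picture the event-level parallelism exploited by such an implementation is an MPI sketch (mpi4py) in which each rank integrates the weights of its own slice of events. The weight function below is a placeholder Monte Carlo integral standing in for a real matrix element evaluation; nothing here reflects the actual MEM code or its OpenCL kernels.

```python
# Sketch of distributing per-event weight computations over MPI ranks (mpi4py).
# The "weight" is a placeholder Monte Carlo integral, not a real matrix element.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

N_EVENTS, N_MC = 1000, 5000
rng = np.random.default_rng(rank)

# each rank takes an interleaved slice of the event list
my_events = range(rank, N_EVENTS, size)

def event_weight(event_id: int) -> float:
    # placeholder: Monte Carlo average of a Gaussian "matrix element"
    x = rng.normal(loc=event_id % 7, scale=1.0, size=N_MC)
    return float(np.mean(np.exp(-0.5 * x ** 2)))

local = np.array([event_weight(e) for e in my_events])
total = comm.reduce(local.sum(), op=MPI.SUM, root=0)
if rank == 0:
    print(f"summed weight over {N_EVENTS} events: {total:.3f}")
# run with e.g.:  mpirun -n 4 python mem_weights.py
```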

  13. Quantitative analysis of cholesteatoma using high resolution computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Kikuchi, Shigeru; Yamasoba, Tatsuya (Kameda General Hospital, Chiba (Japan)); Iinuma, Toshitaka

    1992-05-01

    Seventy-three cases of adult cholesteatoma, including 52 cases of pars flaccida type cholesteatoma and 21 of pars tensa type cholesteatoma, were examined using high resolution computed tomography, in both axial (lateral semicircular canal plane) and coronal sections (cochlear, vestibular and antral plane). These cases were classified into two subtypes according to the presence of extension of cholesteatoma into the antrum. Sixty cases of chronic otitis media with central perforation (COM) were also examined as controls. Various locations of the middle ear cavity were measured in terms of size in comparison with pars flaccida type cholesteatoma, pars tensa type cholesteatoma and COM. The width of the attic was significantly larger in both pars flaccida type and pars tensa type cholesteatoma than in COM. With pars flaccida type cholesteatoma there was a significantly larger distance between the malleus and lateral wall of the attic than with COM. In contrast, the distance between the malleus and medial wall of the attic was significantly larger with pars tensa type cholesteatoma than with COM. With cholesteatoma extending into the antrum, regardless of the type of cholesteatoma, there were significantly larger distances than with COM at the following sites: the width and height of the aditus ad antrum, and the width, height and anterior-posterior diameter of the antrum. However, these distances were not significantly different between cholesteatoma without extension into the antrum and COM. The hitherto demonstrated qualitative impressions of bone destruction in cholesteatoma were quantitatively verified in detail using high resolution computed tomography. (author).

  14. Analyzing high energy physics data using database computing: Preliminary report

    Science.gov (United States)

    Baden, Andrew; Day, Chris; Grossman, Robert; Lifka, Dave; Lusk, Ewing; May, Edward; Price, Larry

    1991-01-01

    A proof of concept system is described for analyzing high energy physics (HEP) data using data base computing. The system is designed to scale up to the size required for HEP experiments at the Superconducting SuperCollider (SSC) lab. These experiments will require collecting and analyzing approximately 10 to 100 million 'events' per year during proton colliding beam collisions. Each 'event' consists of a set of vectors with a total length of approx. one megabyte. This represents an increase of approx. 2 to 3 orders of magnitude in the amount of data accumulated by present HEP experiments. The system is called the HEPDBC System (High Energy Physics Database Computing System). At present, the Mark 0 HEPDBC System is completed, and can produce analysis of HEP experimental data approx. an order of magnitude faster than current production software on data sets of approx. 1 GB. The Mark 1 HEPDBC System is currently undergoing testing and is designed to analyze data sets 10 to 100 times larger.

  15. Investigation of Vocational High-School Students' Computer Anxiety

    Science.gov (United States)

    Tuncer, Murat; Dogan, Yunus; Tanas, Ramazan

    2013-01-01

    With the advent of the computer technologies, we are increasingly encountering these technologies in every field of life. The fact that the computer technology is so much interwoven with the daily life makes it necessary to investigate certain psychological attitudes of those working with computers towards computers. As this study is limited to…

  16. Performance Management of High Performance Computing for Medical Image Processing in Amazon Web Services.

    Science.gov (United States)

    Bao, Shunxing; Damon, Stephen M; Landman, Bennett A; Gokhale, Aniruddha

    2016-02-27

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
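
    The cost/benefit decision mentioned above can be illustrated with a toy comparison between local serial execution and a fleet of cloud nodes. The prices, runtimes and rounding rule below are placeholders, not the paper's formulae, and are only meant to show the shape of such an analysis.

```python
# Illustrative cost/benefit comparison: local serial execution vs renting cloud
# nodes for a batch of image-processing jobs. All numbers are placeholders.
import math

def local_hours(n_jobs: int, hours_per_job: float) -> float:
    return n_jobs * hours_per_job                      # serial on one workstation

def cloud_cost_and_hours(n_jobs, hours_per_job, n_nodes, usd_per_node_hour):
    wall_hours = math.ceil(n_jobs / n_nodes) * hours_per_job
    return n_nodes * wall_hours * usd_per_node_hour, wall_hours

jobs, h_per_job = 500, 0.5
cost, wall = cloud_cost_and_hours(jobs, h_per_job, n_nodes=32, usd_per_node_hour=0.20)
print(f"local: {local_hours(jobs, h_per_job):.0f} h serial; "
      f"cloud: {wall:.0f} h wall-clock for ~${cost:.0f}")
```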

  17. Performance management of high performance computing for medical image processing in Amazon Web Services

    Science.gov (United States)

    Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha

    2016-03-01

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.

  18. COMPUTING

    CERN Multimedia

    M. Kasemann, P. McBride. Edited by M-C. Sawley, with contributions from P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini and M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  19. The addition of beta-hydroxy-beta-methylbutyrate and isomaltulose to whey protein improves recovery from highly demanding resistance exercise.

    Science.gov (United States)

    Kraemer, William J; Hooper, David R; Szivak, Tunde K; Kupchak, Brian R; Dunn-Lewis, Courtenay; Comstock, Brett A; Flanagan, Shawn D; Looney, David P; Sterczala, Adam J; DuPont, William H; Pryor, J Luke; Luk, Hiu-Ying; Maladoungdock, Jesse; McDermott, Danielle; Volek, Jeff S; Maresh, Carl M

    2015-01-01

    This study evaluated whether a combination of whey protein (WP), calcium beta-hydroxy-beta-methylbutyrate (HMB), and carbohydrate exert additive effects on recovery from highly demanding resistance exercise. Thirteen resistance-trained men (age: 22.6 ± 3.9 years; height: 175.3 ± 12.2 cm; weight: 86.2 ± 9.8 kg) completed a double-blinded, counterbalanced, within-group study. Subjects ingested EAS Recovery Protein (RP; EAS Sports Nutrition/Abbott Laboratories, Columbus, OH) or WP twice daily for 2 weeks prior to, during, and for 2 days following 3 consecutive days of intense resistance exercise. The workout sequence included heavy resistance exercise (day 1) and metabolic resistance exercise (days 2 and 3). The subjects performed no physical activity during day 4 (+24 hours) and day 5 (+48 hours), where recovery testing was performed. Before, during, and following the 3 workouts, treatment outcomes were evaluated using blood-based muscle damage markers and hormones, perceptual measures of muscle soreness, and countermovement jump performance. Creatine kinase was lower for the RP treatment on day 2 (RP: 166.9 ± 56.4 vs WP: 307.1 ± 125.2 IU · L(-1), p ≤ 0.05), day 4 (RP: 232.5 ± 67.4 vs WP: 432.6 ± 223.3 IU · L(-1), p ≤ 0.05), and day 5 (RP: 176.1 ± 38.7 vs 264.5 ± 120.9 IU · L(-1), p ≤ 0.05). Interleukin-6 was lower for the RP treatment on day 4 (RP: 1.2 ± 0.2 vs WP: 1.6 ± 0.6 pg · ml(-1), p ≤ 0.05) and day 5 (RP: 1.1 ± 0.2 vs WP: 1.6 ± 0.4 pg · ml(-1), p ≤ 0.05). Muscle soreness was lower for RP treatment on day 4 (RP: 2.0 ± 0.7 vs WP: 2.8 ± 1.1 cm, p ≤ 0.05). Vertical jump power was higher for the RP treatment on day 4 (RP: 5983.2 ± 624 vs WP 5303.9 ± 641.7 W, p ≤ 0.05) and day 5 (RP: 5792.5 ± 595.4 vs WP: 5200.4 ± 501 W, p ≤ 0.05). Our findings suggest that during times of intense conditioning, the recovery benefits of WP are enhanced with the addition of HMB and a slow-release carbohydrate. We

  20. 15 CFR 743.2 - High performance computers: Post shipment verification reporting.

    Science.gov (United States)

    2010-01-01

    ... 15 Commerce and Foreign Trade 2 2010-01-01 2010-01-01 false High performance computers: Post... ADMINISTRATION REGULATIONS SPECIAL REPORTING § 743.2 High performance computers: Post shipment verification... certain computers to destinations in Computer Tier 3, see § 740.7(d) for a list of these destinations...

  1. Proceedings of the workshop on high resolution computed microtomography (CMT)

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-02-01

    The purpose of the workshop was to determine the status of the field, to define instrumental and computational requirements, and to establish minimum specifications required by possible users. The most important message sent by implementers was the reminder that CMT is a tool. It solves a wide spectrum of scientific problems and is complementary to other microscopy techniques, with certain important advantages that the other methods do not have. High-resolution CMT can be used non-invasively and non-destructively to study a variety of hierarchical three-dimensional microstructures, which in turn control body function. X-ray computed microtomography can also be used at the frontiers of physics, in the study of granular systems, for example. With high-resolution CMT, for example, three-dimensional pore geometries and topologies of soils and rocks can be obtained readily and implemented directly in transport models. In turn, these geometries can be used to calculate fundamental physical properties, such as permeability and electrical conductivity, from first principles. Clearly, use of the high-resolution CMT technique will contribute tremendously to the advancement of current R and D technologies in the production, transport, storage, and utilization of oil and natural gas. It can also be applied to problems related to environmental pollution, particularly to spilling and seepage of hazardous chemicals into the Earth's subsurface. Applications to energy and environmental problems will be far-ranging and may soon extend to disciplines such as materials science--where the method can be used in the manufacture of porous ceramics, filament-resin composites, and microelectronics components--and to biomedicine, where it could be used to design biocompatible materials such as artificial bones, contact lenses, or medication-releasing implants. Selected papers are indexed separately for inclusion in the Energy Science and Technology Database.

  2. Computation of High-Frequency Waves with Random Uncertainty

    KAUST Repository

    Malenova, Gabriela

    2016-01-06

    We consider the forward propagation of uncertainty in high-frequency waves, described by the second-order wave equation with highly oscillatory initial data. The main sources of uncertainty are the wave speed and/or the initial phase and amplitude, described by a finite number of random variables with known joint probability distribution. We propose a stochastic spectral asymptotic method [1] for computing the statistics of uncertain output quantities of interest (QoIs), which are often linear or nonlinear functionals of the wave solution and its spatial/temporal derivatives. The numerical scheme combines two techniques: a high-frequency method based on Gaussian beams [2, 3] and a sparse stochastic collocation method [4]. The fast spectral convergence of the proposed method depends crucially on the presence of high stochastic regularity of the QoI independent of the wave frequency. In general, the high-frequency wave solutions to parametric hyperbolic equations are highly oscillatory and non-smooth in both physical and stochastic spaces. Consequently, the stochastic regularity of the QoI, which is a functional of the wave solution, may in principle be low and depend on frequency. In the present work, we provide theoretical arguments and numerical evidence that physically motivated QoIs based on local averages of |uε|² are smooth, with derivatives in the stochastic space uniformly bounded in ε, where uε and ε denote the highly oscillatory wave solution and the short wavelength, respectively. This observable-related regularity makes the proposed approach more efficient than current asymptotic approaches based on Monte Carlo sampling techniques.
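    As a rough illustration of the collocation idea described above (not the authors' Gaussian-beam solver), the Python sketch below computes the mean and variance of a hypothetical smooth quantity of interest of one uniformly distributed random parameter using a handful of Gauss-Legendre collocation nodes, and compares the result with plain Monte Carlo sampling; the QoI function, node count, and sample size are illustrative assumptions.

```python
import numpy as np

# Hypothetical smooth quantity of interest Q(y) of a uniformly distributed
# random parameter y in [-1, 1] (a stand-in for a local average of |u|^2 as a
# function of an uncertain wave speed; the real QoI would come from a wave solver).
def qoi(y):
    return np.exp(-0.5 * y**2) * np.cos(3.0 * y)

# Stochastic collocation: Gauss-Legendre nodes and weights, rescaled so the
# weights sum to 1 (the uniform density on [-1, 1]).
nodes, weights = np.polynomial.legendre.leggauss(9)
weights = weights / 2.0
q_nodes = qoi(nodes)
mean_sc = np.dot(weights, q_nodes)
var_sc = np.dot(weights, (q_nodes - mean_sc) ** 2)

# Monte Carlo reference using many more "solver runs".
rng = np.random.default_rng(0)
samples = qoi(rng.uniform(-1.0, 1.0, 200_000))
print(f"collocation: mean={mean_sc:.6f} var={var_sc:.6f}  (9 evaluations)")
print(f"Monte Carlo: mean={samples.mean():.6f} var={samples.var():.6f}  (200000 evaluations)")
```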

  3. Toward high performance, weakly invasive brain computer interfaces using selective visual attention.

    Science.gov (United States)

    Rotermund, David; Ernst, Udo A; Mandon, Sunita; Taylor, Katja; Smiyukha, Yulia; Kreiter, Andreas K; Pawelzik, Klaus R

    2013-04-01

    Brain-computer interfaces have been proposed as a solution for paralyzed persons to communicate and interact with their environment. However, the neural signals used for controlling such prostheses are often noisy and unreliable, resulting in a low performance of real-world applications. Here we propose neural signatures of selective visual attention in epidural recordings as a fast, reliable, and high-performance control signal for brain prostheses. We recorded epidural field potentials with chronically implanted electrode arrays from two macaque monkeys engaged in a shape-tracking task. For single trials, we classified the direction of attention to one of two visual stimuli based on spectral amplitude, coherence, and phase difference in time windows fixed relative to stimulus onset. Classification performances reached up to 99.9%, and the information about attentional states could be transferred at rates exceeding 580 bits/min. Good classification can already be achieved in time windows as short as 200 ms. The classification performance changed dynamically over the trial and modulated with the task's varying demands for attention. For all three signal features, the information about the direction of attention was contained in the γ-band. The most informative feature was spectral amplitude. Together, these findings establish a novel paradigm for constructing brain prostheses as, for example, virtual spelling boards, promising a major gain in performance and robustness for human brain-computer interfaces.
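    As a simplified stand-in for the kind of feature extraction and classification described above (not the authors' actual pipeline), the sketch below computes gamma-band spectral amplitudes per trial and channel with an FFT and classifies the attended stimulus with a nearest-class-mean rule; the sampling rate, band limits, and data shapes are assumptions, and the toy data are random, so accuracy sits at chance level.

```python
import numpy as np

FS = 1000          # sampling rate in Hz (assumed)
GAMMA = (40, 90)   # gamma band in Hz (assumed range)

def gamma_amplitude(trials):
    """Mean spectral amplitude in the gamma band, per trial and channel.

    trials: array of shape (n_trials, n_channels, n_samples)
    """
    spec = np.abs(np.fft.rfft(trials, axis=-1))
    freqs = np.fft.rfftfreq(trials.shape[-1], d=1.0 / FS)
    band = (freqs >= GAMMA[0]) & (freqs <= GAMMA[1])
    return spec[..., band].mean(axis=-1)            # shape (n_trials, n_channels)

def train_centroids(features, labels):
    """Class-mean 'template' classifier (a simple stand-in for the paper's classifier)."""
    return {c: features[labels == c].mean(axis=0) for c in np.unique(labels)}

def classify(features, centroids):
    classes = list(centroids)
    dists = np.stack([np.linalg.norm(features - centroids[c], axis=1) for c in classes])
    return np.array(classes)[dists.argmin(axis=0)]

# Toy usage with synthetic trials: 200 ms windows at 1 kHz, 30 channels.
rng = np.random.default_rng(1)
X = rng.standard_normal((100, 30, 200))
y = rng.integers(0, 2, 100)                  # attended stimulus: 0 or 1
feats = gamma_amplitude(X)
model = train_centroids(feats[:80], y[:80])
acc = (classify(feats[80:], model) == y[80:]).mean()
print(f"held-out accuracy on random data ~ {acc:.2f} (chance level)")
```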

  4. Guest Editorial High Performance Computing (HPC) Applications for a More Resilient and Efficient Power Grid

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Zhenyu Henry; Tate, Zeb; Abhyankar, Shrirang; Dong, Zhaoyang; Khaitan, Siddhartha; Min, Liang; Taylor, Gary

    2017-05-01

    The power grid has been evolving over the last 120 years, but it is seeing more changes in this decade and next than it has seen over the past century. In particular, the widespread deployment of intermittent renewable generation, smart loads and devices, hierarchical and distributed control technologies, phasor measurement units, energy storage, and widespread usage of electric vehicles will require fundamental changes in methods and tools for the operation and planning of the power grid. The resulting new dynamic and stochastic behaviors will demand the inclusion of more complexity in modeling the power grid. Solving such complex models in the traditional computing environment will be a major challenge. Along with the increasing complexity of power system models, the increasing complexity of smart grid data further adds to the prevailing challenges. In this environment, as the number of smart sensors and meters in the power grid increases by multiple orders of magnitude, so do the volume and speed of the data. The information infrastructure will need to drastically change to support the exchange of enormous amounts of data as smart grid applications will need the capability to collect, assimilate, analyze and process the data, to meet real-time grid functions. High performance computing (HPC) holds the promise of enhancing these functions, but it is a resource that has not yet been fully explored and adopted in the power grid domain.

  5. Computational modeling of high pressure combustion mechanism in scram accelerator

    Energy Technology Data Exchange (ETDEWEB)

    Choi, J.Y. [Pusan Nat. Univ. (Korea); Lee, B.J. [Pusan Nat. Univ. (Korea); Agency for Defense Development, Taejon (Korea); Jeung, I.S. [Pusan Nat. Univ. (Korea); Seoul National Univ. (Korea). Dept. of Aerospace Engineering

    2000-11-01

    A computational study was carried out to analyze high-pressure combustion in a scram accelerator. Fluid dynamic modeling was based on RANS equations for reactive flows, which were solved in a fully coupled manner using a fully implicit-upwind TVD scheme. For the accurate simulation of high-pressure combustion in a ram accelerator, a 9-species, 25-step fully detailed reaction mechanism was incorporated into the existing CFD code previously used for the ram accelerator studies. The mechanism is based on GRI-Mech. 2.11, which includes pressure-dependent reaction rate formulations indispensable for the correct prediction of induction time in a high-pressure environment. A real gas equation of state was also included to account for molecular interactions and real gas effects of high-pressure gases. The present combustion modeling is compared with previous 8-step and 19-step mechanisms with the ideal gas assumption. The results show that mixture ignition characteristics are very sensitive to the combustion mechanism, and different mechanisms result in different reactive flow-field characteristics that have significant relevance to the operation mode and the performance of the scram accelerator. (orig.)

  6. Computational Fluid Dynamics Analysis of High Injection Pressure Blended Biodiesel

    Science.gov (United States)

    Khalid, Amir; Jaat, Norrizam; Faisal Hushim, Mohd; Manshoor, Bukhari; Zaman, Izzuddin; Sapit, Azwan; Razali, Azahari

    2017-08-01

    Biodiesel has great potential as a substitute for petroleum fuel for the purpose of achieving clean energy production and emission reduction. Among the methods that can control the combustion properties, controlling the fuel injection conditions is one of the most successful. The purpose of this study is to investigate the effect of high injection pressure of biodiesel blends on spray characteristics using Computational Fluid Dynamics (CFD). Injection pressure was observed at 220 MPa, 250 MPa and 280 MPa. The ambient temperature was held at 1050 K and the ambient pressure at 8 MPa in order to simulate the effect of boost pressure or a turbocharger during the combustion process. Computational Fluid Dynamics was used to investigate the spray characteristics of biodiesel blends such as spray penetration length, spray angle and the formation of the fuel-air mixture. The results show that, as injection pressure increases, a wider spray angle is produced by both the biodiesel blends and diesel fuel. The injection pressure strongly affects the mixture formation and the characteristics of the fuel spray; a longer spray penetration length promotes fuel and air mixing.

  7. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  8. Highly versatile computer-controlled television detector system

    Science.gov (United States)

    Kalata, K.

    1982-01-01

    A description is presented of a television detector system which has been designed to accommodate a wide range of applications. It is currently being developed for use in X-ray diffraction, X-ray astrophysics, and electron microscopy, but it is also well suited for astronomical observations. The image can be integrated in a large, high-speed memory system, in the memory of a computer system, or the target of the TV tube or CCD array. The detector system consists of a continuously scanned, intensified SIT vidicon with scan and processing electronics which generate a digital image that is integrated in the detector memory. Attention is given to details regarding the camera system, scan control and image processing electronics, the memory system, and aspects of detector performance.

  9. Power/energy use cases for high performance computing.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H.,; Kelly, Suzanne M; Hammond, Steven; Elmore, Ryan; Munch, Kristin

    2013-12-01

    Power and Energy have been identified as a first-order challenge for future extreme scale high performance computing (HPC) systems. In practice the breakthroughs will need to be provided by the hardware vendors. But to make the best use of the solutions in an HPC environment, periodic tuning by facility operators and software components will likely be required. This document describes the actions and interactions needed to maximize power resources. It strives to cover the entire operational space that an HPC system occupies. The descriptions are presented as formal use cases, as documented in the Unified Modeling Language Specification [1]. The document is intended to provide a common understanding to the HPC community of the necessary management and control capabilities. Assuming a common understanding can be achieved, the next step will be to develop a set of Application Programming Interfaces (APIs) that hardware vendors and software developers could utilize to steer power consumption.

  10. Derivation Of Probabilistic Damage Definitions From High Fidelity Deterministic Computations

    Energy Technology Data Exchange (ETDEWEB)

    Leininger, L D

    2004-10-26

    This paper summarizes a methodology used by the Underground Analysis and Planning System (UGAPS) at Lawrence Livermore National Laboratory (LLNL) for the derivation of probabilistic damage curves for US Strategic Command (USSTRATCOM). UGAPS uses high fidelity finite element and discrete element codes on the massively parallel supercomputers to predict damage to underground structures from military interdiction scenarios. These deterministic calculations can be riddled with uncertainty, especially when intelligence, the basis for this modeling, is uncertain. The technique presented here attempts to account for this uncertainty by bounding the problem with reasonable cases and using those bounding cases as a statistical sample. Probability of damage curves are computed and represented that account for uncertainty within the sample and enable the war planner to make informed decisions. This work is flexible enough to incorporate any desired damage mechanism and can utilize the variety of finite element and discrete element codes within the national laboratory and government contractor community.
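    To make the general idea concrete, the sketch below turns an ensemble of bounding deterministic runs into an empirical probability-of-damage (fragility) curve; the damage metric, threshold, loading parameter, and synthetic "bounding cases" are purely hypothetical and are not related to the UGAPS codes or data.

```python
import numpy as np

# Each bounding case is one deterministic run predicting a scalar damage metric
# as a function of a loading parameter. The sample of cases stands in for the
# uncertainty in intelligence and modeling inputs.
loads = np.linspace(0.0, 10.0, 41)                     # loading parameter grid
rng = np.random.default_rng(2)
# One row per bounding case: predicted damage metric at every load level.
cases = np.array([np.tanh((loads - rng.uniform(3, 7)) / rng.uniform(0.5, 2.0))
                  for _ in range(25)])
DAMAGE_THRESHOLD = 0.5                                 # "damaged" if metric exceeds this

# Probability-of-damage curve: fraction of bounding cases exceeding the
# threshold at each load level, i.e. an empirical fragility curve.
p_damage = (cases > DAMAGE_THRESHOLD).mean(axis=0)

for load, p in zip(loads[::8], p_damage[::8]):
    print(f"load={load:5.2f}  P(damage)={p:.2f}")
```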

  11. A Component Architecture for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, D E; Elwasif, W R; Kohl, J A; Epperly, T G W

    2003-01-21

    The Common Component Architecture (CCA) provides a means for developers to manage the complexity of large-scale scientific software systems and to move toward a "plug and play" environment for high-performance computing. The CCA model allows for a direct connection between components within the same process to maintain performance on inter-component calls. It is neutral with respect to parallelism, allowing components to use whatever means they desire to communicate within their parallel "cohort." We will discuss in detail the importance of performance in the design of the CCA and will analyze the performance costs associated with features of the CCA.

  12. Is the effect of job strain on myocardial infarction risk due to interaction between high psychological demands and low decision latitude?

    DEFF Research Database (Denmark)

    Hallqvist, J; Diderichsen, Finn; Theorell, T;

    1998-01-01

    The objectives are to examine if the excess risk of myocardial infarction from exposure to job strain is due to interaction between high demands and low control and to analyse what role such an interaction has regarding socioeconomic differences in risk of myocardial infarction. The material ... referents were included in the analysis. Exposure categories of job strain were formed from self reported questionnaire information. The results show that high demands and low decision latitude interact with a synergy index of 7.5 (95% C.I.: 1.8-30.6), providing empirical support for the core mechanism of the job strain model. Manual workers are more susceptible when exposed to job strain and its components, and this increased susceptibility explains about 25-50% of the relative excess risk among manual workers. Low decision latitude may also, as a causal link, explain about 30% of the socioeconomic...
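    For readers unfamiliar with the synergy index reported above, the sketch below shows the standard Rothman formula S = (RR_AB - 1) / ((RR_A - 1) + (RR_B - 1)); the relative risks used in the example are illustrative only, not the study's actual estimates.

```python
# Rothman's synergy index S = (RR_AB - 1) / ((RR_A - 1) + (RR_B - 1)),
# where RR_A and RR_B are the relative risks of each exposure alone and RR_AB
# is the relative risk of joint exposure, all relative to the doubly unexposed.
# S > 1 indicates synergy beyond additivity of the two excess risks.
def synergy_index(rr_a: float, rr_b: float, rr_ab: float) -> float:
    return (rr_ab - 1.0) / ((rr_a - 1.0) + (rr_b - 1.0))

# Illustrative numbers only (not the study's estimates): modest separate
# effects combined with a strong joint effect yield a large synergy index.
print(synergy_index(rr_a=1.2, rr_b=1.2, rr_ab=4.0))   # -> 7.5
```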

  13. A Heterogeneous High-Performance System for Computational and Computer Science

    Science.gov (United States)

    2016-11-15

    This DoD HBC/MI Equipment/Instrumentation grant was awarded in October 2014 for the purchase... Computing (HPC) course taught in the department of computer science as to attract more graduate students from many disciplines where their research...

  14. High-Performance Special-Purpose Computers in Science

    OpenAIRE

    1998-01-01

    The next decade will be an exciting time for computational physicists. After 50 years of being forced to use standardized commercial equipment, it will finally become relatively straightforward to adapt one's computing tools to one's own needs. The breakthrough that opens this new era is the now wide-spread availability of programmable chips that allow virtually every computational scientist to design his or her own special-purpose computer.

  15. FY 1995 Blue Book: High Performance Computing and Communications: Technology for the National Information Infrastructure

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — The Federal High Performance Computing and Communications HPCC Program was created to accelerate the development of future generations of high performance computers...

  16. Computation of order and volume fill rates for a base stock inventory control system with heterogeneous demand to investigate which customer class gets the best service

    DEFF Research Database (Denmark)

    Larsen, Christian

    We consider a base stock inventory control system serving two customer classes whose demands are generated by two independent compound renewal processes. We show how to derive order and volume fill rates of each class. Based on assumptions about first order stochastic dominance we prove when one customer class will get the best service. That theoretical result is validated through a series of numerical experiments which also reveal that it is quite robust.
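    The paper derives order and volume fill rates analytically for compound renewal demand; the Python sketch below only illustrates the two definitions by simulating a base stock system with one-for-one replenishment, constant lead time, full backordering, and compound Poisson demand (a special case of compound renewal), for two hypothetical customer classes with assumed parameters.

```python
import numpy as np

def simulate_fill_rates(S, lead_time, classes, horizon, seed=0):
    """Estimate order and volume fill rates per customer class under a base
    stock policy with one-for-one replenishment and full backordering.

    classes: dict name -> (mean_interarrival, order_size_sampler).
    Uses the standard property that, with base stock level S and constant
    lead time L, the net inventory just before an arrival at time t equals
    S minus the total demand during (t - L, t).
    """
    rng = np.random.default_rng(seed)
    arrivals = []
    for name, (mean_ia, size_fn) in classes.items():
        t = 0.0
        while True:                      # compound Poisson arrivals per class
            t += rng.exponential(mean_ia)
            if t > horizon:
                break
            arrivals.append((t, name, int(size_fn(rng))))
    arrivals.sort()
    times = np.array([a[0] for a in arrivals])
    sizes = np.array([a[2] for a in arrivals])

    stats = {name: [0, 0, 0.0, 0.0] for name in classes}  # orders, filled, units, units filled
    for t, name, q in arrivals:
        window = (times >= t - lead_time) & (times < t)    # lead-time demand before this order
        on_hand = max(S - sizes[window].sum(), 0)
        filled_units = min(q, on_hand)
        s = stats[name]
        s[0] += 1
        s[1] += int(filled_units == q)
        s[2] += q
        s[3] += filled_units
    return {name: {"order_fill": s[1] / s[0], "volume_fill": s[3] / s[2]}
            for name, s in stats.items()}

# Two hypothetical classes: frequent small orders vs. rare large orders.
result = simulate_fill_rates(
    S=8, lead_time=1.0,
    classes={"A": (0.5, lambda r: r.integers(1, 3)),
             "B": (2.0, lambda r: r.integers(2, 6))},
    horizon=5000.0)
print(result)
```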

  17. Contributions to Desktop Grid Computing : From High Throughput Computing to Data-Intensive Sciences on Hybrid Distributed Computing Infrastructures

    OpenAIRE

    Fedak, Gilles

    2015-01-01

    Since the mid 90’s, Desktop Grid Computing - i.e. the idea of using a large number of remote PCs distributed on the Internet to execute large parallel applications - has proved to be an efficient paradigm to provide large computational power at a fraction of the cost of a dedicated computing infrastructure. This document presents my contributions over the last decade to broaden the scope of Desktop Grid Computing. My research has followed three different directions. The first direction has ...

  18. HiFi-MBQC High Fidelity Measurement-Based Quantum Computing using Superconducting Detectors

    Science.gov (United States)

    2016-04-04

    computer. We exploit the conceptual framework of measurement-based quantum computation that enables a client to delegate a computation to a quantum... (Report AFRL-AFOSR-UK-TR-2016-0006: HiFi-MBQC High Fidelity Measurement-Based Quantum Computing using Superconducting Detectors; Philip Walther, UNIVERSITT...; contract FA8655-11-1-3004)

  19. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes.  Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  20. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  1. High Performance Input/Output Systems for High Performance Computing and Four-Dimensional Data Assimilation

    Science.gov (United States)

    Fox, Geoffrey C.; Ou, Chao-Wei

    1997-01-01

    The approach of this task was to apply leading parallel computing research to a number of existing techniques for assimilation, and to extract parameters indicating where and how input/output limits computational performance. Detailed knowledge of the application problems was gained by: 1. developing a parallel input/output system specifically for this application; 2. extracting the important input/output characteristics of data assimilation problems; and 3. building these characteristics as parameters into our runtime library (Fortran D/High Performance Fortran) for parallel input/output support.

  2. COMPUTER APPROACHES TO WHEAT HIGH-THROUGHPUT PHENOTYPING

    Directory of Open Access Journals (Sweden)

    Afonnikov D.

    2012-08-01

    The growing need for rapid and accurate approaches for large-scale assessment of phenotypic characters in plants becomes more and more obvious in the studies looking into relationships between genotype and phenotype. This need is due to the advent of high throughput methods for analysis of genomes. Nowadays, any genetic experiment involves data on thousands and tens of thousands of plants. Traditional ways of assessing most phenotypic characteristics (those relying on the eye, the touch, the ruler) are hardly effective on samples of such sizes. Modern approaches seek to take advantage of automated phenotyping, which warrants a much more rapid data acquisition, higher accuracy of the assessment of phenotypic features, measurement of new parameters of these features and exclusion of human subjectivity from the process. Additionally, automation allows measurement data to be rapidly loaded into computer databases, which reduces data processing time. In this work, we present the WheatPGE information system designed to solve the problem of integration of genotypic and phenotypic data and parameters of the environment, as well as to analyze the relationships between the genotype and phenotype in wheat. The system is used to consolidate miscellaneous data on a plant for storing and processing various morphological traits and genotypes of wheat plants as well as data on various environmental factors. The system is available at www.wheatdb.org. Its potential in genetic experiments has been demonstrated in high-throughput phenotyping of wheat leaf pubescence.

  3. High performance computation on beam dynamics problems in high intensity compact cyclotrons

    Institute of Scientific and Technical Information of China (English)

    ADELMANN; Andreas

    2011-01-01

    This paper presents the research progress in the beam dynamics problems for future high intensity compact cyclotrons by utilizing the state-of-the-art high performance computation technology. A "Start-to-Stop" model, which includes both the interaction of the internal particles of a single bunch and the mutual interaction of neighboring multiple bunches in the radial direction, is established for compact cyclotrons with multi-turn extraction. This model is then implemented in OPAL-CYCL, which is a 3D object-oriented parallel code for large scale particle simulations in cyclotrons. In addition, to meet the running requirement of parallel computation, we have constructed a small scale HPC cluster system and tested its performance. Finally, the high intensity beam dynamics problems in the 100 MeV compact cyclotron, which is being constructed at CIAE, are studied using this code and some conclusions are drawn.

  4. High Performance Computing Facility Operational Assessment, FY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL

    2011-08-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.5 billion core hours in calendar year (CY) 2010 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Scientific achievements by OLCF users range from collaboration with university experimentalists to produce a working supercapacitor that uses atom-thick sheets of carbon materials to finely determining the resolution requirements for simulations of coal gasifiers and their components, thus laying the foundation for development of commercial-scale gasifiers. OLCF users are pushing the boundaries with software applications sustaining more than one petaflop of performance in the quest to illuminate the fundamental nature of electronic devices. Other teams of researchers are working to resolve predictive capabilities of climate models, to refine and validate genome sequencing, and to explore the most fundamental materials in nature - quarks and gluons - and their unique properties. Details of these scientific endeavors - not possible without access to leadership-class computing resources - are detailed in Section 4 of this report and in the INCITE in Review. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. This Operational Assessment Report (OAR) will delineate the policies, procedures, and innovations implemented by the OLCF to continue delivering a petaflop-scale resource for cutting-edge research. The 2010 operational assessment of the OLCF yielded recommendations that have been addressed (Reference Section 1) and

  5. Worksite interventions for preventing physical deterioration among employees in job-groups with high physical work demands: Background, design and conceptual model of FINALE

    Directory of Open Access Journals (Sweden)

    Mortensen Ole S

    2010-03-01

    Background: A mismatch between individual physical capacities and physical work demands enhances the risk of musculoskeletal disorders, poor work ability and sickness absence, termed physical deterioration. However, effective intervention strategies for preventing physical deterioration in job groups with high physical demands remain to be established. This paper describes the background, design and conceptual model of the FINALE programme, a framework for health-promoting interventions at 4 Danish job groups (i.e. cleaners, health-care workers, construction workers and industrial workers) characterized by high physical work demands, musculoskeletal disorders, poor work ability and sickness absence. Methods/Design: A novel approach of the FINALE programme is that the interventions, i.e. 3 randomized controlled trials (RCTs) and 1 exploratory case-control study, are tailored to the physical work demands, physical capacities and health profile of workers in each job group. The RCT among cleaners, characterized by repetitive work tasks and musculoskeletal disorders, aims at making the cleaners less susceptible to musculoskeletal disorders by physical coordination training or cognitive behavioral theory based training (CBTr). Because health-care workers are reported to have a high prevalence of overweight and heavy lifting, the aim of that RCT is long-term weight loss by combined physical exercise training, CBTr and diet. In construction work, characterized by heavy lifting, pushing and pulling, the RCT aims at improving physical capacity and promoting musculoskeletal and cardiovascular health. At the industrial workplace characterized by repetitive work tasks, the intervention aims at reducing physical exertion and musculoskeletal disorders by combined physical exercise training, CBTr and participatory ergonomics. The overall aim of the FINALE programme is to improve the safety margin between individual resources (i.e. physical capacities, and

  6. Diagnostic value of high resolutional computed tomography of spine

    Energy Technology Data Exchange (ETDEWEB)

    Yang, S. M.; Im, S. K.; Sohn, M. H.; Lim, K. Y.; Kim, J. K.; Choi, K. C. [Jeonbug National University College of Medicine, Seoul (Korea, Republic of)

    1984-03-15

    Non-enhanced high resolution computed tomography provides clear visualization of soft tissue in the canal and bony details of the spine, particularly of the lumbar spine. We observed 70 cases of spine CT using a GE CT/T 8800 scanner during the period from Dec. 1982 to Sep. 1983 at Jeonbug National University Hospital. The results were as follows: 1. The sex distribution of cases was 55 males and 15 females; age was from 17 years to 67 years; sites were 11 cervical spine, 5 thoracic spine and 54 lumbosacral spine. 2. CT diagnosis showed 44 cases of lumbar disc herniation, 7 cases of degenerative disease, 3 cases of spine fracture and 1 case each of cord tumor, metastatic tumor, spontaneous epidural hemorrhage, epidural abscess, spine tbc., and meningocele with diastematomyelia. 3. Sites of herniated nucleus pulposus were 34 cases (59.6%) at the L4-5 interspace and 20 cases (35.1%) at the L5-S1 interspace. 13 cases (29.5%) of lumbar disc herniation disclosed multiple lesions. Locations of herniation were central type in 28 cases (49.1%), right-central type in 12 cases (21.2%), left-central type in 11 cases (19.2%) and far lateral type in 6 cases (10.5%). 4. CT findings of herniated nucleus pulposus were as follows: focal protrusion of the posterior disc margin and obliteration of anterior epidural fat in all cases, dural sac indentation in 26 cases (45.6%), soft tissue mass in epidural fat in 21 cases (36.8%), and displacement or compression of the nerve root sheath in 12 cases (21%). 5. Multiplanar reformatted images and Blink mode provide more effective evaluation of the definite level and longitudinal dimension of lesions, such as obscure disc herniation, spine fracture, cord tumor and epidural abscess. 6. Non-enhanced and enhanced high resolutional computed tomography were effectively useful in demonstrating compression or displacement of the spinal cord and nerve root, and in examining congenital anomalies such as meningocele and primary or metastatic spinal lesions.

  7. Big Data and High-Performance Computing in Global Seismology

    Science.gov (United States)

    Bozdag, Ebru; Lefebvre, Matthieu; Lei, Wenjie; Peter, Daniel; Smith, James; Komatitsch, Dimitri; Tromp, Jeroen

    2014-05-01

    Much of our knowledge of Earth's interior is based on seismic observations and measurements. Adjoint methods provide an efficient way of incorporating 3D full wave propagation in iterative seismic inversions to enhance tomographic images and thus our understanding of processes taking place inside the Earth. Our aim is to take adjoint tomography, which has been successfully applied to regional and continental scale problems, further to image the entire planet. This is one of the extreme imaging challenges in seismology, mainly due to the intense computational requirements and vast amount of high-quality seismic data that can potentially be assimilated. We have started low-resolution inversions (T > 30 s and T > 60 s for body and surface waves, respectively) with a limited data set (253 carefully selected earthquakes and seismic data from permanent and temporary networks) on Oak Ridge National Laboratory's Cray XK7 "Titan" system. Recent improvements in our 3D global wave propagation solvers, such as a GPU version of the SPECFEM3D_GLOBE package, will enable us to perform higher-resolution (T > 9 s) and longer duration (~180 m) simulations to take advantage of high-frequency body waves and major-arc surface waves, thereby improving imbalanced ray coverage as a result of the uneven global distribution of sources and receivers. Our ultimate goal is to use all earthquakes in the global CMT catalogue within the magnitude range of our interest and data from all available seismic networks. To take full advantage of computational resources, we need a solid framework to manage big data sets during numerical simulations, pre-processing (i.e., data requests and quality checks, processing data, window selection, etc.) and post-processing (i.e., pre-conditioning and smoothing kernels, etc.). We address the bottlenecks in our global seismic workflow, which are mainly coming from heavy I/O traffic during simulations and the pre- and post-processing stages, by defining new data

  2. Numerical Computation of High Dimensional Solitons Via Darboux Transformation

    Institute of Scientific and Technical Information of China (English)

    Zixiang ZHOU

    1997-01-01

    Darboux transformation gives explicit soliton solutions of nonlinear partial differential equations. Using numerical computation in each step of constructing the Darboux transformation, one can obtain the graphs of the solitons practically. In n dimensions (n≥3), this method greatly increases the speed and reduces the memory usage of the computation compared to software for algebraic computation. A technical problem concerning floating-point overflow is discussed.

  9. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  10. An Embedded System for applying High Performance Computing in Educational Learning Activity

    OpenAIRE

    Irene Erlyn Wina Rachmawan; Nurul Fahmi; Edi Wahyu Widodo; Samsul Huda; M. Unggul Pamenang; M. Choirur Roziqin; Andri Permana W.; Stritusta Sukaridhoto; Dadet Pramadihanto

    2016-01-01

    HPC (High Performance Computing) has become more popular in the last few years. With the benefit of high computational power, HPC has an impact on industry, scientific research and educational activities. Implementing HPC as part of a university curriculum can consume a lot of resources, because well-known HPC systems use personal computers or servers. Using PCs as the practical modules requires great resources and space. This paper presents an innovative high performance computing c...

  11. CAREER GUIDE FOR DEMAND OCCUPATIONS.

    Science.gov (United States)

    LEE, E.R.; WELCH, JOHN L.

    THIS PUBLICATION UPDATES THE "CAREER GUIDE FOR DEMAND OCCUPATIONS" PUBLISHED IN 1959 AND PROVIDES COUNSELORS WITH INFORMATION ABOUT OCCUPATIONS IN DEMAND IN MANY AREAS WHICH REQUIRE PREEMPLOYMENT TRAINING. IT PRESENTS, IN COLUMN FORM, THE EDUCATION AND OTHER TRAINING USUALLY REQUIRED BY EMPLOYERS, HIGH SCHOOL SUBJECTS OF PARTICULAR PERTINENCE TO…

  12. Multi-Language Programming Environments for High Performance Java Computing

    Directory of Open Access Journals (Sweden)

    Vladimir Getov

    1999-01-01

    Recent developments in processor capabilities, software tools, programming languages and programming paradigms have brought about new approaches to high performance computing. A steadfast component of this dynamic evolution has been the scientific community’s reliance on established scientific packages. As a consequence, programmers of high‐performance applications are reluctant to embrace evolving languages such as Java. This paper describes the Java‐to‐C Interface (JCI) tool, which provides application programmers wishing to use Java with immediate accessibility to existing scientific packages. The JCI tool also facilitates rapid development and reuse of existing code. These benefits are provided at minimal cost to the programmer. While beneficial to the programmer, the additional advantages of mixed‐language programming in terms of application performance and portability are addressed in detail within the context of this paper. In addition, we discuss how the JCI tool is complementing other ongoing projects such as IBM’s High‐Performance Compiler for Java (HPCJ) and IceT’s metacomputing environment.

  13. High-definition three-dimensional television disparity map computation

    Science.gov (United States)

    Chammem, Afef; Mitrea, Mihai; Prêteux, Françoise

    2012-10-01

    By reconsidering some approaches inherited from two-dimensional video and adapting them to stereoscopic video content and to the peculiarities of the human visual system, a new disparity map is designed. First, the inner relation between the left and right views is modeled by weights discriminating between horizontal and vertical disparities. Second, the block matching operation is achieved by considering a visually related measure (normalized cross-correlation) instead of the traditional pixel differences (mean squared error or sum of absolute differences). The advanced three-dimensional video new three-step search (3DV-NTSS) disparity map is benchmarked against two state-of-the-art algorithms, namely NTSS and full-search MPEG (FS-MPEG), by successively considering two corpora. The first corpus was organized during the 3DLive French national project and regroups 20 min of stereoscopic video sequences. The second one, of similar size, is provided by the MPEG community. The experimental results demonstrate the effectiveness of 3DV-NTSS in both reconstructed image quality (average gains between 3% and 7% in both PSNR and structural similarity, with a singular exception) and computational cost (search operation number reduced by average factors between 1.3 and 13). The 3DV-NTSS was finally validated by designing a watermarking method for high definition 3-D TV content protection.
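    To make the choice of matching criterion concrete, the sketch below implements plain full-search block matching with normalized cross-correlation on synthetic data; the paper's 3DV-NTSS additionally replaces the exhaustive search with a weighted new-three-step search, which is not reproduced here, and the block size and disparity range are assumptions.

```python
import numpy as np

def ncc(a, b, eps=1e-8):
    """Normalized cross-correlation between two equally sized blocks."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + eps))

def disparity_for_block(left, right, row, col, block=8, max_disp=32):
    """Horizontal disparity of one block via exhaustive NCC matching."""
    ref = left[row:row + block, col:col + block]
    best_d, best_score = 0, -np.inf
    for d in range(0, max_disp + 1):
        if col - d < 0:
            break
        cand = right[row:row + block, col - d:col - d + block]
        score = ncc(ref, cand)
        if score > best_score:
            best_d, best_score = d, score
    return best_d

# Toy usage: a random texture shifted horizontally by 5 pixels.
rng = np.random.default_rng(3)
right = rng.random((64, 96))
left = np.roll(right, 5, axis=1)     # left view shifted by +5 columns
print(disparity_for_block(left, right, row=20, col=40))   # -> 5
```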

  14. Enabling Efficient Climate Science Workflows in High Performance Computing Environments

    Science.gov (United States)

    Krishnan, H.; Byna, S.; Wehner, M. F.; Gu, J.; O'Brien, T. A.; Loring, B.; Stone, D. A.; Collins, W.; Prabhat, M.; Liu, Y.; Johnson, J. N.; Paciorek, C. J.

    2015-12-01

    A typical climate science workflow often involves a combination of acquisition of data, modeling, simulation, analysis, visualization, publishing, and storage of results. Each of these tasks provides a myriad of challenges when running in a high performance computing environment such as Hopper or Edison at NERSC. Hurdles such as data transfer and management, job scheduling, parallel analysis routines, and publication require a lot of forethought and planning to ensure that proper quality control mechanisms are in place. These steps require effectively utilizing a combination of well tested and newly developed functionality to move data, perform analysis, apply statistical routines, and finally, serve results and tools to the greater scientific community. As part of the CAlibrated and Systematic Characterization, Attribution and Detection of Extremes (CASCADE) project we highlight a stack of tools our team utilizes and has developed to ensure that large scale simulation and analysis work are commonplace and provide operations that assist in everything from generation/procurement of data (HTAR/Globus) to automating publication of results to portals like the Earth Systems Grid Federation (ESGF), all while executing everything in between in a scalable environment in a task parallel way (MPI). We highlight the use and benefit of these tools by showing several climate science analysis use cases they have been applied to.

  15. Pulmonary high-resolution computed tomography findings in nephropathia epidemica

    Energy Technology Data Exchange (ETDEWEB)

    Paakkala, Antti, E-mail: antti.paakkala@pshp.fi [Medical Imaging Centre, Tampere University Hospital, 33521 Tampere (Finland); Jaervenpaeae, Ritva, E-mail: ritva.jarvenpaa@pshp.fi [Medical Imaging Centre, Tampere University Hospital, 33521 Tampere (Finland); Maekelae, Satu, E-mail: satu.marjo.makela@uta.fi [Department of Internal Medicine, Tampere University Hospital, 33521 Tampere (Finland); Medical School, University of Tampere, 33521 Tampere (Finland); Huhtala, Heini, E-mail: heini.huhtala@uta.fi [School of Public Health, University of Tampere, 33521 Tampere (Finland); Mustonen, Jukka, E-mail: jukka.mustonen@uta.fi [Department of Internal Medicine, Tampere University Hospital, 33521 Tampere (Finland); Medical School, University of Tampere, 33521 Tampere (Finland)

    2012-08-15

    Purpose: To evaluate lung high-resolution computed tomography (HRCT) findings in patients with Puumala hantavirus-induced nephropathia epidemica (NE), and to determine if these findings correspond to chest radiograph findings. Materials and methods: HRCT findings and clinical course were studied in 13 hospital-treated NE patients. Chest radiograph findings were studied in 12 of them. Results: Twelve patients (92%) showed lung parenchymal abnormalities in HRCT, while only 8 had changes in their chest radiography. Atelectasis, pleural effusion, intralobular and interlobular septal thickening were the most common HRCT findings. Ground-glass opacification (GGO) was seen in 4 and hilar and mediastinal lymphadenopathy in 3 patients. Atelectasis and pleural effusion were also mostly seen in chest radiographs, other findings only in HRCT. Conclusion: Almost every NE patient showed lung parenchymal abnormalities in HRCT. The most common findings of lung involvement in NE can be defined as accumulation of pleural fluid and atelectasis and intralobular and interlobular septal thickening, most profusely in the lower parts of the lung. As a novel finding, lymphadenopathy was seen in a minority, probably related to capillary leakage and overall fluid overload. Pleural effusion is not the prominent feature in other viral pneumonias, whereas intralobular and interlobular septal thickening are characteristic of other viral pulmonary infections as well. Lung parenchymal findings in HRCT can thus be taken not to be disease-specific in NE and HRCT is useful only for scientific purposes.

  16. High Speed Computational Ghost Imaging via Spatial Sweeping

    Science.gov (United States)

    Wang, Yuwang; Liu, Yang; Suo, Jinli; Situ, Guohai; Qiao, Chang; Dai, Qionghai

    2017-01-01

    Computational ghost imaging (CGI) achieves single-pixel imaging by using a Spatial Light Modulator (SLM) to generate structured illuminations for spatially resolved information encoding. The imaging speed of CGI is limited by the modulation frequency of available SLMs, which holds back its practical applications. This paper proposes to bypass this limitation by trading off the SLM’s redundant spatial resolution for multiplication of the modulation frequency. Specifically, a pair of galvanic mirrors sweeping across the high-resolution SLM multiplies the modulation frequency within the spatial resolution gap between the SLM and the final reconstruction. A proof-of-principle setup with two mid-range galvanic mirrors achieves ghost imaging as fast as 42 Hz at 80 × 80-pixel resolution, 5 times faster than the state of the art, and holds potential for a further order-of-magnitude multiplication through hardware upgrades. Our approach brings a significant improvement in the imaging speed of ghost imaging and pushes it towards practical applications. PMID:28358010
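    A minimal sketch of the computational ghost imaging reconstruction principle (correlating bucket-detector fluctuations with the known illumination patterns) is given below using purely synthetic patterns and a synthetic object; it does not model the galvanic-mirror sweeping proposed in the paper, and the resolution and pattern count are arbitrary.

```python
import numpy as np

# Minimal computational ghost imaging sketch (differential correlation
# reconstruction) with synthetic data; in the paper the structured patterns
# come from an SLM swept by galvanic mirrors.
rng = np.random.default_rng(4)
H = W = 32
n_patterns = 8000

# Unknown object: a bright square on a dark background.
obj = np.zeros((H, W))
obj[10:22, 12:24] = 1.0

# Random binary illumination patterns and the bucket (single-pixel) signals.
patterns = rng.integers(0, 2, size=(n_patterns, H, W)).astype(float)
bucket = (patterns * obj).sum(axis=(1, 2))            # one scalar per pattern

# Reconstruction: correlate fluctuations of the bucket signal with the patterns.
recon = np.tensordot(bucket - bucket.mean(), patterns, axes=1) / n_patterns

print("correlation with ground truth:",
      round(float(np.corrcoef(recon.ravel(), obj.ravel())[0, 1]), 3))
```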

  17. High performance computing network for cloud environment using simulators

    CERN Document Server

    Singh, N Ajith

    2012-01-01

    Cloud computing is the next generation of computing. Adopting cloud computing is like signing up for a new kind of website: the GUI that controls the cloud directly controls the hardware resources and your application. The difficult part of cloud computing is deploying it in a real environment. It is hard to know the exact cost and resource requirements until the service is actually purchased, and likewise whether it will support an existing application from a traditional data center or whether a new application has to be designed for the cloud computing environment. Security, latency and fault tolerance are some of the parameters that need careful attention before deployment; normally these are known only after deployment, but with simulation the experiments can be done before deploying to the real environment. Through simulation we can understand the real cloud computing environment and, after successful results, start deploying the application in the cloud computing environment. By using the simulator it...

  18. Computer Science in High School Graduation Requirements. ECS Education Trends

    Science.gov (United States)

    Zinth, Jennifer Dounay

    2015-01-01

    Computer science and coding skills are widely recognized as a valuable asset in the current and projected job market. The Bureau of Labor Statistics projects 37.5 percent growth from 2012 to 2022 in the "computer systems design and related services" industry--from 1,620,300 jobs in 2012 to an estimated 2,229,000 jobs in 2022. Yet some…

  19. Using a Computer Animation to Teach High School Molecular Biology

    Science.gov (United States)

    Rotbain, Yosi; Marbach-Ad, Gili; Stavy, Ruth

    2008-01-01

    We present an active way to use a computer animation in secondary molecular genetics class. For this purpose we developed an activity booklet that helps students to work interactively with a computer animation which deals with abstract concepts and processes in molecular biology. The achievements of the experimental group were compared with those…

  20. A comprehensive approach to decipher biological computation to achieve next generation high-performance exascale computing.

    Energy Technology Data Exchange (ETDEWEB)

    James, Conrad D.; Schiess, Adrian B.; Howell, Jamie; Baca, Michael J.; Partridge, L. Donald; Finnegan, Patrick Sean; Wolfley, Steven L.; Dagel, Daryl James; Spahn, Olga Blum; Harper, Jason C.; Pohl, Kenneth Roy; Mickel, Patrick R.; Lohn, Andrew; Marinella, Matthew

    2013-10-01

    The human brain (volume = 1200 cm^3) consumes 20 W and is capable of performing > 10^16 operations/s. Current supercomputer technology has reached 10^15 operations/s, yet it requires 1500 m^3 and 3 MW, giving the brain a 10^12 advantage in operations/s/W/cm^3. Thus, to reach exascale computation, two achievements are required: 1) improved understanding of computation in biological tissue, and 2) a paradigm shift towards neuromorphic computing where hardware circuits mimic properties of neural tissue. To address 1), we will interrogate corticostriatal networks in mouse brain tissue slices, specifically with regard to their frequency filtering capabilities as a function of input stimulus. To address 2), we will instantiate biological computing characteristics such as multi-bit storage into hardware devices with future computational and memory applications. Resistive memory devices will be modeled, designed, and fabricated in the MESA facility in consultation with our internal and external collaborators.

  1. High throughput computing: a solution for scientific analysis

    Science.gov (United States)

    O'Donnell, M.

    2011-01-01

    Public land management agencies continually face resource management problems that are exacerbated by climate warming, land-use change, and other human activities. As the U.S. Geological Survey (USGS) Fort Collins Science Center (FORT) works with managers in U.S. Department of the Interior (DOI) agencies and other federal, state, and private entities, researchers are finding that the science needed to address these complex ecological questions across time and space produces substantial amounts of data. The additional data and the volume of computations needed to analyze it require expanded computing resources well beyond single- or even multiple-computer workstations. To meet this need for greater computational capacity, FORT investigated how to resolve the many computational shortfalls previously encountered when analyzing data for such projects. Our objectives included finding a solution that would:

  2. Voluntary medical male circumcision: matching demand and supply with quality and efficiency in a high-volume campaign in Iringa Region, Tanzania.

    Science.gov (United States)

    Mahler, Hally R; Kileo, Baldwin; Curran, Kelly; Plotkin, Marya; Adamu, Tigistu; Hellar, Augustino; Koshuma, Sifuni; Nyabenda, Simeon; Machaku, Michael; Lukobo-Durrell, Mainza; Castor, Delivette; Njeuhmeli, Emmanuel; Fimbo, Bennett

    2011-11-01

    The government of Tanzania has adopted voluntary medical male circumcision (VMMC) as an important component of its national HIV prevention strategy and is scaling up VMMC in eight regions nationwide, with the goal of reaching 2.8 million uncircumcised men by 2015. In a 2010 campaign lasting six weeks, five health facilities in Tanzania's Iringa Region performed 10,352 VMMCs, which exceeded the campaign's target by 72%, with an adverse event (AE) rate of 1%. HIV testing was almost universal during the campaign. Through the adoption of approaches designed to improve clinical efficiency-including the use of the forceps-guided surgical method, the use of multiple beds in an assembly line by surgical teams, and task shifting and task sharing-the campaign matched the supply of VMMC services with demand. Community mobilization and bringing client preparation tasks (such as counseling, testing, and client scheduling) out of the facility and into the community helped to generate demand. This case study suggests that a campaign approach can be used to provide high-volume quality VMMC services without compromising client safety, and provides a model for matching supply and demand for VMMC services in other settings.

  3. Voluntary medical male circumcision: matching demand and supply with quality and efficiency in a high-volume campaign in Iringa Region, Tanzania.

    Directory of Open Access Journals (Sweden)

    Hally R Mahler

    2011-11-01

    The government of Tanzania has adopted voluntary medical male circumcision (VMMC) as an important component of its national HIV prevention strategy and is scaling up VMMC in eight regions nationwide, with the goal of reaching 2.8 million uncircumcised men by 2015. In a 2010 campaign lasting six weeks, five health facilities in Tanzania's Iringa Region performed 10,352 VMMCs, which exceeded the campaign's target by 72%, with an adverse event (AE) rate of 1%. HIV testing was almost universal during the campaign. Through the adoption of approaches designed to improve clinical efficiency-including the use of the forceps-guided surgical method, the use of multiple beds in an assembly line by surgical teams, and task shifting and task sharing-the campaign matched the supply of VMMC services with demand. Community mobilization and bringing client preparation tasks (such as counseling, testing, and client scheduling) out of the facility and into the community helped to generate demand. This case study suggests that a campaign approach can be used to provide high-volume quality VMMC services without compromising client safety, and provides a model for matching supply and demand for VMMC services in other settings.

  4. A ground-up approach to High Throughput Cloud Computing in High-Energy Physics

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00245123; Ganis, Gerardo; Bagnasco, Stefano

    The thesis explores various practical approaches in making existing High Throughput computing applications common in High Energy Physics work on cloud-provided resources, as well as opening the possibility for running new applications. The work is divided into two parts: firstly we describe the work done at the computing facility hosted by INFN Torino to entirely convert former Grid resources into cloud ones, eventually running Grid use cases on top along with many others in a more flexible way. Integration and conversion problems are duly described. The second part covers the development of solutions for automating the orchestration of cloud workers based on the load of a batch queue and the development of HEP applications based on ROOT's PROOF that can adapt at runtime to a changing number of workers.

  5. Path Not Found: Disparities in Access to Computer Science Courses in California High Schools

    Science.gov (United States)

    Martin, Alexis; McAlear, Frieda; Scott, Allison

    2015-01-01

    "Path Not Found: Disparities in Access to Computer Science Courses in California High Schools" exposes one of the foundational causes of underrepresentation in computing: disparities in access to computer science courses in California's public high schools. This report provides new, detailed data on these disparities by student body…

  6. A Study of Quality of Service Communication for High-Speed Packet-Switching Computer Sub-Networks

    Science.gov (United States)

    Cui, Zhenqian

    1999-01-01

    With the development of high-speed networking technology, computer networks, including local-area networks (LANs), wide-area networks (WANs) and the Internet, are extending their traditional roles of carrying computer data. They are being used for Internet telephony, multimedia applications such as conferencing and video on demand, distributed simulations, and other real-time applications. LANs are even used for distributed real-time process control and computing as a cost-effective approach. Differing from traditional data transfer, these new classes of high-speed network applications (video, audio, real-time process control, and others) are delay sensitive. The usefulness of data depends not only on the correctness of received data, but also on the time that data are received. In other words, these new classes of applications require networks to provide guaranteed services or quality of service (QoS). Quality of service can be defined by a set of parameters and reflects a user's expectation about the underlying network's behavior. Traditionally, distinct services are provided by different kinds of networks. Voice services are provided by telephone networks, video services are provided by cable networks, and data transfer services are provided by computer networks. A single network providing different services is called an integrated-services network.

  7. COMPUTING

    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format and the samples were run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  8. Speeding up ecological and evolutionary computations in R; essentials of high performance computing for biologists.

    Science.gov (United States)

    Visser, Marco D; McMahon, Sean M; Merow, Cory; Dixon, Philip M; Record, Sydne; Jongejans, Eelke

    2015-03-01

    Computation has become a critical component of research in biology. A risk has emerged that computational and programming challenges may limit research scope, depth, and quality. We review various solutions to common computational efficiency problems in ecological and evolutionary research. Our review pulls together material that is currently scattered across many sources and emphasizes those techniques that are especially effective for typical ecological and environmental problems. We demonstrate how straightforward it can be to write efficient code and implement techniques such as profiling or parallel computing. We supply a newly developed R package (aprof) that helps to identify computational bottlenecks in R code and determine whether optimization can be effective. Our review is complemented by a practical set of examples and detailed Supporting Information material (S1-S3 Texts) that demonstrate large improvements in computational speed (ranging from 10.5 times to 14,000 times faster). By improving computational efficiency, biologists can feasibly solve more complex tasks, ask more ambitious questions, and include more sophisticated analyses in their research.
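
    The workflow the review advocates (profile first, then parallelise the dominant bottleneck) translates directly to other languages. A minimal sketch is given below in Python rather than the paper's R/aprof toolchain; the slow_kernel function is a hypothetical stand-in for an expensive per-item model run.

      # Illustrative sketch (Python rather than the paper's R/aprof): profile first,
      # then parallelise the identified bottleneck. Function names are hypothetical.
      import cProfile
      import pstats
      from multiprocessing import Pool


      def slow_kernel(x):
          # Deliberately naive per-item work standing in for an ecological model run.
          total = 0.0
          for i in range(1, 20000):
              total += (x % i) / i
          return total


      def serial(data):
          return [slow_kernel(x) for x in data]


      def parallel(data, workers=4):
          with Pool(workers) as pool:
              return pool.map(slow_kernel, data)


      if __name__ == "__main__":
          data = list(range(200))

          # Step 1: profile the serial version to confirm where time is spent.
          cProfile.run("serial(data)", "prof.out")
          pstats.Stats("prof.out").sort_stats("cumulative").print_stats(5)

          # Step 2: parallelise only the identified bottleneck.
          results = parallel(data)
          print(len(results), "results computed in parallel")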

  9. Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system.

    Science.gov (United States)

    Takada, Naoki; Shimobaba, Tomoyoshi; Nakayama, Hirotaka; Shiraki, Atsushi; Okada, Naohisa; Oikawa, Minoru; Masuda, Nobuyuki; Ito, Tomoyoshi

    2012-10-20

    To overcome the computational complexity of a computer-generated hologram (CGH), we implement an optimized CGH computation in our multi-graphics processing unit cluster system. Our system can calculate a CGH of 6,400×3,072 pixels from a three-dimensional (3D) object composed of 2,048 points in 55 ms. Furthermore, in the case of a 3D object composed of 4096 points, our system is 553 times faster than a conventional central processing unit (using eight threads).
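
    The underlying computation is the point-source (Fresnel) summation, in which every hologram pixel accumulates a phase contribution from every 3D object point. A reduced-size NumPy sketch of that summation is shown below; the wavelength, pixel pitch, resolution and point count are illustrative only and far smaller than the figures reported above.

      # Minimal NumPy sketch of the point-source (Fresnel) CGH summation that the
      # paper accelerates on a GPU cluster; sizes are reduced so it runs on a CPU.
      # All parameter values are illustrative only.
      import numpy as np

      WAVELENGTH = 532e-9       # metres (illustrative)
      PITCH = 8e-6              # hologram pixel pitch in metres (illustrative)
      WIDTH, HEIGHT = 512, 256  # far smaller than the paper's 6,400 x 3,072
      N_POINTS = 128            # far fewer than 2,048 object points


      def point_source_cgh(points):
          """Accumulate the phase contribution of each 3-D object point."""
          ys, xs = np.mgrid[0:HEIGHT, 0:WIDTH]
          x = (xs - WIDTH / 2) * PITCH
          y = (ys - HEIGHT / 2) * PITCH
          field = np.zeros((HEIGHT, WIDTH))
          for px, py, pz in points:
              r2 = (x - px) ** 2 + (y - py) ** 2
              field += np.cos(np.pi * r2 / (WAVELENGTH * pz))  # Fresnel phase term
          # Binarise to a two-level hologram, as is common for display devices.
          return (field > 0).astype(np.uint8)


      if __name__ == "__main__":
          rng = np.random.default_rng(0)
          pts = np.column_stack([
              rng.uniform(-1e-3, 1e-3, N_POINTS),   # x
              rng.uniform(-1e-3, 1e-3, N_POINTS),   # y
              rng.uniform(0.1, 0.2, N_POINTS),      # z (distance to hologram plane)
          ])
          hologram = point_source_cgh(pts)
          print(hologram.shape, hologram.mean())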

  10. FY 1992 Blue Book: Grand Challenges: High Performance Computing and Communications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  11. FY 1993 Blue Book: Grand Challenges 1993: High Performance Computing and Communications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  12. Taking the High Ground: A Case for Department of Defense Application of Public Cloud Computing

    Science.gov (United States)

    2011-06-01

    IT cannot be sustained in a declining budget environment with users demanding better services. Wyld captures the essence of much of the problem for...the DoD laboratory data centers into model versions of public providers. An open source project, called Eucalyptus (http://www.eucalyptus.com), would...be an excellent starting point for such a project. Eucalyptus is a software plat- form for implementing private cloud computing solutions on top of

  13. Memory Benchmarks for SMP-Based High Performance Parallel Computers

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, A B; de Supinski, B; Mueller, F; Mckee, S A

    2001-11-20

    As the speed gap between CPU and main memory continues to grow, memory accesses increasingly dominate the performance of many applications. The problem is particularly acute for symmetric multiprocessor (SMP) systems, where the shared memory may be accessed concurrently by a group of threads running on separate CPUs. Unfortunately, several key issues governing memory system performance in current systems are not well understood. Complex interactions between the levels of the memory hierarchy, buses or switches, DRAM back-ends, system software, and application access patterns can make it difficult to pinpoint bottlenecks and determine appropriate optimizations, and the situation is even more complex for SMP systems. To partially address this problem, we formulated a set of multi-threaded microbenchmarks for characterizing and measuring the performance of the underlying memory system in SMP-based high-performance computers. We report our use of these microbenchmarks on two important SMP-based machines. This paper has four primary contributions. First, we introduce a microbenchmark suite to systematically assess and compare the performance of different levels in SMP memory hierarchies. Second, we present a new tool based on hardware performance monitors to determine a wide array of memory system characteristics, such as cache sizes, quickly and easily; by using this tool, memory performance studies can be targeted to the full spectrum of performance regimes with many fewer data points than is otherwise required. Third, we present experimental results indicating that the performance of applications with large memory footprints remains largely constrained by memory. Fourth, we demonstrate that thread-level parallelism further degrades memory performance, even for the latest SMPs with hardware prefetching and switch-based memory interconnects.
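
    As a rough illustration of what such a microbenchmark measures, the sketch below drives several threads through large array copies and reports aggregate bandwidth; NumPy copies release the Python interpreter lock, so the threads stress memory concurrently. This is only a toy analogue of the suite described above, and the array size and thread count are arbitrary.

      # Rough sketch of an SMP memory-bandwidth microbenchmark in the spirit of the
      # suite described above. Absolute numbers are only indicative.
      import threading
      import time
      import numpy as np

      ARRAY_MB = 128
      N_THREADS = 4


      def copy_worker(src, dst, reps=4):
          for _ in range(reps):
              np.copyto(dst, src)       # streams ~ARRAY_MB of data per repetition


      def measure(n_threads):
          srcs = [np.ones(ARRAY_MB * 1024 * 1024 // 8) for _ in range(n_threads)]
          dsts = [np.empty_like(s) for s in srcs]
          threads = [threading.Thread(target=copy_worker, args=(s, d))
                     for s, d in zip(srcs, dsts)]
          start = time.perf_counter()
          for t in threads:
              t.start()
          for t in threads:
              t.join()
          elapsed = time.perf_counter() - start
          moved_gb = n_threads * 4 * 2 * ARRAY_MB / 1024   # read + write, 4 reps
          return moved_gb / elapsed


      if __name__ == "__main__":
          for n in (1, N_THREADS):
              print(f"{n} thread(s): ~{measure(n):.1f} GB/s aggregate")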

  14. Hybrid Computational Model for High-Altitude Aeroassist Vehicles Project

    Data.gov (United States)

    National Aeronautics and Space Administration — A hybrid continuum/noncontinuum computational model will be developed for analyzing the aerodynamics and heating on aeroassist vehicles. Unique features of this...

  15. Multicore Challenges and Benefits for High Performance Scientific Computing

    Directory of Open Access Journals (Sweden)

    Ida M.B. Nielsen

    2008-01-01

    Full Text Available Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
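
    The two-level structure of the hybrid message-passing/multi-threading model can be mimicked on a single machine: processes stand in for message-passing ranks, and the threaded BLAS inside NumPy supplies the per-rank threading. The block matrix multiply below is an illustration of that scheme under these assumptions, not the implementation discussed in the article.

      # Sketch of the two-level (distributed + threaded) idea behind a hybrid
      # message-passing/multi-threading matrix multiply. Processes stand in for
      # MPI ranks; NumPy's threaded BLAS supplies the within-node threading.
      import numpy as np
      from multiprocessing import Pool

      N = 1024          # global matrix dimension (illustrative)
      N_RANKS = 4       # "ranks"; each may also use BLAS threads internally


      def multiply_block(args):
          """Each rank multiplies its horizontal slice of A with the full B."""
          a_block, b = args
          return a_block @ b            # BLAS may use several threads here


      def hybrid_matmul(a, b, n_ranks=N_RANKS):
          blocks = np.array_split(a, n_ranks, axis=0)
          with Pool(n_ranks) as pool:
              parts = pool.map(multiply_block, [(blk, b) for blk in blocks])
          return np.vstack(parts)


      if __name__ == "__main__":
          rng = np.random.default_rng(1)
          A = rng.standard_normal((N, N))
          B = rng.standard_normal((N, N))
          C = hybrid_matmul(A, B)
          assert np.allclose(C, A @ B)
          print("hybrid block multiply matches the reference result")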

  16. Distributed metadata in a high performance computing environment

    Energy Technology Data Exchange (ETDEWEB)

    Bent, John M.; Faibish, Sorin; Zhang, Zhenhua; Liu, Xuezhao; Tang, Haiying

    2017-07-11

    A computer-executable method, system, and computer program product for managing metadata in a distributed storage system, wherein the distributed storage system includes one or more burst buffers enabled to operate with a distributed key-value store, the computer-executable method, system, and computer program product comprising receiving a request for metadata associated with a block of data stored in a first burst buffer of the one or more burst buffers in the distributed storage system, wherein the metadata is associated with a key-value, determining which of the one or more burst buffers stores the requested metadata, and upon determination that a first burst buffer of the one or more burst buffers stores the requested metadata, locating the key-value in a portion of the distributed key-value store accessible from the first burst buffer.
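
    The lookup pattern claimed above (hash a metadata key to the burst buffer that owns it, then read the key-value pair from that buffer's slice of the store) can be sketched as follows; the BurstBuffer class and the SHA-1-modulo placement rule are hypothetical simplifications, not the patented implementation.

      # Toy sketch of the lookup pattern described above: a key is hashed to decide
      # which burst buffer's key-value store holds the metadata, then the value is
      # fetched from that store. The BurstBuffer class is a hypothetical stand-in.
      import hashlib


      class BurstBuffer:
          """Stand-in for a burst buffer node hosting a slice of the KV store."""
          def __init__(self, name):
              self.name = name
              self.kv = {}

          def put(self, key, value):
              self.kv[key] = value

          def get(self, key):
              return self.kv.get(key)


      def owner(key, buffers):
          """Deterministically map a metadata key to one burst buffer."""
          digest = hashlib.sha1(key.encode()).hexdigest()
          return buffers[int(digest, 16) % len(buffers)]


      if __name__ == "__main__":
          buffers = [BurstBuffer(f"bb{i}") for i in range(4)]

          # Store metadata for a data block on whichever buffer owns the key.
          key = "block:/scratch/run42/output.0007"
          owner(key, buffers).put(key, {"size": 4096, "offset": 0})

          # Later, a metadata request is routed with the same hash and served.
          node = owner(key, buffers)
          print(node.name, "->", node.get(key))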

  17. High-speed packet switching network to link computers

    CERN Document Server

    Gerard, F M

    1980-01-01

    Virtually all of the experiments conducted at CERN use minicomputers today; some simply acquire data and store results on magnetic tape while others actually control experiments and help to process the resulting data. Currently there are more than two hundred minicomputers being used in the laboratory. In order to provide the minicomputer users with access to facilities available on mainframes and also to provide intercommunication between various experimental minicomputers, CERN opted for a packet switching network back in 1975. It was decided to use Modcomp II computers as switching nodes. The only software to be taken was a communications-oriented operating system called Maxcom. Today eight Modcomp II 16-bit computers plus six newer Classic minicomputers from Modular Computer Services have been purchased for the CERNET data communications networks. The current configuration comprises 11 nodes connecting more than 40 user machines to one another and to the laboratory's central computing facility. (0 refs).

  18. Role of high-performance computing in science education

    Energy Technology Data Exchange (ETDEWEB)

    Sabelli, N.H. (National Center for Supercomputing Applications, Champaign, IL (US))

    1991-01-01

    This article is a report on the continuing activities of a group committed to enhancing the development and use of computational science techniques in education. Interested readers are encouraged to contact members of the Steering Committee or the project coordinator.

  19. Benchmark Numerical Toolkits for High Performance Computing Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Computational codes in physics and engineering often use implicit solution algorithms that require linear algebra tools such as Ax=b solvers, eigenvalue,...

  20. Hybrid Computational Model for High-Altitude Aeroassist Vehicles Project

    Data.gov (United States)

    National Aeronautics and Space Administration — The proposed effort addresses a need for accurate computational models to support aeroassist and entry vehicle system design over a broad range of flight conditions...

  1. High Interactivity Visualization Software for Large Computational Data Sets Project

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to develop a collection of computer tools and libraries called SciViz that enable researchers to visualize large scale data sets on HPC resources remotely...

  2. Mathematical and computational modeling of a ferrofluid deformable mirror for high-contrast imaging

    Science.gov (United States)

    Lemmer, Aaron J.; Griffiths, Ian M.; Groff, Tyler D.; Rousing, Andreas W.; Kasdin, N. Jeremy

    2016-07-01

    Deformable mirrors (DMs) are an enabling and mission-critical technology in any coronagraphic instrument designed to directly image exoplanets. A new ferrofluid deformable mirror technology for high-contrast imaging is currently under development at Princeton, featuring a flexible optical surface manipulated by the local electromagnetic and global hydraulic actuation of a reservoir of ferrofluid. The ferrofluid DM is designed to prioritize high optical surface quality, high-precision/low-stroke actuation, and excellent low-spatial-frequency performance - capabilities that meet the unique demands of high-contrast coronagraphy in a space-based platform. To this end, the ferrofluid medium continuously supports the DM face sheet, a configuration that eliminates actuator print-through (or, quilting) by decoupling the nominal surface figure from the geometry of the actuator array. The global pressure control allows independent focus actuation. In this paper we describe an analytical model for the quasi-static deformation response of the DM face sheet to both magnetic and pressure actuation. These modeling efforts serve to identify the key design parameters and quantify their contributions to the DM response, model the relationship between actuation commands and DM surface-profile response, and predict performance metrics such as achievable spatial resolution and stroke precision for specific actuator configurations. Our theoretical approach addresses the complexity of the boundary conditions associated with mechanical mounting of the face sheet, and makes use of asymptotic approximations by leveraging the three distinct length scales in the problem - namely, the low-stroke (nm) actuation, face sheet thickness (mm), and mirror diameter (cm). In addition to describing the theoretical treatment, we report the progress of computational multiphysics simulations which will be useful in improving the model fidelity and in drawing conclusions to improve the design.

  3. High-performance computing at NERSC: Present and future

    Energy Technology Data Exchange (ETDEWEB)

    Koniges, A.E.

    1995-07-01

    The author describes the new T3D parallel computer at NERSC. The adaptive mesh ICF3D code is one of the current applications being ported and developed for use on the T3D. It has been stressed in other papers in this proceedings that the development environment and tools available on the parallel computer is similar to any planned for the future including networks of workstations.

  4. Providing a computing environment for a high energy physics workshop

    Energy Technology Data Exchange (ETDEWEB)

    Andrews, C.; Butler, J.; Carter, T.; DeMar, P.; Fagan, D.; Gibbons, R.; Grigaliunas, V.; Haibeck, M.; Haring, P.; Horvath, C.; Hughart, N.; Johnstad, H.; Jones, S.; Kreymer, A.; LeBrun, P.; Lego, A.; Leninger, M.; Loebel, L.; McNamara, S.; Nguyen, T.; Nicholls, J.; O' Reilly, C.; Pabrai, U.; Pfister, J.; Ritchie, D.; Roberts, L.; Sazama, C.; Wohlt, D. (Fermi National Accelerator Lab., Batavia, IL (USA)); Carven, R. (Wiscons

    1989-12-01

    Although computing facilities have been provided at conferences and workshops remote from the host institution for some years, the equipment provided has rarely been capable of providing for much more than simple editing and electronic mail. This report documents the effort involved in providing a local computing facility with world-wide networking capability for a physics workshop so that we and others can benefit from the knowledge gained through the experience.

  5. High Performance Computing Innovation Service Portal Study (HPC-ISP)

    Science.gov (United States)

    2009-04-01

    based electronic commerce interface for the goods and services available through the brokerage service. This infrastructure will also support the... electronic commerce backend functionality for third parties that want to sell custom computing services. • Tailored Industry Portals are web portals for...broker shown in Figure 8 is essentially a web server that provides remote access to computing and software resources through an electronic commerce

  6. On-Demand Single Photons with High Extraction Efficiency and Near-Unity Indistinguishability from a Resonantly Driven Quantum Dot in a Micropillar

    DEFF Research Database (Denmark)

    Ding, Xing; He, Yu; Duan, Z.-C.

    2016-01-01

    Scalable photonic quantum technologies require on-demand single-photon sources with simultaneously high levels of purity, indistinguishability, and efficiency. These key features, however, have only been demonstrated separately in previous experiments. Here, by s-shell pulsed resonant excitation… of a Purcell-enhanced quantum dot-micropillar system, we deterministically generate resonance fluorescence single photons which, at π pulse excitation, have an extraction efficiency of 66%, single-photon purity of 99.1%, and photon indistinguishability of 98.5%. Such a single-photon source for the first time combines…

  7. Demands Set Upon Modern Cartographic Visualization

    Directory of Open Access Journals (Sweden)

    Stanislav Frangeš

    2007-05-01

    Full Text Available Scientific cartography has the task of developing and researching new methods of cartographic visualization. General demands are set upon modern cartographic visualization, which encompasses digital cartography and computer graphics: legibility, clearness, accuracy, plainness and aesthetics. In this paper, it is explained in detail which conditions should be met in order to satisfy these general demands. In order to satisfy the demand of legibility, one should respect conditions of minimal sizes, appropriate graphical density and better differentiation of known features. The demand of clearness needs to be met by fulfilling conditions of simplicity, contrasting quality and layer arrangement of the cartographic representation. Accuracy, as a demand on cartographic visualization, can be divided into positional accuracy and the accuracy of signs. For fulfilling the demand of plainness, the conditions of symbolism, traditionalism and hierarchic organization should be met. The demand of aesthetics will be met if the conditions of beauty and harmony are fulfilled.

  8. High-level GPU computing with jacket for MATLAB and C/C++

    Science.gov (United States)

    Pryor, Gallagher; Lucey, Brett; Maddipatla, Sandeep; McClanahan, Chris; Melonakos, John; Venugopalakrishnan, Vishwanath; Patel, Krunal; Yalamanchili, Pavan; Malcolm, James

    2011-06-01

    We describe a software platform for the rapid development of general purpose GPU (GPGPU) computing applications within the MATLAB computing environment, C, and C++: Jacket. Jacket provides thousands of GPU-tuned function syntaxes within MATLAB, C, and C++, including linear algebra, convolutions, reductions, and FFTs as well as signal, image, statistics, and graphics libraries. Additionally, Jacket includes a compiler that translates MATLAB and C++ code to CUDA PTX assembly and OpenGL shaders on demand at runtime. A facility is also included to compile a domain specific version of the MATLAB language to CUDA assembly at build time. Jacket includes the first parallel GPU FOR-loop construction and the first profiler for comparative analysis of CPU and GPU execution times. Jacket provides full GPU compute capability on CUDA hardware and limited, image processing focused compute on OpenGL/ES (2.0 and up) devices for mobile and embedded applications.

  9. HIGH-PERFORMANCE COMPUTING FOR THE STUDY OF EARTH AND ENVIRONMENTAL SCIENCE MATERIALS USING SYNCHROTRON X-RAY COMPUTED MICROTOMOGRAPHY.

    Energy Technology Data Exchange (ETDEWEB)

    FENG,H.; JONES,K.W.; MCGUIGAN,M.; SMITH,G.J.; SPILETIC,J.

    2001-10-12

    Synchrotron x-ray computed microtomography (CMT) is a non-destructive method for examination of rock, soil, and other types of samples studied in the earth and environmental sciences. The high x-ray intensities of the synchrotron source make possible the acquisition of tomographic volumes at a high rate that requires the application of high-performance computing techniques for data reconstruction to produce the three-dimensional volumes, for their visualization, and for data analysis. These problems are exacerbated by the need to share information between collaborators at widely separated locations over both local and wide-area networks. A summary of the CMT technique and examples of applications are given here together with a discussion of the applications of high-performance computing methods to improve the experimental techniques and analysis of the data.

  10. Exploring the relationships between high involvement work system practices, work demands and emotional exhaustion : A multi-level study.

    NARCIS (Netherlands)

    Oppenauer, V.; van de Voorde, F.C.

    2017-01-01

    This study explores the impact of enacted high involvement work systems (HIWS) practices on employee emotional exhaustion. This study hypothesized that work overload and job responsibility mediate the relationship between HIWS practices (ability, motivation, opportunity and work design HIWS

  11. Issues in undergraduate education in computational science and high performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Marchioro, T.L. II; Martin, D. [Ames Lab., IA (United States)

    1994-12-31

    The ever increasing need for mathematical and computational literacy within their society and among members of the work force has generated enormous pressure to revise and improve the teaching of related subjects throughout the curriculum, particularly at the undergraduate level. The Calculus Reform movement is perhaps the best known example of an organized initiative in this regard. The UCES (Undergraduate Computational Engineering and Science) project, an effort funded by the Department of Energy and administered through the Ames Laboratory, is sponsoring an informal and open discussion of the salient issues confronting efforts to improve and expand the teaching of computational science as a problem oriented, interdisciplinary approach to scientific investigation. Although the format is open, the authors hope to consider pertinent questions such as: (1) How can faculty and research scientists obtain the recognition necessary to further excellence in teaching the mathematical and computational sciences? (2) What sort of educational resources--both hardware and software--are needed to teach computational science at the undergraduate level? Are traditional procedural languages sufficient? Are PCs enough? Are massively parallel platforms needed? (3) How can electronic educational materials be distributed in an efficient way? Can they be made interactive in nature? How should such materials be tied to the World Wide Web and the growing "Information Superhighway"?

  12. Failure detection in high-performance clusters and computers using chaotic map computations

    Science.gov (United States)

    Rao, Nageswara S.

    2015-09-01

    A programmable media includes a processing unit capable of independent operation in a machine that is capable of executing 10^18 floating point operations per second. The processing unit is in communication with a memory element and an interconnect that couples computing nodes. The programmable media includes a logical unit configured to execute arithmetic functions, comparative functions, and/or logical functions. The processing unit is configured to detect computing component failures, memory element failures and/or interconnect failures by executing programming threads that generate one or more chaotic map trajectories. The central processing unit or graphical processing unit is configured to detect a computing component failure, memory element failure and/or an interconnect failure through an automated comparison of signal trajectories generated by the chaotic maps.
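
    The comparison idea is straightforward to demonstrate in software: identical chaotic-map trajectories are generated on every node, and a node whose trajectory drifts away from the consensus is flagged. In the toy sketch below the "fault" is an injected perturbation, and the logistic map, tolerance and median reference are illustrative choices only.

      # Conceptual sketch of failure detection via chaotic-map trajectories: every
      # node iterates the same logistic map from the same seed, and any node whose
      # trajectory diverges is flagged. The injected fault is simulated here.
      import numpy as np


      def logistic_trajectory(x0, steps=200, r=3.99, fault_at=None, eps=1e-12):
          """Iterate x <- r x (1 - x); optionally inject a tiny fault mid-way."""
          x, traj = x0, []
          for i in range(steps):
              if fault_at is not None and i == fault_at:
                  x += eps                      # simulated bit-flip / soft error
              x = r * x * (1.0 - x)
              traj.append(x)
          return np.array(traj)


      def detect_failures(trajectories, tol=1e-6):
          """Compare every node's trajectory to the element-wise median."""
          stack = np.vstack(trajectories)
          reference = np.median(stack, axis=0)
          deviation = np.max(np.abs(stack - reference), axis=1)
          return [i for i, d in enumerate(deviation) if d > tol]


      if __name__ == "__main__":
          seed = 0.123456789
          nodes = [logistic_trajectory(seed) for _ in range(7)]
          nodes[3] = logistic_trajectory(seed, fault_at=50)   # one faulty node
          print("suspected faulty nodes:", detect_failures(nodes))

    Because chaos amplifies any perturbation exponentially, even a tiny injected error makes the faulty trajectory diverge well beyond the tolerance within a few dozen iterations.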

  13. High performance computing software package for multitemporal Remote-Sensing computations

    Directory of Open Access Journals (Sweden)

    Asaad Chahboun

    2010-10-01

    Full Text Available With the huge amount of satellite data currently stored, multitemporal remote sensing study is nowadays one of the most challenging fields of computer science. Multicore hardware support and multithreading can play an important role in speeding up algorithm computations. In the present paper, a software package, called the Multitemporal Software Package for Satellite Remote Sensing data (MSPSRS), has been developed for the multitemporal treatment of satellite remote sensing images in a standard format. For portability, the interface was developed using the Qt application framework and the core was developed integrating C++ classes. MSPSRS can run under different operating systems (i.e., Linux, Mac OS X, Windows, Embedded Linux, Windows CE, etc.). Final benchmark results, using multiple remote sensing biophysical indices, show a gain of up to 6X on a quad core i7 personal computer.
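
    The parallel pattern behind such a package (one biophysical index computed independently per acquisition date) is easy to illustrate. The sketch below uses Python processes and synthetic reflectance arrays in place of the package's C++/Qt implementation and real satellite scenes; NDVI is used as the example index.

      # Simplified sketch of the multitemporal, multicore pattern: one biophysical
      # index (NDVI) per acquisition date, computed in parallel. Synthetic arrays
      # replace real satellite scenes.
      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      DATES = 8                 # number of acquisition dates in the time series
      SHAPE = (1024, 1024)      # scene size (illustrative)


      def ndvi_for_date(seed):
          """Compute NDVI = (NIR - RED) / (NIR + RED) for one synthetic scene."""
          rng = np.random.default_rng(seed)
          red = rng.uniform(0.02, 0.4, SHAPE)
          nir = rng.uniform(0.1, 0.6, SHAPE)
          return (nir - red) / (nir + red + 1e-9)


      if __name__ == "__main__":
          with ProcessPoolExecutor() as pool:
              series = list(pool.map(ndvi_for_date, range(DATES)))
          stack = np.stack(series)                 # (date, row, col) time series
          print("mean NDVI per date:", np.round(stack.mean(axis=(1, 2)), 3))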

  14. Combined coagulation-flocculation and sequencing batch reactor with phosphorus adjustment for the treatment of high-strength landfill leachate: experimental kinetics and chemical oxygen demand fractionation.

    Science.gov (United States)

    El-Fadel, M; Matar, F; Hashisho, J

    2013-05-01

    The treatability of high-strength landfill leachate is challenging and relatively limited. This study examines the feasibility of treating high-strength landfill leachate (chemical oxygen demand [COD]: 7,760-11,770 mg/L, biochemical oxygen demand [BOD5]: 2,760-3,569 mg/L, total nitrogen [TN] = 980-1,160 mg/L) using a sequencing batch reactor (SBR) preceded by a coagulation-flocculation process with phosphorus nutritional balance under various mixing and aeration patterns. Simulations were also conducted to define kinetic parameters and COD fractionation. Removal efficiencies reached 89% for BOD5, 60% for COD, and 72% for TN, similar to and better than reported studies, albeit with a relatively lower hydraulic retention time (HRT) and solid retention time (SRT). The coupled experimental and simulation results contribute in filling a gap toward managing high-strength landfill leachate and providing guidelines for corresponding SBR applications. The treatability of high-strength landfill leachate, which is challenging and relatively limited, was demonstrated using a combined coagulation-flocculation with SBR technology and nutrient balance adjustment. The most suitable coagulant, kinetic design parameters, and COD fractionation were defined using coupled experimental and simulation results contributing in filling a gap toward managing high-strength leachate by providing guidelines for corresponding SBR applications and anticipating potential constraints related to the non-biodegradable COD fraction. In this context, while the combined coagulation-flocculation and SBR process improved removal efficiencies, posttreatment may be required for high-strength leachate, depending on discharge standards and ultimate usage of the treated leachate.

  15. Demand Forecasting Errors

    OpenAIRE

    Mackie, Peter; Nellthorp, John; Laird, James

    2005-01-01

    Demand forecasts form a key input to the economic appraisal. As such any errors present within the demand forecasts will undermine the reliability of the economic appraisal. The minimization of demand forecasting errors is therefore important in the delivery of a robust appraisal. This issue is addressed in this note by introducing the key issues, and error types present within demand fore...

  16. A Highly Efficient Parallel Algorithm for Computing the Fiedler Vector

    CERN Document Server

    Manguoglu, Murat

    2010-01-01

    The eigenvector corresponding to the second smallest eigenvalue of the Laplacian of a graph, known as the Fiedler vector, has a number of applications in areas that include matrix reordering, graph partitioning, protein analysis, data mining, machine learning, and web search. The computation of the Fiedler vector has been regarded as an expensive process as it involves solving a large eigenvalue problem. We present a novel and efficient parallel algorithm for computing the Fiedler vector of large graphs based on the Trace Minimization algorithm (Sameh et al.). We compare the parallel performance of our method with a multilevel scheme, designed specifically for computing the Fiedler vector, which is implemented in routine MC73_Fiedler of the Harwell Subroutine Library (HSL). In addition, we compare the quality of the Fiedler vector for the application of weighted matrix reordering and provide a metric for measuring the quality of reordering.
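
    For small graphs the Fiedler vector can be obtained directly from a dense eigensolver, which is enough to illustrate what the parallel Trace Minimization solver computes at much larger scale. The sketch below assumes NumPy and builds a two-clique test graph; it is not the parallel algorithm of the paper.

      # Small-scale sketch of computing the Fiedler vector (the eigenvector of the
      # graph Laplacian's second-smallest eigenvalue); the paper's contribution is
      # a parallel solver for graphs far too large for this dense approach.
      import numpy as np


      def fiedler_vector(adjacency):
          """Return (algebraic connectivity, Fiedler vector) of an undirected graph."""
          degrees = adjacency.sum(axis=1)
          laplacian = np.diag(degrees) - adjacency
          vals, vecs = np.linalg.eigh(laplacian)   # ascending eigenvalues
          return vals[1], vecs[:, 1]               # second-smallest eigenpair


      if __name__ == "__main__":
          # Two loosely coupled cliques: the Fiedler vector separates them by sign.
          n = 10
          block = np.ones((n, n)) - np.eye(n)
          A = np.zeros((2 * n, 2 * n))
          A[:n, :n] = block
          A[n:, n:] = block
          A[n - 1, n] = A[n, n - 1] = 1.0          # single bridging edge
          lam2, v = fiedler_vector(A)
          print("algebraic connectivity:", round(lam2, 4))
          print("partition by sign:", (v > 0).astype(int))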

  17. High-pressure fluid phase equilibria phenomenology and computation

    CERN Document Server

    Deiters, Ulrich K

    2012-01-01

    The book begins with an overview of the phase diagrams of fluid mixtures (fluid = liquid, gas, or supercritical state), which can show an astonishing variety when elevated pressures are taken into account; phenomena like retrograde condensation (single and double) and azeotropy (normal and double) are discussed. It then gives an introduction into the relevant thermodynamic equations for fluid mixtures, including some that are rarely found in modern textbooks, and shows how they can they be used to compute phase diagrams and related properties. This chapter gives a consistent and axiomatic approach to fluid thermodynamics; it avoids using activity coefficients. Further chapters are dedicated to solid-fluid phase equilibria and global phase diagrams (systematic search for phase diagram classes). The appendix contains numerical algorithms needed for the computations. The book thus enables the reader to create or improve computer programs for the calculation of fluid phase diagrams. introduces phase diagram class...

  18. A PROFICIENT MODEL FOR HIGH END SECURITY IN CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    R. Bala Chandar

    2014-01-01

    Full Text Available Cloud computing is an inspiring technology due to abilities such as ensuring scalable services and reducing the burden of local hardware and software management while increasing flexibility and scalability. A key trait of cloud services is the remote processing of data. Even though this technology offers a lot of services, there are a few concerns, such as misbehavior of server-side stored data, the data owner's loss of control over their own data, and the lack of access control over outsourced data as desired by the data owner. To handle these issues, we propose a new model to ensure data correctness for assurance of stored data, distributed accountability for authentication, and efficient access control of outsourced data for authorization. This model strengthens the correctness of data and helps to achieve cloud data integrity, supports the data owner in keeping control of their own data through tracking, and improves the access control of outsourced data.

  19. Short-term effects of implemented high intensity shoulder elevation during computer work

    Directory of Open Access Journals (Sweden)

    Madeleine Pascal

    2009-08-01

    Full Text Available Background: Work-site strength training sessions are shown effective to prevent and reduce neck-shoulder pain in computer workers, but are difficult to integrate in normal working routines. A solution for avoiding neck-shoulder pain during computer work may be to implement high intensity voluntary contractions during the computer work. However, it is unknown how this may influence productivity, rate of perceived exertion (RPE) as well as activity and rest of neck-shoulder muscles during computer work. The aim of this study was to investigate short-term effects of a high intensity contraction on productivity, RPE and upper trapezius activity and rest during computer work and a subsequent pause from computer work. Methods: 18 female computer workers performed 2 sessions of 15 min standardized computer mouse work preceded by 1 min pause with and without prior high intensity contraction of shoulder elevation. RPE was reported, productivity (drawings per min) measured, and bipolar surface electromyography (EMG) recorded from the dominant upper trapezius during pauses and sessions of computer work. Repeated measures ANOVA with Bonferroni corrected post-hoc tests was applied for the statistical analyses. Results: The main findings were that a high intensity shoulder elevation did not modify RPE, productivity or EMG activity of the upper trapezius during the subsequent pause and computer work. However, the high intensity contraction reduced the relative rest time of the uppermost (clavicular) trapezius part during the subsequent pause from computer work. Conclusion: Since a preceding high intensity shoulder elevation did not impose a negative impact on perceived effort, productivity or upper trapezius activity during computer work, implementation of high intensity contraction during computer work to prevent neck-shoulder pain may be possible without affecting the working routines. However, the unexpected reduction in clavicular trapezius rest during a…

  20. Investigation and Analysis of the Computer Application Level and Needs of New Students at Medical Colleges and Universities

    Institute of Scientific and Technical Information of China (English)

    王丽彬

    2012-01-01

    Society needs medical graduates with computer skills. By surveying the computer application level and needs of new students at medical colleges and universities, this study provides a reference for the teaching of computer courses at such institutions. Measures such as adjusting the teaching content, improving teaching methods and offering elective courses can meet medical students' computer application needs and raise their computer application level.

  1. Is the effect of job strain on myocardial infarction risk due to interaction between high psychological demands and low decision latitude? Results from Stockholm Heart Epidemiology Program (SHEEP).

    Science.gov (United States)

    Hallqvist, J; Diderichsen, F; Theorell, T; Reuterwall, C; Ahlbom, A

    1998-06-01

    The objectives are to examine if the excess risk of myocardial infarction from exposure to job strain is due to interaction between high demands and low control and to analyse what role such an interaction has regarding socioeconomic differences in risk of myocardial infarction. The material is a population-based case-referent study having incident first events of myocardial infarction as outcome (SHEEP: Stockholm Heart Epidemiology Program). The analysis is restricted to males 45-64 yr of age with a more detailed analysis confined to those still working at inclusion. In total, 1047 cases and 1450 referents were included in the analysis. Exposure categories of job strain were formed from self reported questionnaire information. The results show that high demands and low decision latitude interact with a synergy index of 7.5 (95% C.I.: 1.8-30.6) providing empirical support for the core mechanism of the job strain model. Manual workers are more susceptible when exposed to job strain and its components and this increased susceptibility explains about 25-50% of the relative excess risk among manual workers. Low decision latitude may also, as a causal link, explain about 30% of the socioeconomic difference in risk of myocardial infarction. The distinction between the interaction and the causal link mechanisms identifies new etiologic questions and intervention alternatives. The specific causes of the increased susceptibility among manual workers to job strain and its components seem to be an interesting and important research question.

  2. [Inhibitor development after early high exposure and cerebral haemorrhage. Costs and factor demand for a successful immunotolerance induction therapy].

    Science.gov (United States)

    Haubold, K; Moorthi, C; Bade, A; Niekrens, C; Auerswald, G

    2010-11-01

    Severe haemophilia A was diagnosed postpartum in a newborn. The mother was known to be a carrier (intron 22 inversion) and an uncle had a persistently high titer inhibitor after failed ITI. Due to a cephalhaematoma, a high-dose pdFVIII substitution was given within the first days after birth. At the age of six months a severe cerebral haemorrhage occurred, making a high-dose pdFVIII substitution and neurosurgical intervention necessary. Several days later a port-a-cath system was implanted. The development of a high titer inhibitor occurred six days later, and an ITI was started according to the Bonn Protocol. Initially rFVIIa was given in addition to the pdFVIII substitution. Seven days after the beginning of treatment the inhibitor was no longer detectable. At monthly intervals the FVIII dosage was reduced until the dosage complied with a prophylaxis in severe haemophilia A. The duration of the ITI was nine months. A total of 30 mg rFVIIa and 276,000 IU pdFVIII were used; costs in total: 280,173.60 Euro.

  3. Grid Computing

    Science.gov (United States)

    Foster, Ian

    2001-08-01

    The term "Grid Computing" refers to the use, for computational purposes, of emerging distributed Grid infrastructures: that is, network and middleware services designed to provide on-demand and high-performance access to all important computational resources within an organization or community. Grid computing promises to enable both evolutionary and revolutionary changes in the practice of computational science and engineering based on new application modalities such as high-speed distributed analysis of large datasets, collaborative engineering and visualization, desktop access to computation via "science portals," rapid parameter studies and Monte Carlo simulations that use all available resources within an organization, and online analysis of data from scientific instruments. In this article, I examine the status of Grid computing circa 2000, briefly reviewing some relevant history, outlining major current Grid research and development activities, and pointing out likely directions for future work. I also present a number of case studies, selected to illustrate the potential of Grid computing in various areas of science.

  4. Global nitrogen fertilizer supply and demand outlook

    Institute of Scientific and Technical Information of China (English)

    Michel; Prud'homme

    2005-01-01

    This paper presents a brief overview of world nitrogen fertilizer demand, highlights trends in the global and regional developments of production capacity and provides a medium-term perspective of the global nitrogen supply/demand balance.

  5. Application of High Performance Computing for Simulations of N-Dodecane Jet Spray with Evaporation

    Science.gov (United States)

    2016-11-01

    ARL-TR-7873 ● NOV 2016 ● US Army Research Laboratory ● Application of High Performance Computing for Simulations of N-Dodecane Jet Spray with Evaporation, by Luis …

  6. Analog computation through high-dimensional physical chaotic neuro-dynamics

    Science.gov (United States)

    Horio, Yoshihiko; Aihara, Kazuyuki

    2008-07-01

    Conventional von Neumann computers have difficulty in solving complex and ill-posed real-world problems. However, living organisms often face such problems in real life, and must quickly obtain suitable solutions through physical, dynamical, and collective computations involving vast assemblies of neurons. These highly parallel computations through high-dimensional dynamics (computation through dynamics) are completely different from the numerical computations on von Neumann computers (computation through algorithms). In this paper, we explore a novel computational mechanism with high-dimensional physical chaotic neuro-dynamics. We physically constructed two hardware prototypes using analog chaotic-neuron integrated circuits. These systems combine analog computations with chaotic neuro-dynamics and digital computation through algorithms. We used quadratic assignment problems (QAPs) as benchmarks. The first prototype utilizes an analog chaotic neural network with 800-dimensional dynamics. An external algorithm constructs a solution for a QAP using the internal dynamics of the network. In the second system, 300-dimensional analog chaotic neuro-dynamics drive a tabu-search algorithm. We demonstrate experimentally that both systems efficiently solve QAPs through physical chaotic dynamics. We also qualitatively analyze the underlying mechanism of the highly parallel and collective analog computations by observing global and local dynamics. Furthermore, we introduce spatial and temporal mutual information to quantitatively evaluate the system dynamics. The experimental results confirm the validity and efficiency of the proposed computational paradigm with the physical analog chaotic neuro-dynamics.

  7. On the Design of High-Rise Buildings for Multihazard: Fundamental Differences between Wind and Earthquake Demand

    Directory of Open Access Journals (Sweden)

    Aly Mousaad Aly

    2015-01-01

    Full Text Available In the past few decades, high-rise buildings have received a renewed interest in many city business locations, where land is scarce, given their economics, sustainability, and other benefits. Taller and taller towers are being built everywhere in the world. However, the increased frequency of multihazard disasters makes it challenging to balance resilient and sustainable construction. Accordingly, it is essential to understand the behavior of such structures under multihazard loadings, in order to apply such knowledge to design. The results obtained from the dynamic analysis of the two different high-rise buildings (54-story and 76-story buildings) investigated in the current study indicate that earthquake loads excite higher modes that produce lower interstory drift, compared to wind loads, but higher accelerations that occur for a shorter time. Wind-induced accelerations may have comfort and serviceability concerns, while excessive interstory drifts can cause security issues. The results also show that high-rise and slender buildings designed for wind may be safe under moderate earthquake loads, regarding the main force resisting system. Nevertheless, nonstructural components may present a significant percentage of loss exposure of buildings to earthquakes due to higher floor acceleration. Consequently, appropriate damping/control techniques for tall buildings are recommended for mitigation under multihazard.

  8. Computer program for high pressure real gas effects

    Science.gov (United States)

    Johnson, R. C.

    1969-01-01

    Computer program obtains the real-gas isentropic flow functions and thermodynamic properties of gases for which the equation of state is known. The program uses FORTRAN 4 subroutines which were designed for calculations of nitrogen and helium. These subroutines are easily modified for calculations of other gases.

  9. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    Science.gov (United States)

    Hadjidoukas, P. E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-03-01

    We present Π4U, an extensible framework, for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models, that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.
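
    The usage pattern such a framework supports (many independent, expensive likelihood evaluations dispatched in parallel during Bayesian calibration) can be imitated with a toy model. The sketch below runs plain random-walk Metropolis chains concurrently on a synthetic linear-regression problem; it is a generic illustration, not the TMCMC/CMA-ES machinery of Π4U, and all model and tuning parameters are illustrative.

      # Generic sketch of parallel Bayesian calibration: several Metropolis chains
      # run concurrently, each performing many likelihood evaluations of a toy
      # model. Not the TMCMC implementation of the framework described above.
      import numpy as np
      from concurrent.futures import ProcessPoolExecutor

      DATA_X = np.linspace(0, 1, 20)
      TRUE_THETA = (2.0, -1.0)
      RNG = np.random.default_rng(3)
      DATA_Y = TRUE_THETA[0] * DATA_X + TRUE_THETA[1] + RNG.normal(0, 0.1, DATA_X.size)


      def log_posterior(theta):
          """Gaussian likelihood for a toy linear model plus a wide normal prior."""
          a, b = theta
          resid = DATA_Y - (a * DATA_X + b)
          return -0.5 * np.sum(resid ** 2) / 0.1 ** 2 - 0.5 * (a ** 2 + b ** 2) / 100.0


      def run_chain(seed, steps=4000, scale=0.05):
          rng = np.random.default_rng(seed)
          theta = rng.normal(0, 1, 2)
          lp = log_posterior(theta)
          samples = []
          for _ in range(steps):
              prop = theta + rng.normal(0, scale, 2)
              lp_prop = log_posterior(prop)
              if np.log(rng.uniform()) < lp_prop - lp:        # accept/reject
                  theta, lp = prop, lp_prop
              samples.append(theta.copy())
          return np.array(samples[steps // 2:])               # discard burn-in


      if __name__ == "__main__":
          with ProcessPoolExecutor() as pool:                 # chains in parallel
              chains = list(pool.map(run_chain, range(4)))
          posterior = np.vstack(chains)
          print("posterior mean:", np.round(posterior.mean(axis=0), 2),
                "(true:", TRUE_THETA, ")")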

  10. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    Energy Technology Data Exchange (ETDEWEB)

    Hadjidoukas, P.E.; Angelikopoulos, P. [Computational Science and Engineering Laboratory, ETH Zürich, CH-8092 (Switzerland); Papadimitriou, C. [Department of Mechanical Engineering, University of Thessaly, GR-38334 Volos (Greece); Koumoutsakos, P., E-mail: petros@ethz.ch [Computational Science and Engineering Laboratory, ETH Zürich, CH-8092 (Switzerland)

    2015-03-01

    We present Π4U, an extensible framework, for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models, that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.

  11. Blood Volume, Plasma Volume and Circulation Time in a High-Energy-Demand Teleost, the Yellowfin Tuna (Thunnus Albacares)

    DEFF Research Database (Denmark)

    Brill, R.W.; Cousins, K.L.; Jones, D.R.

    1998-01-01

    …tuna, circulation time is approximately 0.4 min (47 ml kg-1/115 ml min-1 kg-1) compared with 1.3 min (46 ml kg-1/35 ml min-1 kg-1) in yellowtail (Seriola quinqueradiata) and 1.9 min (35 ml kg-1/18 ml min-1 kg-1) in rainbow trout (Oncorhynchus mykiss). In air-breathing vertebrates, high metabolic rates… are necessarily correlated with short circulation times. Our data are the first to imply that a similar relationship occurs in fishes…

  12. On-Demand Single Photons with High Extraction Efficiency and Near-Unity Indistinguishability from a Resonantly Driven Quantum Dot in a Micropillar.

    Science.gov (United States)

    Ding, Xing; He, Yu; Duan, Z-C; Gregersen, Niels; Chen, M-C; Unsleber, S; Maier, S; Schneider, Christian; Kamp, Martin; Höfling, Sven; Lu, Chao-Yang; Pan, Jian-Wei

    2016-01-15

    Scalable photonic quantum technologies require on-demand single-photon sources with simultaneously high levels of purity, indistinguishability, and efficiency. These key features, however, have only been demonstrated separately in previous experiments. Here, by s-shell pulsed resonant excitation of a Purcell-enhanced quantum dot-micropillar system, we deterministically generate resonance fluorescence single photons which, at π pulse excitation, have an extraction efficiency of 66%, single-photon purity of 99.1%, and photon indistinguishability of 98.5%. Such a single-photon source for the first time combines the features of high efficiency and near-perfect levels of purity and indistinguishabilty, and thus opens the way to multiphoton experiments with semiconductor quantum dots.

  13. Solvent-Assisted Metal Metathesis: A Highly Efficient and Versatile Route towards Synthetically Demanding Chromium Metal-Organic Frameworks.

    Science.gov (United States)

    Wang, Jun-Hao; Zhang, Ying; Li, Mian; Yan, Shu; Li, Dan; Zhang, Xian-Ming

    2017-06-01

    Chromium(III)-based metal-organic frameworks (Cr-MOFs) are very attractive in a wide range of investigations because of their robustness and high porosity. However, reports on Cr-MOFs are scarce owing to the difficulties in their direct synthesis. Recently developed postsynthetic routes to obtain Cr-MOFs suffered from complicated procedures and a lack of general applicability. Herein, we report a highly efficient and versatile strategy, namely solvent-assisted metal metathesis, to obtain Cr-MOFs from a variety of Fe(III) -MOFs, including several well-known MOFs and a newly synthesized one, through judicious selection of a coordinating solvent. The versatility of this strategy was demonstrated by producing Cr-MIL-100, Cr-MIL-142A/C, Cr-PCN-333, and Cr-PCN-600 from their Fe(III) analogues and Cr-SXU-1 from a newly synthesized MOF precursor, Fe-SXU-1, in acetone as the solvent under very mild conditions. We have thus developed a general approach for the preparation of robust Cr-MOFs, which are difficult to synthesize by direct methods. © 2017 Wiley-VCH Verlag GmbH & Co. KGaA, Weinheim.

  14. The demanding attention of tuberculosis in allogeneic hematopoietic stem cell transplantation recipients: High incidence compared with general population

    Science.gov (United States)

    Lee, Hyo-Jin; Lee, Dong-Gun; Choi, Su-Mi; Park, Sun Hee; Cho, Sung-Yeon; Choi, Jae-Ki; Kim, Si-Hyun; Choi, Jung-Hyun; Yoo, Jin-Hong; Cho, Byung-Sik; Eom, Ki-Seong; Lee, Seok; Kim, Yoo-Jin; Kim, Hee-Je; Min, Chang-Ki; Kim, Dong-Wook; Lee, Jong-Wook; Min, Woo-Sung; Jung, Jung Im

    2017-01-01

    Background The risk of developing tuberculosis (TB) in allogeneic hematopoietic stem cell transplantation (HSCT) recipients is expected to be relatively high in an intermediate TB burden country. This single-center retrospective study was conducted to investigate risk factors and the incidence of TB after allogeneic HSCT. Methods From January 2004 to March 2011, 845 adult patients were enrolled. Starting April 2009, patients were given isoniazid (INH) prophylaxis based on interferon-γ release assay results. The incidence of TB was analyzed before and after April 2009, and compared it with that of the general population in Korea. Results TB was diagnosed in 21 (2.49%) of the 845 allogeneic HSCT patients. The median time to the development of TB was 386 days after transplantation (range, 49–886). Compared with the general population, the standardized incidence ratio of TB was 9.10 (95% CI; 5.59–14.79). Extensive chronic graft-versus-host disease (GVHD) was associated with the development of TB (P = 0.003). Acute GVHD, conditioning regimen with total body irradiation and conditioning intensity were not significantly related. INH prophylaxis did not reduce the incidence of TB (P = 0.548). Among 21 TB patients, one patient had INH prophylaxis. Conclusion Allogeneic HSCT recipients especially those who suffer from extensive chronic GVHD are at a high risk of developing TB. INH prophylaxis did not statistically change the incidence of TB, however, further well-designed prospective studies are needed. PMID:28278166

  15. Achieving High Performance Distributed System: Using Grid, Cluster and Cloud Computing

    Directory of Open Access Journals (Sweden)

    Sunil Kr Singh

    2015-02-01

    Full Text Available To increase the efficiency of any task, we require a system that provides high performance along with flexibility and cost efficiency for the user. Distributed computing, as we are all aware, has become very popular over the past decade. Distributed computing has three major types, namely cluster, grid and cloud. In order to develop a high performance distributed system, we need to utilize all three of the above-mentioned types of computing. In this paper, we first give an introduction to all three types of distributed computing. Subsequently, examining them, we explore trends in computing and green sustainable computing to enhance the performance of a distributed system. Finally, presenting the future scope, we conclude the paper by suggesting a path to achieve a green high performance distributed system using cluster, grid and cloud computing.

  16. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    CERN Document Server

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2014-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost- efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  17. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    Science.gov (United States)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost- efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  18. High-Bandwidth Tactical-Network Data Analysis in a High-Performance-Computing (HPC) Environment: Device Status Data

    Science.gov (United States)

    2015-09-01

    5.1.1 Basic Components: The Hydra data processing framework provides an object-oriented hierarchy for organizing data processing within an HPC… (ARL-CR-0780 ● SEP 2015 ● US Army Research Laboratory)

  19. High resolution mapping of traits related to whole-plant transpiration under increasing evaporative demand in wheat.

    Science.gov (United States)

    Schoppach, Rémy; Taylor, Julian D; Majerus, Elisabeth; Claverie, Elodie; Baumann, Ute; Suchecki, Radoslaw; Fleury, Delphine; Sadok, Walid

    2016-04-01

    Atmospheric vapor pressure deficit (VPD) is a key component of drought and has a strong influence on yields. Whole-plant transpiration rate (TR) response to increasing VPD has been linked to drought tolerance in wheat, but because of its challenging phenotyping, its genetic basis remains unexplored. Further, the genetic control of other key traits linked to daytime TR such as leaf area, stomata densities and - more recently - nocturnal transpiration remains unknown. Considering the presence of wheat phenology genes that can interfere with drought tolerance, the aim of this investigation was to identify at an enhanced resolution the genetic basis of the above traits while investigating the effects of phenology genes Ppd-D1 and Ppd-B1. Virtually all traits were highly heritable (heritabilities from 0.61 to 0.91) and a total of 68 mostly trait-specific QTL were detected. Six QTL were identified for TR response to VPD, with one QTL (QSLP.ucl-5A) individually explaining 25.4% of the genetic variance. This QTL harbored several genes previously reported to be involved in ABA signaling, interaction with DREB2A and root hydraulics. Surprisingly, nocturnal TR and stomata densities on both leaf sides were characterized by highly specific and robust QTL. In addition, negative correlations were found between TR and leaf area suggesting trade-offs between these traits. Further, Ppd-D1 had strong but opposite effects on these traits, suggesting an involvement in this trade-off. Overall, these findings revealed novel genetic resources while suggesting a more direct role of phenology genes in enhancing wheat drought tolerance.

  20. Allocating Tactical High-Performance Computer (HPC) Resources to Offloaded Computation in Battlefield Scenarios

    Science.gov (United States)

    2013-12-01

    Offloading solutions such as Cuckoo (12), MAUI (13), COMET (14), and ThinkAir (15) offload applications via Wi-Fi or 3G networks to servers or cloud resources...

  1. Computational study of developing high-quality decision trees

    Science.gov (United States)

    Fu, Zhiwei

    2002-03-01

    Recently, decision tree algorithms have been widely used in dealing with data mining problems to find out valuable rules and patterns. However, scalability, accuracy and efficiency are significant concerns regarding how to effectively deal with large and complex data sets in the implementation. In this paper, we propose an innovative machine learning approach (we call our approach GAIT), combining genetic algorithm, statistical sampling, and decision tree, to develop intelligent decision trees that can alleviate some of these problems. We design our computational experiments and run GAIT on three different data sets (namely Socio-Olympic data, Westinghouse data, and FAA data) to test its performance against a standard decision tree algorithm, a neural network classifier, and a statistical discriminant technique, respectively. The computational results show that our approach substantially outperforms the standard decision tree algorithm at lower sampling levels, and achieves significantly better results with less effort than both the neural network and discriminant classifiers.
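
    The general recipe described above (grow candidate trees on small statistical samples, then use an evolutionary-style selection to keep the fittest) can be sketched as follows. This is not the authors' GAIT code: it assumes scikit-learn, uses a synthetic dataset, and omits the crossover and mutation operators a full genetic algorithm would apply.

```python
# Sketch of the general idea behind GAIT-style tree induction: grow many
# decision trees on small random samples and keep the fittest ones.
# This is NOT the authors' implementation; dataset and parameters are invented.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

def random_tree(sample_frac=0.05):
    """Fit a tree on a small random subsample (the 'statistical sampling' step)."""
    idx = rng.choice(len(X_tr), size=int(sample_frac * len(X_tr)), replace=False)
    return DecisionTreeClassifier(max_depth=5).fit(X_tr[idx], y_tr[idx])

def fitness(tree):
    """Validation accuracy plays the role of the GA fitness function."""
    return accuracy_score(y_val, tree.predict(X_val))

# "Population" of candidate trees; selection keeps the best of each round.
population = [random_tree() for _ in range(30)]
for _ in range(5):                       # a handful of selection rounds
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]          # selection step
    population = survivors + [random_tree() for _ in range(20)]  # new candidates

best = max(population, key=fitness)
print("best validation accuracy:", round(fitness(best), 3))
```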

  2. High performance stream computing for particle beam transport simulations

    Energy Technology Data Exchange (ETDEWEB)

    Appleby, R; Bailey, D; Higham, J; Salt, M [School of Physics and Astronomy, University of Manchester, Oxford Road, Manchester, M13 9PL (United Kingdom)], E-mail: Robert.Appleby@manchester.ac.uk, E-mail: David.Bailey-2@manchester.ac.uk

    2008-07-15

    Understanding modern particle accelerators requires simulating charged particle transport through the machine elements. These simulations can be very time consuming due to the large number of particles and the need to consider many turns of a circular machine. Stream computing offers an attractive way to dramatically improve the performance of such simulations by calculating the simultaneous transport of many particles using dedicated hardware. Modern Graphics Processing Units (GPUs) are powerful and affordable stream computing devices. The results of simulations of particle transport through the booster-to-storage-ring transfer line of the DIAMOND synchrotron light source using an NVidia GeForce 7900 GPU are compared to the standard transport code MAD. It is found that particle transport calculations are suitable for stream processing and large performance increases are possible. The accuracy and potential speed gains are compared and the prospects for future work in the area are discussed.

  3. High performance stream computing for particle beam transport simulations

    Science.gov (United States)

    Appleby, R.; Bailey, D.; Higham, J.; Salt, M.

    2008-07-01

    Understanding modern particle accelerators requires simulating charged particle transport through the machine elements. These simulations can be very time consuming due to the large number of particles and the need to consider many turns of a circular machine. Stream computing offers an attractive way to dramatically improve the performance of such simulations by calculating the simultaneous transport of many particles using dedicated hardware. Modern Graphics Processing Units (GPUs) are powerful and affordable stream computing devices. The results of simulations of particle transport through the booster-to-storage-ring transfer line of the DIAMOND synchrotron light source using an NVidia GeForce 7900 GPU are compared to the standard transport code MAD. It is found that particle transport calculations are suitable for stream processing and large performance increases are possible. The accuracy and potential speed gains are compared and the prospects for future work in the area are discussed.
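
    The data-parallel structure that makes this problem attractive for stream processors can be illustrated with plain NumPy: every particle undergoes the same sequence of linear transfer-map operations, so a whole bunch can be transported with one matrix product. The lattice below is an arbitrary toy line, not the DIAMOND booster-to-storage-ring transfer line, and nothing here touches a GPU.

```python
# Illustrative: linear transport of many particles through drift and
# quadrupole elements represented as 2x2 transfer matrices acting on (x, x').
# Element strengths and lengths are arbitrary; the point is the data-parallel
# pattern that maps well onto stream/GPU hardware, not the paper's code.
import numpy as np

def drift(L):
    return np.array([[1.0, L], [0.0, 1.0]])

def quad(k, L):
    """Thin-lens quadrupole kick of integrated strength k*L."""
    return np.array([[1.0, 0.0], [-k * L, 1.0]])

# A toy transfer line: drift, focusing quad, drift, defocusing quad, drift.
line = [drift(2.0), quad(+0.8, 0.3), drift(1.5), quad(-0.8, 0.3), drift(2.0)]
line_map = np.linalg.multi_dot(line[::-1])   # matrices compose right-to-left

# One million particles, each a column (x [m], x' [rad]).
particles = np.random.default_rng(1).normal(
    scale=[1e-3, 1e-4], size=(1_000_000, 2)).T

# The whole bunch is transported with a single matrix product, the same
# SIMD-friendly operation a stream processor exploits.
transported = line_map @ particles
print("rms x after the line:", transported[0].std())
```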

  4. A High Performance SOAP Engine for Grid Computing

    Science.gov (United States)

    Wang, Ning; Welzl, Michael; Zhang, Liang

    Web Service technology still has many defects that make its usage for Grid computing problematic, most notably the low performance of the SOAP engine. In this paper, we develop a novel SOAP engine called SOAPExpress, which adopts two key techniques for improving processing performance: SCTP data transport and dynamic early binding based data mapping. Experimental results show a significant and consistent performance improvement of SOAPExpress over Apache Axis.

  5. Student motivation in a high school science laboratory: The impact of computers and other technologies on young adolescent physics students

    Science.gov (United States)

    Clark, Stephen Allan

    The impact of technology (including computers and probes, low-friction carts, video cameras, VCRs and electronic balances) on the motivation of adolescent science students was investigated using a naturalistic case study of college preparatory ninth grade physics classes at a comprehensive high school in the southeastern United States. The students were positively affected by the use of computer technology as compared to other "low tech" labs. The non-computer technologies had little motivational effect on the students. The most important motivational effect was the belief among the students that they could successfully operate the equipment and gather meaningful results. At times, the students spent more cognitive energy on performing the experiment than on learning the physics. This was especially true when microcomputer-based labs were used. When the technology led to results that were clear to the students and displayed in a manner that could be easily interpreted, they were generally receptive and motivated to persist at the task. Many students reported being especially motivated when a computer was used to gather the data because they "just liked computers." Furthermore, qualitative evidence suggested that they had learned the physics concept they were working on. This is in close agreement with the conceptual change model of learning in that students are most likely to change their prior conceptions when the new idea is plausible (the technology makes it so), intelligible (real time graphing, actual light rays), and fruitful (the new idea explains what they actually see). However, many of the microcomputer-based laboratory (MBL) activities and "high tech" labs were too unstructured, leaving students bewildered, confused and unmotivated. To achieve maximum motivational effects from the technology, it was necessary to reduce the cognitive demand on the students so they could concentrate on the data gathered rather than the operation of the equipment.

  6. A new methodology for automating acoustic emission detection of metallic fatigue fractures in highly demanding aerospace environments: An overview

    Science.gov (United States)

    Holford, Karen M.; Eaton, Mark J.; Hensman, James J.; Pullin, Rhys; Evans, Sam L.; Dervilis, Nikolaos; Worden, Keith

    2017-04-01

    The acoustic emission (AE) phenomenon has many attributes that make it desirable as a structural health monitoring or non-destructive testing technique, including the capability to continuously and globally monitor large structures using a sparse sensor array and with no dependency on defect size. However, AE monitoring is yet to fulfil its true potential, due mainly to limitations in location accuracy and signal characterisation that often arise in complex structures with high levels of background noise. Furthermore, the technique has been criticised for a lack of quantitative results and the large amount of operator interpretation required during data analysis. This paper begins by introducing the challenges faced in developing an AE based structural health monitoring system and then gives a review of previous progress made in addressing these challenges. Subsequently an overview of a novel methodology for automatic detection of fatigue fractures in complex geometries and noisy environments is presented, which combines a number of signal processing techniques to address the current limitations of AE monitoring. The technique was developed for monitoring metallic landing gear components during pre-flight certification testing and results are presented from a full-scale steel landing gear component undergoing fatigue loading. Fracture onset was successfully identified automatically at 49,000 fatigue cycles prior to final failure (validated by the use of dye penetrant inspection) and the fracture position was located to within 10 mm of the actual location.

  7. Emotional labor demands and compensating wage differentials.

    Science.gov (United States)

    Glomb, Theresa M; Kammeyer-Mueller, John D; Rotundo, Maria

    2004-08-01

    The concept of emotional labor demands and their effects on workers has received considerable attention in recent years, with most studies concentrating on stress, burnout, satisfaction, or other affective outcomes. This study extends the literature by examining the relationship between emotional labor demands and wages at the occupational level. Theories describing the expected effects of job demands and working conditions on wages are described. Results suggest that higher levels of emotional labor demands are associated with lower wage rates for jobs low in cognitive demands and with higher wage rates for jobs high in cognitive demands. Implications of these findings are discussed. (c) 2004 APA

  8. Study of application technology of ultra-high speed computer to the elucidation of complex phenomena

    Energy Technology Data Exchange (ETDEWEB)

    Sekiguchi, Tomotsugu [Electrotechnical Lab., Tsukuba, Ibaraki (Japan)

    1996-06-01

    As a first step toward applying ultra-high speed computers to the elucidation of complex phenomena, the basic design of a numerical information library for a decentralized computer network is described. The system makes it possible to construct an efficient application environment for ultra-high speed computers that scales across different computing systems. We named the system Ninf (Network Information Library for High Performance Computing). The library technology is summarized as follows: use of the library in a distributed environment, numeric constants, retrieval of values, a library of special functions, a computing library, the Ninf library interface, the Ninf remote library, and registration. With this system, users can run programs that concentrate numerical-analysis expertise and deliver high precision, reliability and speed. (S.Y.)
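
    The remote-library idea can be illustrated with a small sketch that is emphatically not the Ninf interface itself: a server registers a numerical routine and a client calls it over the network as if it were a local library function, here using only Python's standard xmlrpc modules.

```python
# Sketch of the remote-numerical-library idea (NOT the actual Ninf interface):
# a server exposes a numerical routine, and a client calls it over the network
# as if it were a local library function. Uses only the Python standard library.
from xmlrpc.server import SimpleXMLRPCServer
import threading, xmlrpc.client

def linsolve(a, b):
    """Toy 'library routine': solve a 2x2 linear system by Cramer's rule."""
    (a11, a12), (a21, a22) = a
    det = a11 * a22 - a12 * a21
    return [(b[0] * a22 - a12 * b[1]) / det,
            (a11 * b[1] - b[0] * a21) / det]

server = SimpleXMLRPCServer(("localhost", 8765), logRequests=False)
server.register_function(linsolve, "linsolve")
threading.Thread(target=server.serve_forever, daemon=True).start()

# Client side: the remote routine is invoked like an ordinary function call.
proxy = xmlrpc.client.ServerProxy("http://localhost:8765")
print(proxy.linsolve([[3, 1], [1, 2]], [9, 8]))   # -> [2.0, 3.0]
```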

  9. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  10. A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers

    Energy Technology Data Exchange (ETDEWEB)

    Potok, Thomas E [ORNL; Schuman, Catherine D [ORNL; Young, Steven R [ORNL; Patton, Robert M [ORNL; Spedalieri, Federico [University of Southern California, Information Sciences Institute; Liu, Jeremy [University of Southern California, Information Sciences Institute; Yao, Ke-Thia [University of Southern California, Information Sciences Institute; Rose, Garrett [University of Tennessee (UT); Chakma, Gangotree [University of Tennessee (UT)

    2016-01-01

    Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphical processing units (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable using a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights, while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware. This represents a new capability that is not feasible with current von Neumann architecture. It potentially enables the ability to solve very complicated problems unsolvable with current computing technologies.
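
    For contrast with the complex topologies discussed above, a conventional layered CNN of the kind that trains readily on GPU systems can be written in a few lines. This is a generic baseline assuming TensorFlow/Keras, not the authors' model; every hyperparameter below is arbitrary.

```python
# A conventional layered CNN (densely connected layers, no intra-layer
# connections) of the kind the paper treats as the GPU-trainable baseline.
# Not the authors' model; all hyperparameters are arbitrary. Requires TensorFlow.
import tensorflow as tf

(x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.mnist.load_data()
x_tr = x_tr[..., None] / 255.0
x_te = x_te[..., None] / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation="relu", input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, (3, 3), activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_tr, y_tr, epochs=1, batch_size=128, validation_data=(x_te, y_te))
```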

  11. Fog Computing: Focusing on Mobile Users at the Edge

    OpenAIRE

    Luan, Tom H.; Gao, Longxiang; Li, Zhi; XIANG, YANG; Wei, Guiyi; Sun, Limin

    2015-01-01

    With smart devices, particularly smartphones, becoming our everyday companions, ubiquitous mobile Internet and computing applications pervade people's daily lives. With surging demand for high-quality mobile services anywhere and at any time, how to address ubiquitous user demand and accommodate the explosive growth of mobile traffic is the key issue for next-generation mobile networks. Fog computing is a promising solution towards this goal. Fog computing extends cloud computing by providing...

  12. Implementing Simulation Design of Experiments and Remote Execution on a High Performance Computing Cluster

    Science.gov (United States)

    2007-09-01

    Kennard, R. W. & Stone, L. A. (1969). Computer Aided Design of Experiments. Technometrics, 11(1), 137-148. Kleijnen, J. P. (2003). A user's guide to the... Thesis: Implementing Simulation Design of Experiments and Remote Execution on a High Performance Computing Cluster, Adam J. Peters, September 2007.

  13. High Performance Computing Facility Operational Assessment, CY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Bland, Arthur S Buddy [ORNL; Boudwin, Kathlyn J. [ORNL; Hack, James J [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL; Hudson, Douglas L [ORNL

    2012-02-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these we report the 300 in this review that are consistent with guidance provided. Scientific achievements by OLCF users cut across all range scales from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to do billion-cell CFD calculations to develop shock wave compression turbo machinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbo machinery to learn what efficiencies the traditional steady flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation

  14. Computation of nonlinear water waves with a high-order Boussinesq model

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Madsen, Per A.; Bingham, Harry

    2005-01-01

    -crested waves in shallow/deep water, resulting in hexagonal/rectangular surface patterns; crescent waves, resulting from unstable perturbations of plane progressive waves; and highly-nonlinear wave-structure interactions. The emphasis is on physically demanding problems, and in each case qualitative and (when...

  15. Energy-efficient high performance computing measurement and tuning

    CERN Document Server

    Laros, James H., III; Kelly, Sue

    2012-01-01

    In this work, the unique power measurement capabilities of the Cray XT architecture were exploited to gain an understanding of power and energy use, and the effects of tuning both CPU and network bandwidth. Modifications were made to deterministically halt cores when idle. Additionally, capabilities were added to alter operating P-state. At the application level, an understanding of the power requirements of a range of important DOE/NNSA production scientific computing applications running at large scale is gained by simultaneously collecting current and voltage measurements on the hosting nodes.

  16. High-performance computational solutions in protein bioinformatics

    CERN Document Server

    Mrozek, Dariusz

    2014-01-01

    Recent developments in computer science enable algorithms previously perceived as too time-consuming to now be efficiently used for applications in bioinformatics and life sciences. This work focuses on proteins and their structures, protein structure similarity searching at main representation levels and various techniques that can be used to accelerate similarity searches. Divided into four parts, the first part provides a formal model of 3D protein structures for functional genomics, comparative bioinformatics and molecular modeling. The second part focuses on the use of multithreading for

  17. Using High Performance Computing to Support Water Resource Planning

    Energy Technology Data Exchange (ETDEWEB)

    Groves, David G. [RAND Corporation, Santa Monica, CA (United States); Lembert, Robert J. [RAND Corporation, Santa Monica, CA (United States); May, Deborah W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Leek, James R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Syme, James [RAND Corporation, Santa Monica, CA (United States)

    2015-10-22

    In recent years, decision support modeling has embraced deliberation-with-analysis, an iterative process in which decisionmakers come together with experts to evaluate a complex problem and alternative solutions in a scientifically rigorous and transparent manner. Simulation modeling supports decisionmaking throughout this process; visualizations enable decisionmakers to assess how proposed strategies stand up over time in uncertain conditions. But running these simulation models on standard computers can be slow. This, in turn, can slow the entire decisionmaking process, interrupting valuable interaction between decisionmakers and analytics.

  18. High-performance computational condensed-matter physics in the cloud

    Science.gov (United States)

    Rehr, J. J.; Svec, L.; Gardner, J. P.; Prange, M. P.

    2009-03-01

    We demonstrate the feasibility of high performance scientific computation in condensed-matter physics using cloud computers as an alternative to traditional computational tools. The availability of these large, virtualized pools of compute resources raises the possibility of a new compute paradigm for scientific research with many advantages. For research groups, cloud computing provides convenient access to reliable, high performance clusters and storage, without the need to purchase and maintain sophisticated hardware. For developers, virtualization allows scientific codes to be pre-installed on machine images, facilitating control over the computational environment. Detailed tests are presented for the parallelized versions of the electronic structure code SIESTA (J. Soler et al., J. Phys.: Condens. Matter 14, 2745 (2002)) and for the x-ray spectroscopy code FEFF (A. Ankudinov et al., Phys. Rev. B 65, 104107 (2002)), including CPU, network, and I/O performance, using the Amazon EC2 Elastic Cloud.

  19. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and address solutions to, the main challenges facing CMS computing. The lack of manpower is particul...

  20. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  1. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  2. Computer Science in High School Graduation Requirements. ECS Education Trends (Updated)

    Science.gov (United States)

    Zinth, Jennifer

    2016-01-01

    Allowing high school students to fulfill a math or science high school graduation requirement via a computer science credit may encourage more students to pursue computer science coursework. This Education Trends report is an update to the original report released in April 2015 and explores state policies that allow or require districts to apply…

  3. VLab: A Science Gateway for Distributed First Principles Calculations in Heterogeneous High Performance Computing Systems

    Science.gov (United States)

    da Silveira, Pedro Rodrigo Castro

    2014-01-01

    This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…

  4. The Relationship between Utilization of Computer Games and Spatial Abilities among High School Students

    Science.gov (United States)

    Motamedi, Vahid; Yaghoubi, Razeyah Mohagheghyan

    2015-01-01

    This study aimed at investigating the relationship between computer game use and spatial abilities among high school students. The sample consisted of 300 high school male students selected through multi-stage cluster sampling. Data gathering tools consisted of a researcher-made questionnaire (to collect information on computer game usage) and the…

  5. Computer Self-Efficacy among Senior High School Teachers in Ghana and the Functionality of Demographic Variables on Their Computer Self-Efficacy

    Science.gov (United States)

    Sarfo, Frederick Kwaku; Amankwah, Francis; Konin, Daniel

    2017-01-01

    The study is aimed at investigating 1) the level of computer self-efficacy among public senior high school (SHS) teachers in Ghana and 2) the functionality of teachers' age, gender, and computer experiences on their computer self-efficacy. Four hundred and seven (407) SHS teachers were used for the study. The "Computer Self-Efficacy"…

  6. THE FAILURE OF TCP IN HIGH-PERFORMANCE COMPUTATIONAL GRIDS

    Energy Technology Data Exchange (ETDEWEB)

    W. FENG; ET AL

    2000-08-01

    Distributed computational grids depend on TCP to ensure reliable end-to-end communication between nodes across the wide-area network (WAN). Unfortunately, TCP performance can be abysmal even when buffers on the end hosts are manually optimized. Recent studies blame the self-similar nature of aggregate network traffic for TCP's poor performance because such traffic is not readily amenable to statistical multiplexing in the Internet, and hence computational grids. In this paper we identify a source of self-similarity previously ignored, a source that is readily controllable--TCP. Via an experimental study, we examine the effects of the TCP stack on network traffic using different implementations of TCP. We show that even when aggregate application traffic ought to smooth out as more applications' traffic is multiplexed, TCP induces burstiness into the aggregate traffic load, thus adversely impacting network performance. Furthermore, our results indicate that TCP performance will worsen as WAN speeds continue to increase.
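
    One standard way to quantify the burstiness described above is a variance-time analysis: aggregate the traffic over increasingly long blocks and observe how slowly the variance decays compared with the 1/m decay expected for independent traffic. The sketch below applies this to a synthetic Poisson trace; it is a generic diagnostic, not the instrumentation used in the study.

```python
# Variance-time analysis of a (synthetic) traffic trace: for self-similar
# traffic the variance of aggregated counts decays more slowly than 1/m.
# This is a generic diagnostic, not the instrumentation used in the paper.
import numpy as np

rng = np.random.default_rng(0)
# Synthetic per-millisecond byte counts; a real study would use packet traces.
trace = rng.poisson(lam=1500, size=200_000).astype(float)

def aggregated_variance(x, m):
    """Variance of the trace after averaging over non-overlapping blocks of size m."""
    n = len(x) // m
    blocks = x[:n * m].reshape(n, m).mean(axis=1)
    return blocks.var()

for m in (1, 10, 100, 1000):
    v = aggregated_variance(trace, m)
    print(f"block size {m:5d}: variance {v:10.2f} "
          f"(independent-traffic reference {trace.var() / m:10.2f})")
# A log(variance) vs log(m) slope shallower than -1 indicates burstiness that
# survives aggregation, i.e. self-similar behaviour.
```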

  7. Clinical phenotyping in selected national networks: demonstrating the need for high-throughput, portable, and computational methods.

    Science.gov (United States)

    Richesson, Rachel L; Sun, Jimeng; Pathak, Jyotishman; Kho, Abel N; Denny, Joshua C

    2016-07-01

    The combination of phenomic data from electronic health records (EHR) and clinical data repositories with dense biological data has enabled genomic and pharmacogenomic discovery, a first step toward precision medicine. Computational methods for the identification of clinical phenotypes from EHR data will advance our understanding of disease risk and drug response, and support the practice of precision medicine on a national scale. Based on our experience within three national research networks, we summarize the broad approaches to clinical phenotyping and highlight the important role of these networks in the progression of high-throughput phenotyping and precision medicine. We provide supporting literature in the form of a non-systematic review. The practice of clinical phenotyping is evolving to meet the growing demand for scalable, portable, and data driven methods and tools. The resources required for traditional phenotyping algorithms from expert defined rules are significant. In contrast, machine learning approaches that rely on data patterns will require fewer clinical domain experts and resources. Machine learning approaches that generate phenotype definitions from patient features and clinical profiles will result in truly computational phenotypes, derived from data rather than experts. Research networks and phenotype developers should cooperate to develop methods, collaboration platforms, and data standards that will enable computational phenotyping and truly modernize biomedical research and precision medicine. Copyright © 2016 Elsevier B.V. All rights reserved.

  8. Nonlinear dynamics of high-power ultrashort laser pulses: exaflop computations on a laboratory computer station and subcycle light bullets

    Science.gov (United States)

    Voronin, A. A.; Zheltikov, A. M.

    2016-09-01

    The propagation of high-power ultrashort light pulses involves intricate nonlinear spatio-temporal dynamics where various spectral-temporal field transformation effects are strongly coupled to the beam dynamics, which, in turn, varies from the leading to the trailing edge of the pulse. Analysis of this nonlinear dynamics, accompanied by spatial instabilities, beam breakup into multiple filaments, and unique phenomena leading to the generation of extremely short optical field waveforms, is equivalent in its computational complexity to a simulation of the time evolution of a few billion-dimensional physical system. Such an analysis requires exaflops of computational operations and is usually performed on high-performance supercomputers. Here, we present methods of physical modeling and numerical analysis that allow problems of this class to be solved on a laboratory computer boosted by a cluster of graphic accelerators. Exaflop computations performed with the application of these methods reveal new unique phenomena in the spatio-temporal dynamics of high-power ultrashort laser pulses. We demonstrate that unprecedentedly short light bullets can be generated as a part of that dynamics, providing optical field localization in both space and time through a delicate balance between dispersion and nonlinearity with simultaneous suppression of diffraction-induced beam divergence due to the joint effect of Kerr and ionization nonlinearities.

  9. Leveraging High Performance Computing for Managing Large and Evolving Data Collections

    Directory of Open Access Journals (Sweden)

    Ritu Arora

    2014-10-01

    Full Text Available The process of developing a digital collection in the context of a research project often involves a pipeline pattern during which data growth, data types, and data authenticity need to be assessed iteratively in relation to the different research steps and in the interest of archiving. Throughout a project’s lifecycle curators organize newly generated data while cleaning and integrating legacy data when it exists, and deciding what data will be preserved for the long term. Although these actions should be part of a well-oiled data management workflow, there are practical challenges in doing so if the collection is very large and heterogeneous, or is accessed by several researchers contemporaneously. There is a need for data management solutions that can help curators with efficient and on-demand analyses of their collection so that they remain well-informed about its evolving characteristics. In this paper, we describe our efforts towards developing a workflow to leverage open science High Performance Computing (HPC resources for routinely and efficiently conducting data management tasks on large collections. We demonstrate that HPC resources and techniques can significantly reduce the time for accomplishing critical data management tasks, and enable a dynamic archiving throughout the research process. We use a large archaeological data collection with a long and complex formation history as our test case. We share our experiences in adopting open science HPC resources for large-scale data management, which entails understanding usage of the open source HPC environment and training users. These experiences can be generalized to meet the needs of other data curators working with large collections.

  10. Limits to high-speed simulations of spiking neural networks using general-purpose computers

    Directory of Open Access Journals (Sweden)

    Friedemann Zenke

    2014-09-01

    Full Text Available To understand how the central nervous system performs computations using recurrent neuronal circuitry, simulations have become an indispensable tool for theoretical neuroscience. To study neuronal circuits and their ability to self-organize, increasing attention has been directed towards synaptic plasticity. In particular spike-timing-dependent plasticity (STDP) creates specific demands for simulations of spiking neural networks. On the one hand a high temporal resolution is required to capture the millisecond timescale of typical STDP windows. On the other hand network simulations have to evolve over hours up to days, to capture the timescale of long-term plasticity. To do this efficiently, fast simulation speed is the crucial ingredient rather than large neuron numbers. Using different medium-sized network models consisting of several thousands of neurons and off-the-shelf hardware, we compare the simulation speed of the simulators: Brian, NEST and Neuron as well as our own simulator Auryn. Our results show that real-time simulations of different plastic network models are possible in parallel simulations in which numerical precision is not a primary concern. Even so, the speed-up margin of parallelism is limited and boosting simulation speeds beyond one tenth of real-time is difficult. By profiling simulation code we show that the run times of typical plastic network simulations encounter a hard boundary. This limit is partly due to latencies in the inter-process communications and thus cannot be overcome by increased parallelism. Overall these results show that to study plasticity in medium-sized spiking neural networks, adequate simulation tools are readily available which run efficiently on small clusters. However, to run simulations substantially faster than real-time, special hardware is a prerequisite.
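
    The cost structure described above can be made concrete with a toy fixed-timestep leaky integrate-and-fire update loop; the parameters are arbitrary, no plasticity is implemented, and the timing printed at the end is only meant to show how per-step cost, not neuron count alone, bounds the achievable fraction of real time.

```python
# Minimal fixed-timestep leaky integrate-and-fire network update loop, timed,
# to illustrate why per-step cost dominates such simulations.
# Parameters are arbitrary; no STDP is implemented here.
import time
import numpy as np

rng = np.random.default_rng(0)
n, dt, tau, v_thresh, v_reset = 4000, 1e-4, 20e-3, 1.0, 0.0
w = rng.normal(0.0, 0.02, size=(n, n))        # dense random weights
v = rng.uniform(0.0, 1.0, size=n)             # membrane potentials

steps = 1000                                   # 0.1 s of biological time
t0 = time.perf_counter()
for _ in range(steps):
    spikes = v >= v_thresh
    v[spikes] = v_reset
    syn_input = w @ spikes.astype(float)       # synaptic drive from spikes
    v += dt / tau * (-v) + syn_input + 0.0012  # leak + input + constant drive
elapsed = time.perf_counter() - t0
print(f"{steps} steps for {n} neurons in {elapsed:.2f} s "
      f"({steps * dt / elapsed:.3f} x real time)")
```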

  11. Final report for "High performance computing for advanced national electric power grid modeling and integration of solar generation resources", LDRD Project No. 149016.

    Energy Technology Data Exchange (ETDEWEB)

    Reno, Matthew J.; Riehm, Andrew Charles; Hoekstra, Robert John; Munoz-Ramirez, Karina; Stamp, Jason Edwin; Phillips, Laurence R.; Adams, Brian M.; Russo, Thomas V.; Oldfield, Ron A.; McLendon, William Clarence, III; Nelson, Jeffrey Scott; Hansen, Clifford W.; Richardson, Bryan T.; Stein, Joshua S.; Schoenwald, David Alan; Wolfenbarger, Paul R.

    2011-02-01

    Design and operation of the electric power grid (EPG) relies heavily on computational models. High-fidelity, full-order models are used to study transient phenomena on only a small part of the network. Reduced-order dynamic and power flow models are used when analyses involving thousands of nodes are required due to the computational demands when simulating large numbers of nodes. The level of complexity of the future EPG will dramatically increase due to large-scale deployment of variable renewable generation, active load and distributed generation resources, adaptive protection and control systems, and price-responsive demand. High-fidelity modeling of this future grid will require significant advances in coupled, multi-scale tools and their use on high performance computing (HPC) platforms. This LDRD report demonstrates SNL's capability to apply HPC resources to these 3 tasks: (1) High-fidelity, large-scale modeling of power system dynamics; (2) Statistical assessment of grid security via Monte-Carlo simulations of cyber attacks; and (3) Development of models to predict variability of solar resources at locations where little or no ground-based measurements are available.

  12. Cache County Water Demand/Supply Model

    OpenAIRE

    Hughes, Trevor C.; Norby, Gregory J.; Thyagarajan, Laxman

    1996-01-01

    This report describes a municipal water demand forecasting model for use in areas of mixed rural and urban housing types. A series of residential demand functions were derived which forecast water demand based on the type and density of housing and season. Micro sampling techniques were used to correlate water use data and explanatory variable data for low, medium, and high density housing. The demand functions were...

  13. Low latency, high bandwidth data communications between compute nodes in a parallel computer

    Energy Technology Data Exchange (ETDEWEB)

    Blocksome, Michael A

    2014-04-01

    Methods, systems, and products are disclosed for data transfers between nodes in a parallel computer that include: receiving, by an origin DMA on an origin node, a buffer identifier for a buffer containing data for transfer to a target node; sending, by the origin DMA to the target node, a RTS message; transferring, by the origin DMA, a data portion to the target node using a memory FIFO operation that specifies one end of the buffer from which to begin transferring the data; receiving, by the origin DMA, an acknowledgement of the RTS message from the target node; and transferring, by the origin DMA in response to receiving the acknowledgement, any remaining data portion to the target node using a direct put operation that specifies the other end of the buffer from which to begin transferring the data, including initiating the direct put operation without invoking an origin processing core.

  14. Low latency, high bandwidth data communications between compute nodes in a parallel computer

    Energy Technology Data Exchange (ETDEWEB)

    Blocksome, Michael A

    2014-04-22

    Methods, systems, and products are disclosed for data transfers between nodes in a parallel computer that include: receiving, by an origin DMA on an origin node, a buffer identifier for a buffer containing data for transfer to a target node; sending, by the origin DMA to the target node, a RTS message; transferring, by the origin DMA, a data portion to the target node using a memory FIFO operation that specifies one end of the buffer from which to begin transferring the data; receiving, by the origin DMA, an acknowledgement of the RTS message from the target node; and transferring, by the origin DMA in response to receiving the acknowledgement, any remaining data portion to the target node using a direct put operation that specifies the other end of the buffer from which to begin transferring the data, including initiating the direct put operation without invoking an origin processing core.
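
    The two-phase transfer described in the abstract can be modelled in a few lines of pure Python: an eager portion is pushed from one end of the buffer before the RTS is acknowledged, and the remainder is written from the other end of the buffer once the acknowledgement arrives. This is only an illustration of the control flow; there is no real DMA engine, network, or processing core involved, and it is not the patented implementation.

```python
# Purely illustrative model of the two-phase transfer described in the
# abstract: an eager FIFO-style portion from the front of the buffer, then,
# once the RTS is acknowledged, the remainder written as a "direct put"
# starting from the other end. No real DMA or network hardware is involved.

EAGER_CHUNK = 4  # bytes pushed via the memory-FIFO path before the ACK arrives

def transfer(buffer: bytes) -> bytes:
    target = bytearray(len(buffer))

    # Phase 1: origin sends the RTS and immediately streams the first portion
    # from the front of the buffer (memory FIFO operation).
    eager = buffer[:EAGER_CHUNK]
    target[:len(eager)] = eager
    rts_acknowledged = True           # stand-in for the target's ACK of the RTS

    # Phase 2: on ACK, the remaining portion is delivered by a direct put that
    # fills the target buffer from the opposite end, so the phases never overlap.
    if rts_acknowledged and len(buffer) > EAGER_CHUNK:
        remainder = buffer[EAGER_CHUNK:]
        target[len(buffer) - len(remainder):] = remainder

    return bytes(target)

assert transfer(b"hello, parallel world") == b"hello, parallel world"
print("reassembled:", transfer(b"hello, parallel world"))
```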

  15. ClustalXeed: a GUI-based grid computation version for high performance and terabyte size multiple sequence alignment

    Directory of Open Access Journals (Sweden)

    Kim Taeho

    2010-09-01

    Full Text Available Abstract Background There is an increasing demand to assemble and align large-scale biological sequence data sets. The commonly used multiple sequence alignment programs are still limited in their ability to handle very large amounts of sequences because the system lacks a scalable high-performance computing (HPC) environment with a greatly extended data storage capacity. Results We designed ClustalXeed, a software system for multiple sequence alignment with incremental improvements over previous versions of the ClustalX and ClustalW-MPI software. The primary advantage of ClustalXeed over other multiple sequence alignment software is its ability to align a large family of protein or nucleic acid sequences. To solve the conventional memory-dependency problem, ClustalXeed uses both physical random access memory (RAM) and a distributed file-allocation system for distance matrix construction and pair-align computation. The computation efficiency of the disk-storage system was markedly improved by implementing an efficient load-balancing algorithm, called the "idle node-seeking task algorithm" (INSTA). The new editing option and the graphical user interface (GUI) provide ready access to a parallel-computing environment for users who seek fast and easy alignment of large DNA and protein sequence sets. Conclusions ClustalXeed can now compute a large volume of biological sequence data sets, which were not tractable in any other parallel or single MSA program. The main developments include: 1) the ability to tackle larger sequence alignment problems than possible with previous systems through markedly improved storage-handling capabilities; 2) implementing an efficient task load-balancing algorithm, INSTA, which improves overall processing times for multiple sequence alignment with input sequences of non-uniform length; and 3) support for both single PC and distributed cluster systems.
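
    A generic greedy scheduler in the spirit of an idle-node-seeking policy, assigning alignment tasks of very different costs to whichever worker becomes idle first, might look like the sketch below. It is a simplification for illustration, not the INSTA algorithm from the paper.

```python
# Generic greedy load balancing in the spirit of an "idle node seeking" policy:
# each pairwise-alignment task goes to whichever worker is idle soonest.
# This is a simplification for illustration, not the INSTA algorithm itself.
import heapq

def schedule(task_costs, n_workers):
    """Return the makespan and task assignments (longest tasks placed first)."""
    heap = [(0.0, w) for w in range(n_workers)]   # (time the worker becomes idle, id)
    heapq.heapify(heap)
    assignment = {}
    for task, cost in sorted(enumerate(task_costs), key=lambda t: -t[1]):
        idle_at, worker = heapq.heappop(heap)     # the soonest-idle worker
        assignment[task] = worker
        heapq.heappush(heap, (idle_at + cost, worker))
    makespan = max(t for t, _ in heap)
    return makespan, assignment

# Pairwise alignments of sequences with very different lengths => uneven costs.
costs = [len_a * len_b * 1e-6 for len_a, len_b in
         [(900, 850), (120, 300), (2000, 50), (400, 400), (1500, 1400), (60, 80)]]
makespan, plan = schedule(costs, n_workers=3)
print("makespan:", round(makespan, 3), "assignment:", plan)
```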

  16. High-throughput landslide modelling using computational grids

    Science.gov (United States)

    Wallace, M.; Metson, S.; Holcombe, L.; Anderson, M.; Newbold, D.; Brook, N.

    2012-04-01

    Landslides are an increasing problem in developing countries. Multiple landslides can be triggered by heavy rainfall resulting in loss of life, homes and critical infrastructure. Through computer simulation of individual slopes it is possible to predict the causes, timing and magnitude of landslides and estimate the potential physical impact. Geographical scientists at the University of Bristol have developed software that integrates a physically-based slope hydrology and stability model (CHASM) with an econometric model (QUESTA) in order to predict landslide risk over time. These models allow multiple scenarios to be evaluated for each slope, accounting for data uncertainties, different engineering interventions, risk management approaches and rainfall patterns. Individual scenarios can be computationally intensive, however each scenario is independent and so multiple scenarios can be executed in parallel. As more simulations are carried out the overhead involved in managing input and output data becomes significant. This is a greater problem if multiple slopes are considered concurrently, as is required both for landslide research and for effective disaster planning at national levels. There are two critical factors in this context: generated data volumes can be in the order of tens of terabytes, and greater numbers of simulations result in long total runtimes. Users of such models, in both the research community and in developing countries, need to develop a means for handling the generation and submission of landside modelling experiments, and the storage and analysis of the resulting datasets. Additionally, governments in developing countries typically lack the necessary computing resources and infrastructure. Consequently, knowledge that could be gained by aggregating simulation results from many different scenarios across many different slopes remains hidden within the data. To address these data and workload management issues, University of Bristol particle

  17. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  18. Computer

    CERN Document Server

    Atkinson, Paul

    2011-01-01

    The pixelated rectangle we spend most of our day staring at in silence is not the television as many long feared, but the computer-the ubiquitous portal of work and personal lives. At this point, the computer is almost so common we don't notice it in our view. It's difficult to envision that not that long ago it was a gigantic, room-sized structure accessed by only a few, inspiring as much awe and respect as fear and mystery. Now that the machine has decreased in size and increased in popular use, the computer has become a prosaic appliance, little more noted than a toaster. These dramati

  19. Processing Device for High-Speed Execution of an Xrisc Computer Program

    Science.gov (United States)

    Ng, Tak-Kwong (Inventor); Mills, Carl S. (Inventor)

    2016-01-01

    A processing device for high-speed execution of a computer program is provided. A memory module may store one or more computer programs. A sequencer may select one of the computer programs and controls execution of the selected program. A register module may store intermediate values associated with a current calculation set, a set of output values associated with a previous calculation set, and a set of input values associated with a subsequent calculation set. An external interface may receive the set of input values from a computing device and provides the set of output values to the computing device. A computation interface may provide a set of operands for computation during processing of the current calculation set. The set of input values are loaded into the register and the set of output values are unloaded from the register in parallel with processing of the current calculation set.

  20. Modeling high resolution space-time variations in energy demand/CO2 emissions of human inhabited landscapes in the United States under a changing climate

    Science.gov (United States)

    Godbole, A. V.; Gurney, K. R.

    2010-12-01

    components of the human-climate system must be coupled in climate modeling efforts to better understand the impacts and feedbacks. To implement modeling strategies for coupling the human and climate systems, their interactions must first be examined in greater detail at high spatial and temporal resolutions. This work attempts to quantify the impact of high resolution variations in projected climate change on energy use/emissions in the United States. We develop a predictive model for the space heating component of residential and commercial energy demand by leveraging results from the high resolution fossil fuel CO2 inventory of the Vulcan Project (Gurney et al., 2009). This predictive model is driven by high resolution temperature data from the RegCM3 model obtained by implementing a downscaling algorithm (Chow and Levermore, 2007). We will present the energy use/emissions in both the space and time domain from two different predictive models highlighting strengths and weaknesses in both. Furthermore, we will explore high frequency variations in the projected temperature field and how these might place potentially large burdens on energy supply and delivery.
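
    A toy version of the space-heating term in such a demand model can be built from heating degree hours below a balance-point temperature; in the sketch below, the hourly temperatures, balance point, and scaling coefficient are all invented for illustration and are not Vulcan or RegCM3 values.

```python
# Toy temperature-driven space-heating demand: heating degree hours below a
# balance-point temperature, scaled by an (invented) building coefficient.
# Temperatures and coefficients are illustrative, not Vulcan/RegCM3 values.
import numpy as np

rng = np.random.default_rng(0)
hours = 24 * 365
# Synthetic hourly temperature series (deg C) with a seasonal cycle plus noise.
t = np.arange(hours)
temperature = 12 - 14 * np.cos(2 * np.pi * t / hours) + rng.normal(0, 3, hours)

balance_point = 18.0                                  # deg C, conventional choice
hdh = np.clip(balance_point - temperature, 0, None)   # heating degree hours

kwh_per_degree_hour = 0.9                  # invented building-stock coefficient
heating_demand_kwh = kwh_per_degree_hour * hdh
print("annual heating demand: %.0f MWh" % (heating_demand_kwh.sum() / 1000))
print("peak hourly demand:    %.1f kWh" % heating_demand_kwh.max())
```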

  1. A first attempt to bring computational biology into advanced high school biology classrooms.

    Directory of Open Access Journals (Sweden)

    Suzanne Renick Gallagher

    2011-10-01

    Full Text Available Computer science has become ubiquitous in many areas of biological research, yet most high school and even college students are unaware of this. As a result, many college biology majors graduate without adequate computational skills for contemporary fields of biology. The absence of a computational element in secondary school biology classrooms is of growing concern to the computational biology community and biology teachers who would like to acquaint their students with updated approaches in the discipline. We present a first attempt to correct this absence by introducing a computational biology element to teach genetic evolution into advanced biology classes in two local high schools. Our primary goal was to show students how computation is used in biology and why a basic understanding of computation is necessary for research in many fields of biology. This curriculum is intended to be taught by a computational biologist who has worked with a high school advanced biology teacher to adapt the unit for his/her classroom, but a motivated high school teacher comfortable with mathematics and computing may be able to teach this alone. In this paper, we present our curriculum, which takes into consideration the constraints of the required curriculum, and discuss our experiences teaching it. We describe the successes and challenges we encountered while bringing this unit to high school students, discuss how we addressed these challenges, and make suggestions for future versions of this curriculum. We believe that our curriculum can be a valuable seed for further development of computational activities aimed at high school biology students. Further, our experiences may be of value to others teaching computational biology at this level. Our curriculum can be obtained at http://ecsite.cs.colorado.edu/?page_id=149#biology or by contacting the authors.

  2. A first attempt to bring computational biology into advanced high school biology classrooms.

    Science.gov (United States)

    Gallagher, Suzanne Renick; Coon, William; Donley, Kristin; Scott, Abby; Goldberg, Debra S

    2011-10-01

    Computer science has become ubiquitous in many areas of biological research, yet most high school and even college students are unaware of this. As a result, many college biology majors graduate without adequate computational skills for contemporary fields of biology. The absence of a computational element in secondary school biology classrooms is of growing concern to the computational biology community and biology teachers who would like to acquaint their students with updated approaches in the discipline. We present a first attempt to correct this absence by introducing a computational biology element to teach genetic evolution into advanced biology classes in two local high schools. Our primary goal was to show students how computation is used in biology and why a basic understanding of computation is necessary for research in many fields of biology. This curriculum is intended to be taught by a computational biologist who has worked with a high school advanced biology teacher to adapt the unit for his/her classroom, but a motivated high school teacher comfortable with mathematics and computing may be able to teach this alone. In this paper, we present our curriculum, which takes into consideration the constraints of the required curriculum, and discuss our experiences teaching it. We describe the successes and challenges we encountered while bringing this unit to high school students, discuss how we addressed these challenges, and make suggestions for future versions of this curriculum. We believe that our curriculum can be a valuable seed for further development of computational activities aimed at high school biology students. Further, our experiences may be of value to others teaching computational biology at this level. Our curriculum can be obtained at http://ecsite.cs.colorado.edu/?page_id=149#biology or by contacting the authors.

  3. High-throughput Bayesian Network Learning using Heterogeneous Multicore Computers.

    Science.gov (United States)

    Linderman, Michael D; Athalye, Vivek; Meng, Teresa H; Asadi, Narges Bani; Bruggner, Robert; Nolan, Garry P

    2010-06-01

    Aberrant intracellular signaling plays an important role in many diseases. The causal structure of signal transduction networks can be modeled as Bayesian Networks (BNs), and computationally learned from experimental data. However, learning the structure of Bayesian Networks (BNs) is an NP-hard problem that, even with fast heuristics, is too time consuming for large, clinically important networks (20-50 nodes). In this paper, we present a novel graphics processing unit (GPU)-accelerated implementation of a Monte Carlo Markov Chain-based algorithm for learning BNs that is up to 7.5-fold faster than current general-purpose processor (GPP)-based implementations. The GPU-based implementation is just one of several implementations within the larger application, each optimized for a different input or machine configuration. We describe the methodology we use to build an extensible application, assembled from these variants, that can target a broad range of heterogeneous systems, e.g., GPUs, multicore GPPs. Specifically we show how we use the Merge programming model to efficiently integrate, test and intelligently select among the different potential implementations.
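
    The kind of structure search being accelerated can be sketched, CPU-only and greatly simplified, as a Metropolis-Hastings walk over directed acyclic graphs: propose toggling one edge, reject proposals that create a cycle, and accept or reject by score. The BIC-like score and the data below are placeholders, not the authors' scoring model, and nothing here is GPU-accelerated.

```python
# CPU-only sketch of a Metropolis-Hastings walk over DAG structures, the kind
# of search the paper accelerates on GPUs. The BIC-style score is a crude
# placeholder, not the authors' scoring function; the data are invented.
import numpy as np

rng = np.random.default_rng(0)
n_vars, n_obs = 6, 500
data = rng.normal(size=(n_obs, n_vars))

def is_dag(adj):
    """Check acyclicity by repeatedly removing a node with no incoming edges."""
    adj = adj.copy()
    while adj.shape[0]:
        roots = np.where(adj.sum(axis=0) == 0)[0]
        if roots.size == 0:
            return False
        keep = np.delete(np.arange(adj.shape[0]), roots[0])
        adj = adj[np.ix_(keep, keep)]
    return True

def score(adj):
    """Placeholder BIC-like score: linear fit of each node on its parents."""
    total = 0.0
    for j in range(n_vars):
        parents = np.where(adj[:, j])[0]
        if parents.size:
            coeffs, res, *_ = np.linalg.lstsq(data[:, parents], data[:, j], rcond=None)
            rss = res[0] if res.size else ((data[:, j] - data[:, parents] @ coeffs) ** 2).sum()
        else:
            rss = (data[:, j] ** 2).sum()
        total += -n_obs * np.log(rss / n_obs) - np.log(n_obs) * parents.size
    return total

adj = np.zeros((n_vars, n_vars), dtype=int)
current = score(adj)
for _ in range(2000):                 # proposals: toggle one directed edge
    i, j = rng.integers(n_vars, size=2)
    if i == j:
        continue
    proposal = adj.copy()
    proposal[i, j] ^= 1
    if not is_dag(proposal):
        continue
    new = score(proposal)
    if new > current or rng.random() < np.exp(new - current):
        adj, current = proposal, new
print("edges in sampled structure:", int(adj.sum()), "score:", round(current, 1))
```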

  4. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  5. Next Generation Seismic Imaging; High Fidelity Algorithms and High-End Computing

    Science.gov (United States)

    Bevc, D.; Ortigosa, F.; Guitton, A.; Kaelin, B.

    2007-05-01

    uniquely powerful computing power of the MareNostrum supercomputer in Barcelona to realize the promise of RTM, incorporate it into daily processing flows, and help solve exploration problems in a highly cost-effective way. Uniquely, the Kaleidoscope Project is simultaneously integrating software (algorithms) and hardware (Cell BE), steps that are traditionally taken sequentially. This integration of software and hardware will accelerate seismic imaging by several orders of magnitude compared with conventional solutions running on standard Linux clusters.

  6. High-order distortion control using a computational prediction method for device overlay

    Science.gov (United States)

    Kang, Young-Seog; Affentauschegg, Cedric; Mulkens, Jan; Kim, Jang-Sun; Shin, Ju-Hee; Kim, Young-Ha; Nam, Young-Sun; Choi, Young-Sin; Ha, Hunhwan; Lee, Dong-Han; Lee, Jae-il; Rizvi, Umar; Geh, Bernd; van der Heijden, Rob; Baselmans, Jan; Kwon, Oh-Sung

    2016-04-01

    As a result of the continuously shrinking features of the integrated circuit, overlay budget requirements have become very demanding. Historically, overlay control has relied on metrology targets, and most overlay enhancements were achieved through hardware improvements. However, this is no longer sufficient, and additional, computational solutions are needed to improve overlay under process variation. In this paper, we present the limitations of third-order intrafield distortion corrections based on standard overlay metrology and propose an improved method that predicts the device overlay and corrects the lens aberration fingerprint based on this prediction. For a DRAM use case, we present a computational approach that calculates the overlay of the device pattern using lens aberrations as an additional input, next to the target-based overlay measurement result. Supporting experimental data are presented that demonstrate a significant reduction of the intrafield overlay fingerprint.
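
    As a rough illustration of what a third-order intrafield correction involves, the hedged sketch below fits polynomial models dx(x, y) and dy(x, y) of total order three to synthetic overlay errors at target positions by least squares. It is a generic example, not the authors' or ASML's actual correction or prediction model.

    # Hedged illustration only: generic third-order intrafield fit. Overlay errors
    # (dx, dy) measured at field positions (x, y) are fit with all monomials
    # x^i * y^j of total order <= 3 by ordinary least squares.
    import numpy as np

    def poly3_design(x, y):
        return np.column_stack([x**i * y**j for i in range(4) for j in range(4 - i)])

    def fit_intrafield(x, y, dx, dy):
        A = poly3_design(x, y)
        cx, *_ = np.linalg.lstsq(A, dx, rcond=None)
        cy, *_ = np.linalg.lstsq(A, dy, rcond=None)
        return cx, cy

    # Synthetic field coordinates (mm) and overlay errors (nm) stand in for metrology data.
    rng = np.random.default_rng(0)
    x, y = rng.uniform(-13, 13, 200), rng.uniform(-16, 16, 200)
    dx = 0.5 * x - 0.02 * x * y**2 + rng.normal(0, 0.3, 200)
    dy = -0.3 * y + 0.01 * x**2 * y + rng.normal(0, 0.3, 200)

    cx, cy = fit_intrafield(x, y, dx, dy)
    A = poly3_design(x, y)
    resid = np.sqrt(np.mean((dx - A @ cx) ** 2 + (dy - A @ cy) ** 2))
    print(f"post-correction residual RMS: {resid:.3f} nm")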

  7. Windspharm: A High-Level Library for Global Wind Field Computations Using Spherical Harmonics

    Directory of Open Access Journals (Sweden)

    Andrew Dawson

    2016-08-01

    Full Text Available The 'windspharm' library is a Python package for performing computations on global wind fields in spherical geometry. It provides a high-level interface for computing derivatives and integrals of vector wind fields over a sphere using spherical harmonics. The software allows for computations with plain arrays or with structures that include metadata, integrating with several popular data analysis libraries from the atmospheric and climate science community. The software is available on GitHub.
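
    A minimal usage sketch is given below, based on the standard (plain-array) interface as I recall it from the windspharm documentation; method and keyword names should be checked against the documentation for the installed version, and real wind data would replace the random arrays used here.

    # Minimal windspharm sketch (standard interface, plain NumPy arrays).
    # Assumes u and v are on a global regular lat-lon grid ordered north-to-south;
    # random data stand in for real wind components.
    import numpy as np
    from windspharm.standard import VectorWind

    nlat, nlon = 73, 144
    u = np.random.randn(nlat, nlon)
    v = np.random.randn(nlat, nlon)

    w = VectorWind(u, v, gridtype='regular')
    vorticity = w.vorticity()          # relative vorticity via spherical harmonics
    divergence = w.divergence()
    sf, vp = w.sfvp()                  # streamfunction and velocity potential
    print(vorticity.shape, divergence.shape, sf.shape, vp.shape)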

  8. Preliminary evaluation of ultra-high pitch computed tomography enterography

    Energy Technology Data Exchange (ETDEWEB)

    Hardie, Andrew D.; Horst, Nicole D.; Mayes, Nicholas [Medical University of South Carolina, Department of Radiology and Radiological Science, Charleston (United States)], E-mail: andrewdhardie@gmail.com

    2012-12-15

    Background. CT enterography (CTE) is a valuable tool in the management of patients with inflammatory bowel disease. Reduced imaging time, reduced motion artifacts, and decreased radiation exposure are important goals for optimizing CTE examinations. Purpose. To assess the potential of new CT technology (ultra-high pitch CTE) to reduce scan time and radiation exposure while maintaining image quality. Material and Methods. This retrospective study compared 13 patients who underwent ultra-high pitch CTE with 25 patients who underwent routine CTE on the same CT scanner with identical radiation emission settings. Total scan time and radiation exposure were recorded for each patient. Image quality was assessed by measurement of image noise and also qualitatively by two independent observers. Results. Total scan time was significantly lower for ultra-high pitch CTE (2.1 s ± 0.2) than for routine CTE (18.6 s ± 0.9) (P < 0.0001). The mean radiation exposure for ultra-high pitch CTE was also significantly lower (10.1 mGy ± 1.0) than for routine CTE (15.8 mGy ± 4.5) (P < 0.0001). No significant difference in image noise was found between ultra-high pitch CTE (16.0 HU ± 2.5) and routine CTE (15.5 HU ± 3.7) (P > 0.74). There was also no significant difference in image quality noted by either of the two readers. Conclusion. Ultra-high pitch CTE can be performed more rapidly than standard CTE and offers the potential for radiation exposure reduction while maintaining image quality.

  9. Development of a Computational Steering Framework for High Performance Computing Environments on Blue Gene/P Systems

    KAUST Repository

    Danani, Bob K.

    2012-07-01

    Computational steering has revolutionized the traditional workflow in high performance computing (HPC) applications. The standard workflow that consists of preparation of an application’s input, running of a simulation, and visualization of simulation results in a post-processing step is now transformed into a real-time interactive workflow that significantly reduces development and testing time. Computational steering provides the capability to direct or re-direct the progress of a simulation application at run-time. It allows modification of application-defined control parameters at run-time using various user-steering applications. In this project, we propose a computational steering framework for HPC environments that provides an innovative solution and easy-to-use platform, which allows users to connect and interact with running application(s) in real-time. This framework uses RealityGrid as the underlying steering library and adds several enhancements to the library to enable steering support for Blue Gene systems. Included in the scope of this project is the development of a scalable and efficient steering relay server that supports many-to-many connectivity between multiple steered applications and multiple steering clients. Steered applications can range from intermediate simulation and physical modeling applications to complex computational fluid dynamics (CFD) applications or advanced visualization applications. The Blue Gene supercomputer presents special challenges for remote access because the compute nodes reside on private networks. This thesis presents an implemented solution and demonstrates it on representative applications. Thorough implementation details and application enablement steps are also presented in this thesis to encourage direct usage of this framework.
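
    To make the relay idea concrete, the sketch below implements a toy many-to-many relay with Python's asyncio: clients declare themselves as either a steered application or a steering client, and each line they send is forwarded to every connected peer of the other role. This is a conceptual illustration only; it is unrelated to the RealityGrid wire protocol or the Blue Gene-specific networking described in the thesis.

    # Toy many-to-many steering relay (conceptual sketch, not RealityGrid).
    # The first line a client sends declares its role ("app" or "steerer");
    # subsequent newline-delimited messages are relayed to the other role.
    import asyncio

    clients = {"app": set(), "steerer": set()}

    async def handle(reader, writer):
        role = (await reader.readline()).decode().strip()
        if role not in clients:
            writer.close()
            return
        clients[role].add(writer)
        other = "steerer" if role == "app" else "app"
        try:
            while (line := await reader.readline()):
                for peer in list(clients[other]):      # relay to the other side
                    peer.write(line)
                    await peer.drain()
        finally:
            clients[role].discard(writer)
            writer.close()

    async def main(port=9009):
        server = await asyncio.start_server(handle, "0.0.0.0", port)
        async with server:
            await server.serve_forever()

    if __name__ == "__main__":
        asyncio.run(main())

    A production relay of the kind described in the thesis would additionally need authentication, message framing, and per-application routing rather than broadcasting to every peer.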

  10. A High Performance Bayesian Computing Framework for Spatiotemporal Uncertainty Modeling

    Science.gov (United States)

    Cao, G.

    2015-12-01

    All types of spatiotemporal measurements are subject to uncertainty. As spatiotemporal data become increasingly involved in scientific research and decision making, it is important to appropriately model the impact of that uncertainty. Quantitatively modeling spatiotemporal uncertainty, however, is a challenging problem given the complex dependence structures and data heterogeneities. State-space models provide a unifying and intuitive framework for modeling dynamic systems. In this paper, we aim to extend conventional state-space models to uncertainty modeling in space-time contexts while accounting for spatiotemporal effects and data heterogeneities. Gaussian Markov Random Field (GMRF) models, also known as conditional autoregressive models, are arguably the most commonly used methods for modeling spatially dependent data. GMRF models assume that a geo-referenced variable depends primarily on its neighborhood (the Markov property), and the spatial dependence structure is described via a precision matrix. Recent work has shown that GMRFs are efficient approximations to the commonly used Gaussian fields (e.g., kriging), and compared with Gaussian fields, GMRFs enjoy a series of appealing features, such as fast computation and easy accommodation of heterogeneities in spatial data (e.g., point and areal). This paper represents each spatial dataset as a GMRF and integrates them into a state-space form to statistically model the temporal dynamics. Different types of spatial measurements (e.g., categorical, count, or continuous) can be accounted for by corresponding link functions. A fast alternative to the MCMC framework, the so-called Integrated Nested Laplace Approximation (INLA), was adopted for model inference. Preliminary case studies will be conducted to showcase the advantages of the described framework. In the first case, we apply the proposed method to modeling the water table elevation of the Ogallala aquifer over the past decades. In the second case, we analyze the
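
    As a small, self-contained illustration of the GMRF building block (not the paper's actual model), the sketch below assembles the precision matrix of a first-order (4-neighbour) GMRF on a regular grid and draws one sample from N(0, Q^{-1}) via a Cholesky factor.

    # Minimal GMRF sketch: Q = tau * (graph Laplacian + kappa * I) on a grid,
    # then a sample x ~ N(0, Q^{-1}) via Q = L L^T  =>  x = L^{-T} z.
    import numpy as np
    from scipy.linalg import solve_triangular

    def grid_precision(nx, ny, kappa=0.1, tau=1.0):
        n = nx * ny
        idx = np.arange(n).reshape(ny, nx)
        Q = np.zeros((n, n))
        # 4-neighbour adjacency: horizontal and vertical grid edges get -1.
        for a, b in [(idx[:, :-1], idx[:, 1:]), (idx[:-1, :], idx[1:, :])]:
            Q[a.ravel(), b.ravel()] = -1.0
            Q[b.ravel(), a.ravel()] = -1.0
        np.fill_diagonal(Q, -Q.sum(axis=1) + kappa)   # node degree + nugget
        return tau * Q

    Q = grid_precision(30, 30)
    L = np.linalg.cholesky(Q)
    rng = np.random.default_rng(0)
    z = rng.standard_normal(Q.shape[0])
    x = solve_triangular(L.T, z, lower=False)         # sample from N(0, Q^{-1})
    field = x.reshape(30, 30)                         # spatially correlated surface
    print(field.shape, field.std())

    Because the precision matrix is sparse by construction, factorizations and solves scale far better than with the dense covariance matrices of kriging-style Gaussian fields, which is the computational advantage the abstract alludes to.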

  11. Computational analysis of high-throughput flow cytometry data

    Science.gov (United States)

    Robinson, J Paul; Rajwa, Bartek; Patsekin, Valery; Davisson, Vincent Jo

    2015-01-01

    Introduction Flow cytometry has been around for over 40 years, but only recently has the opportunity arisen to move into the high-throughput domain. The technology is now available and is highly competitive with imaging tools under the right conditions. Flow cytometry has, however, been a technology focused on its unique ability to study single cells, and appropriate analytical tools are readily available to handle this traditional role of the technology. Areas covered Expansion of flow cytometry to a high-throughput (HT) and high-content technology requires advances in both hardware and analytical tools. The historical perspective of flow cytometry operation, how the field has changed, and what the key changes have been are discussed. The authors provide a background and compelling arguments for moving toward HT flow, where there are many innovative opportunities. With alternative approaches now available for flow cytometry, there will be a considerable number of new applications. These opportunities show strong capability for drug screening and functional studies with cells in suspension. Expert opinion There is no doubt that HT flow is a rich technology awaiting acceptance by the pharmaceutical community. It can provide a powerful phenotypic analytical toolset that has the capacity to change many current approaches to HT screening. The previous restrictions on the technology, based on its reduced capacity for sample throughput, are no longer a major issue. Overcoming this barrier has transformed a mature technology into one that can focus on systems biology questions not previously considered possible. PMID:22708834

  12. BiForce Toolbox: powerful high-throughput computational analysis of gene-gene interactions in genome-wide association studies.

    Science.gov (United States)

    Gyenesei, Attila; Moody, Jonathan; Laiho, Asta; Semple, Colin A M; Haley, Chris S; Wei, Wen-Hua

    2012-07-01

    Genome-wide association studies (GWAS) have discovered many loci associated with common diseases and quantitative traits. However, most GWAS have not studied the gene-gene interactions (epistasis) that could be important in complex trait genetics. A major challenge in analysing epistasis in GWAS is the enormous computational demand of analysing billions of SNP combinations. Several methods have been developed recently to address this, some using computers equipped with particular graphics processing units, most restricted to binary disease traits, and all poorly suited to general usage on the most widely used operating systems. We have developed the BiForce Toolbox to address the demand for high-throughput analysis of pairwise epistasis in GWAS of quantitative and disease traits across all commonly used computer systems. BiForce Toolbox is a stand-alone Java program that integrates bitwise computing with multithreaded parallelization and thus allows rapid, full pairwise genome scans via a graphical user interface or the command line. Furthermore, BiForce Toolbox incorporates additional tests of interactions involving SNPs with significant marginal effects, potentially increasing the power of detection of epistasis. BiForce Toolbox is easy to use and has been applied in multiple studies of epistasis in large GWAS data sets, identifying interesting interaction signals and pathways.
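
    The bitwise idea can be sketched as follows; this is a conceptual Python illustration, not BiForce's Java implementation or its actual test statistics. Each SNP is stored as three bitsets (one per genotype class), pre-split by case/control status, so the 2 x 9 joint-genotype table for any SNP pair comes from bitwise ANDs and popcounts, and a chi-square statistic on that table serves as a crude screen.

    # Conceptual bitwise pairwise scan (requires Python >= 3.10 for int.bit_count()).
    import numpy as np
    from itertools import combinations

    def bitsets(genotypes, keep):
        """Three bitsets (genotypes 0/1/2) over the individuals selected by `keep`."""
        out = []
        for g in (0, 1, 2):
            bits = 0
            for i in np.flatnonzero((genotypes == g) & keep):
                bits |= 1 << int(i)          # Python ints act as arbitrary-length bitsets
            out.append(bits)
        return out

    def chi_square(table):
        table = table.astype(float)
        expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / table.sum()
        return ((table - expected) ** 2 / expected).sum()

    rng = np.random.default_rng(0)
    n_snps, n_ind = 6, 400
    geno = rng.integers(0, 3, size=(n_snps, n_ind))
    case = rng.integers(0, 2, size=n_ind).astype(bool)        # binary phenotype

    cases = [bitsets(g, case) for g in geno]
    controls = [bitsets(g, ~case) for g in geno]

    best = (0.0, None, None)
    for i, j in combinations(range(n_snps), 2):
        table = np.array([
            [(a & b).bit_count() for a in cases[i] for b in cases[j]],
            [(a & b).bit_count() for a in controls[i] for b in controls[j]],
        ]) + 1                                                # +1 avoids zero cells
        stat = chi_square(table)
        if stat > best[0]:
            best = (stat, i, j)
    print(f"strongest joint-genotype signal: SNP{best[1]} x SNP{best[2]} (chi2 = {best[0]:.1f})")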

  13. Stress in highly demanding IT jobs: transformational leadership moderates the impact of time pressure on exhaustion and work-life balance.

    Science.gov (United States)

    Syrek, Christine J; Apostel, Ella; Antoni, Conny H

    2013-07-01

    The objective of this article is to investigate transformational leadership as a potential moderator of the negative relationship of time pressure to work-life balance and of the positive relationship between time pressure and exhaustion. Recent research regards time pressure as a challenge stressor; while being positively related to motivation and performance, time pressure also increases employee strain and decreases well-being. Building on the Job Demands-Resources model, we hypothesize that transformational leadership moderates the relationships between time pressure and both employees' exhaustion and work-life balance, such that both relationships will be weaker when transformational leadership is higher. In total, 262 employees from seven information technology organizations in Germany participated in the study. Established scales for time pressure, transformational leadership, work-life balance, and exhaustion were used, all showing good internal consistencies. The results support our assumptions. Specifically, we find that under high transformational leadership the impact of time pressure on exhaustion and work-life balance was weaker. The results of this study suggest that, particularly under high time pressure, transformational leadership is an important factor for both employees' work-life balance and exhaustion. PsycINFO Database Record (c) 2013 APA, all rights reserved.
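
    For readers less familiar with moderation analysis, the sketch below shows the standard form of such a test on simulated data (not the study's data): exhaustion is regressed on time pressure, transformational leadership, and their product term, and a negative interaction coefficient corresponds to leadership buffering the time-pressure effect.

    # Illustrative moderated regression on simulated data (not the study's data).
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(42)
    n = 262
    tp = rng.normal(0, 1, n)                     # time pressure (standardised)
    tfl = rng.normal(0, 1, n)                    # transformational leadership
    exhaustion = 0.5 * tp - 0.2 * tfl - 0.25 * tp * tfl + rng.normal(0, 1, n)

    df = pd.DataFrame({"exhaustion": exhaustion, "tp": tp, "tfl": tfl})
    model = smf.ols("exhaustion ~ tp * tfl", data=df).fit()
    print(model.params)                          # the tp:tfl term is the moderation effect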

  14. A Debugging Standard for High-Performance Computing

    Directory of Open Access Journals (Sweden)

    Joan M. Francioni

    2000-01-01

    Full Text Available Throughout 1998, the High Performance Debugging Forum worked on defining a base level standard for high performance debuggers. The standard had to meet the sometimes conflicting constraints of being useful to users, realistically implementable by developers, and architecturally independent across multiple platforms. To meet criteria for timeliness, the standard had to be defined in one year and in such a way that it could be implemented within an additional year. The Forum was successful, and in November 1998 released Version 1 of the HPD Standard. Implementations of the standard are currently underway. This paper presents an overview of Version 1 of the standard and an analysis of the process by which the standard was developed. The status of implementation efforts and plans for follow-on efforts are discussed as well.

  15. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.

    2009-10-01

    This report is an account of the deliberations and conclusions of the workshop on "Forefront Questions in Nuclear Science and the Role of High Performance Computing" held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing Research (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to 1) identify forefront scientific challenges in nuclear physics and then determine which, if any, of these could be aided by high performance computing at the extreme scale; 2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; 3) provide nuclear physicists the opportunity to influence the development of high performance computing; and 4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  16. Cognitive Correlates of Performance in Algorithms in a Computer Science Course for High School

    Science.gov (United States)

    Avancena, Aimee Theresa; Nishihara, Akinori

    2014-01-01

    Computer science for high school faces many challenging issues. One of these is whether the students possess the appropriate cognitive ability for learning the fundamentals of computer science. Online tests were created based on known cognitive factors and fundamental algorithms and were implemented among the second grade students in the…

  17. Computer simulation of effect of conditions on discharge-excited high power gas flow CO laser

    Science.gov (United States)

    Ochiai, Ryo; Iyoda, Mitsuhiro; Taniwaki, Manabu; Sato, Shunichi

    2017-01-01

    The authors have developed computer simulation codes to analyze the effect of operating conditions on the performance of a discharge-excited, high-power gas-flow CO laser. Six conditions are analyzed. The simulation code, described and executed on Macintosh computers, consists of modules that calculate the kinetic processes. The detailed conditions, kinetic processes, results, and discussion are described below.
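
    As a generic illustration of what a kinetics module in such a code computes (an assumed two-level rate-equation form with arbitrary parameter values, not the authors' model), the sketch below integrates an upper-state population and photon density under constant discharge pumping with SciPy.

    # Assumed two-level laser rate equations with illustrative, arbitrary parameters.
    import numpy as np
    from scipy.integrate import solve_ivp

    def rates(t, y, pump=2e22, B=1e-13, tau2=1e-3, tau_c=1e-6, beta=1e-4):
        n2, phi = y
        stim = B * n2 * phi                              # stimulated emission
        dn2 = pump - n2 / tau2 - stim                    # pumping, relaxation, stimulated loss
        dphi = stim + beta * n2 / tau2 - phi / tau_c     # gain + spontaneous seed - cavity loss
        return [dn2, dphi]

    sol = solve_ivp(rates, (0.0, 1e-2), y0=[0.0, 0.0], method="LSODA", max_step=1e-5)
    print("upper-state population:", sol.y[0, -1], " photon density:", sol.y[1, -1])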

  19. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  20. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...