WorldWideScience

Sample records for high computational demand

  1. Ubiquitous Green Computing Techniques for High Demand Applications in Smart Environments

    Directory of Open Access Journals (Sweden)

    Jose M. Moya

    2012-08-01

Full Text Available Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, impose constantly increasing computational demands in order to process data and offer services to users. The nature of these applications implies the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time.
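
    As a rough illustration of the workload-redistribution idea in this abstract, the sketch below greedily assigns each low-demand task to an idle low/medium-capability node whenever that costs less energy than keeping it in the data centre. This is a minimal sketch only: the node names, energy figures and the greedy rule are illustrative assumptions, not the policy evaluated in the paper.

```python
# Toy sketch of application-aware workload redistribution: low-demand tasks
# are moved from a high-power facility to idle low/medium-capability nodes
# when that lowers total energy. Power figures and the greedy rule are
# illustrative assumptions, not the technique from the cited paper.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    capacity: float         # work units the node can still absorb
    joules_per_unit: float   # marginal energy per work unit

def assign(tasks, nodes, datacenter_joules_per_unit=10.0):
    """Greedy rule: send each task to the cheapest node with spare capacity,
    otherwise keep it in the data centre."""
    plan, energy = [], 0.0
    for size in tasks:
        candidates = [n for n in nodes if n.capacity >= size
                      and n.joules_per_unit < datacenter_joules_per_unit]
        if candidates:
            best = min(candidates, key=lambda n: n.joules_per_unit)
            best.capacity -= size
            plan.append((size, best.name))
            energy += size * best.joules_per_unit
        else:
            plan.append((size, "datacenter"))
            energy += size * datacenter_joules_per_unit
    return plan, energy

nodes = [Node("wsn-gw-1", 3.0, 2.0), Node("wsn-gw-2", 1.5, 4.0)]
print(assign([1.0, 2.0, 5.0], nodes))
```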

  2. Ubiquitous green computing techniques for high demand applications in Smart environments.

    Science.gov (United States)

    Zapater, Marina; Sanchez, Cesar; Ayala, Jose L; Moya, Jose M; Risco-Martín, José L

    2012-01-01

Ubiquitous sensor network deployments, such as the ones found in Smart cities and Ambient intelligence applications, impose constantly increasing computational demands in order to process data and offer services to users. The nature of these applications implies the usage of data centers. Research has paid much attention to the energy consumption of the sensor nodes in WSN infrastructures. However, supercomputing facilities are the ones presenting a higher economic and environmental impact due to their very high power consumption. The latter problem, however, has been disregarded in the field of smart environment services. This paper proposes an energy-minimization workload assignment technique, based on heterogeneity and application-awareness, that redistributes low-demand computational tasks from high-performance facilities to idle nodes with low and medium resources in the WSN infrastructure. These non-optimal allocation policies reduce the energy consumed by the whole infrastructure and the total execution time.

  3. Conceptual Framework and Computational Research of Hierarchical Residential Household Water Demand

    Directory of Open Access Journals (Sweden)

    Baodeng Hou

    2018-05-01

    Full Text Available Although the quantity of household water consumption does not account for a huge proportion of the total water consumption amidst socioeconomic development, there has been a steadily increasing trend due to population growth and improved urbanization standards. As such, mastering the mechanisms of household water demand, scientifically predicting trends of household water demand, and implementing reasonable control measures are key focuses of current urban water management. Based on the categorization and characteristic analysis of household water, this paper used Maslow’s Hierarchy of Needs to establish a level and grade theory of household water demand, whereby household water is classified into three levels (rigid water demand, flexible water demand, and luxury water demand and three grades (basic water demand, reasonable water demand, and representational water demand. An in-depth analysis was then carried out on the factors that influence the computation of household water demand, whereby equations for different household water categories were established, and computations for different levels of household water were proposed. Finally, observational experiments on household water consumption were designed, and observation and simulation computations were performed on three typical households in order to verify the scientific outcome and rationality of the computation of household water demand. The research findings contribute to the enhancement and development of prediction theories on water demand, and they are of high theoretical and realistic significance in terms of scientifically predicting future household water demand and fine-tuning the management of urban water resources.
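
    To make the level-based decomposition concrete, the following sketch sums hypothetical per-category coefficients over the three demand levels named in the abstract. The categories and litres-per-person values are illustrative assumptions, not the equations established in the study.

```python
# Minimal sketch of a hierarchical household water-demand estimate.
# The per-capita coefficients and categories below are hypothetical
# illustrations, not the values or equations used in the cited study.
LITRES_PER_PERSON_PER_DAY = {
    "rigid":    {"drinking": 3.0, "cooking": 10.0, "hygiene": 40.0},
    "flexible": {"laundry": 15.0, "cleaning": 8.0},
    "luxury":   {"garden": 20.0, "car_washing": 5.0},
}

def household_demand(occupants: int,
                     include_levels=("rigid", "flexible", "luxury")) -> float:
    """Return estimated household demand in litres/day by summing the
    selected demand levels over all category coefficients."""
    total = 0.0
    for level in include_levels:
        total += sum(LITRES_PER_PERSON_PER_DAY[level].values()) * occupants
    return total

if __name__ == "__main__":
    print(household_demand(3))              # all three levels
    print(household_demand(3, ("rigid",)))  # rigid demand only
```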

  4. Water demand forecasting: review of soft computing methods.

    Science.gov (United States)

    Ghalehkhondabi, Iman; Ardjmand, Ehsan; Young, William A; Weckman, Gary R

    2017-07-01

Demand forecasting plays a vital role in resource management for governments and private companies. Considering the scarcity of water and its inherent constraints, demand management and forecasting in this domain are critically important. Several soft computing techniques have been developed over the last few decades for water demand forecasting. This study focuses on soft computing methods of water consumption forecasting published between 2005 and 2015. These methods include artificial neural networks (ANNs), fuzzy and neuro-fuzzy models, support vector machines, metaheuristics, and system dynamics. Furthermore, it was discussed that although ANNs have been superior in many short-term forecasting cases, it is still very difficult to pick a single method as the overall best. According to the literature, various methods and their hybrids are applied to water demand forecasting. However, it seems soft computing has a lot more to contribute to water demand forecasting. These contribution areas include, but are not limited to, various ANN architectures, unsupervised methods, deep learning, various metaheuristics, and ensemble methods. Moreover, it is found that soft computing methods are mainly used for short-term demand forecasting.
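
    The sketch below shows the kind of ANN-based short-term forecast the review discusses: a small multilayer perceptron trained on lagged daily consumption. The synthetic series, lag depth and network size are assumptions chosen for demonstration, not settings recommended by the review.

```python
# Illustrative sketch of ANN-based short-term water-demand forecasting
# with lagged consumption as inputs. The synthetic data and the MLP
# configuration are assumptions for demonstration only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
days = np.arange(730)
demand = 100 + 10 * np.sin(2 * np.pi * days / 7) + rng.normal(0, 2, days.size)

LAGS = 7  # use the previous 7 days as features
X = np.column_stack([demand[i:i - LAGS] for i in range(LAGS)])
y = demand[LAGS:]

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X[:-30], y[:-30])  # hold out the last 30 days for testing
print("held-out MAE:", np.abs(model.predict(X[-30:]) - y[-30:]).mean())
```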

  5. Design of massively parallel hardware multi-processors for highly-demanding embedded applications

    NARCIS (Netherlands)

    Jozwiak, L.; Jan, Y.

    2013-01-01

    Many new embedded applications require complex computations to be performed to tight schedules, while at the same time demanding low energy consumption and low cost. For implementation of these highly-demanding applications, highly-optimized application-specific multi-processor system-on-a-chip

  6. Balancing exploration, uncertainty and computational demands in many objective reservoir optimization

    Science.gov (United States)

    Zatarain Salazar, Jazmin; Reed, Patrick M.; Quinn, Julianne D.; Giuliani, Matteo; Castelletti, Andrea

    2017-11-01

    Reservoir operations are central to our ability to manage river basin systems serving conflicting multi-sectoral demands under increasingly uncertain futures. These challenges motivate the need for new solution strategies capable of effectively and efficiently discovering the multi-sectoral tradeoffs that are inherent to alternative reservoir operation policies. Evolutionary many-objective direct policy search (EMODPS) is gaining importance in this context due to its capability of addressing multiple objectives and its flexibility in incorporating multiple sources of uncertainties. This simulation-optimization framework has high potential for addressing the complexities of water resources management, and it can benefit from current advances in parallel computing and meta-heuristics. This study contributes a diagnostic assessment of state-of-the-art parallel strategies for the auto-adaptive Borg Multi Objective Evolutionary Algorithm (MOEA) to support EMODPS. Our analysis focuses on the Lower Susquehanna River Basin (LSRB) system where multiple sectoral demands from hydropower production, urban water supply, recreation and environmental flows need to be balanced. Using EMODPS with different parallel configurations of the Borg MOEA, we optimize operating policies over different size ensembles of synthetic streamflows and evaporation rates. As we increase the ensemble size, we increase the statistical fidelity of our objective function evaluations at the cost of higher computational demands. This study demonstrates how to overcome the mathematical and computational barriers associated with capturing uncertainties in stochastic multiobjective reservoir control optimization, where parallel algorithmic search serves to reduce the wall-clock time in discovering high quality representations of key operational tradeoffs. Our results show that emerging self-adaptive parallelization schemes exploiting cooperative search populations are crucial. Such strategies provide a

  7. High energy physics and cloud computing

    International Nuclear Information System (INIS)

    Cheng Yaodong; Liu Baoxu; Sun Gongxing; Chen Gang

    2011-01-01

High Energy Physics (HEP) has been a strong promoter of computing technology, for example the WWW (World Wide Web) and grid computing. In the new era of cloud computing, HEP still has a strong demand, and major international high energy physics laboratories have launched a number of projects to research cloud computing technologies and applications. This paper describes the current developments in cloud computing and its applications in high energy physics. Some ongoing projects in the institutes of high energy physics, Chinese Academy of Sciences, including cloud storage, virtual computing clusters, and BESⅢ elastic cloud, are also described briefly in the paper. (authors)

  8. The impact of object size and precision demands on fatigue during computer mouse use

    DEFF Research Database (Denmark)

    Aasa, Ulrika; Jensen, Bente Rona; Sandfeld, Jesper

    2011-01-01

…use demands were of influence. Also, we investigated performance (number of rectangles painted), and whether perceived fatigue was paralleled by local muscle fatigue or tissue oxygenation. Ten women performed the task for three conditions (crossover design). At condition 1, rectangles were 45 × 25 mm… ratio was 1:8. The results showed increased self-reported fatigue over time, with the observed increase greater for the eyes, but no change in physiological responses. Condition 2 resulted in higher performance and increased eye fatigue. Perceived fatigue in the muscles or physiological responses did not differ between conditions. In conclusion, computer work tasks imposing high visual and motor demands, and with high performance, seemed to have an influence on eye fatigue.

  9. Agent assisted interactive algorithm for computationally demanding multiobjective optimization problems

    OpenAIRE

    Ojalehto, Vesa; Podkopaev, Dmitry; Miettinen, Kaisa

    2015-01-01

    We generalize the applicability of interactive methods for solving computationally demanding, that is, time-consuming, multiobjective optimization problems. For this purpose we propose a new agent assisted interactive algorithm. It employs a computationally inexpensive surrogate problem and four different agents that intelligently update the surrogate based on the preferences specified by a decision maker. In this way, we decrease the waiting times imposed on the decision maker du...

  10. Dynamic Placement of Virtual Machines with Both Deterministic and Stochastic Demands for Green Cloud Computing

    Directory of Open Access Journals (Sweden)

    Wenying Yue

    2014-01-01

    Full Text Available Cloud computing has come to be a significant commercial infrastructure offering utility-oriented IT services to users worldwide. However, data centers hosting cloud applications consume huge amounts of energy, leading to high operational cost and greenhouse gas emission. Therefore, green cloud computing solutions are needed not only to achieve high level service performance but also to minimize energy consumption. This paper studies the dynamic placement of virtual machines (VMs with deterministic and stochastic demands. In order to ensure a quick response to VM requests and improve the energy efficiency, a two-phase optimization strategy has been proposed, in which VMs are deployed in runtime and consolidated into servers periodically. Based on an improved multidimensional space partition model, a modified energy efficient algorithm with balanced resource utilization (MEAGLE and a live migration algorithm based on the basic set (LMABBS are, respectively, developed for each phase. Experimental results have shown that under different VMs’ stochastic demand variations, MEAGLE guarantees the availability of stochastic resources with a defined probability and reduces the number of required servers by 2.49% to 20.40% compared with the benchmark algorithms. Also, the difference between the LMABBS solution and Gurobi solution is fairly small, but LMABBS significantly excels in computational efficiency.
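
    The idea of placing VMs with both deterministic and stochastic demands can be illustrated with a chance-constrained first-fit heuristic: each VM reserves its deterministic demand plus a safety margin on its stochastic demand so that capacity holds with a chosen probability. This is a sketch of the general idea under assumed normal demands; it is not the MEAGLE or LMABBS algorithm from the paper.

```python
# Sketch of chance-constrained first-fit VM placement: stochastic demand is
# reserved at mean + z*sigma so server capacity holds with probability
# roughly Phi(z). Illustration only; not the algorithms from the paper.
from dataclasses import dataclass, field

Z_95 = 1.645  # one-sided 95% normal quantile

@dataclass
class VM:
    cpu_det: float    # deterministic CPU demand
    cpu_mu: float     # mean of stochastic CPU demand
    cpu_sigma: float  # std-dev of stochastic CPU demand

    def reserved(self, z: float = Z_95) -> float:
        return self.cpu_det + self.cpu_mu + z * self.cpu_sigma

@dataclass
class Server:
    capacity: float
    vms: list = field(default_factory=list)

    def used(self) -> float:
        return sum(vm.reserved() for vm in self.vms)

def place(vms, capacity=16.0):
    servers = []
    for vm in vms:
        target = next((s for s in servers
                       if s.used() + vm.reserved() <= s.capacity), None)
        if target is None:
            target = Server(capacity)
            servers.append(target)
        target.vms.append(vm)
    return servers

demo = [VM(2.0, 1.0, 0.5), VM(4.0, 0.5, 0.2), VM(1.0, 2.0, 1.0)]
print(len(place(demo)), "server(s) used")
```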

  11. Lightweight on-demand computing with Elasticluster and Nordugrid ARC

    CERN Document Server

    Pedersen, Maiken; The ATLAS collaboration; Filipcic, Andrej

    2018-01-01

The cloud computing paradigm allows scientists to elastically grow or shrink computing resources as requirements demand, so that resources only need to be paid for when necessary. The challenge of integrating cloud computing into the distributed computing frameworks used by HEP experiments has led to many different solutions in the past years, however none of these solutions offer a complete, fully integrated cloud resource out of the box. This paper describes how to offer such a resource using stripped-down minimal versions of existing distributed computing software components combined with off-the-shelf cloud tools. The basis of the cloud resource is Elasticluster, and the glue joining it to the HEP computing infrastructure is provided by the NorduGrid ARC middleware and the ARC Control Tower. These latter two components are stripped down to bare minimum edge services, removing the need for administering complex grid middleware, yet still provide the complete job and data management required to fully exploit the c...

  12. Demand side management scheme in smart grid with cloud computing approach using stochastic dynamic programming

    Directory of Open Access Journals (Sweden)

    S. Sofana Reka

    2016-09-01

Full Text Available This paper proposes a cloud computing framework in a smart grid environment by creating a small integrated energy hub supporting real-time computing for handling huge storage of data. A stochastic programming model is developed with a cloud computing scheme for effective demand side management (DSM) in the smart grid. Simulation results are obtained using a GUI interface and the Gurobi optimizer in Matlab in order to reduce the electricity demand by creating energy networks in a smart hub approach.
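
    A deterministic stand-in for this kind of demand-side scheduling can be written as a small linear program: shift a block of flexible energy across the day to minimise cost under an hourly power limit. The prices, energy block and cap below are made-up illustration values, and the LP is far simpler than the stochastic formulation solved with Gurobi in the paper.

```python
# Minimal load-shifting sketch (deterministic stand-in for the stochastic
# DSM formulation): schedule a flexible energy block across 24 hours to
# minimise cost under an hourly power limit. All numbers are illustrative.
import numpy as np
from scipy.optimize import linprog

price = np.array([0.10] * 7 + [0.25] * 12 + [0.15] * 5)  # EUR/kWh per hour
energy_needed = 30.0                                      # kWh to schedule
hourly_limit = 3.0                                        # kW cap per hour

res = linprog(c=price,
              A_eq=np.ones((1, 24)), b_eq=[energy_needed],
              bounds=[(0, hourly_limit)] * 24, method="highs")
print("optimal cost:", round(res.fun, 2), "EUR")
print("schedule:", np.round(res.x, 1))
```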

  13. GPU-based high-performance computing for radiation therapy

    International Nuclear Information System (INIS)

    Jia, Xun; Jiang, Steve B; Ziegenhein, Peter

    2014-01-01

Recent developments in radiation therapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid developments. A tremendous amount of study has been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we will first give a brief introduction to the GPU hardware structure and programming model. We will then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms will also be presented. (topical review)

  14. Architectural Considerations for Highly Scalable Computing to Support On-demand Video Analytics

    Science.gov (United States)

    2017-04-19

…research were used to implement a distributed on-demand video analytics system that was prototyped for the use of forensics investigators in law enforcement. The system was tested in the wild using video files as well as a commercial Video Management System supporting more than 100 surveillance… Keywords: …demand video intelligence; intelligent video system; video analytics platform.

  15. High-performance scientific computing in the cloud

    Science.gov (United States)

    Jorissen, Kevin; Vila, Fernando; Rehr, John

    2011-03-01

    Cloud computing has the potential to open up high-performance computational science to a much broader class of researchers, owing to its ability to provide on-demand, virtualized computational resources. However, before such approaches can become commonplace, user-friendly tools must be developed that hide the unfamiliar cloud environment and streamline the management of cloud resources for many scientific applications. We have recently shown that high-performance cloud computing is feasible for parallelized x-ray spectroscopy calculations. We now present benchmark results for a wider selection of scientific applications focusing on electronic structure and spectroscopic simulation software in condensed matter physics. These applications are driven by an improved portable interface that can manage virtual clusters and run various applications in the cloud. We also describe a next generation of cluster tools, aimed at improved performance and a more robust cluster deployment. Supported by NSF grant OCI-1048052.

  16. A Primer on High-Throughput Computing for Genomic Selection

    Directory of Open Access Journals (Sweden)

Xiao-Lin Wu

    2011-02-01

    Full Text Available High-throughput computing (HTC uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl and R, are also very useful to devise pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on the central processors, performing general purpose computation on a graphics processing unit (GPU provide a new-generation approach to massive parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin – Madison, which can be leveraged for genomic selection, in terms of central processing unit (CPU capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general purpose computing environments will further expand our capability to meet increasing computing demands posed by unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of

  17. Effect of aging on performance, muscle activation and perceived stress during mentally demanding computer tasks

    DEFF Research Database (Denmark)

    Alkjaer, Tine; Pilegaard, Marianne; Bakke, Merete

    2005-01-01

OBJECTIVES: This study examined the effects of age on performance, muscle activation, and perceived stress during computer tasks with different levels of mental demand. METHODS: Fifteen young and thirteen elderly women performed two computer tasks [color word test and reference task] with different levels of mental demand but similar physical demands. The performance (clicking frequency, percentage of correct answers, and response time for correct answers) and electromyography from the forearm, shoulder, and neck muscles were recorded. Visual analogue scales were used to measure the participants' perception of the stress and difficulty related to the tasks. RESULTS: Performance decreased significantly in both groups during the color word test in comparison with performance on the reference task. However, the performance reduction was more pronounced in the elderly group than in the young group…

  18. Demand Response in Low Voltage Distribution Networks with High PV Penetration

    DEFF Research Database (Denmark)

    Nainar, Karthikeyan; Pokhrel, Basanta Raj; Pillai, Jayakrishnan Radhakrishna

    2017-01-01

In this paper, application of demand response to accommodate maximum PV power in a low-voltage distribution network is discussed. A centralized control based on a model predictive control method is proposed for the computation of optimal demand response on an hourly basis. The proposed method uses PV… the required flexibility from the electricity market through an aggregator. The optimum demand response enables consumption of maximum renewable energy within the network constraints. Simulation studies are conducted using Matlab and DIgSILENT PowerFactory software on a Danish low-voltage distribution system…

  19. High School Computer Science Education Paves the Way for Higher Education: The Israeli Case

    Science.gov (United States)

    Armoni, Michal; Gal-Ezer, Judith

    2014-01-01

    The gap between enrollments in higher education computing programs and the high-tech industry's demands is widely reported, and is especially prominent for women. Increasing the availability of computer science education in high school is one of the strategies suggested in order to address this gap. We look at the connection between exposure to…

  20. A primer on high-throughput computing for genomic selection.

    Science.gov (United States)

    Wu, Xiao-Lin; Beissinger, Timothy M; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J M; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel

    2011-01-01

High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long, and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful to devise pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on the central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massive parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin-Madison, which can be leveraged for genomic selection, in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet increasing computing demands posed by unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized

  1. A High Performance VLSI Computer Architecture For Computer Graphics

    Science.gov (United States)

    Chin, Chi-Yuan; Lin, Wen-Tai

    1988-10-01

A VLSI computer architecture, consisting of multiple processors, is presented in this paper to satisfy the demands of modern computer graphics, e.g. high resolution, realistic animation, real-time display, etc. All processors share a global memory which is partitioned into multiple banks. Through a crossbar network, data from one memory bank can be broadcast to many processors. Processors are physically interconnected through a hyper-crossbar network (a crossbar-like network). By programming the network, the topology of communication links among processors can be reconfigured to satisfy the specific dataflows of different applications. Each processor consists of a controller, arithmetic operators, local memory, a local crossbar network, and I/O ports to communicate with other processors, memory banks, and a system controller. Operations in each processor are characterized into two modes, i.e. object domain and space domain, to fully utilize the data-independency characteristics of graphics processing. Special graphics features such as 3D-to-2D conversion, shadow generation, texturing, and reflection can be easily handled. With the current high density interconnection (MI) technology, it is feasible to implement a 64-processor system to achieve 2.5 billion operations per second, a performance needed in most advanced graphics applications.

  2. High performance computing in science and engineering Garching/Munich 2016

    Energy Technology Data Exchange (ETDEWEB)

    Wagner, Siegfried; Bode, Arndt; Bruechle, Helmut; Brehm, Matthias (eds.)

    2016-11-01

Computer simulations are the well-established third pillar of the natural sciences, along with theory and experimentation. High performance computing in particular is growing fast and constantly demands more and more powerful machines. To keep pace with this development, in spring 2015 the Leibniz Supercomputing Centre installed the high performance computing system SuperMUC Phase 2, only three years after the inauguration of its sibling SuperMUC Phase 1. Thereby, the compute capabilities were more than doubled. This book covers the time-frame June 2014 until June 2016. Readers will find many examples of outstanding research in the more than 130 projects that are covered in this book, with each one of these projects using at least 4 million core-hours on SuperMUC. The largest scientific communities using SuperMUC in the last two years were computational fluid dynamics simulations, chemistry and material sciences, astrophysics, and life sciences.

  3. High End Computing Technologies for Earth Science Applications: Trends, Challenges, and Innovations

    Science.gov (United States)

    Parks, John (Technical Monitor); Biswas, Rupak; Yan, Jerry C.; Brooks, Walter F.; Sterling, Thomas L.

    2003-01-01

    Earth science applications of the future will stress the capabilities of even the highest performance supercomputers in the areas of raw compute power, mass storage management, and software environments. These NASA mission critical problems demand usable multi-petaflops and exabyte-scale systems to fully realize their science goals. With an exciting vision of the technologies needed, NASA has established a comprehensive program of advanced research in computer architecture, software tools, and device technology to ensure that, in partnership with US industry, it can meet these demanding requirements with reliable, cost effective, and usable ultra-scale systems. NASA will exploit, explore, and influence emerging high end computing architectures and technologies to accelerate the next generation of engineering, operations, and discovery processes for NASA Enterprises. This article captures this vision and describes the concepts, accomplishments, and the potential payoff of the key thrusts that will help meet the computational challenges in Earth science applications.

  4. GLIF – striving towards a high-performance on-demand network

    CERN Multimedia

    Kristina Gunne

    2010-01-01

    If you were passing through the Mezzanine in the Main Building a couple of weeks ago, you probably noticed the large tiled panel display showing an ultra-high resolution visualization model of dark matter, developed by Cosmogrid. The display was one of the highlights of the 10th Annual Global Lambda Grid Workshop demo session, together with the first ever transfer of over 35 Gbit/second from one PC to another between the SARA Computing Centre in Amsterdam and CERN.   GLIF display. The transfer of such large amounts of data at this speed has been made possible thanks to the GLIF community's vision of a new computing paradigm, in which the central architectural element is an end-to-end path built on optical network wavelengths (so called lambdas). You may think of this as an on-demand private highway for data transfer: by using it you avoid the normal internet exchange points and “traffic jams”. GLIF is a virtual international organization managed as a cooperative activity, wi...

  5. Dimensioning storage and computing clusters for efficient High Throughput Computing

    CERN Multimedia

    CERN. Geneva

    2012-01-01

Scientific experiments are producing huge amounts of data, and they continue increasing the size of their datasets and the total volume of data. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of Scientific Data Centres has shifted from coping efficiently with PetaByte scale storage to delivering quality data processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both the online (data acceptance) and the offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out relevant features for running a successful s...

  6. Dimensioning storage and computing clusters for efficient high throughput computing

    International Nuclear Information System (INIS)

    Accion, E; Bria, A; Bernabeu, G; Caubet, M; Delfino, M; Espinal, X; Merino, G; Lopez, F; Martinez, F; Planas, E

    2012-01-01

Scientific experiments are producing huge amounts of data, and the size of their datasets and total volume of data continues increasing. These data are then processed by researchers belonging to large scientific collaborations, with the Large Hadron Collider being a good example. The focal point of scientific data centers has shifted from efficiently coping with PetaByte scale storage to delivering quality data processing throughput. The dimensioning of the internal components in High Throughput Computing (HTC) data centers is of crucial importance to cope with all the activities demanded by the experiments, both the online (data acceptance) and the offline (data processing, simulation and user analysis). This requires a precise setup involving disk and tape storage services, a computing cluster and the internal networking to prevent bottlenecks, overloads and undesired slowness that lead to lost CPU cycles and batch job failures. In this paper we point out relevant features for running a successful data storage and processing service in an intensive HTC environment.

  7. High-Throughput Computing on High-Performance Platforms: A Case Study

    Energy Technology Data Exchange (ETDEWEB)

    Oleynik, D [University of Texas at Arlington; Panitkin, S [Brookhaven National Laboratory (BNL); Matteo, Turilli [Rutgers University; Angius, Alessio [Rutgers University; Oral, H Sarp [ORNL; De, K [University of Texas at Arlington; Klimentov, A [Brookhaven National Laboratory (BNL); Wells, Jack C. [ORNL; Jha, S [Rutgers University

    2017-10-01

The computing systems used by LHC experiments have historically consisted of the federation of hundreds to thousands of distributed resources, ranging from small to mid-size resources. In spite of the impressive scale of the existing distributed computing solutions, the federation of small to mid-size resources will be insufficient to meet projected future demands. This paper is a case study of how the ATLAS experiment has embraced Titan -- a DOE leadership facility -- in conjunction with traditional distributed high-throughput computing to reach sustained production scales of approximately 52M core-hours a year. The three main contributions of this paper are: (i) a critical evaluation of design and operational considerations to support the sustained, scalable and production usage of Titan; (ii) a preliminary characterization of a next generation executor for PanDA to support new workloads and advanced execution modes; and (iii) early lessons for how current and future experimental and observational systems can be integrated with production supercomputers and other platforms in a general and extensible manner.

  8. High-resolution stochastic integrated thermal–electrical domestic demand model

    International Nuclear Information System (INIS)

    McKenna, Eoghan; Thomson, Murray

    2016-01-01

Highlights: • A major new version of CREST’s demand model is presented. • Simulates electrical and thermal domestic demands at high-resolution. • Integrated structure captures appropriate time-coincidence of variables. • Suitable for low-voltage network and urban energy analyses. • Open-source development in Excel VBA freely available for download. - Abstract: This paper describes the extension of CREST’s existing electrical domestic demand model into an integrated thermal–electrical demand model. The principal novelty of the model is its integrated structure, such that the timing of thermal and electrical output variables is appropriately correlated. The model has been developed primarily for low-voltage network analysis, and the model’s ability to account for demand diversity is of critical importance for this application. The model, however, can also serve as a basis for modelling domestic energy demands within the broader field of urban energy systems analysis. The new model includes the previously published components associated with electrical demand and generation (appliances, lighting, and photovoltaics) and integrates these with an updated occupancy model, a solar thermal collector model, and new thermal models including a low-order building thermal model, domestic hot water consumption, thermostat and timer controls and gas boilers. The paper reviews the state-of-the-art in high-resolution domestic demand modelling, describes the model, and compares its output with three independent validation datasets. The integrated model remains an open-source development in Excel VBA and is freely available to download for users to configure and extend, or to incorporate into other models.
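
    To give a flavour of the "low-order building thermal model" component mentioned above, the following is a minimal single-node (1R1C) thermal model with a simple thermostat-controlled heat source. All parameter values are illustrative assumptions; the published model itself is implemented in Excel VBA and is considerably richer.

```python
# Illustrative single-node (1R1C) building thermal model with a simple
# thermostat-controlled heat source. Parameters are illustrative guesses,
# not values from the CREST model.
R = 0.005      # thermal resistance to outside, K/W
C = 2.0e7      # thermal capacitance, J/K
P_HEAT = 8000  # heat source output, W
SETPOINT, DEADBAND = 20.0, 0.5  # degC
DT = 60.0      # time step, s

def simulate(hours=24, t_in=18.0, t_out=5.0):
    heating_on, trace = False, []
    for _ in range(int(hours * 3600 / DT)):
        if t_in < SETPOINT - DEADBAND:
            heating_on = True
        elif t_in > SETPOINT + DEADBAND:
            heating_on = False
        q = P_HEAT if heating_on else 0.0
        t_in += DT / C * (q - (t_in - t_out) / R)  # explicit Euler update
        trace.append(t_in)
    return trace

print(round(simulate()[-1], 2), "degC after 24 h")
```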

  9. Effective Management of High-Use/High-Demand Space Using Restaurant-Style Pagers

    Science.gov (United States)

    Gonzalez, Adriana

    2012-01-01

The library landscape is changing at a fast pace, with an increase in the demand for study space, including quiet, individualized study space; open group study space; and enclosed group study space. In large academic libraries, managing limited high-demand resources is crucial and is partially being driven by the greater emphasis on group…

  10. Acquisition of ICU data: concepts and demands.

    Science.gov (United States)

    Imhoff, M

    1992-12-01

    As the issue of data overload is a problem in critical care today, it is of utmost importance to improve acquisition, storage, integration, and presentation of medical data, which appears only feasible with the help of bedside computers. The data originates from four major sources: (1) the bedside medical devices, (2) the local area network (LAN) of the ICU, (3) the hospital information system (HIS) and (4) manual input. All sources differ markedly in quality and quantity of data and in the demands of the interfaces between source of data and patient database. The demands for data acquisition from bedside medical devices, ICU-LAN and HIS concentrate on technical problems, such as computational power, storage capacity, real-time processing, interfacing with different devices and networks and the unmistakable assignment of data to the individual patient. The main problem of manual data acquisition is the definition and configuration of the user interface that must allow the inexperienced user to interact with the computer intuitively. Emphasis must be put on the construction of a pleasant, logical and easy-to-handle graphical user interface (GUI). Short response times will require high graphical processing capacity. Moreover, high computational resources are necessary in the future for additional interfacing devices such as speech recognition and 3D-GUI. Therefore, in an ICU environment the demands for computational power are enormous. These problems are complicated by the urgent need for friendly and easy-to-handle user interfaces. Both facts place ICU bedside computing at the vanguard of present and future workstation development leaving no room for solutions based on traditional concepts of personal computers.(ABSTRACT TRUNCATED AT 250 WORDS)

  11. High-demand jobs: age-related diversity in work ability?

    NARCIS (Netherlands)

    Sluiter, Judith K.

    2006-01-01

    High-demand jobs include 'specific' job demands that are not preventable with state of the art ergonomics knowledge and may overburden the bodily capacities, safety or health of workers. An interesting question is whether the age of the worker is an important factor in explanations of diversity in

  12. Bringing Computational Thinking into the High School Science and Math Classroom

    Science.gov (United States)

    Trouille, Laura; Beheshti, E.; Horn, M.; Jona, K.; Kalogera, V.; Weintrop, D.; Wilensky, U.; University CT-STEM Project, Northwestern; University CenterTalent Development, Northwestern

    2013-01-01

    Computational thinking (for example, the thought processes involved in developing algorithmic solutions to problems that can then be automated for computation) has revolutionized the way we do science. The Next Generation Science Standards require that teachers support their students’ development of computational thinking and computational modeling skills. As a result, there is a very high demand among teachers for quality materials. Astronomy provides an abundance of opportunities to support student development of computational thinking skills. Our group has taken advantage of this to create a series of astronomy-based computational thinking lesson plans for use in typical physics, astronomy, and math high school classrooms. This project is funded by the NSF Computing Education for the 21st Century grant and is jointly led by Northwestern University’s Center for Interdisciplinary Exploration and Research in Astrophysics (CIERA), the Computer Science department, the Learning Sciences department, and the Office of STEM Education Partnerships (OSEP). I will also briefly present the online ‘Astro Adventures’ courses for middle and high school students I have developed through NU’s Center for Talent Development. The online courses take advantage of many of the amazing online astronomy enrichment materials available to the public, including a range of hands-on activities and the ability to take images with the Global Telescope Network. The course culminates with an independent computational research project.

  13. Integration of highly probabilistic sources into optical quantum architectures: perpetual quantum computation

    International Nuclear Information System (INIS)

    Devitt, Simon J; Stephens, Ashley M; Munro, William J; Nemoto, Kae

    2011-01-01

    In this paper, we introduce a design for an optical topological cluster state computer constructed exclusively from a single quantum component. Unlike previous efforts we eliminate the need for on demand, high fidelity photon sources and detectors and replace them with the same device utilized to create photon/photon entanglement. This introduces highly probabilistic elements into the optical architecture while maintaining complete specificity of the structure and operation for a large-scale computer. Photons in this system are continually recycled back into the preparation network, allowing for an arbitrarily deep three-dimensional cluster to be prepared using a comparatively small number of photonic qubits and consequently the elimination of high-frequency, deterministic photon sources.

  14. Electricity demand profile with high penetration of heat pumps in Nordic area

    DEFF Research Database (Denmark)

    Liu, Zhaoxi; Wu, Qiuwei; Nielsen, Arne Hejde

    2013-01-01

This paper presents the heat pump (HP) demand profile with high HP penetration in the Nordic area in order to achieve a carbon-neutral power system. The calculation method in the European Standard EN 14825 was used to estimate the HP electricity demand profile. The study results show there will be high power demand from HPs and that the selection of supplemental heating for heat pumps has a big impact on the peak electrical power load of heating. The study in this paper gives an estimate of the scale of the electricity demand with high penetration of heat pumps in the Nordic area.
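
    A much simplified stand-in for this kind of estimate is shown below: hourly HP electricity demand is taken as heat demand divided by a temperature-dependent COP, with any shortfall above the HP's thermal capacity met by direct electric supplemental heating. The COP curve and capacities are assumptions for illustration; the paper follows the EN 14825 procedure itself.

```python
# Simplified heat-pump electricity-demand sketch: electric power =
# (heat covered by the HP) / COP(T_out) + supplemental electric heating.
# The COP model and capacities are illustrative assumptions.
def cop(t_out: float) -> float:
    return max(1.5, 3.5 + 0.08 * t_out)  # crude air-source COP vs. outdoor temp

def hp_electric_demand(heat_demand_kw: float, t_out: float,
                       hp_capacity_kw: float = 6.0) -> float:
    from_hp = min(heat_demand_kw, hp_capacity_kw)
    supplemental = heat_demand_kw - from_hp  # met by electric resistance (COP = 1)
    return from_hp / cop(t_out) + supplemental

for t, q in [(-10, 8.0), (0, 6.0), (7, 4.0)]:
    print(f"T_out={t:>4} degC  heat={q} kW  electric={hp_electric_demand(q, t):.2f} kW")
```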

  15. Analysis of Future Vehicle Energy Demand in China Based on a Gompertz Function Method and Computable General Equilibrium Model

    Directory of Open Access Journals (Sweden)

    Tian Wu

    2014-11-01

Full Text Available This paper presents a model for the projection of Chinese vehicle stocks and road vehicle energy demand through 2050 based on low-, medium-, and high-growth scenarios. To derive a gross domestic product (GDP)-dependent Gompertz function, Chinese GDP is estimated using a recursive dynamic Computable General Equilibrium (CGE) model. The Gompertz function is estimated using historical data on vehicle development trends in North America, the Pacific Rim and Europe to overcome the problem of insufficient long-running data on Chinese vehicle ownership. Results indicate that the number of projected vehicle stocks for 2050 is 300, 455 and 463 million for the low-, medium-, and high-growth scenarios respectively. Furthermore, the growth in China’s vehicle stock will increase beyond the inflection point of the Gompertz curve by 2020, but will not reach saturation point during the period 2014–2050. Of the major road vehicle categories, cars are the largest energy consumers, followed by trucks and buses. Growth in Chinese vehicle demand is primarily determined by per capita GDP. Vehicle saturation levels solely influence the shape of the Gompertz curve and population growth weakly affects vehicle demand. Projected total energy consumption of road vehicles in 2050 is 380, 575 and 586 million tonnes of oil equivalent for each scenario.
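
    The Gompertz form used in such projections relates vehicle ownership to per-capita GDP as V = V_sat · exp(a · exp(b · GDP)), with a and b both negative. The sketch below evaluates that curve; the saturation level and shape parameters are illustrative guesses, not the values fitted in the paper.

```python
# Gompertz-curve sketch of vehicle ownership versus per-capita GDP:
# V = V_SAT * exp(A * exp(B * gdp)), A and B negative. Parameter values
# here are illustrative assumptions, not the paper's fitted estimates.
import math

V_SAT = 500.0          # saturation, vehicles per 1000 people
A, B = -5.9, -0.00012  # shape parameters (both negative)

def vehicles_per_1000(gdp_per_capita_usd: float) -> float:
    return V_SAT * math.exp(A * math.exp(B * gdp_per_capita_usd))

for gdp in (5_000, 15_000, 40_000):
    print(f"GDP/cap ${gdp:>6,} -> {vehicles_per_1000(gdp):.0f} vehicles per 1000 people")
```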

  16. Delivering Training for Highly Demanding Information Systems

    Science.gov (United States)

    Norton, Andrew Lawrence; Coulson-Thomas, Yvette May; Coulson-Thomas, Colin Joseph; Ashurst, Colin

    2012-01-01

    Purpose: There is a lack of research covering the training requirements of organisations implementing highly demanding information systems (HDISs). The aim of this paper is to help in the understanding of appropriate training requirements for such systems. Design/methodology/approach: This research investigates the training delivery within a…

  17. Security Services Lifecycle Management in on-demand infrastructure services

    NARCIS (Netherlands)

    Demchenko, Y.; de Laat, C.; Lopez, D.R.; García-Espín, J.A.; Qiu, J.; Zhao, G.; Rong, C.

    2010-01-01

    Modern e-Science and high technology industry require high-performance and complicated network and computer infrastructure to support distributed collaborating groups of researchers and applications that should be provisioned on-demand. The effective use and management of the dynamically provisioned

  18. Computation of nonlinear water waves with a high-order Boussinesq model

    DEFF Research Database (Denmark)

    Fuhrman, David R.; Madsen, Per A.; Bingham, Harry

    2005-01-01

    Computational highlights from a recently developed high-order Boussinesq model are shown. The model is capable of treating fully nonlinear waves (up to the breaking point) out to dimensionless depths of (wavenumber times depth) kh \\approx 25. Cases considered include the study of short......-crested waves in shallow/deep water, resulting in hexagonal/rectangular surface patterns; crescent waves, resulting from unstable perturbations of plane progressive waves; and highly-nonlinear wave-structure interactions. The emphasis is on physically demanding problems, and in eachcase qualitative and (when...

  19. Predicting Short-Term Electricity Demand by Combining the Advantages of ARMA and XGBoost in Fog Computing Environment

    Directory of Open Access Journals (Sweden)

    Chuanbin Li

    2018-01-01

Full Text Available With the rapid development of the IoT, the disadvantages of the Cloud framework have been exposed, such as high latency, network congestion, and low reliability. Therefore, the Fog Computing framework has emerged, with an extended Fog Layer between the Cloud and terminals. In order to address real-time prediction of electricity demand, we propose an approach based on XGBoost and ARMA in a Fog Computing environment. By taking advantage of the Fog Computing framework, we first propose a prototype-based clustering algorithm to divide enterprise users into several categories based on their total electricity consumption; we then propose a model selection approach by analyzing users’ historical records of electricity consumption and identifying the most important features. Generally speaking, if the historical records pass the tests of stationarity and white noise, ARMA is used to model the user’s electricity consumption in time sequence; otherwise, if the historical records do not pass the tests, and some discrete features are the most important, such as weather and whether it is a weekend, XGBoost will be used. The experiment results show that our proposed approach, combining the advantages of ARMA and XGBoost, is more accurate than the classical models.
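
    The model-selection logic can be sketched as follows: run a stationarity test on the consumption series and fit an ARMA-family model if it passes, otherwise fall back to a gradient-boosted tree model on calendar-style features. This is an assumption-laden illustration: only an ADF test is shown (the paper also applies a white-noise test), the series is synthetic, and sklearn's GradientBoostingRegressor stands in for XGBoost.

```python
# Sketch of the ARMA-vs-boosted-trees selection logic described above.
# Synthetic data, the 0.05 threshold, and the sklearn stand-in for XGBoost
# are all assumptions for illustration.
import numpy as np
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.arima.model import ARIMA
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(1)
load = 50 + 5 * np.sin(np.arange(200) * 2 * np.pi / 7) + rng.normal(0, 1, 200)

adf_p = adfuller(load)[1]
if adf_p < 0.05:
    # series looks stationary: fit an ARMA(2,1) model and forecast 24 steps
    forecast = ARIMA(load, order=(2, 0, 1)).fit().forecast(steps=24)
else:
    # otherwise: boosted trees on simple day-of-week / hour-style features
    X = np.column_stack([np.arange(200) % 7, np.arange(200) % 24])
    model = GradientBoostingRegressor().fit(X, load)
    future = np.column_stack([np.arange(200, 224) % 7, np.arange(200, 224) % 24])
    forecast = model.predict(future)
print(forecast[:5].round(1))
```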

  20. EBR-II high-ramp transients under computer control

    International Nuclear Information System (INIS)

    Forrester, R.J.; Larson, H.A.; Christensen, L.J.; Booty, W.F.; Dean, E.M.

    1983-01-01

    During reactor run 122, EBR-II was subjected to 13 computer-controlled overpower transients at ramps of 4 MWt/s to qualify the facility and fuel for transient testing of LMFBR oxide fuels as part of the EBR-II operational-reliability-testing (ORT) program. A computer-controlled automatic control-rod drive system (ACRDS), designed by EBR-II personnel, permitted automatic control on demand power during the transients

  1. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools

  2. Persistent high job demands and reactivity to mental stress predict future ambulatory blood pressure.

    Science.gov (United States)

    Steptoe, A; Cropley, M

    2000-05-01

    To test the hypothesis that work stress (persistent high job demands over 1 year) in combination with high reactivity to mental stress predict ambulatory blood pressure. Assessment of cardiovascular responses to standardized behavioural tasks, job demands, and ambulatory blood pressure over a working day and evening after 12 months. We studied 81 school teachers (26 men, 55 women), 36 of whom experienced persistent high job demands over 1 year, while 45 reported lower job demands. Participants were divided on the basis of high and low job demands, and high and low systolic pressure reactions to an uncontrollable stress task. Blood pressure and concurrent physical activity were monitored using ambulatory apparatus from 0900 to 2230 h on a working day. Cardiovascular stress reactivity was associated with waist/hip ratio. Systolic and diastolic pressure during the working day were greater in high job demand participants who were stress reactive than in other groups, after adjustment for age, baseline blood pressure, body mass index and negative affectivity. The difference was not accounted for by variations in physical activity. Cardiovascular stress reactivity and sustained psychosocial stress may act in concert to increase cardiovascular risk in susceptible individuals.

  3. Cloud Computing Benefits for Educational Institutions

    OpenAIRE

    Lakshminarayanan, Ramkumar; Kumar, Binod; Raju, M.

    2013-01-01

Education today is becoming closely associated with information technology for content delivery, communication and collaboration. The need for servers, storage and software is high in universities, colleges and schools. Cloud Computing is Internet-based computing, whereby shared resources, software and information are provided to computers and devices on demand, like the electricity grid. Currently, IaaS (Infrastructure as a Service), PaaS (Platform as a Service...

  4. Large Scale Computing and Storage Requirements for High Energy Physics

    International Nuclear Information System (INIS)

    Gerber, Richard A.; Wasserman, Harvey

    2010-01-01

The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes

  5. Career Technical Education: Keeping Adult Learners Competitive for High-Demand Jobs

    Science.gov (United States)

    National Association of State Directors of Career Technical Education Consortium, 2011

    2011-01-01

    In today's turbulent economy, how can adult workers best position themselves to secure jobs in high-demand fields where they are more likely to remain competitive and earn more? Further, how can employers up-skill current employees so that they meet increasingly complex job demands? Research indicates that Career Technical Education (CTE) aligned…

  6. Computing in high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Watase, Yoshiyuki

    1991-09-15

    The increasingly important role played by computing and computers in high energy physics is displayed in the 'Computing in High Energy Physics' series of conferences, bringing together experts in different aspects of computing - physicists, computer scientists, and vendors.

  7. Analog computing

    CERN Document Server

    Ulmann, Bernd

    2013-01-01

    This book is a comprehensive introduction to analog computing. As most textbooks about this powerful computing paradigm date back to the 1960s and 1970s, it fills a void and forges a bridge from the early days of analog computing to future applications. The idea of analog computing is not new. In fact, this computing paradigm is nearly forgotten, although it offers a path to both high-speed and low-power computing, which are in even more demand now than they were back in the heyday of electronic analog computers.

  8. Computing in high energy physics

    International Nuclear Information System (INIS)

    Watase, Yoshiyuki

    1991-01-01

    The increasingly important role played by computing and computers in high energy physics is displayed in the 'Computing in High Energy Physics' series of conferences, bringing together experts in different aspects of computing - physicists, computer scientists, and vendors

  9. The effect of preferred music on mood and performance in a high-cognitive demand occupation.

    Science.gov (United States)

    Lesiuk, Teresa

    2010-01-01

    Mild positive affect has been shown in the psychological literature to improve cognitive skills of creative problem-solving and systematic thinking. Individual preferred music listening offers opportunity for improved positive affect. The purpose of this study was to examine the effect of preferred music listening on state-mood and cognitive performance in a high-cognitive demand occupation. Twenty-four professional computer information systems developers (CISD) from a North American IT company participated in a 3-week study with a music/no music/music weekly design. During the music weeks, participants listened to their preferred music "when they wanted, as they wanted." Self-reports of State Positive Affect, State Negative Affect, and Cognitive Performance were measured throughout the 3 weeks. Results indicate a statistically significant improvement in both state-mood and cognitive performance scores. "High-cognitive demand" is a relative term given that challenges presented to individuals may occur on a cognitive continuum from need for focus and selective attention to systematic analysis and creative problem-solving. The findings and recommendations have important implications for music therapists' knowledge of the effect of music on emotion and cognition, as well as for music therapy consultation to organizations.

  10. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community.

    Science.gov (United States)

    Krampis, Konstantinos; Booth, Tim; Chapman, Brad; Tiwari, Bela; Bicak, Mesude; Field, Dawn; Nelson, Karen E

    2012-03-19

    A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the development of highly
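
    As a rough illustration of the on-demand provisioning workflow the record describes, the sketch below starts a single cloud instance from a machine image using the boto3 EC2 API. The AMI ID, key pair name, region, and instance type are placeholders for illustration, not the project's published values.

        # Hypothetical sketch: provisioning one on-demand instance on EC2 with boto3.
        # The image ID, key pair, region, and instance type below are placeholders.
        import boto3

        ec2 = boto3.resource("ec2", region_name="us-east-1")
        instances = ec2.create_instances(
            ImageId="ami-0123456789abcdef0",   # stand-in for a Cloud BioLinux image
            InstanceType="m5.xlarge",          # sized for bioinformatics workloads
            KeyName="my-keypair",              # an existing EC2 key pair (assumed)
            MinCount=1,
            MaxCount=1,
        )
        instance = instances[0]
        instance.wait_until_running()          # block until the VM is up
        instance.reload()                      # refresh metadata such as the DNS name
        print("instance reachable at", instance.public_dns_name)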

  11. Cloud BioLinux: pre-configured and on-demand bioinformatics computing for the genomics community

    Science.gov (United States)

    2012-01-01

    Background A steep drop in the cost of next-generation sequencing during recent years has made the technology affordable to the majority of researchers, but downstream bioinformatic analysis still poses a resource bottleneck for smaller laboratories and institutes that do not have access to substantial computational resources. Sequencing instruments are typically bundled with only the minimal processing and storage capacity required for data capture during sequencing runs. Given the scale of sequence datasets, scientific value cannot be obtained from acquiring a sequencer unless it is accompanied by an equal investment in informatics infrastructure. Results Cloud BioLinux is a publicly accessible Virtual Machine (VM) that enables scientists to quickly provision on-demand infrastructures for high-performance bioinformatics computing using cloud platforms. Users have instant access to a range of pre-configured command line and graphical software applications, including a full-featured desktop interface, documentation and over 135 bioinformatics packages for applications including sequence alignment, clustering, assembly, display, editing, and phylogeny. Each tool's functionality is fully described in the documentation directly accessible from the graphical interface of the VM. Besides the Amazon EC2 cloud, we have started instances of Cloud BioLinux on a private Eucalyptus cloud installed at the J. Craig Venter Institute, and demonstrated access to the bioinformatic tools interface through a remote connection to EC2 instances from a local desktop computer. Documentation for using Cloud BioLinux on EC2 is available from our project website, while a Eucalyptus cloud image and VirtualBox Appliance is also publicly available for download and use by researchers with access to private clouds. Conclusions Cloud BioLinux provides a platform for developing bioinformatics infrastructures on the cloud. An automated and configurable process builds Virtual Machines, allowing the

  12. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Barker, Ashley D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bland, Arthur S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Gary, Jeff D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Hack, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; McNally, Stephen T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Rogers, James H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Smith, Brian E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Straatsma, T. P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Sukumar, Sreenivas Rangan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Thach, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Tichenor, Suzy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Wells, Jack C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility

    2016-03-01

    California to date. The Titan system provides the largest extant heterogeneous architecture for computing and computational science. Usage is high, delivering on the promise of a system well-suited for capability simulations for science. This success is due in part to innovations in tracking and reporting the activity on the compute nodes, and using this information to further enable and optimize applications, extending and balancing workload across the entire node. The OLCF continues to invest in innovative processes, tools, and resources necessary to meet continuing user demand. The facility’s leadership in data analysis and workflows was featured at the Department of Energy (DOE) booth at SC15, for the second year in a row, highlighting work with researchers from the National Library of Medicine coupled with unique computational and data resources serving experimental and observational data across facilities. Effective operations of the OLCF play a key role in the scientific missions and accomplishments of its users. Building on the exemplary year of 2014, as shown by the 2014 Operational Assessment Report (OAR) review committee response in Appendix A, this OAR delineates the policies, procedures, and innovations implemented by the OLCF to continue delivering a multi-petaflop resource for cutting-edge research. This report covers CY 2015, which, unless otherwise specified, denotes January 1, 2015, through December 31, 2015.

  13. Achieving high performance in numerical computations on RISC workstations and parallel systems

    Energy Technology Data Exchange (ETDEWEB)

    Goedecker, S. [Max-Planck Inst. for Solid State Research, Stuttgart (Germany); Hoisie, A. [Los Alamos National Lab., NM (United States)

    1997-08-20

    The nominal peak speeds of both serial and parallel computers are rising rapidly. At the same time, however, it is becoming increasingly difficult to extract a significant fraction of this high peak speed from modern computer architectures. In this tutorial the authors give the scientists and engineers involved in numerically demanding calculations and simulations the necessary basic knowledge to write reasonably efficient programs. The basic principles are rather simple and the possible rewards large. Taking into account optimization techniques related to the computer architecture when writing a program can significantly speed it up, often by factors of 10--100. As such, optimizing a program can, for instance, be a much better solution than buying a faster computer. If a few basic optimization principles are applied during program development, the additional time needed for obtaining an efficient program is practically negligible. In-depth optimization is usually only needed for a few subroutines or kernels and the effort involved is therefore also acceptable.
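
    The kind of reward the tutorial refers to can be illustrated with a toy example: the same reduction written as an interpreted elementwise loop and as a call into an optimized, cache- and SIMD-friendly library routine. The comparison below is only indicative; the 10--100 speedup factors quoted above concern compiled code tuned to the memory hierarchy, and actual timings depend on the machine.

        # Toy comparison: naive elementwise loop vs. an optimized BLAS-backed call.
        # Timings are machine dependent; this only illustrates the general point.
        import time
        import numpy as np

        n = 1_000_000
        a = np.random.rand(n)
        b = np.random.rand(n)

        t0 = time.perf_counter()
        s_loop = 0.0
        for i in range(n):               # plain interpreted loop
            s_loop += a[i] * b[i]
        t1 = time.perf_counter()

        s_vec = float(np.dot(a, b))      # vectorized dot product
        t2 = time.perf_counter()

        print(f"loop: {t1 - t0:.3f}s  vectorized: {t2 - t1:.3f}s  "
              f"results agree: {np.isclose(s_loop, s_vec)}")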

  14. A high-resolution stochastic model of domestic activity patterns and electricity demand

    International Nuclear Information System (INIS)

    Widen, Joakim; Waeckelgard, Ewa

    2010-01-01

    Realistic time-resolved data on occupant behaviour, presence and energy use are important inputs to various types of simulations, including performance of small-scale energy systems and buildings' indoor climate, use of lighting and energy demand. This paper presents a modelling framework for stochastic generation of high-resolution series of such data. The model generates both synthetic activity sequences of individual household members, including occupancy states, and domestic electricity demand based on these patterns. The activity-generating model, based on non-homogeneous Markov chains that are tuned to an extensive empirical time-use data set, creates a realistic spread of activities over time, down to a 1-min resolution. A detailed validation against measurements shows that modelled power demand data for individual households as well as aggregate demand for an arbitrary number of households are highly realistic in terms of end-use composition, annual and diurnal variations, diversity between households, short time-scale fluctuations and load coincidence. An important aim with the model development has been to maintain a sound balance between complexity and output quality. Although the model yields a high-quality output, the proposed model structure is uncomplicated in comparison to other available domestic load models.
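
    A minimal sketch of the activity-generating idea follows: a non-homogeneous Markov chain whose transition probabilities depend on the time of day, stepped at 1-min resolution. The states and transition matrices here are invented for illustration; the model described above calibrates them to empirical time-use data.

        # Minimal sketch of a non-homogeneous Markov chain for occupant activities.
        # States and time-dependent transition matrices are illustrative only.
        import numpy as np

        STATES = ["away", "sleeping", "cooking", "watching_tv"]
        MINUTES_PER_DAY = 24 * 60

        def transition_matrix(minute):
            """Toy time-of-day dependent transition probabilities (1-min steps)."""
            night = minute < 6 * 60 or minute > 23 * 60
            if night:
                return np.array([[0.98, 0.02, 0.00, 0.00],
                                 [0.00, 0.99, 0.005, 0.005],
                                 [0.00, 0.50, 0.40, 0.10],
                                 [0.00, 0.30, 0.00, 0.70]])
            return np.array([[0.95, 0.00, 0.03, 0.02],
                             [0.05, 0.85, 0.05, 0.05],
                             [0.02, 0.00, 0.88, 0.10],
                             [0.05, 0.00, 0.05, 0.90]])

        rng = np.random.default_rng(0)
        state = 1                       # start the day sleeping
        activity_sequence = []
        for minute in range(MINUTES_PER_DAY):
            state = rng.choice(len(STATES), p=transition_matrix(minute)[state])
            activity_sequence.append(STATES[state])

        print(activity_sequence[::60])  # sampled hourly for a quick look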

  15. Computing in high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Sarah; Devenish, Robin [Nuclear Physics Laboratory, Oxford University (United Kingdom)

    1989-07-15

    Computing in high energy physics has changed over the years from being something one did on a slide-rule, through early computers, then a necessary evil to the position today where computers permeate all aspects of the subject from control of the apparatus to theoretical lattice gauge calculations. The state of the art, as well as new trends and hopes, was reflected in this year's 'Computing In High Energy Physics' conference held in the dreamy setting of Oxford's spires. The conference aimed to give a comprehensive overview, entailing a heavy schedule of 35 plenary talks plus 48 contributed papers in two afternoons of parallel sessions. In addition to high energy physics computing, a number of papers were given by experts in computing science, in line with the conference's aim – 'to bring together high energy physicists and computer scientists'.

  16. The world energy demand in 2007: How high oil prices impact the global energy demand? June 9, 2008

    International Nuclear Information System (INIS)

    2008-01-01

    How do high oil prices impact global energy demand? The growth of energy demand continued to accelerate in 2007 despite soaring prices, reaching 2.8% (+0.3 points compared to 2006). This evolution results from two diverging trends: a decline in energy consumption in most OECD countries, except North America, and a strong increase in emerging countries. Within the OECD, two contrasting trends can be reported, which partially compensate each other: the reduction of energy consumption in Japan (-0.8%) and in Europe (-1.2%), particularly significant in the EU-15 (-1.9%), and the increase of energy consumption in North America (+2%). Globally, overall OECD consumption continued to increase slightly (+0.5%), while electricity increased faster (+2.1%) and fuels remained stable. Elsewhere, energy demand growth remained very dynamic (+5% for total demand, +8% for electricity alone), driven by China (+7.3%). World oil demand increased by only 1%, but demand has focused even more on captive end usages, transport and petrochemistry. World gasoline and diesel demand increased by around 5.7% in 2007 and represented 53% of total oil product demand in 2007 (51% in 2006). While gasoline and diesel consumption remained quasi-stable within OECD countries, growth was extremely strong in emerging countries despite booming oil prices. Two main factors explain this evolution, in which both oil demand and oil prices increased: weak price elasticity of demand in the transport and petrochemistry sectors, and the disconnection of domestic fuel prices in major emerging countries (China, India, Latin America) from world oil market prices. Another striking point is that world crude oil and condensate production remained almost stable in 2007, hence the entire demand growth was supported by destocking. During the same period, OPEC production decreased by 1%, mainly due to the production decrease in Saudi Arabia, that is probably more

  17. High-level language computer architecture

    CERN Document Server

    Chu, Yaohan

    1975-01-01

    High-Level Language Computer Architecture offers a tutorial on high-level language computer architecture, including von Neumann architecture and syntax-oriented architecture as well as direct and indirect execution architecture. Design concepts of Japanese-language data processing systems are discussed, along with the architecture of stack machines and the SYMBOL computer system. The conceptual design of a direct high-level language processor is also described.Comprised of seven chapters, this book first presents a classification of high-level language computer architecture according to the pr

  18. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  19. Large Scale Computing and Storage Requirements for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years

  20. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware–in the form of Field Programmable Gate Arrays (FPGAs)–in High Performance Computing (HPC) applications. It presents the latest developments in this field from applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: Thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely: financial computing, bioinformatics and computational biology, data search and processing, stencil computation e.g. computational fluid dynamics and seismic modeling, cryptanalysis, astronomical N-body simulation, and circuit simulation. Seven architecture chapters which...

  1. Computing in high energy physics

    International Nuclear Information System (INIS)

    Smith, Sarah; Devenish, Robin

    1989-01-01

    Computing in high energy physics has changed over the years from being something one did on a slide-rule, through early computers, then a necessary evil to the position today where computers permeate all aspects of the subject from control of the apparatus to theoretical lattice gauge calculations. The state of the art, as well as new trends and hopes, was reflected in this year's 'Computing In High Energy Physics' conference held in the dreamy setting of Oxford's spires. The conference aimed to give a comprehensive overview, entailing a heavy schedule of 35 plenary talks plus 48 contributed papers in two afternoons of parallel sessions. In addition to high energy physics computing, a number of papers were given by experts in computing science, in line with the conference's aim – 'to bring together high energy physicists and computer scientists'

  2. Integrating Embedded Computing Systems into High School and Early Undergraduate Education

    Science.gov (United States)

    Benson, B.; Arfaee, A.; Choon Kim; Kastner, R.; Gupta, R. K.

    2011-01-01

    Early exposure to embedded computing systems is crucial for students to be prepared for the embedded computing demands of today's world. However, exposure to systems knowledge often comes too late in the curriculum to stimulate students' interests and to provide a meaningful difference in how they direct their choice of electives for future…

  3. High energy physics and grid computing

    International Nuclear Information System (INIS)

    Yu Chuansong

    2004-01-01

    The status of the new generation computing environment for high energy physics experiments is introduced briefly in this paper. The development of high energy physics experiments and the new computing requirements of the experiments are presented. The blueprint of the new generation computing environment of the LHC experiments, the history of Grid computing, the R and D status of high energy physics grid computing technology, and the network bandwidth needed by the high energy physics grid and its development are described. Grid computing research in the Chinese high energy physics community is introduced last. (authors)

  4. Analysis and Modeling of Social Influence in High Performance Computing Workloads

    KAUST Repository

    Zheng, Shuai

    2011-06-01

    High Performance Computing (HPC) is becoming a common tool in many research areas. Social influence (e.g., project collaboration) among the increasing number of users of HPC systems creates bursty behavior in the underlying workloads. This bursty behavior is increasingly common with the advent of grid computing and cloud computing. Mining this bursty user behavior is important for HPC workload prediction and scheduling, which has a direct impact on overall HPC computing performance. A representative work in this area is the Mixed User Group Model (MUGM), which clusters users according to the resource demand features of their submissions, such as duration time and parallelism. However, MUGM has some difficulties when implemented in real-world systems. First, representing user behaviors by the features of their resource demand is usually difficult. Second, these features are not always available. Third, measuring the similarities among users is not a well-defined problem. In this work, we propose a Social Influence Model (SIM) to identify, analyze, and quantify the level of social influence across HPC users. The advantage of the SIM model is that it finds HPC communities by analyzing user job submission times, thereby avoiding the difficulties of MUGM. An offline algorithm and a fast-converging, computationally efficient online learning algorithm for identifying social groups are proposed. Both offline and online algorithms are applied to several HPC and grid workloads, including Grid 5000, EGEE 2005 and 2007, and KAUST Supercomputing Lab (KSL) BGP data. The experimental results show the existence of a social graph, which is characterized by a pattern of dominant users and followers. In order to evaluate the effectiveness of the identified user groups, we show that the pattern discovered by the offline algorithm follows a power-law distribution, which is consistent with those observed in mainstream social networks. We finally conclude the thesis and discuss future directions of our work.
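
    As a rough approximation of the idea of inferring social groups from job submission times (the abstract does not spell out the SIM algorithm, so this is an assumption-laden sketch), the snippet below links users whose submissions repeatedly fall within the same time window and inspects each user's degree in the resulting graph.

        # Rough sketch (not the thesis' SIM algorithm): link users whose job
        # submissions frequently fall in the same time windows, then inspect the
        # degree distribution of the resulting "social graph".
        from collections import defaultdict
        from itertools import combinations

        # submissions: {user_id: [submission timestamps in seconds]} (made-up data)
        submissions = {
            "u1": [100, 4000, 9000], "u2": [110, 4100, 9050],
            "u3": [50000, 70000],    "u4": [120, 9020],
        }
        WINDOW = 600  # submissions within 10 minutes count as co-activity

        co_activity = defaultdict(int)
        for ua, ub in combinations(submissions, 2):
            hits = sum(1 for ta in submissions[ua] for tb in submissions[ub]
                       if abs(ta - tb) <= WINDOW)
            if hits:
                co_activity[(ua, ub)] = hits

        degree = defaultdict(int)
        for (ua, ub), hits in co_activity.items():
            if hits >= 2:             # simple threshold defining an "influence" edge
                degree[ua] += 1
                degree[ub] += 1

        print(dict(co_activity))
        print("degrees:", dict(degree))  # a heavy tail would indicate dominant users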

  5. Long-term uranium supply-demand analyses

    International Nuclear Information System (INIS)

    1986-12-01

    It is the intention of this study to investigate the long-term uranium supply-demand situation using a number of supply- and demand-related assumptions. For supply, the assumptions used in the Resources and Production Projection (RAPP) model include country economic development status and consequent lead times for exploration and development, uranium development status, country infrastructure, and uranium resources in the Reasonably Assured (RAR), Estimated Additional Categories I and II (EAR-I and EAR-II), and Speculative Resource categories. The demand assumptions were based on the 'pure' reactor strategies developed by the NEA Working Party on Nuclear Fuel Cycle Requirements for the 1986 OECD (NEA)/IAEA reports 'Nuclear Energy and its Fuel Cycle: Prospects to 2025'. In addition, for this study a mixed-strategy case was computed using the averages of the plutonium (Pu)-burning LWR high case and the improved LWR low case. It is understandable that such a long-term analysis cannot present hard facts, but it can show which variables may in fact influence the long-term supply-demand situation. It is hoped that the results of this study will provide valuable information for planners in the uranium supply and demand fields. Periodic re-analyses with updated data bases will be needed from time to time

  6. On-demand provisioning of HEP compute resources on cloud sites and shared HPC centers

    Science.gov (United States)

    Erli, G.; Fischer, F.; Fleig, G.; Giffels, M.; Hauth, T.; Quast, G.; Schnepf, M.; Heese, J.; Leppert, K.; Arnaez de Pedro, J.; Sträter, R.

    2017-10-01

    This contribution reports on solutions, experiences and recent developments with the dynamic, on-demand provisioning of remote computing resources for analysis and simulation workflows. Local resources of a physics institute are extended by private and commercial cloud sites, ranging from the inclusion of desktop clusters over institute clusters to HPC centers. Rather than relying on dedicated HEP computing centers, it is nowadays more reasonable and flexible to utilize remote computing capacity via virtualization techniques or container concepts. We report on recent experience from incorporating a remote HPC center (NEMO Cluster, Freiburg University) and resources dynamically requested from the commercial provider 1&1 Internet SE into our institute's computing infrastructure. The Freiburg HPC resources are requested via the standard batch system, allowing HPC and HEP applications to be executed simultaneously, such that regular batch jobs run side by side with virtual machines managed via OpenStack [1]. For the inclusion of the 1&1 commercial resources, a Python API and SDK as well as the possibility to upload images were available. Large scale tests prove the capability to serve the scientific use case in the European 1&1 datacenters. The described environment at the Institute of Experimental Nuclear Physics (IEKP) at KIT serves the needs of researchers participating in the CMS and Belle II experiments. In total, resources exceeding half a million CPU hours have been provided by remote sites.
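
    A minimal sketch of the batch-system entry point is shown below, assuming a Slurm-like scheduler; start-worker-vm.sh is a hypothetical site script standing in for the OpenStack-based VM startup described in the record, which is not reproduced here.

        # Minimal sketch, assuming a Slurm batch system: submit a placeholder job
        # that boots a worker VM on the shared HPC center when extra capacity is
        # needed. The actual site-specific OpenStack integration is not shown.
        import subprocess
        import textwrap

        def request_worker(cores: int, walltime: str) -> str:
            script = textwrap.dedent(f"""\
                #!/bin/bash
                #SBATCH --cpus-per-task={cores}
                #SBATCH --time={walltime}
                # start-worker-vm.sh is a hypothetical site script that boots the VM
                ./start-worker-vm.sh
                """)
            result = subprocess.run(
                ["sbatch"], input=script, text=True, capture_output=True, check=True
            )
            return result.stdout.strip()   # e.g. "Submitted batch job 123456"

        print(request_worker(cores=8, walltime="08:00:00"))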

  7. High Job Demands, Still Engaged and Not Burned Out? The Role of Job Crafting.

    Science.gov (United States)

    Hakanen, Jari J; Seppälä, Piia; Peeters, Maria C W

    2017-08-01

    Traditionally, employee well-being has been considered as resulting from decent working conditions arranged by the organization. Much less is known about whether employees themselves can make self-initiated changes to their work, i.e., craft their jobs, in order to stay well, even in highly demanding work situations. The aim of this study was to use the job demands-resources (JD-R) model to investigate whether job crafting buffers the negative impacts of four types of job demands (workload, emotional dissonance, work contents, and physical demands) on burnout and work engagement. A questionnaire study was designed to examine the buffering role of job crafting among 470 Finnish dentists. All in all, 11 out of 16 possible interaction effects of job demands and job crafting on employee well-being were significant. Job crafting particularly buffered the negative effects of job demands on burnout (7/8 significant interactions) and to a somewhat lesser extent also on work engagement (4/8 significant interactions). Applying job crafting techniques appeared to be particularly effective in mitigating the negative effects of quantitative workload (4/4 significant interactions). By demonstrating that job crafting can also buffer the negative impacts of high job demands on employee well-being, this study contributed to the JD-R model as it suggests that job crafting may even be possible under high work demands, and not only in resourceful jobs, as most previous studies have indicated. In addition to the top-down initiatives for improving employee well-being, bottom-up approaches such as job crafting may also be efficient in preventing burnout and enhancing work engagement.

  8. More customers embrace Dell standards-based computing for even the most demanding applications-Growing demand among HPCC customers for Dell in Europe

    CERN Multimedia

    2003-01-01

    Dell Computers has signed agreements with several high-profile customers in Europe to provide high performance computing cluster (HPCC) solutions. One customer is a consortium of 4 universities involved in research at the Collider Detector Facility at Fermilab (1 page).

  9. A compound Poisson EOQ model for perishable items with intermittent high and low demand periods

    NARCIS (Netherlands)

    Boxma, O.J.; Perry, D.; Stadje, W.; Zacks, S.

    2012-01-01

    We consider a stochastic EOQ-type model, with demand operating in a two-state random environment. This environment alternates between exponentially distributed periods of high demand and generally distributed periods of low demand. The inventory level starts at some level q, and decreases according
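
    A simple simulation of the modelled demand environment (not the paper's analysis) is sketched below: the inventory starts at q and is depleted by compound Poisson demand whose rate switches between a high-demand state with exponentially distributed duration and a low-demand state. All parameters, and the uniform stand-in for the "generally distributed" low-demand periods, are assumptions made for the sketch.

        # Illustrative simulation of an inventory level starting at q and decreasing
        # under compound Poisson demand in a two-state random environment.
        import numpy as np

        rng = np.random.default_rng(1)
        level = 100.0                              # initial inventory q
        arrival_rate = {"high": 5.0, "low": 1.0}   # demand arrivals per unit time (assumed)
        t, state = 0.0, "high"

        while level > 0:
            # High-demand periods are exponential; uniform is a stand-in for the
            # "generally distributed" low-demand periods.
            period = rng.exponential(2.0) if state == "high" else rng.uniform(2.0, 8.0)
            end = t + period
            while level > 0:
                t += rng.exponential(1.0 / arrival_rate[state])  # next demand arrival
                if t > end:
                    t = end
                    break
                level -= rng.exponential(2.0)                    # demand size
            state = "low" if state == "high" else "high"

        print(f"inventory starting at q=100 hits zero at t = {t:.2f}")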

  10. High degree utilization of computers for design of nuclear power plants

    International Nuclear Information System (INIS)

    Masui, Takao; Sawada, Takashi

    1992-01-01

    Nuclear power plants are a huge technology in which various technologies are combined, and high safety is demanded of them. Therefore, in the design of nuclear power plants, it is necessary to carry out the design with a sufficient grasp of the behavior of the plants, and to confirm safety through accurate design evaluation under various assumed operational conditions; as the indispensable tool for this analysis and evaluation, the most advanced computers of each era have been utilized. Utilization covers the fields of design, analysis and evaluation, as well as the support of design work. Computers are also utilized for operation control. The utilization of computers for the core design, hydrothermal design, core structure design, safety analysis and structural analysis of PWR plants, and for the nuclear design, safety analysis and heat flow analysis of FBR plants, as well as their application to design support and to operation control, is explained. (K.I.)

  11. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  12. High Performance Computing in Science and Engineering '17 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael; HLRS 2017

    2018-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2017. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  13. Visual ergonomics and computer work--is it all about computer glasses?

    Science.gov (United States)

    Jonsson, Christina

    2012-01-01

    The Swedish Provisions on Work with Display Screen Equipment and the EU Directive on the minimum safety and health requirements for work with display screen equipment cover several important visual ergonomics aspects. But a review of cases and questions to the Swedish Work Environment Authority clearly shows that most attention is given to the demands for eyesight tests and special computer glasses. Other important visual ergonomics factors are at risk of being neglected. Today computers are used everywhere, both at work and at home. Computers can be laptops, PDA's, tablet computers, smart phones, etc. The demands on eyesight tests and computer glasses still apply, but the visual demands and the visual ergonomics conditions are quite different compared to the use of a stationary computer. Based on this review, we raise the question of whether the demand on the employer to provide employees with computer glasses is outdated.

  14. High-End Scientific Computing

    Science.gov (United States)

    EPA uses high-end scientific computing, geospatial services and remote sensing/imagery analysis to support EPA's mission. The Center for Environmental Computing (CEC) assists the Agency's program offices and regions to meet staff needs in these areas.

  15. 46 CFR 111.60-7 - Demand loads.

    Science.gov (United States)

    2010-10-01

    § 111.60-7 Demand loads (46 CFR, Shipping; Coast Guard electrical requirements, Wiring Materials and Methods). Generator, feeder, and bus-tie cables must be selected on the basis of a computed load of not less than the demand load given in Table 111.60-7...

  16. A theoretical model for oxygen transport in skeletal muscle under conditions of high oxygen demand.

    Science.gov (United States)

    McGuire, B J; Secomb, T W

    2001-11-01

    Oxygen transport from capillaries to exercising skeletal muscle is studied by use of a Krogh-type cylinder model. The goal is to predict oxygen consumption under conditions of high demand, on the basis of a consideration of transport processes occurring at the microvascular level. Effects of the decline in oxygen content of blood flowing along capillaries, intravascular resistance to oxygen diffusion, and myoglobin-facilitated diffusion are included. Parameter values are based on human skeletal muscle. The dependence of oxygen consumption on oxygen demand, perfusion, and capillary density are examined. When demand is moderate, the tissue is well oxygenated and consumption is slightly less than demand. When demand is high, capillary oxygen content declines rapidly with axial distance and radial oxygen transport is limited by diffusion resistance within the capillary and the tissue. Under these conditions, much of the tissue is hypoxic, consumption is substantially less than demand, and consumption is strongly dependent on capillary density. Predicted consumption rates are comparable with experimentally observed maximal rates of oxygen consumption.
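
    For orientation, the classical Krogh-Erlang radial profile underlying cylinder models of this kind is sketched below with rough, assumed parameter values; the paper's model additionally tracks the axial decline of capillary oxygen content, intravascular diffusion resistance, and myoglobin-facilitated diffusion, none of which appear in this sketch.

        # Krogh-Erlang radial PO2 profile as an illustration of the cylinder geometry.
        # Parameter values are rough assumptions chosen to mimic high oxygen demand.
        import numpy as np

        P_cap = 40.0      # capillary PO2, mmHg (assumed)
        M = 6.0e-3        # O2 consumption, ml O2 / (ml tissue * s) (assumed, high demand)
        K = 9.4e-10       # Krogh diffusion coefficient, ml O2 / (cm * s * mmHg) (assumed)
        r_c = 2.5e-4      # capillary radius, cm
        R_t = 25e-4       # tissue cylinder radius, cm

        def tissue_po2(r):
            """Steady-state PO2 at radius r (Krogh-Erlang solution, zero flux at R_t)."""
            return (P_cap
                    + (M / (4 * K)) * (r**2 - r_c**2)
                    - (M * R_t**2 / (2 * K)) * np.log(r / r_c))

        for r in np.linspace(r_c, R_t, 6):
            print(f"r = {r * 1e4:5.1f} um   PO2 = {tissue_po2(r):6.1f} mmHg")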

  17. Residential Consumer-Centric Demand-Side Management Based on Energy Disaggregation-Piloting Constrained Swarm Intelligence: Towards Edge Computing.

    Science.gov (United States)

    Lin, Yu-Hsiu; Hu, Yu-Chen

    2018-04-27

    The emergence of smart Internet of Things (IoT) devices has highly favored the realization of smart homes in a down-stream sector of a smart grid. The underlying objective of Demand Response (DR) schemes is to actively engage customers to modify their energy consumption on domestic appliances in response to pricing signals. Domestic appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption intelligently. Besides, for residential customers implementing DR, maintaining a balance between energy consumption cost and users’ comfort satisfaction is a challenge. Hence, in this paper, a constrained Particle Swarm Optimization (PSO)-based residential consumer-centric load-scheduling method is proposed. The method can be further featured with edge computing. In contrast with cloud computing, edge computing—a method of optimizing cloud computing technologies by driving computing capabilities at the IoT edge of the Internet as one of the emerging trends in engineering technology—addresses bandwidth-intensive contents and latency-sensitive applications required among sensors and central data centers through data analytics at or near the source of data. A non-intrusive load-monitoring technique proposed previously is utilized to automatically determine the physical characteristics of power-intensive home appliances from users’ life patterns. The swarm intelligence, constrained PSO, is used to minimize the energy consumption cost while considering users’ comfort satisfaction for DR implementation. The residential consumer-centric load-scheduling method proposed in this paper is evaluated under real-time pricing with inclining block rates and is demonstrated in a case study. The experimentation reported in this paper shows the proposed residential consumer-centric load-scheduling method can re-shape loads by home appliances in response to DR signals. Moreover, a phenomenal reduction in peak power consumption is achieved
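
    A heavily simplified sketch of the underlying idea, not the authors' method: a global-best PSO searching start hours for two shiftable appliances to minimize electricity cost plus a comfort (delay) penalty, with feasibility enforced by clipping to the scheduling horizon. Prices, appliances, and the comfort model are invented for illustration.

        # Simplified PSO load-scheduling sketch: choose appliance start hours that
        # minimize energy cost plus a comfort (delay) penalty. Illustrative only.
        import numpy as np

        rng = np.random.default_rng(2)
        HOURS = 24
        price = np.array([0.08] * 7 + [0.15] * 10 + [0.25] * 4 + [0.10] * 3)  # $/kWh (assumed)
        appliances = [  # (power kW, duration h, preferred start hour) - assumed
            (2.0, 2, 19),
            (1.5, 3, 18),
        ]

        def cost(starts):
            total = 0.0
            for (kw, dur, pref), s in zip(appliances, starts):
                s = int(np.clip(round(s), 0, HOURS - dur))
                total += kw * price[s:s + dur].sum()   # energy cost
                total += 0.05 * abs(s - pref)          # comfort (delay) penalty
            return total

        # Standard global-best PSO over appliance start hours.
        n_particles, dims, iters = 20, len(appliances), 100
        x = rng.uniform(0, HOURS - 1, (n_particles, dims))
        v = np.zeros_like(x)
        pbest, pbest_val = x.copy(), np.array([cost(p) for p in x])
        gbest = pbest[pbest_val.argmin()].copy()

        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
            x = np.clip(x + v, 0, HOURS - 1)           # keep particles feasible
            vals = np.array([cost(p) for p in x])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()

        print("best start hours:", [int(round(s)) for s in gbest],
              "cost:", round(cost(gbest), 3))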

  18. Residential Consumer-Centric Demand-Side Management Based on Energy Disaggregation-Piloting Constrained Swarm Intelligence: Towards Edge Computing

    Science.gov (United States)

    Hu, Yu-Chen

    2018-01-01

    The emergence of smart Internet of Things (IoT) devices has highly favored the realization of smart homes in a down-stream sector of a smart grid. The underlying objective of Demand Response (DR) schemes is to actively engage customers to modify their energy consumption on domestic appliances in response to pricing signals. Domestic appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption intelligently. Besides, for residential customers implementing DR, maintaining a balance between energy consumption cost and users’ comfort satisfaction is a challenge. Hence, in this paper, a constrained Particle Swarm Optimization (PSO)-based residential consumer-centric load-scheduling method is proposed. The method can be further featured with edge computing. In contrast with cloud computing, edge computing—a method of optimizing cloud computing technologies by driving computing capabilities at the IoT edge of the Internet as one of the emerging trends in engineering technology—addresses bandwidth-intensive contents and latency-sensitive applications required among sensors and central data centers through data analytics at or near the source of data. A non-intrusive load-monitoring technique proposed previously is utilized to automatically determine the physical characteristics of power-intensive home appliances from users’ life patterns. The swarm intelligence, constrained PSO, is used to minimize the energy consumption cost while considering users’ comfort satisfaction for DR implementation. The residential consumer-centric load-scheduling method proposed in this paper is evaluated under real-time pricing with inclining block rates and is demonstrated in a case study. The experimentation reported in this paper shows the proposed residential consumer-centric load-scheduling method can re-shape loads by home appliances in response to DR signals. Moreover, a phenomenal reduction in peak power consumption is achieved

  19. Residential Consumer-Centric Demand-Side Management Based on Energy Disaggregation-Piloting Constrained Swarm Intelligence: Towards Edge Computing

    Directory of Open Access Journals (Sweden)

    Yu-Hsiu Lin

    2018-04-01

    The emergence of smart Internet of Things (IoT) devices has highly favored the realization of smart homes in a down-stream sector of a smart grid. The underlying objective of Demand Response (DR) schemes is to actively engage customers to modify their energy consumption on domestic appliances in response to pricing signals. Domestic appliance scheduling is widely accepted as an effective mechanism to manage domestic energy consumption intelligently. Besides, for residential customers implementing DR, maintaining a balance between energy consumption cost and users’ comfort satisfaction is a challenge. Hence, in this paper, a constrained Particle Swarm Optimization (PSO)-based residential consumer-centric load-scheduling method is proposed. The method can be further featured with edge computing. In contrast with cloud computing, edge computing—a method of optimizing cloud computing technologies by driving computing capabilities at the IoT edge of the Internet as one of the emerging trends in engineering technology—addresses bandwidth-intensive contents and latency-sensitive applications required among sensors and central data centers through data analytics at or near the source of data. A non-intrusive load-monitoring technique proposed previously is utilized to automatically determine the physical characteristics of power-intensive home appliances from users’ life patterns. The swarm intelligence, constrained PSO, is used to minimize the energy consumption cost while considering users’ comfort satisfaction for DR implementation. The residential consumer-centric load-scheduling method proposed in this paper is evaluated under real-time pricing with inclining block rates and is demonstrated in a case study. The experimentation reported in this paper shows the proposed residential consumer-centric load-scheduling method can re-shape loads by home appliances in response to DR signals. Moreover, a phenomenal reduction in peak power

  20. Demands Set Upon Modern Cartographic Visualization

    Directory of Open Access Journals (Sweden)

    Stanislav Frangeš

    2007-05-01

    Scientific cartography has the task of developing and researching new methods of cartographic visualization. General demands are set upon modern cartographic visualization, which encompasses digital cartography and computer graphics: legibility, clearness, accuracy, plainness and aesthetics. In this paper, it is explained in detail which conditions should be met in order to satisfy these general demands. In order to satisfy the demand of legibility, one should respect the conditions of minimal sizes, appropriate graphical density and better differentiation of known features. The demand of clearness is met by fulfilling the conditions of simplicity, contrasting quality and layer arrangement of the cartographic representation. Accuracy, as a demand on cartographic visualization, can be divided into positional accuracy and the accuracy of signs. For fulfilling the demand of plainness, the conditions of symbolism, traditionalism and hierarchic organization should be met. The demand of aesthetics will be met if the conditions of beauty and harmony are fulfilled.

  1. A Statist Political Economy and High Demand for Education in South Korea

    Directory of Open Access Journals (Sweden)

    Ki Su Kim

    1999-06-01

    In the 1998 academic year, 84 percent of South Korea's high school "leavers" entered a university or college, while almost all children went on to high school. That is to say, South Korea is now moving into a new age of universal higher education. Even so, competition for university entrance remains intense. What is interesting here is South Koreans' unusually high demand for education. In this article, I criticize the existing cultural and socio-economic interpretations of the phenomenon. Instead, I explore a new interpretation by critically referring to the recent political economy debate on South Korea's state-society/market relationship. In my interpretation, the unusually high demand for education is largely due to the powerful South Korean state's losing flexibility in the management of its "developmental" policies. For this, I blame the traditional "personalist ethic" which still prevails as the

  2. An Interactive Computer Tool for Teaching About Desalination and Managing Water Demand in the US

    Science.gov (United States)

    Ziolkowska, J. R.; Reyes, R.

    2016-12-01

    This paper presents an interactive tool to geospatially and temporally analyze desalination developments and trends in the US over the period 1950-2013, their current contribution to satisfying water demands, and their future potential. The computer tool is open access and can be used by any user with an Internet connection, thus facilitating interactive learning about water resources. The tool can also be used by stakeholders and policy makers for decision-making support and for designing sustainable water management strategies. Desalination technology has been acknowledged as a solution for sustainable management of water demand stemming from many sectors, including municipalities, industry, agriculture, power generation, and other users. Desalination has been applied successfully in the US and many countries around the world since the 1950s. As of 2013, around 1,336 desalination plants were operating in the US alone, with a daily production capacity of 2 BGD (billion gallons per day) (GWI, 2013). Despite a steady increase in the number of new desalination plants and growing production capacity, in many regions the costs of desalination are still prohibitive. At the same time, the technology offers a tremendous potential for `enormous supply expansion that exceeds all likely demands' (Chowdhury et al., 2013). The model and tool are based on data from Global Water Intelligence (GWI, 2013). The analysis shows that more than 90% of all the plants in the US are small-scale plants with a capacity below 4.31 MGD. Most of the plants (and especially the larger plants) are located on the US East Coast, as well as in California, Texas, Oklahoma, and Florida. The models and the tool provide information about the economic feasibility of potential new desalination plants based on access to feed water, energy sources, water demand, and the experiences of other plants in the region.

  3. A Computational Fluid Dynamics Algorithm on a Massively Parallel Computer

    Science.gov (United States)

    Jespersen, Dennis C.; Levit, Creon

    1989-01-01

    The discipline of computational fluid dynamics is demanding ever-increasing computational power to deal with complex fluid flow problems. We investigate the performance of a finite-difference computational fluid dynamics algorithm on a massively parallel computer, the Connection Machine. Of special interest is an implicit time-stepping algorithm; to obtain maximum performance from the Connection Machine, it is necessary to use a nonstandard algorithm to solve the linear systems that arise in the implicit algorithm. We find that the Connection Machine can achieve very high computation rates on both explicit and implicit algorithms. The performance of the Connection Machine puts it in the same class as today's most powerful conventional supercomputers.
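
    To make concrete why the implicit scheme brings a linear solve into every time step (the aspect the abstract highlights), here is a toy one-dimensional diffusion step in both explicit and implicit form; the dense direct solve is for clarity only and stands in for the nonstandard, data-parallel solver used on the Connection Machine.

        # Toy 1-D heat equation step: explicit update vs. implicit (backward Euler)
        # update, which requires solving a linear system each step.
        import numpy as np

        n, dx, dt, nu = 50, 1.0 / 50, 0.001, 0.1
        u = np.sin(np.pi * np.linspace(0, 1, n))     # initial condition
        lam = nu * dt / dx**2

        # Explicit step: purely local update, trivially parallel but stability-limited.
        u_explicit = u.copy()
        u_explicit[1:-1] = u[1:-1] + lam * (u[2:] - 2 * u[1:-1] + u[:-2])

        # Implicit step: solve a tridiagonal system (I - lam * L) u_new = u.
        A = np.eye(n) * (1 + 2 * lam)
        A += np.diag([-lam] * (n - 1), 1) + np.diag([-lam] * (n - 1), -1)
        A[0, :], A[-1, :] = 0, 0
        A[0, 0] = A[-1, -1] = 1.0                    # fixed boundary values
        u_implicit = np.linalg.solve(A, u)

        print("explicit max:", u_explicit.max(), " implicit max:", u_implicit.max())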

  4. Computing in high energy physics

    International Nuclear Information System (INIS)

    Hertzberger, L.O.; Hoogland, W.

    1986-01-01

    This book deals with advanced computing applications in physics, and in particular in high energy physics environments. The main subjects covered are networking; vector and parallel processing; and embedded systems. Also examined are topics such as operating systems, future computer architectures and commercial computer products. The book presents solutions that are foreseen as coping, in the future, with computing problems in experimental and theoretical High Energy Physics. In the experimental environment the large amounts of data to be processed offer special problems on-line as well as off-line. For on-line data reduction, embedded special purpose computers, which are often used for trigger applications are applied. For off-line processing, parallel computers such as emulator farms and the cosmic cube may be employed. The analysis of these topics is therefore a main feature of this volume

  5. Optical Computers and Space Technology

    Science.gov (United States)

    Abdeldayem, Hossin A.; Frazier, Donald O.; Penn, Benjamin; Paley, Mark S.; Witherow, William K.; Banks, Curtis; Hicks, Rosilen; Shields, Angela

    1995-01-01

    The rapidly increasing demand for greater speed and efficiency on the information superhighway requires significant improvements over conventional electronic logic circuits. Optical interconnections and optical integrated circuits are strong candidates to provide a way out of the extreme limitations imposed on the growth of speed and complexity of today's computations by conventional electronic logic circuits. The new optical technology has increased the demand for high quality optical materials. NASA's recent involvement in processing optical materials in space has demonstrated that a new and unique class of high quality optical materials is processible in a microgravity environment. Microgravity processing can induce improved order in these materials and could have a significant impact on the development of optical computers. We will discuss NASA's role in processing these materials and report on some of the associated nonlinear optical properties which are quite useful for optical computer technology.

  6. A Heterogeneous High-Performance System for Computational and Computer Science

    Science.gov (United States)

    2016-11-15

    expand the research infrastructure at the institution but also to enhance the high-performance computing training provided to both undergraduate and... cloud computing, supercomputing, and the availability of cheap memory and storage led to enormous amounts of data to be sifted through in forensic... High-Performance Computing (HPC) tools that can be integrated with existing curricula and support our research to modernize and dramatically advance

  7. High energy physics computing in Japan

    International Nuclear Information System (INIS)

    Watase, Yoshiyuki

    1989-01-01

    A brief overview of the computing provision for high energy physics in Japan is presented. Most of the computing power for high energy physics is concentrated in KEK. Here there are two large scale systems: one providing a general computing service including vector processing and the other dedicated to TRISTAN experiments. Each university group has a smaller sized mainframe or VAX system to facilitate both their local computing needs and the remote use of the KEK computers through a network. The large computer system for the TRISTAN experiments is described. An overview of a prospective future large facility is also given. (orig.)

  8. High-performance computing in seismology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  9. Lightweight Provenance Service for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Dai, Dong; Chen, Yong; Carns, Philip; Jenkins, John; Ross, Robert

    2017-09-09

    Provenance describes detailed information about the history of a piece of data, containing the relationships among elements such as users, processes, jobs, and workflows that contribute to the existence of data. Provenance is key to supporting many data management functionalities that are increasingly important in operations such as identifying data sources, parameters, or assumptions behind a given result; auditing data usage; or understanding details about how inputs are transformed into outputs. Despite its importance, however, provenance support is largely underdeveloped in highly parallel architectures and systems. One major challenge is the demanding requirements of providing provenance service in situ. The need to remain lightweight and to be always on often conflicts with the need to be transparent and offer an accurate catalog of details regarding the applications and systems. To tackle this challenge, we introduce a lightweight provenance service, called LPS, for high-performance computing (HPC) systems. LPS leverages a kernel instrument mechanism to achieve transparency and introduces representative execution and flexible granularity to capture comprehensive provenance with controllable overhead. Extensive evaluations and use cases have confirmed its efficiency and usability. We believe that LPS can be integrated into current and future HPC systems to support a variety of data management needs.
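
    As a generic illustration of the kind of lineage information such a service captures (not LPS's actual record format), the sketch below assembles a provenance record linking a user, a job, a process, its inputs and outputs, and the parameters behind a result.

        # Generic sketch of a provenance record: which process, run by which user as
        # part of which job, turned which inputs into which outputs, and under which
        # parameters. This is an illustration, not LPS's actual record format.
        import json
        import time

        def provenance_record(user, job_id, process, inputs, outputs, params):
            return {
                "timestamp": time.time(),
                "user": user,
                "job_id": job_id,
                "process": process,
                "inputs": inputs,      # data the process read
                "outputs": outputs,    # data the process wrote
                "params": params,      # parameters/assumptions behind the result
            }

        record = provenance_record(
            user="alice",
            job_id="slurm-884213",               # hypothetical job identifier
            process="reconstruct.py",            # hypothetical analysis step
            inputs=["/scratch/run42/raw.h5"],
            outputs=["/scratch/run42/tracks.h5"],
            params={"threshold": 0.7},
        )
        print(json.dumps(record, indent=2))  # would be appended to a provenance store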

  10. From the Web to the Grid and beyond computing paradigms driven by high-energy physics

    CERN Document Server

    Carminati, Federico; Galli-Carminati, Giuliana

    2012-01-01

    Born after World War II, large-scale experimental high-energy physics (HEP) has found itself limited ever since by available accelerator, detector and computing technologies. Accordingly, HEP has made significant contributions to the development of these fields, more often than not driving their innovations. The invention of the World Wide Web at CERN is merely the best-known example out of many. This book is the first comprehensive account to trace the history of this pioneering spirit in the field of computing technologies. It covers everything up to and including the present-day handling of the huge demands imposed upon grid and distributed computing by full-scale LHC operations - operations which have for years involved many thousands of collaborating members worldwide and accordingly provide the original and natural testbed for grid computing concepts. This book takes the reader on a guided tour encompassing all relevant topics, including programming languages, software engineering, large databases, the ...

  11. Cloud Computing-An Ultimate Technique to Minimize Computing cost for Developing Countries

    OpenAIRE

    Narendra Kumar; Shikha Jain

    2012-01-01

    The paper discusses how remotely managed computing and IT resources can benefit developing countries such as India and other countries of the Asian subcontinent. It not only defines the architectures and functionalities of cloud computing but also argues that there is a strong current demand for cloud computing to deliver organizational and personal IT support at very low cost and with a high degree of flexibility. The power of cloud can be used to reduce the cost of IT - r...

  12. Textbook Factor Demand Curves.

    Science.gov (United States)

    Davis, Joe C.

    1994-01-01

    Maintains that teachers and textbook graphics follow the same basic pattern in illustrating changes in demand curves when product prices increase. Asserts that the use of computer graphics will enable teachers to be more precise in their graphic presentation of price elasticity. (CFR)

  13. Development of a small-scale computer cluster

    Science.gov (United States)

    Wilhelm, Jay; Smith, Justin T.; Smith, James E.

    2008-04-01

    An increase in demand for computing power in academia has created a need for high-performance machines. The computing power of a single processor has been steadily increasing but lags behind the demand for fast simulations. Since a single processor has hard limits on its performance, a cluster of computers, with the proper software, can multiply the performance of a single machine. Cluster computing has therefore become a much sought-after technology. Typical desktop computers could be used for cluster computing, but they are not intended for constant full-speed operation and take up more space than rack-mount servers. Specialty computers that are designed to be used in clusters meet high availability and space requirements, but can be costly. A market segment exists where custom-built desktop computers can be arranged in a rack-mount configuration, gaining the space savings of traditional rack-mount computers while remaining cost effective. To explore these possibilities, an experiment was performed to develop a computing cluster using desktop components for the purpose of decreasing the computation time of advanced simulations. This study indicates that a small-scale cluster built from off-the-shelf components can multiply the performance of a single desktop machine while minimizing occupied space and remaining cost effective.

  14. High performance computing in power and energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Khaitan, Siddhartha Kumar [Iowa State Univ., Ames, IA (United States); Gupta, Anshul (eds.) [IBM Watson Research Center, Yorktown Heights, NY (United States)

    2013-07-01

    The twin challenge of meeting global energy demands in the face of growing economies and populations while restricting greenhouse gas emissions is one of the most daunting that humanity has ever faced. Smart electrical generation and distribution infrastructure will play a crucial role in meeting these challenges. We need to develop the capability to handle large volumes of data generated by power system components such as PMUs, DFRs and other data acquisition devices, as well as the capacity to process these data at high resolution via multi-scale and multi-period simulations, cascading and security analysis, interaction between hybrid systems (electric, transport, gas, oil, coal, etc.) and so on, in order to extract meaningful information in real time and ensure a secure, reliable and stable power grid. Advanced research on the development and implementation of market-ready, leading-edge, high-speed enabling technologies and algorithms for solving real-time, dynamic, resource-critical problems will be required for dynamic security analysis targeted towards successful implementation of Smart Grid initiatives. This book aims to bring together some of the latest research developments as well as thoughts on the future research directions of high performance computing applications in electric power systems planning, operations, security, markets, and grid integration of alternate sources of energy.

  15. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Science.gov (United States)

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact on not only scientific research in advanced organizations but also computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  16. Computing for particle physics. Report of the HEPAP subpanel on computer needs for the next decade

    International Nuclear Information System (INIS)

    1985-08-01

    The increasing importance of computation to the future progress in high energy physics is documented. Experimental computing demands are analyzed for the near future (four to ten years). The computer industry's plans for the near term and long term are surveyed as they relate to the solution of high energy physics computing problems. This survey includes large processors and the future role of alternatives to commercial mainframes. The needs for low speed and high speed networking are assessed, and the need for an integrated network for high energy physics is evaluated. Software requirements are analyzed. The role to be played by multiple processor systems is examined. The computing needs associated with elementary particle theory are briefly summarized. Computing needs associated with the Superconducting Super Collider are analyzed. Recommendations are offered for expanding computing capabilities in high energy physics and for networking between the laboratories

  17. Towards Portable Large-Scale Image Processing with High-Performance Computing.

    Science.gov (United States)

    Huo, Yuankai; Blaber, Justin; Damon, Stephen M; Boyd, Brian D; Bao, Shunxing; Parvathaneni, Prasanna; Noguera, Camilo Bermudez; Chaganti, Shikha; Nath, Vishwesh; Greer, Jasmine M; Lyu, Ilwoo; French, William R; Newton, Allen T; Rogers, Baxter P; Landman, Bennett A

    2018-05-03

    High-throughput, large-scale medical image computing demands tight integration of high-performance computing (HPC) infrastructure for data storage, job distribution, and image processing. The Vanderbilt University Institute for Imaging Science (VUIIS) Center for Computational Imaging (CCI) has constructed a large-scale image storage and processing infrastructure that is composed of (1) a large-scale image database using the eXtensible Neuroimaging Archive Toolkit (XNAT), (2) a content-aware job scheduling platform using the Distributed Automation for XNAT pipeline automation tool (DAX), and (3) a wide variety of encapsulated image processing pipelines called "spiders." The VUIIS CCI medical image data storage and processing infrastructure has housed and processed nearly half a million medical image volumes with the Vanderbilt Advanced Computing Center for Research and Education (ACCRE), the HPC facility at Vanderbilt University. The system was initially deployed natively (i.e., installed directly on bare-metal servers) within the ACCRE hardware and software environments, which led to issues of portability and sustainability. First, it could be laborious to deploy the entire VUIIS CCI medical image data storage and processing infrastructure to another HPC center with varying hardware infrastructure, library availability, and software permission policies. Second, the spiders were not developed in an isolated manner, which has led to software dependency issues during system upgrades or remote software installation. To address these issues, we describe recent innovations that use containerization techniques with XNAT/DAX to isolate the VUIIS CCI medical image data storage and processing infrastructure from the underlying hardware and software environments. The newly presented XNAT/DAX solution has the following new features: (1) multi-level portability from system level to the application level, (2) flexible and dynamic software

  18. INSPIRED High School Computing Academies

    Science.gov (United States)

    Doerschuk, Peggy; Liu, Jiangjiang; Mann, Judith

    2011-01-01

    If we are to attract more women and minorities to computing we must engage students at an early age. As part of its mission to increase participation of women and underrepresented minorities in computing, the Increasing Student Participation in Research Development Program (INSPIRED) conducts computing academies for high school students. The…

  19. Dramatic Demand Reduction In The Desert Southwest

    Energy Technology Data Exchange (ETDEWEB)

    Boehm, Robert [Univ. of Nevada, Las Vegas, NV (United States); Hsieh, Sean [Univ. of Nevada, Las Vegas, NV (United States); Lee, Joon [Univ. of Nevada, Las Vegas, NV (United States); Baghzouz, Yahia [Univ. of Nevada, Las Vegas, NV (United States); Cross, Andrew [Univ. of Nevada, Las Vegas, NV (United States); Chatterjee, Sarah [NV Energy, Las Vegas, NV (United States)

    2015-07-06

    This report summarizes a project awarded to the University of Nevada Las Vegas (UNLV), with subcontractors Pulte Homes and NV Energy. The project was motivated by the fact that locations in the Desert Southwest portion of the US exhibit very high peak electrical demands, typically in the late afternoons in the summer. These high demands often require high-priced power, and the large loads can cause grid supply problems. An approach was proposed through this contract to reduce peak electrical demand to an anticipated 65% of that of code-built houses of similar size. Energy reduction was to be achieved through four approaches applied to a development of 185 homes in the northwest part of Las Vegas named Villa Trieste. First, the homes would all be highly energy efficient. Second, each house would have a PV array installed on it. Third, an advanced demand response technique would be developed to allow the resident some control over the energy used. Finally, some type of battery storage would be used in the project. Pulte Homes designed the houses. The company considered initial cost vs. long-term savings and chose options that had relatively short paybacks. HERS (Home Energy Rating System) ratings for the homes are approximately 43; on this scale, code-built homes rate 100, zero-energy homes rate 0, and Energy Star homes rate 85. In addition, a 1.764 Wp (peak Watt) rated PV array was used on each house, made up of solar shakes that were in visual harmony with the roofing material. A demand response tool was developed to control the amount of electricity used during times of peak demand. While demand response techniques have been used in the utility industry for some time, this particular approach is designed to allow the customer to decide the degree of participation in the response activity. The temperature change in the residence can be decided by the residents by

  20. Implementation of the Principal Component Analysis onto High-Performance Computer Facilities for Hyperspectral Dimensionality Reduction: Results and Comparisons

    Directory of Open Access Journals (Sweden)

    Ernestina Martel

    2018-06-01

    Full Text Available Dimensionality reduction represents a critical preprocessing step for increasing the efficiency and performance of many hyperspectral imaging algorithms. However, dimensionality reduction algorithms such as Principal Component Analysis (PCA) suffer from a computationally demanding nature, making it advisable to implement them on high-performance computer architectures for applications under strict latency constraints. This work presents the implementation of the PCA algorithm on two different high-performance devices, namely an NVIDIA Graphics Processing Unit (GPU) and a Kalray manycore, uncovering a highly valuable set of tips and tricks for taking full advantage of the inherent parallelism of these high-performance computing platforms and hence reducing the time required to process a given hyperspectral image. Moreover, the results achieved with different hyperspectral images have been compared with those obtained with a recently published field programmable gate array (FPGA)-based implementation of the PCA algorithm, providing, for the first time in the literature, a comprehensive analysis that highlights the pros and cons of each option.
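
    As a point of reference for the computation being accelerated above, the following is a minimal NumPy sketch of PCA-based dimensionality reduction of a hyperspectral cube. It is not the implementation evaluated in the record; the cube dimensions, the number of retained components and the use of an eigendecomposition of the band covariance matrix are illustrative assumptions.

```python
import numpy as np

def pca_reduce(cube, n_components=10):
    """Reduce the spectral dimension of a hyperspectral cube (rows x cols x bands)
    with PCA, keeping only the first n_components principal components."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)     # one spectrum per pixel
    X -= X.mean(axis=0)                                 # centre each band
    # Eigendecomposition of the (small) bands x bands covariance matrix
    cov = (X.T @ X) / (X.shape[0] - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)              # ascending eigenvalues
    order = np.argsort(eigvals)[::-1][:n_components]    # top components
    scores = X @ eigvecs[:, order]                      # project every pixel
    return scores.reshape(rows, cols, n_components)

# Synthetic 100 x 100 pixel, 224-band cube (illustrative size only)
cube = np.random.rand(100, 100, 224)
reduced = pca_reduce(cube, n_components=10)
print(reduced.shape)   # (100, 100, 10)
```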

  1. Cloud Computing Organizational Benefits : A Managerial concern

    OpenAIRE

    Mandala, Venkata Bhaskar Reddy; Chandra, Marepalli Sharat

    2012-01-01

    Context: The software industry is looking for new methods and opportunities to reduce project management problems and operational costs. The Cloud Computing concept provides answers to these problems. Cloud Computing is made possible by the availability of high internet bandwidth and provides a wide range of services to a varied customer base. Cloud Computing has some key elements such as on-demand services, a large pool of configurable computing resources and minimal man...

  2. BEAGLE: an application programming interface and high-performance computing library for statistical phylogenetics.

    Science.gov (United States)

    Ayres, Daniel L; Darling, Aaron; Zwickl, Derrick J; Beerli, Peter; Holder, Mark T; Lewis, Paul O; Huelsenbeck, John P; Ronquist, Fredrik; Swofford, David L; Cummings, Michael P; Rambaut, Andrew; Suchard, Marc A

    2012-01-01

    Phylogenetic inference is fundamental to our understanding of most aspects of the origin and evolution of life, and in recent years, there has been a concentration of interest in statistical approaches such as Bayesian inference and maximum likelihood estimation. Yet, for large data sets and realistic or interesting models of evolution, these approaches remain computationally demanding. High-throughput sequencing can yield data for thousands of taxa, but scaling to such problems using serial computing often necessitates the use of nonstatistical or approximate approaches. The recent emergence of graphics processing units (GPUs) provides an opportunity to leverage their excellent floating-point computational performance to accelerate statistical phylogenetic inference. A specialized library for phylogenetic calculation would allow existing software packages to make more effective use of available computer hardware, including GPUs. Adoption of a common library would also make it easier for other emerging computing architectures, such as field programmable gate arrays, to be used in the future. We present BEAGLE, an application programming interface (API) and library for high-performance statistical phylogenetic inference. The API provides a uniform interface for performing phylogenetic likelihood calculations on a variety of compute hardware platforms. The library includes a set of efficient implementations and can currently exploit hardware including GPUs using NVIDIA CUDA, central processing units (CPUs) with Streaming SIMD Extensions and related processor supplementary instruction sets, and multicore CPUs via OpenMP. To demonstrate the advantages of a common API, we have incorporated the library into several popular phylogenetic software packages. The BEAGLE library is free open source software licensed under the Lesser GPL and available from http://beagle-lib.googlecode.com. An example client program is available as public domain software.
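
    To make concrete what a 'phylogenetic likelihood calculation' involves, here is a small, self-contained sketch of Felsenstein's pruning recursion for a single alignment column under the Jukes-Cantor model. It deliberately does not use the BEAGLE API; the toy tree, branch lengths and observed bases are invented for illustration, and a real analysis repeats this recursion over many sites, models and proposed trees, which is exactly the work BEAGLE offloads to GPUs and SIMD hardware.

```python
import numpy as np

def jc69(t):
    """Jukes-Cantor transition probability matrix for a branch of length t."""
    same = 0.25 + 0.75 * np.exp(-4.0 * t / 3.0)
    diff = 0.25 - 0.25 * np.exp(-4.0 * t / 3.0)
    return np.full((4, 4), diff) + np.eye(4) * (same - diff)

def partials(node, column):
    """Conditional likelihoods at a node for one alignment column.
    A leaf is a taxon name (str); an internal node is a list of (child, branch_length)."""
    if isinstance(node, str):
        vec = np.zeros(4)
        vec["ACGT".index(column[node])] = 1.0     # observed base at the leaf
        return vec
    vec = np.ones(4)
    for child, length in node:
        vec *= jc69(length) @ partials(child, column)
    return vec

# Toy tree ((A:0.1, B:0.1):0.05, C:0.2) and one observed column
tree = [([("A", 0.1), ("B", 0.1)], 0.05), ("C", 0.2)]
column = {"A": "A", "B": "A", "C": "G"}
site_likelihood = 0.25 * partials(tree, column).sum()   # equal base frequencies
print(site_likelihood)
```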

  3. High performance computing in Windows Azure cloud

    OpenAIRE

    Ambruš, Dejan

    2013-01-01

    High performance, security, availability, scalability, flexibility and lower maintenance costs have all contributed to the growing popularity of cloud computing in all spheres of life, especially in business. In fact, cloud computing offers even more than this. Using virtual computing clusters, a runtime environment for high-performance computing can also be implemented efficiently in a cloud. There are many advantages but also some disadvantages of cloud computing, some ...

  4. High-performance computing — an overview

    Science.gov (United States)

    Marksteiner, Peter

    1996-08-01

    An overview of high-performance computing (HPC) is given. Different types of computer architectures used in HPC are discussed: vector supercomputers, high-performance RISC processors, various parallel computers like symmetric multiprocessors, workstation clusters, massively parallel processors. Software tools and programming techniques used in HPC are reviewed: vectorizing compilers, optimization and vector tuning, optimization for RISC processors; parallel programming techniques like shared-memory parallelism, message passing and data parallelism; and numerical libraries.

  5. COMPUTING THE VOCABULARY DEMANDS OF L2 READING

    Directory of Open Access Journals (Sweden)

    Tom Cobb

    2007-02-01

    Full Text Available Linguistic computing can make two important contributions to second language (L2 reading instruction. One is to resolve longstanding research issues that are based on an insufficiency of data for the researcher, and the other is to resolve related pedagogical problems based on insufficiency of input for the learner. The research section of the paper addresses the question of whether reading alone can give learners enough vocabulary to read. When the computer’s ability to process large amounts of both learner and linguistic data is applied to this question, it becomes clear that, for the vast majority of L2 learners, free or wide reading alone is not a sufficient source of vocabulary knowledge for reading. But computer processing also points to solutions to this problem. Through its ability to reorganize and link documents, the networked computer can increase the supply of vocabulary input that is available to the learner. The development section of the paper elaborates a principled role for computing in L2 reading pedagogy, with examples, in two broad areas, computer-based text design and computational enrichment of undesigned texts.
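
    The kind of computation the paper relies on, a coverage profile of a text against a known-word list, can be sketched in a few lines. The word list and sample sentence below are invented; a real profile would use frequency-band lists of word families rather than this small set of tokens.

```python
import re
from collections import Counter

def coverage(text, known_words):
    """Fraction of running words (tokens) in a text covered by a known-word list,
    plus a tally of the tokens that fall outside it."""
    tokens = re.findall(r"[a-z']+", text.lower())
    covered = sum(1 for tok in tokens if tok in known_words)
    unknown = Counter(tok for tok in tokens if tok not in known_words)
    return covered / len(tokens), unknown

# Toy 'known vocabulary'; in practice this would be a loaded frequency-band word list
known = {"the", "a", "of", "to", "and", "learner", "reads", "new", "words",
         "in", "this", "text"}
sample = "The learner reads new words in this text; the text repeats the new words."
cov, unknown = coverage(sample, known)
print(f"coverage: {cov:.0%}; unknown tokens: {dict(unknown)}")
```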

  6. Competition with supply and demand functions

    International Nuclear Information System (INIS)

    Bolle, F.

    2001-01-01

    If economic agents have to determine in advance their supply or demand in reaction to different market prices we may assume that their strategic instruments are supply or demand functions. The best examples for such markets are the spot markets for electricity in England and Wales, in Chile, in New Zealand, in Scandinavia and perhaps elsewhere. A further example is computerized trading in stock markets, financial markets, or commodity exchanges. The functional form of equilibria is explicitly determined in this paper. Under a certain condition, equilibria exist for every finite spread of (stochastic) autonomous demand, i.e. demand from small, non-strategically acting consumers. Contrary to competition with supply functions alone, however, there is no tendency for market prices to converge to 0 if the spread of autonomous demand increases infinitely. Lower bounds of market prices can be computed instead
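
    As a minimal illustration of the market mechanism discussed above, the sketch below finds the market-clearing price once suppliers have committed to supply functions and autonomous demand is realized; it does not compute the strategic equilibrium itself. The linear supply functions and the demand level are assumptions for the example.

```python
from typing import Callable, List

def clearing_price(supplies: List[Callable[[float], float]], demand: float,
                   p_lo: float = 0.0, p_hi: float = 1e4, tol: float = 1e-6) -> float:
    """Find the price at which aggregate supply meets inelastic autonomous demand,
    assuming each committed supply function is non-decreasing in price."""
    def excess(p: float) -> float:
        return sum(s(p) for s in supplies) - demand
    for _ in range(200):                       # simple bisection on the excess supply
        mid = 0.5 * (p_lo + p_hi)
        if excess(mid) < 0:
            p_lo = mid
        else:
            p_hi = mid
        if p_hi - p_lo < tol:
            break
    return 0.5 * (p_lo + p_hi)

# Two generators bidding linear supply functions q_i = a_i * p (illustrative numbers)
supplies = [lambda p: 2.0 * p, lambda p: 3.0 * p]
print(clearing_price(supplies, demand=100.0))   # ~20.0
```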

  7. Biomass for electricity in the EU-27: Potential demand, CO2 abatements and breakeven prices for co-firing

    International Nuclear Information System (INIS)

    Bertrand, Vincent; Dequiedt, Benjamin; Le Cadre, Elodie

    2014-01-01

    This paper analyses the potential of biomass-based electricity in the EU-27 countries, and interactions with climate policy and the EU ETS. We estimate the potential biomass demand from the existing power plants, and we match our estimates with the potential biomass supply in Europe. Furthermore, we compute the CO2 abatement associated with the co-firing opportunities in European coal plants. We find that the biomass demand from the power sector may be very high compared with potential supply. We also identify that co-firing can produce high volumes of CO2 abatements, which may be two times larger than those of coal-to-gas fuel switching. We also compute biomass and CO2 breakeven prices for co-firing. Results indicate that biomass-based electricity remains profitable with high biomass prices when the carbon price is high: a Euros 16–24 (25–35, respectively) biomass price per MWh of primary energy for a Euros 20 (50, respectively) carbon price. Hence, the carbon price appears as an important driver, which can make profitable a high share of the potential biomass demand from the power sector, even with high biomass prices. This aims to gain insights on how the biomass market may be impacted by the EU ETS and other climate policies. - Highlights: • Technical potential of biomass (demand and CO2 abatement) in European electricity. • Calculation for co-firing and biomass power plants; comparison with potential biomass supply in EU-27 countries. • Calculation of biomass and CO2 breakeven prices for co-firing. • Potential demand is 8–148% of potential supply (up to 80% of demand from co-firing). • High potential abatement from co-firing (up to 365 Mt/yr). • Profitable co-firing with a €16–24 (25–35) biomass price for a €20 (50) CO2 price
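
    A simplified reading of the breakeven logic is that co-firing biomass in an existing coal plant displaces coal one-for-one in primary energy, so the biomass breakeven price equals the coal fuel cost plus the avoided allowance cost. The sketch below uses that simplification with invented coal prices and an assumed coal emission factor; it ignores efficiency penalties, transport and handling costs that the paper's full calculation would include.

```python
def breakeven_biomass_price(coal_price, co2_price, emission_factor=0.34):
    """Biomass breakeven price per MWh of primary energy for co-firing in an
    existing coal plant, under the simplification that biomass displaces coal
    one-for-one in primary energy and carries no allowance cost.
    emission_factor: tonnes of CO2 per MWh of primary coal energy (assumed value)."""
    return coal_price + co2_price * emission_factor

# Illustrative fuel and carbon prices (not taken from the paper):
print(breakeven_biomass_price(coal_price=10.0, co2_price=20.0))   # 16.8 EUR/MWh_prim
print(breakeven_biomass_price(coal_price=10.0, co2_price=50.0))   # 27.0 EUR/MWh_prim
```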

  8. Exact Fill Rates for the (R, S) Inventory Control with Discrete Distributed Demands for the Backordering Case

    Directory of Open Access Journals (Sweden)

    Eugenia BABILONI

    2012-01-01

    Full Text Available The fill rate is usually computed using the traditional approach, which calculates it as the complement of the quotient between the expected unfulfilled demand and the expected demand per replenishment cycle, rather than directly as the expected fraction of fulfilled demand. Furthermore, the available methods to estimate the fill rate apply only under specific demand conditions. This paper identifies the research gap in estimation procedures for computing the fill rate and suggests: (i) a new exact procedure to compute the traditional approximation for any discrete demand distribution; and (ii) a new method to compute the fill rate directly as the fraction of fulfilled demand for any discrete demand distribution. Simulation results show that the latter methods outperform the traditional approach, which underestimates the simulated fill rate, over different demand patterns. This paper focuses on the traditional periodic review, base stock system when backlogged demands are allowed.
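
    The difference between the two estimators can be shown on a single replenishment cycle with discrete demand. The sketch below compares the traditional complement-of-expected-shortage approximation with a direct expected-fraction-filled estimate; the demand distribution and the convention that zero-demand cycles count as fully served are illustrative assumptions, not the paper's exact procedures.

```python
def fill_rates(demand_pmf, stock):
    """Compare two fill-rate estimates for one replenishment cycle with on-hand
    `stock` and discrete demand given as {demand: probability}.
    Returns (traditional, direct)."""
    expected_demand = sum(d * p for d, p in demand_pmf.items())
    expected_shortage = sum(max(d - stock, 0) * p for d, p in demand_pmf.items())
    traditional = 1.0 - expected_shortage / expected_demand
    # Direct estimate: expected fraction of the cycle's demand that is served,
    # counting zero-demand cycles as fully served (one simple convention).
    direct = sum((min(d, stock) / d) * p for d, p in demand_pmf.items() if d > 0)
    direct += sum(p for d, p in demand_pmf.items() if d == 0)
    return traditional, direct

# Illustrative discrete demand distribution for one cycle
pmf = {0: 0.1, 1: 0.2, 2: 0.3, 3: 0.2, 4: 0.1, 5: 0.1}
print(fill_rates(pmf, stock=3))   # the direct estimate comes out higher here
```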

  9. A highly efficient parallel algorithm for solving the neutron diffusion nodal equations on shared-memory computers

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Kirk, B.L.

    1990-01-01

    Modern parallel computer architectures offer an enormous potential for reducing CPU and wall-clock execution times of large-scale computations commonly performed in various applications in science and engineering. Recently, several authors have reported their efforts in developing and implementing parallel algorithms for solving the neutron diffusion equation on a variety of shared- and distributed-memory parallel computers. Testing of these algorithms for a variety of two- and three-dimensional meshes showed significant speedup of the computation. Even for very large problems (i.e., three-dimensional fine meshes) executed concurrently on a few nodes in serial (nonvector) mode, however, the measured computational efficiency is very low (40 to 86%). In this paper, the authors present a highly efficient (∼85 to 99.9%) algorithm for solving the two-dimensional nodal diffusion equations on the Sequent Balance 8000 parallel computer. Also presented is a model for the performance, represented by the efficiency, as a function of problem size and the number of participating processors. The model is validated through several tests and then extrapolated to larger problems and more processors to predict the performance of the algorithm in more computationally demanding situations
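
    The performance model mentioned above is not reproduced in the abstract, so the sketch below uses a generic fixed-overhead model of a domain-decomposed solver simply to illustrate how efficiency can be expressed as a function of problem size and processor count. The per-cell work and synchronisation constants are invented, not the paper's measured values.

```python
def efficiency(n_cells, n_procs, t_cell=1.0e-6, t_sync=5.0e-4):
    """Generic efficiency model: each processor works on n_cells / n_procs cells
    and pays a fixed per-iteration synchronisation overhead. Efficiency is the
    speedup over the serial time divided by the number of processors."""
    t_serial = n_cells * t_cell
    t_parallel = (n_cells / n_procs) * t_cell + t_sync
    speedup = t_serial / t_parallel
    return speedup / n_procs

# Efficiency drops as processors are added and rises again for larger problems
for p in (1, 2, 4, 8, 16):
    print(p, round(efficiency(n_cells=200_000, n_procs=p), 3))
```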

  10. From the web to the grid and beyond. Computing paradigms driven by high energy physics

    International Nuclear Information System (INIS)

    Brun, Rene; Carminati, Federico; Galli Carminati, Giuliana

    2012-01-01

    Born after World War II, large-scale experimental high-energy physics (HEP) has found itself limited ever since by available accelerator, detector and computing technologies. Accordingly, HEP has made significant contributions to the development of these fields, more often than not driving their innovations. The invention of the World Wide Web at CERN is merely the best-known example out of many. This book is the first comprehensive account to trace the history of this pioneering spirit in the field of computing technologies. It covers everything up to and including the present-day handling of the huge demands imposed upon grid and distributed computing by full-scale LHC operations - operations which have for years involved many thousands of collaborating members worldwide and accordingly provide the original and natural testbed for grid computing concepts. This book takes the reader on a guided tour encompassing all relevant topics, including programming languages, software engineering, large databases, the Web, and grid- and cloud computing. The important issue of intellectual property regulations for distributed software engineering and computing is also addressed. Aptly, the book closes with a visionary chapter of what may lie ahead. Approachable and requiring only basic understanding of physics and computer sciences, this book is intended for both education and research. (orig.)

  11. From the web to the grid and beyond. Computing paradigms driven by high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Brun, Rene; Carminati, Federico [European Organization for Nuclear Research (CERN), Geneva (Switzerland); Galli Carminati, Giuliana (eds.) [Hopitaux Universitaire de Geneve, Chene-Bourg (Switzerland). Unite de la Psychiatrie du Developpement Mental

    2012-07-01

    Born after World War II, large-scale experimental high-energy physics (HEP) has found itself limited ever since by available accelerator, detector and computing technologies. Accordingly, HEP has made significant contributions to the development of these fields, more often than not driving their innovations. The invention of the World Wide Web at CERN is merely the best-known example out of many. This book is the first comprehensive account to trace the history of this pioneering spirit in the field of computing technologies. It covers everything up to and including the present-day handling of the huge demands imposed upon grid and distributed computing by full-scale LHC operations - operations which have for years involved many thousands of collaborating members worldwide and accordingly provide the original and natural testbed for grid computing concepts. This book takes the reader on a guided tour encompassing all relevant topics, including programming languages, software engineering, large databases, the Web, and grid- and cloud computing. The important issue of intellectual property regulations for distributed software engineering and computing is also addressed. Aptly, the book closes with a visionary chapter of what may lie ahead. Approachable and requiring only basic understanding of physics and computer sciences, this book is intended for both education and research. (orig.)

  12. Integrated computer network high-speed parallel interface

    International Nuclear Information System (INIS)

    Frank, R.B.

    1979-03-01

    As the number and variety of computers within the Los Alamos Scientific Laboratory's Central Computer Facility grows, the need for a standard, high-speed intercomputer interface has become more apparent. This report details the development of a High-Speed Parallel Interface from the conceptual through implementation stages to meet current and future needs for large-scale network computing within the Integrated Computer Network. 4 figures

  13. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    Science.gov (United States)

    Hadjidoukas, P. E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-03-01

    We present Π4U, an extensible framework, for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models, that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.
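
    For orientation, the following is a stripped-down sketch of the Transitional MCMC sampler that the framework is built around: anneal from prior to posterior, pick each tempering increment from the spread of the importance weights, resample, and perturb each chain with a Metropolis step. The proposal scaling, the target coefficient of variation and the toy problem are assumptions; the actual Π4U implementation adds distributed scheduling, load balancing and further algorithmic refinements.

```python
import numpy as np

def tmcmc(log_likelihood, log_prior, prior_sample, n=2000, target_cov=1.0, seed=0):
    """Minimal Transitional MCMC: anneal from the prior (beta = 0) to the posterior
    (beta = 1), choosing each beta increment so that the coefficient of variation
    of the incremental importance weights stays near target_cov."""
    rng = np.random.default_rng(seed)
    theta = prior_sample(n)                                   # (n, dim) prior samples
    loglik = np.array([log_likelihood(t) for t in theta])
    beta = 0.0
    while beta < 1.0:
        # choose the next beta by bisection on the weight coefficient of variation
        lo, hi = beta, 1.0
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            w = np.exp((mid - beta) * (loglik - loglik.max()))
            if w.std() / w.mean() < target_cov:
                lo = mid
            else:
                hi = mid
        new_beta = hi
        w = np.exp((new_beta - beta) * (loglik - loglik.max()))
        w /= w.sum()
        # resample according to the weights, then one Metropolis step per chain
        idx = rng.choice(n, size=n, p=w)
        theta, loglik = theta[idx].copy(), loglik[idx].copy()
        prop_cov = 0.04 * np.cov(theta, rowvar=False) + 1e-10 * np.eye(theta.shape[1])
        for j in range(n):
            cand = rng.multivariate_normal(theta[j], prop_cov)
            cand_ll = log_likelihood(cand)
            log_ratio = new_beta * (cand_ll - loglik[j]) + log_prior(cand) - log_prior(theta[j])
            if np.log(rng.random()) < log_ratio:
                theta[j], loglik[j] = cand, cand_ll
        beta = new_beta
    return theta

# Toy problem: 2-D standard-normal likelihood, uniform prior on [-5, 5]^2
log_lik = lambda t: -0.5 * float(np.sum(t ** 2))
log_pri = lambda t: 0.0 if np.all(np.abs(t) <= 5.0) else -np.inf
prior = lambda n: np.random.default_rng(1).uniform(-5.0, 5.0, size=(n, 2))
samples = tmcmc(log_lik, log_pri, prior, n=1000)
print(samples.mean(axis=0), samples.std(axis=0))   # roughly [0, 0] and [1, 1]
```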

  14. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    International Nuclear Information System (INIS)

    Hadjidoukas, P.E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-01-01

    We present Π4U, 1 an extensible framework, for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models, that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow

  15. Underreporting on the MMPI-2-RF in a high-demand police officer selection context: an illustration.

    Science.gov (United States)

    Detrick, Paul; Chibnall, John T

    2014-09-01

    Positive response distortion is common in the high-demand context of employment selection. This study examined positive response distortion, in the form of underreporting, on the Minnesota Multiphasic Personality Inventory-2-Restructured Form (MMPI-2-RF). Police officer job applicants completed the MMPI-2-RF under high-demand and low-demand conditions, once during the preemployment psychological evaluation and once without contingencies after completing the police academy. Demand-related score elevations were evident on the Uncommon Virtues (L-r) and Adjustment Validity (K-r) scales. Underreporting was evident on the Higher-Order scales Emotional/Internalizing Dysfunction and Behavioral/Externalizing Dysfunction; 5 of 9 Restructured Clinical scales; 6 of 9 Internalizing scales; 3 of 4 Externalizing scales; and 3 of 5 Personality Psychopathology 5 scales. Regression analyses indicated that L-r predicted demand-related underreporting on behavioral/externalizing scales, and K-r predicted underreporting on emotional/internalizing scales. Select scales of the MMPI-2-RF are differentially associated with different types of underreporting among police officer applicants. PsycINFO Database Record (c) 2014 APA, all rights reserved.

  16. DOE research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-12-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models whose execution is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex; consequently, it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  17. Transactive Demand Side Management Programs in Smart Grids with High Penetration of EVs

    Directory of Open Access Journals (Sweden)

    Poria Hasanpor Divshali

    2017-10-01

    Full Text Available Due to environmental concerns, economic issues, and emerging new loads such as electric vehicles (EVs), the importance of demand side management (DSM) programs has increased in recent years. DSM programs using a dynamic real-time pricing (RTP) method can help to adaptively control electricity consumption. However, the existing RTP methods, particularly when they consider EVs and power system constraints, have many limitations, such as computational complexity and the need for centralized control. Therefore, a new transactive DSM program is proposed in this paper using an imperfect competition model with high EV penetration levels. In particular, a heuristic two-stage iterative method, considering the influence of decisions made independently by customers to minimize their own costs, is developed to find the market equilibrium quickly in a distributed manner. Simulations in the IEEE 37-bus system with 1141 customers and 670 EVs are performed to demonstrate the effectiveness of the proposed method. The results show that the proposed method can better manage the EVs and elastic appliances than the existing methods in terms of power constraints and cost. Also, the proposed method can solve the optimization problem quickly enough to run in real time.
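
    The paper's two-stage method is not spelled out in the abstract, so the sketch below illustrates only the general idea of a distributed, iterative price/response loop: customers (here, EVs) independently shift their charging to the cheapest hours, and the hourly price is updated from the resulting aggregate load until the profile stabilizes. The linear price rule, the damping factor and the load figures are invented for the example.

```python
import numpy as np

def schedule_load(energy_kwh, prices, max_kw):
    """One customer's selfish response: put the required energy into the cheapest
    hours first, capped at max_kw per hour."""
    load = np.zeros(len(prices))
    for h in np.argsort(prices):
        load[h] = min(max_kw, energy_kwh - load.sum())
        if load.sum() >= energy_kwh:
            break
    return load

def iterate_prices(customers, base_load, a=0.05, b=0.002, rounds=50):
    """Price update loop: the hourly price rises with total demand (toy linear
    supply curve p = a + b * demand); stop when the profile stops changing."""
    prices = np.full(len(base_load), a)
    total = base_load.copy()
    for _ in range(rounds):
        total = base_load + sum(schedule_load(e, prices, m) for e, m in customers)
        new_prices = a + b * total
        if np.allclose(new_prices, prices, atol=1e-6):
            break
        prices = 0.5 * prices + 0.5 * new_prices     # damping to aid convergence
    return prices, total

# 24-hour toy case: inflexible base load plus three EVs as (energy need, max charge rate)
base = np.array([30.0] * 7 + [60.0] * 12 + [40.0] * 5)
evs = [(20.0, 7.0), (15.0, 3.5), (30.0, 11.0)]
prices, total = iterate_prices(evs, base)
print(np.round(prices, 3))
```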

  18. Demand side resource operation on the Irish power system with high wind power penetration

    International Nuclear Information System (INIS)

    Keane, A.; Tuohy, A.; Meibom, P.; Denny, E.; Flynn, D.; Mullane, A.; O'Malley, M.

    2011-01-01

    The utilisation of demand side resources is set to increase over the coming years with the advent of advanced metering infrastructure, home area networks and the promotion of increased energy efficiency. Demand side resources are proposed as an energy resource that, through aggregation, can form part of the power system plant mix and contribute to the flexible operation of a power system. A model for demand side resources is proposed here that captures its key characteristics for commitment and dispatch calculations. The model is tested on the all island Irish power system, and the operation of the model is simulated over one year in both a stochastic and deterministic mode, to illustrate the impact of wind and load uncertainty. The results illustrate that demand side resources can contribute to the efficient, flexible operation of systems with high penetrations of wind by replacing some of the functions of conventional peaking plant. Demand side resources are also shown to be capable of improving the reliability of the system, with reserve capability identified as a key requirement in this respect. - Highlights: → Demand side resource model presented for use in unit commitment and dispatch calculations. → Benefits of demand side aggregation demonstrated specifically as a peaking unit and provider of reserve. → Potential to displace or defer construction of conventional peaking units.

  19. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  20. Exploring Tradeoffs in Demand-Side and Supply-Side Management of Urban Water Resources Using Agent-Based Modeling and Evolutionary Computation

    Directory of Open Access Journals (Sweden)

    Lufthansa Kanta

    2015-11-01

    Full Text Available Urban water supply systems may be managed through supply-side and demand-side strategies, which focus on water source expansion and demand reductions, respectively. Supply-side strategies bear infrastructure and energy costs, while demand-side strategies bear costs of implementation and inconvenience to consumers. To evaluate the performance of demand-side strategies, the participation and water use adaptations of consumers should be simulated. In this study, a Complex Adaptive Systems (CAS) framework is developed to simulate consumer agents that change their consumption to affect the withdrawal from the water supply system, which, in turn, influences operational policies and long-term resource planning. Agent-based models are encoded to represent consumers and a policy maker agent and are coupled with water resources system simulation models. The CAS framework is coupled with an evolutionary computation-based multi-objective methodology to explore tradeoffs in cost, inconvenience to consumers, and environmental impacts for both supply-side and demand-side strategies. Decisions are identified to specify storage levels in a reservoir that trigger: (1) increases in the volume of water pumped through inter-basin transfers from an external reservoir; and (2) drought stages, which restrict the volume of water allowed for residential outdoor uses. The proposed methodology is demonstrated for the Arlington, Texas, water supply system to identify non-dominated strategies for a historic drought decade. Results demonstrate that pumping costs associated with maximizing environmental reliability exceed pumping costs associated with minimizing restrictions on consumer water use.
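
    A minimal sketch of the agent coupling described above: consumer agents reduce outdoor use when a policy-maker agent declares drought stages based on reservoir storage. The thresholds, demands and reservoir parameters below are invented; the study's framework additionally wraps this kind of simulation in an evolutionary multi-objective search.

```python
import random

class Consumer:
    """A household agent: fixed indoor use plus outdoor use that drought stages curtail."""
    def __init__(self, indoor=0.3, outdoor=0.4):            # ML/day, illustrative
        self.indoor, self.outdoor = indoor, outdoor
    def demand(self, restriction):
        # restriction in [0, 1]: fraction of outdoor use disallowed by the current stage
        return self.indoor + self.outdoor * (1.0 - restriction)

def drought_stage(storage_frac):
    """Policy-maker agent: map reservoir storage fraction to an outdoor-use restriction."""
    if storage_frac > 0.6:
        return 0.0
    if storage_frac > 0.4:
        return 0.3
    if storage_frac > 0.2:
        return 0.6
    return 0.9

def simulate(days=365, capacity=200_000.0, inflow=300.0, n_consumers=1000, seed=1):
    """Daily reservoir mass balance with demand responding to drought stages."""
    random.seed(seed)
    consumers = [Consumer(outdoor=random.uniform(0.2, 0.6)) for _ in range(n_consumers)]
    storage = 0.8 * capacity
    for _ in range(days):
        restriction = drought_stage(storage / capacity)
        withdrawal = sum(c.demand(restriction) for c in consumers)
        storage = max(0.0, min(capacity, storage + inflow - withdrawal))
    return storage / capacity

print(f"storage after one year: {simulate():.2f} of capacity")
```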

  1. Worktime demands and work-family interference: Does worktime control buffer the adverse effects of high demands?

    NARCIS (Netherlands)

    Geurts, S.A.E.; Beckers, D.G.J.; Taris, T.W.; Kompier, M.A.J.; Smulders, P.G.W.

    2009-01-01

    This study examined whether worktime control buffered the impact of worktime demands on work-family interference (WFI), using data from 2,377 workers from various sectors of industry in The Netherlands. We distinguished among three types of worktime demands: time spent on work according to one's

  2. Computing in high-energy physics

    International Nuclear Information System (INIS)

    Mount, Richard P.

    2016-01-01

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Lastly, I describe recent developments aimed at improving the overall coherence of high-energy physics software

  3. Computing in high-energy physics

    Science.gov (United States)

    Mount, Richard P.

    2016-04-01

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Finally, I describe recent developments aimed at improving the overall coherence of high-energy physics software.

  4. Building highly available control system applications with Advanced Telecom Computing Architecture and open standards

    International Nuclear Information System (INIS)

    Kazakov, Artem; Furukawa, Kazuro

    2010-01-01

    Requirements for modern and future control systems for large projects like the International Linear Collider demand high availability of control system components. The telecom industry has recently produced an open hardware specification, the Advanced Telecom Computing Architecture (ATCA), aimed at better reliability, availability and serviceability. Since its first market appearance in 2004, the ATCA platform has shown tremendous growth, proved to be stable and is well represented by a number of vendors. ATCA is an industry standard for highly available systems. Complementing it, the Service Availability Forum (SAF), a consortium of leading communications and computing companies, describes the interaction between hardware and software. SAF defines a set of specifications such as the Hardware Platform Interface and the Application Interface Specification, which provide an extensive description of highly available systems, services and their interfaces. Originally aimed at telecom applications, these specifications can be used for accelerator controls software as well. This study describes the benefits of using these specifications and their possible adoption in accelerator control systems. It is demonstrated how the EPICS Redundant IOC was extended using the Hardware Platform Interface specification, making it possible to utilize the benefits of the ATCA platform.

  5. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book allows one to compare the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  6. Heuristic Scheduling in Grid Environments: Reducing the Operational Energy Demand

    Science.gov (United States)

    Bodenstein, Christian

    In a world where more and more businesses trade in online markets, the supply of online services to the ever-growing demand could quickly reach its capacity limits. Online service providers may find themselves maxed out at peak operation levels during high-traffic timeslots but facing too little demand during low-traffic timeslots, although the latter is becoming less frequent. At this point, deciding which user is allocated what level of service becomes essential. The concept of Grid computing could offer a meaningful alternative to conventional supercomputing centres. Not only can Grids reach the same computing speeds as some of the fastest supercomputers, but distributed computing also harbors great energy-saving potential. When scheduling projects in such a Grid environment, however, simply assigning processes to systems becomes so computationally complex that schedules are often produced too late to execute, rendering their optimizations useless. Current schedulers attempt to maximize utility under some constraint, often resorting to heuristics. This optimization often comes at the cost of environmental impact, in this case CO2 emissions. This work proposes an alternative model of energy-efficient scheduling that leaves a respectable amount of the economic incentives untouched. Using this model, it is possible to reduce the total energy consumed by a Grid environment using 'just-in-time' flowtime management, paired with ranking nodes by efficiency.
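
    The heuristic idea of ranking nodes by energy efficiency and letting jobs wait just long enough to run on an efficient node can be sketched as a greedy assignment. The node and job figures below are invented, and the energy accounting is deliberately crude; it is an illustration of the scheduling principle, not the model proposed in the work.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Node:
    name: str
    flops: float             # work units per hour
    watts: float              # power draw while busy
    busy_until: float = 0.0
    @property
    def efficiency(self) -> float:
        return self.flops / self.watts   # work per unit of energy (higher is better)

@dataclass
class Job:
    name: str
    work: float               # work units
    deadline: float           # hours from now

def schedule(jobs: List[Job], nodes: List[Node]) -> List[Tuple]:
    """Greedy energy-aware heuristic: try nodes from most to least efficient and take
    the first one that still finishes the job before its deadline, so a job may
    queue behind others on an efficient node rather than start at once on a wasteful one."""
    plan = []
    for job in sorted(jobs, key=lambda j: j.deadline):
        for node in sorted(nodes, key=lambda n: n.efficiency, reverse=True):
            finish = node.busy_until + job.work / node.flops
            if finish <= job.deadline:
                energy = (job.work / node.flops) * node.watts
                node.busy_until = finish
                plan.append((job.name, node.name, round(finish, 2), round(energy, 1)))
                break
        else:
            plan.append((job.name, None, None, None))   # deadline cannot be met
    return plan

nodes = [Node("gpu-box", flops=400, watts=300), Node("old-rack", flops=100, watts=250)]
jobs = [Job("simA", work=800, deadline=4.0), Job("simB", work=200, deadline=3.0),
        Job("simC", work=300, deadline=8.0)]
for row in schedule(jobs, nodes):
    print(row)
```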

  7. Advanced Certification Program for Computer Graphic Specialists. Final Performance Report.

    Science.gov (United States)

    Parkland Coll., Champaign, IL.

    A pioneer program in computer graphics was implemented at Parkland College (Illinois) to meet the demand for specialized technicians to visualize data generated on high performance computers. In summer 1989, 23 students were accepted into the pilot program. Courses included C programming, calculus and analytic geometry, computer graphics, and…

  8. High performance computing in linear control

    International Nuclear Information System (INIS)

    Datta, B.N.

    1993-01-01

    Remarkable progress has been made in both theory and applications of all important areas of control. The theory is rich and very sophisticated. Some beautiful applications of control theory are presently being made in aerospace, biomedical engineering, industrial engineering, robotics, economics, power systems, etc. Unfortunately, the same assessment of progress does not hold in general for computations in control theory. Control Theory is lagging behind other areas of science and engineering in this respect. Nowadays there is a revolution going on in the world of high performance scientific computing. Many powerful computers with vector and parallel processing have been built and have been available in recent years. These supercomputers offer very high speed in computations. Highly efficient software, based on powerful algorithms, has been developed to use on these advanced computers, and has also contributed to increased performance. While workers in many areas of science and engineering have taken great advantage of these hardware and software developments, control scientists and engineers, unfortunately, have not been able to take much advantage of these developments

  9. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    Energy Technology Data Exchange (ETDEWEB)

    Drewmark Communications; Sartor, Dale; Wilson, Mark

    2010-07-01

    High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

  10. Indonesia’s Electricity Demand Dynamic Modelling

    Science.gov (United States)

    Sulistio, J.; Wirabhuana, A.; Wiratama, M. G.

    2017-06-01

    Electricity systems modelling is one of the emerging areas in global energy policy studies. The System Dynamics approach and computer simulation have become common methods in energy systems planning and evaluation under many conditions. On the other hand, Indonesia is experiencing several major issues in its electricity system, such as fossil fuel domination, demand-supply imbalances, distribution inefficiency, and bio-devastation. This paper aims to explain the development of System Dynamics modelling approaches and computer simulation techniques for representing and predicting electricity demand in Indonesia. In addition, this paper also describes the typical characteristics and relationships of the commercial business sector, industrial sector, and family/domestic sector as electricity subsystems in Indonesia. Moreover, it presents direct structure, behavioural, and statistical tests as the model validation approach, and ends with conclusions.
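
    A minimal stock-and-flow sketch of how such a system-dynamics demand model is typically simulated: sectoral demand stocks grow through inflows driven by GDP and household growth via assumed elasticities, integrated with Euler steps. All coefficients below are illustrative and are not Indonesia's actual data.

```python
def simulate(years=20, dt=0.25):
    """Euler integration of a toy system-dynamics electricity demand model with
    three sectoral demand stocks (TWh per year) coupled to two growth drivers."""
    drivers = {"gdp_growth": 0.05, "household_growth": 0.015}          # per year, assumed
    demand = {"domestic": 40.0, "commercial": 25.0, "industrial": 35.0}  # TWh/yr, assumed
    elasticity = {                       # sensitivity of each sector to each driver (assumed)
        "domestic":   {"gdp_growth": 0.4, "household_growth": 1.0},
        "commercial": {"gdp_growth": 0.9, "household_growth": 0.2},
        "industrial": {"gdp_growth": 1.1, "household_growth": 0.0},
    }
    trajectory = [sum(demand.values())]
    for _ in range(int(years / dt)):
        for sector, d in demand.items():
            growth = sum(elasticity[sector][k] * g for k, g in drivers.items())
            demand[sector] = d + d * growth * dt        # stock += inflow * dt
        trajectory.append(sum(demand.values()))
    return trajectory

traj = simulate()
print(f"total demand: {traj[0]:.0f} -> {traj[-1]:.0f} TWh/yr over 20 years")
```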

  11. Do traditional male role norms modify the association between high emotional demands in work, and sickness absence?

    DEFF Research Database (Denmark)

    Labriola, Merete; Hansen, Claus D.; Lund, Thomas

    2011-01-01

    Objectives Ambulance workers are exposed to high levels of emotional demands, which could affect sickness absence. Being a male dominated occupation, it is hypothesised that ambulance workers adhere to more traditional male role norms than men in other occupations. The aim is to investigate if adherence to traditional male role norms modifies the effect of emotional demands on sickness absence/presenteeism. Methods Data derive from MARS (Men, accidents, risk and safety), a two-wave panel study of ambulance workers and fire fighters in Denmark (n = 2585). Information was collected from... Analysis showed that participants with high MRNI-score were more affected by emotional demands in terms of their mental health than participants with lower MRNI-score. Conclusions The study confirms the association between emotional demands and absenteeism, and furthermore showed that the effect...

  12. Computational Thinking and Practice - A Generic Approach to Computing in Danish High Schools

    DEFF Research Database (Denmark)

    Caspersen, Michael E.; Nowack, Palle

    2014-01-01

    Internationally, there is a growing awareness of the necessity of providing relevant computing education in schools, particularly high schools. We present a new and generic approach to Computing in Danish High Schools based on a conceptual framework derived from ideas related to computational thi...

  13. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high-performance computing are analyzed using bibliometric approaches. This study aims at providing computational physicists utilizing high-performance computing, as well as policy planners, with useful bibliometric results for an assessment of research activities. To achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles of the Scopus database from Elsevier covering the time period of 2004-2013. We ranked authors in the field of high-performance computational physics by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.

  14. Computational Thermodynamics and Kinetics-Based ICME Framework for High-Temperature Shape Memory Alloys

    Science.gov (United States)

    Arróyave, Raymundo; Talapatra, Anjana; Johnson, Luke; Singh, Navdeep; Ma, Ji; Karaman, Ibrahim

    2015-11-01

    Over the last decade, considerable interest in the development of High-Temperature Shape Memory Alloys (HTSMAs) for solid-state actuation has increased dramatically as key applications in the aerospace and automotive industry demand actuation temperatures well above those of conventional SMAs. Most of the research to date has focused on establishing the (forward) connections between chemistry, processing, (micro)structure, properties, and performance. Much less work has been dedicated to the development of frameworks capable of addressing the inverse problem of establishing necessary chemistry and processing schedules to achieve specific performance goals. Integrated Computational Materials Engineering (ICME) has emerged as a powerful framework to address this problem, although it has yet to be applied to the development of HTSMAs. In this paper, the contributions of computational thermodynamics and kinetics to ICME of HTSMAs are described. Some representative examples of the use of computational thermodynamics and kinetics to understand the phase stability and microstructural evolution in HTSMAs are discussed. Some very recent efforts at combining both to assist in the design of HTSMAs and limitations to the full implementation of ICME frameworks for HTSMA development are presented.

  15. Evaluating the Efficacy of the Cloud for Cluster Computation

    Science.gov (United States)

    Knight, David; Shams, Khawaja; Chang, George; Soderstrom, Tom

    2012-01-01

    Computing requirements vary by industry, and it follows that NASA and other research organizations have computing demands that fall outside the mainstream. While cloud computing made rapid inroads for tasks such as powering web applications, performance issues on highly distributed tasks hindered early adoption for scientific computation. One venture to address this problem is Nebula, NASA's homegrown cloud project tasked with delivering science-quality cloud computing resources. However, another industry development is Amazon's high-performance computing (HPC) instances on Elastic Compute Cloud (EC2) that promise improved performance for cluster computation. This paper presents results from a series of benchmarks run on Amazon EC2 and discusses the efficacy of current commercial cloud technology for running scientific applications across a cluster. In particular, a 240-core cluster of cloud instances achieved 2 TFLOPS on High-Performance Linpack (HPL) at 70% of theoretical computational performance. The cluster's local network also demonstrated sub-100 μs inter-process latency with sustained inter-node throughput in excess of 8 Gbps. Beyond HPL, a real-world Hadoop image processing task from NASA's Lunar Mapping and Modeling Project (LMMP) was run on a 29 instance cluster to process lunar and Martian surface images with sizes on the order of tens of gigapixels. These results demonstrate that while not a rival of dedicated supercomputing clusters, commercial cloud technology is now a feasible option for moderately demanding scientific workloads.
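
    The quoted HPL result implies a theoretical peak of roughly 2.9 TFLOPS, or about 12 GFLOPS per core, assuming the 70% figure is HPL performance relative to theoretical peak. The back-of-envelope arithmetic:

```python
hpl_tflops, fraction, cores = 2.0, 0.70, 240
theoretical_tflops = hpl_tflops / fraction
print(f"theoretical peak ~{theoretical_tflops:.2f} TFLOPS, "
      f"~{theoretical_tflops * 1e3 / cores:.1f} GFLOPS per core")
```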

  16. Massively parallel evolutionary computation on GPGPUs

    CERN Document Server

    Tsutsui, Shigeyoshi

    2013-01-01

    Evolutionary algorithms (EAs) are metaheuristics that learn from natural collective behavior and are applied to solve optimization problems in domains such as scheduling, engineering, bioinformatics, and finance. Such applications demand acceptable solutions with high-speed execution using finite computational resources. Therefore, there have been many attempts to develop platforms for running parallel EAs using multicore machines, massively parallel cluster machines, or grid computing environments. Recent advances in general-purpose computing on graphics processing units (GPGPU) have opened u

  17. GRID computing for experimental high energy physics

    International Nuclear Information System (INIS)

    Moloney, G.R.; Martin, L.; Seviour, E.; Taylor, G.N.; Moorhead, G.F.

    2002-01-01

    Full text: The Large Hadron Collider (LHC), to be completed at the CERN laboratory in 2006, will generate 11 petabytes of data per year. The processing of this large data stream requires a large, distributed computing infrastructure. A recent innovation in high performance distributed computing, the GRID, has been identified as an important tool in data analysis for the LHC. GRID computing has actual and potential application in many fields which require computationally intensive analysis of large, shared data sets. The Australian experimental High Energy Physics community has formed partnerships with the High Performance Computing community to establish a GRID node at the University of Melbourne. Through Australian membership of the ATLAS experiment at the LHC, Australian researchers have an opportunity to be involved in the European DataGRID project. This presentation will include an introduction to the GRID, and its application to experimental High Energy Physics. We will present the results of our studies, including participation in the first LHC data challenge.

  18. Worksite interventions for preventing physical deterioration among employees in job-groups with high physical work demands

    DEFF Research Database (Denmark)

    Holtermann, Andreas; Jørgensen, Marie B; Gram, Bibi

    2010-01-01

    BACKGROUND: A mismatch between individual physical capacities and physical work demands enhances the risk for musculoskeletal disorders, poor work ability and sickness absence, termed physical deterioration. However, effective intervention strategies for preventing physical deterioration in job groups with high physical demands remain to be established. This paper describes the background, design and conceptual model of the FINALE programme, a framework for health promoting interventions at 4 Danish job groups (i.e. cleaners, health-care workers, construction workers and industrial workers) characterized by high physical work demands, musculoskeletal disorders, poor work ability and sickness absence. METHODS/DESIGN: A novel approach of the FINALE programme is that the interventions, i.e. 3 randomized controlled trials (RCT) and 1 exploratory case-control study, are tailored to the physical work demands...

  19. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    Energy Technology Data Exchange (ETDEWEB)

    Hadjidoukas, P.E.; Angelikopoulos, P. [Computational Science and Engineering Laboratory, ETH Zürich, CH-8092 (Switzerland); Papadimitriou, C. [Department of Mechanical Engineering, University of Thessaly, GR-38334 Volos (Greece); Koumoutsakos, P., E-mail: petros@ethz.ch [Computational Science and Engineering Laboratory, ETH Zürich, CH-8092 (Switzerland)

    2015-03-01

    We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.

  20. Demand driven decision support for efficient water resources allocation in irrigated agriculture

    Science.gov (United States)

    Schuetze, Niels; Grießbach, Ulrike Ulrike; Röhm, Patric; Stange, Peter; Wagner, Michael; Seidel, Sabine; Werisch, Stefan; Barfus, Klemens

    2014-05-01

    Due to climate change, extreme weather conditions such as longer dry spells in the summer months may have an increasing impact on agriculture in Saxony (Eastern Germany). For this reason, and additionally because of declining rainfall during the growing season, the use of irrigation will become more important in Eastern Germany in the future. To cope with this higher demand for water, a new decision support framework is developed which focuses on an integrated management of both irrigation water supply and demand. For modeling the regional water demand, local (and site-specific) water demand functions are used which are derived from the optimized agronomic response at farm scale. To account for climate variability, the agronomic response is represented by stochastic crop water production functions (SCWPF) which provide the estimated yield subject to the minimum amount of irrigation water. These functions take into account the different soil types, crops and stochastically generated climate scenarios. By applying mathematical interpolation and optimization techniques, the SCWPFs are used to compute the water demand considering different constraints, for instance variable and fixed costs or the producer price. This generic approach enables the computation both for multiple crops at farm scale and for the aggregated response to water pricing at regional scale, for full and deficit irrigation systems. Within the SAPHIR (SAxonian Platform for High Performance Irrigation) project, a prototype of a decision support system is developed which helps to evaluate combined water supply and demand management policies for an effective and efficient utilization of water in order to meet future demands. The prototype is implemented as a web-based decision support system and is based on a service-oriented geo-database architecture.
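
    The core idea of deriving a demand point from a crop water production function can be illustrated with a toy calculation: choose the irrigation amount that maximizes the farm margin given a producer price and a water cost. The yield curve and prices below are invented placeholders, not SAPHIR data, and the real framework works with stochastic, site-specific functions:

```python
# Minimal sketch: pick the irrigation amount on a tabulated crop water production
# function that maximizes the farm margin. All figures are hypothetical placeholders.

def optimal_water_demand(cwpf, producer_price, water_price, fixed_cost=0.0):
    """cwpf: list of (water_mm, yield_t_per_ha); returns (water_mm, margin)."""
    return max(
        ((w, producer_price * y - water_price * w - fixed_cost) for w, y in cwpf),
        key=lambda point: point[1],
    )

if __name__ == "__main__":
    cwpf = [(0, 4.0), (50, 5.5), (100, 6.5), (150, 7.0), (200, 7.2)]  # diminishing returns
    water, margin = optimal_water_demand(cwpf, producer_price=180.0, water_price=0.9)
    print(f"profit-maximizing irrigation: {water} mm, margin {margin:.1f} per ha")
```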

  1. Computer Training for Entrepreneurial Meteorologists.

    Science.gov (United States)

    Koval, Joseph P.; Young, George S.

    2001-05-01

    Computer applications of increasing diversity form a growing part of the undergraduate education of meteorologists in the early twenty-first century. The advent of the Internet economy, as well as a waning demand for traditional forecasters brought about by better numerical models and statistical forecasting techniques, has greatly increased the need for operational and commercial meteorologists to acquire computer skills beyond the traditional techniques of numerical analysis and applied statistics. Specifically, students with the skills to develop data distribution products are in high demand in the private-sector job market. Meeting these demands requires greater breadth, depth, and efficiency in computer instruction. The authors suggest that computer instruction for undergraduate meteorologists should include three key elements: a data distribution focus, emphasis on the techniques required to learn computer programming on an as-needed basis, and a project orientation to promote management skills and support student morale. In an exploration of this approach, the authors have reinvented the Applications of Computers to Meteorology course in the Department of Meteorology at The Pennsylvania State University to teach computer programming within the framework of an Internet product development cycle. Because the computer skills required for data distribution programming change rapidly, specific languages are valuable for only a limited time. A key goal of this course was therefore to help students learn how to retrain efficiently as technologies evolve. The crux of the course was a semester-long project during which students developed an Internet data distribution product. As project management skills are also important in the job market, the course teamed students in groups of four for this product development project. The successes, failures, and lessons learned from this experiment are discussed and conclusions drawn concerning undergraduate instructional methods.

  2. Ground-glass opacity: High-resolution computed tomography and 64-multi-slice computed tomography findings comparison

    International Nuclear Information System (INIS)

    Sergiacomi, Gianluigi; Ciccio, Carmelo; Boi, Luca; Velari, Luca; Crusco, Sonia; Orlacchio, Antonio; Simonetti, Giovanni

    2010-01-01

    Objective: Comparative evaluation of ground-glass opacity using the conventional high-resolution computed tomography technique and volumetric computed tomography with a 64-row multi-slice scanner, verifying the advantages of the volumetric acquisition and post-processing techniques allowed by the 64-row CT scanner. Methods: Thirty-four patients, in whom a ground-glass opacity pattern had been assessed by previous high-resolution computed tomography during clinical-radiological follow-up of their lung disease, were studied by means of 64-row multi-slice computed tomography. Comparative evaluation of image quality was performed for both CT modalities. Results: Good inter-observer agreement (k value 0.78-0.90) was reported in the detection of ground-glass opacity with the high-resolution computed tomography technique and the volumetric computed tomography acquisition, with a moderate increase of intra-observer agreement (k value 0.46) using volumetric computed tomography compared with high-resolution computed tomography. Conclusions: In our experience, volumetric computed tomography with a 64-row scanner shows good accuracy in the detection of ground-glass opacity, providing better spatial and temporal resolution and more advanced post-processing techniques than high-resolution computed tomography.

  3. In demand

    Energy Technology Data Exchange (ETDEWEB)

    Coleman, B. [Bridgestone Ltd. (United Kingdom)

    2005-11-01

    The paper explains how good relationships can help alleviate potential tyre shortages. Demand for large dump truck tyres (largely for China) has increased by 50% within 12 months. Bridgestone's manufacturing plants are operating at maximum capacity. The company supplies tyres for all vehicles at Scottish Coal's opencast coal mines. Its Tyre Management System (TMS), supplied free of charge to customers, helps maximise tyre life and minimise downtime using data on pressure, tread and general condition fed into the hand-held TMS computer. 3 photos.

  4. Money Demand in Latvia

    OpenAIRE

    Ivars Tillers

    2004-01-01

    The econometric analysis of the demand for broad money in Latvia suggests a stable relationship of money demand. The analysis of parameter exogeneity indicates that the equilibrium adjustment is driven solely by the changes in the amount of money. The demand for money in Latvia is characterised by relatively high income elasticity typical for the economy in a monetary expansion phase. Due to stability, close fit of the money demand function and rapid equilibrium adjustment, broad money aggreg...

  5. Access control for on-demand provisioned cloud infrastructure services

    NARCIS (Netherlands)

    Ngo, C.T.

    2016-01-01

    The evolution of Cloud Computing brings advantages to both customers and service providers to utilize and manage computing and network resources more efficiently with virtualization, service-oriented architecture technologies, and automated on-demand resource provisioning. However, these advantages

  6. High-performance computing for airborne applications

    International Nuclear Information System (INIS)

    Quinn, Heather M.; Manuzatto, Andrea; Fairbanks, Tom; Dallmann, Nicholas; Desgeorges, Rose

    2010-01-01

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we will present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  7. Quantum Accelerators for High-performance Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S. [ORNL; Britt, Keith A. [ORNL; Mohiyaddin, Fahd A. [ORNL

    2017-11-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenge the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantum-accelerator framework that uses specialized kernels to offload select workloads while integrating with existing computing infrastructure. We elaborate on the role of the host operating system to manage these unique accelerator resources, the prospects for deploying quantum modules, and the requirements placed on the language hierarchy connecting these different system components. We draw on recent advances in the modeling and simulation of quantum computing systems with the development of architectures for hybrid high-performance computing systems and the realization of software stacks for controlling quantum devices. Finally, we present simulation results that describe the expected system-level behavior of high-performance computing systems composed from compute nodes with quantum processing units. We describe performance for these hybrid systems in terms of time-to-solution, accuracy, and energy consumption, and we use simple application examples to estimate the performance advantage of quantum acceleration.

  8. Case study of supply induced demand: the case of provision of imaging scans (computed tomography and magnetic resonance) at Unimed-Manaus

    Energy Technology Data Exchange (ETDEWEB)

    Andrade, Edson de Oliveira; Andrade, Elizabeth Nogueira de, E-mail: dredsonandrade@gmail.co [Universidade Federal do Amazonas (UFAM), Manaus, AM (Brazil); Gallo, Jose Hiran [Universidade do Porto (U.Porto) (Portugal)

    2011-03-15

    Objective: to present the experience of a health plan operator (Unimed-Manaus) in Manaus, Amazonas, Brazil, with the accreditation of imaging services and the demand induced by the supply of new services (Roemer's Law). Methods: this is a retrospective work studying a time series covering the period from January 1998 to June 2004, in which the computed tomography and the magnetic resonance imaging services were implemented as part of the services offered by that health plan operator. Statistical analysis consisted of a descriptive and an inferential part, with the latter using a mean parametric test (Student T-test and ANOVA) and the Pearson correlation test. A 5% alpha and a 95% confidence interval were adopted. Results: at Unimed-Manaus, the supply of new imaging services, by itself, was identified as capable of generating an increased service demand, thus characterizing the phenomenon described by Roemer. Conclusion: the results underscore the need to be aware of the fact that the supply of new health services could bring about their increased use without a real demand. (author)
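
    The inferential tests named in the abstract (Student's t-test, ANOVA and Pearson correlation at a 5% alpha) can be illustrated with a small sketch on invented monthly exam counts; the numbers below are placeholders, not Unimed-Manaus data:

```python
# Illustrative inferential tests of the kind described: a two-sample t-test comparing
# utilisation before and after a new imaging service, and a Pearson correlation with
# time. All counts are invented placeholders. Requires SciPy.
from scipy import stats

before = [30, 28, 35, 31, 29, 33, 32, 30, 34, 31, 29, 32]   # exams/month before new service
after  = [48, 52, 50, 55, 47, 53, 51, 49, 54, 56, 50, 52]   # exams/month after new service

t_stat, p_value = stats.ttest_ind(after, before)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}  (p < 0.05 suggests a shift in demand)")

# Correlation of utilisation with time, e.g. to test for a monotonic trend.
months = list(range(len(before) + len(after)))
r, p_corr = stats.pearsonr(months, before + after)
print(f"Pearson r = {r:.2f}, p = {p_corr:.4f}")
```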

  9. Case study of supply induced demand: the case of provision of imaging scans (computed tomography and magnetic resonance) at Unimed-Manaus

    International Nuclear Information System (INIS)

    Andrade, Edson de Oliveira; Andrade, Elizabeth Nogueira de; Gallo, Jose Hiran

    2011-01-01

    Objective: to present the experience of a health plan operator (Unimed-Manaus) in Manaus, Amazonas, Brazil, with the accreditation of imaging services and the demand induced by the supply of new services (Roemer's Law). Methods: this is a retrospective work studying a time series covering the period from January 1998 to June 2004, in which the computed tomography and the magnetic resonance imaging services were implemented as part of the services offered by that health plan operator. Statistical analysis consisted of a descriptive and an inferential part, with the latter using a mean parametric test (Student T-test and ANOVA) and the Pearson correlation test. A 5% alpha and a 95% confidence interval were adopted. Results: at Unimed-Manaus, the supply of new imaging services, by itself, was identified as capable of generating an increased service demand, thus characterizing the phenomenon described by Roemer. Conclusion: the results underscore the need to be aware of the fact that the supply of new health services could bring about their increased use without a real demand. (author)

  10. Case study of supply induced demand: the case of provision of imaging scans (computed tomography and magnetic resonance) at Unimed-Manaus.

    Science.gov (United States)

    Andrade, Edson de Oliveira; Andrade, Elizabeth Nogueira de; Gallo, José Hiran

    2011-01-01

    To present the experience of a health plan operator (Unimed-Manaus) in Manaus, Amazonas, Brazil, with the accreditation of imaging services and the demand induced by the supply of new services (Roemer's Law). This is a retrospective work studying a time series covering the period from January 1998 to June 2004, in which the computed tomography and the magnetic resonance imaging services were implemented as part of the services offered by that health plan operator. Statistical analysis consisted of a descriptive and an inferential part, with the latter using a mean parametric test (Student T-test and ANOVA) and the Pearson correlation test. A 5% alpha and a 95% confidence interval were adopted. At Unimed-Manaus, the supply of new imaging services, by itself, was identified as capable of generating an increased service demand, thus characterizing the phenomenon described by Roemer. The results underscore the need to be aware of the fact that the supply of new health services could bring about their increased use without a real demand.

  11. Cloud Computing: The Future of Computing

    OpenAIRE

    Aggarwal, Kanika

    2013-01-01

    Cloud computing has recently emerged as a new paradigm for hosting and delivering services over the Internet. Cloud computing is attractive to business owners as it eliminates the requirement for users to plan ahead for provisioning, and allows enterprises to start small and increase resources only when there is a rise in service demand. The basic principle of cloud computing is to have computation assigned to a great number of distributed computers, rather than local computers ...

  12. High-Performance Computing Paradigm and Infrastructure

    CERN Document Server

    Yang, Laurence T

    2006-01-01

    With hyperthreading in Intel processors, hypertransport links in next generation AMD processors, multi-core silicon in today's high-end microprocessors from IBM and emerging grid computing, parallel and distributed computers have moved into the mainstream

  13. Demand side resource operation on the Irish power system with high wind power penetration

    DEFF Research Database (Denmark)

    Keane, A.; Tuohy, A.; Meibom, Peter

    2011-01-01

    ... part of the power system plant mix and contribute to the flexible operation of a power system. A model for demand side resources is proposed here that captures its key characteristics for commitment and dispatch calculations. The model is tested on the all-island Irish power system, and the operation of the model is simulated over one year in both a stochastic and a deterministic mode, to illustrate the impact of wind and load uncertainty. The results illustrate that demand side resources can contribute to the efficient, flexible operation of systems with high penetrations of wind by replacing some of the functions of conventional peaking plant. Demand side resources are also shown to be capable of improving the reliability of the system, with reserve capability identified as a key requirement in this respect.

  14. Dynamic integration of remote cloud resources into local computing clusters

    Energy Technology Data Exchange (ETDEWEB)

    Fleig, Georg; Erli, Guenther; Giffels, Manuel; Hauth, Thomas; Quast, Guenter; Schnepf, Matthias [Institut fuer Experimentelle Kernphysik, Karlsruher Institut fuer Technologie (Germany)

    2016-07-01

    In modern high-energy physics (HEP) experiments enormous amounts of data are analyzed and simulated. Traditionally dedicated HEP computing centers are built or extended to meet this steadily increasing demand for computing resources. Nowadays it is more reasonable and more flexible to utilize computing power at remote data centers providing regular cloud services to users as they can be operated in a more efficient manner. This approach uses virtualization and allows the HEP community to run virtual machines containing a dedicated operating system and transparent access to the required software stack on almost any cloud site. The dynamic management of virtual machines depending on the demand for computing power is essential for cost efficient operation and sharing of resources with other communities. For this purpose the EKP developed the on-demand cloud manager ROCED for dynamic instantiation and integration of virtualized worker nodes into the institute's computing cluster. This contribution will report on the concept of our cloud manager and the implementation utilizing a remote OpenStack cloud site and a shared HPC center (bwForCluster located in Freiburg).
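
    The scaling logic of such an on-demand cloud manager can be sketched as a simple rule that keeps the virtual worker pool proportional to the queued demand. The sketch below simulates the batch queue and the cloud calls in memory; the real ROCED implementation talks to a batch system and an OpenStack endpoint, and none of its actual API is reproduced here:

```python
# Conceptual sketch of demand-driven worker scaling in the spirit of an on-demand
# cloud manager such as ROCED. Booking and booting are simulated in memory.
import math

class SimulatedSite:
    """In-memory stand-in for the batch queue and the cloud worker pool."""
    def __init__(self) -> None:
        self.queued_jobs = 0
        self.workers = 0

    def boot_worker(self) -> None:
        self.workers += 1            # would be a cloud "create instance" request

    def retire_worker(self) -> None:
        self.workers -= 1            # would drain and delete an idle virtual machine

def rebalance(site: SimulatedSite, jobs_per_worker: int = 8) -> None:
    """Scale the worker pool so it matches the queued demand."""
    wanted = math.ceil(site.queued_jobs / jobs_per_worker)
    while site.workers < wanted:
        site.boot_worker()
    while site.workers > wanted:
        site.retire_worker()

if __name__ == "__main__":
    site = SimulatedSite()
    for demand in (0, 40, 100, 12, 0):      # queued jobs observed at successive polls
        site.queued_jobs = demand
        rebalance(site)
        print(f"queued={demand:3d} -> workers={site.workers}")
```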

  15. Security framework for virtualised infrastructure services provisioned on-demand

    NARCIS (Netherlands)

    Ngo, C.; Membrey, P.; Demchenko, Y.; de Laat, C.

    2011-01-01

    Cloud computing is developing as a new wave of ICT technologies, offering a common approach to on-demand provisioning computation, storage and network resources which are generally referred to as infrastructure services. Most of currently available commercial Cloud services are built and organized

  16. Grid computing in high energy physics

    CERN Document Server

    Avery, P

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software r...

  17. Computational biology in the cloud: methods and new insights from computing at scale.

    Science.gov (United States)

    Kasson, Peter M

    2013-01-01

    The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and to provide easy reproducibility by making the datasets and computational methods easily available.

  18. CHPS IN CLOUD COMPUTING ENVIRONMENT

    OpenAIRE

    K.L.Giridas; A.Shajin Nargunam

    2012-01-01

    Workflows have been utilized to characterize various forms of applications with high processing and storage space demands. So, to make the cloud computing environment more eco-friendly, our research project aims at reducing the e-waste accumulated by computers. In a hybrid cloud, the user has the flexibility offered by public cloud resources that can be combined with the private resource pool as required. Our previous work described the process of combining the low range and mid range proce...

  19. Demand Response Resource Quantification with Detailed Building Energy Models

    Energy Technology Data Exchange (ETDEWEB)

    Hale, Elaine; Horsey, Henry; Merket, Noel; Stoll, Brady; Nag, Ambarish

    2017-04-03

    Demand response is a broad suite of technologies that enables changes in electrical load operations in support of power system reliability and efficiency. Although demand response is not a new concept, there is new appetite for comprehensively evaluating its technical potential in the context of renewable energy integration. The complexity of demand response makes this task difficult -- we present new methods for capturing the heterogeneity of potential responses from buildings, their time-varying nature, and metrics such as thermal comfort that help quantify likely acceptability of specific demand response actions. Computed with an automated software framework, the methods are scalable.

  20. Asian oil demand

    International Nuclear Information System (INIS)

    Fesharaki, F.

    2005-01-01

    This conference presentation examined global oil market development and the role of Asian demand. It discussed plateau change versus cyclical movement in the global oil market; supply and demand issues of OPEC and non-OPEC oil; whether high oil prices reduce demand; and the Asian oil picture in the global context. Asian oil demand has accounted for about 50 per cent of global incremental oil market growth. The presentation provided data charts in graphical format on global and Asia-Pacific incremental oil demand from 1990-2005; Asian oil demand growth for selected nations; real GDP growth in selected Asian countries; and Asia-Pacific oil production and net import requirements. It also included charts on petroleum product demand for the Asia-Pacific region, China, India, Japan, and South Korea. Other data charts included key indicators for China's petroleum sector; China's crude production and net oil import requirements; China's imports and the share of the Middle East; China's oil exports and imports; China's crude imports by source for 2004; China's imports of main oil products for 2004; India's refining capacity; India's product balance for net imports and net exports; and India's trade pattern of oil products. tabs., figs

  1. A model for calculating the optimal replacement interval of computer systems

    International Nuclear Information System (INIS)

    Fujii, Minoru; Asai, Kiyoshi

    1981-08-01

    A mathematical model for calculating the optimal replacement interval of computer systems is described. The model estimates the most economical interval for computer replacement when the computing demand and the cost and performance of computers are known. The computing demand is assumed to increase monotonically every year. Four kinds of models are described. In model 1, a computer system is represented only by a central processing unit (CPU), and all the computing demand is to be processed on the present computer until the next replacement. In model 2, excess demand is allowed and may be transferred to another computing center and processed there at a cost. In model 3, the computer system is represented by a CPU, memories (MEM) and input/output devices (I/O), and it must process all the demand. Model 4 is the same as model 3, except that excess demand may be processed at another center. (1) Computing demand at JAERI, (2) conformity of Grosch's law for recent computers, and (3) replacement costs of computer systems, etc. are also described. (author)
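
    A toy version of the trade-off behind such models: demand grows every year, each machine is sized to cover demand until its own replacement, prices per unit of capacity fall over time, and every replacement carries a fixed switching cost. All parameters below are invented for illustration and do not reproduce the JAERI formulation:

```python
# Toy replacement-interval model: growing demand, falling unit prices, and a fixed
# switching cost per replacement. Parameters are invented placeholders.

def demand(year: int, base: float = 100.0, growth: float = 0.25) -> float:
    """Computing demand (arbitrary capacity units), assumed to grow monotonically."""
    return base * (1.0 + growth) ** year

def purchase_cost(year: int, capacity: float, unit_price: float = 2.0,
                  price_decline: float = 0.20, switch_cost: float = 50.0) -> float:
    """Cost of installing, in a given year, a machine of the given capacity."""
    return switch_cost + unit_price * (1.0 - price_decline) ** year * capacity

def average_annual_cost(interval: int, horizon: int = 24) -> float:
    total = 0.0
    for year in range(0, horizon, interval):
        # each machine must cover the demand reached just before its replacement
        total += purchase_cost(year, demand(year + interval))
    return total / horizon

if __name__ == "__main__":
    for t in (1, 2, 4, 6, 8, 12):
        print(f"replace every {t:2d} years -> average annual cost {average_annual_cost(t):7.1f}")
    best = min(range(1, 13), key=average_annual_cost)
    print(f"most economical interval: {best} years")
```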

  2. Grid Computing in High Energy Physics

    International Nuclear Information System (INIS)

    Avery, Paul

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software resources, regardless of location); (4) collaboration (providing tools that allow members full and fair access to all collaboration resources and enable distributed teams to work effectively, irrespective of location); and (5) education, training and outreach (providing resources and mechanisms for training students and for communicating important information to the public). It is believed that computing infrastructures based on Data Grids and optical networks can meet these challenges and can offer data intensive enterprises in high energy physics and elsewhere a comprehensive, scalable framework for collaboration and resource sharing. A number of Data Grid projects have been underway since 1999. Interestingly, the most exciting and far ranging of these projects are led by collaborations of high energy physicists, computer scientists and scientists from other disciplines in support of experiments with massive, near-term data needs. I review progress in this

  3. Research on Demand for Bus Transport and Transport Habits of High School Students in Žilina Region

    Directory of Open Access Journals (Sweden)

    Konečný Vladimír

    2017-11-01

    Full Text Available The paper deals with the analysis of demand for bus transport, examining the determinants of demand and the transport habits of high school students based on a survey carried out in the Žilina Region. Transport habits of students are individual and variable in time. This group of passengers depends on public passenger transport services to travel to school, and a significant part of the demand for public passenger transport is formed by this group. Knowledge of students' transport habits may help in adapting the offer and quality of transport services, which may subsequently stabilize demand for public passenger transport.

  4. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  5. The challenge of networked enterprises for cloud computing interoperability

    OpenAIRE

    Mezgár, István; Rauschecker, Ursula

    2014-01-01

    Manufacturing enterprises have to organize themselves into effective system architectures forming different types of Networked Enterprises (NE) to match fast-changing market demands. Cloud Computing (CC) is an important, up-to-date computing concept for NE, as it offers significant financial and technical advantages besides high-level collaboration possibilities. As cloud computing is a new concept, the solutions for handling interoperability, portability, security, privacy and standardization c...

  6. CHEP95: Computing in high energy physics. Abstracts

    International Nuclear Information System (INIS)

    1995-01-01

    These proceedings cover the technical papers on computation in High Energy Physics, including computer codes, computer devices, control systems, simulations, and data acquisition systems. New approaches to computer architectures are also discussed.

  7. Rural Dilemmas in School-to-Work Transition: Low Skill Jobs, High Social Demands.

    Science.gov (United States)

    Danzig, Arnold

    1996-01-01

    Thirty-three employers in rural Arizona were interviewed concerning employer expectations, workplace opportunities, authority patterns, rewards, and social interaction at work regarding entry level workers directly out of high school. Available work was low skill with few rewards, yet demanded strong social skills and work ethic. Discusses…

  8. High resolution heat atlases for demand and supply mapping

    DEFF Research Database (Denmark)

    Möller, Bernd; Nielsen, Steffen

    2014-01-01

    Significant reductions of heat demand, low-carbon and renewable energy sources, and district heating are key elements in 100% renewable energy systems. Appraisal of district heating along with energy efficient buildings and individual heat supply requires a geographical representation of heat demand, energy efficiency and energy supply. The present paper describes a Heat Atlas built around a spatial database using geographical information systems (GIS). The present atlas allows for per-building calculations of potentials and costs of energy savings, connectivity to existing district heat, and current heat supply and demand. For the entire building mass a conclusive link is established between the built environment and its heat supply. The expansion of district heating; the interconnection of distributed district heating systems; or the question whether to invest in ultra-efficient buildings with individual supply, or in collective heating using renewable energy for heating the current building stock, can be based on improved data.

  9. Solving computationally expensive engineering problems

    CERN Document Server

    Leifsson, Leifur; Yang, Xin-She

    2014-01-01

    Computational complexity is a serious bottleneck for the design process in virtually any engineering area. While migration from prototyping and experimental-based design validation to verification using computer simulation models is inevitable and has a number of advantages, high computational costs of accurate, high-fidelity simulations can be a major issue that slows down the development of computer-aided design methodologies, particularly those exploiting automated design improvement procedures, e.g., numerical optimization. The continuous increase of available computational resources does not always translate into shortening of the design cycle because of the growing demand for higher accuracy and necessity to simulate larger and more complex systems. Accurate simulation of a single design of a given system may be as long as several hours, days or even weeks, which often makes design automation using conventional methods impractical or even prohibitive. Additional problems include numerical noise often pr...

  10. A transport layer protocol for the future high speed grid computing: SCTP versus fast tcp multihoming

    International Nuclear Information System (INIS)

    Arshad, M.J.; Mian, M.S.

    2010-01-01

    TCP (Transmission Control Protocol) is designed for reliable data transfer on the global Internet today. One of its strong points is its use of a flow control algorithm that allows TCP to adjust its congestion window when network congestion occurs. A number of studies and investigations have confirmed that traditional TCP is not suitable for each and every type of application, for example bulk data transfer over high-speed long-distance networks. TCP served well in the era of low-capacity and short-delay networks; however, for numerous reasons it is not capable of dealing efficiently with today's growing technologies (such as wide-area Grid computing and optical-fiber networks). This research work surveys the congestion control mechanisms of transport protocols, and addresses the different issues involved in transferring huge volumes of data over future high-speed Grid computing and optical-fiber networks. This work also presents simulations comparing the performance of FAST TCP multihoming with SCTP (Stream Control Transmission Protocol) multihoming in high-speed networks. These simulation results show that FAST TCP multihoming achieves bandwidth aggregation efficiently and outperforms SCTP multihoming under similar network conditions. The survey and simulation results presented in this work reveal that multihoming support in FAST TCP provides many benefits, such as redundancy, load-sharing and policy-based routing, which largely improve the overall performance of a network and can meet the increasing demands of future high-speed network infrastructures (such as Grid computing). (author)

  11. Optimized management of a distributed demand response aggregation model

    International Nuclear Information System (INIS)

    Prelle, Thomas

    2014-01-01

    The desire to increase the share of renewable energies in the energy mix leads to an increase in the share of volatile and non-controllable energy, and makes it more difficult to meet the supply-demand balance. One solution for managing these energies in the current electrical grid is to deploy new energy storage and demand response systems across the country to counterbalance under- or over-production. In order to integrate all these energy systems into the supply-demand balancing process, they are gathered together within a virtual flexibility aggregation power plant, which is then seen as a virtual power plant. As for any other power plant, it is necessary to compute its production plan. Firstly, we propose in this PhD thesis an architecture and management method for an aggregation power plant composed of any type of energy system. Then, we propose algorithms to compute the production plan of each type of energy system while satisfying all of its constraints. Finally, we propose an approach to compute the production plan of the aggregation power plant in order to maximize its financial profit while complying with all the constraints of the grid. (author)
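
    The flavor of the production-plan computation can be sketched as a merit-order dispatch: for each hour, activate flexibility units whose activation cost lies below the market price, up to their power limits. Prices, costs and limits below are invented placeholders, and the thesis itself handles much richer constraints (energy limits, storage dynamics, grid constraints):

```python
# Minimal merit-order sketch of a production plan for an aggregation of flexibility
# units. All unit data and prices are hypothetical; hourly blocks of 1 h are assumed.

def production_plan(prices, units):
    """prices: price per MWh for each hour; units: list of (name, max_mw, cost_per_mwh)."""
    plan, profit = [], 0.0
    merit_order = sorted(units, key=lambda u: u[2])          # cheapest units first
    for hour, price in enumerate(prices):
        dispatch = {}
        for name, max_mw, cost in merit_order:
            mw = max_mw if price > cost else 0.0              # activate only when profitable
            dispatch[name] = mw
            profit += (price - cost) * mw
        plan.append((hour, dispatch))
    return plan, profit

if __name__ == "__main__":
    hourly_prices = [35, 42, 80, 55]                          # market price per MWh
    units = [("battery", 2.0, 60.0), ("load_shift", 5.0, 40.0)]
    plan, profit = production_plan(hourly_prices, units)
    for hour, dispatch in plan:
        print(hour, dispatch)
    print(f"expected profit: {profit:.0f}")
```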

  12. On-demand Simulation of Atmospheric Transport Processes on the AlpEnDAC Cloud

    Science.gov (United States)

    Hachinger, S.; Harsch, C.; Meyer-Arnek, J.; Frank, A.; Heller, H.; Giemsa, E.

    2016-12-01

    The "Alpine Environmental Data Analysis Centre" (AlpEnDAC) develops a data-analysis platform for high-altitude research facilities within the "Virtual Alpine Observatory" project (VAO). This platform, with its web portal, will support use cases going much beyond data management: On user request, the data are augmented with "on-demand" simulation results, such as air-parcel trajectories for tracing down the source of pollutants when they appear in high concentration. The respective back-end mechanism uses the Compute Cloud of the Leibniz Supercomputing Centre (LRZ) to transparently calculate results requested by the user, as far as they have not yet been stored in AlpEnDAC. The queuing-system operation model common in supercomputing is replaced by a model in which Virtual Machines (VMs) on the cloud are automatically created/destroyed, providing the necessary computing power immediately on demand. From a security point of view, this allows to perform simulations in a sandbox defined by the VM configuration, without direct access to a computing cluster. Within few minutes, the user receives conveniently visualized results. The AlpEnDAC infrastructure is distributed among two participating institutes [front-end at German Aerospace Centre (DLR), simulation back-end at LRZ], requiring an efficient mechanism for synchronization of measured and augmented data. We discuss our iRODS-based solution for these data-management tasks as well as the general AlpEnDAC framework. Our cloud-based offerings aim at making scientific computing for our users much more convenient and flexible than it has been, and to allow scientists without a broad background in scientific computing to benefit from complex numerical simulations.

  13. High Performance Computing in Science and Engineering '14

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2015-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS). The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and   engineers. The book comes with a wealth of color illustrations and tables of results.  

  14. Bringing together high energy physicist and computer scientist

    International Nuclear Information System (INIS)

    Bock, R.K.

    1989-01-01

    The Oxford Conference on Computing in High Energy Physics approached the physics and computing issues with the question, ''Can computer science help?'' always in mind. This summary is a personal recollection of what I considered to be the highlights of the conference: the parts which contributed to my own learning experience. It can be used as a general introduction to the following papers, or as a brief overview of the current state of computer science within high energy physics. (orig.)

  15. Extreme Scale Computing to Secure the Nation

    Energy Technology Data Exchange (ETDEWEB)

    Brown, D L; McGraw, J R; Johnson, J R; Frincke, D

    2009-11-10

    will be accomplished through significant increases in the scientific bases that underlie the computational tools. Computer codes must be developed that replace phenomenology with increased levels of scientific understanding together with an accompanying quantification of uncertainty. These advanced codes will place significantly higher demands on the computing infrastructure than do the current 3D ASC codes. This article discusses not only the need for a future computing capability at the exascale for the SBSS program, but also considers high performance computing requirements for broader national security questions. For example, the increasing concern over potential nuclear terrorist threats demands a capability to assess threats and potential disablement technologies as well as a rapid forensic capability for determining a nuclear weapons design from post-detonation evidence (nuclear counterterrorism).

  16. Experience with a distributed computing system for magnetic field analysis

    International Nuclear Information System (INIS)

    Newman, M.J.

    1978-08-01

    The development of a general purpose computer system, THESEUS, is described; its initial use has been magnetic field analysis. The system involves several computers connected by data links. Some are small computers with interactive graphics facilities and limited analysis capabilities, and others are large computers for batch execution of analysis programs with heavy processor demands. The system is highly modular for easy extension and highly portable for transfer to different computers. It can easily be adapted for a completely different application. It provides a highly efficient and flexible interface between magnet designers and specialised analysis programs. Both the advantages and problems experienced are highlighted, together with a mention of possible future developments. (U.K.)

  17. Debugging a high performance computing program

    Science.gov (United States)

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.
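
    The grouping step described here can be sketched in a few lines: threads whose chains of calling-instruction addresses are identical fall into one group, and small outlier groups point at likely defective threads. The addresses below are invented placeholders:

```python
# Sketch of grouping threads by their call chains (lists of calling-instruction
# addresses). Small outlier groups are candidates for defective threads.
from collections import defaultdict

def group_threads(call_chains):
    """call_chains: {thread_id: [return addresses]} -> {chain: [thread_ids]}"""
    groups = defaultdict(list)
    for tid, chain in call_chains.items():
        groups[tuple(chain)].append(tid)
    return dict(groups)

if __name__ == "__main__":
    chains = {
        0: [0x400A10, 0x400B84, 0x401200],
        1: [0x400A10, 0x400B84, 0x401200],
        2: [0x400A10, 0x400B84, 0x401200],
        3: [0x400A10, 0x400C3C],            # the odd one out: a stuck or crashed thread
    }
    # print the smallest groups first, since they are the most suspicious
    for chain, tids in sorted(group_threads(chains).items(), key=lambda kv: len(kv[1])):
        print([hex(a) for a in chain], "->", tids)
```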

  18. water demand prediction using artificial neural network

    African Journals Online (AJOL)

    user

    2017-01-01

    The described system provides interfaces for activation and deactivation of valves and for demand monitoring at a computer terminal; the prediction model is based on artificial neural networks, with references including Arbib, M. A., The Handbook of Brain Theory and Neural Networks.

  19. Water advisory demand evaluation and resource toolkit

    OpenAIRE

    Paluszczyszyn, D.; Illya, S.; Goodyer, E.; Kubrycht, T.; Ambler, M.

    2016-01-01

    Cities are living organisms, 24h / 7day, with demands on resources and outputs. Water is a key resource whose management has not kept pace with modern urban life. Demand for clean water and loads on waste water no longer fit diurnal patterns, and they are impacted by events that are outside the normal range of parameters taken into account in water management. This feasibility study will determine how the application of computational intelligence can be used to analyse a mix of dat...

  20. Parallel Computing:. Some Activities in High Energy Physics

    Science.gov (United States)

    Willers, Ian

    This paper examines some activities in High Energy Physics that utilise parallel computing. The topic includes all computing from the proposed SIMD front end detectors, the farming applications, high-powered RISC processors and the large machines in the computer centers. We start by looking at the motivation behind using parallelism for general purpose computing. The developments around farming are then described from its simplest form to the more complex system in Fermilab. Finally, there is a list of some developments that are happening close to the experiments.

  1. Planning and Enacting Mathematical Tasks of High Cognitive Demand in the Primary Classroom

    Science.gov (United States)

    Georgius, Kelly

    2013-01-01

    This study offers an examination of two primary-grades teachers as they learn to transfer knowledge from professional development into their classrooms. I engaged in planning sessions with each teacher to help plan tasks of high cognitive demand, including anticipating and planning for classroom discourse that would occur around the task. A…

  2. Agglomeration Economies and the High-Tech Computer

    OpenAIRE

    Wallace, Nancy E.; Walls, Donald

    2004-01-01

    This paper considers the effects of agglomeration on the production decisions of firms in the high-tech computer cluster. We build upon an alternative definition of the high-tech computer cluster developed by Bardhan et al. (2003) and we exploit a new data source, the National Establishment Time-Series (NETS) Database, to analyze the spatial distribution of firms in this industry. An essential contribution of this research is the recognition that high-tech firms are heterogeneous collections ...

  3. On energy demand

    International Nuclear Information System (INIS)

    Haefele, W.

    1977-01-01

    Since the energy crisis, a number of energy plans have been proposed, and almost all of these envisage some kind of energy demand adaptations or conservation measures, hoping thus to escape the anticipated problems of energy supply. However, there seems to be no clear explanation of the basis on which our foreseeable future energy problems could be eased. And in fact, a first attempt at a more exact definition of energy demand and its interaction with other objectives, such as economic ones, shows that it is a highly complex concept which we still hardly understand. The article explains in some detail why it is so difficult to understand energy demand

  4. Exact Fill Rates for the (R, S) Inventory Control with Discrete Distributed Demands for the Backordering Case

    OpenAIRE

    Eugenia BABILONI; Ester GUIJARRO; Manuel CARDÓS; Sofía ESTELLÉS

    2012-01-01

    The fill rate is usually computed by using the traditional approach, which calculates it as the complement of the quotient between the expected unfulfilled demand and the expected demand per replenishment cycle, instead of directly as the expected fraction of fulfilled demand. Furthermore, the available methods to estimate the fill rate apply only under specific demand conditions. This paper shows the research gap regarding the estimation procedures to compute the fill rate and suggests: (i) a ne...
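
    The distinction the paper draws can be illustrated for a single replenishment cycle with discrete demand and an order-up-to level S: the traditional measure, 1 − E[unfilled]/E[demand], generally differs from the expected fraction of demand filled per cycle. The sketch below uses an invented demand distribution and ignores backorders carried across cycles, which the paper treats explicitly:

```python
# Single-cycle sketch contrasting the traditional fill-rate formula with the expected
# fraction of demand filled per cycle, for a discrete demand pmf and level S.
# The demand distribution is an invented placeholder.

def fill_rates(pmf, S):
    """pmf: {demand value: probability}; returns (traditional, expected_fraction)."""
    exp_demand = sum(d * p for d, p in pmf.items())
    exp_unfilled = sum(max(d - S, 0) * p for d, p in pmf.items())
    traditional = 1.0 - exp_unfilled / exp_demand
    # expected fraction of demand filled, conditioning on cycles with positive demand
    pos = {d: p for d, p in pmf.items() if d > 0}
    total = sum(pos.values())
    expected_fraction = sum(min(d, S) / d * (p / total) for d, p in pos.items())
    return traditional, expected_fraction

if __name__ == "__main__":
    demand_pmf = {0: 0.10, 1: 0.25, 2: 0.30, 3: 0.20, 4: 0.10, 5: 0.05}
    trad, frac = fill_rates(demand_pmf, S=3)
    print(f"traditional: {trad:.3f}, expected fraction filled: {frac:.3f}")
```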

  5. Static Load Balancing Algorithms In Cloud Computing: Challenges & Solutions

    Directory of Open Access Journals (Sweden)

    Nadeem Shah

    2015-08-01

    Full Text Available Cloud computing provides on-demand hosted computing resources and services over the Internet on a pay-per-use basis. It is currently becoming the favored method of communication and computation over scalable networks due to numerous attractive attributes such as high availability, scalability, fault tolerance, simplicity of management, and low cost of ownership. Due to the huge demand for cloud computing, efficient load balancing becomes critical to ensure that computational tasks are evenly distributed across servers to prevent bottlenecks. The aim of this review paper is to understand the current challenges in cloud computing, primarily in cloud load balancing using static algorithms, and to find gaps to bridge for more efficient static cloud load balancing in the future. We believe the ideas suggested as new solutions will allow researchers to redesign better algorithms for better functionalities and improved user experiences in simple cloud systems. This could assist small businesses that cannot afford infrastructure that supports complex and dynamic load balancing algorithms.
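
    Two classic static policies discussed in such reviews, round-robin and capacity-weighted assignment, can be sketched as follows; server names and capacities are illustrative placeholders, and the key limitation, the absence of runtime feedback, is visible in the code:

```python
# Sketch of two static load-balancing policies: plain round-robin and a weighted
# variant that assigns tasks in proportion to fixed server capacities. Neither uses
# runtime load information, which is exactly the limitation of static schemes.
from itertools import cycle

def round_robin(tasks, servers):
    assignment = {s: [] for s in servers}
    for task, server in zip(tasks, cycle(servers)):
        assignment[server].append(task)
    return assignment

def weighted_static(tasks, capacities):
    """capacities: {server: relative capacity}; heavier servers receive more tasks."""
    assignment = {s: [] for s in capacities}
    total = sum(capacities.values())
    loads = {s: 0.0 for s in capacities}
    for task in tasks:
        # pick the server currently furthest below its capacity share
        server = min(capacities, key=lambda s: loads[s] / (capacities[s] / total))
        assignment[server].append(task)
        loads[server] += 1.0
    return assignment

if __name__ == "__main__":
    tasks = [f"job{i}" for i in range(10)]
    print(round_robin(tasks, ["s1", "s2", "s3"]))
    print(weighted_static(tasks, {"s1": 4, "s2": 2, "s3": 1}))
```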

  6. Software Systems for High-performance Quantum Computing

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; Britt, Keith A [ORNL

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for the quantum programming and execution models, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  7. High resolution heat atlases for demand and supply mapping

    Directory of Open Access Journals (Sweden)

    Bernd Möller

    2014-02-01

    Full Text Available Significant reductions of heat demand, low-carbon and renewable energy sources, and district heating are key elements in 100% renewable energy systems. Appraisal of district heating along with energy efficient buildings and individual heat supply requires a geographical representation of heat demand, energy efficiency and energy supply. The present paper describes a Heat Atlas built around a spatial database using geographical information systems (GIS). The present atlas allows for per-building calculations of potentials and costs of energy savings, connectivity to existing district heat, and current heat supply and demand. For the entire building mass a conclusive link is established between the built environment and its heat supply. The expansion of district heating; the interconnection of distributed district heating systems; or the question whether to invest in ultra-efficient buildings with individual supply, or in collective heating using renewable energy for heating the current building stock, can be based on improved data.

  8. Cloud computing models and their application in LTE based cellular systems

    NARCIS (Netherlands)

    Staring, A.J.; Karagiannis, Georgios

    2013-01-01

    As cloud computing emerges as the next novel concept in computer science, it becomes clear that the model applied in large data storage systems used to resolve issues coming forth from an increasing demand, could also be used to resolve the very high bandwidth requirements on access network, core

  9. Electricity demand in Kazakhstan

    International Nuclear Information System (INIS)

    Atakhanova, Zauresh; Howie, Peter

    2007-01-01

    Properties of electricity demand in transition economies have not been sufficiently well researched mostly due to data limitations. However, information on the properties of electricity demand is necessary for policy makers to evaluate effects of price changes on different consumers and obtain demand forecasts for capacity planning. This study estimates Kazakhstan's aggregate demand for electricity as well as electricity demand in the industrial, service, and residential sectors using regional data. Firstly, our results show that price elasticity of demand in all sectors is low. This fact suggests that there is considerable room for price increases necessary to finance generation and distribution system upgrading. Secondly, we find that income elasticity of demand in the aggregate and all sectoral models is less than unity. Of the three sectors, electricity demand in the residential sector has the lowest income elasticity. This result indicates that policy initiatives to secure affordability of electricity consumption to lower income residential consumers may be required. Finally, our forecast shows that electricity demand may grow at either 3% or 5% per year depending on rates of economic growth and government policy regarding price increases and promotion of efficiency. We find that planned supply increases would be sufficient to cover growing demand only if real electricity prices start to increase toward long-run cost-recovery levels and policy measures are implemented to maintain the current high growth of electricity efficiency
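
    Elasticity estimates of this kind typically come from a log-log demand specification, ln Q = a + e_p ln P + e_y ln Y, whose coefficients are the price and income elasticities. The sketch below fits such a model by least squares on invented observations, constructed to mimic a low price elasticity and an income elasticity below one; they are not Kazakh data:

```python
# Log-log demand regression sketch: the fitted coefficients on ln(price) and
# ln(income) are the price and income elasticities. Observations are invented.
import numpy as np

price  = np.array([1.0, 1.4, 1.1, 1.8, 1.3, 2.0, 1.6, 2.2])          # real electricity price index
income = np.array([3.0, 3.1, 3.6, 3.4, 4.2, 4.0, 4.8, 5.0])          # real income per capita index
demand = np.array([14.3, 13.6, 15.6, 13.7, 16.6, 14.8, 17.2, 16.6])  # electricity consumption

X = np.column_stack([np.ones_like(price), np.log(price), np.log(income)])
coef, *_ = np.linalg.lstsq(X, np.log(demand), rcond=None)
intercept, price_elasticity, income_elasticity = coef
print(f"price elasticity ~ {price_elasticity:.2f}, income elasticity ~ {income_elasticity:.2f}")
```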

  10. Department of Energy research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-08-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models, the execution of which is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex, and consequently it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  11. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1991-03-15

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour.

  12. COMPUTERS: Teraflops for Europe; EEC Working Group on High Performance Computing

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    In little more than a decade, simulation on high performance computers has become an essential tool for theoretical physics, capable of solving a vast range of crucial problems inaccessible to conventional analytic mathematics. In many ways, computer simulation has become the calculus for interacting many-body systems, a key to the study of transitions from isolated to collective behaviour

  13. Employees facing high job demands: How to keep them fit, satisfied, and intrinsically motivated?

    NARCIS (Netherlands)

    Van Yperen, N.W.; Nagao, DH

    2002-01-01

    The purpose of the present research was to determine why some employees faced with high job demands feel fatigued, dissatisfied, and unmotivated, whereas others feel fatigued but satisfied and intrinsically motivated. It is argued and demonstrated that two job conditions, namely job control and job

  14. The demand for consumer health information.

    Science.gov (United States)

    Wagner, T H; Hu, T W; Hibbard, J H

    2001-11-01

    Using data from an evaluation of a community-wide informational intervention, we modeled the demand for medical reference books, telephone advice nurses, and computers for health information. Data were gathered from random household surveys in Boise, ID (experimental site), Billings, MT, and Eugene, OR (control sites). Conditional difference-in-differences show that the intervention increased the use of medical reference books, advice nurses, and computers for health information by approximately 15%, 6%, and 4%, respectively. The results also suggest that the intervention was associated with a decreased reliance on health professionals for information.
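
    In its simplest unconditional form, the difference-in-differences logic compares the before/after change at the experimental site with the change at the control sites. A minimal numerical sketch with made-up shares:

      # Difference-in-differences from four group means (synthetic numbers):
      # share of households using a medical reference book, before/after the intervention.
      treated_before, treated_after = 0.30, 0.45   # experimental site (e.g. Boise)
      control_before, control_after = 0.31, 0.32   # control sites

      did = (treated_after - treated_before) - (control_after - control_before)
      print(f"estimated intervention effect: {did:+.2%}")  # roughly +14 percentage points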

  15. Demand flexibility from residential heat pump

    DEFF Research Database (Denmark)

    Bhattarai, Bishnu Prasad; Bak-Jensen, Birgitte; Pillai, Jayakrishnan Radhakrishna

    2014-01-01

    Demand response (DR) is considered a potentially effective tool to compensate for the generation intermittency imposed by renewable sources. Further, DR can help achieve optimum asset utilization and avoid or delay the need for new infrastructure investment. Being a sizable load together... with a high thermal time constant, heat pumps (HP) can offer a great deal of flexibility in future intelligent grids, especially to compensate for fluctuating generation. However, HP flexibility is highly dependent on the thermal demand profile, namely the hot water and space heating demand. This paper proposes... price-based scheduling followed by a demand-dispatch-based central control and a local voltage-based adaptive control to realize HP demand flexibility. A two-step control architecture, namely local primary control encompassed by the central coordinative control, is proposed to implement...

  16. On-demand high-capacity ride-sharing via dynamic trip-vehicle assignment.

    Science.gov (United States)

    Alonso-Mora, Javier; Samaranayake, Samitha; Wallar, Alex; Frazzoli, Emilio; Rus, Daniela

    2017-01-17

    Ride-sharing services are transforming urban mobility by providing timely and convenient transportation to anybody, anywhere, and anytime. These services present enormous potential for positive societal impacts with respect to pollution, energy consumption, congestion, etc. Current mathematical models, however, do not fully address the potential of ride-sharing. Recently, a large-scale study highlighted some of the benefits of car pooling but was limited to static routes with two riders per vehicle (optimally) or three (with heuristics). We present a more general mathematical model for real-time high-capacity ride-sharing that (i) scales to large numbers of passengers and trips and (ii) dynamically generates optimal routes with respect to online demand and vehicle locations. The algorithm starts from a greedy assignment and improves it through constrained optimization, quickly returning solutions of good quality and converging to the optimal assignment over time. We quantify experimentally the tradeoff between fleet size, capacity, waiting time, travel delay, and operational costs for low- to medium-capacity vehicles, such as taxis and van shuttles. The algorithm is validated with ∼3 million rides extracted from the New York City taxicab public dataset. Our experimental study considers ride-sharing with rider capacity of up to 10 simultaneous passengers per vehicle. The algorithm applies to fleets of autonomous vehicles and also incorporates rebalancing of idling vehicles to areas of high demand. This framework is general and can be used for many real-time multivehicle, multitask assignment problems.
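
    The greedy first stage described here can be illustrated, in drastically simplified form, by matching each request to the nearest feasible vehicle; the paper's method then refines such an initial assignment with constrained optimization and handles unassigned requests through rebalancing. The distances, capacities and thresholds below are placeholders, not the paper's parameters.

      # Greedy first stage of a trip-vehicle assignment (a simplification of the
      # paper's pipeline): each request goes to the feasible vehicle whose added
      # detour is smallest; unassigned requests would later be handled by rebalancing.
      def greedy_assign(requests, vehicles, capacity=2, max_detour=10.0):
          load = {v: 0 for v in vehicles}
          assignment = {}
          for req, origin in requests:                 # requests: (name, pickup position)
              best, best_cost = None, float("inf")
              for veh, pos in vehicles.items():
                  detour = abs(pos - origin)           # 1-D stand-in for travel time
                  if load[veh] < capacity and detour <= max_detour and detour < best_cost:
                      best, best_cost = veh, detour
              if best is not None:
                  assignment[req] = best
                  load[best] += 1
          return assignment

      vehicles = {"taxi_A": 0.0, "taxi_B": 7.0}
      requests = [("r1", 1.0), ("r2", 6.5), ("r3", 8.0), ("r4", 2.0), ("r5", 30.0)]
      print(greedy_assign(requests, vehicles))         # r5 stays unassigned (too far)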

  17. Computing for Lattice QCD: new developments from the APE experiment

    Energy Technology Data Exchange (ETDEWEB)

    Ammendola, R [INFN, Sezione di Roma Tor Vergata, Roma (Italy); Biagioni, A; De Luca, S [INFN, Sezione di Roma, Roma (Italy)

    2008-06-15

    As Lattice QCD develops improved techniques to shed light on new physics, it demands increasing computing power. The aim of the current APE (Array Processor Experiment) project is to provide the reference computing platform to the Lattice QCD community for the period 2009-2011. We present the project proposal for a petaflops-range supercomputing center with high performance and low maintenance costs, to be delivered starting from 2010.

  18. Computing for Lattice QCD: new developments from the APE experiment

    International Nuclear Information System (INIS)

    Ammendola, R.; Biagioni, A.; De Luca, S.

    2008-01-01

    As Lattice QCD develops improved techniques to shed light on new physics, it demands increasing computing power. The aim of the current APE (Array Processor Experiment) project is to provide the reference computing platform to the Lattice QCD community for the period 2009-2011. We present the project proposal for a petaflops-range supercomputing center with high performance and low maintenance costs, to be delivered starting from 2010.

  19. Accumulative job demands and support for strength use: Fine-tuning the job demands-resources model using conservation of resources theory.

    Science.gov (United States)

    van Woerkom, Marianne; Bakker, Arnold B; Nishii, Lisa H

    2016-01-01

    Absenteeism associated with accumulated job demands is a ubiquitous problem. We build on prior research on the benefits of counteracting job demands with resources by focusing on a still untapped resource for buffering job demands-that of strengths use. We test the idea that employees who are actively encouraged to utilize their personal strengths on the job are better positioned to cope with job demands. Based on conservation of resources (COR) theory, we hypothesized that job demands can accumulate and together have an exacerbating effect on company registered absenteeism. In addition, using job demands-resources theory, we hypothesized that perceived organizational support for strengths use can buffer the impact of separate and combined job demands (workload and emotional demands) on absenteeism. Our sample consisted of 832 employees from 96 departments (response rate = 40.3%) of a Dutch mental health care organization. Results of multilevel analyses indicated that high levels of workload strengthen the positive relationship between emotional demands and absenteeism and that support for strength use interacted with workload and emotional job demands in the predicted way. Moreover, workload, emotional job demands, and strengths use interacted to predict absenteeism. Strengths use support reduced the level of absenteeism of employees who experienced both high workload and high emotional demands. We conclude that providing strengths use support to employees offers organizations a tool to reduce absenteeism, even when it is difficult to redesign job demands. (c) 2016 APA, all rights reserved).

  20. Application of Selective Algorithm for Effective Resource Provisioning in Cloud Computing Environment

    OpenAIRE

    Katyal, Mayanka; Mishra, Atul

    2014-01-01

    The continued modern-day demand for resource-hungry services and applications in the IT sector has led to the development of cloud computing. A cloud computing environment involves high-cost infrastructure on the one hand and the need for large-scale computational resources on the other. These resources need to be provisioned (allocated and scheduled) to end users in the most efficient manner so that the tremendous capabilities of the cloud are utilized effectively and efficiently. In this paper we discuss a selecti...

  1. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  2. The relationship between demand and need for orthodontic treatment in high school students in Bangkok.

    Science.gov (United States)

    Atisook, Pitraporn; Chuacharoen, Rattiya

    2014-07-01

    Orthodontic service is limited in Thailand and cannot meet the demand of the population. (1) To assess the need for orthodontic treatment (OT) using the Index of Orthodontic Treatment Need (IOTN) and to analyze the relationship between demand and need for OT, and (2) to compare the demand and need for OT between genders. A cross-sectional study was conducted on 450 students aged 12 to 14 years old in three government high schools in Bangkok. A constructed questionnaire was used to assess demand for OT. Clinical examination was done by two orthodontists to determine the need for OT using the IOTN. RESULTS: Most of the students (74.0%) wished to have OT, while only one-third (37.5%) had severe need and one-third (34.4%) had moderate need for OT as judged by the DHC of the IOTN. The AC of the IOTN indicated that most students (55.8%) had mild or no need for OT. Females (79%) demanded OT more than males (66%, p-value = 0.033), but the need was similar in both sexes. Most functional factors had strong relationships with the demand for OT except lower teeth biting on the palate, but none was found to be associated with need for OT. All of the aesthetic factors had strong relationships with demand for OT. There were significant relationships with needs in five categories: 1) crooked, crowded, or spacing teeth, 2) worried when speaking or smiling, 3) had been suggested for OT, 4) breath smell and halitosis, and 5) wanted to put on braces to be like other people or for fashionable reasons. Most of the students requested OT, but females had significantly higher demand for OT than males. Most of the sample needed to have OT. The aesthetic factors that had strong relationships with the need for OT were 1) crooked, crowded, or spacing teeth, 2) worried when speaking or smiling, 3) had been suggested for OT, 4) breath smell and halitosis, and 5) wanted to put on braces to be like other people or for fashionable reasons.

  3. Review on the applications of the very high speed computing technique to atomic energy field

    International Nuclear Information System (INIS)

    Hoshino, Tsutomu

    1981-01-01

    The demand for calculation in the atomic energy field is enormous; the physical and technological knowledge obtained from experiments is summarized into mathematical models and accumulated as computer programs for design, safety analysis and operational management. These calculation code systems are classified into reactor physics, reactor technology, operational management and nuclear fusion. In this paper, the demand for calculation speed in the diffusion and transport of neutrons, shielding, technological safety, core control and particle simulation is explained as typical calculations. These calculations are divided into two models: the fluid model, which regards physical systems as a continuum, and the particle model, which regards physical systems as composed of a finite number of particles. The speed of present computers is too slow, and a capability 1000 to 10000 times that of the present general-purpose machines is desirable. Calculation techniques for pipeline systems and parallel processor systems are described. As an example of a practical system, the computer network OCTOPUS at the Lawrence Livermore Laboratory is shown. The CHI system at UCLA is also introduced. (Kako, I.)

  4. Why Electricity Demand Is Highly Income-Elastic in Spain: A Cross-Country Comparison Based on an Index-Decomposition Analysis

    Directory of Open Access Journals (Sweden)

    Julián Pérez-García

    2017-03-01

    Full Text Available Since 1990, Spain has had one of the highest elasticities of electricity demand in the European Union. We provide an in-depth analysis into the causes of this high elasticity, and we examine how these same causes influence electricity demand in other European countries. To this end, we present an index-decomposition analysis of growth in electricity demand which allows us to identify three key factors in the relationship between gross domestic product (GDP) and electricity demand: (i) structural change; (ii) GDP growth; and (iii) intensity of electricity use. Our findings show that the main differences in electricity demand elasticities across countries and time are accounted for by the fast convergence in residential per capita electricity consumption. This convergence has almost concluded, and we expect the Spanish energy demand elasticity to converge to European standards in the near future.
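
    Index-decomposition analysis splits demand growth, E = sum over sectors of GDP x share_i x intensity_i, into contributions from activity, structure and intensity. The sketch below uses a simplified Laspeyres-style variant on a hypothetical two-sector economy; the paper's index and data are richer.

      # Simplified Laspeyres-style decomposition of electricity-demand growth into
      # activity (GDP), structure (sector shares) and intensity effects.  Numbers
      # and the two-sector split are hypothetical.
      gdp0, gdp1 = 100.0, 120.0
      shares0 = {"industry": 0.40, "services": 0.60}           # share of GDP
      shares1 = {"industry": 0.35, "services": 0.65}
      intens0 = {"industry": 0.9, "services": 0.4}             # kWh per unit of sectoral GDP
      intens1 = {"industry": 0.8, "services": 0.45}

      def demand(gdp, shares, intens):
          return sum(gdp * shares[s] * intens[s] for s in shares)

      e0, e1 = demand(gdp0, shares0, intens0), demand(gdp1, shares1, intens1)
      activity  = demand(gdp1, shares0, intens0) - e0
      structure = demand(gdp0, shares1, intens0) - e0
      intensity = demand(gdp0, shares0, intens1) - e0
      residual  = (e1 - e0) - (activity + structure + intensity)
      print(f"total {e1 - e0:+.1f}  activity {activity:+.1f}  structure {structure:+.1f} "
            f"intensity {intensity:+.1f}  interaction {residual:+.1f}")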

  5. Enabling Grid Computing resources within the KM3NeT computing model

    Directory of Open Access Journals (Sweden)

    Filippidis Christos

    2016-01-01

    Full Text Available KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that – located at the bottom of the Mediterranean Sea – will open a new window on the universe and answer fundamental questions both in particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, in their majority, adopt computing models consisting of different Tiers with several computing centres and providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and, usually, span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support the aforementioned demanding computing requirements we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for the KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.

  6. Distributed control system for demand response by servers

    Science.gov (United States)

    Hall, Joseph Edward

    Within the broad topical designation of smart grid, research in demand response, or demand-side management, focuses on investigating possibilities for electrically powered devices to adapt their power consumption patterns to better match generation and more efficiently integrate intermittent renewable energy sources, especially wind. Devices such as battery chargers, heating and cooling systems, and computers can be controlled to change the time, duration, and magnitude of their power consumption while still meeting workload constraints such as deadlines and rate of throughput. This thesis presents a system by which a computer server, or multiple servers in a data center, can estimate the power imbalance on the electrical grid and use that information to dynamically change the power consumption as a service to the grid. Implementation on a testbed demonstrates the system with a hypothetical but realistic usage case scenario of an online video streaming service in which there are workloads with deadlines (high-priority) and workloads without deadlines (low-priority). The testbed is implemented with real servers, estimates the power imbalance from the grid frequency with real-time measurements of the live outlet, and uses a distributed, real-time algorithm to dynamically adjust the power consumption of the servers based on the frequency estimate and the throughput of video transcoder workloads. Analysis of the system explains and justifies multiple design choices, compares the significance of the system in relation to similar publications in the literature, and explores the potential impact of the system.
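
    The core control idea, estimating the supply-demand imbalance from grid frequency and adjusting flexible server workload accordingly, can be sketched as a simple proportional rule. The gain, limits and baseline below are illustrative assumptions, not parameters from the thesis.

      # Proportional response of a server cluster to grid frequency deviation:
      # below-nominal frequency (generation deficit) -> shed low-priority work,
      # above-nominal frequency (surplus) -> admit more.  Constants are illustrative.
      NOMINAL_HZ = 60.0
      SENSITIVITY_W_PER_HZ = 2000.0        # assumed power adjustment per Hz of deviation
      P_MIN, P_MAX = 400.0, 1200.0         # watts reserved for high-priority work / rack limit

      def target_power(measured_hz, baseline_w=900.0):
          deviation = measured_hz - NOMINAL_HZ
          return max(P_MIN, min(P_MAX, baseline_w + SENSITIVITY_W_PER_HZ * deviation))

      for f in (59.95, 60.00, 60.04):
          print(f"grid {f:.2f} Hz -> run low-priority transcoding at {target_power(f):.0f} W")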

  7. Preliminary energy demand studies for Ireland: base case and high case for 1980, 1985 and 1990

    Energy Technology Data Exchange (ETDEWEB)

    Henry, E W

    1981-01-01

    The framework of the Base Case and the High Case for 1990 for Ireland, related to the demand modules of the medium-term European Communities (EC) Energy Model, is described. The modules are: Multi-national Macro-economic Module (EURECA); National Input-Output Model (EXPLOR); and National Energy Demand Model (EDM). The final results of EXPLOR and EDM are described: one set relates to the Base Case and the other to the High Case. The forecast or projection is termed the Base Case because oil prices are assumed to increase at the same rate as general price inflation. The other forecast is termed the High Case because oil prices are assumed to increase at 5% per year more rapidly than general price inflation. The EXPLOR-EDM methodology is described. The lack of data on energy price elasticities for Ireland is noted. A comparison of the Base Case with the High Case is made. (MCW)

  8. High-Degree Neurons Feed Cortical Computations.

    Directory of Open Access Journals (Sweden)

    Nicholas M Timme

    2016-05-01

    Full Text Available Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This preparation and recording method combination produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. These are the first results to show that the extent to
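
    Transfer entropy, one of the two tools named above, measures how much knowing the source's past reduces uncertainty about the target's next state beyond the target's own past: TE(X -> Y) = sum over (y_{t+1}, y_t, x_t) of p(y_{t+1}, y_t, x_t) log2[ p(y_{t+1} | y_t, x_t) / p(y_{t+1} | y_t) ]. The following is a minimal history-length-1 estimator for binarized spike trains, a simplification of what the study computes on multi-electrode data; the test series are synthetic.

      import numpy as np
      from collections import Counter

      def transfer_entropy(x, y):
          # TE(X -> Y) in bits, history length 1, for discrete (e.g. binarized spike) series.
          triples = Counter(zip(y[1:], y[:-1], x[:-1]))
          pairs_source = Counter(zip(y[:-1], x[:-1]))
          pairs_self = Counter(zip(y[1:], y[:-1]))
          singles = Counter(y[:-1])
          n = len(y) - 1
          te = 0.0
          for (y1, y0, x0), c in triples.items():
              p_joint = c / n
              p_full = c / pairs_source[(y0, x0)]          # p(y1 | y0, x0)
              p_self = pairs_self[(y1, y0)] / singles[y0]  # p(y1 | y0)
              te += p_joint * np.log2(p_full / p_self)
          return te

      rng = np.random.default_rng(0)
      x = list(rng.integers(0, 2, 5000))
      y = [0] + x[:-1]                 # y copies x with a one-step delay
      print(f"TE(x -> y) ~ {transfer_entropy(x, y):.3f} bits")   # close to 1 bit
      print(f"TE(y -> x) ~ {transfer_entropy(y, x):.3f} bits")   # close to 0 bits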

  9. Demand response in energy markets

    International Nuclear Information System (INIS)

    Skytte, K.; Birk Mortensen, J.

    2004-11-01

    Improving the ability of energy demand to respond to wholesale prices during critical periods of the spot market can reduce the total costs of reliably meeting demand, and the level and volatility of the prices. This fact has led to a growing interest in the short-run demand response. There has especially been a growing interest in the electricity market where peak-load periods with high spot prices and occasional local blackouts have recently been seen. Market concentration at the supply side can result in even higher peak-load prices. Demand response by shifting demand from peak to base-load periods can counteract the market power in the peak-load. However, demand response has so far been modest since the current short-term price elasticity seems to be small. This is also the case for related markets, for example, green certificates where the demand is determined as a percentage of the power demand, or for heat and natural gas markets. This raises a number of interesting research issues: 1) Demand response in different energy markets, 2) Estimation of price elasticity and flexibility, 3) Stimulation of demand response, 4) Regulation, policy and modelling aspects, 5) Demand response and market power at the supply side, 6) Energy security of supply, 7) Demand response in forward, spot, ancillary service, balance and capacity markets, 8) Demand response in deviated markets, e.g., emission, futures, and green certificate markets, 9) Value of increased demand response, 10) Flexible households. (BA)

  10. Quantum Accelerators for High-Performance Computing Systems

    OpenAIRE

    Britt, Keith A.; Mohiyaddin, Fahd A.; Humble, Travis S.

    2017-01-01

    We define some of the programming and system-level challenges facing the application of quantum processing to high-performance computing. Alongside barriers to physical integration, prominent differences in the execution of quantum and conventional programs challenges the intersection of these computational models. Following a brief overview of the state of the art, we discuss recent advances in programming and execution models for hybrid quantum-classical computing. We discuss a novel quantu...

  11. Computer-aided engineering in High Energy Physics

    International Nuclear Information System (INIS)

    Bachy, G.; Hauviller, C.; Messerli, R.; Mottier, M.

    1988-01-01

    Computing, a standard tool for a long time in the High Energy Physics community, is slowly being introduced at CERN in the mechanical engineering field. The first major application was structural analysis, followed by Computer-Aided Design (CAD). Development work is now progressing towards Computer-Aided Engineering around a powerful data base. This paper gives examples of the power of this approach applied to engineering for accelerators and detectors

  12. Computer controlled high voltage system

    Energy Technology Data Exchange (ETDEWEB)

    Kunov, B; Georgiev, G; Dimitrov, L [and others]

    1996-12-31

    A multichannel computer-controlled high-voltage power supply system is developed. The basic technical parameters of the system are: output voltage: 100-3000 V, output current: 0-3 mA, maximum number of channels in one crate: 78. 3 refs.

  13. High-Precision Computation and Mathematical Physics

    International Nuclear Information System (INIS)

    Bailey, David H.; Borwein, Jonathan M.

    2008-01-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, Ising theory, quantum field theory and experimental mathematics. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
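
    As a small illustration of software-assisted precision beyond IEEE 64-bit arithmetic, the sketch below computes sqrt(2) to 50 significant digits with Python's standard-library decimal module; the applications surveyed rely on dedicated high-precision packages rather than this toy.

      from decimal import Decimal, getcontext

      getcontext().prec = 50            # 50 significant digits of working precision

      def sqrt2_newton(iterations=8):
          x = Decimal(1)
          for _ in range(iterations):
              x = (x + Decimal(2) / x) / 2   # Newton iteration for sqrt(2)
          return x

      print(sqrt2_newton())
      print(Decimal(2).sqrt())          # built-in reference value at the same precision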

  14. High performance parallel computers for science

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1989-01-01

    This paper reports that Fermilab's Advanced Computer Program (ACP) has been developing cost-effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 Mflops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction

  15. Computer proficiency questionnaire: assessing low and high computer proficient seniors.

    Science.gov (United States)

    Boot, Walter R; Charness, Neil; Czaja, Sara J; Sharit, Joseph; Rogers, Wendy A; Fisk, Arthur D; Mitzner, Tracy; Lee, Chin Chin; Nair, Sankaran

    2015-06-01

    Computers and the Internet have the potential to enrich the lives of seniors and aid in the performance of important tasks required for independent living. A prerequisite for reaping these benefits is having the skills needed to use these systems, which is highly dependent on proper training. One prerequisite for efficient and effective training is being able to gauge current levels of proficiency. We developed a new measure (the Computer Proficiency Questionnaire, or CPQ) to measure computer proficiency in the domains of computer basics, printing, communication, Internet, calendaring software, and multimedia use. Our aim was to develop a measure appropriate for individuals with a wide range of proficiencies from noncomputer users to extremely skilled users. To assess the reliability and validity of the CPQ, a diverse sample of older adults, including 276 older adults with no or minimal computer experience, was recruited and asked to complete the CPQ. The CPQ demonstrated excellent reliability (Cronbach's α = .98), with subscale reliabilities ranging from .86 to .97. Age, computer use, and general technology use all predicted CPQ scores. Factor analysis revealed three main factors of proficiency related to Internet and e-mail use; communication and calendaring; and computer basics. Based on our findings, we also developed a short-form CPQ (CPQ-12) with similar properties but 21 fewer questions. The CPQ and CPQ-12 are useful tools to gauge computer proficiency for training and research purposes, even among low computer proficient older adults. © The Author 2013. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  16. An infrastructure with a unified control plane to integrate IP into optical metro networks to provide flexible and intelligent bandwidth on demand for cloud computing

    Science.gov (United States)

    Yang, Wei; Hall, Trevor

    2012-12-01

    The Internet is entering an era of cloud computing to provide more cost-effective, eco-friendly and reliable services to consumer and business users, and the nature of Internet traffic will undergo a fundamental transformation. Consequently, the current Internet will no longer suffice for serving cloud traffic in metro areas. This work proposes an infrastructure with a unified control plane that integrates simple packet aggregation technology with optical express through the interoperation between IP routers and electrical traffic controllers in optical metro networks. The proposed infrastructure provides flexible, intelligent, and eco-friendly bandwidth on demand for cloud computing in metro areas.

  17. Residential demand response reduces air pollutant emissions on peak electricity demand days in New York City

    International Nuclear Information System (INIS)

    Gilbraith, Nathaniel; Powers, Susan E.

    2013-01-01

    Many urban areas in the United States have experienced difficulty meeting the National Ambient Air Quality Standards (NAAQS), partially due to pollution from electricity generating units. We evaluated the potential for residential demand response to reduce pollutant emissions on days with above average pollutant emissions and a high potential for poor air quality. The study focused on New York City (NYC) due to non-attainment with NAAQS standards, large exposed populations, and the existing goal of reducing pollutant emissions. The baseline demand response scenario simulated a 1.8% average reduction in NYC peak demand on 49 days throughout the summer. Nitrogen oxide and particulate matter less than 2.5 μm in diameter emission reductions were predicted to occur (−70, −1.1 metric tons (MT) annually), although these were not likely to be sufficient for NYC to meet the NAAQS. Air pollution mediated damages were predicted to decrease by $100,000–$300,000 annually. A sensitivity analysis predicted that substantially larger pollutant emission reductions would occur if electricity demand was shifted from daytime hours to nighttime hours, or the total consumption decreased. Policies which incentivize shifting electricity consumption away from periods of high human and environmental impacts should be implemented, including policies directed toward residential consumers. Highlights: • The impact of residential demand response on air emissions was modeled. • Residential demand response will decrease pollutant emissions in NYC. • Emission reductions occur during periods with high potential for poor air quality. • Shifting demand to nighttime hours was more beneficial than shifting to off-peak daytime hours.
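
    The first-order arithmetic behind such estimates multiplies the load shed during events by the marginal emission factor of the displaced generation. The figures in the sketch below are placeholders, not the study's values.

      # Back-of-envelope avoided-emissions estimate: demand reduction times the
      # marginal emission factor of the units displaced during the event.
      reduction_mw = 200.0                 # assumed average load shed during events
      event_hours_per_year = 49 * 4        # 49 event days, 4 hours each (assumed)
      nox_kg_per_mwh = 0.35                # assumed marginal NOx emission factor

      avoided_mwh = reduction_mw * event_hours_per_year
      print(f"avoided generation: {avoided_mwh:,.0f} MWh/yr")
      print(f"avoided NOx: {avoided_mwh * nox_kg_per_mwh / 1000:.1f} t/yr")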

  18. Cognitive task demands, self-control demands and the mental well-being of office workers.

    Science.gov (United States)

    Bridger, Robert S; Brasher, Kate

    2011-09-01

    The cognitive task demands of office workers and the self-control demands of their work roles were measured in a sample of 196 employees in two different office layouts using a self-report questionnaire, which was circulated electronically. Multiple linear regression analysis revealed that both factors were associated with mental well-being, but not with physical well-being, while controlling for exposure to psychosocial stressors. The interaction between cognitive task demands and self-control demands had the strongest association with mental well-being, suggesting that the deleterious effect of one was greater when the other was present. An exploratory analysis revealed that the association was stronger for employees working in a large open-plan office than for those working in smaller offices with more privacy. Frustration of work goals was the cognitive task demand having the strongest negative impact on mental well-being. Methodological limitations and scale psychometrics (particularly the use of the NASA Task Load Index) are discussed. STATEMENT OF RELEVANCE: Modern office work has high mental demands and low physical demands and there is a need to design offices to prevent adverse psychological reactions. It is shown that cognitive task demands interact with self-control demands to degrade mental well-being. The association was stronger in an open-plan office.

  19. THE ELASTICITY OF EXPORT DEMAND FOR US COTTON

    OpenAIRE

    Paudel, Laxmi; Houston, Jack E.; Adhikari, Murali; Devkota, Nirmala

    2004-01-01

    There exist conflicting views among researchers about the magnitude of US cotton export demand elasticity, ranging from highly inelastic to highly elastic. An Armington model was used to analyze the export demand elasticity of US cotton. Our analysis confirms the elastic nature of US cotton export demand.

  20. After Installation: Ubiquitous Computing and High School Science in Three Experienced, High-Technology Schools

    Science.gov (United States)

    Drayton, Brian; Falk, Joni K.; Stroud, Rena; Hobbs, Kathryn; Hammerman, James

    2010-01-01

    There are few studies of the impact of ubiquitous computing on high school science, and the majority of studies of ubiquitous computing report only on the early stages of implementation. The present study presents data on 3 high schools with carefully elaborated ubiquitous computing systems that have gone through at least one "obsolescence cycle"…

  1. Optical interconnection networks for high-performance computing systems

    International Nuclear Information System (INIS)

    Biberman, Aleksandr; Bergman, Keren

    2012-01-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers. (review article)

  2. Separation of metabolic supply and demand: aerobic glycolysis as a normal physiological response to fluctuating energetic demands in the membrane.

    Science.gov (United States)

    Epstein, Tamir; Xu, Liping; Gillies, Robert J; Gatenby, Robert A

    2014-01-01

    Cancer cells, and a variety of normal cells, exhibit aerobic glycolysis, high rates of glucose fermentation in the presence of normal oxygen concentrations, also known as the Warburg effect. This metabolism is considered abnormal because it violates the standard model of cellular energy production that assumes glucose metabolism is predominantly governed by oxygen concentrations and, therefore, fermentative glycolysis is an emergency back-up for periods of hypoxia. Though several hypotheses have been proposed for the origin of aerobic glycolysis, its biological basis in cancer and normal cells is still not well understood. We examined changes in glucose metabolism following perturbations in membrane activity in different normal and tumor cell lines and found that inhibition or activation of pumps on the cell membrane led to reduction or increase in glycolysis, respectively, while oxidative phosphorylation remained unchanged. Computational simulations demonstrated that these findings are consistent with a new model of normal physiological cellular metabolism in which efficient mitochondrial oxidative phosphorylation supplies chronic energy demand primarily for macromolecule synthesis and glycolysis is necessary to supply rapid energy demands primarily to support membrane pumps. A specific model prediction was that the spatial distribution of ATP-producing enzymes in the glycolytic pathway must be primarily localized adjacent to the cell membrane, while mitochondria should be predominantly peri-nuclear. The predictions were confirmed experimentally. Our results show that glycolytic metabolism serves a critical physiological function under normoxic conditions by responding to rapid energetic demand, mainly from membrane transport activities, even in the presence of oxygen. This supports a new model for glucose metabolism in which glycolysis and oxidative phosphorylation supply different types of energy demand. Cells use efficient but slow-responding aerobic metabolism
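
    The proposed separation of supply and demand can be caricatured as a two-pathway budget: a steady, efficient pathway sized to the chronic demand and a fast pathway that absorbs transient spikes. The toy simulation below only illustrates that logic; all rates are arbitrary units and it is not the authors' computational model.

      import random

      random.seed(1)
      OXPHOS_SUPPLY = 1.0        # steady ATP supply per time step (arbitrary units)
      BASELINE_DEMAND = 1.0      # chronic demand, e.g. macromolecule synthesis

      glycolytic_supply = []
      for _ in range(1000):
          spike = 0.8 if random.random() < 0.2 else 0.0          # transient membrane-pump burst
          demand = BASELINE_DEMAND + spike
          glycolytic_supply.append(max(0.0, demand - OXPHOS_SUPPLY))  # fast pathway takes the excess

      active = sum(g > 0 for g in glycolytic_supply) / len(glycolytic_supply)
      print(f"glycolysis active in {active:.0%} of steps, "
            f"supplying {sum(glycolytic_supply) / len(glycolytic_supply):.2f} a.u. on average")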

  3. Inkjet metrology: high-accuracy mass measurements of microdroplets produced by a drop-on-demand dispenser.

    Science.gov (United States)

    Verkouteren, R Michael; Verkouteren, Jennifer R

    2009-10-15

    We describe gravimetric methods for measuring the mass of droplets generated by a drop-on-demand (DOD) microdispenser. Droplets are deposited, either continuously at a known frequency or as a burst of known number, into a cylinder positioned on a submicrogram balance. Mass measurements are acquired precisely by computer, and results are corrected for evaporation. Capabilities are demonstrated using isobutyl alcohol droplets. For ejection rates greater than 100 Hz, the repeatability of droplet mass measurements was 0.2%, while the combined relative standard uncertainty (u(c)) was 0.9%. When bursts of droplets were dispensed, the limit of quantitation was 72 microg (1490 droplets) with u(c) = 1.0%. Individual droplet size in a burst was evaluated by high-speed videography. Diameters were consistent from the tenth droplet onward, and the mass of an individual droplet was best estimated by the average droplet mass with a combined uncertainty of about 1%. Diameters of the first several droplets were anomalous, but their contribution was accounted for when dispensing bursts. Above the limits of quantitation, the gravimetric methods provided statistically equivalent results and permit detailed study of operational factors that influence droplet mass during dispensing, including the development of reliable microassays and standard materials using DOD technologies.
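
    The underlying calculation is straightforward: the dispensed mass equals the balance reading plus the evaporation correction, and dividing by the droplet count gives the mean droplet mass. A sketch with illustrative numbers (not the paper's calibration data):

      # Gravimetric estimate of mean droplet mass from a burst of n droplets,
      # corrected for solvent evaporation during the weighing interval.
      def mean_droplet_mass_ug(balance_reading_ug, n_droplets,
                               evaporation_rate_ug_per_s, elapsed_s):
          dispensed = balance_reading_ug + evaporation_rate_ug_per_s * elapsed_s
          return dispensed / n_droplets

      m = mean_droplet_mass_ug(balance_reading_ug=71.2, n_droplets=1490,
                               evaporation_rate_ug_per_s=0.05, elapsed_s=20.0)
      print(f"mean droplet mass ~ {m * 1000:.1f} ng")   # roughly 48 ng per droplet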

  4. A hybrid optical switch architecture to integrate IP into optical networks to provide flexible and intelligent bandwidth on demand for cloud computing

    Science.gov (United States)

    Yang, Wei; Hall, Trevor J.

    2013-12-01

    The Internet is entering an era of cloud computing to provide more cost effective, eco-friendly and reliable services to consumer and business users. As a consequence, the nature of the Internet traffic has been fundamentally transformed from a pure packet-based pattern to today's predominantly flow-based pattern. Cloud computing has also brought about an unprecedented growth in the Internet traffic. In this paper, a hybrid optical switch architecture is presented to deal with the flow-based Internet traffic, aiming to offer flexible and intelligent bandwidth on demand to improve fiber capacity utilization. The hybrid optical switch is capable of integrating IP into optical networks for cloud-based traffic with predictable performance, for which the delay performance of the electronic module in the hybrid optical switch architecture is evaluated through simulation.

  5. High Resolution Map of Water Supply and Demand for North East United States

    Science.gov (United States)

    Ehsani, N.; Vorosmarty, C. J.; Fekete, B. M.

    2012-12-01

    Accurate estimates of water supply and demand are crucial elements in water resources management and modeling. As part of our NSF-funded EaSM effort to build a Northeast Regional Earth System Model (NE-RESM) as a framework to improve our understanding and capacity to forecast the implications of planning decisions on the region's environment, ecosystem services, energy and economic systems through the 21st century, we are producing a high resolution map (3' x 3' lat/long) of estimated water supply and use for the northeast region of the United States. Focusing on water demand, results from this study enable us to quantify how demand sources affect the hydrology and thermal-chemical water pollution across the region. To generate this 3-minute resolution map, in which each grid cell has a specific estimated monthly domestic, agricultural, thermoelectric and industrial water use, Estimated Use of Water in the United States in 2005 (Kenny et al., 2009) is being coupled to high resolution land cover and land use, irrigation, power plant and population data sets. In addition to water demands, we tried to improve estimates of water supply from the WBM model by improving the way it controls discharge from reservoirs. Reservoirs are key characteristics of the modern hydrologic system, with a particular impact on altering the natural stream flow, thermal characteristics, and biogeochemical fluxes of rivers. Depending on dam characteristics, watershed characteristics and the purpose of building a dam, each reservoir has a specific optimum operating rule. It means that literally 84,000 dams in the National Inventory of Dams potentially follow 84,000 different sets of rules for storing and releasing water, which must somehow be accounted for in our modeling exercise. In reality, there is no comprehensive observational dataset depicting these operating rules. Thus, we will simulate these rules. Our perspective is not to find the optimum operating rule per se but to find

  6. Speed and path control for conflict-free flight in high air traffic demand in terminal airspace

    Science.gov (United States)

    Rezaei, Ali

    To accommodate the growing air traffic demand, flights will need to be planned and navigated with a much higher level of precision than today's aircraft flight path. The Next Generation Air Transportation System (NextGen) stands to benefit significantly in safety and efficiency from such movement of aircraft along precisely defined paths. Air Traffic Operations (ATO) relying on such precision--the Precision Air Traffic Operations or PATO--are the foundation of the high throughput capacity envisioned for future airports. In PATO, the preferred method is to manage the air traffic by assigning a speed profile to each aircraft in a given fleet in a given airspace (in practice known as speed control). In this research, an algorithm has been developed, set in the context of a Hybrid Control System (HCS) model, that determines whether a speed control solution exists for a given fleet of aircraft in a given airspace and, if so, computes this solution as a collective speed profile that assures separation if executed without deviation. Uncertainties such as weather are not considered, but the algorithm can be modified to include uncertainties. The algorithm first computes all feasible sequences (i.e., all sequences that allow the given fleet of aircraft to reach destinations without violating the FAA's separation requirement) by looking at all pairs of aircraft. Then, the most likely sequence is determined and the speed control solution is constructed by backward trajectory generation, starting with the last aircraft out and proceeding to the first out. This computation can be done for different sequences in parallel, which helps to reduce the computation time. If such a solution does not exist, then the algorithm calculates a minimal path modification (known as path control) that will allow separation-compliant speed control. We will also prove that the algorithm will modify the path without creating a new separation violation. The new path will be generated by adding new

  7. Digital imaging of root traits (DIRT): a high-throughput computing and collaboration platform for field-based root phenomics.

    Science.gov (United States)

    Das, Abhiram; Schneider, Hannah; Burridge, James; Ascanio, Ana Karine Martinez; Wojciechowski, Tobias; Topp, Christopher N; Lynch, Jonathan P; Weitz, Joshua S; Bucksch, Alexander

    2015-01-01

    Plant root systems are key drivers of plant function and yield. They are also under-explored targets to meet global food and energy demands. Many new technologies have been developed to characterize crop root system architecture (CRSA). These technologies have the potential to accelerate the progress in understanding the genetic control and environmental response of CRSA. Putting this potential into practice requires new methods and algorithms to analyze CRSA in digital images. Most prior approaches have solely focused on the estimation of root traits from images, yet no integrated platform exists that allows easy and intuitive access to trait extraction and analysis methods from images combined with storage solutions linked to metadata. Automated high-throughput phenotyping methods are increasingly used in laboratory-based efforts to link plant genotype with phenotype, whereas similar field-based studies remain predominantly manual low-throughput. Here, we present an open-source phenomics platform "DIRT", as a means to integrate scalable supercomputing architectures into field experiments and analysis pipelines. DIRT is an online platform that enables researchers to store images of plant roots, measure dicot and monocot root traits under field conditions, and share data and results within collaborative teams and the broader community. The DIRT platform seamlessly connects end-users with large-scale compute "commons" enabling the estimation and analysis of root phenotypes from field experiments of unprecedented size. DIRT is an automated high-throughput computing and collaboration platform for field based crop root phenomics. The platform is accessible at http://www.dirt.iplantcollaborative.org/ and hosted on the iPlant cyber-infrastructure using high-throughput grid computing resources of the Texas Advanced Computing Center (TACC). DIRT is a high volume central depository and high-throughput RSA trait computation platform for plant scientists working on crop roots

  8. High-Precision Computation: Mathematical Physics and Dynamics

    International Nuclear Information System (INIS)

    Bailey, D.H.; Barrio, R.; Borwein, J.M.

    2010-01-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, studies of the fine structure constant, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, experimental mathematics, evaluation of orthogonal polynomials, numerical integration of ODEs, computation of periodic orbits, studies of the splitting of separatrices, detection of strange nonchaotic attractors, Ising theory, quantum field theory, and discrete dynamical systems. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.

  9. High-Precision Computation: Mathematical Physics and Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, D. H.; Barrio, R.; Borwein, J. M.

    2010-04-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, studies of the fine structure constant, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, experimental mathematics, evaluation of orthogonal polynomials, numerical integration of ODEs, computation of periodic orbits, studies of the splitting of separatrices, detection of strange nonchaotic attractors, Ising theory, quantum field theory, and discrete dynamical systems. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.

  10. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  11. Cloud Computing with iPlant Atmosphere.

    Science.gov (United States)

    McKay, Sheldon J; Skidmore, Edwin J; LaRose, Christopher J; Mercer, Andre W; Noutsos, Christos

    2013-10-15

    Cloud Computing refers to distributed computing platforms that use virtualization software to provide easy access to physical computing infrastructure and data storage, typically administered through a Web interface. Cloud-based computing provides access to powerful servers, with specific software and virtual hardware configurations, while eliminating the initial capital cost of expensive computers and reducing the ongoing operating costs of system administration, maintenance contracts, power consumption, and cooling. This eliminates a significant barrier to entry into bioinformatics and high-performance computing for many researchers. This is especially true of free or modestly priced cloud computing services. The iPlant Collaborative offers a free cloud computing service, Atmosphere, which allows users to easily create and use instances on virtual servers preconfigured for their analytical needs. Atmosphere is a self-service, on-demand platform for scientific computing. This unit demonstrates how to set up, access and use cloud computing in Atmosphere. Copyright © 2013 John Wiley & Sons, Inc.

  12. AELAS: Automatic ELAStic property derivations via high-throughput first-principles computation

    Science.gov (United States)

    Zhang, S. H.; Zhang, R. F.

    2017-11-01

    The elastic properties are fundamental and important for crystalline materials as they relate to other mechanical properties, various thermodynamic qualities as well as some critical physical properties. However, a complete set of experimentally determined elastic properties is only available for a small subset of known materials, and an automatic scheme for the derivation of elastic properties that is adapted to high-throughput computation is much in demand. In this paper, we present the AELAS code, an automated program for calculating second-order elastic constants of both two-dimensional and three-dimensional single crystal materials with any symmetry, which is designed mainly for high-throughput first-principles computation. Other derivations of general elastic properties such as Young's, bulk and shear moduli as well as Poisson's ratio of polycrystal materials, Pugh ratio, Cauchy pressure, elastic anisotropy and elastic stability criterion, are also implemented in this code. The implementation of the code has been critically validated by extensive evaluations and tests on a broad class of materials including two-dimensional and three-dimensional materials, demonstrating its efficiency and capability for high-throughput screening of specific materials with targeted mechanical properties. Program Files doi: http://dx.doi.org/10.17632/f8fwg4j9tw.1. Licensing provisions: BSD 3-Clause. Programming language: Fortran. Nature of problem: To automate the calculations of second-order elastic constants and the derivations of other elastic properties for two-dimensional and three-dimensional materials with any symmetry via high-throughput first-principles computation. Solution method: The space-group number is first determined by the SPGLIB code [1] and the structure is then redefined to a unit cell in IEEE format [2]. Secondly, based on the determined space-group number, a set of distortion modes is automatically specified and the distorted structure files are generated
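
    The derived quantities listed above (Young's, bulk and shear moduli, Poisson's ratio, Pugh ratio) follow standard relations once the 6x6 stiffness matrix is known. The sketch below applies the Voigt averages to a generic cubic stiffness matrix; it is an independent illustration of those formulas, not AELAS output or code.

      import numpy as np

      # Voigt averages and derived polycrystal properties from a 6x6 stiffness
      # matrix (GPa, Voigt notation).  The matrix below is a generic cubic example.
      C = np.zeros((6, 6))
      C11, C12, C44 = 250.0, 120.0, 100.0
      C[:3, :3] = C12
      np.fill_diagonal(C[:3, :3], C11)
      C[3, 3] = C[4, 4] = C[5, 5] = C44

      K_V = (C[0, 0] + C[1, 1] + C[2, 2] + 2 * (C[0, 1] + C[0, 2] + C[1, 2])) / 9.0
      G_V = (C[0, 0] + C[1, 1] + C[2, 2] - (C[0, 1] + C[0, 2] + C[1, 2])
             + 3 * (C[3, 3] + C[4, 4] + C[5, 5])) / 15.0
      E = 9 * K_V * G_V / (3 * K_V + G_V)                # Young's modulus
      nu = (3 * K_V - 2 * G_V) / (2 * (3 * K_V + G_V))   # Poisson's ratio
      print(f"K_V = {K_V:.1f} GPa, G_V = {G_V:.1f} GPa, E = {E:.1f} GPa, "
            f"nu = {nu:.3f}, Pugh ratio G/K = {G_V / K_V:.2f}")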

  13. CRITICAL ISSUES IN HIGH END COMPUTING - FINAL REPORT

    Energy Technology Data Exchange (ETDEWEB)

    Corones, James [Krell Institute

    2013-09-23

    High-end computing (HEC) has been a driver for advances in science and engineering for the past four decades. Increasingly, HEC has become a significant element in the national security, economic vitality, and competitiveness of the United States. Advances in HEC provide results that cut across traditional disciplinary and organizational boundaries. This program provides opportunities to share information about HEC systems and computational techniques across multiple disciplines and organizations through conferences and exhibitions of HEC advances held in Washington DC so that mission agency staff, scientists, and industry can come together with White House, Congressional and Legislative staff in an environment conducive to the sharing of technical information, accomplishments, goals, and plans. A common thread across this series of conferences is the understanding of computational science and applied mathematics techniques across a diverse set of application areas of interest to the Nation. The specific objectives of this program are: Program Objective 1. To provide opportunities to share information about advances in high-end computing systems and computational techniques between mission critical agencies, agency laboratories, academics, and industry. Program Objective 2. To gather pertinent data, address specific topics of wide interest to mission critical agencies. Program Objective 3. To promote a continuing discussion of critical issues in high-end computing. Program Objective 4. To provide a venue where a multidisciplinary scientific audience can discuss the difficulties applying computational science techniques to specific problems and can specify future research that, if successful, will eliminate these problems.

  14. Harnessing the power of demand

    Energy Technology Data Exchange (ETDEWEB)

    Sheffrin, Anjali; Yoshimura, Henry; LaPlante, David; Neenan, Bernard

    2008-03-15

    Demand response can provide a series of economic services to the market and also provide "insurance value" under low-likelihood but high-impact circumstances in which grid reliability is enhanced. Here is how ISOs and RTOs are fostering demand response within wholesale electricity markets. (author)

  15. Development and application of computer network for working out of researches on high energy physics

    International Nuclear Information System (INIS)

    Boos, Eh.G.; Tashimov, M.A.

    2001-01-01

    The computer network of the Physical and Technological Institute of the Ministry of Science and Education of the Republic of Kazakhstan (FTI of MSE RK) joins a number of research institutions, leading universities and other organizations in the city of Almaty. At present more than 350 computers are connected to this network, and the satellite channel speed has been raised to 192 kbit/s per receiving point. The university segments of the network have been separated into an individual domain. New software for the analysis and processing of experimental data has been implemented, and other measures have been carried out as well. However, the growing volume of information exchange between nuclear physics centres demands further development of the network. To meet users' demands for information exchange in the coming years, the paper therefore considers the following measures: (1) increasing the satellite channel speed to 1-2 Mbit/s by replacing the existing SDM-100 modem with a faster one; the Kedr-M station and the CISCO-2501 router currently in use can support this speed; (2) converting the Institute's local area network to the new Fast Ethernet technology, raising the transmission speed to 100 Mbit/s while remaining fully compatible with the existing Ethernet; (3) installing a proxy server (firewall) on the network, which would relieve the load on the satellite channel and isolate the network segment used for Internet-based learning without detriment to the educational process. In the framework of cooperation with the DESY accelerator centre in Germany, data on about two hundred thousand deep inelastic electron-proton interactions measured with the ZEUS detector have been obtained over this network. Data on about ten thousand events simulated at the OPAL installation have been received as well. In addition, the computer network is used for day-to-day information exchange and

  16. Estimating Reduced Consumption for Dynamic Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Chelmis, Charalampos [Univ. of Southern California, Los Angeles, CA (United States); Aman, Saima [Univ. of Southern California, Los Angeles, CA (United States); Saeed, Muhammad Rizwan [Univ. of Southern California, Los Angeles, CA (United States); Frincu, Marc [Univ. of Southern California, Los Angeles, CA (United States); Prasanna, Viktor K. [Univ. of Southern California, Los Angeles, CA (United States)

    2015-01-30

    Growing demand is straining our existing electricity generation facilities and requires active participation of the utility and the consumers to achieve energy sustainability. One of the most effective and widely used ways to achieve this goal in the smart grid is demand response (DR), whereby consumers reduce their electricity consumption in response to a request sent from the utility whenever it anticipates a peak in demand. To successfully plan and implement demand response, the utility requires a reliable estimate of reduced consumption during DR. This also helps in the optimal selection of consumers and curtailment strategies during DR. While much work has been done on predicting normal consumption, reduced consumption prediction is an open problem that is under-studied. In this paper, we introduce and formalize the problem of reduced consumption prediction and discuss the challenges associated with it. We also describe computational methods that use historical DR data as well as pre-DR conditions to make such predictions. Our experiments are conducted in the real-world setting of a university campus microgrid, and our preliminary results set the foundation for more detailed modeling.
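
    As a minimal sketch of the kind of prediction described here (not the authors' method), one could regress the load observed during past DR events on pre-event conditions; the features, values and model choice below are illustrative assumptions:

        import numpy as np
        from sklearn.linear_model import Ridge

        # Made-up training data: one row per historical DR event for a single building.
        X_train = np.array([
            [220.0, 31.0],      # mean load 3 h before the event (kW), outdoor temperature (C)
            [240.0, 33.5],
            [180.0, 28.0],
            [260.0, 35.0],
        ])
        y_train = np.array([170.0, 185.0, 150.0, 200.0])   # observed load during those events (kW)

        model = Ridge(alpha=1.0).fit(X_train, y_train)

        # Predict the load during an upcoming event and the expected curtailment.
        predicted_dr_load = model.predict(np.array([[230.0, 32.0]]))[0]
        normal_load_forecast = 235.0                        # from a separate baseline model (assumed)
        print(f"expected curtailment ~ {normal_load_forecast - predicted_dr_load:.1f} kW")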

  17. Parallel computing for event reconstruction in high-energy physics

    International Nuclear Information System (INIS)

    Wolbers, S.

    1993-01-01

    Parallel computing has been recognized as a solution to large computing problems. In High Energy Physics, offline event reconstruction of detector data is a very large computing problem that has been solved with parallel computing techniques. A review is given of the parallel programming package CPS (Cooperative Processes Software), developed and used at Fermilab for offline reconstruction of Terabytes of data requiring the delivery of hundreds of Vax-Years per experiment. The Fermilab UNIX farms, consisting of 180 Silicon Graphics workstations and 144 IBM RS6000 workstations, are used to provide the computing power for the experiments. Fermilab has had a long history of providing production parallel computing, starting with the ACP (Advanced Computer Project) Farms in 1986. The Fermilab UNIX Farms have been in production for over 2 years with 24 hour/day service to experimental user groups. Additional tools for managing, controlling and monitoring these large systems are also described. Possible future directions for parallel computing in High Energy Physics will be given

  18. Association between job strain (high demand-low control) and cardiovascular disease risk factors among petrochemical industry workers

    Directory of Open Access Journals (Sweden)

    Siamak Poorabdian

    2013-08-01

    Full Text Available Objective: One of the practical models for assessment of stressful working conditions due to job strain is "job demand and control" or Karasek's job strain model. This model explains how adverse physical and psychological effects, including cardiovascular disease risk factors, can be established due to high work demand. The aim was to investigate how certain cardiovascular risk factors, including body mass index (BMI), heart rate, blood pressure, serum total cholesterol levels, and cigarette smoking, are associated with job demand and control in workers. Materials and Methods: In this cohort study, 500 subjects completed "job demand and control" questionnaires. The factor analysis method was used in order to specify the most important "job demand and control" questions. Health check-up records of the workers were used to extract data about cardiovascular disease risk factors. Ultimately, hypothesis testing, based on Eta, was used to assess the relationship between the separate working groups and cardiovascular risk factors (hypertension and serum total cholesterol level). Results: A significant relationship was found between the job demand-control model and cardiovascular risk factors. In terms of chi-squared test results, the highest value was found for heart rate (Chi2 = 145.078). The corresponding results for smoking and BMI were Chi2 = 85.652 and Chi2 = 30.941, respectively. Subsequently, the Eta result for total cholesterol was 0.469, followed by hypertension equaling 0.684. Moreover, there was a significant difference between cardiovascular risk factors and job demand-control profiles among the different working groups, including the operational group, repairing group and servicing group. Conclusion: Job control and demand are significantly related to heart disease risk factors including hypertension, hyperlipidemia, and cigarette smoking.

  19. Usage of Cloud Computing Simulators and Future Systems For Computational Research

    OpenAIRE

    Lakshminarayanan, Ramkumar; Ramalingam, Rajasekar

    2016-01-01

    Cloud Computing is an Internet based computing, whereby shared resources, software and information, are provided to computers and devices on demand, like the electricity grid. Currently, IaaS (Infrastructure as a Service), PaaS (Platform as a Service) and SaaS (Software as a Service) are used as a business model for Cloud Computing. Nowadays, the adoption and deployment of Cloud Computing is increasing in various domains, forcing researchers to conduct research in the area of Cloud Computing ...

  20. Can high psychological job demands, low decision latitude, and high job strain predict disability pensions? A 12-year follow-up of middle-aged Swedish workers.

    Science.gov (United States)

    Canivet, Catarina; Choi, BongKyoo; Karasek, Robert; Moghaddassi, Mahnaz; Staland-Nyman, Carin; Östergren, Per-Olof

    2013-04-01

    The aim of this study was to investigate whether job strain, psychological demands, and decision latitude are independent determinants of disability pension rates over a 12-year follow-up period. We studied 3,181 men and 3,359 women, all middle-aged and working at least 30 h per week, recruited from the general population of Malmö, Sweden, in 1992. The participation rate was 41 %. Baseline data include sociodemographics, the Job Content Questionnaire, lifestyle, and health-related variables. Disability pension information was obtained through record linkage from the National Health Insurance Register. Nearly 20 % of the women and 15 % of the men were granted a disability pension during the follow-up period. The highest quartile of psychological job demands and the lowest quartile of decision latitude were associated with disability pensions when controlling for age, socioeconomic position, and health risk behaviours. In the final model, with adjustment also for health indicators and stress from outside the workplace, the hazard ratios for high strain jobs (i.e. high psychological demands in combination with low decision latitude) were 1.5 in men (95 % CI, 1.04-2.0) and 1.7 in women (95 % CI, 1.3-2.2). Stratifying for health at baseline showed that high strain tended to affect healthy but not unhealthy men, while this pattern was reversed in women. High psychological demands, low decision latitude, and job strain were all confirmed as independent risk factors for subsequent disability pensions. In order to increase chances of individuals remaining in the work force, interventions against these adverse psychosocial factors appear worthwhile.

  1. Trusted Virtual Infrastructure Bootstrapping for On Demand Services

    NARCIS (Netherlands)

    Membrey, P.; Chan, K.C.C.; Ngo, C.; Demchenko, Y.; de Laat, C.

    2012-01-01

    As cloud computing continues to gain traction, a great deal of effort is being expended in researching the most effective ways to build and manage secure and trustworthy clouds. Providing consistent security services in on-demand provisioned Cloud infrastructure services is of primary importance due

  2. Computer-related vision problems in Osogbo, south-western Nigeria ...

    African Journals Online (AJOL)

    Widespread use of computers for office work and e-learning has resulted in increased visual demands among computer users. The increased visual demands have led to development of ocular complaints and discomfort among users. The objective of this study is to determine the prevalence of computer related eye ...

  3. User Requirements & Demand for Services and Applications in PNs

    DEFF Research Database (Denmark)

    Jiang, Bo

    This paper focuses on the methodology for the analysis of user requirements and demand for specific services and applications in relation to personal networks (PNs). The paper has a strong user-centric approach to service and application development based on the widely accepted fact that future servi...... demand for services and applications in a PN setting. This further includes a discussion of service categorization, service description and human-value issues such as personalization, security and privacy, billing and price, and human-computer interaction paradigms.

  4. Combinatorial algorithms enabling computational science: tales from the front

    International Nuclear Information System (INIS)

    Bhowmick, Sanjukta; Boman, Erik G; Devine, Karen; Gebremedhin, Assefaw; Hendrickson, Bruce; Hovland, Paul; Munson, Todd; Pothen, Alex

    2006-01-01

    Combinatorial algorithms have long played a crucial enabling role in scientific and engineering computations. The importance of discrete algorithms continues to grow with the demands of new applications and advanced architectures. This paper surveys some recent developments in this rapidly changing and highly interdisciplinary field

  5. Combinatorial algorithms enabling computational science: tales from the front

    Energy Technology Data Exchange (ETDEWEB)

    Bhowmick, Sanjukta [Mathematics and Computer Science Division, Argonne National Laboratory (United States); Boman, Erik G [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Devine, Karen [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Gebremedhin, Assefaw [Computer Science Department, Old Dominion University (United States); Hendrickson, Bruce [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Hovland, Paul [Mathematics and Computer Science Division, Argonne National Laboratory (United States); Munson, Todd [Mathematics and Computer Science Division, Argonne National Laboratory (United States); Pothen, Alex [Computer Science Department, Old Dominion University (United States)

    2006-09-15

    Combinatorial algorithms have long played a crucial enabling role in scientific and engineering computations. The importance of discrete algorithms continues to grow with the demands of new applications and advanced architectures. This paper surveys some recent developments in this rapidly changing and highly interdisciplinary field.

  6. Embedded High Performance Scalable Computing Systems

    National Research Council Canada - National Science Library

    Ngo, David

    2003-01-01

    The Embedded High Performance Scalable Computing Systems (EHPSCS) program is a cooperative agreement between Sanders, A Lockheed Martin Company and DARPA that ran for three years, from Apr 1995 - Apr 1998...

  7. A high turndown, ultra low emission low swirl burner for natural gas, on-demand water heaters

    Energy Technology Data Exchange (ETDEWEB)

    Rapp, Vi H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cheng, Robert K. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Therkelsen, Peter L. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-06-13

    Previous research has shown that on-demand water heaters are, on average, approximately 37% more efficient than storage water heaters. However, approximately 98% of water heaters in the U.S. use storage water heaters while the remaining 2% are on-demand. A major market barrier to deployment of on-demand water heaters is their high retail cost, which is due in part to their reliance on multi-stage burner banks that require complex electronic controls. This project aims to research and develop a cost-effective, efficient, ultra-low emission burner for next generation natural gas on-demand water heaters in residential and commercial buildings. To meet these requirements, researchers at the Lawrence Berkeley National Laboratory (LBNL) are adapting and testing the low-swirl burner (LSB) technology for commercially available on-demand water heaters. In this report, a low-swirl burner is researched, developed, and evaluated to meet targeted on-demand water heater performance metrics. Performance metrics for a new LSB design are identified by characterizing performance of current on-demand water heaters using published literature and technical specifications, and through experimental evaluations that measure fuel consumption and emissions output over a range of operating conditions. Next, target metrics and design criteria for the LSB are used to create six 3D printed prototypes for preliminary investigations. Prototype designs that proved the most promising were fabricated out of metal and tested further to evaluate the LSB’s full performance potential. After conducting a full performance evaluation on two designs, we found that one LSB design is capable of meeting or exceeding almost all the target performance metrics for on-demand water heaters. Specifically, this LSB demonstrated flame stability when operating from 4.07 kBTU/hr up to 204 kBTU/hr (50:1 turndown), compliance with SCAQMD Rule 1146.2 (14 ng/J or 20 ppm NOX @ 3% O2), and lower CO emissions than state

  8. Molecular computing: paths to chemical Turing machines.

    Science.gov (United States)

    Varghese, Shaji; Elemans, Johannes A A W; Rowan, Alan E; Nolte, Roeland J M

    2015-11-13

    To comply with the rapidly increasing demand for information storage and processing, new strategies for computing are needed. The idea of molecular computing, where basic computations occur through molecular, supramolecular, or biomolecular approaches, rather than electronically, has long captivated researchers. The prospect of using molecules and (bio)macromolecules for computing is not without precedent. Nature is replete with examples where the handling and storing of data occurs with high efficiencies, low energy costs, and high-density information encoding. The design and assembly of computers that function according to the universal approaches of computing, such as those in a Turing machine, might be realized in a chemical way in the future; this is both fascinating and extremely challenging. In this perspective, we highlight molecular and (bio)macromolecular systems that have been designed and synthesized so far with the objective of using them for computing purposes. We also present a blueprint of a molecular Turing machine, which is based on a catalytic device that glides along a polymer tape and, while moving, prints binary information on this tape in the form of oxygen atoms.

  9. Power systems balancing with high penetration renewables: The potential of demand response in Hawaii

    International Nuclear Information System (INIS)

    Critz, D. Karl; Busche, Sarah; Connors, Stephen

    2013-01-01

    Highlights: • Demand response for Oahu results in system cost savings. • Demand response improves thermal power plant operations. • Increased use of wind generation possible with demand response. • WILMAR model used to simulate various levels and prices of demand response. - Abstract: The State of Hawaii’s Clean Energy policies call for 40% of the state’s electricity to be supplied by renewable sources by 2030. A recent study focusing on the island of Oahu showed that meeting large amounts of the island’s electricity needs with wind and solar introduced significant operational challenges, especially when renewable generation varies from forecasts. This paper focuses on the potential of demand response in balancing supply and demand on an hourly basis. Using the WILMAR model, various levels and prices of demand response were simulated. Results indicate that demand response has the potential to smooth overall power system operation, with production cost savings arising from both improved thermal power plant operations and increased wind production. Demand response program design and cost structure is then discussed drawing from industry experience in direct load control programs

  10. Roads towards fault-tolerant universal quantum computation

    Science.gov (United States)

    Campbell, Earl T.; Terhal, Barbara M.; Vuillot, Christophe

    2017-09-01

    A practical quantum computer must not merely store information, but also process it. To prevent errors introduced by noise from multiplying and spreading, a fault-tolerant computational architecture is required. Current experiments are taking the first steps toward noise-resilient logical qubits. But to convert these quantum devices from memories to processors, it is necessary to specify how a universal set of gates is performed on them. The leading proposals for doing so, such as magic-state distillation and colour-code techniques, have high resource demands. Alternative schemes, such as those that use high-dimensional quantum codes in a modular architecture, have potential benefits, but need to be explored further.

  11. High-performance computing in accelerating structure design and analysis

    International Nuclear Information System (INIS)

    Li Zenghai; Folwell, Nathan; Ge Lixin; Guetz, Adam; Ivanov, Valentin; Kowalski, Marc; Lee, Lie-Quan; Ng, Cho-Kuen; Schussman, Greg; Stingelin, Lukas; Uplenchwar, Ravindra; Wolf, Michael; Xiao, Liling; Ko, Kwok

    2006-01-01

    Future high-energy accelerators such as the Next Linear Collider (NLC) will accelerate multi-bunch beams of high current and low emittance to obtain high luminosity, which places stringent requirements on the accelerating structures for efficiency and beam stability. While numerical modeling has been quite standard in accelerator R and D, designing the NLC accelerating structure required a new simulation capability because of the geometric complexity and level of accuracy involved. Under the US DOE Advanced Computing initiatives (first the Grand Challenge and now SciDAC), SLAC has developed a suite of electromagnetic codes based on unstructured grids and utilizing high-performance computing to provide an advanced tool for modeling structures at accuracies and scales previously not possible. This paper will discuss the code development and computational science research (e.g. domain decomposition, scalable eigensolvers, adaptive mesh refinement) that have enabled the large-scale simulations needed for meeting the computational challenges posed by the NLC as well as projects such as the PEP-II and RIA. Numerical results will be presented to show how high-performance computing has made a qualitative improvement in accelerator structure modeling for these accelerators, either at the component level (single cell optimization) or on the scale of an entire structure (beam heating and long-range wakefields)

  12. Demand management in Multi-Stage Distribution Chain

    NARCIS (Netherlands)

    de Kok, T.; Janssen, F.B.S.L.P.

    1996-01-01

    In this paper we discuss demand management problems in a multi-stage distribution chain. We focus on distribution chains where demand processes have high variability due to a few large customer orders. We give a possible explanation, and suggest two simple procedures that help to smooth demand. It is

  13. On the (R,s,Q) Inventory Model when Demand is Modelled as a Compound Process

    NARCIS (Netherlands)

    Janssen, F.B.S.L.P.; Heuts, R.M.J.; de Kok, T.

    1996-01-01

    In this paper we present an approximation method to compute the reorder point s in an (R, s, Q) inventory model with a service level restriction, where demand is modelled as a compound Bernoulli process, that is, with a fixed probability there is positive demand during a time unit, otherwise demand is zero.
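
    A simple way to sanity-check a candidate reorder point under such a demand process is simulation; the sketch below (Python) only illustrates the (R, s, Q) logic with made-up parameters and is not the paper's analytic approximation:

        import numpy as np

        rng = np.random.default_rng(0)

        # Compound Bernoulli demand per period: with probability p there is a positive
        # demand (here exponentially distributed), otherwise demand is zero.
        p, demand_mean = 0.3, 20.0            # made-up demand parameters
        Q, lead_time = 60, 3                  # order quantity and lead time (review every period)

        def fill_rate(s, n_periods=100_000):
            inventory, pipeline = s + Q, []   # pipeline holds (arrival_period, quantity)
            filled = demanded = 0.0
            for t in range(n_periods):
                inventory += sum(q for due, q in pipeline if due == t)
                pipeline = [(due, q) for due, q in pipeline if due > t]
                d = rng.exponential(demand_mean) if rng.random() < p else 0.0
                filled += min(inventory, d)
                demanded += d
                inventory = max(inventory - d, 0.0)
                if inventory + sum(q for _, q in pipeline) <= s:
                    pipeline.append((t + lead_time, Q))      # place a replenishment order
            return filled / demanded

        for s in (20, 40, 60):
            print(f"s = {s:2d}  fill rate ~ {fill_rate(s):.3f}")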

  14. A digital laser for on-demand laser modes

    CSIR Research Space (South Africa)

    Ngcobo, S

    2013-08-01

    Full Text Available ...intra-cavity digitally addressed holographic mirror. The phase and amplitude of the holographic mirror may be controlled simply by writing a computer-generated hologram in the form of a grey-scale image to the device, for on-demand laser modes. We show that we can...

  15. Building a cluster computer for the computing grid of tomorrow

    International Nuclear Information System (INIS)

    Wezel, J. van; Marten, H.

    2004-01-01

    The Grid Computing Centre Karlsruhe takes part in the development, testing and deployment of hardware and cluster infrastructure, grid computing middleware, and applications for particle physics. The construction of a large cluster computer with thousands of nodes and several PB of data storage capacity is a major task and focus of research. CERN-based accelerator experiments will use GridKa, one of only eight worldwide Tier-1 computing centres, for their huge computing demands. Computing and storage are already provided for several other running physics experiments on the exponentially expanding cluster. (orig.)

  16. The health literacy demands of electronic personal health records (e-PHRs): An integrative review to inform future inclusive research.

    Science.gov (United States)

    Hemsley, Bronwyn; Rollo, Megan; Georgiou, Andrew; Balandin, Susan; Hill, Sophie

    2018-01-01

    To integrate the findings of research on electronic personal health records (e-PHRs) for an understanding of their health literacy demands on both patients and providers. We sought peer-reviewed primary research in English addressing the health literacy demands of e-PHRs that are online and allow patients any degree of control or input to the record. A synthesis of three theoretical models was used to frame the analysis of 24 studies. e-PHRs pose a wide range of health literacy demands on both patients and health service providers. Patient participation in e-PHRs relies not only on their level of education and computer literacy, and attitudes to sharing health information, but also upon their executive function, verbal expression, and understanding of spoken and written language. The multiple health literacy demands of e-PHRs must be considered when implementing population-wide initiatives for storing and sharing health information using these systems. The health literacy demands of e-PHRs are high and could potentially exclude many patients unless strategies are adopted to support their use of these systems. Developing strategies for all patients to meet or reduce the high health literacy demands of e-PHRs will be important in population-wide implementation. Copyright © 2017 Elsevier B.V. All rights reserved.

  17. Do job demands and job control affect problem-solving?

    Science.gov (United States)

    Bergman, Peter N; Ahlberg, Gunnel; Johansson, Gun; Stoetzer, Ulrich; Aborg, Carl; Hallsten, Lennart; Lundberg, Ingvar

    2012-01-01

    The Job Demand Control model presents combinations of working conditions that may facilitate learning, the active learning hypothesis, or have detrimental effects on health, the strain hypothesis. To test the active learning hypothesis, this study analysed the effects of job demands and job control on general problem-solving strategies. A population-based sample of 4,636 individuals (55% women, 45% men) with the same job characteristics measured at two times with a three year time lag was used. Main effects of demands, skill discretion, task authority and control, and the combined effects of demands and control were analysed in logistic regressions, on four outcomes representing general problem-solving strategies. Those reporting high on skill discretion, task authority and control, as well as those reporting high demand/high control and low demand/high control job characteristics were more likely to state using problem solving strategies. Results suggest that working conditions including high levels of control may affect how individuals cope with problems and that workplace characteristics may affect behaviour in the non-work domain.

  18. Introducing a demand-based electricity distribution tariff in the residential sector: Demand response and customer perception

    International Nuclear Information System (INIS)

    Bartusch, Cajsa; Wallin, Fredrik; Odlare, Monica; Vassileva, Iana; Wester, Lars

    2011-01-01

    Increased demand response is essential to fully exploit the Swedish power system, which in turn is an absolute prerequisite for meeting political goals related to energy efficiency and climate change. Demand response programs are, nonetheless, still exceptional in the residential sector of the Swedish electricity market, one contributory factor being lack of knowledge about the extent of the potential gains. In light of these circumstances, this empirical study set out with the intention of estimating the scope of households' response to, and assessing customers' perception of, a demand-based time-of-use electricity distribution tariff. The results show that households as a whole have a fairly high opinion of the demand-based tariff and act on its intrinsic price signals by decreasing peak demand in peak periods and shifting electricity use from peak to off-peak periods. - Highlights: → Households are sympathetic to demand-based tariffs, seeing as they relate to environmental issues. → Households adjust their electricity use to the price signals of demand-based tariffs. → Demand-based tariffs lead to a shift in electricity use from peak to off-peak hours. → Demand-based tariffs lead to a decrease in maximum demand in peak periods. → Magnitude of these effects increases over time.

  19. Overcoming job demands to deliver high quality care in a hospital setting across Europe: The role of teamwork and positivity

    OpenAIRE

    Montgomery Anthony; Panagopoulou Efharis; Costa Patricia

    2014-01-01

    Health care professionals deal on a daily basis with several job demands – emotional, cognitive, organizational and physical. They must also ensure high quality care to their patients. The aim of this study is to analyse the impact of job demands on quality of care and to investigate team (backup behaviors) and individual (positivity ratio) processes that help to shield that impact. Data was collected from 2,890 doctors and nurses in 9 European countries by means of questionnaires. Job demand...

  20. Machine learning and computer vision approaches for phenotypic profiling.

    Science.gov (United States)

    Grys, Ben T; Lo, Dara S; Sahin, Nil; Kraus, Oren Z; Morris, Quaid; Boone, Charles; Andrews, Brenda J

    2017-01-02

    With recent advances in high-throughput, automated microscopy, there has been an increased demand for effective computational strategies to analyze large-scale, image-based data. To this end, computer vision approaches have been applied to cell segmentation and feature extraction, whereas machine-learning approaches have been developed to aid in phenotypic classification and clustering of data acquired from biological images. Here, we provide an overview of the commonly used computer vision and machine-learning methods for generating and categorizing phenotypic profiles, highlighting the general biological utility of each approach. © 2017 Grys et al.

  1. Modelling demand for crude oil products in Spain

    International Nuclear Information System (INIS)

    Pedregal, D.J.; Dejuan, O.; Gomez, N.; Tobarra, M.A.

    2009-01-01

    This paper develops an econometric model for the demand for the five most important crude oil products in Spain. The aim is the estimation of a range of elasticities of these demands that would serve as the basis for an applied general equilibrium model used for forecasting energy demand in a broader framework. The main distinctive features of the system with respect to previous literature are that (1) it takes advantage of monthly information coming from very different information sources and (2) multivariate unobserved components (UC) models are implemented, allowing for a separate analysis of long- and short-run relations. UC models decompose time series into a number of unobserved though economically meaningful components, mainly trend, seasonal and irregular. A module is added to this structure to take into account the influence of the exogenous variables necessary to compute price, cross and income elasticities. Since all models implemented are multivariate in nature, the demand components are allowed to interact among themselves through the system noises (similar to a seemingly unrelated equations model). The results show unambiguously that the main factor driving demand is real income, with prices having little impact on energy consumption. (author)
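
    For reference, a basic structural form of such an unobserved components model can be written (in our notation, which need not match the authors' exact specification) as:

        y_t = T_t + S_t + \sum_k \beta_k x_{k,t} + \varepsilon_t, \qquad \varepsilon_t \sim \mathrm{NID}(0, \sigma_\varepsilon^2)
        T_t = T_{t-1} + \nu_{t-1} + \eta_t, \qquad \nu_t = \nu_{t-1} + \zeta_t          % local linear trend
        S_t = -\sum_{j=1}^{s-1} S_{t-j} + \omega_t                                       % seasonal component

    where y_t is (log) demand, T_t, S_t and \varepsilon_t are the trend, seasonal and irregular components, and the exogenous regressors x_{k,t} (log prices and income) carry the elasticities \beta_k.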

  2. Modelling demand for crude oil products in Spain

    Energy Technology Data Exchange (ETDEWEB)

    Pedregal, D.J. [Escuela Tecnica Superior de Ingenieros Industriales and Instituto de Matematica Aplicada a la Ciencia y la Ingenieria (IMACI), Universidad de Castilla-La Mancha (UCLM), Avenida Camilo Jose Cela s/n, 13071 Ciudad Real (Spain); Dejuan, O.; Gomez, N.; Tobarra, M.A. [Facultad de Ciencias Economicas y Empresariales, Universidad de Castilla-La Mancha (UCLM) (Spain)

    2009-11-15

    This paper develops an econometric model for the demand for the five most important crude oil products in Spain. The aim is the estimation of a range of elasticities of these demands that would serve as the basis for an applied general equilibrium model used for forecasting energy demand in a broader framework. The main distinctive features of the system with respect to previous literature are that (1) it takes advantage of monthly information coming from very different information sources and (2) multivariate unobserved components (UC) models are implemented, allowing for a separate analysis of long- and short-run relations. UC models decompose time series into a number of unobserved though economically meaningful components, mainly trend, seasonal and irregular. A module is added to this structure to take into account the influence of the exogenous variables necessary to compute price, cross and income elasticities. Since all models implemented are multivariate in nature, the demand components are allowed to interact among themselves through the system noises (similar to a seemingly unrelated equations model). The results show unambiguously that the main factor driving demand is real income, with prices having little impact on energy consumption. (author)

  3. The Principals and Practice of Distributed High Throughput Computing

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    The potential of Distributed Processing Systems to deliver computing capabilities with qualities ranging from high availability and reliability to easy expansion in functionality and capacity was recognized and formalized in the 1970s. For more than three decades these principles of Distributed Computing have guided the development of the HTCondor resource and job management system. The widely adopted suite of software tools offered by HTCondor is based on novel distributed computing technologies and is driven by the evolving needs of High Throughput scientific applications. We will review the principles that underpin our work, the distributed computing frameworks and technologies we developed and the lessons we learned from delivering effective and dependable software tools in an ever-changing landscape of computing technologies and needs that range today from a desktop computer to tens of thousands of cores offered by commercial clouds. About the speaker Miron Livny received a B.Sc. degree in Physics and Mat...

  4. The UK-DALE dataset, domestic appliance-level electricity demand and whole-house demand from five UK homes.

    Science.gov (United States)

    Kelly, Jack; Knottenbelt, William

    2015-01-01

    Many countries are rolling out smart electricity meters. These measure a home's total power demand. However, research into consumer behaviour suggests that consumers are best able to improve their energy efficiency when provided with itemised, appliance-by-appliance consumption information. Energy disaggregation is a computational technique for estimating appliance-by-appliance energy consumption from a whole-house meter signal. To conduct research on disaggregation algorithms, researchers require data describing not just the aggregate demand per building but also the 'ground truth' demand of individual appliances. In this context, we present UK-DALE: an open-access dataset from the UK recording Domestic Appliance-Level Electricity at a sample rate of 16 kHz for the whole-house and at 1/6 Hz for individual appliances. This is the first open access UK dataset at this temporal resolution. We recorded from five houses, one of which was recorded for 655 days, the longest duration we are aware of for any energy dataset at this sample rate. We also describe the low-cost, open-source, wireless system we built for collecting our dataset.
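
    By way of illustration, aligning the whole-house series with one appliance channel and computing the unmetered residual might look like the sketch below (Python/pandas); the file names and column names are placeholders, not the dataset's actual layout:

        import pandas as pd

        # Placeholder file names and columns; the real UK-DALE layout differs.
        mains = pd.read_csv("house1_mains.csv", parse_dates=["timestamp"], index_col="timestamp")
        kettle = pd.read_csv("house1_kettle.csv", parse_dates=["timestamp"], index_col="timestamp")

        # Resample both series onto a common 6-second grid (appliance data are ~1/6 Hz).
        mains_6s = mains["power"].resample("6s").mean()
        kettle_6s = kettle["power"].resample("6s").mean().reindex(mains_6s.index).fillna(0.0)

        # Residual: whole-house demand not explained by this submetered appliance.
        residual = (mains_6s - kettle_6s).clip(lower=0.0)
        print(residual.describe())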

  5. Design of Computer Fault Diagnosis and Troubleshooting System ...

    African Journals Online (AJOL)

    Detection of personal computer (PC) hardware problems is a complicated process which demands high level of knowledge and skills. Depending on the know-how of the technician, a simple problem could take hours or even days to solve. Our aim is to develop an expert system for troubleshooting and diagnosing personal ...

  6. The TeraShake Computational Platform for Large-Scale Earthquake Simulations

    Science.gov (United States)

    Cui, Yifeng; Olsen, Kim; Chourasia, Amit; Moore, Reagan; Maechling, Philip; Jordan, Thomas

    Geoscientific and computer science researchers with the Southern California Earthquake Center (SCEC) are conducting a large-scale, physics-based, computationally demanding earthquake system science research program with the goal of developing predictive models of earthquake processes. The computational demands of this program continue to increase rapidly as these researchers seek to perform physics-based numerical simulations of earthquake processes at ever larger scales. To meet the needs of this research program, a multiple-institution team coordinated by SCEC has integrated several scientific codes into a numerical modeling-based research tool we call the TeraShake computational platform (TSCP). A central component of the TSCP is a highly scalable earthquake wave propagation simulation program called the TeraShake anelastic wave propagation (TS-AWP) code. In this chapter, we describe how we extended an existing, stand-alone, well-validated, finite-difference, anelastic wave propagation modeling code into the highly scalable and widely used TS-AWP and then integrated this code into the TeraShake computational platform that provides end-to-end (initialization to analysis) research capabilities. We also describe the techniques used to enhance the TS-AWP parallel performance on TeraGrid supercomputers, as well as the TeraShake simulation phases, including input preparation, run time, data archive management, and visualization. As a result of our efforts to improve its parallel efficiency, the TS-AWP has now shown highly efficient strong scaling on over 40K processors on IBM's BlueGene/L Watson computer. In addition, the TSCP has developed into a computational system that is useful to many members of the SCEC community for performing large-scale earthquake simulations.
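
    The kind of stencil computation that such a finite-difference wave propagation code parallelizes in 3D can be illustrated with a minimal 1D sketch (Python); the grid size, wave speed and source are arbitrary illustrative values, and no anelastic attenuation is included:

        import numpy as np

        # Minimal 1D finite-difference wave propagation (leapfrog scheme).
        nx, dx, dt, nt = 400, 10.0, 1.0e-3, 1000   # grid points, spacing (m), time step (s), steps
        c = 3000.0                                  # wave speed (m/s); CFL number c*dt/dx = 0.3 < 1
        u_prev = np.zeros(nx)
        u_curr = np.zeros(nx)
        u_curr[nx // 2] = 1.0                       # impulsive source in the middle of the grid

        coeff = (c * dt / dx) ** 2
        for _ in range(nt):
            u_next = np.zeros(nx)
            u_next[1:-1] = (2 * u_curr[1:-1] - u_prev[1:-1]
                            + coeff * (u_curr[2:] - 2 * u_curr[1:-1] + u_curr[:-2]))
            u_prev, u_curr = u_curr, u_next         # advance in time (fixed ends act as rigid boundaries)

        print("peak amplitude after propagation:", float(np.max(np.abs(u_curr))))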

  7. Information Literacy Skills Training: A Factor in Student Satisfaction with Access to High Demand Material

    Science.gov (United States)

    Perrett, Valerie

    2010-01-01

    In a survey of Business and Government, Law and Information Sciences students carried out at the University of Canberra, results showed that in-curricula information literacy skills training had a greater impact on students' satisfaction with access to high demand material than the purchase of additional copies of books. This paper will discuss…

  8. Demand controlled ventilation in a bathroom

    DEFF Research Database (Denmark)

    Mortensen, Dorthe Kragsig; Nielsen, Toke Rammer; Topp, Claus

    2008-01-01

    consumption during periods where the demand for ventilation is low and poor indoor climate during periods where the demand for ventilation is high. Controlling the ventilation rate by demand can improve the energy performance of the ventilation system and the indoor climate. This paper compares the indoor climate and energy consumption of a Constant Air Volume (CAV) system and a Demand Controlled Ventilation (DCV) system for two different bathroom designs. The air change rate of the CAV system corresponded to 0.5 h-1. The ventilation rate of the DCV system was controlled by occupancy and by the relative

  9. Explaining worker strain and learning: how important are emotional job demands?

    Science.gov (United States)

    Taris, Toon W; Schreurs, Paul J G

    2009-05-01

    This study examined the added value of emotional job demands in explaining worker well-being, relative to the effects of task characteristics, such as quantitative job demands, job control, and coworker support. Emotional job demands were expected to account for an additional proportion of the variance in well-being. Cross-sectional data were obtained from 11,361 female Dutch home care employees. Hierarchical stepwise regression analysis demonstrated that low control, low support and high quantitative demands were generally associated with lower well-being (as measured in terms of emotional exhaustion, dedication, professional accomplishment and learning). Moreover, high emotional demands were in three out of four cases significantly associated with adverse well-being, in these cases accounting for an additional 1-6% of the variance in the outcome variables. In three out of eight cases the main effects of emotional demands on well-being were qualified by support and control, such that high control and high support either buffered the adverse effects of high emotional demands on well-being or increased the positive effects thereof. All in all, high emotional demands are as important a risk factor for worker well-being as well-established concepts like low job control and high quantitative job demands.

  10. Computational tools for high-throughput discovery in biology

    OpenAIRE

    Jones, Neil Christopher

    2007-01-01

    High throughput data acquisition technology has inarguably transformed the landscape of the life sciences, in part by making possible---and necessary---the computational disciplines of bioinformatics and biomedical informatics. These fields focus primarily on developing tools for analyzing data and generating hypotheses about objects in nature, and it is in this context that we address three pressing problems in the fields of the computational life sciences which each require computing capaci...

  11. Scaling predictive modeling in drug development with cloud computing.

    Science.gov (United States)

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

    Growing data sets with increased time for analysis is hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compare with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investments makes cloud computing an attractive alternative for scientists, especially for those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within reasonable time.

  12. DURIP: High Performance Computing in Biomathematics Applications

    Science.gov (United States)

    2017-05-10

    The goal of this award was to enhance the capabilities of the Department of Applied Mathematics and Statistics (AMS) at the University of California, Santa Cruz (UCSC) to conduct research and research-related education in areas of high performance computing in biomathematics applications.

  13. AHPCRC - Army High Performance Computing Research Center

    Science.gov (United States)

    2010-01-01

    Army High Performance Computing Research Center (AHPCRC, www.ahpcrc.org). Of particular interest is the ability of a distributed jamming network (DJN) to jam signals in all or part of a sensor or communications network.

  14. Cloud computing applications for biomedical science: A perspective.

    Science.gov (United States)

    Navale, Vivek; Bourne, Philip E

    2018-06-01

    Biomedical research has become a digital data-intensive endeavor, relying on secure and scalable computing, storage, and network infrastructure, which has traditionally been purchased, supported, and maintained locally. For certain types of biomedical applications, cloud computing has emerged as an alternative to locally maintained traditional computing approaches. Cloud computing offers users pay-as-you-go access to services such as hardware infrastructure, platforms, and software for solving common biomedical computational problems. Cloud computing services offer secure on-demand storage and analysis and are differentiated from traditional high-performance computing by their rapid availability and scalability of services. As such, cloud services are engineered to address big data problems and enhance the likelihood of data and analytics sharing, reproducibility, and reuse. Here, we provide an introductory perspective on cloud computing to help the reader determine its value to their own research.

  15. Effective Heuristics for Capacitated Production Planning with Multiperiod Production and Demand with Forecast Band Refinement

    OpenAIRE

    Philip Kaminsky; Jayashankar M. Swaminathan

    2004-01-01

    In this paper we extend forecast band evolution and capacitated production modelling to the multiperiod demand case. In this model, forecasts of discrete demand for any period are modelled as bands and defined by lower and upper bounds on demand, such that future forecasts lie within the current band. We develop heuristics that utilize knowledge of demand forecast evolution to make production decisions in capacitated production planning environments. In our computational study we explore the ...

  16. Relations between work and upper extremity musculoskeletal problems (UEMSP) and the moderating role of psychosocial work factors on the relation between computer work and UEMSP.

    Science.gov (United States)

    Nicolakakis, Nektaria; Stock, Susan R; Abrahamowicz, Michal; Kline, Rex; Messing, Karen

    2017-11-01

    Computer work has been identified as a risk factor for upper extremity musculoskeletal problems (UEMSP). But few studies have investigated how psychosocial and organizational work factors affect this relation. Nor have gender differences in the relation between UEMSP and these work factors  been studied. We sought to estimate: (1) the association between UEMSP and a range of physical, psychosocial and organizational work exposures, including the duration of computer work, and (2) the moderating effect of psychosocial work exposures on the relation between computer work and UEMSP. Using 2007-2008 Québec survey data on 2478 workers, we carried out gender-stratified multivariable logistic regression modeling and two-way interaction analyses. In both genders, odds of UEMSP were higher with exposure to high physical work demands and emotionally demanding work. Additionally among women, UEMSP were associated with duration of occupational computer exposure, sexual harassment, tense situations when dealing with clients, high quantitative demands and lack of prospects for promotion, and among men, with low coworker support, episodes of unemployment, low job security and contradictory work demands. Among women, the effect of computer work on UEMSP was considerably increased in the presence of emotionally demanding work, and may also be moderated by low recognition at work, contradictory work demands, and low supervisor support. These results suggest that the relations between UEMSP and computer work are moderated by psychosocial work exposures and that the relations between working conditions and UEMSP are somewhat different for each gender, highlighting the complexity of these relations and the importance of considering gender.

  17. Household demand elasticities for meat products in Uruguay

    Energy Technology Data Exchange (ETDEWEB)

    Lanfranco, B. A.; Rava, C.

    2014-06-01

    This article analyzed the demand for meats at household level over the past decade in Uruguay, a country that exhibits a very high per capita consumption of these products. In particular, the consumption of beef is one of the highest in the world and only comparable to Argentina. The analysis involved a two-step estimation of an incomplete system of censored demand equations using household data from the last available national income and expenditure survey (2005/06). Thirteen meat products were included in the analysis: six broad beef products (deboned hindquarter cuts, bone-in hindquarter cuts, ground beef, rib plate, bone-in forequarter cuts, and other beef cuts), four products from other meats (sheep, pork, poultry, and fish), and three generic mixed-meat products. A complete set of short-term income, own-price and cross-price elasticities were computed and reported along with their 90% confidence intervals (CI). The results were consistent with both economic theory and empirical evidence as well as with the expected behavior, considering the relevance of these products, particularly beef, in the diet of Uruguayan consumers. All meat items were necessary goods and showed income-inelastic responses, which was expected given their high consumption level. All meats behaved as normal goods, although exhibiting different reactions to changes in price. In general, beef cuts were more price elastic than other, more broadly defined products. The more specific and disaggregated the meat product, the higher its corresponding direct price elasticity. The complement/substitute relationships found in this study were highly dependent on the specific product combinations. (Author)
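
    For readers unfamiliar with the terminology, the elasticities reported in such studies are the usual logarithmic derivatives of demand (our notation, not the authors'):

        e_{ii} = \frac{\partial \ln q_i}{\partial \ln p_i}, \qquad
        e_{ij} = \frac{\partial \ln q_i}{\partial \ln p_j}, \qquad
        e_{iy} = \frac{\partial \ln q_i}{\partial \ln y}

    where q_i is the quantity of product i, p_i and p_j are own and cross prices, and y is household income. |e_{iy}| < 1 corresponds to an income-inelastic (necessary) good, while e_{ij} > 0 indicates substitutes and e_{ij} < 0 complements.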

  18. Household demand elasticities for meat products in Uruguay

    Directory of Open Access Journals (Sweden)

    Bruno A. Lanfranco

    2014-01-01

    Full Text Available This article analyzed the demand for meats at household level over the past decade in Uruguay, a country that exhibits a very high per capita consumption of these products. In particular, the consumption of beef is one of the highest in the world and only comparable to Argentina. The analysis involved a two-step estimation of an incomplete system of censored demand equations using household data from the last available national income and expenditure survey (2005/06). Thirteen meat products were included in the analysis: six broad beef products (deboned hindquarter cuts, bone-in hindquarter cuts, ground beef, rib plate, bone-in forequarter cuts, and other beef cuts), four products from other meats (sheep, pork, poultry, and fish), and three generic mixed-meat products. A complete set of short-term income, own-price and cross-price elasticities were computed and reported along with their 90% confidence intervals (CI). The results were consistent with both economic theory and empirical evidence as well as with the expected behavior, considering the relevance of these products, particularly beef, in the diet of Uruguayan consumers. All meat items were necessary goods and showed income-inelastic responses, which was expected given their high consumption level. All meats behaved as normal goods, although exhibiting different reactions to changes in price. In general, beef cuts were more price elastic than other, more broadly defined products. The more specific and disaggregated the meat product, the higher its corresponding direct price elasticity. The complement/substitute relationships found in this study were highly dependent on the specific product combinations.

  19. Performance Management of High Performance Computing for Medical Image Processing in Amazon Web Services.

    Science.gov (United States)

    Bao, Shunxing; Damon, Stephen M; Landman, Bennett A; Gokhale, Aniruddha

    2016-02-27

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.
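
    The flavour of such a cost/benefit comparison can be sketched as below (Python); the linear timing model, overhead and prices are illustrative assumptions, not the formulae or AWS prices from the paper:

        # Illustrative break-even comparison between local serial execution and cloud execution.
        def local_time_hours(n_jobs, hours_per_job):
            return n_jobs * hours_per_job            # serial execution on one workstation

        def cloud_time_hours(n_jobs, hours_per_job, n_nodes, overhead_hours=0.5):
            return overhead_hours + (n_jobs * hours_per_job) / n_nodes

        def cloud_cost_usd(n_jobs, hours_per_job, n_nodes, price_per_node_hour=0.20):
            return n_nodes * cloud_time_hours(n_jobs, hours_per_job, n_nodes) * price_per_node_hour

        n_jobs, hours_per_job = 500, 0.25            # made-up pipeline size and per-job runtime
        for n_nodes in (10, 50, 100):
            t = cloud_time_hours(n_jobs, hours_per_job, n_nodes)
            c = cloud_cost_usd(n_jobs, hours_per_job, n_nodes)
            print(f"{n_nodes:3d} nodes: {t:6.1f} h (vs {local_time_hours(n_jobs, hours_per_job):.1f} h local), ~${c:.2f}")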

  20. Performance management of high performance computing for medical image processing in Amazon Web Services

    Science.gov (United States)

    Bao, Shunxing; Damon, Stephen M.; Landman, Bennett A.; Gokhale, Aniruddha

    2016-03-01

    Adopting high performance cloud computing for medical image processing is a popular trend given the pressing needs of large studies. Amazon Web Services (AWS) provide reliable, on-demand, and inexpensive cloud computing services. Our research objective is to implement an affordable, scalable and easy-to-use AWS framework for the Java Image Science Toolkit (JIST). JIST is a plugin for Medical-Image Processing, Analysis, and Visualization (MIPAV) that provides a graphical pipeline implementation allowing users to quickly test and develop pipelines. JIST is DRMAA-compliant allowing it to run on portable batch system grids. However, as new processing methods are implemented and developed, memory may often be a bottleneck for not only lab computers, but also possibly some local grids. Integrating JIST with the AWS cloud alleviates these possible restrictions and does not require users to have deep knowledge of programming in Java. Workflow definition/management and cloud configurations are two key challenges in this research. Using a simple unified control panel, users have the ability to set the numbers of nodes and select from a variety of pre-configured AWS EC2 nodes with different numbers of processors and memory storage. Intuitively, we configured Amazon S3 storage to be mounted by pay-for-use Amazon EC2 instances. Hence, S3 storage is recognized as a shared cloud resource. The Amazon EC2 instances provide pre-installs of all necessary packages to run JIST. This work presents an implementation that facilitates the integration of JIST with AWS. We describe the theoretical cost/benefit formulae to decide between local serial execution versus cloud computing and apply this analysis to an empirical diffusion tensor imaging pipeline.

  1. Econophysics of a ranked demand and supply resource allocation problem

    Science.gov (United States)

    Priel, Avner; Tamir, Boaz

    2018-01-01

    We present a two sided resource allocation problem, between demands and supplies, where both parties are ranked. For example, in Big Data problems where a set of different computational tasks is divided between a set of computers each with its own resources, or between employees and employers where both parties are ranked, the employees by their fitness and the employers by their package benefits. The allocation process can be viewed as a repeated game where in each iteration the strategy is decided by a meta-rule, based on the ranks of both parties and the results of the previous games. We show the existence of a phase transition between an absorbing state, where all demands are satisfied, and an active one where part of the demands are always left unsatisfied. The phase transition is governed by the ratio between supplies and demand. In a job allocation problem we find positive correlation between the rank of the workers and the rank of the factories; higher rank workers are usually allocated to higher ranked factories. These all suggest global emergent properties stemming from local variables. To demonstrate the global versus local relations, we introduce a local inertial force that increases the rank of employees in proportion to their persistence time in the same factory. We show that such a local force induces non trivial global effects, mostly to benefit the lower ranked employees.
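
    A toy illustration of the absorbing/active picture governed by the supply-to-demand ratio is sketched below (Python); it only mimics carried-over unsatisfied demand and is not the ranked meta-rule studied in the paper:

        import numpy as np

        rng = np.random.default_rng(1)

        # Each round a random number of new demands arrives and a fixed number of supplies
        # can be allocated; unsatisfied demands carry over.  Above a supply/demand ratio of 1
        # the backlog keeps returning to zero (absorbing state); below 1 it grows (active state).
        def mean_backlog(supply_per_round, mean_new_demands=10.0, n_rounds=5_000):
            backlog, total = 0, 0
            for _ in range(n_rounds):
                backlog += rng.poisson(mean_new_demands)   # new demands arriving this round
                backlog = max(backlog - supply_per_round, 0)
                total += backlog
            return total / n_rounds

        for ratio in (0.8, 1.0, 1.2):
            print(f"supply/demand = {ratio:.1f}: mean backlog ~ {mean_backlog(int(10 * ratio)):.1f}")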

  2. A Brief Analysis of Development Situations and Trend of Cloud Computing

    Science.gov (United States)

    Yang, Wenyan

    2017-12-01

    In recent years, the rapid development of Internet technology has radically changed how people work, learn and live. More and more activities are completed by means of computers and networks. The amount of information and data generated grows day by day, and people rely increasingly on computers, so the computing power of a single machine can no longer meet users' demands for accuracy and speed. Cloud computing technology has developed rapidly and is widely applied in the computer industry thanks to its advantages of high precision, fast computation and ease of use; it has become a focus of current information research. In this paper, the development situation and trends of cloud computing are analyzed and discussed.

  3. High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science

    International Nuclear Information System (INIS)

    Pop, Florin

    2014-01-01

    Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios such as subatomic dimensions, high energies, and very low absolute temperatures are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high-accuracy analysis, experimental validation, and visualization. High performance computing offers the possibility of running simulations at large scale and in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP), analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
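
    As a generic illustration of the Monte Carlo class of methods surveyed here (not any specific HEP analysis in the paper), the sketch below estimates a simple geometric acceptance together with its statistical uncertainty:

        import math
        import random

        # Minimal Monte Carlo sketch: estimate the fraction of isotropic decays that
        # pass a simple geometric acceptance cut, with a binomial uncertainty.

        def acceptance(n_events, cos_theta_cut=0.9, seed=42):
            rng = random.Random(seed)
            accepted = 0
            for _ in range(n_events):
                cos_theta = rng.uniform(-1.0, 1.0)   # isotropic decay direction
                if abs(cos_theta) < cos_theta_cut:   # detector covers |cos(theta)| < cut
                    accepted += 1
            p = accepted / n_events
            err = math.sqrt(p * (1.0 - p) / n_events)  # statistical uncertainty
            return p, err

        print(acceptance(1_000_000))   # expected value ~0.9 for an isotropic distribution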

  4. Home Network Technologies and Automating Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    McParland, Charles

    2009-12-01

    Over the past several years, interest in large-scale control of peak energy demand and total consumption has increased. While motivated by a number of factors, this interest has primarily been spurred, on the demand side, by the increasing cost of energy and, on the supply side, by the limited ability of utilities to build sufficient electricity generation capacity to meet unrestrained future demand. To address peak electricity use, Demand Response (DR) systems are being proposed to motivate reductions in electricity use through price incentives. DR systems are also being designed to shift or curtail energy demand at critical times when the generation, transmission, and distribution systems (i.e. the 'grid') are threatened with instabilities. To be effectively deployed on a large scale, these proposed DR systems need to be automated. Automation will require robust and efficient data communications infrastructures across geographically dispersed markets. The present availability of widespread Internet connectivity and inexpensive, reliable computing hardware, combined with growing confidence in the capabilities of distributed, application-level communications protocols, suggests that now is the time for designing and deploying practical systems. Centralized computer systems capable of providing continuous signals that automate customers' reduction of power demand are known as Demand Response Automation Servers (DRAS). The deployment of prototype DRAS systems has already begun, with most initial deployments targeting large commercial and industrial (C & I) customers. An examination of current overall energy consumption by economic sector shows that the C & I market is responsible for roughly half of all energy consumption in the US. On a per customer basis, large C & I customers clearly have the most to offer - and to gain - by participating in DR programs to reduce peak demand. And, by concentrating on a small number of relatively
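
    The DRAS concept can be sketched as a client loop that polls for a price or reliability signal and sheds controllable load when the signal crosses a threshold. The stubbed signal source, threshold policy and load figures below are assumptions for illustration; a real deployment would speak an actual DR protocol to a DRAS:

        import random
        import time

        # Hypothetical demand-response client loop. The signal source is stubbed out;
        # a real client would poll a DRAS over a standardized DR protocol.

        def fetch_price_signal():
            """Stand-in for a DRAS query; returns a price level in $/kWh."""
            return random.choice([0.08, 0.08, 0.12, 0.30])   # occasional critical-peak price

        def control_loads(price, threshold=0.25, controllable_kw=40.0):
            """Shed controllable load when the price signal exceeds the threshold."""
            if price >= threshold:
                print(f"price {price:.2f} >= {threshold:.2f}: shedding {controllable_kw:.0f} kW")
            else:
                print(f"price {price:.2f}: normal operation")

        if __name__ == "__main__":
            for _ in range(5):                  # one poll per interval (shortened here)
                control_loads(fetch_price_signal())
                time.sleep(1)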

  5. Demand response in Indian electricity market

    International Nuclear Information System (INIS)

    Siddiqui, Md Zakaria; Maere d'Aertrycke, Gauthier de; Smeers, Yves

    2012-01-01

    This paper outlines a methodology for implementing cost-of-service regulation in the retail market for electricity in India when the wholesale market is liberalised and operates through an hourly spot market. Since, in a developing-country context, political considerations make tariff levels more important than supply security, satisfying the earmarked level of demand takes a back seat. Retail market regulators are often forced by politicians to keep the retail tariff at a suboptimal level. This imposes a budget constraint on the distribution companies that procure the electricity required to meet the earmarked level of demand. This is how demand response is introduced into the system, and it has an impact on spot market prices. We model the inability to serve the earmarked demand as a disutility to the regulator that has to be minimised, and we compute the associated equilibrium. This yields a systematic mechanism for cutting loads. We find that even a small cut in the ability of the distribution companies to procure electricity from the spot market has a profound impact on spot market prices. - Highlights: ► Modelling the impact of retail tariffs in different states on spot prices of electricity in India. ► Retail tariffs are usually fixed below appropriate levels by states for political reasons. ► Due to the revenue constraint, distribution utilities withdraw demand from the spot market in peak hours. ► This adversely affects the scarcity rent of generators and subsequently future investment. ► We show the possibility of strategic behaviour among state-level regulators in setting retail tariffs.
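
    A toy version of the budget-constrained procurement that drives the load cutting can be sketched as follows; this greedy illustration is ours and is not the equilibrium model computed in the paper:

        # Toy illustration of budget-constrained procurement: with a fixed budget,
        # a distribution company serves cheap hours fully and cuts load in the most
        # expensive (peak) hours first.

        def procure(demand_mwh, spot_price, budget):
            served = [0.0] * len(demand_mwh)
            remaining = budget
            # Buy the cheapest energy first.
            for h in sorted(range(len(demand_mwh)), key=lambda h: spot_price[h]):
                affordable = remaining / spot_price[h]
                served[h] = min(demand_mwh[h], affordable)
                remaining -= served[h] * spot_price[h]
            unserved = [d - s for d, s in zip(demand_mwh, served)]
            return served, unserved

        demand = [90, 80, 100, 140, 160, 120]          # MWh per block
        price = [30, 28, 35, 60, 90, 45]               # $/MWh spot prices
        served, unserved = procure(demand, price, budget=18_000)
        print("served  :", [round(x) for x in served])
        print("unserved:", [round(x) for x in unserved])   # load is cut at peak hours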

  6. Visualization and Data Analysis for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  7. High capacity photonic integrated switching circuits

    NARCIS (Netherlands)

    Albores Mejia, A.

    2011-01-01

    As the demand for high-capacity data transfer keeps increasing in high performance computing and in a broader range of system-area networking environments, reconfiguring the strained networks at ever faster speeds with larger volumes of traffic has become a huge challenge. Formidable bottlenecks

  8. Addressing Energy Demand through Demand Response. International Experiences and Practices

    Energy Technology Data Exchange (ETDEWEB)

    Shen, Bo [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ghatikar, Girish [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ni, Chun Chun [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dudley, Junqiao [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Martin, Phil [Enernoc, Inc., Boston, MA (United States); Wikler, Greg

    2012-06-01

    Demand response (DR) is a load management tool which provides a cost-effective alternative to traditional supply-side solutions to address the growing demand during times of peak electrical load. According to the US Department of Energy (DOE), demand response reflects “changes in electric usage by end-use customers from their normal consumption patterns in response to changes in the price of electricity over time, or to incentive payments designed to induce lower electricity use at times of high wholesale market prices or when system reliability is jeopardized.” 1 The California Energy Commission (CEC) defines DR as “a reduction in customers’ electricity consumption over a given time interval relative to what would otherwise occur in response to a price signal, other financial incentives, or a reliability signal.” 2 This latter definition is perhaps most reflective of how DR is understood and implemented today in countries such as the US, Canada, and Australia where DR is primarily a dispatchable resource responding to signals from utilities, grid operators, and/or load aggregators (or DR providers).

  9. Worksite interventions for preventing physical deterioration among employees in job-groups with high physical work demands: background, design and conceptual model of FINALE

    DEFF Research Database (Denmark)

    Holtermann, Andreas; Jørgensen, Marie B; Gram, Bibi

    2010-01-01

    A mismatch between individual physical capacities and physical work demands enhances the risk for musculoskeletal disorders, poor work ability and sickness absence, termed physical deterioration. However, effective intervention strategies for preventing physical deterioration in job groups with high physical work demands remain to be established. This paper describes the background, design and conceptual model of the FINALE programme, a framework for health-promoting interventions at 4 Danish job groups (i.e. cleaners, health-care workers, construction workers and industrial workers) characterized by high physical work demands, musculoskeletal disorders, poor work ability and sickness absence.

  10. New computing techniques in physics research

    International Nuclear Information System (INIS)

    Becks, Karl-Heinz; Perret-Gallix, Denis

    1994-01-01

    New techniques were highlighted by the ''Third International Workshop on Software Engineering, Artificial Intelligence and Expert Systems for High Energy and Nuclear Physics'' in Oberammergau, Bavaria, Germany, from October 4 to 8. It was the third workshop in the series; the first was held in Lyon in 1990 and the second at a France-Telecom site near La Londe les Maures in 1992. This series of workshops covers a broad spectrum of problems. New, highly sophisticated experiments demand new computing techniques, in hardware as well as in software. Software Engineering Techniques could in principle satisfy the needs of forthcoming accelerator experiments. The growing complexity of detector systems demands new techniques in experimental error diagnosis and repair suggestions; Expert Systems seem to offer a way of assisting the experimental crew during data-taking

  11. Grid computing in high-energy physics

    International Nuclear Information System (INIS)

    Bischof, R.; Kuhn, D.; Kneringer, E.

    2003-01-01

    Full text: Future high energy physics experiments are characterized by an enormous amount of data delivered by the large detectors presently under construction, e.g. at the Large Hadron Collider, and by a large number of scientists (several thousand) requiring simultaneous access to the resulting experimental data. Since it seems unrealistic to provide the necessary computing and storage resources at one single place (e.g. CERN), the concept of grid computing, i.e. the use of distributed resources, will be chosen. The DataGrid project (under the leadership of CERN) develops, based on the Globus toolkit, the software necessary for computation and analysis of shared large-scale databases in a grid structure. The high energy physics group in Innsbruck participates with several resources in the DataGrid test bed. In this presentation our experience as grid users and resource providers is summarized. In cooperation with the local IT-center (ZID) we installed a flexible grid system which uses PCs (at the moment 162) in students' labs during nights, weekends and holidays; it is used in particular to compare different systems (local resource managers, other grid software, e.g. from the Nordugrid project) and to supply a test bed for the future Austrian Grid (AGrid). (author)

  12. Usage of super high speed computer for clarification of complex phenomena

    International Nuclear Information System (INIS)

    Sekiguchi, Tomotsugu; Sato, Mitsuhisa; Nakata, Hideki; Tatebe, Osami; Takagi, Hiromitsu

    1999-01-01

    This study aims at constructing an efficient application environment for super-high-speed computers that can easily be ported to parallel and distributed systems with different architectures and numbers of processors, by conducting research and development on supercomputer application technology required for the elucidation of complex phenomena in the nuclear power field using computational science methods. In order to realize such an environment, the Electrotechnical Laboratory has developed Ninf, a network numerical information library. The Ninf system can provide a global network infrastructure for high-performance worldwide computing over wide-area distributed networks. (G.K.)

  13. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    Science.gov (United States)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that support the modelling and data processing pipelines, 2) the analysis environments and methods that support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enables accurate climate projections. NCI makes available 10+ PB of major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), an HPC-class 3000-core OpenStack cloud system and several highly connected, large-scale, high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated, reusable software and workflows from earth system and ecosystem modelling, weather research, and satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data need to be harmonised so that researchers can readily apply techniques and software across the corpus of available data and are not constrained to work within artificial disciplinary boundaries. Future challenges will

  14. Psychological demand and control of the work process of public university servants.

    Science.gov (United States)

    Moura, Denise Cristina Alves de; Greco, Rosangela Maria; Paschoalin, Heloisa Campos; Portela, Luciana Fernandes; Arreguy-Sena, Cristina; Chaoubah, Alfredo

    2018-02-01

    This cross-sectional research aimed to analyze the psychological demand and work control self-reported by the Education Administrative Technicians of a public university. The complete sample consisted of 833 Education Administrative Technicians who self-completed a structured questionnaire in 2013/2014. A descriptive bivariate analysis was performed, with psychosocial stress at work calculated using the Demand-Control Model quadrants categorized as: low-demand work (low demand and high control), the reference group; passive work (low demand and low control); active work (high demand and high control); and high-demand work (high demand and low control), the group with the highest exposure. The study complies with all ethical and legal requirements for research involving human beings. There was a predominance of workers performing passive work (n = 319, 39.7%), followed by low work demand (n = 274, 34.1%), high work demand (n = 116, 14.4%) and active work (n = 95, 11.8%). The investigation contributes to the health of these workers insofar as it provides a diagnosis of the category. Such data are recommended to support interventions that empower these workers and redesign their jobs.
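
    The quadrant classification can be sketched as below; splitting demand and control scores at their sample medians is one common convention, and the cut-points, scales and labels here are illustrative assumptions rather than the study's exact procedure:

        from statistics import median

        # Minimal sketch of a Demand-Control (Karasek) quadrant classification.
        # The labels correspond to the four groups described above; the scores and
        # median split are illustrative, not the study's actual scales.

        def classify(workers):
            d_med = median(w["demand"] for w in workers)
            c_med = median(w["control"] for w in workers)
            labels = []
            for w in workers:
                high_d, high_c = w["demand"] > d_med, w["control"] > c_med
                if not high_d and high_c:
                    labels.append("low-demand work")   # reference group
                elif not high_d and not high_c:
                    labels.append("passive work")
                elif high_d and high_c:
                    labels.append("active work")
                else:
                    labels.append("high-demand work")  # highest-exposure group
            return labels

        sample = [{"demand": d, "control": c}
                  for d, c in [(8, 14), (6, 7), (15, 16), (17, 6), (9, 12), (16, 5)]]
        print(classify(sample))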

  15. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1988-08-01

    Fermilab's Advanced Computer Program (ACP) has been developing highly cost-effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors, each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN- or C-programmable pipelined 20 MFlops (peak), 10 MByte single-board computer. These are plugged into a 16-port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256-node, 5 GFlop system is under construction. 10 refs., 7 figs

  16. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  17. Introduction to massively-parallel computing in high-energy physics

    CERN Document Server

    AUTHOR|(CDS)2083520

    1993-01-01

    Ever since computers were first used for scientific and numerical work, there has existed an "arms race" between the technical development of faster computing hardware, and the desires of scientists to solve larger problems in shorter time-scales. However, the vast leaps in processor performance achieved through advances in semi-conductor science have reached a hiatus as the technology comes up against the physical limits of the speed of light and quantum effects. This has led all high performance computer manufacturers to turn towards a parallel architecture for their new machines. In these lectures we will introduce the history and concepts behind parallel computing, and review the various parallel architectures and software environments currently available. We will then introduce programming methodologies that allow efficient exploitation of parallel machines, and present case studies of the parallelization of typical High Energy Physics codes for the two main classes of parallel computing architecture (S...

  18. International Conference: Computer-Aided Design of High-Temperature Materials

    National Research Council Canada - National Science Library

    Kalia, Rajiv

    1998-01-01

    The conference was attended by experimental and computational materials scientists, and experts in high performance computing and communications from universities, government laboratories, and industries in the U.S., Europe, and Japan...

  19. Demand Uncertainty

    DEFF Research Database (Denmark)

    Nguyen, Daniel Xuyen

    This paper presents a model of trade that explains why firms wait to export and why many exporters fail. Firms face uncertain demands that are only realized after the firm enters the destination. The model retools the timing of uncertainty resolution found in productivity heterogeneity models. This retooling addresses several shortcomings. First, the imperfect correlation of demands reconciles the sales variation observed in and across destinations. Second, since demands for the firm's output are correlated across destinations, a firm can use previously realized demands to forecast unknown demands in untested destinations. The option to forecast demands causes firms to delay exporting in order to gather more information about foreign demand. Third, since uncertainty is resolved after entry, many firms enter a destination and then exit after learning that they cannot profit. This prediction reconciles

  20. Load demand profile for a large charging station of a fleet of all-electric plug-in buses

    Directory of Open Access Journals (Sweden)

    Mario A. Rios

    2014-08-01

    Full Text Available This study proposes a general procedure to compute the load demand profile of a parking lot where a fleet of buses with electric propulsion is charged. The procedure is divided into three stages: the first models the daily energy use of the batteries, based on Monte Carlo simulations and route characteristics; the second models the process in the charging station, based on discrete-event simulation of queues of buses served by a set of available chargers; and the third computes the final demand profile of the parking lot resulting from the charging process, based on the power consumption of the battery chargers and the utilisation of the available chargers. The proposed procedure allows the computation of the number of battery chargers required in a charging station placed at a parking lot in order to ensure the operation of the fleet, the computation of the power demand profile and the peak load, and the computation of the general characteristics of the electrical infrastructure needed to supply power to the station.
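
    A miniature version of the three-stage procedure might look like the sketch below; the fleet size, charger rating, route statistics and arrival window are illustrative assumptions, not the paper's data:

        import random

        # Minimal sketch of the three-stage procedure (all distributions, fleet sizes
        # and charger ratings are illustrative assumptions).

        RNG = random.Random(7)
        N_BUSES, N_CHARGERS, CHARGER_KW = 40, 8, 50.0

        # Stage 1: Monte Carlo model of the daily energy used by each bus on its route.
        def daily_energy_kwh():
            km = RNG.gauss(180, 20)                 # route length
            kwh_per_km = RNG.gauss(1.3, 0.1)        # consumption rate
            return max(0.0, km * kwh_per_km)

        # Stage 2: queue of arriving buses served by the available chargers.
        # Stage 3: accumulate charger power per 15-min slot into the demand profile.
        def demand_profile():
            profile = [0.0] * 96                    # kW per 15-min slot over one day
            charger_free_at = [0.0] * N_CHARGERS    # hour at which each charger is free
            arrivals = sorted(RNG.uniform(20.0, 23.0) for _ in range(N_BUSES))
            for arrival in arrivals:
                duration = daily_energy_kwh() / CHARGER_KW      # hours of charging
                k = min(range(N_CHARGERS), key=lambda i: charger_free_at[i])
                start = max(arrival, charger_free_at[k])
                charger_free_at[k] = start + duration
                for slot in range(int(start * 4), min(96, int((start + duration) * 4) + 1)):
                    profile[slot] += CHARGER_KW
            return profile

        profile = demand_profile()
        print("peak load: %.0f kW at slot %d" % (max(profile), profile.index(max(profile))))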

  1. Cloud Computing Quality

    Directory of Open Access Journals (Sweden)

    Anamaria Şiclovan

    2013-02-01

    Full Text Available Cloud computing was, and will continue to be, a new way of providing Internet and computing services. This computing approach builds on many existing services, such as the Internet, grid computing, and Web services. As a system, cloud computing aims to provide on-demand services at more acceptable prices and infrastructure costs. It represents the transition from the computer as a product to a service offered to consumers and delivered online. This paper describes the quality of cloud computing services, analyzing the advantages and characteristics offered by it. It is a theoretical paper. Keywords: Cloud computing, QoS, quality of cloud computing

  2. Cloud computing approaches to accelerate drug discovery value chain.

    Science.gov (United States)

    Garg, Vibhav; Arora, Suchir; Gupta, Chitra

    2011-12-01

    Continued advancements in the area of technology have helped high throughput screening (HTS) evolve from a linear to a parallel approach by performing system-level screening. Advanced experimental methods used for HTS at various steps of drug discovery (i.e. target identification, target validation, lead identification and lead validation) can generate data on the order of terabytes. As a consequence, there is a pressing need to store, manage, mine and analyze this data to identify informational tags. This need in turn poses challenges for computer scientists to offer matching hardware and software infrastructure, while managing the varying degree of computational power required. Therefore, the potential of "On-Demand Hardware" and "Software as a Service (SAAS)" delivery mechanisms cannot be denied. This on-demand computing, largely referred to as Cloud Computing, is now transforming drug discovery research. Also, the integration of cloud computing with parallel computing is certainly expanding its footprint in the life sciences community. Speed, efficiency and cost effectiveness have made cloud computing a 'good to have' tool for researchers, providing them significant flexibility and allowing them to focus on the 'what' of science and not the 'how'. Once it reaches maturity, the Discovery-Cloud would be best suited to managing drug discovery and clinical development data generated using advanced HTS techniques, hence supporting the vision of personalized medicine.

  3. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  4. Short-Run and Long-Run Elasticities of Diesel Demand in Korea

    Directory of Open Access Journals (Sweden)

    Seung-Hoon Yoo

    2012-11-01

    Full Text Available This paper investigates the demand function for diesel in Korea covering the period 1986–2011. The short-run and long-run elasticities of diesel demand with respect to price and income are empirically examined using a co-integration and error-correction model. The short-run and long-run price elasticities are estimated to be −0.357 and −0.547, respectively. The short-run and long-run income elasticities are computed to be 1.589 and 1.478, respectively. Thus, diesel demand is relatively inelastic to price changes and elastic to income changes in both the short run and the long run. Therefore, demand-side management through raising the price of diesel will be ineffective, and tightening regulation so that diesel is used more efficiently appears to be more effective in Korea. The demand for diesel is expected to increase continuously as the economy grows.
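
    The co-integration and error-correction estimation can be illustrated with an Engle-Granger two-step sketch on synthetic data; the data-generating process and specification below are assumptions for illustration, not the paper's series:

        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        # Engle-Granger two-step error-correction sketch on synthetic log series.
        rng = np.random.default_rng(0)
        n = 120
        ln_p = np.cumsum(rng.normal(0, 0.03, n)) + 1.0            # log price
        ln_y = np.cumsum(rng.normal(0.005, 0.01, n)) + 5.0        # log income
        ln_q = -0.5 * ln_p + 1.5 * ln_y + rng.normal(0, 0.02, n)  # log diesel demand

        # Step 1: long-run (co-integrating) regression; slopes are long-run elasticities.
        X_lr = sm.add_constant(np.column_stack([ln_p, ln_y]))
        long_run = sm.OLS(ln_q, X_lr).fit()
        ect = ln_q - long_run.predict(X_lr)                       # error-correction term

        # Step 2: short-run dynamics; slopes on the differences are short-run elasticities.
        d = pd.DataFrame({"dq": np.diff(ln_q), "dp": np.diff(ln_p),
                          "dy": np.diff(ln_y), "ect_lag": ect[:-1]})
        X_sr = sm.add_constant(d[["dp", "dy", "ect_lag"]])
        short_run = sm.OLS(d["dq"], X_sr).fit()

        print("long-run elasticities (price, income):", long_run.params[1:].round(3))
        print("short-run elasticities (price, income):", short_run.params[["dp", "dy"]].round(3))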

  5. Energy demand patterns

    Energy Technology Data Exchange (ETDEWEB)

    Hoffmann, L; Schipper, L; Meyers, S; Sathaye, J; Hara, Y

    1984-05-01

    This report brings together three papers on energy demand presented at the Energy Research Priorities Seminar held in Ottawa on 8-10 August 1983. The first paper suggests a framework in which energy demand studies may be organized if they are to be useful in policy-making. Disaggregation and the analysis of the chain of energy transformations are possible paths toward more stable and reliable parameters. The second paper points to another factor that leads to instability in sectoral parameters, namely a changeover from one technology to another; insofar as technologies producing a product (or service) vary in their energy intensity, a technological shift will also change the energy intensity of the product. Rapid technological change is characteristic of some sectors in developing countries, and may well account for the high aggregate GDP-elasticities of energy consumption observed. The third paper begins with estimates of these elasticities, which were greater than one for all the member countries of the Asian Development Bank in 1961-78. The high elasticities, together with extreme oil dependence, made them vulnerable to the drastic rise in the oil price after 1973. The author distinguishes three diverging patterns of national experience. The oil-surplus countries naturally gained from the rise in the oil price. Among oil-deficit countries, the newly industrialized countries expanded their exports so rapidly that the oil crisis no longer worried them. For the rest, balance of payments adjustments became a prime concern of policy. Whether they dealt with the oil bill by borrowing, by import substitution, or by demand restraint, the impact of energy on their growth was unmistakable. The paper also shows why energy-demand studies, and energy studies in general, deserve to be taken seriously. 16 refs., 4 figs., 18 tabs.

  6. CUDA/GPU Technology : Parallel Programming For High Performance Scientific Computing

    OpenAIRE

    YUHENDRA; KUZE, Hiroaki; JOSAPHAT, Tetuko Sri Sumantyo

    2009-01-01

    [ABSTRACT] Graphics processing units (GPUs), originally designed for computer video cards, have emerged as the most powerful chip in a high-performance workstation. In terms of high-performance computation capabilities, graphics processing units (GPUs) deliver much more powerful performance than conventional CPUs by means of parallel processing. In 2007, the birth of the Compute Unified Device Architecture (CUDA) and CUDA-enabled GPUs by NVIDIA Corporation brought a revolution in the general purpose GPU a...

  7. Access control infrastructure for on-demand provisioned virtualised infrastructure services

    NARCIS (Netherlands)

    Demchenko, Y.; Ngo, C.; de Laat, C.; Smari, W.W.; Fox, G.C.

    2011-01-01

    Cloud technologies are emerging as a new way of provisioning virtualised computing and infrastructure services on-demand for collaborative projects and groups. Security in provisioning virtual infrastructure services should address two general aspects: supporting secure operation of the provisioning

  8. Green computing: efficient energy management of multiprocessor streaming applications via model checking

    NARCIS (Netherlands)

    Ahmad, W.

    2017-01-01

    Streaming applications such as virtual reality, video conferencing, and face detection, impose high demands on a system’s performance and battery life. With the advancement in mobile computing, these applications are increasingly implemented on battery-constrained platforms, such as gaming consoles,

  9. Computational Fluid Dynamics (CFD) Computations With Zonal Navier-Stokes Flow Solver (ZNSFLOW) Common High Performance Computing Scalable Software Initiative (CHSSI) Software

    National Research Council Canada - National Science Library

    Edge, Harris

    1999-01-01

    ...), computational fluid dynamics (CFD) 6 project. Under the project, a proven zonal Navier-Stokes solver was rewritten for scalable parallel performance on both shared memory and distributed memory high performance computers...

  10. The pharmacist Aggregate Demand Index to explain changing pharmacist demand over a ten-year period.

    Science.gov (United States)

    Knapp, Katherine K; Shah, Bijal M; Barnett, Mitchell J

    2010-12-15

    To describe Aggregate Demand Index (ADI) trends from 1999-2010; to compare ADI time trends to concurrent data for US unemployment levels, US entry-level pharmacy graduates, and US retail prescription growth rate; and to determine which variables were significant predictors of ADI. Annual ADI data (dependent variable) were analyzed against annual unemployment rates, annual number of pharmacy graduates, and annual prescription growth rate (independent variables). ADI data trended toward lower demand levels for pharmacists since late 2006, paralleling the US economic downturn. National ADI data were most highly correlated with unemployment (p demand. Predictable increases in future graduates and other factors support revisiting the modeling process as new data accumulate.

  11. High-speed linear optics quantum computing using active feed-forward.

    Science.gov (United States)

    Prevedel, Robert; Walther, Philip; Tiefenbacher, Felix; Böhi, Pascal; Kaltenbaek, Rainer; Jennewein, Thomas; Zeilinger, Anton

    2007-01-04

    As information carriers in quantum computing, photonic qubits have the advantage of undergoing negligible decoherence. However, the absence of any significant photon-photon interaction is problematic for the realization of non-trivial two-qubit gates. One solution is to introduce an effective nonlinearity by measurements resulting in probabilistic gate operations. In one-way quantum computation, the random quantum measurement error can be overcome by applying a feed-forward technique, such that the future measurement basis depends on earlier measurement results. This technique is crucial for achieving deterministic quantum computation once a cluster state (the highly entangled multiparticle state on which one-way quantum computation is based) is prepared. Here we realize a concatenated scheme of measurement and active feed-forward in a one-way quantum computing experiment. We demonstrate that, for a perfect cluster state and no photon loss, our quantum computation scheme would operate with good fidelity and that our feed-forward components function with very high speed and low error for detected photons. With present technology, the individual computational step (in our case the individual feed-forward cycle) can be operated in less than 150 ns using electro-optical modulators. This is an important result for the future development of one-way quantum computers, whose large-scale implementation will depend on advances in the production and detection of the required highly entangled cluster states.

  12. The UK-DALE dataset, domestic appliance-level electricity demand and whole-house demand from five UK homes

    Science.gov (United States)

    Kelly, Jack; Knottenbelt, William

    2015-03-01

    Many countries are rolling out smart electricity meters. These measure a home’s total power demand. However, research into consumer behaviour suggests that consumers are best able to improve their energy efficiency when provided with itemised, appliance-by-appliance consumption information. Energy disaggregation is a computational technique for estimating appliance-by-appliance energy consumption from a whole-house meter signal. To conduct research on disaggregation algorithms, researchers require data describing not just the aggregate demand per building but also the ‘ground truth’ demand of individual appliances. In this context, we present UK-DALE: an open-access dataset from the UK recording Domestic Appliance-Level Electricity at a sample rate of 16 kHz for the whole-house and at 1/6 Hz for individual appliances. This is the first open access UK dataset at this temporal resolution. We recorded from five houses, one of which was recorded for 655 days, the longest duration we are aware of for any energy dataset at this sample rate. We also describe the low-cost, open-source, wireless system we built for collecting our dataset.
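
    Working with this kind of dataset typically means pairing the aggregate meter signal with the appliance-level ground truth. The sketch below uses synthetic signals and made-up appliance names (not the UK-DALE file format) to show how the appliance share of aggregate energy can be computed:

        import numpy as np
        import pandas as pd

        # Illustrative sketch: build appliance-level power series at 1/6 Hz, form the
        # whole-house aggregate, and report each appliance's share of total energy --
        # the kind of "ground truth" disaggregation algorithms are evaluated against.

        rng = np.random.default_rng(1)
        idx = pd.date_range("2014-01-01", periods=14_400, freq="6s")    # one day at 1/6 Hz

        appliances = pd.DataFrame({
            "fridge": 80.0 * (rng.random(len(idx)) < 0.4),              # duty-cycled, W
            "kettle": 2500.0 * (rng.random(len(idx)) < 0.005),
            "washing_machine": 500.0 * (rng.random(len(idx)) < 0.05),
        }, index=idx)

        always_on = 60.0                                                # unmetered base load, W
        aggregate = appliances.sum(axis=1) + always_on + rng.normal(0, 5, len(idx))

        # Energy in kWh: mean power (W) * hours / 1000.
        hours = len(idx) * 6 / 3600.0
        energy_kwh = appliances.mean() * hours / 1000.0
        total_kwh = aggregate.mean() * hours / 1000.0

        print(energy_kwh.round(2))
        print("share of aggregate explained:", float((energy_kwh.sum() / total_kwh).round(2)))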

  13. Application of dGNSS in Alpine Ski Racing: Basis for Evaluating Physical Demands and Safety

    Science.gov (United States)

    Gilgien, Matthias; Kröll, Josef; Spörri, Jörg; Crivelli, Philip; Müller, Erich

    2018-01-01

    External forces, such as ground reaction force or air drag acting on athletes' bodies in sports, determine the sport-specific demands on athletes' physical fitness. In order to establish appropriate physical conditioning regimes, which adequately prepare athletes for the loads and physical demands occurring in their sports and help reduce the risk of injury, sport-and/or discipline-specific knowledge of the external forces is needed. However, due to methodological shortcomings in biomechanical research, data comprehensively describing the external forces that occur in alpine super-G (SG) and downhill (DH) are so far lacking. Therefore, this study applied new and accurate wearable sensor-based technology to determine the external forces acting on skiers during World Cup (WC) alpine skiing competitions in the disciplines of SG and DH and to compare these with those occurring in giant slalom (GS), for which previous research knowledge exists. External forces were determined using WC forerunners carrying a differential global navigation satellite system (dGNSS). Combining the dGNSS data with a digital terrain model of the snow surface and an air drag model, the magnitudes of ground reaction forces were computed. It was found that the applied methodology may not only be used to track physical demands and loads on athletes, but also to simultaneously investigate safety aspects, such as the effectiveness of speed control through increased air drag and ski–snow friction forces in the respective disciplines. Therefore, the component of the ground reaction force in the direction of travel (ski–snow friction) and air drag force were computed. This study showed that (1) the validity of high-end dGNSS systems allows meaningful investigations such as characterization of physical demands and effectiveness of safety measures in highly dynamic sports; (2) physical demands were substantially different between GS, SG, and DH; and (3) safety-related reduction of skiing speed might

  14. Application of dGNSS in Alpine Ski Racing: Basis for Evaluating Physical Demands and Safety

    Directory of Open Access Journals (Sweden)

    Matthias Gilgien

    2018-03-01

    Full Text Available External forces, such as ground reaction force or air drag acting on athletes' bodies in sports, determine the sport-specific demands on athletes' physical fitness. In order to establish appropriate physical conditioning regimes, which adequately prepare athletes for the loads and physical demands occurring in their sports and help reduce the risk of injury, sport- and/or discipline-specific knowledge of the external forces is needed. However, due to methodological shortcomings in biomechanical research, data comprehensively describing the external forces that occur in alpine super-G (SG) and downhill (DH) are so far lacking. Therefore, this study applied new and accurate wearable sensor-based technology to determine the external forces acting on skiers during World Cup (WC) alpine skiing competitions in the disciplines of SG and DH and to compare these with those occurring in giant slalom (GS), for which previous research knowledge exists. External forces were determined using WC forerunners carrying a differential global navigation satellite system (dGNSS). Combining the dGNSS data with a digital terrain model of the snow surface and an air drag model, the magnitudes of ground reaction forces were computed. It was found that the applied methodology may not only be used to track physical demands and loads on athletes, but also to simultaneously investigate safety aspects, such as the effectiveness of speed control through increased air drag and ski–snow friction forces in the respective disciplines. Therefore, the component of the ground reaction force in the direction of travel (ski–snow friction) and air drag force were computed. This study showed that (1) the validity of high-end dGNSS systems allows meaningful investigations such as characterization of physical demands and effectiveness of safety measures in highly dynamic sports; (2) physical demands were substantially different between GS, SG, and DH; and (3) safety-related reduction of skiing

  15. Application of dGNSS in Alpine Ski Racing: Basis for Evaluating Physical Demands and Safety.

    Science.gov (United States)

    Gilgien, Matthias; Kröll, Josef; Spörri, Jörg; Crivelli, Philip; Müller, Erich

    2018-01-01

    External forces, such as ground reaction force or air drag acting on athletes' bodies in sports, determine the sport-specific demands on athletes' physical fitness. In order to establish appropriate physical conditioning regimes, which adequately prepare athletes for the loads and physical demands occurring in their sports and help reduce the risk of injury, sport-and/or discipline-specific knowledge of the external forces is needed. However, due to methodological shortcomings in biomechanical research, data comprehensively describing the external forces that occur in alpine super-G (SG) and downhill (DH) are so far lacking. Therefore, this study applied new and accurate wearable sensor-based technology to determine the external forces acting on skiers during World Cup (WC) alpine skiing competitions in the disciplines of SG and DH and to compare these with those occurring in giant slalom (GS), for which previous research knowledge exists. External forces were determined using WC forerunners carrying a differential global navigation satellite system (dGNSS). Combining the dGNSS data with a digital terrain model of the snow surface and an air drag model, the magnitudes of ground reaction forces were computed. It was found that the applied methodology may not only be used to track physical demands and loads on athletes, but also to simultaneously investigate safety aspects, such as the effectiveness of speed control through increased air drag and ski-snow friction forces in the respective disciplines. Therefore, the component of the ground reaction force in the direction of travel (ski-snow friction) and air drag force were computed. This study showed that (1) the validity of high-end dGNSS systems allows meaningful investigations such as characterization of physical demands and effectiveness of safety measures in highly dynamic sports; (2) physical demands were substantially different between GS, SG, and DH; and (3) safety-related reduction of skiing speed might be

  16. High Performance Computing Software Applications for Space Situational Awareness

    Science.gov (United States)

    Giuliano, C.; Schumacher, P.; Matson, C.; Chun, F.; Duncan, B.; Borelli, K.; Desonia, R.; Gusciora, G.; Roe, K.

    The High Performance Computing Software Applications Institute for Space Situational Awareness (HSAI-SSA) has completed its first full year of applications development. The emphasis of our work in this first year was in improving space surveillance sensor models and image enhancement software. These applications are the Space Surveillance Network Analysis Model (SSNAM), the Air Force Space Fence simulation (SimFence), and physically constrained iterative de-convolution (PCID) image enhancement software tool. Specifically, we have demonstrated order of magnitude speed-up in those codes running on the latest Cray XD-1 Linux supercomputer (Hoku) at the Maui High Performance Computing Center. The software applications improvements that HSAI-SSA has made, has had significant impact to the warfighter and has fundamentally changed the role of high performance computing in SSA.

  17. High-order hydrodynamic algorithms for exascale computing

    Energy Technology Data Exchange (ETDEWEB)

    Morgan, Nathaniel Ray [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-02-05

    Hydrodynamic algorithms are at the core of many laboratory missions ranging from simulating ICF implosions to climate modeling. The hydrodynamic algorithms commonly employed at the laboratory and in industry (1) typically lack requisite accuracy for complex multi-material vortical flows and (2) are not well suited for exascale computing due to poor data locality and poor FLOP/memory ratios. Exascale computing requires advances in both computer science and numerical algorithms. We propose to research the second requirement and create a new high-order hydrodynamic algorithm that has superior accuracy, excellent data locality, and excellent FLOP/memory ratios. This proposal will impact a broad range of research areas including numerical theory, discrete mathematics, vorticity evolution, gas dynamics, interface instability evolution, turbulent flows, fluid dynamics and shock driven flows. If successful, the proposed research has the potential to radically transform simulation capabilities and help position the laboratory for computing at the exascale.

  18. High-resolution computed tomography findings in pulmonary Langerhans cell histiocytosis

    Energy Technology Data Exchange (ETDEWEB)

    Rodrigues, Rosana Souza [Universidade Federal do Rio de Janeiro (HUCFF/UFRJ), RJ (Brazil). Hospital Universitario Clementino Fraga Filho. Unit of Radiology; Capone, Domenico; Ferreira Neto, Armando Leao [Universidade do Estado do Rio de Janeiro (UERJ), Rio de Janeiro, RJ (Brazil)

    2011-07-15

    Objective: The present study was aimed at characterizing the main lung changes observed in pulmonary Langerhans cell histiocytosis by means of high-resolution computed tomography. Materials and Methods: High-resolution computed tomography findings in eight patients with proven disease diagnosed by open lung biopsy, immunohistochemistry studies and/or extrapulmonary manifestations were retrospectively evaluated. Results: Small, rounded, thin-walled cystic lesions were observed in the lungs of all the patients. Nodules with predominantly peripheral distribution over the lung parenchyma were observed in 75% of the patients. The lesions were diffusely distributed, predominantly in the upper and middle lung fields in all of the cases, but involvement of the costophrenic angles was observed in 25% of the patients. Conclusion: Comparative analysis of high-resolution computed tomography and chest radiography findings demonstrated that thin-walled cysts and small nodules cannot be satisfactorily evaluated by conventional radiography. Because of its capacity to detect and characterize lung cysts and nodules, high-resolution computed tomography increases the probability of diagnosing pulmonary Langerhans cell histiocytosis. (author)

  19. Design Anthropology, Emerging Technologies and Alternative Computational Futures

    DEFF Research Database (Denmark)

    Smith, Rachel Charlotte

    Emerging technologies are providing a new field for design anthropological inquiry that unites experiences, imaginaries and materialities in complex ways and demands new approaches to developing sustainable computational futures.

  20. Monitoring SLAC High Performance UNIX Computing Systems

    International Nuclear Information System (INIS)

    Lettsome, Annette K.

    2005-01-01

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. Monitoring such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed, since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process taken in the creation and implementation of the MySQL database for use by Ganglia. Comparisons between data storage by both databases are made using gnuplot and Ganglia's real-time graphical user interface
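
    A minimal sketch of a script-driven SQL store for Ganglia-style metric samples is shown below; sqlite3 stands in for MySQL so the example is self-contained, and the schema and metric names are assumptions rather than the paper's actual design:

        import sqlite3
        import time

        # Script-driven SQL store for monitoring samples (sqlite3 used here so the
        # example runs anywhere; a MySQL deployment would swap the connection only).

        conn = sqlite3.connect(":memory:")
        conn.execute("""CREATE TABLE metrics (
                           host TEXT, metric TEXT, value REAL, ts INTEGER)""")

        def record(host, metric, value):
            conn.execute("INSERT INTO metrics VALUES (?, ?, ?, ?)",
                         (host, metric, value, int(time.time())))

        # A collector script would poll the monitoring daemons and call record() per sample.
        record("node01", "load_one", 0.42)
        record("node01", "mem_free_kb", 1_953_124)
        record("node02", "load_one", 3.10)
        conn.commit()

        for row in conn.execute(
                "SELECT host, metric, AVG(value) FROM metrics GROUP BY host, metric"):
            print(row)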

  1. ICT Solutions for Highly-Customized Water Demand Management Strategies

    Science.gov (United States)

    Giuliani, M.; Cominola, A.; Castelletti, A.; Fraternali, P.; Guardiola, J.; Barba, J.; Pulido-Velazquez, M.; Rizzoli, A. E.

    2016-12-01

    The recent deployment of smart metering networks is opening new opportunities for advancing the design of residential water demand management strategies (WDMS) that rely on an improved understanding of water consumers' behaviors. Recent applications showed that retrieving information on users' consumption behaviors, along with their explanatory and/or causal factors, is key to spotting potential areas where water-saving efforts should be targeted and to designing user-tailored WDMS. In this study, we explore the potential of ICT-based solutions in supporting the design and implementation of highly customized WDMS. On one side, the collection of consumption data at high spatial and temporal resolution requires big data analytics and machine learning techniques to extract typical consumption features from the metered population of water users. On the other side, ICT solutions and gamification can be used as effective means for facilitating both users' engagement and the collection of socio-psychographic user information. The latter allows the extracted profiles to be interpreted and improved, ultimately supporting the customization of WDMS, such as awareness campaigns or personalized recommendations. Our approach is implemented in the SmartH2O platform and demonstrated in a pilot application in Valencia, Spain. Results show how the analysis of the smart-metered consumption data, combined with the information retrieved from a gamified ICT web user portal, successfully identifies the typical consumption profiles of the metered users and supports the design of alternative WDMS targeting the different users' profiles.

  2. How do household characteristics affect appliance usage? Application of conditional demand analysis to Japanese household data

    International Nuclear Information System (INIS)

    Matsumoto, Shigeru

    2016-01-01

    Although both appliance ownership and usage patterns determine residential electricity consumption, less is known about how households actually use their appliances. In this study, we conduct conditional demand analyses to break down total household electricity consumption into a set of demand functions for electricity usage across 12 appliance categories. We then examine how the socioeconomic characteristics of the households explain their appliance usage. Analysis of micro-level data from the National Survey of Family and Expenditure in Japan reveals that the family and income structure of households affects appliance usage. Specifically, we find that the presence of teenagers increases both air conditioner and dishwasher use, labor income and non-labor income affect microwave usage in different ways, air conditioner usage decreases as the wife's income increases, and microwave usage decreases as the husband's income increases. Furthermore, we find that households use more electricity with new personal computers than with old ones; this implies that the replacement of old personal computers increases electricity consumption. - Highlights: •We conduct conditional demand analyses to study household appliance usage. •Micro-level data from the National Survey of Family and Expenditure in Japan are analyzed. •We show how household characteristics determine appliance usage. •High-income households use specific appliances less intensively than low-income households. •The replacement of old TVs and PCs leads to greater electricity consumption.
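
    Conditional demand analysis essentially regresses total consumption on appliance-ownership indicators (plus interactions with household characteristics), so each coefficient estimates the average usage attributable to an appliance. The sketch below runs on synthetic data; the appliances, effect sizes and interaction are illustrative assumptions:

        import numpy as np
        import statsmodels.api as sm

        # Conditional demand analysis sketch on synthetic data: regress total
        # household electricity use on appliance-ownership dummies so each
        # coefficient estimates that appliance's average usage; interactions with
        # household characteristics are added the same way.

        rng = np.random.default_rng(3)
        n = 2000
        own_ac = rng.random(n) < 0.7            # air conditioner ownership
        own_dw = rng.random(n) < 0.3            # dishwasher ownership
        teens = rng.random(n) < 0.25            # household has teenagers

        base = 250.0                            # kWh/month not tied to the listed appliances
        usage = (base
                 + own_ac * (90 + 40 * teens)   # AC used more when teenagers are present
                 + own_dw * 30
                 + rng.normal(0, 25, n))

        X = sm.add_constant(np.column_stack([own_ac, own_dw, own_ac * teens]).astype(float))
        fit = sm.OLS(usage, X).fit()
        print(fit.params.round(1))   # [base, AC usage, dishwasher usage, AC x teenager effect]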

  3. Using Amazon's Elastic Compute Cloud to dynamically scale CMS computational resources

    International Nuclear Information System (INIS)

    Evans, D; Fisk, I; Holzman, B; Pordes, R; Tiradani, A; Melo, A; Sheldon, P; Metson, S

    2011-01-01

    Large international scientific collaborations such as the Compact Muon Solenoid (CMS) experiment at the Large Hadron Collider have traditionally addressed their data reduction and analysis needs by building and maintaining dedicated computational infrastructure. Emerging cloud computing services such as Amazon's Elastic Compute Cloud (EC2) offer short-term CPU and storage resources with costs based on usage. These services allow experiments to purchase computing resources as needed, without significant prior planning and without long-term investments in facilities and their management. We have demonstrated that services such as EC2 can successfully be integrated into the production-computing model of CMS, and find that they work very well as worker nodes. The cost structure and transient nature of EC2 services make them inappropriate for some CMS production services and functions. We also found that the resources are not truly 'on-demand', as limits and caps on usage are imposed. Our trial workflows allow us to make a cost comparison between EC2 resources and dedicated CMS resources at a university, and we conclude that it is most cost-effective to purchase dedicated resources for the 'base-line' needs of experiments such as CMS. However, if the ability to use cloud computing resources is built into an experiment's software framework before demand requires their use, cloud computing resources make sense for bursting during times when spikes in usage are required.

  4. Cloud Computing : Research Issues and Implications

    OpenAIRE

    Marupaka Rajenda Prasad; R. Lakshman Naik; V. Bapuji

    2013-01-01

    Cloud computing is a rapidly developing and highly promising technology. It has attracted the attention of the computing community worldwide. Cloud computing is Internet-based computing, whereby shared information, resources, and software are provided to terminals and portable devices on demand, like the energy grid. Cloud computing is the product of the combination of grid computing, distributed computing, parallel computing, and ubiquitous computing. It aims to build and forecast sophisti...

  5. Railing for safety: job demands, job control, and safety citizenship role definition.

    Science.gov (United States)

    Turner, Nick; Chmiel, Nik; Walls, Melanie

    2005-10-01

    This study investigated job demands and job control as predictors of safety citizenship role definition, that is, employees' role orientation toward improving workplace safety. Data from a survey of 334 trackside workers were framed in the context of R. A. Karasek's (1979) job demands-control model. High job demands were negatively related to safety citizenship role definition, whereas high job control was positively related to this construct. Safety citizenship role definition of employees with high job control was buffered from the influence of high job demands, unlike that of employees with low job control, for whom high job demands were related to lower levels of the construct. Employees facing both high job demands and low job control were less likely than other employees to view improving safety as part of their role orientation. Copyright (c) 2005 APA, all rights reserved.

  6. Modelling UK energy demand to 2000

    International Nuclear Information System (INIS)

    Thomas, S.D.

    1980-01-01

    A recent long-term demand forecast for the UK was made by Cheshire and Surrey (SPRU Occasional Paper Series No. 5, Science Policy Research Unit, Univ. of Sussex, 1978). Although they adopted a sectoral approach, their study leaves some questions unanswered. Do they succeed in their aim of making all their assumptions fully explicit? How sensitive are their estimates to changes in assumptions and policies? Are important problems and 'turning points' fully identified in the period up to and immediately beyond their time horizon of 2000? The author addresses these questions by using a computer model based on the study by Cheshire and Surrey. This article is a shortened version of the report, S.D. Thomas, 'Modelling UK Energy Demand to 2000', Operational Research, Univ. of Sussex, Brighton, UK, 1979, in which full details of the author's model are given. Copies are available from the author. (author)

  7. Modelling UK energy demand to 2000

    Energy Technology Data Exchange (ETDEWEB)

    Thomas, S D [Sussex Univ., Brighton (UK)

    1980-03-01

    A recent long-term demand forecast for the UK was made by Cheshire and Surrey (SPRU Occasional Paper Series No. 5, Science Policy Research Unit, Univ. of Sussex, 1978). Although they adopted a sectoral approach, their study leaves some questions unanswered. Do they succeed in their aim of making all their assumptions fully explicit? How sensitive are their estimates to changes in assumptions and policies? Are important problems and 'turning points' fully identified in the period up to and immediately beyond their time horizon of 2000? The author addresses these questions by using a computer model based on the study by Cheshire and Surrey. This article is a shortened version of the report, S.D. Thomas, 'Modelling UK Energy Demand to 2000', Operational Research, Univ. of Sussex, Brighton, UK, 1979, in which full details of the author's model are given. Copies are available from the author.

  8. Cloud Computing. Technology Briefing. Number 1

    Science.gov (United States)

    Alberta Education, 2013

    2013-01-01

    Cloud computing is Internet-based computing in which shared resources, software and information are delivered as a service that computers or mobile devices can access on demand. Cloud computing is already used extensively in education. Free or low-cost cloud-based services are used daily by learners and educators to support learning, social…

  9. Federal High End Computing (HEC) Information Portal

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This portal provides information about opportunities to engage in U.S. Federal government high performance computing activities, including supercomputer use,...

  10. High Performance Computing Modernization Program Kerberos Throughput Test Report

    Science.gov (United States)

    2017-10-26

    Naval Research Laboratory, Washington, DC 20375-5320. NRL/MR/5524--17-9751: High Performance Computing Modernization Program Kerberos Throughput Test Report, by Daniel G. Gdula and ...

  11. Demand chain management - The evolution

    Directory of Open Access Journals (Sweden)

    D Ericsson

    2011-06-01

    Full Text Available The concepts of Supply Chain Management (SCM) and Demand Chain Management (DCM) are among the new and debated topics concerning logistics in the literature. The question considered in this paper is: “Are these concepts needed or will they just add to the confusion?” Lasting business concepts have always evolved in close interaction between business and academia. Different approaches start out in business and they are then, more or less simultaneously, aligned, integrated, systemised and structured in academia. In this way a terminology (or language) is provided that helps in further diffusion of the concepts. There is a lack of consensus on the definition of the concept of SCM. This may be one of the major reasons for the difficulty in advancing the science and measuring the results of implementation in business. Relationships in SCM span from rather loose coalitions to highly structured virtual network integrations. DCM is a highly organised chain in which the key is mutual interdependence and partnership. The purpose is to create a distinctive competence for the chain as a whole that helps to identify and satisfy customer needs and wishes. The classical research concerning vertical marketing systems is very helpful in systemising the rather unstructured discussions in current SCM research. The trend lies in increasing competition between channels rather than between companies, which in turn leads to the creation of channels with a high degree of partnership and mutual interdependence between members. These types of channels are known as organised vertical marketing systems in the classic marketing channel research. The behaviour in these types of channels, as well as the formal and informal structures, roles in the network, power and dependence relations, etc. are well covered topics in the literature. The concept of vertical marketing systems lies behind the definition of demand chains and demand chain management proposed in this paper. A

  12. Battlefield awareness computers: the engine of battlefield digitization

    Science.gov (United States)

    Ho, Jackson; Chamseddine, Ahmad

    1997-06-01

    To modernize the army for the 21st century, the U.S. Army Digitization Office (ADO) initiated in 1995 the Force XXI Battle Command Brigade-and-Below (FBCB2) Applique program which became a centerpiece in the U.S. Army's master plan to win future information wars. The Applique team led by TRW fielded a 'tactical Internet' for Brigade and below command to demonstrate the advantages of 'shared situation awareness' and battlefield digitization in advanced war-fighting experiments (AWE) to be conducted in March 1997 at the Army's National Training Center in California. Computing Devices is designated the primary hardware developer for the militarized version of the battlefield awareness computers. The first generation of militarized battlefield awareness computer, designated as the V3 computer, was an integration of off-the-shelf components developed to meet the aggressive delivery requirements of the Task Force XXI AWE. The design efficiency and cost effectiveness of the computer hardware were secondary in importance to delivery deadlines imposed by the March 1997 AWE. However, declining defense budgets will impose cost constraints on the Force XXI production hardware that can only be met by rigorous value engineering to further improve design optimization for battlefield awareness without compromising the level of reliability the military has come to expect in modern military hardened vetronics. To answer the Army's needs for a more cost effective computing solution, Computing Devices developed a second generation 'combat ready' battlefield awareness computer, designated the V3+, which is designed specifically to meet the upcoming demands of Force XXI (FBCB2) and beyond. The primary design objective is to achieve a technologically superior design, value engineered to strike an optimal balance between reliability, life cycle cost, and procurement cost. Recognizing that the diverse digitization demands of Force XXI cannot be adequately met by any one computer hardware

  13. Evaluation of resource allocation and supply-demand balance in clinical practice with high-cost technologies.

    Science.gov (United States)

    Otsubo, Tetsuya; Imanaka, Yuichi; Lee, Jason; Hayashida, Kenshi

    2011-12-01

    Japan has one of the highest numbers of high-cost medical devices installed relative to its population. While evaluations of the distribution of these devices traditionally involve simple population-based assessments, an indicator that includes the demand of these devices would more accurately reflect the situation. The purpose of this study was to develop an indicator of the supply-demand balance of such devices, using examples of magnetic resonance imaging scanners (MRI) and extracorporeal shockwave lithotripters (ESWL), and to investigate the relationship between this indicator, personnel distribution statuses and operating statuses at the prefectural level. Using data from nation-wide surveys and claims data from 16 hospitals, we developed an indicator based on the ratio of the supplied number of device units to the number of device units in demand for MRI and ESWL. The latter value was based on patient volume and utilization proportion. Correlation analyses were conducted between the supply-demand balances of these devices, personnel distribution and operating statuses. Comparisons between our indicator and conventional population-based indicators revealed that 15% and 30% of prefectures were at risk of underestimating the availability of MRI and ESWL, respectively. The numbers of specialist personnel/device units showed significant, negative correlations with our indicators in both devices. Utilization-based analyses of health care resource placement and utilization status provide a more accurate indication than simple population-based assessments, and can assist decision makers in reviewing gaps between health policy and management. Such an indicator therefore has the potential to be a tool in helping to improve the efficiency of the allocation and placement of such devices. © 2010 Blackwell Publishing Ltd.

  14. Perceived job demands relate to self-reported health complaints

    NARCIS (Netherlands)

    Roelen, C.A.M.; Schreuder, K.J.; Koopmans, P.C.; Groothoff, J.W.

    Background Illness and illness behaviour are important problems in the Dutch workforce. Illness has been associated with job demands, with high demands relating to poorer health. It has not been reported whether subjective health complaints relate to job demands. Aims To investigate whether

  15. SCEAPI: A unified Restful Web API for High-Performance Computing

    Science.gov (United States)

    Rongqiang, Cao; Haili, Xiao; Shasha, Lu; Yining, Zhao; Xiaoning, Wang; Xuebin, Chi

    2017-10-01

    The development of scientific computing is increasingly moving to collaborative web and mobile applications. All these applications need high-quality programming interfaces for accessing heterogeneous computing resources consisting of clusters, grid computing or cloud computing. In this paper, we introduce our high-performance computing environment that integrates computing resources from 16 HPC centers across China. Then we present a bundle of web services called SCEAPI and describe how it can be used to access HPC resources with HTTP or HTTPS protocols. We discuss SCEAPI from several aspects including architecture, implementation and security, and address specific challenges in designing compatible interfaces and protecting sensitive data. We describe the functions of SCEAPI including authentication, file transfer and job management for creating, submitting and monitoring jobs, and how to use SCEAPI in an easy-to-use way. Finally, we discuss how to exploit more HPC resources quickly for the ATLAS experiment by implementing the custom ARC compute element based on SCEAPI, and our work shows that SCEAPI is an easy-to-use and effective solution to extend opportunistic HPC resources.
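
    The general RESTful pattern the abstract describes (authenticate, transfer files, submit and monitor jobs over HTTP) can be sketched as follows. The endpoint paths, field names and token scheme below are hypothetical placeholders, not SCEAPI's actual API.

      # Illustrative-only sketch of a RESTful HPC job workflow; the base URL,
      # endpoints and JSON fields are invented for this example.
      import requests

      BASE = "https://hpc.example.org/api"  # placeholder base URL

      # 1. Authenticate and obtain a token (hypothetical endpoint and fields).
      token = requests.post(f"{BASE}/auth", json={"user": "alice", "password": "***"}).json()["token"]
      headers = {"Authorization": f"Bearer {token}"}

      # 2. Upload an input file.
      with open("input.dat", "rb") as f:
          requests.post(f"{BASE}/files", headers=headers, files={"file": f}).raise_for_status()

      # 3. Create and submit a job, then poll its state.
      job = requests.post(f"{BASE}/jobs", headers=headers,
                          json={"app": "demo-app", "cores": 64, "input": "input.dat"}).json()
      state = requests.get(f"{BASE}/jobs/{job['id']}", headers=headers).json()["state"]
      print("job state:", state)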

  16. Computer performance evaluation of FACOM 230-75 computer system, (2)

    International Nuclear Information System (INIS)

    Fujii, Minoru; Asai, Kiyoshi

    1980-08-01

    In this report are described computer performance evaluations for FACOM 230-75 computers in JAERI. The evaluations are performed on the following items: (1) Cost/benefit analysis of timesharing terminals, (2) Analysis of the response time of timesharing terminals, (3) Analysis of throughput time for batch job processing, (4) Estimation of current potential demands for computer time, (5) Determination of appropriate number of card readers and line printers. These evaluations are done mainly from the standpoint of cost reduction of computing facilities. The techniques adopted are very practical ones. This report will be useful for those people who are concerned with the management of computing installations. (author)

  17. Audiovisual integration of speech falters under high attention demands.

    Science.gov (United States)

    Alsius, Agnès; Navarra, Jordi; Campbell, Ruth; Soto-Faraco, Salvador

    2005-05-10

    One of the most commonly cited examples of human multisensory integration occurs during exposure to natural speech, when the vocal and the visual aspects of the signal are integrated in a unitary percept. Audiovisual association of facial gestures and vocal sounds has been demonstrated in nonhuman primates and in prelinguistic children, arguing for a general basis for this capacity. One critical question, however, concerns the role of attention in such multisensory integration. Although both behavioral and neurophysiological studies have converged on a preattentive conceptualization of audiovisual speech integration, this mechanism has rarely been measured under conditions of high attentional load, when the observers' attention resources are depleted. We tested the extent to which audiovisual integration was modulated by the amount of available attentional resources by measuring the observers' susceptibility to the classic McGurk illusion in a dual-task paradigm. The proportion of visually influenced responses was severely, and selectively, reduced if participants were concurrently performing an unrelated visual or auditory task. In contrast with the assumption that crossmodal speech integration is automatic, our results suggest that these multisensory binding processes are subject to attentional demands.

  18. CloVR: a virtual machine for automated and portable sequence analysis from the desktop using cloud computing.

    Science.gov (United States)

    Angiuoli, Samuel V; Matalka, Malcolm; Gussman, Aaron; Galens, Kevin; Vangala, Mahesh; Riley, David R; Arze, Cesar; White, James R; White, Owen; Fricke, W Florian

    2011-08-30

    Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports the use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. The CloVR VM and associated architecture lowers the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high throughput data processing.

  19. On-demand semiconductor single-photon source with near-unity indistinguishability.

    Science.gov (United States)

    He, Yu-Ming; He, Yu; Wei, Yu-Jia; Wu, Dian; Atatüre, Mete; Schneider, Christian; Höfling, Sven; Kamp, Martin; Lu, Chao-Yang; Pan, Jian-Wei

    2013-03-01

    Single-photon sources based on semiconductor quantum dots offer distinct advantages for quantum information, including a scalable solid-state platform, ultrabrightness and interconnectivity with matter qubits. A key prerequisite for their use in optical quantum computing and solid-state networks is a high level of efficiency and indistinguishability. Pulsed resonance fluorescence has been anticipated as the optimum condition for the deterministic generation of high-quality photons with vanishing effects of dephasing. Here, we generate pulsed single photons on demand from a single, microcavity-embedded quantum dot under s-shell excitation with 3 ps laser pulses. The π pulse-excited resonance-fluorescence photons have less than 0.3% background contribution and a vanishing two-photon emission probability. Non-postselective Hong-Ou-Mandel interference between two successively emitted photons is observed with a visibility of 0.97(2), comparable to trapped atoms and ions. Two single photons are further used to implement a high-fidelity quantum controlled-NOT gate.

  20. Towards distributed multiscale computing for the VPH

    NARCIS (Netherlands)

    Hoekstra, A.G.; Coveney, P.

    2010-01-01

    Multiscale modeling is fundamental to the Virtual Physiological Human (VPH) initiative. Most detailed three-dimensional multiscale models lead to prohibitive computational demands. As a possible solution we present MAPPER, a computational science infrastructure for Distributed Multiscale Computing

  1. Computing challenges in HEP for WLHC grid

    CERN Document Server

    Muralidharan, Servesh

    2017-01-01

    As CERN prepares to increase the luminosity of the particle beam towards the HL-LHC, predictions show that computing demand will outgrow our conservative scaling estimates by over ten times. Fortunately, we are talking about a time scale of roughly ten years to develop new techniques and novel solutions to address this gap in compute resources. Experiments at CERN face a unique scenario wherein they need to scale both latency-sensitive workloads, such as data acquisition from the detectors, and throughput-based ones, such as simulations and reconstruction of high-level events and physics processes. In this talk we cover some of the ongoing research at Tier-0 at CERN which investigates several aspects of throughput-sensitive workloads that consume significant compute cycles.

  2. Chicago's water market: Dynamics of demand, prices and scarcity rents

    Science.gov (United States)

    Ipe, V.C.; Bhagwat, S.B.

    2002-01-01

    Chicago and its suburbs are experiencing an increasing demand for water from a growing population and economy and may experience water scarcity in the near future. The Chicago metropolitan area has nearly depleted its groundwater resources to a point where interstate conflicts with Wisconsin could accompany an increased reliance on those sources. Further, withdrawals from Lake Michigan are limited by the Supreme Court decree. The growing demand and indications of possible scarcity suggest a need to reexamine the pricing policies and the dynamics of demand. The study analyses the demand for water and develops estimates of scarcity rents for water in Chicago. The price and income elasticities computed at the means are -0.002 and 0.0002 respectively. The estimated scarcity rents range from $0.98 to $1.17 per thousand gallons. The results indicate that the current prices do not fully account for the scarcity rents and suggest a rate within the range of $1.53 to $1.72 per thousand gallons.
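
    The suggested rate range is consistent with adding the estimated scarcity rent to the prevailing price; the current price used below is an assumption chosen only to illustrate that arithmetic, not a value taken from the study.

      # Illustrative arithmetic only: the current price is an assumed figure, chosen
      # so that price + scarcity rent reproduces the $1.53-$1.72 range quoted above.
      current_price = 0.55                                  # $/thousand gallons (assumption)
      scarcity_rent_low, scarcity_rent_high = 0.98, 1.17    # $/thousand gallons (from the study)

      print(round(current_price + scarcity_rent_low, 2),    # 1.53
            round(current_price + scarcity_rent_high, 2))   # 1.72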

  3. Dynamic Vehicle Scheduling for Working Service Network with Dual Demands

    Directory of Open Access Journals (Sweden)

    Bing Li

    2017-01-01

    Full Text Available This study aims to develop some models to aid in making decisions on the combined fleet size and vehicle assignment in a working service network where the demands include two types (minimum demands and maximum demands), and vehicles themselves can act like a facility to provide services when they are stationary at one location. This type of problem is named the dynamic working vehicle scheduling with dual demands (DWVS-DD) problem and formulated as a mixed integer program (MIP). Instead of a large integer program, the problem is decomposed into small local problems that are guided by preset control parameters. The approach for preset control parameters is given. By introducing them into the MIP formulation, the model is reformulated as a piecewise form. Further, a piecewise method by updating preset control parameters is proposed for solving the reformulated model. Numerical experiments show that the proposed method produces better solutions within reasonable computing time.
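
    The record does not include the authors' DWVS-DD formulation, but the sketch below shows the general shape of a small fleet-assignment MIP with minimum and maximum demands, written with the PuLP modelling library; the locations, demands and fleet size are invented for illustration.

      # Generic fleet-assignment MIP sketch (not the authors' DWVS-DD model).
      from pulp import LpProblem, LpMinimize, LpVariable, lpSum, value

      locations = ["A", "B", "C"]
      min_demand = {"A": 2, "B": 1, "C": 3}   # minimum vehicles required per location
      max_demand = {"A": 4, "B": 3, "C": 5}   # vehicles beyond this add no service value
      fleet_size = 8

      x = LpVariable.dicts("vehicles", locations, lowBound=0, cat="Integer")

      prob = LpProblem("working_vehicle_assignment", LpMinimize)
      prob += lpSum(x[l] for l in locations)                 # objective: use as few vehicles as possible
      for l in locations:
          prob += x[l] >= min_demand[l]                      # meet minimum demands
          prob += x[l] <= max_demand[l]                      # do not exceed maximum demands
      prob += lpSum(x[l] for l in locations) <= fleet_size   # fleet size limit

      prob.solve()
      print({l: int(value(x[l])) for l in locations})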

  4. Embedded computing technology for highly-demanding cyber-physical systems

    NARCIS (Netherlands)

    Jóźwiak, L.

    2015-01-01

    The recent spectacular progress in the microelectronic, information, communication, material and sensor technologies created a big stimulus towards development of much more sophisticated, coherent and fit to use, smart communicating cyber-physical systems (CPS). The huge and rapidly developing

  5. Interaction effects among multiple job demands: an examination of healthcare workers across different contexts.

    Science.gov (United States)

    Jimmieson, Nerina L; Tucker, Michelle K; Walsh, Alexandra J

    2017-05-01

    Simultaneous exposure to time, cognitive, and emotional demands is a feature of the work environment for healthcare workers, yet effects of these common stressors in combination are not well established. Survey data were collected from 125 hospital employees (Sample 1, Study 1), 93 ambulance service employees (Sample 2, Study 1), and 380 aged care/disability workers (Study 2). Hierarchical multiple regressions were conducted. In Sample 1, high cognitive demand exacerbated high emotional demand on psychological strain and job burnout, whereas the negative effect of high emotional demand was not present at low cognitive demand. In Sample 2, a similar pattern between emotional demand and time demand on stress-remedial intentions was observed. In Study 2, emotional demand × time demand and time demand × cognitive demand interactions again revealed that high levels of two demands were stress-exacerbating and low levels of one demand neutralized the other. A three-way interaction on job satisfaction showed the negative impact of emotional demand was exacerbated when both time and cognitive demands were high, creating a "triple disadvantage" of job demands. The results demonstrate that reducing some job demands helps attenuate the stressful effects of other job demands on different employee outcomes.

  6. A high performance scientific cloud computing environment for materials simulations

    OpenAIRE

    Jorissen, Kevin; Vila, Fernando D.; Rehr, John J.

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including...

  7. A Lightweight Distributed Framework for Computational Offloading in Mobile Cloud Computing

    Science.gov (United States)

    Shiraz, Muhammad; Gani, Abdullah; Ahmad, Raja Wasim; Adeel Ali Shah, Syed; Karim, Ahmad; Rahman, Zulkanain Abdul

    2014-01-01

    The latest developments in mobile computing technology have enabled intensive applications on the modern Smartphones. However, such applications are still constrained by limitations in processing potentials, storage capacity and battery lifetime of the Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resources limitations in SMDs. Currently, a number of computational offloading frameworks are proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resources utilization in computational offloading for MCC. The framework employs features of centralized monitoring, high availability and on demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing prototype application in the real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resources utilization and therefore offers lightweight solution for computational offloading in MCC. PMID:25127245
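
    The framework itself is not reproduced here, but the core trade-off behind any offloading decision can be illustrated with a simple rule of thumb: move a component to the cloud only when remote execution plus data transfer is expected to beat local execution. The sketch below is a generic illustration with invented numbers, not the authors' algorithm.

      # Back-of-the-envelope offloading criterion (illustrative values only).
      def should_offload(local_time_s, remote_time_s, data_mb, bandwidth_mbps):
          transfer_time_s = (data_mb * 8) / bandwidth_mbps   # time to ship the component's state
          return remote_time_s + transfer_time_s < local_time_s

      # Example: 12 s locally vs. 2 s in the cloud, 20 MB of state over a 10 Mbit/s link.
      print(should_offload(local_time_s=12.0, remote_time_s=2.0, data_mb=20, bandwidth_mbps=10))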

  8. A lightweight distributed framework for computational offloading in mobile cloud computing.

    Directory of Open Access Journals (Sweden)

    Muhammad Shiraz

    Full Text Available The latest developments in mobile computing technology have enabled intensive applications on the modern Smartphones. However, such applications are still constrained by limitations in processing potentials, storage capacity and battery lifetime of the Smart Mobile Devices (SMDs). Therefore, Mobile Cloud Computing (MCC) leverages the application processing services of computational clouds for mitigating resources limitations in SMDs. Currently, a number of computational offloading frameworks are proposed for MCC wherein the intensive components of the application are outsourced to computational clouds. Nevertheless, such frameworks focus on runtime partitioning of the application for computational offloading, which is time-consuming and resource-intensive. The resource-constrained nature of SMDs requires lightweight procedures for leveraging computational clouds. Therefore, this paper presents a lightweight framework which focuses on minimizing additional resources utilization in computational offloading for MCC. The framework employs features of centralized monitoring, high availability and on demand access services of computational clouds for computational offloading. As a result, the turnaround time and execution cost of the application are reduced. The framework is evaluated by testing prototype application in the real MCC environment. The lightweight nature of the proposed framework is validated by employing computational offloading for the proposed framework and the latest existing frameworks. Analysis shows that by employing the proposed framework for computational offloading, the size of data transmission is reduced by 91%, energy consumption cost is minimized by 81% and turnaround time of the application is decreased by 83.5% as compared to the existing offloading frameworks. Hence, the proposed framework minimizes additional resources utilization and therefore offers lightweight solution for computational offloading in MCC.

  9. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified data, are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both private and public sectors will require robust technologies to protect its computing infrastructure. The research outcomes from this project try to address all these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduced HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also perform testing using HA2lloc in a simulation environment and find that the approach is capable of preventing common memory vulnerabilities.

  10. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    Science.gov (United States)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and Data-intensive Science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that supports the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress so far to harmonise the underlying data collections for future interdisciplinary research across these large volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), a HPC class 3000 core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  11. Supply-demand controls the futures

    International Nuclear Information System (INIS)

    Brown, D.

    1991-01-01

    This paper briefly discusses the petroleum futures market and explains how futures operate. The purpose of the paper is to demonstrate that the oil futures market does not determine energy prices - it merely reflects the prices recorded through trades made in an open marketplace. A futures contract is an agreement between a buyer and a seller at a price that seems fair to both. High demand from buyers can push prices up; low demand or a willingness to sell pushes prices down. As a result, supply and demand control the futures exchange and not vice-versa. The paper goes on to explain some basic principles of the futures market, including the differences between hedging and speculating on prices, and marketing strategy

  12. Computational methods for high-energy source shielding

    International Nuclear Information System (INIS)

    Armstrong, T.W.; Cloth, P.; Filges, D.

    1983-01-01

    The computational methods for high-energy radiation transport related to shielding of the SNQ-spallation source are outlined. The basic approach is to couple radiation-transport computer codes which use Monte Carlo methods and discrete ordinates methods. A code system is suggested that incorporates state-of-the-art radiation-transport techniques. The stepwise verification of that system is briefly summarized. The complexity of the resulting code system suggests a more straightforward code specially tailored for thick shield calculations. A short guideline to future development of such a Monte Carlo code is given

  13. A multiresolution approach to iterative reconstruction algorithms in X-ray computed tomography.

    Science.gov (United States)

    De Witte, Yoni; Vlassenbroeck, Jelle; Van Hoorebeke, Luc

    2010-09-01

    In computed tomography, the application of iterative reconstruction methods in practical situations is impeded by their high computational demands. Especially in high resolution X-ray computed tomography, where reconstruction volumes contain a high number of volume elements (several giga voxels), this computational burden prevents their actual breakthrough. Besides the large amount of calculations, iterative algorithms require the entire volume to be kept in memory during reconstruction, which quickly becomes cumbersome for large data sets. To overcome this obstacle, we present a novel multiresolution reconstruction, which greatly reduces the required amount of memory without significantly affecting the reconstructed image quality. It is shown that, combined with an efficient implementation on a graphical processing unit, the multiresolution approach enables the application of iterative algorithms in the reconstruction of large volumes at an acceptable speed using only limited resources.
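
    The record does not give the algorithm itself, but its central idea, solving a cheap low-resolution problem first and using the result to warm-start the full-resolution iterations, can be sketched on a generic least-squares toy problem. The code below is only an illustration with random data; the actual method targets X-ray CT geometry and a GPU implementation.

      # Toy two-level "multiresolution" warm start for an iterative reconstruction-style solve.
      import numpy as np

      rng = np.random.default_rng(1)

      def landweber(A, b, x0, iters=200, step=None):
          """Simple iterative solver for min ||Ax - b||^2 (stand-in for SIRT-like updates)."""
          if step is None:
              step = 1.0 / np.linalg.norm(A, 2) ** 2
          x = x0.copy()
          for _ in range(iters):
              x -= step * A.T @ (A @ x - b)
          return x

      n_fine, n_coarse, m = 64, 32, 200
      A_fine = rng.normal(size=(m, n_fine))     # toy forward model
      x_true = rng.normal(size=n_fine)
      b = A_fine @ x_true                       # simulated measurements

      P = np.repeat(np.eye(n_coarse), 2, axis=0)        # prolongation: coarse -> fine (piecewise constant)
      A_coarse = A_fine @ P                             # coarse-resolution forward model
      y = landweber(A_coarse, b, np.zeros(n_coarse))    # cheap low-resolution solve
      x = landweber(A_fine, b, P @ y, iters=50)         # short fine-resolution solve, warm-started
      print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))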

  14. [Application of job demands-resources model in research on relationships between job satisfaction, job resources, individual resources and job demands].

    Science.gov (United States)

    Potocka, Adrianna; Waszkowska, Małgorzata

    2013-01-01

    The aim of this study was to explore the relationships between job demands, job resources, personal resources and job satisfaction and to assess the usefulness of the Job Demands-Resources (JD-R) model in the explanation of these phenomena. The research was based on a sample of 500 social workers. The "Psychosocial Factors" and "Job satisfaction" questionnaires were used to test the hypothesis. The results showed that job satisfaction increased with increasing job accessibility and personal resources (r = 0.44; r = 0.31; p job resources and job demands [F(1,474) = 4.004; F(1,474) = 4.166; p job satisfaction. Moreover, interactions between job demands and job resources [F(3,474) = 2.748; p job demands and personal resources [F(3,474) = 3.021; p job satisfaction. The post hoc tests showed that 1) in low job demands, but high job resources employees declared higher job satisfaction, than those who perceived them as medium (p = 0.0001) or low (p = 0.0157); 2) when the level of job demands was perceived as medium, employees with high personal resources declared significantly higher job satisfaction than those with low personal resources (p = 0.0001). The JD-R model can be used to investigate job satisfaction. Taking into account fundamental factors of this model, in organizational management there are possibilities of shaping job satisfaction among employees.

  15. Enabling High-Performance Computing as a Service

    KAUST Repository

    AbdelBaky, Moustafa; Parashar, Manish; Kim, Hyunjoo; Jordan, Kirk E.; Sachdeva, Vipin; Sexton, James; Jamjoom, Hani; Shae, Zon-Yin; Pencheva, Gergina; Tavakoli, Reza; Wheeler, Mary F.

    2012-01-01

    With the right software infrastructure, clouds can provide scientists with as-a-service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a

  16. Aggregated Demand Modelling Including Distributed Generation, Storage and Demand Response

    OpenAIRE

    Marzooghi, Hesamoddin; Hill, David J.; Verbic, Gregor

    2014-01-01

    It is anticipated that penetration of renewable energy sources (RESs) in power systems will increase further in the next decades mainly due to environmental issues. In the long term of several decades, which we refer to in terms of the future grid (FG), balancing between supply and demand will become dependent on demand actions including demand response (DR) and energy storage. So far, FG feasibility studies have not considered these new demand-side developments for modelling future demand. I...

  17. Short-term effects of implemented high intensity shoulder elevation during computer work

    DEFF Research Database (Denmark)

    Larsen, Mette K.; Samani, Afshin; Madeleine, Pascal

    2009-01-01

    BACKGROUND: Work-site strength training sessions are shown effective to prevent and reduce neck-shoulder pain in computer workers, but difficult to integrate in normal working routines. A solution for avoiding neck-shoulder pain during computer work may be to implement high intensity voluntary contractions during the computer work. However, it is unknown how this may influence productivity, rate of perceived exertion (RPE) as well as activity and rest of neck-shoulder muscles during computer work. The aim of this study was to investigate short-term effects of a high intensity contraction ... computer work to prevent neck-shoulder pain may be possible without affecting the working routines. However, the unexpected reduction in clavicular trapezius rest during a pause with preceding high intensity contraction requires further investigation before high intensity shoulder elevations can ...

  18. Program package for the computation of lenses and deflectors

    International Nuclear Information System (INIS)

    Lencova, B.; Wisselink, G.

    1990-01-01

    In this paper a set of computer programs for the design of electrostatic and magnetic electron lenses and for the design of multipoles for electron microscopy and lithography is described. The two-dimensional field computation is performed by the finite-element method. In order to meet the high demands on accuracy, the programs include the use of a variable step in the fine mesh made with an automeshing procedure, improved methods for coefficient evaluation, a fast solution procedure for the linear equations, and modified algorithms for computation of multipoles and electrostatic lenses. They allow for a fast and accurate computation of electron optical elements. For the input and modification of data, and for presentation of results, graphical menu driven programs written for personal computers are used. For the computation of electron optical properties axial fields are used. (orig.)

  19. Could High Mental Demands at Work Offset the Adverse Association Between Social Isolation and Cognitive Functioning? Results of the Population-Based LIFE-Adult-Study.

    Science.gov (United States)

    Rodriguez, Francisca S; Schroeter, Matthias L; Witte, A Veronica; Engel, Christoph; Löffler, Markus; Thiery, Joachim; Villringer, Arno; Luck, Tobias; Riedel-Heller, Steffi G

    2017-11-01

    The study investigated whether high mental demands at work, which have shown to promote a good cognitive functioning in old age, could offset the adverse association between social isolation and cognitive functioning. Based on data from the population-based LIFE-Adult-Study, the association between cognitive functioning (Verbal Fluency Test, Trail Making Test B) and social isolation (Lubben Social Network Scale) as well as mental demands at work (O*NET database) was analyzed via linear regression analyses adjusted for age, sex, education, and sampling weights. Cognitive functioning was significantly lower in socially isolated individuals and in individuals working in low mental demands jobs-even in old age after retirement and even after taking into account the educational level. An interaction effect suggested stronger effects of mental demands at work in socially isolated than nonisolated individuals. The findings suggest that working in high mental-demand jobs could offset the adverse association between social isolation and cognitive functioning. Further research should evaluate how interventions that target social isolation and enhance mentally demanding activities promote a good cognitive functioning in old age. Copyright © 2017 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.

  20. An inventory control project in a major Danish company using compound renewal demand models

    DEFF Research Database (Denmark)

    Larsen, Christian; Seiding, Claus Hoe; Teller, Christian

    We describe the development of a framework to compute the optimal inventory policy for a large spare-parts' distribution centre operation in the RA division of the Danfoss Group in Denmark. The RA division distributes spare parts worldwide for cooling and A/C systems. The warehouse logistics operation is highly automated. However, the procedures for estimating demands and the policies for the inventory control system that were in use at the beginning of the project did not fully match the sophisticated technological standard of the physical system. During the initial phase of the project ...
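
    The record names compound renewal demand models but does not spell out the computations; the following sketch, with invented parameters and no connection to the Danfoss data, shows one simple way a base-stock level can be sized by simulating compound (renewal) demand over the replenishment lead time.

      # Simulation sketch: size a base-stock level under compound renewal demand
      # (random customer interarrival times, random order sizes). Parameters are made up.
      import numpy as np

      rng = np.random.default_rng(42)

      def lead_time_demand(lead_time_days, mean_interarrival, mean_order_size, n_samples=20_000):
          """Simulate total demand arriving during one replenishment lead time."""
          totals = np.zeros(n_samples)
          for i in range(n_samples):
              t, total = 0.0, 0.0
              while True:
                  t += rng.exponential(mean_interarrival)    # renewal: time until next customer
                  if t > lead_time_days:
                      break
                  total += rng.poisson(mean_order_size)      # compound: random order size
              totals[i] = total
          return totals

      demand = lead_time_demand(lead_time_days=14, mean_interarrival=2.0, mean_order_size=3.0)
      base_stock = int(np.quantile(demand, 0.95))             # target a 95% cycle service level
      print("suggested base-stock level:", base_stock)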

  1. High-End Computing Challenges in Aerospace Design and Engineering

    Science.gov (United States)

    Bailey, F. Ronald

    2004-01-01

    High-End Computing (HEC) has had significant impact on aerospace design and engineering and is poised to make even more in the future. In this paper we describe four aerospace design and engineering challenges: Digital Flight, Launch Simulation, Rocket Fuel System and Digital Astronaut. The paper discusses modeling capabilities needed for each challenge and presents projections of future near and far-term HEC computing requirements. NASA's HEC Project Columbia is described and programming strategies presented that are necessary to achieve high real performance.

  2. Using FRED Data to Teach Price Elasticity of Demand

    Science.gov (United States)

    Méndez-Carbajo, Diego; Asarta, Carlos J.

    2017-01-01

    In this article, the authors discuss the use of Federal Reserve Economic Data (FRED) statistics to teach the concept of price elasticity of demand in an introduction to economics course. By using real data in its computation, they argue that instructors can create a value-adding context for illustrating and applying a foundational concept in…
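
    As a companion to the article's approach, the midpoint (arc) formula that such a lesson typically applies to two observations can be worked through in a few lines; the price and quantity figures below are invented for illustration, not actual FRED data.

      # Midpoint (arc) price elasticity of demand with illustrative numbers.
      def arc_elasticity(q1, q2, p1, p2):
          pct_dq = (q2 - q1) / ((q1 + q2) / 2)
          pct_dp = (p2 - p1) / ((p1 + p2) / 2)
          return pct_dq / pct_dp

      # Gasoline-style example: price rises from $3.00 to $3.60, quantity falls from 100 to 95.
      print(round(arc_elasticity(100, 95, 3.00, 3.60), 2))  # about -0.28, i.e. relatively inelastic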

  3. THE PRICE OF HIGHER EDUCATION AND INDIVIDUAL DEMAND

    Directory of Open Access Journals (Sweden)

    Filiz Golpek

    2012-01-01

    Full Text Available The rise in living standards in most of the world, together with population growth and rising schooling rates, has increased the demand for higher education. Because higher education is a semi-public good, who will provide it becomes a central question, and its production is governed by a supply and demand mechanism. The supply of higher education is mostly secured in accordance with the public demand as well as the resources available. In addition, the fact that higher education services produce significant benefits has led to excess demand. This reflects a simple economic rule: demand for a commodity or service that costs little or nothing will increase until its marginal benefit equals, or nearly equals, zero. However, the supply of and demand for education are difficult to characterize with the supply, demand and price-balance framework of standard economic theory. The high returns expected in the future are significant factors influencing individuals' investment decisions. The decision to invest depends on the possible future return, the cost of the investment, and current interest rates. Higher education undertaken as an investment is influenced by these three factors, and individuals demand more and more higher education in the expectation of high future returns. In theory, it is accepted that the basic factors determining the demand for higher education are consistent with empirical research results in several countries, including Turkey.

  4. Characterization of the elastic displacement demand: Case study - Sofia city

    International Nuclear Information System (INIS)

    Paskaleva, I.; Kouteva, M.; Vaccari, F.; Panza, G.F.

    2008-02-01

    The results of the study on the seismic site response in a part of metropolitan Sofia are discussed. The neo-deterministic seismic hazard assessment procedure has been used to compute realistic synthetic waveforms considering four earthquake scenarios, with magnitudes M = 3.7, M = 6.3 and M = 7.0. Source- and site-specific ground motion time histories are computed along three investigated cross sections, making use of the hybrid approach, combining the modal summation technique and the finite-difference scheme. Displacement and acceleration response spectra are considered. These results are validated against the design elastic displacement response spectra and displacement demand recommended in Eurocode 8. The elastic design response spectrum is converted from the standard pseudo-acceleration versus natural period (Sa - Tn) format to the Sa - Sd format. The elastic displacement response spectra and displacement demand are discussed with respect to the earthquake magnitude, the seismic source-to-site distance, seismic source mechanism, and the local geological site conditions. (author)
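
    The Sa - Sd conversion mentioned above follows from the single-degree-of-freedom relation Sd = Sa * (Tn / 2*pi)^2; the short sketch below applies it to an invented spectrum (the values are not those computed for Sofia).

      # Convert a pseudo-acceleration spectrum Sa(Tn) to spectral displacements Sd(Tn).
      import numpy as np

      Tn = np.array([0.2, 0.5, 1.0, 2.0])     # natural periods, s
      Sa = np.array([6.0, 5.0, 3.0, 1.5])     # pseudo-accelerations, m/s^2 (illustrative values)

      Sd = Sa * (Tn / (2 * np.pi)) ** 2       # spectral displacements, m
      for T, a, d in zip(Tn, Sa, Sd):
          print(f"Tn = {T:.1f} s  Sa = {a:.1f} m/s^2  Sd = {d * 100:.1f} cm")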

  5. Ethanol demand in Brazil: Regional approach

    International Nuclear Information System (INIS)

    Freitas, Luciano Charlita de; Kaneko, Shinji

    2011-01-01

    Successive studies attempting to clarify national aspects of ethanol demand have assisted policy makers and producers in defining strategies, but little information is available on the dynamic of regional ethanol markets. This study aims to analyze the characteristics of ethanol demand at the regional level taking into account the peculiarities of the developed center-south and the developing north-northeast regions. Regional ethanol demand is evaluated based on a set of market variables that include ethanol price, consumer's income, vehicle stock and prices of substitute fuels; i.e., gasoline and natural gas. A panel cointegration analysis with monthly observations from January 2003 to April 2010 is employed to estimate the long-run demand elasticity. The results reveal that the demand for ethanol in Brazil differs between regions. While in the center-south region the price elasticity for both ethanol and alternative fuels is high, consumption in the north-northeast is more sensitive to changes in the stock of the ethanol-powered fleet and income. These, among other evidences, suggest that the pattern of ethanol demand in the center-south region most closely resembles that in developed nations, while the pattern of demand in the north-northeast most closely resembles that in developing nations. - Research highlights: → Article consists of a first insight on regional demand for ethanol in Brazil. → It proposes a model with multiple fuels, i.e., hydrous ethanol, gasohol and natural gas. → Results evidence that figures for regional demand for ethanol differ amongst regions and with values reported for national demand. → Elasticities for the center-south keep similarities to patterns for fuel demand in developed nations while coefficients for the north-northeast are aligned to patterns on developing countries.

  6. Ethanol demand in Brazil: Regional approach

    Energy Technology Data Exchange (ETDEWEB)

    Freitas, Luciano Charlita de, E-mail: lucianofreitas@hiroshima-u.ac.j [Graduate School for International Development and Cooperation, Development Policy, Hiroshima University 1-5-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8529 (Japan); Kaneko, Shinji [Graduate School for International Development and Cooperation, Development Policy, Hiroshima University 1-5-1 Kagamiyama, Higashi-Hiroshima, Hiroshima 739-8529 (Japan)

    2011-05-15

    Successive studies attempting to clarify national aspects of ethanol demand have assisted policy makers and producers in defining strategies, but little information is available on the dynamic of regional ethanol markets. This study aims to analyze the characteristics of ethanol demand at the regional level taking into account the peculiarities of the developed center-south and the developing north-northeast regions. Regional ethanol demand is evaluated based on a set of market variables that include ethanol price, consumer's income, vehicle stock and prices of substitute fuels; i.e., gasoline and natural gas. A panel cointegration analysis with monthly observations from January 2003 to April 2010 is employed to estimate the long-run demand elasticity. The results reveal that the demand for ethanol in Brazil differs between regions. While in the center-south region the price elasticity for both ethanol and alternative fuels is high, consumption in the north-northeast is more sensitive to changes in the stock of the ethanol-powered fleet and income. These, among other evidences, suggest that the pattern of ethanol demand in the center-south region most closely resembles that in developed nations, while the pattern of demand in the north-northeast most closely resembles that in developing nations. - Research highlights: → Article consists of a first insight on regional demand for ethanol in Brazil. → It proposes a model with multiple fuels, i.e., hydrous ethanol, gasohol and natural gas. → Results evidence that figures for regional demand for ethanol differ amongst regions and with values reported for national demand. → Elasticities for the center-south keep similarities to patterns for fuel demand in developed nations while coefficients for the north-northeast are aligned to patterns on developing countries.

  7. Ontario demand response scenarios

    International Nuclear Information System (INIS)

    Rowlands, I.H.

    2005-09-01

    Strategies for demand management in Ontario were examined via 2 scenarios for a commercial/institutional building with a normal summertime peak load of 300 kW between 14:00 and 18:00 during a period of high electricity demand and high electricity prices. The first scenario involved the deployment of a 150 kW on-site generator fuelled by either diesel or natural gas. The second scenario involved curtailing load by 60 kW during the same periods. Costs and benefits of both scenarios were evaluated for 3 groups: consumers, system operators and society. Benefits included electricity cost savings, deferred transmission capacity development, lower system prices for electricity, as well as environmental changes, economic development, and a greater sense of corporate social responsibility. It was noted that while significant benefits were observed for all 3 groups, they were not substantial enough to encourage action, as the savings arising from deferred generation capacity development do not accrue to individual players. The largest potential benefit was identified as lower prices, spread across all users of electricity in Ontario. It was recommended that representative bodies cooperate so that the system-wide benefits can be reaped. It was noted that if 10 municipal utilities were able to have 250 commercial or institutional customers engaged in distributed response, then a total peak demand reduction of 375 MW could be achieved, representing more than 25 per cent of Ontario's target for energy conservation. It was concluded that demand response often involves the investment of capital and new on-site procedures, which may affect reactions to various incentives. 78 refs., 10 tabs., 5 figs
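
    The aggregate figure quoted at the end follows directly from the per-site numbers in the first scenario; the short check below reproduces the arithmetic (it assumes, as the record implies, 150 kW of peak reduction per participating customer).

      # Reproduce the 375 MW aggregate peak-reduction figure from the per-site numbers.
      utilities = 10
      customers_per_utility = 250
      reduction_per_customer_kw = 150   # on-site generation scenario from the record

      total_reduction_mw = utilities * customers_per_utility * reduction_per_customer_kw / 1000
      print(total_reduction_mw)  # 375.0 MW, the figure quoted in the record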

  8. Production practices affecting worker task demands in concrete operations: A case study.

    Science.gov (United States)

    Memarian, Babak; Mitropoulos, Panagiotis

    2015-01-01

    Construction work involves significant physical, mental, and temporal task demands. Excessive task demands can have negative consequences for safety, errors and production. This exploratory study investigates the magnitude and sources of task demands on a concrete operation, and examines the effect of the production practices on the workers' task demands. The NASA Task Load Index was used to measure the perceived task demands of two work crews. The operation involved the construction of a cast-in-place concrete building under high schedule pressures. Interviews with each crew member were used to identify the main sources of the perceived demands. Extensive field observations and interviews with the supervisors and crews identified the production practices. The workers perceived different level of task demands depending on their role. The production practices influenced the task demands in two ways: (1) practices related to work organization, task design, resource management, and crew management mitigated the task demands; and (2) other practices related to work planning and crew management increased the crew's ability to cope with and adapt to high task demands. The findings identify production practices that regulate the workers' task demands. The effect of task demands on performance is mitigated by the ability to cope with high demands.

  9. Enabling high performance computational science through combinatorial algorithms

    International Nuclear Information System (INIS)

    Boman, Erik G; Bozdag, Doruk; Catalyurek, Umit V; Devine, Karen D; Gebremedhin, Assefaw H; Hovland, Paul D; Pothen, Alex; Strout, Michelle Mills

    2007-01-01

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation

  10. Enabling high performance computational science through combinatorial algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Boman, Erik G [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Bozdag, Doruk [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Catalyurek, Umit V [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Devine, Karen D [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Gebremedhin, Assefaw H [Computer Science and Center for Computational Science, Old Dominion University (United States); Hovland, Paul D [Mathematics and Computer Science Division, Argonne National Laboratory (United States); Pothen, Alex [Computer Science and Center for Computational Science, Old Dominion University (United States); Strout, Michelle Mills [Computer Science, Colorado State University (United States)

    2007-07-15

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation.

  11. Experimental high energy physics and modern computer architectures

    International Nuclear Information System (INIS)

    Hoek, J.

    1988-06-01

    The paper examines how experimental High Energy Physics can use modern computer architectures efficiently. In this connection parallel and vector architectures are investigated, and the types available at the moment for general use are discussed. A separate section briefly describes some architectures that are either a combination of both, or exemplify other architectures. In an appendix some directions in which computing seems to be developing in the USA are mentioned. (author)

  12. High Performance Computing in Science and Engineering '08 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2009-01-01

    The discussions and plans on all scientific, advisory, and political levels to realize an even larger “European Supercomputer” in Germany, where the hardware costs alone will be hundreds of millions of Euro – much more than in the past – are getting closer to realization. As part of the strategy, the three national supercomputing centres HLRS (Stuttgart), NIC/JSC (Jülich) and LRZ (Munich) have formed the Gauss Centre for Supercomputing (GCS) as a new virtual organization enabled by an agreement between the Federal Ministry of Education and Research (BMBF) and the state ministries for research of Baden-Württemberg, Bayern, and Nordrhein-Westfalen. Already today, the GCS provides the most powerful high-performance computing infrastructure in Europe. Through GCS, HLRS participates in the European project PRACE (Partnership for Advanced Computing in Europe) and extends its reach to all European member countries. These activities align well with the activities of HLRS in the European HPC infrastructur...

  13. A computational study of high entropy alloys

    Science.gov (United States)

    Wang, Yang; Gao, Michael; Widom, Michael; Hawk, Jeff

    2013-03-01

    As a new class of advanced materials, high-entropy alloys (HEAs) exhibit a wide variety of excellent materials properties, including high strength, reasonable ductility with appreciable work-hardening, corrosion and oxidation resistance, wear resistance, and outstanding diffusion-barrier performance, especially at elevated and high temperatures. In this talk, we will explain our computational approach to the study of HEAs that employs the Korringa-Kohn-Rostoker coherent potential approximation (KKR-CPA) method. The KKR-CPA method uses Green's function technique within the framework of multiple scattering theory and is uniquely designed for the theoretical investigation of random alloys from the first principles. The application of the KKR-CPA method will be discussed as it pertains to the study of structural and mechanical properties of HEAs. In particular, computational results will be presented for AlxCoCrCuFeNi (x = 0, 0.3, 0.5, 0.8, 1.0, 1.3, 2.0, 2.8, and 3.0), and these results will be compared with experimental information from the literature.

  14. Micromagnetics on high-performance workstation and mobile computational platforms

    Science.gov (United States)

    Fu, S.; Chang, R.; Couture, S.; Menarini, M.; Escobar, M. A.; Kuteifan, M.; Lubarda, M.; Gabay, D.; Lomakin, V.

    2015-05-01

    The feasibility of using high-performance desktop and embedded mobile computational platforms for micromagnetics is presented, including multi-core Intel central processing units, Nvidia desktop graphics processing units, and the Nvidia Jetson TK1 platform. The FastMag finite-element-method-based micromagnetic simulator is used as a testbed, showing high efficiency on all the platforms. Optimization aspects of improving the performance of the mobile systems are discussed. The high performance, low cost, low power consumption, and rapid performance increase of the embedded mobile systems make them a promising candidate for micromagnetic simulations. Such architectures can be used as standalone systems or can be built as low-power computing clusters.

  15. Modelling Commodity Demands and Labour Supply with m-Demands

    OpenAIRE

    Browning, Martin

    1999-01-01

    In the empirical modelling of demands and labour supply we often lack data on a full set of goods. The usual response is to invoke separability assumptions. Here we present an alternative based on modelling demands as a function of prices and the quantity of a reference good rather than total expenditure. We term such demands m-demands. The advantage of this approach is that we make maximum use of the data to hand without invoking implausible separability assumptions. In the theory section qu...

  16. The Computer Industry. High Technology Industries: Profiles and Outlooks.

    Science.gov (United States)

    International Trade Administration (DOC), Washington, DC.

    A series of meetings was held to assess future problems in United States high technology, particularly in the fields of robotics, computers, semiconductors, and telecommunications. This report, which focuses on the computer industry, includes a profile of this industry and the papers presented by industry speakers during the meetings. The profile…

  17. New Challenges for Computing in High Energy Physics

    International Nuclear Information System (INIS)

    Santoro, Alberto

    2003-01-01

    In view of the new scientific programs established for the LHC (Large Hadron Collider) era, the way to face the technological challenges in computing was to develop a new concept of GRID computing. We show some examples and, in particular, a proposal for high energy physicists in countries like Brazil. Due to the large amount of data and the need for close collaboration, it will be impossible to work in research centers and universities very far from Fermilab or CERN unless a GRID architecture is built. An important effort is being made by the international community to update their computing infrastructure and networks.

  18. The active learning hypothesis of the job-demand-control model: an experimental examination.

    Science.gov (United States)

    Häusser, Jan Alexander; Schulz-Hardt, Stefan; Mojzisch, Andreas

    2014-01-01

    The active learning hypothesis of the job-demand-control model [Karasek, R. A. 1979. "Job Demands, Job Decision Latitude, and Mental Strain: Implications for Job Redesign." Administrative Science Quarterly 24: 285-307] proposes positive effects of high job demands and high job control on performance. We conducted a 2 (demands: high vs. low) × 2 (control: high vs. low) experimental office workplace simulation to examine this hypothesis. Since performance during a work simulation is confounded by the boundaries of the demands and control manipulations (e.g. time limits), we used a post-test, in which participants continued working at their task, but without any manipulation of demands and control. This post-test allowed for examining active learning (transfer) effects in an unconfounded fashion. Our results revealed that high demands had a positive effect on quantitative performance, without affecting task accuracy. In contrast, high control resulted in a speed-accuracy tradeoff, that is, participants in the high control conditions worked more slowly but with greater accuracy than participants in the low control conditions.

  19. Surprise responses in the human brain demonstrate statistical learning under high concurrent cognitive demand

    Science.gov (United States)

    Garrido, Marta Isabel; Teng, Chee Leong James; Taylor, Jeremy Alexander; Rowe, Elise Genevieve; Mattingley, Jason Brett

    2016-06-01

    The ability to learn about regularities in the environment and to make predictions about future events is fundamental for adaptive behaviour. We have previously shown that people can implicitly encode statistical regularities and detect violations therein, as reflected in neuronal responses to unpredictable events that carry a unique prediction error signature. In the real world, however, learning about regularities will often occur in the context of competing cognitive demands. Here we asked whether learning of statistical regularities is modulated by concurrent cognitive load. We compared electroencephalographic metrics associated with responses to pure-tone sounds with frequencies sampled from narrow or wide Gaussian distributions. We showed that outliers evoked a larger response than those in the centre of the stimulus distribution (i.e., an effect of surprise) and that this difference was greater for physically identical outliers in the narrow than in the broad distribution. These results demonstrate an early neurophysiological marker of the brain's ability to implicitly encode complex statistical structure in the environment. Moreover, we manipulated concurrent cognitive load by having participants perform a visual working memory task while listening to these streams of sounds. We again observed greater prediction error responses in the narrower distribution under both low and high cognitive load. Furthermore, there was no reliable reduction in prediction error magnitude under high relative to low cognitive load. Our findings suggest that statistical learning is not a capacity-limited process, and that it proceeds automatically even when cognitive resources are taxed by concurrent demands.
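
    The surprise effect reported above, where the same outlier tone is more unexpected under a narrow than under a wide distribution, can be captured by the negative log-likelihood of the tone under each Gaussian. A small illustrative calculation follows; the frequencies and standard deviations are made up, not the study's stimuli.

```python
from scipy.stats import norm

outlier_hz, mean_hz = 700.0, 500.0          # hypothetical tone frequencies
narrow, wide = 50.0, 150.0                  # standard deviations of the two distributions

for label, sigma in (("narrow", narrow), ("wide", wide)):
    # Surprise as negative log-likelihood of the outlier under the distribution.
    surprise = -norm.logpdf(outlier_hz, loc=mean_hz, scale=sigma)
    print(f"{label:6s} distribution: surprise = {surprise:.2f} nats")
```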

  20. Natural gas demand prospects in Korea

    Energy Technology Data Exchange (ETDEWEB)

    Kwon, Young-Jin [Korea Electric Power Corp. (KEPCO), Seoul (Korea, Republic of)

    1997-06-01

    Korea's natural gas demand has increased enormously since 1986 and will approach 29 million tonnes by the year 2010, from a little over 9 million tonnes in 1996. This rapid expansion of natural gas demand is largely due to regulations for environmental protection by the government as well as consumers' preference for natural gas over other sources of energy. In particular, industrial use of gas will expand faster than other uses of gas, although it will not be as high as in European and North American countries. To meet the enormous increase in demand, the Korean government and Korea Gas Corporation (KOGAS) are undertaking expansion of the capacities of natural gas supply facilities, and are seeking diversification of import sources, including participation in major gas projects, to secure the import sources on more reliable grounds. (Author). 5 tabs.

  1. Natural gas demand prospects in Korea

    International Nuclear Information System (INIS)

    Young-Jin Kwon

    1997-01-01

    Korea's natural gas demand has increased enormously since 1986 and will approach 29 million tonnes by the year 2010, from a little over 9 million tonnes in 1996. This rapid expansion of natural gas demand is largely due to regulations for environmental protection by the government as well as consumers' preference for natural gas over other sources of energy. In particular, industrial use of gas will expand faster than other uses of gas, although it will not be as high as in European and North American countries. To meet the enormous increase in demand, the Korean government and Korea Gas Corporation (KOGAS) are undertaking expansion of the capacities of natural gas supply facilities, and are seeking diversification of import sources, including participation in major gas projects, to secure the import sources on more reliable grounds. (Author). 5 tabs.

  2. Probabilistic Quantification of Potentially Flexible Residential Demand

    DEFF Research Database (Denmark)

    Kouzelis, Konstantinos; Mendaza, Iker Diaz de Cerio; Bak-Jensen, Birgitte

    2014-01-01

    The balancing of power systems with high penetration of renewable energy is a serious challenge to be faced in the near future. One of the possible solutions, recently capturing a lot of attention, is demand response. Demand response can only be achieved by power consumers holding loads which allow them to modify their normal power consumption pattern, namely flexible consumers. However, flexibility, despite being constantly mentioned, is usually not properly defined and even more rarely quantified. This manuscript introduces a methodology to identify and quantify potentially flexible demand...

  3. The Need for Optical Means as an Alternative for Electronic Computing

    Science.gov (United States)

    Adbeldayem, Hossin; Frazier, Donald; Witherow, William; Paley, Steve; Penn, Benjamin; Bank, Curtis; Whitaker, Ann F. (Technical Monitor)

    2001-01-01

    The demand for faster computers is growing rapidly to keep pace with the rapid growth of the Internet, space communication, and the robotics industry. Unfortunately, Very Large Scale Integration technology is approaching its fundamental limits, beyond which devices will become unreliable. Optical interconnections and optical integrated circuits are strongly believed to provide a way out of the extreme limitations that conventional electronics imposes on the growth of speed and complexity of today's computations. This paper demonstrates two ultra-fast, all-optical logic gates and a high-density storage medium, which are essential components in building a future optical computer.

  4. Enabling High-Performance Computing as a Service

    KAUST Repository

    AbdelBaky, Moustafa

    2012-10-01

    With the right software infrastructure, clouds can provide scientists with as a service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a representative HPC application. © 2012 IEEE.

  5. Interpersonal interactions, job demands and work-related outcomes in pharmacy.

    Science.gov (United States)

    Gaither, Caroline A; Nadkarni, Anagha

    2012-04-01

    Objectives  The objective of this study was to examine the interaction between the job demands of pharmacists and resources in the form of interpersonal interactions, and its association with work-related outcomes such as organizational and professional commitment, job burnout, professional identity and job satisfaction. The job demands-resources (JD-R) model served as the theoretical framework. Methods  Subjects for the study were drawn from the Pharmacy Manpower Project Database (n = 1874). A 14-page mail-in survey measured hospital pharmacists' responses on the frequency of occurrence of various job-related scenarios as well as work-related outcomes. The study design was a 2 × 2 factorial design. Responses were collected on a Likert scale. Descriptive statistics, reliability analyses and correlational and multiple regression analyses were conducted using SPSS version 17 (SPSS, Chicago, IL, USA). Key findings  The 566 pharmacists (30% response rate) who responded to the survey indicated that high-demand/pleasant encounters and low-demand/pleasant encounters occurred more frequently in the workplace. The strongest correlations were found between high-demand/unpleasant encounters and frequency and intensity of emotional exhaustion. Multiple regression analyses indicated that, when controlling for demographic factors, high-demand/unpleasant encounters were negatively related to affective organizational commitment and positively related to frequency and intensity of emotional exhaustion. Low-demand/pleasant encounters were positively related to frequency and intensity of personal accomplishment. Low-demand/unpleasant encounters were significantly and negatively related to professional commitment, job satisfaction and frequency and intensity of emotional exhaustion, while high-demand/pleasant encounters were also related to frequency and intensity of emotional exhaustion. Conclusion  Support was found for the JD-R model and the proposed interaction effects.

  6. Employer Demand for Welfare Recipients by Race. Discussion Paper.

    Science.gov (United States)

    Holzer, Harry J.; Stoll, Michael A.

    This paper uses new survey data on employers in four large metropolitan areas to examine the determinants of employer demand for welfare recipients. Data come from a telephone survey of approximately 750 establishments. Results suggest a high level of demand for welfare recipients, although such demand appears fairly sensitive to business cycle…

  7. The impact of predicted demand on energy production

    Science.gov (United States)

    El kafazi, I.; Bannari, R.; Aboutafail, My. O.

    2018-05-01

    Energy is crucial for human life; a secure and accessible supply of power is essential for the sustainability of societies. Economic development and demographic growth increase energy demand, prompting countries to conduct research and studies on energy demand and production. Moreover, increasing energy demand in the future requires a correct determination of the amount of energy supplied. Our article studies the impact of demand on energy production, to find the relationship between the two and to manage production properly across the different energy sources. Historical data on demand and energy production since 2000 are used. The data are processed with a regression model to study the impact of demand on production. The obtained results indicate that demand has a positive and significant impact on production (high impact). Production is also increasing, but at a slower pace. In this work, Morocco is considered as a case study.
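
    The study regresses annual energy production on demand; a minimal sketch of such a fit is shown below. The variable names and illustrative figures are hypothetical placeholders, not the Moroccan data used in the paper.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical annual series (arbitrary units); the study uses data since 2000.
demand     = np.array([21.0, 22.4, 23.9, 25.6, 27.1, 29.0])
production = np.array([20.5, 21.6, 22.8, 24.1, 25.2, 26.6])

X = sm.add_constant(demand)           # production = b0 + b1 * demand
model = sm.OLS(production, X).fit()

print(model.params)                   # intercept and slope (impact of demand)
print(model.pvalues)                  # significance of the demand effect
print(model.rsquared)                 # goodness of fit
```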

  8. Future demand scenarios of Bangladesh power sector

    International Nuclear Information System (INIS)

    Mondal, Md. Alam Hossain; Boie, Wulf; Denich, Manfred

    2010-01-01

    Data on future electricity demand are an essential requirement for planning the expansion of a power system. The purpose of this study is to provide a general overview of electricity consumption in Bangladesh, forecast sector-wise electricity demand up to 2035 considering the base year 2005, and compare the results with official projections. The Long-range Energy Alternatives Planning (LEAP) model with three scenarios, namely low gross domestic product (GDP) growth, average GDP growth and high GDP growth, is applied in this study. From the low to the high GDP growth scenario, the extent of industrial restructuring and technical advancement is gradually increased. The findings have significant implications with respect to energy conservation and economic development. The study also compares the projected per capita electricity consumption in Bangladesh with the historical growth in several other developing countries. Such an evaluation can create awareness among the planners of power system expansion in Bangladesh to meet the high future demand.

  9. Coping with unexpected oil demand movements

    International Nuclear Information System (INIS)

    Anon.

    2004-01-01

    Continuous upward revisions to world oil demand projections for 2003 and 2004 are compared with the downward revisions that took place in 1998 and 1999, following the 1997 Asian economic crisis. Demand leads supply, in the current case, resulting in a time-lag in the whole supply chain, while supply led demand half a decade ago, with the OECD's commercial stocks reaching record highs. Recent months have seen a reversal of the longstanding inverse relationship between the United States of America's commercial crude oil stock levels and crude prices, and they are now moving in parallel. The fact that the US market is now adequately or even well supplied means that factors other than inventory levels are causing the present high prices. These factors are briefly outlined. OPEC is doing everything it can to maintain market stability, with prices at levels acceptable to producers and consumers. The agreement reached in Beirut on 3 June is the latest example of this. (Author)

  10. Secure Cloud Computing Implementation Study For Singapore Military Operations

    Science.gov (United States)

    2016-09-01

    (Figure 7: Basic Military Cloud Features Integrated into the OODA Loop.) "...demand via the network" to cloud users [2]. International Business Machines (IBM) defines it as "the delivery of on-demand computing resources..." According to Statista [6], the public cloud computing market has shown continuous revenue growth in cloud services.

  11. Demand uncertainty and investment in the restaurant industry

    OpenAIRE

    Sohn, Jayoung

    2016-01-01

    Since the collapse of the housing market, the prolonged economic uncertainty lingering in the U.S. economy has dampened restaurant performance. Economic uncertainty affects consumer sentiment and spending, turning into demand uncertainty. Nevertheless, the highly competitive nature of the restaurant industry does not allow much room for restaurants to actively control prices, leaving most food service firms exposed to demand uncertainty. To investigate the impact of demand uncertainty in the ...

  12. High-Performance Java Codes for Computational Fluid Dynamics

    Science.gov (United States)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  13. A multithreaded and GPU-optimized compact finite difference algorithm for turbulent mixing at high Schmidt number using petascale computing

    Science.gov (United States)

    Clay, M. P.; Yeung, P. K.; Buaria, D.; Gotoh, T.

    2017-11-01

    Turbulent mixing at high Schmidt number is a multiscale problem which places demanding requirements on direct numerical simulations to resolve fluctuations down to the Batchelor scale. We use a dual-grid, dual-scheme and dual-communicator approach where velocity and scalar fields are computed by separate groups of parallel processes, the latter using a combined compact finite difference (CCD) scheme on a finer grid with a static 3-D domain decomposition free of the communication overhead of memory transposes. A high degree of scalability is achieved for an 8192^3 scalar field at Schmidt number 512 in turbulence with a modest inertial range, by overlapping communication with computation whenever possible. On the Cray XE6 partition of Blue Waters, use of a dedicated thread for communication combined with OpenMP locks and nested parallelism reduces CCD timings by 34% compared to an MPI baseline. The code has been further optimized for the 27-petaflops Cray XK7 machine Titan using GPUs as accelerators with the latest OpenMP 4.5 directives, giving 2.7X speedup compared to CPU-only execution at the largest problem size. Supported by NSF Grant ACI-1036170, the NCSA Blue Waters Project with subaward via UIUC, and a DOE INCITE allocation at ORNL.
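
    The overlap of communication with computation described above can be illustrated with a small, language-agnostic sketch; Python threading stands in for the dedicated communication thread, and the halo-exchange and stencil functions are placeholders, not the CCD scheme or MPI code used in the paper.

```python
import threading
import numpy as np

def exchange_halos(field, halo):
    # Placeholder for the halo exchange handled by a dedicated communication thread.
    halo["left"]  = field[:, :1].copy()
    halo["right"] = field[:, -1:].copy()

def interior_update(field):
    # Placeholder compute kernel: update points that need no halo data.
    field[1:-1, 1:-1] *= 0.5

field = np.random.rand(64, 64)
halo = {}

# Start the communication "thread" and overlap it with interior work.
comm = threading.Thread(target=exchange_halos, args=(field, halo))
comm.start()
interior_update(field)          # computation proceeds while halos are in flight
comm.join()                     # wait for halos before touching boundary points

field[:, 0]  += halo["left"][:, 0]    # boundary update once halo data has arrived
field[:, -1] += halo["right"][:, 0]
print(field.shape)
```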

  14. Ethanol Demand in United States Gasoline Production

    Energy Technology Data Exchange (ETDEWEB)

    Hadder, G.R.

    1998-11-24

    The Oak Ridge National Laboratory (ORNL) Refinery Yield Model (RYM) has been used to estimate the demand for ethanol in U.S. gasoline production in the year 2010. Study cases examine ethanol demand with variations in world oil price, cost of competing oxygenate, ethanol value, and gasoline specifications. For combined regions outside California, summer ethanol demand is dominated by conventional gasoline (CG) because the premised share of reformulated gasoline (RFG) production is relatively low and because CG offers greater flexibility for blending high-vapor-pressure components like ethanol. Vapor pressure advantages disappear for winter CG, but total ethanol used in winter RFG remains low because of the low RFG production share. In California, relatively less ethanol is used in CG because the RFG production share is very high. During the winter in California, there is a significant increase in the use of ethanol in RFG, as ethanol displaces lower-vapor-pressure ethers. Estimated U.S. ethanol demand is a function of the refiner value of ethanol. For example, ethanol demand for reference conditions in year 2010 is 2 billion gallons per year (BGY) at a refiner value of $1.00 per gallon (1996 dollars), and 9 BGY at a refiner value of $0.60 per gallon. Ethanol demand could be increased with higher oil prices, or by changes in gasoline specifications for oxygen content, sulfur content, emissions of volatile organic compounds (VOCs), and octane numbers.

  15. High-integrity software, computation and the scientific method

    International Nuclear Information System (INIS)

    Hatton, L.

    2012-01-01

    Computation rightly occupies a central role in modern science. Datasets are enormous and the processing implications of some algorithms are equally staggering. With the continuing difficulties in quantifying the results of complex computations, it is of increasing importance to understand its role in the essentially Popperian scientific method. In this paper, some of the problems with computation, for example the long-term unquantifiable presence of undiscovered defects, problems with programming languages and process issues, will be explored with numerous examples. One of the aims of the paper is to understand the implications of trying to produce high-integrity software and the limitations which still exist. Unfortunately, Computer Science itself suffers from an inability to be suitably critical of its practices and has operated in a largely measurement-free vacuum since its earliest days. Within computer science itself, this has not been so damaging in that it simply leads to unconstrained creativity and a rapid turnover of new technologies. In the applied sciences, however, which have to depend on computational results, such unquantifiability significantly undermines trust. It is time this particular demon was put to rest. (author)

  16. PREDICTING DEMAND FOR COTTON YARNS

    Directory of Open Access Journals (Sweden)

    SALAS-MOLINA Francisco

    2017-05-01

    Full Text Available Predicting demand for fashion products is crucial for textile manufacturers. In an attempt to both avoid out-of-stocks and minimize holding costs, different forecasting techniques are used by production managers. Both linear and non-linear time-series analysis techniques are suitable options for forecasting purposes. However, demand for fashion products presents a number of particular characteristics such as short life-cycles, short selling seasons, high impulse purchasing, high volatility, low predictability, tremendous product variety and a high number of stock-keeping-units. In this paper, we focus on predicting demand for cotton yarns using a non-linear forecasting technique that has been fruitfully used in many areas, namely, random forests. To this end, we first identify a number of explanatory variables to be used as a key input to forecasting using random forests. We consider explanatory variables usually labeled either as causal variables, when some correlation is expected between them and the forecasted variable, or as time-series features, when extracted from time-related attributes such as seasonality. Next, we evaluate the predictive power of each variable by means of out-of-sample accuracy measurement. We experiment on a real data set from a textile company in Spain. The numerical results show that simple time-series features present more predictive ability than other more sophisticated explanatory variables.
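
    As a rough illustration of the approach described above, the sketch below fits a random forest to a demand series using a mix of causal variables and time-series features and evaluates it out of sample. The feature names and data are hypothetical placeholders, not the textile company's data set.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
n = 120  # hypothetical monthly observations

df = pd.DataFrame({
    "month":       np.tile(np.arange(1, 13), n // 12),   # time-series feature (seasonality)
    "price_index": rng.normal(100, 5, n),                 # causal variable
    "orders_lag1": rng.poisson(200, n),                   # lagged demand
})
demand = (150 + 10 * np.sin(2 * np.pi * df["month"] / 12)
          + 0.5 * df["orders_lag1"] + rng.normal(0, 5, n)).to_numpy()

# Out-of-sample evaluation: train on the first 100 months, test on the rest.
train, test = df.iloc[:100], df.iloc[100:]
y_train, y_test = demand[:100], demand[100:]

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(test)))
print(dict(zip(df.columns, model.feature_importances_)))  # predictive power of each variable
```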

  17. Neck pain and postural balance among workers with high postural demands - a cross-sectional study

    DEFF Research Database (Denmark)

    Jørgensen, Marie B.; Skotte, Jørgen H.; Holtermann, Andreas

    2011-01-01

    Neck pain is related to impaired postural balance among patients and is highly prevalent among workers with high postural demands, for example, cleaners. We therefore hypothesised that cleaners with neck pain suffer from postural dysfunction. This cross-sectional study tested if cleaners with neck pain have an impaired postural balance compared with cleaners without neck pain. Postural balance of 194 cleaners with (n = 85) and without (n = 109) neck pain was studied using three different tests. Success or failure to maintain the standing position for 30 s in unilateral stance was recorded ... compared with cleaners without neck/low back pain ... postural balance, measured as CEA ...

  18. Demand as Frequency Controlled Reserve

    DEFF Research Database (Denmark)

    Xu, Zhao; Østergaard, Jacob; Togeby, Mikael

    2011-01-01

    Relying on the generation side alone is deemed insufficient to fulfill the system balancing needs of the future Danish power system, where a 50% wind penetration is outlined by the government for year 2025. This paper investigates using the electricity demand as frequency controlled reserve (DFR) as a new balancing measure, which has a high potential and can provide many advantages. Firstly, the background of the research is reviewed, including conventional power system reserves and the electricity demand side potentials. Subsequently, the control logics and corresponding design considerations for the DFR...

  19. Computation of the intensities of parametric holographic scattering patterns in photorefractive crystals.

    Science.gov (United States)

    Schwalenberg, Simon

    2005-06-01

    The present work represents a first attempt to perform computations of output intensity distributions for different parametric holographic scattering patterns. Based on the model for parametric four-wave mixing processes in photorefractive crystals and taking into account realistic material properties, we present computed images of selected scattering patterns. We compare these calculated light distributions to the corresponding experimental observations. Our analysis is especially devoted to dark scattering patterns as they make high demands on the underlying model.

  20. Demand modelling for central heating systems

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    Most researchers in the field of heat demand estimation have focussed on explaining the load for a given plant based on rather few measurements. This approach was simply the only one adaptable to the very limited data material and limited computer power. This way of dealing with the subject is here called the top-down approach, because one tries to explain the load from the overall data. The results of such efforts are discussed in the report, providing inspiration for the present work. The significance of the findings for the causes of given heat loads is also discussed and summarised. Contrary to the top-down approach applied in the literature, a bottom-up approach is applied in this work, describing the causes of a given partial load in detail and combining them to explain the total load for the system. Three partial load 'components' are discussed: 1) space heating, 2) hot-water consumption, and 3) heat losses in pipe networks. The report aims to give an introduction to these subjects while also collecting the previous work done by the author. Space heating is briefly discussed and loads are generated by an advanced simulation model. A hot-water consumption model is presented, and heat loads generated by this model are utilised in the overall work. Heat loads due to heat losses in district heating networks are given a high priority in the current work; hence a detailed presentation and overview of the subject is given for solar heating experts not normally dealing with district heating. Based on the partial loads generated by the above-mentioned method, an overall load model is built in the computer simulation environment TRNSYS. The final tool is then employed for the generation of heat demand time series representing a district heating area. The results are compared to alternative methods for the generation of heat demand profiles. Results from this comparison are presented. Computerised modelling of systems...
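
    A minimal sketch of the bottom-up idea follows, aggregating the three partial loads into a total district heating demand profile. The hourly load profiles below are synthetic placeholders, not output of the TRNSYS model described in the report.

```python
import numpy as np

hours = np.arange(24)

# Hypothetical hourly partial loads for a small district heating area [kW].
space_heating = 800 + 300 * np.cos(2 * np.pi * (hours - 3) / 24)      # weather-driven load
hot_water = (50 + 200 * np.exp(-((hours - 7) ** 2) / 4)
                + 150 * np.exp(-((hours - 19) ** 2) / 6))              # morning/evening peaks
network_loss = np.full(24, 120.0)                                      # roughly constant pipe losses

total_demand = space_heating + hot_water + network_loss                # bottom-up total load

for h, q in zip(hours, total_demand):
    print(f"{h:02d}:00  {q:7.1f} kW")
```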

  1. Distributed quantum computing with single photon sources

    International Nuclear Information System (INIS)

    Beige, A.; Kwek, L.C.

    2005-01-01

    Full text: Distributed quantum computing requires the ability to perform nonlocal gate operations between the distant nodes (stationary qubits) of a large network. To achieve this, it has been proposed to interconvert stationary qubits with flying qubits. In contrast to this, we show that distributed quantum computing only requires the ability to encode stationary qubits into flying qubits but not the conversion of flying qubits into stationary qubits. We describe a scheme for the realization of an eventually deterministic controlled phase gate by performing measurements on pairs of flying qubits. Our scheme could be implemented with a linear optics quantum computing setup including sources for the generation of single photons on demand, linear optics elements and photon detectors. In the presence of photon loss and finite detector efficiencies, the scheme could be used to build large cluster states for one way quantum computing with a high fidelity. (author)

  2. Modelling of demand response and market power

    International Nuclear Information System (INIS)

    Kristoffersen, B.B.; Donslund, B.; Boerre Eriksen, P.

    2004-01-01

    Demand-side flexibility and demand response to high prices are prerequisites for the proper functioning of the Nordic power market. If consumers are unwilling to respond to high prices, the market may fail to clear, and this may result in unwanted forced demand disconnections. Being the TSO of Western Denmark, Eltra is responsible for both security of supply and the design of the power market within its area. On this basis, Eltra has developed a new mathematical model tool for analysing the Nordic wholesale market, named MARS (MARket Simulation). The model is able to handle hydropower, thermal production, nuclear power and wind power. Production, demand and exchanges modelled on an hourly basis are new important features of the model. The model uses the same principles as Nord Pool (the Nordic Power Exchange), including the division of the Nordic countries into price areas. On the demand side, price elasticity is taken into account and described by a Cobb-Douglas function. Apart from simulating perfect competition markets, particular attention has been given to modelling imperfect market conditions, i.e. the exercise of market power on the supply side. Market power is simulated using game theory, including the Nash equilibrium concept. The paper gives a short description of the MARS model. Besides, focus is on the application of the model in order to illustrate the importance of demand response in the Nordic market. Simulations with different values of demand elasticity are compared. Calculations are carried out for perfect competition and for the situation in which market power is exercised by the large power producers in the Nordic countries (oligopoly). (au)
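
    A Cobb-Douglas demand description of the kind mentioned above implies a constant price elasticity; a small sketch of how such a demand curve responds to price is shown below. The reference values and the elasticity of -0.05 are illustrative assumptions, not parameters of the MARS model.

```python
def cobb_douglas_demand(price, ref_price, ref_demand, elasticity=-0.05):
    """Constant-elasticity (Cobb-Douglas style) demand: D = D0 * (p / p0) ** epsilon."""
    return ref_demand * (price / ref_price) ** elasticity

ref_price, ref_demand = 30.0, 1000.0   # EUR/MWh and MWh/h (hypothetical reference point)
for price in (30.0, 60.0, 120.0, 300.0):
    d = cobb_douglas_demand(price, ref_price, ref_demand)
    print(f"price {price:6.1f} EUR/MWh -> demand {d:7.1f} MWh/h")
```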

  3. HIGH-PERFORMANCE COMPUTING FOR THE STUDY OF EARTH AND ENVIRONMENTAL SCIENCE MATERIALS USING SYNCHROTRON X-RAY COMPUTED MICROTOMOGRAPHY

    International Nuclear Information System (INIS)

    FENG, H.; JONES, K.W.; MCGUIGAN, M.; SMITH, G.J.; SPILETIC, J.

    2001-01-01

    Synchrotron x-ray computed microtomography (CMT) is a non-destructive method for examination of rock, soil, and other types of samples studied in the earth and environmental sciences. The high x-ray intensities of the synchrotron source make possible the acquisition of tomographic volumes at a high rate that requires the application of high-performance computing techniques for data reconstruction to produce the three-dimensional volumes, for their visualization, and for data analysis. These problems are exacerbated by the need to share information between collaborators at widely separated locations over both local and wide-area networks. A summary of the CMT technique and examples of applications are given here together with a discussion of the applications of high-performance computing methods to improve the experimental techniques and analysis of the data.

  4. HIGH-PERFORMANCE COMPUTING FOR THE STUDY OF EARTH AND ENVIRONMENTAL SCIENCE MATERIALS USING SYNCHROTRON X-RAY COMPUTED MICROTOMOGRAPHY.

    Energy Technology Data Exchange (ETDEWEB)

    FENG,H.; JONES,K.W.; MCGUIGAN,M.; SMITH,G.J.; SPILETIC,J.

    2001-10-12

    Synchrotron x-ray computed microtomography (CMT) is a non-destructive method for examination of rock, soil, and other types of samples studied in the earth and environmental sciences. The high x-ray intensities of the synchrotron source make possible the acquisition of tomographic volumes at a high rate that requires the application of high-performance computing techniques for data reconstruction to produce the three-dimensional volumes, for their visualization, and for data analysis. These problems are exacerbated by the need to share information between collaborators at widely separated locations over both local and wide-area networks. A summary of the CMT technique and examples of applications are given here together with a discussion of the applications of high-performance computing methods to improve the experimental techniques and analysis of the data.

  5. Computational Lipidomics and Lipid Bioinformatics: Filling In the Blanks.

    Science.gov (United States)

    Pauling, Josch; Klipp, Edda

    2016-12-22

    Lipids are highly diverse metabolites of pronounced importance in health and disease. While metabolomics is a broad field under the omics umbrella that may also relate to lipids, lipidomics is an emerging field which specializes in the identification, quantification and functional interpretation of complex lipidomes. Today, it is possible to identify and distinguish lipids in a high-resolution, high-throughput manner and simultaneously with a lot of structural detail. However, doing so may produce thousands of mass spectra in a single experiment which has created a high demand for specialized computational support to analyze these spectral libraries. The computational biology and bioinformatics community has so far established methodology in genomics, transcriptomics and proteomics but there are many (combinatorial) challenges when it comes to structural diversity of lipids and their identification, quantification and interpretation. This review gives an overview and outlook on lipidomics research and illustrates ongoing computational and bioinformatics efforts. These efforts are important and necessary steps to advance the lipidomics field alongside analytic, biochemistry, biomedical and biology communities and to close the gap in available computational methodology between lipidomics and other omics sub-branches.

  6. CloVR: A virtual machine for automated and portable sequence analysis from the desktop using cloud computing

    Science.gov (United States)

    2011-01-01

    Background Next-generation sequencing technologies have decentralized sequence acquisition, increasing the demand for new bioinformatics tools that are easy to use, portable across multiple platforms, and scalable for high-throughput applications. Cloud computing platforms provide on-demand access to computing infrastructure over the Internet and can be used in combination with custom-built virtual machines to distribute pre-packaged, pre-configured software. Results We describe the Cloud Virtual Resource, CloVR, a new desktop application for push-button automated sequence analysis that can utilize cloud computing resources. CloVR is implemented as a single portable virtual machine (VM) that provides several automated analysis pipelines for microbial genomics, including 16S, whole genome and metagenome sequence analysis. The CloVR VM runs on a personal computer, utilizes local computer resources and requires minimal installation, addressing key challenges in deploying bioinformatics workflows. In addition, CloVR supports the use of remote cloud computing resources to improve performance for large-scale sequence processing. In a case study, we demonstrate the use of CloVR to automatically process next-generation sequencing data on multiple cloud computing platforms. Conclusion The CloVR VM and associated architecture lower the barrier of entry for utilizing complex analysis protocols on both local single- and multi-core computers and cloud systems for high-throughput data processing. PMID:21878105

  7. Architecture design of reconfigurable accelerators for demanding applications.

    NARCIS (Netherlands)

    Jozwiak, L.; Jan, Y.

    2010-01-01

    This paper focuses on mastering the architecture development of reconfigurable hardware accelerators for highly demanding applications. It presents the results of our analysis of the main issues that have to be addressed when designing accelerators for demanding applications, when using as an

  8. 5th International Conference on High Performance Scientific Computing

    CERN Document Server

    Hoang, Xuan; Rannacher, Rolf; Schlöder, Johannes

    2014-01-01

    This proceedings volume gathers a selection of papers presented at the Fifth International Conference on High Performance Scientific Computing, which took place in Hanoi on March 5-9, 2012. The conference was organized by the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) of Heidelberg University, Ho Chi Minh City University of Technology, and the Vietnam Institute for Advanced Study in Mathematics. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and practical applications. Subjects covered include mathematical modeling; numerical simulation; methods for optimization and control; parallel computing; software development; and applications of scientific computing in physics, mechanics and biomechanics, material science, hydrology, chemistry, biology, biotechnology, medicine, sports, psychology, transport, logistics, com...

  9. 3rd International Conference on High Performance Scientific Computing

    CERN Document Server

    Kostina, Ekaterina; Phu, Hoang; Rannacher, Rolf

    2008-01-01

    This proceedings volume contains a selection of papers presented at the Third International Conference on High Performance Scientific Computing held at the Hanoi Institute of Mathematics, Vietnamese Academy of Science and Technology (VAST), March 6-10, 2006. The conference has been organized by the Hanoi Institute of Mathematics, Interdisciplinary Center for Scientific Computing (IWR), Heidelberg, and its International PhD Program ``Complex Processes: Modeling, Simulation and Optimization'', and Ho Chi Minh City University of Technology. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and applications in practice. Subjects covered are mathematical modelling, numerical simulation, methods for optimization and control, parallel computing, software development, applications of scientific computing in physics, chemistry, biology and mechanics, environmental and hydrology problems, transport, logistics and site loca...

  10. 6th International Conference on High Performance Scientific Computing

    CERN Document Server

    Phu, Hoang; Rannacher, Rolf; Schlöder, Johannes

    2017-01-01

    This proceedings volume highlights a selection of papers presented at the Sixth International Conference on High Performance Scientific Computing, which took place in Hanoi, Vietnam on March 16-20, 2015. The conference was jointly organized by the Heidelberg Institute of Theoretical Studies (HITS), the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) at Heidelberg University, and the Vietnam Institute for Advanced Study in Mathematics, Ministry of Education. The contributions cover a broad, interdisciplinary spectrum of scientific computing and showcase recent advances in theory, methods, and practical applications. Subjects covered include numerical simulation, methods for optimization and control, parallel computing, and software development, as well as the applications of scientific computing in physics, mechanics, biomechanics and robotics, material science, hydrology, biotechnology, medicine, transport, scheduling, and in...

  11. Corrective economic dispatch and operational cycles for probabilistic unit commitment with demand response and high wind power

    International Nuclear Information System (INIS)

    Azizipanah-Abarghooee, Rasoul; Golestaneh, Faranak; Gooi, Hoay Beng; Lin, Jeremy; Bavafa, Farhad; Terzija, Vladimir

    2016-01-01

    Highlights: • Suggesting a new UC mixing probabilistic security and incentive-based demand response. • Investigating the effects of uncertainty on UC using chance-constrained programming. • Proposing an efficient spinning reserve satisfaction based on a new ED correction. • Presenting a new operational-cycles approach for converting binary variables into discrete ones.
    Abstract: We propose a probabilistic unit commitment problem with incentive-based demand response and a high level of wind power. Our novel formulation provides an optimal allocation of up/down spinning reserve. A more efficient unit commitment algorithm based on operational cycles is developed. A multi-period elastic residual demand economic model based on the self- and cross-price elasticities and customers' benefit function is used. In the proposed scheme, the probability of residual demand falling within the up/down spinning reserve imposed by the n − 1 security criterion is considered as a stochastic constraint. A chance-constrained method, with a new iterative economic dispatch correction, wind power curtailment, and commitment of cheaper units, is applied to guarantee that the probability of loss of load is lower than a pre-defined risk level. The developed architecture builds upon an improved Jaya algorithm to generate feasible, robust and optimal solutions corresponding to the operational cost. The proposed framework is applied to a small test system with 10 units and also to the IEEE 118-bus system to illustrate its advantages in efficient scheduling of generation in the power systems.
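
    The chance constraint described above requires that the probability of the residual demand exceeding the scheduled capacity plus up-reserve stays below a pre-defined risk level; a minimal sketch of that check under a normal forecast-error assumption follows. The numbers and the Gaussian error model are illustrative assumptions, not the paper's formulation.

```python
from scipy.stats import norm

def loss_of_load_probability(expected_residual_demand, sigma, committed_capacity, up_reserve):
    """P(residual demand > committed capacity + up reserve) under a normal forecast error."""
    margin = committed_capacity + up_reserve - expected_residual_demand
    return 1.0 - norm.cdf(margin / sigma)

risk_level = 0.05                      # pre-defined acceptable risk
p = loss_of_load_probability(expected_residual_demand=900.0,  # MW, net of wind
                             sigma=60.0,                       # forecast-error std dev
                             committed_capacity=950.0,
                             up_reserve=60.0)
print(f"loss-of-load probability = {p:.3f}, feasible = {p <= risk_level}")
```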

  12. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    CERN Document Server

    Abdurachmanov, David; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2014-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  13. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    Science.gov (United States)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Knight, Robert; Muzaffar, Shahzad

    2015-05-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG).

  14. Heterogeneous High Throughput Scientific Computing with APM X-Gene and Intel Xeon Phi

    International Nuclear Information System (INIS)

    Abdurachmanov, David; Bockelman, Brian; Elmer, Peter; Eulisse, Giulio; Muzaffar, Shahzad; Knight, Robert

    2015-01-01

    Electrical power requirements will be a constraint on the future growth of Distributed High Throughput Computing (DHTC) as used by High Energy Physics. Performance-per-watt is a critical metric for the evaluation of computer architectures for cost-efficient computing. Additionally, future performance growth will come from heterogeneous, many-core, and high computing density platforms with specialized processors. In this paper, we examine the Intel Xeon Phi Many Integrated Cores (MIC) co-processor and Applied Micro X-Gene ARMv8 64-bit low-power server system-on-a-chip (SoC) solutions for scientific computing applications. We report our experience on software porting, performance and energy efficiency and evaluate the potential for use of such technologies in the context of distributed computing systems such as the Worldwide LHC Computing Grid (WLCG). (paper)

  15. OpenTopography: Addressing Big Data Challenges Using Cloud Computing, HPC, and Data Analytics

    Science.gov (United States)

    Crosby, C. J.; Nandigam, V.; Phan, M.; Youn, C.; Baru, C.; Arrowsmith, R.

    2014-12-01

    OpenTopography (OT) is a geoinformatics-based data facility initiated in 2009 for democratizing access to high-resolution topographic data, derived products, and tools. Hosted at the San Diego Supercomputer Center (SDSC), OT utilizes cyberinfrastructure, including large-scale data management, high-performance computing, and service-oriented architectures, to provide efficient Web-based access to large, high-resolution topographic datasets. OT collocates data with processing tools to enable users to quickly access custom data and derived products for their application. OT's ongoing R&D efforts aim to solve emerging technical challenges associated with exponential growth in data, higher-order data products, and the user base. Optimization of data management strategies can be informed by a comprehensive set of OT user access metrics that allow us to better understand usage patterns with respect to the data. By analyzing the spatiotemporal access patterns within the datasets, we can map areas of the data archive that are highly active (hot) versus the ones that are rarely accessed (cold). This enables us to architect a tiered storage environment consisting of high-performance disk storage (SSD) for the hot areas and less expensive, slower disk for the cold ones, thereby optimizing price to performance. From a compute perspective, OT is looking at cloud-based solutions such as the Microsoft Azure platform to handle sudden increases in load. An OT virtual machine image in Microsoft's VM Depot can be invoked and deployed quickly in response to increased system demand. OT has also integrated SDSC HPC systems like the Gordon supercomputer into our infrastructure tier to enable compute-intensive workloads like parallel computation of hydrologic routing on high-resolution topography. This capability also allows OT to scale to HPC resources during high loads to meet user demand and provide more efficient processing. With a growing user base and maturing scientific user...
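
    A toy sketch of the hot/cold tiering idea described above, mapping per-region access counts to storage tiers; the threshold and access counts are hypothetical, not OT's actual policy.

```python
def assign_tier(access_count, hot_threshold=1000):
    """Place frequently accessed dataset regions on SSD, rarely accessed ones on slower disk."""
    return "ssd" if access_count >= hot_threshold else "hdd"

# Hypothetical access counts per dataset tile over the last month.
access_log = {"tile_001": 5400, "tile_002": 12, "tile_003": 980, "tile_004": 2300}

placement = {tile: assign_tier(count) for tile, count in access_log.items()}
print(placement)   # e.g. {'tile_001': 'ssd', 'tile_002': 'hdd', ...}
```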

  16. High performance computing and communications: FY 1996 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-05-16

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

  17. Fluid/Structure Interaction Studies of Aircraft Using High Fidelity Equations on Parallel Computers

    Science.gov (United States)

    Guruswamy, Guru; VanDalsem, William (Technical Monitor)

    1994-01-01

    Aeroelasticity, which involves strong coupling of fluids, structures and controls, is an important element in designing an aircraft. Computational aeroelasticity using low-fidelity methods, such as the linear aerodynamic flow equations coupled with the modal structural equations, is well advanced. Though these low-fidelity approaches are computationally less intensive, they are not adequate for the analysis of modern aircraft such as the High Speed Civil Transport (HSCT) and Advanced Subsonic Transport (AST), which can experience complex flow/structure interactions. HSCT can experience vortex-induced aeroelastic oscillations, whereas AST can experience transonic buffet associated structural oscillations. Both aircraft may experience a dip in the flutter speed in the transonic regime. For accurate aeroelastic computations in these complex fluid/structure interaction situations, high-fidelity equations such as the Navier-Stokes for fluids and the finite elements for structures are needed. Computations using these high-fidelity equations require large computational resources, both in memory and speed. Current conventional supercomputers have reached their limitations in both memory and speed. As a result, parallel computers have evolved to overcome the limitations of conventional computers. This paper will address the transition that is taking place in computational aeroelasticity from conventional computers to parallel computers. The paper will address special techniques needed to take advantage of the architecture of new parallel computers. Results will be illustrated from computations made on the iPSC/860 and IBM SP2 computers using the ENSAERO code, which directly couples the Euler/Navier-Stokes flow equations with high-resolution finite-element structural equations.

  18. FPGA Compute Acceleration for High-Throughput Data Processing in High-Energy Physics Experiments

    CERN Multimedia

    CERN. Geneva

    2017-01-01

    The upgrades of the four large experiments of the LHC at CERN in the coming years will result in a huge increase in data bandwidth for each experiment, which needs to be processed very efficiently. For example, the LHCb experiment will upgrade its detector in 2019/2020 to a 'triggerless' readout scheme, where all of the readout electronics and several sub-detector parts will be replaced. The new readout electronics will be able to read out the detector at 40 MHz. This increases the data bandwidth from the detector down to the event filter farm to 40 TBit/s, which must be processed to select the interesting proton-proton collisions for later storage. The architecture of such a computing farm, which can process this amount of data as efficiently as possible, is a challenging task, and several compute accelerator technologies are being considered. In the high performance computing sector more and more FPGA compute accelerators are being used to improve the compute performance and reduce the...

  19. The demand of guava in Colombia

    Directory of Open Access Journals (Sweden)

    Julio César Alonso-Cifuentes

    2017-01-01

    Full Text Available In Colombia, no systematic work has been carried out to determine the demand for fruits beyond descriptive analyses of per capita consumption according to different individual socioeconomic characteristics, much less for a specific product such as guava, Psidium guajava L. (Myrtaceae). This paper estimates the relationship between guava prices and the demand for guava in Colombia. We found that guava consumption is not affected by its price and that its demand is highly correlated with income. While socio-economic characteristics such as income, education and the employment of the household head affect the decision to consume guava, other characteristics such as race and the number of household members determine the quantity of guava consumed in a Colombian household.

  20. High-performance computing for structural mechanics and earthquake/tsunami engineering

    CERN Document Server

    Hori, Muneo; Ohsaki, Makoto

    2016-01-01

    Huge earthquakes and tsunamis have caused serious damage to important structures such as civil infrastructure elements, buildings and power plants around the globe. To quantitatively evaluate such damage processes and to design effective prevention and mitigation measures, the latest high-performance computational mechanics technologies, which include terascale to petascale computers, can offer powerful tools. The phenomena covered in this book include seismic wave propagation in the crust and soil, seismic response of infrastructure elements such as tunnels considering soil-structure interactions, seismic response of high-rise buildings, seismic response of nuclear power plants, tsunami run-up over coastal towns and tsunami inundation considering fluid-structure interactions. The book provides all necessary information for addressing these phenomena, ranging from the fundamentals of high-performance computing for finite element methods, key algorithms of accurate dynamic structural analysis, fluid flows ...

  1. Computer simulation of high energy displacement cascades

    International Nuclear Information System (INIS)

    Heinisch, H.L.

    1990-01-01

    A methodology developed for modeling many aspects of high energy displacement cascades with molecular level computer simulations is reviewed. The initial damage state is modeled in the binary collision approximation (using the MARLOWE computer code), and the subsequent disposition of the defects within a cascade is modeled with a Monte Carlo annealing simulation (the ALSOME code). There are few adjustable parameters, and none are set to physically unreasonable values. The basic configurations of the simulated high energy cascades in copper, i.e., the number, size and shape of damage regions, compare well with observations, as do the measured numbers of residual defects and the fractions of freely migrating defects. The success of these simulations is somewhat remarkable, given the relatively simple models of defects and their interactions that are employed. The reason for this success is that the behavior of the defects is very strongly influenced by their initial spatial distributions, which the binary collision approximation adequately models. The MARLOWE/ALSOME system, with input from molecular dynamics and experiments, provides a framework for investigating the influence of high energy cascades on microstructure evolution. (author)

  2. ClustalXeed: a GUI-based grid computation version for high performance and terabyte size multiple sequence alignment

    Directory of Open Access Journals (Sweden)

    Kim Taeho

    2010-09-01

    Full Text Available Abstract Background There is an increasing demand to assemble and align large-scale biological sequence data sets. The commonly used multiple sequence alignment programs are still limited in their ability to handle very large amounts of sequences because they lack a scalable high-performance computing (HPC) environment with a greatly extended data storage capacity. Results We designed ClustalXeed, a software system for multiple sequence alignment with incremental improvements over previous versions of the ClustalX and ClustalW-MPI software. The primary advantage of ClustalXeed over other multiple sequence alignment software is its ability to align a large family of protein or nucleic acid sequences. To solve the conventional memory-dependency problem, ClustalXeed uses both physical random access memory (RAM) and a distributed file-allocation system for distance matrix construction and pair-align computation. The computational efficiency of the disk-storage system was markedly improved by implementing an efficient load-balancing algorithm, called the "idle node-seeking task algorithm" (INSTA). The new editing option and the graphical user interface (GUI) provide ready access to a parallel-computing environment for users who seek fast and easy alignment of large DNA and protein sequence sets. Conclusions ClustalXeed can now compute large volumes of biological sequence data that were not tractable in any other parallel or single MSA program. The main developments include: (1) the ability to tackle larger sequence alignment problems than possible with previous systems, through markedly improved storage-handling capabilities; (2) an efficient task load-balancing algorithm, INSTA, which improves overall processing times for multiple sequence alignment with input sequences of non-uniform length; and (3) support for both single-PC and distributed cluster systems.
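
    The load-balancing idea described above can be illustrated with a generic "earliest-idle node takes the next task" scheduler. The sketch below is not the published INSTA algorithm; it is a minimal greedy list-scheduling illustration of how assigning variable-cost pair-alignment tasks to whichever node becomes idle first evens out work when task lengths are non-uniform.

        import heapq

        def idle_node_schedule(task_costs, n_nodes):
            # Greedy list scheduling: sort tasks by decreasing cost and always hand
            # the next task to the node that becomes idle earliest.
            tasks = sorted(task_costs, reverse=True)
            nodes = [(0.0, i) for i in range(n_nodes)]      # (time node becomes idle, node id)
            heapq.heapify(nodes)
            assignment = {i: [] for i in range(n_nodes)}
            for cost in tasks:
                idle_at, node = heapq.heappop(nodes)        # earliest-idle node
                assignment[node].append(cost)
                heapq.heappush(nodes, (idle_at + cost, node))
            makespan = max(t for t, _ in nodes)
            return assignment, makespan

        assignment, makespan = idle_node_schedule([5, 3, 8, 2, 7, 4, 6, 1], n_nodes=3)
        print(makespan, assignment)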

  3. A first attempt to bring computational biology into advanced high school biology classrooms.

    Science.gov (United States)

    Gallagher, Suzanne Renick; Coon, William; Donley, Kristin; Scott, Abby; Goldberg, Debra S

    2011-10-01

    Computer science has become ubiquitous in many areas of biological research, yet most high school and even college students are unaware of this. As a result, many college biology majors graduate without adequate computational skills for contemporary fields of biology. The absence of a computational element in secondary school biology classrooms is of growing concern to the computational biology community and to biology teachers who would like to acquaint their students with updated approaches in the discipline. We present a first attempt to correct this absence by introducing a computational biology element, used to teach genetic evolution, into advanced biology classes in two local high schools. Our primary goal was to show students how computation is used in biology and why a basic understanding of computation is necessary for research in many fields of biology. This curriculum is intended to be taught by a computational biologist who has worked with a high school advanced biology teacher to adapt the unit for his/her classroom, but a motivated high school teacher comfortable with mathematics and computing may be able to teach it alone. In this paper, we present our curriculum, which takes into consideration the constraints of the required curriculum, and discuss our experiences teaching it. We describe the successes and challenges we encountered while bringing this unit to high school students, discuss how we addressed these challenges, and make suggestions for future versions of this curriculum. We believe that our curriculum can be a valuable seed for further development of computational activities aimed at high school biology students. Further, our experiences may be of value to others teaching computational biology at this level. Our curriculum can be obtained at http://ecsite.cs.colorado.edu/?page_id=149#biology or by contacting the authors.

  4. Does good leadership buffer effects of high emotional demands at work on risk of antidepressant treatment?

    DEFF Research Database (Denmark)

    Madsen, Ida E H; Hanson, Linda L Magnusson; Rugulies, Reiner Ernst

    2014-01-01

    Emotionally demanding work has been associated with increased risk of common mental disorders. Because emotional demands may not be preventable in certain occupations, the identification of workplace factors that can modify this association is vital. This article examines whether the effects of emotional demands on antidepressant treatment, as an indicator of common mental disorders, are buffered by good leadership.

  5. Real Time Animation of Trees Based on BBSC in Computer Games

    Directory of Open Access Journals (Sweden)

    Xuefeng Ao

    2009-01-01

    Full Text Available Researchers in the field of computer games usually find it difficult to simulate the motion of realistic 3D tree models because the tree model itself has a very complicated structure and many sophisticated factors need to be considered during the simulation. Though there is some work on simulating 3D trees and their motion, little of it is used in computer games due to the strict real-time requirements of such games. In this paper, an approach to animating trees in computer games based on a novel tree model representation, Ball B-Spline Curves (BBSCs), is proposed. By taking advantage of the good features of the BBSC-based model, physical simulation of the motion of leafless trees under wind becomes easier and more efficient. The method can generate realistic 3D tree animation in real time, which meets the demanding real-time requirements of computer games.

  6. Current Trends in Cloud Computing A Survey of Cloud Computing Systems

    OpenAIRE

    Harjit Singh

    2012-01-01

    Cloud computing, which has become an increasingly important trend, is a virtualization technology that uses the internet and central remote servers to offer the sharing of resources, including infrastructure, software, applications and business processes, to the market in order to fulfill elastic demand. In today’s competitive environment, the service vitality, elasticity, choice and flexibility offered by this scalable technology are so attractive that they make cloud computing i...

  7. High Speed Mobility Through On-Demand Aviation

    Science.gov (United States)

    Moore, Mark D.; Goodrich, Ken; Viken, Jeff; Smith, Jeremy; Fredericks, Bill; Trani, Toni; Barraclough, Jonathan; German, Brian; Patterson, Michael

    2013-01-01

    automobiles. - Community Noise: Hub and smaller GA airports are facing increasing noise restrictions, and while commercial airliners have dramatically decreased their community noise footprint over the past 30 years, GA aircraft noise has essentially remained the same and, moreover, is located in closer proximity to neighborhoods and businesses. - Operating Costs: GA operating costs have risen dramatically due to average fuel costs of over $6 per gallon, which has constrained the market over the past decade and resulted in more than 50% lower sales and 35% fewer yearly operations. Infusion of autonomy and electric propulsion technologies can accomplish not only a transformation of the GA market, but also provide a technology enablement bridge for both larger aircraft and the emerging civil Unmanned Aerial Systems (UAS) markets. The NASA Advanced General Aviation Transport Experiments (AGATE) project successfully used a similar approach to enable the introduction of primary composite structures and flat panel displays in the 1990s, establishing both the technology and certification standardization to permit quick adoption through partnerships with industry, academia, and the Federal Aviation Administration (FAA). Regional and airliner markets are experiencing constant pressure to achieve decreasing levels of community emissions and noise, while lowering operating costs and improving safety. But to what degree can these new technology frontiers impact aircraft safety, the environment, operations, cost, and performance? Are the benefits transformational enough to fundamentally alter aircraft competitiveness and productivity to permit much greater aviation use for high-speed and On-Demand Mobility (ODM)? These questions were asked in a Zip aviation system study named after the Zip Car, an emerging car-sharing business model. Zip Aviation investigates the potential to enable new emergent markets for aviation that offer "more flexibility than the existing transportation solutions

  8. Employment consequences of depressive symptoms and work demands individually and combined.

    Science.gov (United States)

    Thielen, Karsten; Nygaard, Else; Andersen, Ingelise; Diderichsen, Finn

    2014-02-01

    Denmark, like other Western countries, has recently been burdened by increasingly high social spending on the employment consequences of ill mental health. This might be the result of high work demands affecting persons with ill mental health. Therefore, this study assesses to what extent depressive symptoms and high work demands, individually and combined, have an effect on employment consequences. We conducted a population-based 7-year longitudinal follow-up study with baseline information from the year 2000 on socio-demographics, lifestyle, depressive symptoms and work demands. In total, 5785 employed persons, aged 40 and 50 years, were included. Information about employment status, sick leave and work disability was obtained from registers. Logistic regression models were used to measure the separate and combined effects of depressive symptoms and work demands on job change, unemployment and sick leave during 2001-02 and on work disability during 2003-07. After adjustment for covariates, high physical work demands and depressive symptoms had a graded effect on subsequent unemployment, sick leave and permanent work disability. Persons with both depressive symptoms and high physical demands had the highest risks, especially for sick leave, but the combined effect did not exceed the product of the single effects. Persons who perceived a high amount of work changed jobs significantly more frequently. Persons with depressive symptoms might have an increased risk of negative employment consequences irrespective of the kind and amount of work demands. This might be an effect of the level of work ability in general, as well as partly the result of health selection and co-morbidity.

  9. Templet Web: the use of volunteer computing approach in PaaS-style cloud

    Science.gov (United States)

    Vostokin, Sergei; Artamonov, Yuriy; Tsarev, Daniil

    2018-03-01

    This article presents the Templet Web cloud service. The service is designed for the automation of high-performance scientific computing. The use of high-performance technology is specifically required by new fields of computational science such as data mining, artificial intelligence, machine learning, and others. Cloud technologies provide a significant cost reduction for high-performance scientific applications. The main objectives to achieve this cost reduction in the Templet Web service design are: (a) the implementation of "on-demand" access; (b) source code deployment management; (c) automation of high-performance computing program development. The distinctive feature of the service is the approach mainly used in the field of volunteer computing, when a person who has access to a computer system delegates his access rights to the requesting user. We developed an access procedure, algorithms, and software for the utilization of free computational resources of the academic cluster system in line with the methods of volunteer computing. The Templet Web service has been in operation for five years. It has been successfully used for conducting laboratory workshops and solving research problems, some of which are considered in this article. The article also provides an overview of research directions related to service development.

  10. Computer Science in High School Graduation Requirements. ECS Education Trends (Updated)

    Science.gov (United States)

    Zinth, Jennifer

    2016-01-01

    Allowing high school students to fulfill a math or science high school graduation requirement via a computer science credit may encourage more students to pursue computer science coursework. This Education Trends report is an update to the original report released in April 2015 and explores state policies that allow or require districts to apply…

  11. A Crafts-Oriented Approach to Computing in High School: Introducing Computational Concepts, Practices, and Perspectives with Electronic Textiles

    Science.gov (United States)

    Kafai, Yasmin B.; Lee, Eunkyoung; Searle, Kristin; Fields, Deborah; Kaplan, Eliot; Lui, Debora

    2014-01-01

    In this article, we examine the use of electronic textiles (e-textiles) for introducing key computational concepts and practices while broadening perceptions about computing. The starting point of our work was the design and implementation of a curriculum module using the LilyPad Arduino in a pre-AP high school computer science class. To…

  12. Computing trends using graphic processor in high energy physics

    CERN Document Server

    Niculescu, Mihai

    2011-01-01

    One of the main challenges in High Energy Physics is the fast analysis of large amounts of experimental and simulated data. At LHC-CERN, one p-p event is approximately 1 Mb in size. The time taken to analyze the data and obtain fast results depends on the available computational power. The main advantage of GPU (Graphics Processing Unit) programming over traditional CPU programming is that graphics cards bring a lot of computing power at a very low price. Today a huge number of applications (scientific, financial, etc.) have begun to be ported to or developed for GPUs, including Monte Carlo tools and data analysis tools for High Energy Physics. In this paper, we present the current status and trends in HEP computing using GPUs.

  13. RAPPORT: running scientific high-performance computing applications on the cloud.

    Science.gov (United States)

    Cohen, Jeremy; Filippis, Ioannis; Woodbridge, Mark; Bauer, Daniela; Hong, Neil Chue; Jackson, Mike; Butcher, Sarah; Colling, David; Darlington, John; Fuchs, Brian; Harvey, Matt

    2013-01-28

    Cloud computing infrastructure is now widely used in many domains, but one area where there has been more limited adoption is research computing, in particular for running scientific high-performance computing (HPC) software. The Robust Application Porting for HPC in the Cloud (RAPPORT) project took advantage of existing links between computing researchers and application scientists in the fields of bioinformatics, high-energy physics (HEP) and digital humanities, to investigate running a set of scientific HPC applications from these domains on cloud infrastructure. In this paper, we focus on the bioinformatics and HEP domains, describing the applications and target cloud platforms. We conclude that, while there are many factors that need consideration, there is no fundamental impediment to the use of cloud infrastructure for running many types of HPC applications and, in some cases, there is potential for researchers to benefit significantly from the flexibility offered by cloud platforms.

  14. Evaluation of high temperature gas reactor for demanding cogeneration load follow

    International Nuclear Information System (INIS)

    Yan, Xing L.; Sato, Hiroyuki; Tachibana, Yukio; Kunitomi, Kazuhiko; Hino, Ryutaro

    2012-01-01

    Modular nuclear reactor systems are being developed around the world for new missions, among which is cogeneration for industries and remote areas. Like its existing fossil energy counterparts in these markets, a nuclear plant would need to demonstrate the feasibility of load following, including (1) the reliability to generate power and heat simultaneously and alone and (2) the flexibility to vary cogeneration rates concurrently with demand changes. This article reports the results of JAEA's evaluation of the high temperature gas reactor (HTGR) for performing these duties. The evaluation results in a plant design based on the materials and design codes developed with JAEA's operating test reactor and on additional equipment validation programs. The 600 MWt HTGR plant generates electricity efficiently by gas turbine and produces 900°C heat by a topping heater. The heater couples via a heat transport loop to an industrial facility that consumes the high temperature heat to yield products such as hydrogen fuel, steel, or chemicals. Original control methods are proposed to automate transitions between the load duties. Equipment challenges are addressed for severe operating conditions. Performance limits of cogeneration load following are quantified from plant system simulations for a range of bounding events, including a loss of either load and a rapid peaking of electricity. (author)

  15. Analysis of water supply and demand in high mountain cities of Bolivia under growing population and changing climate

    Science.gov (United States)

    Kinouchi, T.; Mendoza, J.; Asaoka, Y.; Fuchs, P.

    2017-12-01

    Water resources in La Paz and El Alto, high mountain capital cities of Bolivia, strongly depend on the surface and subsurface runoff from partially glacierized catchments located in the Cordillera Real, Andes. Due to growing population and changing climate, the balance between water supply from the source catchments and demand for drinking, agriculture, industry and hydropower has become precarious in recent years, as evidenced by a serious drought during the 2015-2016 El Niño event. To predict the long-term availability of water resources under changing climate, we developed a semi-distributed glacio-hydrological model that considers various runoff pathways from partially glacierized high-altitude catchments. Two GCM projections (MRI-AGCM and INGV-ECHAM4) were used for the prediction, with bias corrected by reanalysis data (ERA-INTERIM) and downscaled to the target areas using data monitored at several weather stations. The model was applied to three catchments from which current water resources are supplied and eight additional catchments that will be potentially effective in compensating for reduced runoff from the current water resource areas. For predicting the future water demand, a cohort-component method was used to project the size and composition of population change, considering natural and social change (birth, death and transfer). As a result, the total population is expected to increase from 1.6 million in 2012 to 2.0 million in 2036. The water demand was predicted for a given unit water consumption, non-revenue water rate (NWR), and sectoral percentage of water consumption for domestic, industrial and commercial purposes. The results of the hydrological simulations and the analysis of water demand indicated that water supply and demand are barely balanced in recent years, while the total runoff from current water resource areas will continue to decrease and unprecedented water shortage is likely to occur from around 2020 toward the middle of the 21st century even
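
    The cohort-component projection mentioned above can be sketched very compactly: the population is aged forward by survival rates, births are added from age-specific fertility, and net migration is added on top. The rates and starting population below are invented for illustration and are not the demographic data used in the study.

        import numpy as np

        def cohort_component_step(pop, survival, fertility, net_migration):
            # One step of a simplified single-sex cohort-component projection:
            # survivors move up one age group, births enter the youngest group,
            # and net migration is added to every group.
            new_pop = np.zeros_like(pop)
            new_pop[1:] = pop[:-1] * survival[:-1]
            new_pop[0] = np.sum(pop * fertility)
            return new_pop + net_migration

        pop = np.array([200e3, 350e3, 400e3, 300e3, 150e3])      # five broad age groups
        survival = np.array([0.99, 0.99, 0.98, 0.95, 0.0])
        fertility = np.array([0.0, 0.06, 0.08, 0.02, 0.0])
        migration = np.array([1e3, 2e3, 1e3, 0.0, 0.0])

        for year in range(2012, 2036):
            pop = cohort_component_step(pop, survival, fertility, migration)
        print(f"projected total population in 2036: {pop.sum():,.0f}")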

  16. Mobile cloud computing for computation offloading: Issues and challenges

    Directory of Open Access Journals (Sweden)

    Khadija Akherfi

    2018-01-01

    Full Text Available Despite the evolution and enhancements that mobile devices have experienced, they are still considered limited computing devices. Today, users are becoming more demanding and expect to execute computationally intensive applications on their smartphone devices. Therefore, Mobile Cloud Computing (MCC) integrates mobile computing and Cloud Computing (CC) in order to extend the capabilities of mobile devices using offloading techniques. Computation offloading tackles limitations of Smart Mobile Devices (SMDs), such as limited battery lifetime, limited processing capabilities, and limited storage capacity, by offloading the execution and workload to other, richer systems with better performance and resources. This paper presents the current offloading frameworks and computation offloading techniques, and analyzes them along with their main critical issues. In addition, it explores different important parameters based on which the frameworks are implemented, such as offloading method and level of partitioning. Finally, it summarizes the issues in offloading frameworks in the MCC domain that require further research.
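
    A common first-order criterion behind such offloading decisions (an illustration of the general idea, not a rule taken from the surveyed frameworks) is to offload when remote execution plus data transfer beats local execution:

        def should_offload(cycles, local_speed, remote_speed, data_bytes, bandwidth_bps):
            # Offload when remote execution plus data transfer is faster than running
            # locally. Energy can be modeled the same way by replacing times with costs.
            t_local = cycles / local_speed
            t_remote = cycles / remote_speed + 8 * data_bytes / bandwidth_bps
            return t_remote < t_local

        # Example: 5e9 CPU cycles, 1 GHz phone vs. a 10x faster server,
        # 2 MB of state transferred over a 20 Mbit/s link.
        print(should_offload(5e9, 1e9, 10e9, 2e6, 20e6))  # True: 1.3 s remote vs 5 s local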

  17. Reduction of peak energy demand based on smart appliances energy consumption adjustment

    Science.gov (United States)

    Powroźnik, P.; Szulim, R.

    2017-08-01

    In this paper, the concept of an elastic model of energy management for smart grids and micro smart grids is presented. For the proposed model, a method for reducing peak demand in a micro smart grid has been defined. The idea of peak demand reduction in the elastic model of energy management is to balance the demand and supply of power for the given micro smart grid at any given moment. Simulation studies were carried out on real household data available in the UCI Machine Learning Repository. The results may have practical application in smart grid networks where the energy consumption of smart appliances needs to be adjusted. The article presents a proposal to implement the elastic model of energy management as a cloud computing solution. This approach to peak demand reduction might have application particularly in a large smart grid.
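
    The general idea of trimming peaks by adjusting deferrable appliance loads can be illustrated with a toy greedy scheme that moves flexible load out of hours where total demand exceeds a cap and into the emptiest hours. This sketch is only an illustration of demand-side peak shaving, not the elastic model proposed in the paper.

        def shift_peak_loads(baseline, deferrable, cap):
            # Move deferrable appliance load out of over-cap hours into the hours
            # with the most spare capacity.
            load = [b + d for b, d in zip(baseline, deferrable)]
            for hour, demand in enumerate(load):
                excess = demand - cap
                while excess > 0:
                    target = min(range(len(load)), key=lambda h: load[h])
                    moved = min(excess, deferrable[hour], cap - load[target])
                    if moved <= 0:
                        break
                    load[hour] -= moved
                    load[target] += moved
                    deferrable[hour] -= moved
                    excess -= moved
            return load

        hourly = shift_peak_loads(baseline=[2, 3, 6, 7, 4], deferrable=[0, 1, 2, 2, 1], cap=6)
        print(hourly)  # peak hours are trimmed toward the 6 kW cap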

  18. Energy efficiency models and optimization algorithm to enhance on-demand resource delivery in a cloud computing environment / Thusoyaone Joseph Moemi

    OpenAIRE

    Moemi, Thusoyaone Joseph

    2013-01-01

    Online hosted services are what is referred to as cloud computing. Access to these services is via the internet. It shifts the traditional IT resource ownership model to renting, so the high cost of infrastructure does not prevent the less privileged from experiencing the benefits that this new paradigm brings. Cloud computing thus provides flexible services to cloud users in the form of software, platform and infrastructure as services. The goal behind cloud computing is to provi...

  19. Distributed computing for real-time petroleum reservoir monitoring

    Energy Technology Data Exchange (ETDEWEB)

    Ayodele, O. R. [University of Alberta, Edmonton, AB (Canada)

    2004-05-01

    Computer software architecture is presented to illustrate how the concept of distributed computing can be applied to real-time reservoir monitoring processes, permitting the continuous monitoring of the dynamic behaviour of petroleum reservoirs at much shorter intervals. The paper describes the fundamental technologies driving distributed computing, namely Java 2 Platform Enterprise edition (J2EE) by Sun Microsystems, and the Microsoft Dot-Net (Microsoft.Net) initiative, and explains the challenges involved in distributed computing. These are: (1) availability of permanently placed downhole equipment to acquire and transmit seismic data; (2) availability of high bandwidth to transmit the data; (3) security considerations; (4) adaptation of existing legacy codes to run on networks as downloads on demand; and (5) credibility issues concerning data security over the Internet. Other applications of distributed computing in the petroleum industry are also considered, specifically MWD, LWD and SWD (measurement-while-drilling, logging-while-drilling, and simulation-while-drilling), and drill-string vibration monitoring. 23 refs., 1 fig.

  20. Simple, parallel, high-performance virtual machines for extreme computations

    International Nuclear Information System (INIS)

    Chokoufe Nejad, Bijan; Ohl, Thorsten; Reuter, Jurgen

    2014-11-01

    We introduce a high-performance virtual machine (VM) written in a numerically fast language like Fortran or C to evaluate very large expressions. We discuss the general concept of how to perform computations in terms of a VM and present specifically a VM that is able to compute tree-level cross sections for any number of external legs, given the corresponding byte code from the optimal matrix element generator, O'Mega. Furthermore, this approach allows us to formulate the parallel computation of a single phase space point in a simple and obvious way. We analyze the scaling behaviour with multiple threads as well as the benefits and drawbacks that are introduced with this method. Our implementation of a VM can run faster than the corresponding native, compiled code for certain processes and compilers, especially for very high multiplicities, and in general has runtimes in the same order of magnitude. By avoiding the tedious compile and link steps, which may fail for source code files of gigabyte size, new processes or complex higher order corrections that are currently out of reach could be evaluated with a VM given enough computing power.
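
    The essential idea of evaluating precompiled byte code instead of compiling and linking generated source can be shown with a minimal stack-based expression VM. The instruction set below is invented for illustration; it is not the O'Mega byte code used by the authors.

        def run(bytecode, inputs):
            # Evaluate a list of (opcode, argument) pairs on a value stack.
            stack = []
            for op, arg in bytecode:
                if op == "PUSH_CONST":
                    stack.append(arg)
                elif op == "PUSH_INPUT":
                    stack.append(inputs[arg])
                elif op == "ADD":
                    b, a = stack.pop(), stack.pop()
                    stack.append(a + b)
                elif op == "MUL":
                    b, a = stack.pop(), stack.pop()
                    stack.append(a * b)
                else:
                    raise ValueError(f"unknown opcode {op}")
            return stack.pop()

        # Byte code for the expression (x + 3) * y:
        program = [("PUSH_INPUT", "x"), ("PUSH_CONST", 3), ("ADD", None),
                   ("PUSH_INPUT", "y"), ("MUL", None)]
        print(run(program, {"x": 2, "y": 5}))  # 25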

  1. Innovation and Demand

    DEFF Research Database (Denmark)

    Andersen, Esben Sloth

    2007-01-01

    the demand-side of markets in the simplest possible way. This strategy has allowed a gradual increase in the sophistication of supply-side aspects of economic evolution, but the one-sided focus on supply is facing diminishing returns. Therefore, demand-side aspects of economic evolution have in recent years received increased attention. The present paper argues that the new emphasis on demand-side factors is quite crucial for a deepened understanding of economic evolution. The major reasons are the following: First, demand represents the core force of selection that gives direction to the evolutionary process. Second, firms' innovative activities relate, directly or indirectly, to the structure of expected and actual demand. Third, the demand side represents the most obvious way of turning to the much-needed analysis of macro-evolutionary change of the economic system.

  2. Parameters that affect parallel processing for computational electromagnetic simulation codes on high performance computing clusters

    Science.gov (United States)

    Moon, Hongsik

    What is the impact of multicore and associated advanced technologies on computational software for science? Most researchers and students have multicore laptops or desktops for their research, and they need computing power to run computational software packages. Computing power was initially derived from Central Processing Unit (CPU) clock speed. That changed when increases in clock speed became constrained by power requirements. Chip manufacturers turned to multicore CPU architectures and associated technological advancements to create the CPUs for the future. Most software applications benefited from the increased computing power the same way that increases in clock speed helped applications run faster. However, for Computational ElectroMagnetics (CEM) software developers, this change was not an obvious benefit - it appeared to be a detriment. Developers were challenged to find a way to correctly utilize the advancements in hardware so that their codes could benefit. The solution was parallelization, and this dissertation details the investigation to address these challenges. Prior to multicore CPUs, advanced computer technologies were compared on performance using benchmark software, and the metric was FLoating-point Operations Per Second (FLOPS), which indicates system performance for scientific applications that make heavy use of floating-point calculations. Is FLOPS an effective metric for parallelized CEM simulation tools on new multicore systems? Parallel CEM software needs to be benchmarked not only by FLOPS but also by the performance of other parameters related to the type and utilization of the hardware, such as CPU, Random Access Memory (RAM), hard disk, network, etc. The codes need to be optimized for more than just FLOPS, and new parameters must be included in benchmarking. In this dissertation, the parallel CEM software named High Order Basis Based Integral Equation Solver (HOBBIES) is introduced. This code was developed to address the needs of the

  3. Cloud Computing Security Issue: Survey

    Science.gov (United States)

    Kamal, Shailza; Kaur, Rajpreet

    2011-12-01

    Cloud computing has been a growing field in the IT industry since it was proposed by IBM in 2007. Other companies such as Google, Amazon, and Microsoft provide further cloud computing products. Cloud computing is internet-based computing that shares resources and information on demand. It provides services such as SaaS, IaaS and PaaS. The services and resources are shared through virtualization, which runs multiple applications on the cloud. This discussion surveys the security challenges in cloud computing and describes some standards and protocols that show how security can be managed.

  4. The comparison of high and standard definition computed ...

    African Journals Online (AJOL)

    The comparison of high and standard definition computed tomography techniques regarding coronary artery imaging. A Aykut, D Bumin, Y Omer, K Mustafa, C Meltem, C Orhan, U Nisa, O Hikmet, D Hakan, K Mert ...

  5. Estimation of the Demand for Hospital Care After a Possible High-Magnitude Earthquake in the City of Lima, Peru.

    Science.gov (United States)

    Bambarén, Celso; Uyen, Angela; Rodriguez, Miguel

    2017-02-01

    Introduction A model prepared by the National Civil Defense (INDECI; Lima, Peru) estimated that an earthquake of magnitude 8.0 Mw off the central coast of Peru would result in 51,019 deaths and 686,105 injured in districts of Metropolitan Lima and Callao. Using this information as a base, a study was designed to determine the characteristics of the demand for treatment in public hospitals and to estimate gaps in care in the hours immediately after such an event. A probabilistic model was designed that included the following variables: demand for hospital care; time of arrival at the hospitals; type of medical treatment; reason for hospital admission; and the need for specialized care such as hemodialysis, blood transfusions, and surgical procedures. The values for these variables were obtained through a literature search of the MEDLINE medical bibliography, the Cochrane and SciELO libraries, and Google Scholar for information on earthquakes over magnitude 6.0 on the moment magnitude scale during the last 30 years. If a high-magnitude earthquake were to occur in Lima, it was estimated that between 23,328 and 178,387 injured would go to hospitals, of which between 4,666 and 121,303 would require inpatient care, while between 18,662 and 57,084 could be treated as outpatients. It was estimated that there would be an average of 8,768 cases of crush syndrome and 54,217 cases of other health problems. Enough blood would be required for 8,761 wounded in the first 24 hours. Furthermore, it was expected that there would be a deficit of hospital beds and operating theaters due to the high demand. Sudden and violent disasters, such as earthquakes, represent significant challenges for health systems and services. This study shows the deficit of preparation and capacity to respond to a possible high-magnitude earthquake. The study also showed that there are not enough resources to face mega-disasters, especially in large cities.
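
    A probabilistic model of this kind can be sketched with a small Monte Carlo simulation. The ranges below come from the abstract (23,328-178,387 injured seeking care; the 20-68% admission share is back-calculated from the abstract's inpatient figures), but the chosen distributions are illustrative assumptions, not the authors' parameters.

        import random

        def simulate_hospital_demand(n_draws=10_000, inpatient_share=(0.20, 0.68)):
            # Draw total hospital arrivals and the share needing admission, then
            # summarize the inpatient/outpatient split by its median.
            inpatient, outpatient = [], []
            for _ in range(n_draws):
                injured = random.triangular(23_328, 178_387)       # arrivals at hospitals
                frac = random.uniform(*inpatient_share)            # share requiring admission
                inpatient.append(injured * frac)
                outpatient.append(injured * (1 - frac))
            mid = n_draws // 2
            return sorted(inpatient)[mid], sorted(outpatient)[mid]

        print(simulate_hospital_demand())  # median inpatient and outpatient demand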

  6. Oil supply and demand

    International Nuclear Information System (INIS)

    Rech, O.

    2006-01-01

    The year 2004 saw a change in the oil market paradigm that was confirmed in 2005. Despite a calmer geopolitical context, prices continued to rise vigorously. Driven by world demand, they remain high as a result of the saturation of production and refining capacity. The market is still seeking its new equilibrium. (author)

  7. Oil supply and demand

    Energy Technology Data Exchange (ETDEWEB)

    Rech, O

    2006-07-01

    The year 2004 saw a change in the oil market paradigm that was confirmed in 2005. Despite a calmer geopolitical context, prices continued to rise vigorously. Driven by world demand, they remain high as a result of the saturation of production and refining capacity. The market is still seeking its new equilibrium. (author)

  8. OMNET - high speed data communications for PDP-11 computers

    International Nuclear Information System (INIS)

    Parkman, C.F.; Lee, J.G.

    1979-12-01

    Omnet is a high speed data communications network designed at CERN for PDP-11 computers. It has grown from a link multiplexor system built for a CII 10070 computer into a full multi-point network, to which some fifty computers are now connected. It provides communications facilities for several large experimental installations as well as many smaller systems and has connections to all parts of the CERN site. The transmission protocol is discussed and brief details are given of the hardware and software used in its implementation. Also described is the gateway interface to the CERN packet switching network, 'Cernet'. (orig.)

  9. High-resolution computer-aided moire

    Science.gov (United States)

    Sciammarella, Cesar A.; Bhat, Gopalakrishna K.

    1991-12-01

    This paper presents a high-resolution, computer-assisted moire technique for the measurement of displacements and strains at the microscopic level. The detection of micro-displacements using a moire grid and the problems associated with recovering the displacement field from the sampled values of the grid intensity are discussed. A two-dimensional Fourier transform method for the extraction of displacements from the image of the moire grid is outlined. An example of the application of the technique to the measurement of strains and stresses in the vicinity of the crack tip in a compact tension specimen is given.
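
    The Fourier-transform extraction step works by isolating the grid's carrier sideband in the spectrum, removing the carrier, and reading the remaining phase, which is proportional to the in-plane displacement. The sketch below is a textbook illustration of that procedure (assuming NumPy), not the authors' implementation.

        import numpy as np

        def displacement_from_grid(image, pitch_px):
            # Keep only the positive carrier sideband, invert, remove the carrier,
            # and convert the residual phase to displacement (in pixels).
            F = np.fft.fftshift(np.fft.fft2(image))
            ny, nx = image.shape
            fx = nx // 2 + int(round(nx / pitch_px))      # carrier column in the spectrum
            half = int(round(nx / (2 * pitch_px)))        # half-width of the sideband window
            window = np.zeros_like(F)
            window[:, fx - half:fx + half + 1] = F[:, fx - half:fx + half + 1]
            analytic = np.fft.ifft2(np.fft.ifftshift(window))
            carrier = np.exp(2j * np.pi * np.arange(nx) / pitch_px)[None, :]
            phase = np.unwrap(np.angle(analytic * np.conj(carrier)), axis=1)
            return phase * pitch_px / (2 * np.pi)

        # Synthetic grid (pitch 8 px) with a known uniform 0.5-pixel displacement.
        x = np.arange(256)[None, :].repeat(64, axis=0)
        img = 1 + np.cos(2 * np.pi * (x + 0.5) / 8.0)
        print(displacement_from_grid(img, pitch_px=8.0).mean())  # ~0.5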

  10. High speed computer assisted tomography

    International Nuclear Information System (INIS)

    Maydan, D.; Shepp, L.A.

    1980-01-01

    X-ray generation and detection apparatus for use in a computer assisted tomography system which permits relatively high speed scanning. A large x-ray tube having a circular anode (3) surrounds the patient area. A movable electron gun (8) orbits adjacent to the anode. The anode directs into the patient area x-rays which are delimited into a fan beam by a pair of collimating rings (21). After passing through the patient, x-rays are detected by an array (22) of movable detectors. Detector subarrays (23) are synchronously movable out of the x-ray plane to permit the passage of the fan beam.

  11. High Performance Computing Operations Review Report

    Energy Technology Data Exchange (ETDEWEB)

    Cupps, Kimberly C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2013-12-19

    The High Performance Computing Operations Review (HPCOR) meeting—requested by the ASC and ASCR program headquarters at DOE—was held November 5 and 6, 2013, at the Marriott Hotel in San Francisco, CA. The purpose of the review was to discuss the processes and practices for HPC integration and its related software and facilities. Experiences and lessons learned from the most recent systems deployed were covered in order to benefit the deployment of new systems.

  12. Application of Job Demands-Resources model in research on relationships between job satisfaction, job resources, individual resources and job demands

    Directory of Open Access Journals (Sweden)

    Adrianna Potocka

    2013-04-01

    Full Text Available Background: The aim of this study was to explore the relationships between job demands, job resources, personal resources and job satisfaction, and to assess the usefulness of the Job Demands-Resources (JD-R) model in explaining these phenomena. Materials and Methods: The research was based on a sample of 500 social workers. The "Psychosocial Factors" and "Job Satisfaction" questionnaires were used to test the hypothesis. Results: The results showed that job satisfaction increased with increasing job accessibility and personal resources (r = 0.44; r = 0.31; p < 0.05). The analysis of variance (ANOVA) indicated that job resources and job demands [F(1,474) = 4.004; F(1,474) = 4.166; p < 0.05] were statistically significant sources of variation in job satisfaction. Moreover, interactions between job demands and job resources [F(3,474) = 2.748; p < 0.05], as well as between job demands and personal resources [F(3,474) = 3.021; p < 0.05], had a significant impact on job satisfaction. The post hoc tests showed that (1) under low job demands but high job resources, employees declared higher job satisfaction than those who perceived them as medium (p = 0.0001) or low (p = 0.0157); and (2) when the level of job demands was perceived as medium, employees with high personal resources declared significantly higher job satisfaction than those with low personal resources (p = 0.0001). Conclusion: The JD-R model can be used to investigate job satisfaction. By taking the fundamental factors of this model into account, organizational management can shape job satisfaction among employees. Med Pr 2013;64(2):217–225

  13. FY 1992 Blue Book: Grand Challenges: High Performance Computing and Communications

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — High performance computing and computer communications networks are becoming increasingly important to scientific advancement, economic competition, and national...

  14. High performance computing and communications: FY 1997 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-12-01

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  15. High Energy Physics Exascale Requirements Review. An Office of Science review sponsored jointly by Advanced Scientific Computing Research and High Energy Physics, June 10-12, 2015, Bethesda, Maryland

    Energy Technology Data Exchange (ETDEWEB)

    Habib, Salman [Argonne National Lab. (ANL), Argonne, IL (United States); Roser, Robert [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Gerber, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Antypas, Katie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dart, Eli [Esnet, Berkeley, CA (United States); Dosanjh, Sudip [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hack, James [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Monga, Inder [Esnet, Berkeley, CA (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Riley, Katherine [Argonne National Lab. (ANL), Argonne, IL (United States); Rotman, Lauren [Esnet, Berkeley, CA (United States); Straatsma, Tjerk [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wells, Jack [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Williams, Tim [Argonne National Lab. (ANL), Argonne, IL (United States); Almgren, A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Amundson, J. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Bailey, Stephen [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bard, Deborah [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bloom, Ken [Univ. of Nebraska, Lincoln, NE (United States); Bockelman, Brian [Univ. of Nebraska, Lincoln, NE (United States); Borgland, Anders [SLAC National Accelerator Lab., Menlo Park, CA (United States); Borrill, Julian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Boughezal, Radja [Argonne National Lab. (ANL), Argonne, IL (United States); Brower, Richard [Boston Univ., MA (United States); Cowan, Benjamin [SLAC National Accelerator Lab., Menlo Park, CA (United States); Finkel, Hal [Argonne National Lab. (ANL), Argonne, IL (United States); Frontiere, Nicholas [Argonne National Lab. (ANL), Argonne, IL (United States); Fuess, Stuart [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Ge, Lixin [SLAC National Accelerator Lab., Menlo Park, CA (United States); Gnedin, Nick [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Gottlieb, Steven [Indiana Univ., Bloomington, IN (United States); Gutsche, Oliver [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Han, T. [Indiana Univ., Bloomington, IN (United States); Heitmann, Katrin [Argonne National Lab. (ANL), Argonne, IL (United States); Hoeche, Stefan [SLAC National Accelerator Lab., Menlo Park, CA (United States); Ko, Kwok [SLAC National Accelerator Lab., Menlo Park, CA (United States); Kononenko, Oleksiy [SLAC National Accelerator Lab., Menlo Park, CA (United States); LeCompte, Thomas [Argonne National Lab. (ANL), Argonne, IL (United States); Li, Zheng [SLAC National Accelerator Lab., Menlo Park, CA (United States); Lukic, Zarija [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mori, Warren [Univ. of California, Los Angeles, CA (United States); Ng, Cho-Kuen [SLAC National Accelerator Lab., Menlo Park, CA (United States); Nugent, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Oleynik, Gene [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); O’Shea, Brian [Michigan State Univ., East Lansing, MI (United States); Padmanabhan, Nikhil [Yale Univ., New Haven, CT (United States); Petravick, Donald [Univ. of Illinois, Urbana, IL (United States). 
National Center for Supercomputing Applications; Petriello, Frank J. [Argonne National Lab. (ANL), Argonne, IL (United States); Pope, Adrian [Argonne National Lab. (ANL), Argonne, IL (United States); Power, John [Argonne National Lab. (ANL), Argonne, IL (United States); Qiang, Ji [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Reina, Laura [Florida State Univ., Tallahassee, FL (United States); Rizzo, Thomas Gerard [SLAC National Accelerator Lab., Menlo Park, CA (United States); Ryne, Robert [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Schram, Malachi [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Spentzouris, P. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Toussaint, Doug [Univ. of Arizona, Tucson, AZ (United States); Vay, Jean Luc [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Viren, B. [Brookhaven National Lab. (BNL), Upton, NY (United States); Wuerthwein, Frank [Univ. of California, San Diego, CA (United States); Xiao, Liling [SLAC National Accelerator Lab., Menlo Park, CA (United States); Coffey, Richard [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-11-29

    The U.S. Department of Energy (DOE) Office of Science (SC) Offices of High Energy Physics (HEP) and Advanced Scientific Computing Research (ASCR) convened a programmatic Exascale Requirements Review on June 10–12, 2015, in Bethesda, Maryland. This report summarizes the findings, results, and recommendations derived from that meeting. The high-level findings and observations are as follows. Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude greater than that available currently, and in some cases more. The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. Data rates and volumes from experimental facilities are also straining the current HEP infrastructure in its ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. A close integration of high-performance computing (HPC) simulation and data analysis will greatly aid in interpreting the results of HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. Long-range planning between HEP and ASCR will be required to meet HEP’s research needs. To best use ASCR HPC resources, the experimental HEP program needs (1) an established, long-term plan for access to ASCR computational and data resources, (2) the ability to map workflows to HPC resources, (3) the ability for ASCR facilities to accommodate workflows run by collaborations potentially comprising thousands of individual members, (4) to transition codes to the next-generation HPC platforms that will be available at ASCR

  16. High performance computing in science and engineering '09: transactions of the High Performance Computing Center, Stuttgart (HLRS) 2009

    National Research Council Canada - National Science Library

    Nagel, Wolfgang E; Kröner, Dietmar; Resch, Michael

    2010-01-01

    ...), NIC/JSC (Jülich), and LRZ (Munich). As part of that strategic initiative, in May 2009 NIC/JSC already installed the first phase of the GCS HPC Tier-0 resources, an IBM Blue Gene/P with roughly 300,000 cores, this time in Jülich. With that, the GCS provides the most powerful high-performance computing infrastructure in Europe alread...

  17. Matching Behavior as a Tradeoff Between Reward Maximization and Demands on Neural Computation [version 2; referees: 2 approved

    Directory of Open Access Journals (Sweden)

    Jan Kubanek

    2015-10-01

    Full Text Available When faced with a choice, humans and animals commonly distribute their behavior in proportion to the frequency of payoff of each option. Such behavior is referred to as matching and has been captured by the matching law. However, matching is not a general law of economic choice. Matching in its strict sense seems to be specifically observed in tasks whose properties make matching an optimal or a near-optimal strategy. We engaged monkeys in a foraging task in which matching was not the optimal strategy. Over-matching the proportions of the mean offered reward magnitudes would yield more reward than matching, yet, surprisingly, the animals almost exactly matched them. To gain insight into this phenomenon, we modeled the animals' decision-making using a mechanistic model. The model accounted for the animals' macroscopic and microscopic choice behavior. When the model's three parameters were not constrained to mimic the monkeys' behavior, the model over-matched the reward proportions and, in doing so, harvested substantially more reward than the monkeys. This optimized model revealed a marked bottleneck in the monkeys' choice function that compares the value of the two options. The model featured a very steep value comparison function relative to that of the monkeys. The steepness of the value comparison function had a profound effect on the earned reward and on the level of matching. We implemented this value comparison function through responses of simulated biological neurons. We found that due to the presence of neural noise, steepening the value comparison requires an exponential increase in the number of value-coding neurons. Matching may be a compromise between harvesting satisfactory reward and the high demands placed by neural noise on optimal neural computation.
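
    For reference, the matching law mentioned above, and the generalized form that distinguishes over- and under-matching, can be written as follows (standard textbook formulation, added here for context rather than quoted from the paper):

        \frac{B_1}{B_1 + B_2} = \frac{R_1}{R_1 + R_2}, \qquad \frac{B_1}{B_2} = b\left(\frac{R_1}{R_2}\right)^{s}

    where B_i is the behavior allocated to option i, R_i the reinforcement obtained from it, b a bias term, and s a sensitivity exponent; s > 1 corresponds to over-matching and s < 1 to under-matching.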

  18. Component-based software for high-performance scientific computing

    Energy Technology Data Exchange (ETDEWEB)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  19. Component-based software for high-performance scientific computing

    International Nuclear Information System (INIS)

    Alexeev, Yuri; Allan, Benjamin A; Armstrong, Robert C; Bernholdt, David E; Dahlgren, Tamara L; Gannon, Dennis; Janssen, Curtis L; Kenny, Joseph P; Krishnan, Manojkumar; Kohl, James A; Kumfert, Gary; McInnes, Lois Curfman; Nieplocha, Jarek; Parker, Steven G; Rasmussen, Craig; Windus, Theresa L

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly

  20. Symbolic computation and its application to high energy physics

    International Nuclear Information System (INIS)

    Hearn, A.C.

    1981-01-01

    It is clear that we are in the middle of an electronic revolution whose effect will be as profound as the industrial revolution. The continuing advances in computing technology will provide us with devices which will make present day computers appear primitive. In this environment, the algebraic and other non-numerical capabilities of such devices will become increasingly important. These lectures will review the present state of the field of algebraic computation and its potential for problem solving in high energy physics and related areas. We shall begin with a brief description of the available systems and examine the data objects which they consider. As an example of the facilities which these systems can offer, we shall then consider the problem of analytic integration, since this is so fundamental to many of the calculational techniques used by high energy physicists. Finally, we shall study the implications which the current developments in hardware technology hold for scientific problem solving. (orig.)
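
    As a small illustration of the analytic integration capability discussed above, a modern open-source computer algebra system such as SymPy (used here as an assumed stand-in; the lecture's own systems are not named in this record) evaluates a typical exponentially damped integral in closed form:

        import sympy as sp

        x, a = sp.symbols("x a", positive=True)

        # Integrate x^2 * exp(-a*x) over [0, oo), the kind of integral that appears
        # repeatedly in phase-space and loop calculations.
        result = sp.integrate(x**2 * sp.exp(-a * x), (x, 0, sp.oo))
        print(result)                            # 2/a**3
        print(sp.simplify(result.subs(a, 1)))    # 2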

  1. Too easy? The influence of task demands conveyed tacitly on prospective memory

    Science.gov (United States)

    Lourenço, Joana S.; Hill, Johnathan H.; Maylor, Elizabeth A.

    2015-01-01

    Previous research suggests that when intentions are encoded, participants establish an attention allocation policy based on their metacognitive beliefs about how demanding it will be to fulfill the prospective memory (PM) task. We investigated whether tacit PM demands can influence judgments about the cognitive effort required for success, and, as a result, affect ongoing task interference and PM performance. Participants performed a lexical decision task in which a PM task of responding to animal words was embedded. PM demands were tacitly manipulated by presenting participants with either typical or atypical animal exemplars at both instructions and practice (low vs. high tacit demands, respectively). Crucially, objective PM task demands were the same for all participants as PM targets were always atypical animals. Tacit demands affected participants’ attention allocation policies such that task interference was greater for the high than low demands condition. Also, PM performance was reduced in the low relative to the high demands condition. Participants in the low demands condition who succeeded to the first target showed a subsequent increase in task interference, suggesting adjustment to the higher than expected demands. This study demonstrates that tacit information regarding the PM task can affect ongoing task processing as well as harm PM performance when actual demands are higher than expected. Furthermore, in line with the proposal that attention allocation is a dynamic and flexible process, we found evidence that PM task experience can trigger changes in ongoing task interference. PMID:25983687

  2. Octopus: LLL's computing utility

    International Nuclear Information System (INIS)

    Anon.

    1978-01-01

    The Laboratory's Octopus network constitutes one of the greatest concentrations of computing power in the world. This power derives from the network's organization as well as from the size and capability of its computers, storage media, input/output devices, and communication channels. Being in a network enables these facilities to work together to form a unified computing utility that is accessible on demand directly from the users' offices. This computing utility has made a major contribution to the pace of research and development at the Laboratory; an adequate rate of progress in research could not be achieved without it. 4 figures

  3. Employment consequences of depressive symptoms and work demands individually and combined

    DEFF Research Database (Denmark)

    Thielen, Karsten; Nygaard, Else; Andersen, Ingelise

    2013-01-01

    BACKGROUND: Denmark, like other Western countries, has recently been burdened by increasingly high social spending on the employment consequences of ill mental health. This might be the result of high work demands affecting persons with ill mental health. Therefore, this study assesses to what extent depressive symptoms and high work demands, individually and combined, have an effect on employment consequences. METHODS: We conducted a population-based 7-year longitudinal follow-up study with baseline information from the year 2000 on socio-demographics, lifestyle, depressive symptoms and work demands. ... Persons with depressive symptoms might have an increased risk of negative employment consequences irrespective of the kind and amount of work demands. This might be an effect of the level of work ability in general, as well as partly the result of health selection and co-morbidity.

  4. An Experimental QoE Performance Study for the Efficient Transmission of High Demanding Traffic over an Ad Hoc Network Using BATMAN

    Directory of Open Access Journals (Sweden)

    Ramon Sanchez-Iborra

    2015-01-01

    Full Text Available Multimedia communications are attracting great attention from the research, industry, and end-user communities. The latter are increasingly demanding higher levels of quality and the possibility of consuming multimedia content from the plethora of devices at their disposal. Clearly, the most appealing gadgets are those that communicate wirelessly to access these services. However, current wireless technologies raise serious concerns about supporting extremely demanding services such as real-time multimedia transmissions. This paper evaluates, from QoE and QoS perspectives, the capability of the ad hoc routing protocol BATMAN to support Voice over IP and video traffic. To this end, two test-benches were proposed, namely a real (emulated) testbed and a simulation framework. Additionally, a series of modifications to both protocols’ parameter settings and to the video-stream characteristics was proposed, which further improves the multimedia quality perceived by users. The performance of the widely deployed OLSR protocol is also evaluated in detail to compare it with BATMAN. From the results, a notably high correlation between real experimentation and computer simulation outcomes was observed. It was also found that, with the proper configuration, BATMAN is able to transmit several QCIF video-streams and VoIP calls with high quality. In addition, BATMAN outperforms OLSR in supporting multimedia traffic in both experimental and simulated environments.

  5. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    Science.gov (United States)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two part model framework characterizes both the demand using a probability distribution for each type of service request as well as enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
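
    The record above describes demand characterized by per-request probability distributions and constrained computing resources. As a rough, self-contained illustration of that kind of discrete-event model (not the authors' simulation; the arrival rate, service rate and server counts below are invented), a minimal Python sketch:

```python
import heapq
import random

def simulate(num_requests=10_000, arrival_rate=8.0, service_rate=1.0,
             servers=10, seed=1):
    """Crude M/M/c-style discrete-event simulation of on-demand resources.

    Requests arrive as a Poisson process and each occupies one of `servers`
    identical resources for an exponentially distributed service time.
    Returns the mean time a request waits for a free resource.
    """
    rng = random.Random(seed)
    free_at = [0.0] * servers                    # earliest time each resource is free
    heapq.heapify(free_at)
    clock, total_wait = 0.0, 0.0
    for _ in range(num_requests):
        clock += rng.expovariate(arrival_rate)   # next arrival
        service = rng.expovariate(service_rate)  # work demanded by this request
        earliest = heapq.heappop(free_at)        # soonest-available resource
        start = max(clock, earliest)
        total_wait += start - clock
        heapq.heappush(free_at, start + service)
    return total_wait / num_requests

if __name__ == "__main__":
    for n in (9, 10, 12, 16):
        print(f"{n:2d} servers -> mean wait {simulate(servers=n):.3f}")
```

    Sweeping the server count in this way is the simulation counterpart of the provisioning question the record discusses: how much capacity keeps waiting times, and therefore mission effectiveness, acceptable.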

  6. A coordinate descent MM algorithm for fast computation of sparse logistic PCA

    KAUST Repository

    Lee, Seokho

    2013-06-01

    Sparse logistic principal component analysis was proposed in Lee et al. (2010) for exploratory analysis of binary data. Relying on the joint estimation of multiple principal components, the algorithm therein is computationally too demanding to be useful when the data dimension is high. We develop a computationally fast algorithm using a combination of coordinate descent and majorization-minimization (MM) auxiliary optimization. Our new algorithm decouples the joint estimation of multiple components into separate estimations and consists of closed-form elementwise updating formulas for each sparse principal component. The performance of the proposed algorithm is tested using simulation and high-dimensional real-world datasets. © 2013 Elsevier B.V. All rights reserved.
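
    The following sketch is not the algorithm of Lee et al.; it is a simplified, single-component illustration of the two ingredients named in the record: a quadratic majorization of the Bernoulli log-likelihood (curvature bounded by 1/4) and closed-form elementwise updates with soft-thresholding for a sparse loading vector. The model, penalty weight and test data are assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def sparse_logistic_pca_1comp(X, lam=0.05, n_iter=200, seed=0):
    """Toy one-component sparse logistic PCA via MM + elementwise updates.

    X is an n x p binary (0/1) matrix. The logit of P(X_ij = 1) is modelled as
    a_i * b_j; the Bernoulli deviance is majorized by a quadratic (curvature
    <= 1/4), so each step is a penalized least-squares problem whose solution
    is elementwise, with soft-thresholding on the loadings b.
    """
    rng = np.random.default_rng(seed)
    n, p = X.shape
    Q = 2.0 * X - 1.0                       # {0,1} -> {-1,+1}
    a = rng.standard_normal(n)
    a /= np.linalg.norm(a)
    b = rng.standard_normal(p) * 0.01

    def working_response(a, b):
        theta = np.clip(np.outer(a, b), -30.0, 30.0)
        # z = theta + 4 * q * sigmoid(-q * theta)
        return theta + 4.0 * Q / (1.0 + np.exp(Q * theta))

    for _ in range(n_iter):
        Z = working_response(a, b)                       # majorize at current fit
        b = soft_threshold(Z.T @ a, 4.0 * lam) / (a @ a)  # sparse loading update
        Z = working_response(a, b)
        denom = b @ b
        if denom > 0:
            a = Z @ b / denom                            # score update (no penalty)
            a /= max(np.linalg.norm(a), 1e-12)           # fix scale ambiguity (heuristic)
    return a, b

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    true_b = np.r_[np.ones(5), np.zeros(15)]
    true_a = rng.standard_normal(200)
    P = 1.0 / (1.0 + np.exp(-2.0 * np.outer(true_a, true_b)))
    X = (rng.random(P.shape) < P).astype(float)
    a_hat, b_hat = sparse_logistic_pca_1comp(X)
    print("estimated loadings:", np.round(b_hat, 2))   # nonzero only on first 5 columns
```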

  7. The Future of Cloud Computing

    Directory of Open Access Journals (Sweden)

    Anamaroa Siclovan

    2011-12-01

    Full Text Available Cloud computing is, and will continue to be, a new way of providing Internet services and computing. This approach builds on many existing technologies and services, such as the Internet, grid computing, and Web services. As a system, cloud computing aims to provide on-demand services that are more acceptable in terms of price and infrastructure. It is precisely the transition from the computer as a product to computing offered to consumers as a service delivered online. This represents an advantage for organizations with regard to both cost and opportunities for new business. This paper presents future perspectives on cloud computing and discusses some issues of the cloud computing paradigm. It is a theoretical paper. Keywords: Cloud Computing, Pay-per-use

  8. Efficient Customer Selection for Sustainable Demand Response in Smart Grids

    Energy Technology Data Exchange (ETDEWEB)

    Zois, Vasileios; Frincu, Marc; Chelmis, Charalambos; Saeed, Muhammad Rizwan; Prasanna, Viktor K.

    2014-11-03

    Regulating power consumption to avoid peaks in demand is a common practice. Demand Response (DR) is being used by utility providers to minimize costs or ensure system reliability. Although it has been used extensively, there is a shortage of solutions dealing with dynamic DR. Past attempts focus on minimizing the load demand without considering the sustainability of the reduced energy. In this paper an efficient algorithm is presented which solves the problem of dynamic DR scheduling. Data from the USC campus microgrid were used to evaluate the efficiency as well as the robustness of the proposed solution. The targeted energy reduction is achieved with a maximum average approximation error of ≈ 0.7%. Sustainability of the reduced energy is achieved with respect to the optimal available solution, with a maximum average error of less than 0.6%. It is also shown that a solution is provided at low computational cost, fulfilling the requirements of dynamic DR.
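
    The paper's scheduling algorithm is not reproduced here; the sketch below only illustrates the flavor of the customer-selection step in dynamic DR, namely picking a subset of customers whose predicted, sustainable curtailments approximate a targeted reduction, using an invented greedy heuristic and made-up numbers.

```python
def select_customers(curtailments, target_kw):
    """Greedy selection of customers for a demand-response event.

    `curtailments` maps customer id -> predicted sustainable load reduction (kW).
    Customers are added in order of decreasing reduction until the target is met;
    the last pick is swapped for a smaller one if that reduces overshoot.
    This is a simple heuristic, not the algorithm of the paper above.
    """
    ordered = sorted(curtailments.items(), key=lambda kv: -kv[1])
    chosen, achieved = [], 0.0
    for cid, kw in ordered:
        if achieved >= target_kw:
            break
        chosen.append(cid)
        achieved += kw
    # try to trim overshoot by replacing the last customer with a smaller one
    if chosen and achieved > target_kw:
        shortfall = target_kw - (achieved - curtailments[chosen[-1]])
        candidates = [c for c, kw in curtailments.items()
                      if c not in chosen
                      and shortfall <= kw < curtailments[chosen[-1]]]
        if candidates:
            swap = min(candidates, key=lambda c: curtailments[c])
            achieved += curtailments[swap] - curtailments[chosen[-1]]
            chosen[-1] = swap
    return chosen, achieved

if __name__ == "__main__":
    predicted = {"bldg_A": 42.0, "bldg_B": 30.0, "bldg_C": 18.5,
                 "bldg_D": 12.0, "bldg_E": 7.5}
    ids, kw = select_customers(predicted, target_kw=55.0)
    print(ids, kw)   # -> ['bldg_A', 'bldg_C'] 60.5
```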

  9. Simulation-based Strategies for Smart Demand Response

    Directory of Open Access Journals (Sweden)

    Ines Leobner

    2018-03-01

    Full Text Available Demand Response can be seen as one effective way to harmonize demand and supply in order to achieve high self-coverage of energy consumption by means of renewable energy sources. This paper presents two different simulation-based concepts for integrating demand-response strategies into energy management systems in the customer domain of the Smart Grid. The first approach is a Model Predictive Control of the heating and cooling system of a low-energy office building. The second concept aims at industrial Demand Side Management by integrating energy use optimization into industrial automation systems. Both approaches are targeted at day-ahead planning. Furthermore, insights into the implications of these concepts for the design of the models, simulation and optimization are discussed. While both approaches share a similar architecture, different modelling and simulation approaches were required by the use cases.
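
    As a toy counterpart to the day-ahead planning idea (not the paper's MPC formulation), the linear program below schedules one flexible load against a fixed base profile so that the building's peak demand is minimized; the load shape, energy requirement and power limit are invented, and scipy is assumed available.

```python
import numpy as np
from scipy.optimize import linprog

# Day-ahead plan for one flexible load (e.g. thermal storage charging): choose
# hourly consumption x_t that delivers the required energy while minimizing the
# peak of base_t + x_t. All numbers are invented for illustration.
T = 24
base = 40 + 25 * np.exp(-((np.arange(T) - 14) ** 2) / 18.0)   # kW, afternoon peak
energy_needed = 120.0                                          # kWh to deliver
x_max = 20.0                                                   # kW charger limit

c = np.r_[np.zeros(T), 1.0]                        # minimize auxiliary peak variable P
A_ub = np.hstack([np.eye(T), -np.ones((T, 1))])    # base_t + x_t <= P
b_ub = -base
A_eq = np.r_[np.ones(T), 0.0].reshape(1, -1)       # sum_t x_t = energy_needed
b_eq = [energy_needed]
bounds = [(0, x_max)] * T + [(0, None)]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
assert res.success

naive = base.copy()
naive[12:18] += x_max            # charge flat-out over the afternoon: 6 h * 20 kW
print("naive afternoon-charging peak:", round(naive.max(), 1), "kW")
print("optimized peak:", round(res.x[-1], 1), "kW")   # fills valleys, never raises the peak
print("hourly schedule (kW):", np.round(res.x[:T], 1))
```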

  10. Computer simulation of high resolution transmission electron micrographs: theory and analysis

    International Nuclear Information System (INIS)

    Kilaas, R.

    1985-03-01

    Computer simulation of electron micrographs is an invaluable aid in their proper interpretation and in defining optimum conditions for obtaining images experimentally. Since modern instruments are capable of atomic resolution, simulation techniques employing high precision are required. This thesis makes contributions to four specific areas of this field. First, the validity of a new method for simulating high resolution electron microscope images has been critically examined. Second, three different methods for computing scattering amplitudes in High Resolution Transmission Electron Microscopy (HRTEM) have been investigated as to their ability to include upper Laue layer (ULL) interaction. Third, a new method for computing scattering amplitudes in high resolution transmission electron microscopy has been examined. Fourth, the effect of a surface layer of amorphous silicon dioxide on images of crystalline silicon has been investigated for a range of crystal thicknesses varying from zero to 2 1/2 times that of the surface layer

  11. Regional energy demand and adaptations to climate change: Methodology and application to the state of Maryland, USA

    International Nuclear Information System (INIS)

    Ruth, Matthias; Lin, A.-C.

    2006-01-01

    This paper explores potential impacts of climate change on natural gas, electricity and heating oil use by the residential and commercial sectors in the state of Maryland, USA. Time series analysis is used to quantify historical temperature-energy demand relationships. A dynamic computer model uses those relationships to simulate future energy demand under a range of energy prices, temperatures and other drivers. The results indicate that climate exerts a comparably small signal on future energy demand, but that the combined climate and non-climate-induced changes in energy demand may pose significant challenges to policy and investment decisions in the state

  12. Regional energy demand and adaptations to climate change: Methodology and application to the state of Maryland, USA

    Energy Technology Data Exchange (ETDEWEB)

    Ruth, Matthias [Environmental Policy Program, School of Public Policy, 3139 Van Munching Hall, College Park, MD 20782 (United States)]. E-mail: mruth1@umd.edu; Lin, A.-C. [Environmental Policy Program, School of Public Policy, 3139 Van Munching Hall, College Park, MD 20782 (United States)

    2006-11-15

    This paper explores potential impacts of climate change on natural gas, electricity and heating oil use by the residential and commercial sectors in the state of Maryland, USA. Time series analysis is used to quantify historical temperature-energy demand relationships. A dynamic computer model uses those relationships to simulate future energy demand under a range of energy prices, temperatures and other drivers. The results indicate that climate exerts a comparably small signal on future energy demand, but that the combined climate and non-climate-induced changes in energy demand may pose significant challenges to policy and investment decisions in the state.
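
    A minimal sketch of the regression step described in this record (and the preceding duplicate entry): demand regressed on heating and cooling degree-days, then reused in a what-if scenario. The data below are synthetic, not the Maryland series used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
mean_temp_c = rng.uniform(-2, 30, size=120)                  # 10 years of monthly means
hdd = np.maximum(18.0 - mean_temp_c, 0.0) * 30               # rough monthly heating degree-days
cdd = np.maximum(mean_temp_c - 18.0, 0.0) * 30               # rough monthly cooling degree-days
demand = 500 + 0.8 * hdd + 1.5 * cdd + rng.normal(0, 40, 120)   # synthetic demand (GWh, say)

X = np.column_stack([np.ones_like(hdd), hdd, cdd])
(base, per_hdd, per_cdd), *_ = np.linalg.lstsq(X, demand, rcond=None)
print(f"base load {base:.0f}, +{per_hdd:.2f} per HDD, +{per_cdd:.2f} per CDD")

# The fitted coefficients can then drive a what-if scenario, e.g. +2 C warmer months.
warmer = mean_temp_c + 2.0
hdd2 = np.maximum(18.0 - warmer, 0.0) * 30
cdd2 = np.maximum(warmer - 18.0, 0.0) * 30
future = base + per_hdd * hdd2 + per_cdd * cdd2
print(f"mean monthly demand: {demand.mean():.0f} now, {future.mean():.0f} in the scenario")
```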

  13. Oil supply and demand

    International Nuclear Information System (INIS)

    Babusiaux, D.

    2004-01-01

    Following the military intervention in Iraq, it is taking longer than expected for Iraqi exports to make a comeback on the market. Demand is sustained by economic growth in China and in the United States. OPEC is modulating production to prevent inventory build-up. Prices have stayed high despite increased production by non-OPEC countries, especially Russia. (author)

  14. Oil supply and demand

    Energy Technology Data Exchange (ETDEWEB)

    Babusiaux, D

    2004-07-01

    Following the military intervention in Iraq, it is taking longer than expected for Iraqi exports to make a comeback on the market. Demand is sustained by economic growth in China and in the United States. OPEC is modulating production to prevent inventory build-up. Prices have stayed high despite increased production by non-OPEC countries, especially Russia. (author)

  15. Implications of Ubiquitous Computing for the Social Studies Curriculum

    Science.gov (United States)

    van Hover, Stephanie D.; Berson, Michael J.; Bolick, Cheryl Mason; Swan, Kathleen Owings

    2004-01-01

    In March 2002, members of the National Technology Leadership Initiative (NTLI) met in Charlottesville, Virginia to discuss the potential effects of ubiquitous computing on the field of education. Ubiquitous computing, or "on-demand availability of task-necessary computing power," involves providing every student with a handheld computer--a…

  16. A high performance scientific cloud computing environment for materials simulations

    Science.gov (United States)

    Jorissen, K.; Vila, F. D.; Rehr, J. J.

    2012-09-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditional HPC resources for materials science or quantum chemistry applications.

  17. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    International Nuclear Information System (INIS)

    Khaleel, Mohammad A.

    2009-01-01

    This report is an account of the deliberations and conclusions of the workshop on 'Forefront Questions in Nuclear Science and the Role of High Performance Computing' held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to (1) identify forefront scientific challenges in nuclear physics and then determine which-if any-of these could be aided by high performance computing at the extreme scale; (2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; (3) provide nuclear physicists the opportunity to influence the development of high performance computing; and (4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  18. Scientific Grand Challenges: Forefront Questions in Nuclear Science and the Role of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Khaleel, Mohammad A.

    2009-10-01

    This report is an account of the deliberations and conclusions of the workshop on "Forefront Questions in Nuclear Science and the Role of High Performance Computing" held January 26-28, 2009, co-sponsored by the U.S. Department of Energy (DOE) Office of Nuclear Physics (ONP) and the DOE Office of Advanced Scientific Computing (ASCR). Representatives from the national and international nuclear physics communities, as well as from the high performance computing community, participated. The purpose of this workshop was to 1) identify forefront scientific challenges in nuclear physics and then determine which-if any-of these could be aided by high performance computing at the extreme scale; 2) establish how and why new high performance computing capabilities could address issues at the frontiers of nuclear science; 3) provide nuclear physicists the opportunity to influence the development of high performance computing; and 4) provide the nuclear physics community with plans for development of future high performance computing capability by DOE ASCR.

  19. The contribution of high-performance computing and modelling for industrial development

    CSIR Research Space (South Africa)

    Sithole, Happy

    2017-10-01

    Full Text Available High-Performance Computing and Modelling for Industrial Development, Dr Happy Sithole and Dr Onno Ubbink. Strategic context: high-performance computing (HPC) combined with machine learning and artificial intelligence present opportunities to non...

  20. Coordination of Energy Efficiency and Demand Response

    Energy Technology Data Exchange (ETDEWEB)

    Goldman, Charles; Reid, Michael; Levy, Roger; Silverstein, Alison

    2010-01-29

    This paper reviews the relationship between energy efficiency and demand response and discusses approaches and barriers to coordinating energy efficiency and demand response. The paper is intended to support the 10 implementation goals of the National Action Plan for Energy Efficiency's Vision to achieve all cost-effective energy efficiency by 2025. Improving energy efficiency in our homes, businesses, schools, governments, and industries - which consume more than 70 percent of the nation's natural gas and electricity - is one of the most constructive, cost-effective ways to address the challenges of high energy prices, energy security and independence, air pollution, and global climate change. While energy efficiency is an increasingly prominent component of efforts to supply affordable, reliable, secure, and clean electric power, demand response is becoming a valuable tool in utility and regional resource plans. The Federal Energy Regulatory Commission (FERC) estimated the contribution from existing U.S. demand response resources at about 41,000 megawatts (MW), about 5.8 percent of 2008 summer peak demand (FERC, 2008). Moreover, FERC recently estimated nationwide achievable demand response potential at 138,000 MW (14 percent of peak demand) by 2019 (FERC, 2009). A recent Electric Power Research Institute study estimates that 'the combination of demand response and energy efficiency programs has the potential to reduce non-coincident summer peak demand by 157 GW' by 2030, or 14-20 percent below projected levels (EPRI, 2009a). This paper supports the Action Plan's effort to coordinate energy efficiency and demand response programs to maximize value to customers. For information on the full suite of policy and programmatic options for removing barriers to energy efficiency, see the Vision for 2025 and the various other Action Plan papers and guides available at www.epa.gov/eeactionplan.

  1. Strategies for Demand Response in Commercial Buildings

    Energy Technology Data Exchange (ETDEWEB)

    Watson, David S.; Kiliccote, Sila; Motegi, Naoya; Piette, Mary Ann

    2006-06-20

    This paper describes strategies that can be used in commercial buildings to temporarily reduce electric load in response to electric grid emergencies in which supplies are limited or in response to high prices that would be incurred if these strategies were not employed. The demand response strategies discussed herein are based on the results of three years of automated demand response field tests in which 28 commercial facilities with an occupied area totaling over 11 million ft² were tested. Although the demand response events in the field tests were initiated remotely and performed automatically, the strategies used could also be initiated by on-site building operators and performed manually, if desired. While energy efficiency measures can be used during normal building operations, demand response measures are transient; they are employed to produce a temporary reduction in demand. Demand response strategies achieve reductions in electric demand by temporarily reducing the level of service in facilities. Heating, ventilating and air conditioning (HVAC) and lighting are the systems most commonly adjusted for demand response in commercial buildings. The goal of demand response strategies is to meet the electric shed savings targets while minimizing any negative impacts on the occupants of the buildings or the processes that they perform. Occupant complaints were minimal in the field tests. In some cases, "reductions" in service level actually improved occupant comfort or productivity. In other cases, permanent improvements in efficiency were discovered through the planning and implementation of "temporary" demand response strategies. The DR strategies that are available to a given facility are based on factors such as the type of HVAC, lighting and energy management and control systems (EMCS) installed at the site.

  2. High performance computing and communications: Advancing the frontiers of information technology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

  3. The Geospatial Data Cloud: An Implementation of Applying Cloud Computing in Geosciences

    Directory of Open Access Journals (Sweden)

    Xuezhi Wang

    2014-11-01

    Full Text Available The rapid growth in the volume of remote sensing data and its increasing computational requirements bring huge challenges for researchers as traditional systems cannot adequately satisfy the huge demand for service. Cloud computing has the advantage of high scalability and reliability, which can provide firm technical support. This paper proposes a highly scalable geospatial cloud platform named the Geospatial Data Cloud, which is constructed based on cloud computing. The architecture of the platform is first introduced, and then two subsystems, the cloud-based data management platform and the cloud-based data processing platform, are described.  ––– This paper was presented at the First Scientific Data Conference on Scientific Research, Big Data, and Data Science, organized by CODATA-China and held in Beijing on 24-25 February, 2014.

  4. Greenhouse gas emissions from high demand, natural gas-intensive energy scenarios

    International Nuclear Information System (INIS)

    Victor, D.G.

    1990-01-01

    Since coal and oil emit 70% and 30% more CO2 per unit of energy than natural gas (methane), fuel switching to natural gas is an obvious pathway to lower CO2 emissions and reduced theorized greenhouse warming. However, methane is itself a strong greenhouse gas, so the CO2 advantages of natural gas may be offset by leaks in the natural gas recovery and supply system. Simple models of atmospheric CO2 and methane are used to test this hypothesis for several natural gas-intensive energy scenarios, including the work of Ausubel et al. (1988). It is found that the methane leaks are significant and may increase the total 'greenhouse effect' from natural gas-intensive energy scenarios by 10%. Furthermore, because methane is short-lived in the atmosphere, leaking methane from natural gas-intensive, high energy growth scenarios effectively recharges the concentration of atmospheric methane continuously. For such scenarios, the problem of methane leaks is even more serious. A second objective is to explore some high demand scenarios that describe the role of methane leaks in the greenhouse tradeoff between gas and coal as energy sources. It is found that the uncertainty in the methane leaks from the natural gas system is large enough to consume the CO2 advantages from using natural gas instead of coal for 20% of the market share. (author)
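
    The trade-off described above can be checked with back-of-the-envelope numbers. The emission factors, methane energy content and 100-year GWP below are round-figure assumptions for illustration, not the values used in the paper.

```python
CO2_GAS = 56.0      # kg CO2 per GJ of natural gas burned (approximate)
CO2_COAL = 95.0     # kg CO2 per GJ of coal burned (approximate, ~70% above gas)
GWP_CH4 = 28.0      # kg CO2-equivalent per kg CH4, 100-year horizon (approximate)
CH4_PER_GJ = 1000.0 / 50.0   # ~20 kg of methane per GJ delivered (LHV ~50 MJ/kg)

def gas_co2e_per_gj(leak_fraction):
    """CO2-equivalent per GJ delivered when `leak_fraction` of produced gas leaks."""
    leaked_kg = CH4_PER_GJ * leak_fraction / (1.0 - leak_fraction)
    return CO2_GAS + leaked_kg * GWP_CH4

for leak in (0.0, 0.01, 0.03, 0.05, 0.07):
    print(f"leak {leak:4.0%}: gas ~{gas_co2e_per_gj(leak):5.1f} kg CO2e/GJ "
          f"(coal ~{CO2_COAL:.0f})")
```

    With these assumed numbers the gas advantage over coal disappears at a leak rate of roughly 6-7%, which is the kind of sensitivity the scenarios above explore.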

  5. High-efficiency photorealistic computer-generated holograms based on the backward ray-tracing technique

    Science.gov (United States)

    Wang, Yuan; Chen, Zhidong; Sang, Xinzhu; Li, Hui; Zhao, Linmin

    2018-03-01

    Holographic displays can provide the complete optical wave field of a three-dimensional (3D) scene, including the depth perception. However, it often takes a long computation time to produce traditional computer-generated holograms (CGHs) without more complex and photorealistic rendering. The backward ray-tracing technique is able to render photorealistic high-quality images, which noticeably reduce the computation time achieved from the high-degree parallelism. Here, a high-efficiency photorealistic computer-generated hologram method is presented based on the ray-tracing technique. Rays are parallelly launched and traced under different illuminations and circumstances. Experimental results demonstrate the effectiveness of the proposed method. Compared with the traditional point cloud CGH, the computation time is decreased to 24 s to reconstruct a 3D object of 100 ×100 rays with continuous depth change.

  6. A data acquisition computer for high energy physics applications DAFNE:- hardware manual

    International Nuclear Information System (INIS)

    Barlow, J.; Seller, P.; De-An, W.

    1983-07-01

    A high performance stand alone computer system based on the Motorola 68000 micro processor has been built at the Rutherford Appleton Laboratory. Although the design was strongly influenced by the requirement to provide a compact data acquisition computer for the high energy physics environment, the system is sufficiently general to find applications in a wider area. It provides colour graphics and tape and disc storage together with access to CAMAC systems. This report is the hardware manual of the data acquisition computer, DAFNE (Data Acquisition For Nuclear Experiments), and as such contains a full description of the hardware structure of the computer system. (author)

  7. A ground-up approach to High Throughput Cloud Computing in High-Energy Physics

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00245123; Ganis, Gerardo; Bagnasco, Stefano

    The thesis explores various practical approaches to making existing high-throughput computing applications that are common in High Energy Physics run on cloud-provided resources, as well as opening up the possibility of running new applications. The work is divided into two parts: first, we describe the work done at the computing facility hosted by INFN Torino to entirely convert former Grid resources into cloud ones, eventually running Grid use cases on top along with many others in a more flexible way. Integration and conversion problems are duly described. The second part covers the development of solutions for automating the orchestration of cloud workers based on the load of a batch queue, and the development of HEP applications based on ROOT's PROOF that can adapt at runtime to a changing number of workers.
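
    The thesis' orchestration logic is not shown here; the toy policy below merely illustrates the idea of sizing a pool of cloud workers from batch-queue load, with invented parameters.

```python
class QueueElasticityPolicy:
    """Toy elasticity policy: size a cloud worker pool from batch-queue load.

    Illustrative only; not the orchestration logic developed in the thesis.
    """

    def __init__(self, jobs_per_worker=2, min_workers=1, max_workers=50,
                 idle_cycles_before_shrink=3):
        self.jobs_per_worker = jobs_per_worker
        self.min_workers = min_workers
        self.max_workers = max_workers
        self.idle_cycles_before_shrink = idle_cycles_before_shrink
        self._idle_cycles = 0

    def target_workers(self, queued_jobs, running_jobs, current_workers):
        demand = queued_jobs + running_jobs
        wanted = -(-demand // self.jobs_per_worker)        # ceiling division
        self._idle_cycles = self._idle_cycles + 1 if queued_jobs == 0 else 0
        if self._idle_cycles < self.idle_cycles_before_shrink:
            wanted = max(wanted, current_workers)          # don't shrink too eagerly
        return max(self.min_workers, min(self.max_workers, wanted))

if __name__ == "__main__":
    policy = QueueElasticityPolicy()
    workers = 1
    for queued, running in [(40, 0), (25, 10), (0, 20), (0, 8), (0, 2), (0, 0)]:
        workers = policy.target_workers(queued, running, workers)
        print(f"queued={queued:3d} running={running:3d} -> workers={workers}")
```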

  8. A Computer Controlled Precision High Pressure Measuring System

    Science.gov (United States)

    Sadana, S.; Yadav, S.; Jha, N.; Gupta, V. K.; Agarwal, R.; Bandyopadhyay, A. K.; Saxena, T. K.

    2011-01-01

    Microcontroller (AT89C51) based electronics have been designed and developed for a high-precision calibrator based on a Digiquartz pressure transducer (DQPT) for the measurement of high hydrostatic pressure up to 275 MPa. The input signal from the DQPT is converted into a square waveform and multiplied through a frequency multiplier circuit to over 10 times the input frequency. This input frequency is multiplied by a factor of ten using a phase-locked loop. An octal buffer is used to store the calculated frequency, which in turn is fed to the microcontroller AT89C51, interfaced with a liquid crystal display to show the frequency as well as the corresponding pressure in user-friendly units. The electronics are interfaced with a computer using RS232 for automatic data acquisition, computation and storage. The data are acquired by a program written in Visual Basic 6.0, making it a computer-controlled system. The system is capable of measuring frequency up to 4 MHz with a resolution of 0.01 Hz and pressure up to 275 MPa with a resolution of 0.001 MPa within a measurement uncertainty of 0.025%. Details of the hardware of the pressure measuring system, the associated electronics, the software and the calibration are discussed in this paper.
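
    Digiquartz transducers have their own manufacturer characterization; the snippet below only illustrates, with invented calibration pairs, how a generic least-squares polynomial can map a measured frequency to pressure in such a computer-controlled readout.

```python
import numpy as np

# Invented calibration pairs (frequency readout vs. reference pressure); the real
# DQPT characterization differs and would come from the calibration described above.
freq_hz = np.array([30_000.0, 31_500.0, 33_200.0, 35_100.0, 37_300.0, 39_800.0])
pressure_mpa = np.array([0.0, 55.0, 110.0, 165.0, 220.0, 275.0])

coeffs = np.polyfit(freq_hz, pressure_mpa, deg=2)   # quadratic least-squares fit
calibration = np.poly1d(coeffs)

measured = 36_000.0                                  # frequency from the counter
print(f"{measured:.2f} Hz -> {calibration(measured):.3f} MPa")
```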

  9. Inclusive vision for high performance computing at the CSIR

    CSIR Research Space (South Africa)

    Gazendam, A

    2006-02-01

    Full Text Available and computationally intensive applications. A number of different technologies and standards were identified as core to the open and distributed high-performance infrastructure envisaged...

  10. Monitoring system of multiple fire fighting based on computer vision

    Science.gov (United States)

    Li, Jinlong; Wang, Li; Gao, Xiaorong; Wang, Zeyong; Zhao, Quanke

    2010-10-01

    With the high demand for fire control in spacious buildings, computer vision is playing a more and more important role. This paper presents a new monitoring system for multiple fire fighting based on computer vision and color detection. The system can aim at the fire position and then extinguish the fire by itself. In this paper, the system structure, working principle, fire orientation, hydrant angle adjustment and system calibration are described in detail; the design of the relevant hardware and software is also introduced. At the same time, the principle and process of color detection and image processing are given as well. The system ran well in tests, and it has high reliability, low cost, and easy node expansion, which gives it a bright prospect for application and popularization.
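
    The paper's detection pipeline is not reproduced here; as a generic illustration of the colour-based detection it describes, the sketch below thresholds a frame in HSV space and returns candidate flame centroids, which a system of this kind would use to aim the hydrant. It assumes opencv-python is installed, and the threshold ranges are rough guesses rather than calibrated values.

```python
import cv2

def detect_flame_regions(frame_bgr, min_area=200):
    """Very simple colour-based flame detection for a single video frame.

    Thresholds the frame in HSV space for bright, saturated orange/red pixels
    and returns the centroids of sufficiently large connected regions. A real
    deployment would add calibrated thresholds and temporal filtering.
    """
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # two hue bands because red wraps around the hue axis
    mask = cv2.bitwise_or(
        cv2.inRange(hsv, (0, 120, 180), (25, 255, 255)),
        cv2.inRange(hsv, (160, 120, 180), (180, 255, 255)),
    )
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        if cv2.contourArea(c) >= min_area:
            m = cv2.moments(c)
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return centroids

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)          # or a video file / RTSP stream
    ok, frame = cap.read()
    if ok:
        print("candidate flame positions (pixels):", detect_flame_regions(frame))
    cap.release()
```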

  11. Knowledge acquisition and interface design for learning on demand systems

    Science.gov (United States)

    Nelson, Wayne A.

    1993-01-01

    The rapid changes in our world precipitated by technology have created new problems and new challenges for education and training. A knowledge 'explosion' is occurring as our society moves toward a service oriented economy that relies on information as the major resource. Complex computer systems are beginning to dominate the workplace, causing alarming growth and change in many fields. The rapidly changing nature of the workplace, especially in fields related to information technology, requires that our knowledge be updated constantly. This characteristic of modern society poses seemingly unsolvable instructional problems involving coverage and obsolescence. The sheer amount of information to be learned is rapidly increasing, while at the same time some information becomes obsolete in light of new information. Education, therefore, must become a lifelong process that features learning of new material and skills as needed in relation to the job to be done. Because of the problems cited above, the current model of learning in advance may no longer be feasible in our high-technology world. In many cases, learning in advance is impossible because there are simply too many things to learn. In addition, learning in advance can be time consuming, and often results in decontextualized knowledge that does not readily transfer to the work environment. The large and growing discrepancy between the amount of potentially relevant knowledge available and the amount a person can know and remember makes learning on demand an important alternative to current instructional practices. Learning on demand takes place whenever an individual must learn something new in order to perform a task or make a decision. Learning on demand is a promising approach for addressing the problems of coverage and obsolescence because learning is contextualized and integrated into the task environment rather than being relegated to a separate phase that precedes work. Learning on demand allows learners

  12. Cloud Computing Governance Lifecycle

    OpenAIRE

    Soňa Karkošková; George Feuerlicht

    2016-01-01

    Externally provisioned cloud services enable flexible and on-demand sourcing of IT resources. Cloud computing introduces new challenges, such as the need to redefine business processes, establish specialized governance and management, adapt organizational structures and relationships with external providers, and manage new types of risk arising from dependency on external providers. There is a general consensus that cloud computing, in addition to challenges, brings many benefits, but it is unclear...

  13. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    Science.gov (United States)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages in comparison to the use of cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a great presence of 'digital cameras' aimed more at the home markets. This latter category is not considered here. The term 'computer camera' herein is intended to mean one which has low level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, and in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs which offer good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application would be such effects as 'pixel jitter,' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog to digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  14. Computation of high Reynolds number internal/external flows

    International Nuclear Information System (INIS)

    Cline, M.C.; Wilmoth, R.G.

    1981-01-01

    A general, user oriented computer program, called VNAP2, has been developed to calculate high Reynolds number, internal/external flows. VNAP2 solves the two-dimensional, time-dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented

  15. Computation of high Reynolds number internal/external flows

    Science.gov (United States)

    Cline, M. C.; Wilmoth, R. G.

    1981-01-01

    A general, user oriented computer program, called VNAP2, was developed to calculate high Reynolds number, internal/ external flows. The VNAP2 program solves the two dimensional, time dependent Navier-Stokes equations. The turbulence is modeled with either a mixing-length, a one transport equation, or a two transport equation model. Interior grid points are computed using the explicit MacCormack Scheme with special procedures to speed up the calculation in the fine grid. All boundary conditions are calculated using a reference plane characteristic scheme with the viscous terms treated as source terms. Several internal, external, and internal/external flow calculations are presented.
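
    VNAP2 itself applies the explicit MacCormack scheme to the full two-dimensional, time-dependent Navier-Stokes equations; the sketch below shows only the one-dimensional predictor-corrector idea on the linear advection equation, as a toy.

```python
import numpy as np

def maccormack_advection(u, nu, steps):
    """Advance u_t + a*u_x = 0 with the explicit MacCormack predictor-corrector.

    `nu` is the Courant number a*dt/dx (stable for |nu| <= 1); periodic
    boundaries are used for simplicity.
    """
    for _ in range(steps):
        # predictor: forward difference
        u_star = u - nu * (np.roll(u, -1) - u)
        # corrector: backward difference on the predicted values
        u = 0.5 * (u + u_star - nu * (u_star - np.roll(u_star, 1)))
    return u

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 200, endpoint=False)
    u0 = np.exp(-200.0 * (x - 0.3) ** 2)          # Gaussian pulse
    u = maccormack_advection(u0.copy(), nu=0.5, steps=200)
    # 200 steps at nu=0.5 on a 200-point grid move the pulse 0.5 domain lengths
    print("peak moved from x =", x[u0.argmax()], "to x =", x[u.argmax()])
```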

  16. Development of reader for the demand data from compound demand meter for power supply/demand (CDM). Development of recommended tools for load leveling in existing works; Denryoku jukyuyo fukugo keiki kara no demand data yomitori sochi no kaihatsu. Kisetsu kojo no fuka heijunka suisho tool no kaihatsu

    Energy Technology Data Exchange (ETDEWEB)

    Sakamoto, S. [Kansai Electric Power Co. Inc., Osaka (Japan)

    1997-10-10

    Kansai Electric Power has developed a system which reads the 30-minute demand data stored in the compound demand meter for power supply/demand (CDM) and prints the load curves. It is intended for high-voltage customers below 500 kW, where load management is less extensive than for larger users, to support initial consulting on load-factor improvement (recommendation of heat storage contracts). It is installed on the spot to display the load curves, allowing the visiting expert to issue initial proposals immediately. It displays daily demand by time period instead of the monthly power consumption previously provided, and plots the demand by time period as a graph. It is designed to be compact, light, and easy and safe to handle. The field test results indicate that the system is sufficiently practical with respect to the major performance items. 4 figs., 1 tab.
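
    A minimal sketch of the kind of processing performed downstream of the meter: turning 30-minute demand readings into a daily load curve and a load factor. The CSV layout is an assumption, not the CDM's actual record format.

```python
import csv
from collections import defaultdict

def daily_load_curve(path):
    """Average demand per half-hour slot and a simple load factor (average/peak).

    Expects a CSV with columns: date (YYYY-MM-DD), time (HH:MM), demand_kw.
    """
    slots = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            slots[row["time"]].append(float(row["demand_kw"]))
    curve = {t: sum(v) / len(v) for t, v in sorted(slots.items())}
    average = sum(curve.values()) / len(curve)
    return curve, average / max(curve.values())

if __name__ == "__main__":
    curve, load_factor = daily_load_curve("demand_30min.csv")   # hypothetical file
    print(f"load factor: {load_factor:.2f}")
    for slot, kw in curve.items():
        print(slot, "#" * int(kw / 10))      # crude text-mode load curve
```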

  17. Analysis of stationary fuel cell dynamic ramping capabilities and ultra capacitor energy storage using high resolution demand data

    Science.gov (United States)

    Meacham, James R.; Jabbari, Faryar; Brouwer, Jacob; Mauzey, Josh L.; Samuelsen, G. Scott

    Current high temperature fuel cell (HTFC) systems used for stationary power applications (in the 200-300 kW size range) have very limited dynamic load following capability or are simply base load devices. Considering the economics of existing electric utility rate structures, there is little incentive to increase HTFC ramping capability beyond 1 kW s⁻¹ (0.4% s⁻¹). However, in order to ease concerns about grid instabilities from utility companies and increase market adoption, HTFC systems will have to increase their ramping abilities, and will likely have to incorporate electrical energy storage (EES). Because batteries have low power densities and limited lifetimes in highly cyclic applications, ultra capacitors may be the EES medium of choice. The current analyses show that, because ultra capacitors have a very low energy storage density, their integration with HTFC systems may not be feasible unless the fuel cell has a ramp rate approaching 10 kW s⁻¹ (4% s⁻¹) when using a worst-case design analysis. This requirement for fast dynamic load response characteristics can be reduced to 1 kW s⁻¹ by utilizing high resolution demand data to properly size ultra capacitor systems and through demand management techniques that reduce load volatility.
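
    The core sizing argument can be illustrated with a small calculation: when the load steps up faster than the fuel cell can ramp, storage must bridge a triangular power shortfall, so the required energy scales with the square of the step divided by the ramp rate. The numbers below are invented, not the paper's demand data.

```python
def storage_energy_for_step(delta_p_kw, ramp_kw_per_s):
    """Energy (kWh) that storage must supply while a ramp-limited source catches
    up with an instantaneous load step of `delta_p_kw`.

    The shortfall decays linearly from delta_p to zero over delta_p/ramp seconds,
    so the bridging energy is the area of that triangle: delta_p**2 / (2 * ramp).
    """
    seconds = delta_p_kw / ramp_kw_per_s
    return 0.5 * delta_p_kw * seconds / 3600.0      # kW*s -> kWh

# Worked example: a 50 kW load step handled at two different ramp capabilities.
for ramp in (1.0, 10.0):                            # kW per second
    e_kwh = storage_energy_for_step(50.0, ramp)
    print(f"ramp {ramp:4.1f} kW/s -> bridge {e_kwh * 1000:.0f} Wh from storage")
```

    The factor-of-ten difference in bridging energy between the two ramp rates mirrors the paper's point that faster fuel-cell ramping sharply reduces the ultracapacitor capacity needed.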

  18. Cloud Computing: Architecture and Services

    OpenAIRE

    Ms. Ravneet Kaur

    2018-01-01

    Cloud computing is Internet-based computing, whereby shared resources, software, and information are provided to computers and other devices on demand, like the electricity grid. It is a method for delivering information technology (IT) services where resources are retrieved from the Internet through web-based tools and applications, as opposed to a direct connection to a server. Rather than keeping files on a proprietary hard drive or local storage device, cloud-based storage makes it possib...

  19. Highly Scalable Asynchronous Computing Method for Partial Differential Equations: A Path Towards Exascale

    Science.gov (United States)

    Konduri, Aditya

    Many natural and engineering systems are governed by nonlinear partial differential equations (PDEs) which result in multiscale phenomena, e.g. turbulent flows. Numerical simulations of these problems are computationally very expensive and demand extreme levels of parallelism. At realistic conditions, simulations are being carried out on massively parallel computers with hundreds of thousands of processing elements (PEs). It has been observed that communication between PEs, as well as their synchronization, at these extreme scales takes up a significant portion of the total simulation time and results in poor scalability of codes. This issue is likely to pose a bottleneck in scalability of codes on future Exascale systems. In this work, we propose an asynchronous computing algorithm based on widely used finite difference methods to solve PDEs in which synchronization between PEs due to communication is relaxed at a mathematical level. We show that while stability is conserved when schemes are used asynchronously, accuracy is greatly degraded. Since message arrivals at PEs are random processes, so is the behavior of the error. We propose a new statistical framework in which we show that average errors always drop to first order regardless of the original scheme. We propose new asynchrony-tolerant schemes that maintain accuracy when synchronization is relaxed. The quality of the solution is shown to depend not only on the physical phenomena and numerical schemes, but also on the characteristics of the computing machine. A novel algorithm using remote memory access communications has been developed to demonstrate excellent scalability of the method for large-scale computing. Finally, we present a path to extending this method to solve complex multi-scale problems on Exascale machines.
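
    The asynchrony-tolerant schemes of the thesis are not reproduced here; the toy below only demonstrates the basic effect described, namely an explicit finite-difference solver in which the value exchanged across a 'processor boundary' may be stale, on the one-dimensional heat equation.

```python
import numpy as np

def heat_step(u, r):
    """One synchronous FTCS step of u_t = alpha*u_xx (r = alpha*dt/dx**2 <= 0.5)."""
    un = u.copy()
    un[1:-1] = u[1:-1] + r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    return un

def heat_async(u, r, steps, p_delay=0.5, seed=0):
    """Same scheme, but the values exchanged across a mid-domain 'processor
    boundary' are refreshed only with probability 1 - p_delay per step, so the
    two interface points sometimes use stale halo data. Illustration only."""
    rng = np.random.default_rng(seed)
    mid = len(u) // 2
    halo_left, halo_right = u[mid - 1], u[mid]     # last values "received" by each side
    for _ in range(steps):
        if rng.random() > p_delay:                 # message arrived this step
            halo_left, halo_right = u[mid - 1], u[mid]
        un = heat_step(u, r)
        # recompute the two interface points with possibly stale neighbour data
        un[mid] = u[mid] + r * (u[mid + 1] - 2.0 * u[mid] + halo_left)
        un[mid - 1] = u[mid - 1] + r * (halo_right - 2.0 * u[mid - 1] + u[mid - 2])
        u = un
    return u

if __name__ == "__main__":
    x = np.linspace(0.0, 1.0, 101)
    u0 = np.sin(np.pi * x)                 # zero at both boundaries
    r, steps = 0.4, 400
    u_sync = u0.copy()
    for _ in range(steps):
        u_sync = heat_step(u_sync, r)
    u_async = heat_async(u0.copy(), r, steps)
    print("max |async - sync| =", np.abs(u_async - u_sync).max())
```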

  20. Global Food Demand Scenarios for the 21st Century

    Science.gov (United States)

    Biewald, Anne; Weindl, Isabelle; Popp, Alexander; Lotze-Campen, Hermann

    2015-01-01

    Long-term food demand scenarios are an important tool for studying global food security and for analysing the environmental impacts of agriculture. We provide a simple and transparent method to create scenarios for future plant-based and animal-based calorie demand, using time-dependent regression models between calorie demand and income. The scenarios can be customized to a specific storyline by using different input data for gross domestic product (GDP) and population projections and by assuming different functional forms of the regressions. Our results confirm that total calorie demand increases with income, but we also found a non-income related positive time-trend. The share of animal-based calories is estimated to rise strongly with income for low-income groups. For high income groups, two ambiguous relations between income and the share of animal-based products are consistent with historical data: First, a positive relation with a strong negative time-trend and second a negative relation with a slight negative time-trend. The fits of our regressions are highly significant and our results compare well to other food demand estimates. The method is exemplarily used to construct four food demand scenarios until the year 2100 based on the storylines of the IPCC Special Report on Emissions Scenarios (SRES). We find in all scenarios a strong increase of global food demand until 2050 with an increasing share of animal-based products, especially in developing countries. PMID:26536124
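
    As a minimal sketch of the regression idea in this record (calorie demand regressed on income with a time trend, then used for a scenario projection), with synthetic data rather than the paper's series:

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(1970, 2011)
gdp_pc = 2000.0 * np.exp(0.02 * (years - 1970)) * rng.lognormal(0.0, 0.05, years.size)
kcal = (1800.0 + 300.0 * np.log(gdp_pc / 1000.0) + 2.0 * (years - 1970)
        + rng.normal(0.0, 40.0, years.size))        # synthetic kcal/person/day

# demand = b0 + b1 * log(income) + b2 * (year - 1970)
X = np.column_stack([np.ones(years.size), np.log(gdp_pc), years - 1970])
beta, *_ = np.linalg.lstsq(X, kcal, rcond=None)
print("intercept, log-income coefficient, time trend:", np.round(beta, 2))

# Project demand for a scenario year given an assumed income path.
year, income = 2050, 15_000.0
projection = beta @ np.array([1.0, np.log(income), year - 1970])
print(f"projected demand in {year}: {projection:.0f} kcal/person/day")
```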