WorldWideScience

Sample records for energy analysis computer

  1. Computational Analysis on Performance of Thermal Energy Storage (TES) Diffuser

    Science.gov (United States)

    Adib, M. A. H. M.; Adnan, F.; Ismail, A. R.; Kardigama, K.; Salaam, H. A.; Ahmad, Z.; Johari, N. H.; Anuar, Z.; Azmi, N. S. N.

    2012-09-01

    Application of a thermal energy storage (TES) system reduces cost and energy consumption. The performance of the overall operation is affected by diffuser design. In this study, computational analysis is used to determine the thermocline thickness. Three-dimensional simulations with different tank height-to-diameter ratios (HD), diffuser openings and numbers of diffuser holes are investigated. Simulations of medium-HD tanks with a double ring octagonal diffuser show good thermocline behavior and a clear distinction between warm and cold water. The results show that the best thermocline thickness during 50% charging time occurs in the medium tank with a height-to-diameter ratio of 4.0 and a double ring octagonal diffuser with 48 holes (9mm opening ~ 60%), which performs acceptably compared to diffusers with 6mm ~ 40% and 12mm ~ 80% openings. The conclusion is that the computational analysis method is very useful in studying the performance of thermal energy storage (TES).
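
    This record does not give the paper's exact definition of thermocline thickness; a common convention is the height of the zone where the dimensionless temperature Θ = (T − Tc)/(Th − Tc) lies between 0.1 and 0.9. Below is a minimal Python sketch of that convention; the logistic temperature profile, 4 m tank height and 0.1/0.9 thresholds are illustrative assumptions, not values from the study.

```python
import numpy as np

def thermocline_thickness(z, T, T_cold, T_hot, lo=0.1, hi=0.9):
    """Thickness of the zone where the dimensionless temperature
    theta = (T - T_cold) / (T_hot - T_cold) lies between lo and hi."""
    theta = (T - T_cold) / (T_hot - T_cold)
    # Interpolate the heights at which theta crosses the two thresholds
    # (assumes theta increases monotonically with height z).
    z_lo = np.interp(lo, theta, z)
    z_hi = np.interp(hi, theta, z)
    return abs(z_hi - z_lo)

# Example: smooth stratification in a 4 m tall tank (HD = 4.0, D = 1 m),
# cold water at 10 deg C below, warm water at 20 deg C above.
z = np.linspace(0.0, 4.0, 200)
T = 10.0 + 10.0 / (1.0 + np.exp(-(z - 2.0) / 0.15))
print(f"thermocline thickness: {thermocline_thickness(z, T, 10.0, 20.0):.3f} m")
```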

  2. Computational Analysis on Performance of Thermal Energy Storage (TES) Diffuser

    International Nuclear Information System (INIS)

    Adib, M A H M; Ismail, A R; Kardigama, K; Salaam, H A; Ahmad, Z; Johari, N H; Anuar, Z; Azmi, N S N; Adnan, F

    2012-01-01

    Application of a thermal energy storage (TES) system reduces cost and energy consumption. The performance of the overall operation is affected by diffuser design. In this study, computational analysis is used to determine the thermocline thickness. Three-dimensional simulations with different tank height-to-diameter ratios (HD), diffuser openings and numbers of diffuser holes are investigated. Simulations of medium-HD tanks with a double ring octagonal diffuser show good thermocline behavior and a clear distinction between warm and cold water. The results show that the best thermocline thickness during 50% charging time occurs in the medium tank with a height-to-diameter ratio of 4.0 and a double ring octagonal diffuser with 48 holes (9mm opening ∼ 60%), which performs acceptably compared to diffusers with 6mm ∼ 40% and 12mm ∼ 80% openings. The conclusion is that the computational analysis method is very useful in studying the performance of thermal energy storage (TES).

  3. Assessing Power Monitoring Approaches for Energy and Power Analysis of Computers

    OpenAIRE

    El Mehdi Diouri, Mohammed; Dolz Zaragozá, Manuel Francisco; Glück, Olivier; Lefèvre, Laurent; Alonso, Pedro; Catalán Pallarés, Sandra; Mayo, Rafael; Quintana Ortí, Enrique S.

    2014-01-01

    Large-scale distributed systems (e.g., datacenters, HPC systems, clouds, large-scale networks, etc.) consume and will consume enormous amounts of energy. Therefore, accurately monitoring the power dissipation and energy consumption of these systems is increasingly unavoidable. The main novelty of this contribution is the analysis and evaluation of different external and internal power monitoring devices tested using two different computing systems, a server and a desktop machine. Furthermore, we prov...

  4. A new computer code for quantitative analysis of low-energy ion scattering data

    NARCIS (Netherlands)

    Dorenbos, G.; Breeman, M.; Boerma, D.O.

    We have developed a computer program for the full analysis of low-energy ion scattering (LEIS) data, i.e. an analysis that is equivalent to the full calculation of the three-dimensional trajectories of beam particles through a number of layers in the solid, and ending in the detector. A dedicated

  5. Computing in high energy physics

    International Nuclear Information System (INIS)

    Hertzberger, L.O.; Hoogland, W.

    1986-01-01

    This book deals with advanced computing applications in physics, and in particular in high energy physics environments. The main subjects covered are networking; vector and parallel processing; and embedded systems. Also examined are topics such as operating systems, future computer architectures and commercial computer products. The book presents solutions that are foreseen as coping, in the future, with computing problems in experimental and theoretical High Energy Physics. In the experimental environment the large amounts of data to be processed pose special problems, on-line as well as off-line. For on-line data reduction, embedded special-purpose computers, which are often used for trigger applications, are applied. For off-line processing, parallel computers such as emulator farms and the cosmic cube may be employed. The analysis of these topics is therefore a main feature of this volume

  6. Determining Balıkesir’s Energy Potential Using a Regression Analysis Computer Program

    Directory of Open Access Journals (Sweden)

    Bedri Yüksel

    2014-01-01

    Solar power and wind energy are used concurrently during specific periods, while at other times only the more efficient is used, and hybrid systems make this possible. When establishing a hybrid system, the extent to which these two energy sources support each other needs to be taken into account. This paper is a study of the effects of wind speed, insolation levels, and the meteorological parameters of temperature and humidity on the energy potential in Balıkesir, in the Marmara region of Turkey. The relationship between the parameters was studied using a multiple linear regression method. Using a designed-for-purpose computer program, two different regression equations were derived, with wind speed being the dependent variable in the first and insolation levels in the second. The regression equations yielded accurate results. The computer program allowed for the rapid calculation of different acceptance rates. The results of the statistical analysis proved the reliability of the equations. An estimate of identified meteorological parameters and unknown parameters could be produced with a specified precision by using the regression analysis method. The regression equations also worked for the evaluation of energy potential.
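
    As a concrete illustration of the method described, the sketch below fits a multiple linear regression with wind speed as the dependent variable and insolation, temperature and humidity as regressors. The data are synthetic placeholders; the study's own measured series and coefficients are not reproduced here.

```python
import numpy as np

# Synthetic stand-in data (the study used measured series for Balikesir).
rng = np.random.default_rng(0)
n = 365
insolation = rng.uniform(1.0, 7.0, n)      # kWh/m^2/day (illustrative)
temperature = rng.uniform(0.0, 30.0, n)    # deg C
humidity = rng.uniform(30.0, 90.0, n)      # percent
wind_speed = (2.0 + 0.3 * insolation - 0.02 * temperature
              + 0.01 * humidity + rng.normal(0.0, 0.3, n))

# Design matrix with an intercept column; ordinary least-squares fit.
X = np.column_stack([np.ones(n), insolation, temperature, humidity])
beta, *_ = np.linalg.lstsq(X, wind_speed, rcond=None)
pred = X @ beta
ss_res = np.sum((wind_speed - pred) ** 2)
ss_tot = np.sum((wind_speed - wind_speed.mean()) ** 2)
print("coefficients:", beta.round(3), " R^2 =", round(1 - ss_res / ss_tot, 3))
```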

  7. GAMUT: A computer code for γ-ray energy and intensity analysis

    International Nuclear Information System (INIS)

    Firestone, R.B.

    1991-05-01

    GAMUT is a computer code to analyze γ-ray energies and intensities. It does a linear least-squares fit of measured γ-ray energies from one or more experiments to the level scheme. GAMUT also performs a non-linear least-squares analysis of branching intensities. For both energy and intensity data, a statistical chi-square analysis is performed with an iterative uncertainty adjustment. The uncertainties of outlying measured values and of sets of measurements with χ²/f > 1 are increased, and the calculation is repeated until the uncertainties are consistent with the fitted values. GAMUT accepts input from standard or special-format ENSDF data sets. The special-format ENSDF data sets were designed to permit analysis of more than one set of measurements associated with a single ENSDF data set. GAMUT prepares a standard ENSDF format output data set containing the adjusted values. If more than one input ENSDF data set is provided, GAMUT creates an ADOPTED LEVELS, GAMMAS data set containing the adjusted level and γ-ray energies and the branching intensities from each level normalized to 100 for the strongest γ-ray. GAMUT also provides a summary of the results and an extensive log of the iterative analysis. GAMUT is interactive, prompting the user for input and output file names and for default calculation options. This version of GAMUT has adjustable dimensions so that any maximum number of data sets, levels, and γ-rays can be established at the time of implementation. 6 refs
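
    The core of such a fit is a weighted linear least-squares problem: each measured γ-ray energy is modeled as the difference between two level energies, E_γ = E_upper − E_lower, with the ground state fixed at zero. The Python sketch below shows this structure together with a crude global version of the iterative uncertainty inflation (GAMUT adjusts outlying measurements individually); the three example transitions are illustrative values, not an ENSDF data set.

```python
import numpy as np

# Each measurement: (upper_level, lower_level, E_gamma, sigma) in keV.
# Level 0 is the ground state, fixed at 0; unknowns are E_1, E_2.
meas = [(1, 0, 1173.2, 0.4),
        (2, 1, 1332.5, 0.4),
        (2, 0, 2505.9, 0.8)]
n_lev = 3

def fit(measurements):
    """Weighted least squares for the level energies E_1..E_{n-1}."""
    A = np.zeros((len(measurements), n_lev - 1))
    y = np.zeros(len(measurements))
    w = np.array([1.0 / s for (_, _, _, s) in measurements])
    for k, (up, lo, eg, _) in enumerate(measurements):
        if up > 0:
            A[k, up - 1] += 1.0
        if lo > 0:
            A[k, lo - 1] -= 1.0
        y[k] = eg
    E, *_ = np.linalg.lstsq(A * w[:, None], y * w, rcond=None)
    resid = (A @ E - y) * w
    chi2_per_f = float(resid @ resid) / (len(measurements) - (n_lev - 1))
    return E, chi2_per_f

# Iterate: inflate uncertainties while chi^2/f > 1, then refit.
for _ in range(20):
    E, chi2_f = fit(meas)
    if chi2_f <= 1.0:
        break
    meas = [(u, l, e, s * np.sqrt(chi2_f)) for (u, l, e, s) in meas]

print("adjusted level energies (keV):", E.round(2), " chi2/f =", round(chi2_f, 2))
```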

  8. GRID computing for experimental high energy physics

    International Nuclear Information System (INIS)

    Moloney, G.R.; Martin, L.; Seviour, E.; Taylor, G.N.; Moorhead, G.F.

    2002-01-01

    The Large Hadron Collider (LHC), to be completed at the CERN laboratory in 2006, will generate 11 petabytes of data per year. The processing of this large data stream requires a large, distributed computing infrastructure. A recent innovation in high performance distributed computing, the GRID, has been identified as an important tool in data analysis for the LHC. GRID computing has actual and potential application in many fields which require computationally intensive analysis of large, shared data sets. The Australian experimental High Energy Physics community has formed partnerships with the High Performance Computing community to establish a GRID node at the University of Melbourne. Through Australian membership of the ATLAS experiment at the LHC, Australian researchers have an opportunity to be involved in the European DataGRID project. This presentation will include an introduction to the GRID, and its application to experimental High Energy Physics. We will present the results of our studies, including participation in the first LHC data challenge

  9. Computing in high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Watase, Yoshiyuki

    1991-09-15

    The increasingly important role played by computing and computers in high energy physics is displayed in the 'Computing in High Energy Physics' series of conferences, bringing together experts in different aspects of computing - physicists, computer scientists, and vendors.

  10. Computer-aided engineering in High Energy Physics

    International Nuclear Information System (INIS)

    Bachy, G.; Hauviller, C.; Messerli, R.; Mottier, M.

    1988-01-01

    Computing, standard tool for a long time in the High Energy Physics community, is being slowly introduced at CERN in the mechanical engineering field. The first major application was structural analysis followed by Computer-Aided Design (CAD). Development work is now progressing towards Computer-Aided Engineering around a powerful data base. This paper gives examples of the power of this approach applied to engineering for accelerators and detectors

  11. Computing in high energy physics

    International Nuclear Information System (INIS)

    Watase, Yoshiyuki

    1991-01-01

    The increasingly important role played by computing and computers in high energy physics is displayed in the 'Computing in High Energy Physics' series of conferences, bringing together experts in different aspects of computing - physicists, computer scientists, and vendors

  12. Turbo Pascal Computer Code for PIXE Analysis

    International Nuclear Information System (INIS)

    Darsono

    2002-01-01

    To make optimal use of the 150 kV ion accelerator facilities and to master analysis techniques using the ion accelerator, research and development of low-energy PIXE technology has been carried out. R and D on the hardware of the low-energy PIXE installation at P3TM has been underway since 2000. To support the R and D of the PIXE accelerator facilities in step with the hardware work, development of PIXE analysis software is also needed. The development of the database part of the PIXE analysis software, written in Turbo Pascal, is reported in this paper. The code computes ionization cross-sections, fluorescence yields, and stopping powers of the elements, as well as X-ray energy attenuation coefficients. The code is named PIXEDASIS and is part of a larger computer code planned for PIXE analysis that will be constructed in the near future. PIXEDASIS is designed to be communicative with the user: input is taken from the keyboard, and output is shown on the PC monitor and can also be printed. Performance tests show that PIXEDASIS operates well and produces data in agreement with values from the literature. (author)
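
    For one of the quantities mentioned, the X-ray attenuation coefficient, the underlying relation is the Beer-Lambert law, I/I₀ = exp(−(μ/ρ)ρx). The original code is Turbo Pascal; the minimal sketch below uses Python for brevity, with illustrative values that are not taken from attenuation tables.

```python
import math

def transmitted_fraction(mu_over_rho, density, thickness_cm):
    """Beer-Lambert attenuation: I/I0 = exp(-(mu/rho) * rho * x)."""
    return math.exp(-mu_over_rho * density * thickness_cm)

# Illustrative values only: a low-energy X-ray line passing through
# a 100 micrometre aluminium absorber (mu/rho in cm^2/g, rho in g/cm^3).
frac = transmitted_fraction(mu_over_rho=5.0, density=2.70, thickness_cm=0.01)
print(f"transmitted fraction I/I0 = {frac:.3f}")
```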

  13. Energy expenditure in adolescents playing new generation computer games.

    Science.gov (United States)

    Graves, Lee; Stratton, Gareth; Ridgers, N D; Cable, N T

    2008-07-01

    To compare the energy expenditure of adolescents when playing sedentary and new generation active computer games. Design: cross-sectional comparison of four computer games. Setting: research laboratories. Participants: six boys and five girls aged 13-15 years. Participants were fitted with a monitoring device validated to predict energy expenditure. They played four computer games for 15 minutes each. One of the games was sedentary (XBOX 360) and the other three were active (Wii Sports). Main outcome measure: predicted energy expenditure, compared using repeated measures analysis of variance. Mean (standard deviation) predicted energy expenditure when playing Wii Sports bowling (190.6 (22.2) kJ/kg/min), tennis (202.5 (31.5) kJ/kg/min), and boxing (198.1 (33.9) kJ/kg/min) was significantly greater than when playing sedentary games (125.5 (13.7) kJ/kg/min) (P<0.001). Playing new generation active computer games uses significantly more energy than playing sedentary computer games but not as much energy as playing the sport itself. The energy used when playing active Wii Sports games was not of high enough intensity to contribute towards the recommended daily amount of exercise in children.
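
    The comparison described is a one-way repeated-measures ANOVA with game condition as the within-subject factor. A sketch of that analysis using statsmodels is shown below; the data are randomly generated around the reported group means, not the study's measurements.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Made-up data shaped like the study: 11 adolescents, four game
# conditions, values scattered around the reported group means.
rng = np.random.default_rng(1)
means = {"sedentary": 125.5, "bowling": 190.6, "tennis": 202.5, "boxing": 198.1}
rows = [{"subject": s, "game": g, "ee": means[g] + rng.normal(0.0, 20.0)}
        for s in range(11) for g in means]
df = pd.DataFrame(rows)

# One-way repeated-measures ANOVA: energy expenditure vs game condition.
result = AnovaRM(df, depvar="ee", subject="subject", within=["game"]).fit()
print(result)
```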

  14. Computing in high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Sarah; Devenish, Robin [Nuclear Physics Laboratory, Oxford University (United Kingdom)

    1989-07-15

    Computing in high energy physics has changed over the years from being something one did on a slide-rule, through early computers, then a necessary evil to the position today where computers permeate all aspects of the subject from control of the apparatus to theoretical lattice gauge calculations. The state of the art, as well as new trends and hopes, were reflected in this year's 'Computing In High Energy Physics' conference held in the dreamy setting of Oxford's spires. The conference aimed to give a comprehensive overview, entailing a heavy schedule of 35 plenary talks plus 48 contributed papers in two afternoons of parallel sessions. In addition to high energy physics computing, a number of papers were given by experts in computing science, in line with the conference's aim – 'to bring together high energy physicists and computer scientists'.

  15. High energy physics and grid computing

    International Nuclear Information System (INIS)

    Yu Chuansong

    2004-01-01

    The status of the new generation computing environment of the high energy physics experiments is introduced briefly in this paper. The development of the high energy physics experiments and the new computing requirements by the experiments are presented. The blueprint of the new generation computing environment of the LHC experiments, the history of the Grid computing, the R and D status of the high energy physics grid computing technology, the network bandwidth needed by the high energy physics grid and its development are described. The grid computing research in Chinese high energy physics community is introduced at last. (authors)

  16. Computing in high energy physics

    International Nuclear Information System (INIS)

    Smith, Sarah; Devenish, Robin

    1989-01-01

    Computing in high energy physics has changed over the years from being something one did on a slide-rule, through early computers, then a necessary evil to the position today where computers permeate all aspects of the subject from control of the apparatus to theoretical lattice gauge calculations. The state of the art, as well as new trends and hopes, were reflected in this year's 'Computing In High Energy Physics' conference held in the dreamy setting of Oxford's spires. The conference aimed to give a comprehensive overview, entailing a heavy schedule of 35 plenary talks plus 48 contributed papers in two afternoons of parallel sessions. In addition to high energy physics computing, a number of papers were given by experts in computing science, in line with the conference's aim – 'to bring together high energy physicists and computer scientists'

  17. Magnetic fusion energy and computers: the role of computing in magnetic fusion energy research and development

    International Nuclear Information System (INIS)

    1979-10-01

    This report examines the role of computing in the Department of Energy magnetic confinement fusion program. The present status of the MFECC and its associated network is described. The third part of this report examines the role of computer models in the main elements of the fusion program and discusses their dependence on the most advanced scientific computers. A review of requirements at the National MFE Computer Center was conducted in the spring of 1976. The results of this review led to the procurement of the CRAY 1, the most advanced scientific computer available, in the spring of 1978. The utilization of this computer in the MFE program has been very successful and is also described in the third part of the report. A new study of computer requirements for the MFE program was conducted during the spring of 1979 and the results of this analysis are presented in the fourth part of this report

  18. Structural analysis of magnetic fusion energy systems in a combined interactive/batch computer environment

    International Nuclear Information System (INIS)

    Johnson, N.E.; Singhal, M.K.; Walls, J.C.; Gray, W.H.

    1979-01-01

    A system of computer programs has been developed to aid in the preparation of input data for and the evaluation of output data from finite element structural analyses of magnetic fusion energy devices. The system utilizes the NASTRAN structural analysis computer program and a special set of interactive pre- and post-processor computer programs, and has been designed for use in an environment wherein a time-share computer system is linked to a batch computer system. In such an environment, the analyst must only enter, review and/or manipulate data through interactive terminals linked to the time-share computer system. The primary pre-processor programs include NASDAT, NASERR and TORMAC. NASDAT and TORMAC are used to generate NASTRAN input data. NASERR performs routine error checks on this data. The NASTRAN program is run on a batch computer system using data generated by NASDAT and TORMAC. The primary post-processing programs include NASCMP and NASPOP. NASCMP is used to compress the data initially stored on magnetic tape by NASTRAN so as to facilitate interactive use of the data. NASPOP reads the data stored by NASCMP and reproduces NASTRAN output for selected grid points, elements and/or data types

  19. High energy physics and cloud computing

    International Nuclear Information System (INIS)

    Cheng Yaodong; Liu Baoxu; Sun Gongxing; Chen Gang

    2011-01-01

    High Energy Physics (HEP) has been a strong promoter of computing technology, for example the WWW (World Wide Web) and grid computing. In the new era of cloud computing, HEP still has strong demand, and major international high energy physics laboratories have launched a number of projects to research cloud computing technologies and applications. This paper describes the current developments in cloud computing and its applications in high energy physics. Some ongoing projects in the institutes of high energy physics, Chinese Academy of Sciences, including cloud storage, virtual computing clusters, and BESⅢ elastic cloud, are also described briefly. (authors)

  20. Engineering computations at the national magnetic fusion energy computer center

    International Nuclear Information System (INIS)

    Murty, S.

    1983-01-01

    The National Magnetic Fusion Energy Computer Center (NMFECC) was established by the U.S. Department of Energy's Division of Magnetic Fusion Energy (MFE). The NMFECC headquarters is located at Lawrence Livermore National Laboratory. Its purpose is to apply large-scale computational technology and computing techniques to the problems of controlled thermonuclear research. In addition to providing cost effective computing services, the NMFECC also maintains a large collection of computer codes in mathematics, physics, and engineering that is shared by the entire MFE research community. This review provides a broad perspective of the NMFECC, and a list of available codes at the NMFECC for engineering computations is given

  1. Economic Analysis of Nuclear Energy

    International Nuclear Information System (INIS)

    Lee, Han Myung; Lee, M. K.; Moon, K. H.; Kim, S. S.; Lim, C. Y.; Song, K. D.; Oh, K. B.

    2002-12-01

    This study deals with current energy issues, environmental aspects of energy, project feasibility evaluation, and activities of international organizations. Current energy issues, including activities related to the UNFCCC, sustainable development, and global concern over energy issues, were surveyed with a focus on nuclear-related activities. Environmental aspects of energy cover various topics such as inter-industrial analysis of the nuclear sector, the role of nuclear power in mitigating GHG emissions, carbon capture and sequestration technology, hydrogen production using nuclear energy, life cycle analysis as a method of evaluating the environmental impacts of a technology, and spent fuel management in the case of introducing a fast reactor and/or an accelerator-driven system. Project feasibility evaluation covers nuclear desalination using the SMART reactor and an introduction to the COMFAR computer model, developed by UNIDO to carry out feasibility analysis from a business perspective. Activities of international organizations include the energy planning activities of the IAEA and OECD/NEA and an introduction to the activities of the FNCA, one of the cooperation mechanisms among Asian countries. In addition, the MESSAGE computer model was also introduced. The model is being developed by the IAEA to effectively handle liberalization of the electricity market combined with environmental constraints

  2. Computational approaches to energy materials

    CERN Document Server

    Catlow, Richard; Walsh, Aron

    2013-01-01

    The development of materials for clean and efficient energy generation and storage is one of the most rapidly developing, multi-disciplinary areas of contemporary science, driven primarily by concerns over global warming, diminishing fossil-fuel reserves, the need for energy security, and increasing consumer demand for portable electronics. Computational methods are now an integral and indispensable part of the materials characterisation and development process.   Computational Approaches to Energy Materials presents a detailed survey of current computational techniques for the

  3. The application of AFS in the high energy physics computing system

    International Nuclear Information System (INIS)

    Xu Dong; Yan Xiaofei; Chen Yaodong; Chen Gang; Yu Chuansong

    2010-01-01

    With the development of high energy physics, physics experiments are producing large amounts of data. The data analysis workload is very large, and the analysis work needs to be completed by many scientists working together. The computing system must therefore provide more secure user management functions and a higher level of data-sharing ability. This article introduces a solution based on AFS in the high energy physics computing system, which not only makes user management safer but also makes data sharing easier. (authors)

  4. Grid Computing in High Energy Physics

    International Nuclear Information System (INIS)

    Avery, Paul

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software resources, regardless of location); (4) collaboration (providing tools that allow members full and fair access to all collaboration resources and enable distributed teams to work effectively, irrespective of location); and (5) education, training and outreach (providing resources and mechanisms for training students and for communicating important information to the public). It is believed that computing infrastructures based on Data Grids and optical networks can meet these challenges and can offer data intensive enterprises in high energy physics and elsewhere a comprehensive, scalable framework for collaboration and resource sharing. A number of Data Grid projects have been underway since 1999. Interestingly, the most exciting and far ranging of these projects are led by collaborations of high energy physicists, computer scientists and scientists from other disciplines in support of experiments with massive, near-term data needs. I review progress in this

  5. Grid computing in high energy physics

    CERN Document Server

    Avery, P

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software r...

  6. Magnetic-fusion energy and computers

    International Nuclear Information System (INIS)

    Killeen, J.

    1982-01-01

    The application of computers to magnetic fusion energy research is essential. In the last several years the use of computers in the numerical modeling of fusion systems has increased substantially. There are several categories of computer models used to study the physics of magnetically confined plasmas. A comparable number of types of models for engineering studies are also in use. To meet the needs of the fusion program, the National Magnetic Fusion Energy Computer Center has been established at the Lawrence Livermore National Laboratory. A large central computing facility is linked to smaller computer centers at each of the major MFE laboratories by a communication network. In addition to providing cost effective computing services, the NMFECC environment stimulates collaboration and the sharing of computer codes among the various fusion research groups

  7. Magnetic fusion energy and computers

    International Nuclear Information System (INIS)

    Killeen, J.

    1982-01-01

    The application of computers to magnetic fusion energy research is essential. In the last several years the use of computers in the numerical modeling of fusion systems has increased substantially. There are several categories of computer models used to study the physics of magnetically confined plasmas. A comparable number of types of models for engineering studies are also in use. To meet the needs of the fusion program, the National Magnetic Fusion Energy Computer Center has been established at the Lawrence Livermore National Laboratory. A large central computing facility is linked to smaller computer centers at each of the major MFE laboratories by a communication network. In addition to providing cost effective computing services, the NMFECC environment stimulates collaboration and the sharing of computer codes among the various fusion research groups

  8. Computing trends using graphic processor in high energy physics

    CERN Document Server

    Niculescu, Mihai

    2011-01-01

    One of the main challenges in High Energy Physics is to make fast analysis of the large amounts of experimental and simulated data. At LHC-CERN one p-p event is approximately 1 Mb in size. The time taken to analyze the data and obtain fast results depends on high computational power. The main advantage of using GPU (Graphic Processor Unit) programming over traditional CPU programming is that graphics cards bring a lot of computing power at a very low price. Today a huge number of applications (scientific, financial, etc.) have begun to be ported to or developed for the GPU, including Monte Carlo tools and data analysis tools for High Energy Physics. In this paper, we'll present the current status and trends in HEP using GPU.

  9. Impact analysis on a massively parallel computer

    International Nuclear Information System (INIS)

    Zacharia, T.; Aramayo, G.A.

    1994-01-01

    Advanced mathematical techniques and computer simulation play a major role in evaluating and enhancing the design of beverage cans, industrial, and transportation containers for improved performance. Numerical models are used to evaluate the impact requirements of containers used by the Department of Energy (DOE) for transporting radioactive materials. Many of these models are highly compute-intensive. An analysis may require several hours of computational time on current supercomputers despite the simplicity of the models being studied. As computer simulations and materials databases grow in complexity, massively parallel computers have become important tools. Massively parallel computational research at the Oak Ridge National Laboratory (ORNL) and its application to the impact analysis of shipping containers is briefly described in this paper

  10. Computational Analysis of Nanoparticles-Molten Salt Thermal Energy Storage for Concentrated Solar Power Systems

    Energy Technology Data Exchange (ETDEWEB)

    Kumar, Vinod [Univ. of Texas, El Paso, TX (United States)

    2017-05-05

    High fidelity computational models of thermocline-based thermal energy storage (TES) were developed. The research goal was to advance the understanding of a single-tank nanofluidized molten salt thermocline TES system under various concentrations and sizes of the suspended particles. Our objective was to utilize sensible heat storage that operates with the least irreversibility by using nanoscale physics. This was achieved by performing computational analysis of several storage designs, analyzing storage efficiency and estimating cost effectiveness for the TES systems under a concentrating solar power (CSP) scheme using molten salt as the storage medium. Since TES is one of the most costly but important components of a CSP plant, an efficient TES system has the potential to make the electricity generated from solar technologies cost competitive with conventional sources of electricity.

  11. Survey of Energy Computing in the Smart Grid Domain

    OpenAIRE

    Rajesh Kumar; Arun Agarwala

    2013-01-01

    Resource optimization with advanced computing tools improves the efficient use of energy resources. Renewable energy resources are instantaneous and need to be conserved at the same time. Optimizing them in real time requires a complex design that includes resource planning and control for effective utilization. Advances in information and communication technology tools enable data formatting and analysis, resulting in optimized use of renewable resources for sustainable energy solution on s...

  12. Computer aided safety analysis

    International Nuclear Information System (INIS)

    1988-05-01

    The document reproduces 20 selected papers from the 38 papers presented at the Technical Committee/Workshop on Computer Aided Safety Analysis organized by the IAEA in co-operation with the Institute of Atomic Energy in Otwock-Swierk, Poland on 25-29 May 1987. A separate abstract was prepared for each of these 20 technical papers. Refs, figs and tabs

  13. Evaluation of four building energy analysis computer programs against ASHRAE standard 140-2007

    CSIR Research Space (South Africa)

    Szewczuk, S

    2014-08-01

    … standard or code of practice. Agrément requested the CSIR to evaluate a range of building energy simulation computer programs. The standard against which these computer programs were to be evaluated was developed by the American Society of Heating...

  14. High energy physics computing in Japan

    International Nuclear Information System (INIS)

    Watase, Yoshiyuki

    1989-01-01

    A brief overview of the computing provision for high energy physics in Japan is presented. Most of the computing power for high energy physics is concentrated in KEK. Here there are two large scale systems: one providing a general computing service including vector processing and the other dedicated to TRISTAN experiments. Each university group has a smaller sized mainframe or VAX system to facilitate both their local computing needs and the remote use of the KEK computers through a network. The large computer system for the TRISTAN experiments is described. An overview of a prospective future large facility is also given. (orig.)

  15. Grid computing in high-energy physics

    International Nuclear Information System (INIS)

    Bischof, R.; Kuhn, D.; Kneringer, E.

    2003-01-01

    The future high energy physics experiments are characterized by an enormous amount of data delivered by the large detectors presently under construction, e.g. at the Large Hadron Collider, and by a large number of scientists (several thousands) requiring simultaneous access to the resulting experimental data. Since it seems unrealistic to provide the necessary computing and storage resources at one single place (e.g. CERN), the concept of grid computing, i.e. the use of distributed resources, will be chosen. The DataGrid project (under the leadership of CERN) develops, based on the Globus toolkit, the software necessary for computation and analysis of shared large-scale databases in a grid structure. The high energy physics group Innsbruck participates with several resources in the DataGrid test bed. In this presentation our experience as grid users and resource provider is summarized. In cooperation with the local IT-center (ZID) we installed a flexible grid system which uses PCs (at the moment 162) in students' labs during nights, weekends and holidays, and which is especially used to compare different systems (local resource managers, other grid software e.g. from the Nordugrid project) and to supply a test bed for the future Austrian Grid (AGrid). (author)

  16. Energy consumption program: A computer model simulating energy loads in buildings

    Science.gov (United States)

    Stoller, F. W.; Lansing, F. L.; Chai, V. W.; Higgins, S.

    1978-01-01

    The JPL energy consumption computer program, developed as a useful tool in the ongoing building modification studies of the DSN energy conservation project, is described. The program simulates building heating and cooling loads and computes thermal and electric energy consumption and cost. The accuracy of the computations is not sacrificed, however, since the results lie within a ±10 percent margin of those read from energy meters. The program is carefully structured to reduce both the user's time and the running cost by requesting minimal information from the user and by eliminating many internal time-consuming computational loops. Many unique features were added to handle two-level electronics control rooms, which are not found in any other program.
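
    The record does not spell out the load algorithm; the sketch below shows the simplest form such a building-load computation can take, a steady-state UA times degree-hours model in Python. The UA value, set-point and temperature profile are assumptions for illustration, not JPL's actual method or data.

```python
import math

def heating_energy_kwh(ua_w_per_k, indoor_c, outdoor_hourly_c):
    """Sum hourly conduction losses whenever outdoors is colder than indoors."""
    watt_hours = sum(max(0.0, indoor_c - t) * ua_w_per_k
                     for t in outdoor_hourly_c)
    return watt_hours / 1000.0

# One illustrative winter day with a crude sinusoidal temperature swing.
day = [5.0 + 5.0 * math.sin(2.0 * math.pi * h / 24.0) for h in range(24)]
print(f"heating demand: {heating_energy_kwh(250.0, 20.0, day):.1f} kWh/day")
```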

  17. Energy Consumption Management of Virtual Cloud Computing Platform

    Science.gov (United States)

    Li, Lin

    2017-11-01

    For research on energy consumption management of virtual cloud computing platforms, the energy consumption management of virtual machines and of the cloud computing platform itself must be understood in depth; only then can the problems facing energy consumption management be solved. The key problem is data centers with high energy consumption, so new scientific techniques are greatly needed. Virtualization technology and cloud computing have become powerful tools in real life, work and production because of their considerable strengths and advantages, and both are now developing rapidly, with very high resource utilization rates. The presence of virtualization and cloud computing technologies is therefore necessary in the constantly developing information age. This paper summarizes, explains and further analyzes the energy consumption management questions of the virtual cloud computing platform, giving a clearer understanding of energy consumption management of such platforms and bringing help to various aspects of people's lives, work and so on.

  18. Fusion energy division computer systems network

    International Nuclear Information System (INIS)

    Hammons, C.E.

    1980-12-01

    The Fusion Energy Division of the Oak Ridge National Laboratory (ORNL) operated by Union Carbide Corporation Nuclear Division (UCC-ND) is primarily involved in the investigation of problems related to the use of controlled thermonuclear fusion as an energy source. The Fusion Energy Division supports investigations of experimental fusion devices and related fusion theory. This memo provides a brief overview of the computing environment in the Fusion Energy Division and the computing support provided to the experimental effort and theory research

  19. Computing in high-energy physics

    International Nuclear Information System (INIS)

    Mount, Richard P.

    2016-01-01

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Lastly, I describe recent developments aimed at improving the overall coherence of high-energy physics software.

  20. Computing in high-energy physics

    Science.gov (United States)

    Mount, Richard P.

    2016-04-01

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Finally, I describe recent developments aimed at improving the overall coherence of high-energy physics software.

  1. Run 2 analysis computing for CDF and D0

    International Nuclear Information System (INIS)

    Fuess, S.

    1995-11-01

    Two large experiments at the Fermilab Tevatron collider will use upgraded detectors for the Run 2 period of running. The associated analysis software is also expected to change, both to account for higher data rates and to embrace new computing paradigms. A discussion is given of the problems facing current and future High Energy Physics (HEP) analysis computing, and several issues are explored in detail

  2. Energy System Analysis of 100 Per cent Renewable Energy Systems

    DEFF Research Database (Denmark)

    Lund, Henrik; Mathiesen, Brian Vad

    2007-01-01

    This paper presents the methodology and results of the overall energy system analysis of a 100 per cent renewable energy system. The input for the systems is the result of a project of the Danish Association of Engineers, in which 1600 participants during more than 40 seminars discussed...... and designed a model for the future energy system of Denmark, putting emphasis on energy efficiency, CO2 reduction, and industrial development. The energy system analysis methodology includes hour by hour computer simulations leading to the design of flexible energy systems with the ability to balance...... the electricity supply and demand and to exchange electricity productions on the international electricity markets. The results are detailed system designs and energy balances for two energy target years: year 2050 with 100 per cent renewable energy from biomass and combinations of wind, wave and solar power...
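
    The hour-by-hour balancing mentioned here can be illustrated with a toy dispatch loop: renewable output is used first, dispatchable units cover the residual load, and surplus is exported or stored. The sketch below is a generic illustration with invented numbers, not the detailed national model used in the study.

```python
import numpy as np

rng = np.random.default_rng(7)
hours = 24 * 7
demand = 3.0 + 1.0 * np.sin(2 * np.pi * np.arange(hours) / 24)  # GW, assumed
wind = np.clip(rng.normal(2.0, 1.2, hours), 0.0, None)          # GW, assumed

residual = demand - wind
dispatch = np.clip(residual, 0.0, None)   # dispatchable generation fills deficit
export = np.clip(-residual, 0.0, None)    # surplus wind exported or stored

print(f"wind share of demand: {min(1.0, wind.sum() / demand.sum()):.0%}")
print(f"peak dispatchable output: {dispatch.max():.2f} GW, "
      f"exported energy: {export.sum():.1f} GWh")
```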

  3. Energy-aware memory management for embedded multimedia systems a computer-aided design approach

    CERN Document Server

    Balasa, Florin

    2011-01-01

    Energy-Aware Memory Management for Embedded Multimedia Systems: A Computer-Aided Design Approach presents recent computer-aided design (CAD) ideas that address memory management tasks, particularly the optimization of energy consumption in the memory subsystem. It explains how to efficiently implement CAD solutions, including theoretical methods and novel algorithms. The book covers various energy-aware design techniques, including data-dependence analysis techniques, memory size estimation methods, extensions of mapping approaches, and memory banking approaches. It shows how these techniques

  4. Computational methods for planning and evaluating geothermal energy projects

    International Nuclear Information System (INIS)

    Goumas, M.G.; Lygerou, V.A.; Papayannakis, L.E.

    1999-01-01

    In planning, designing and evaluating a geothermal energy project, a number of technical, economic, social and environmental parameters should be considered. The use of computational methods provides a rigorous analysis improving the decision-making process. This article demonstrates the application of decision-making methods developed in operational research for the optimum exploitation of geothermal resources. Two characteristic problems are considered: (1) the economic evaluation of a geothermal energy project under uncertain conditions using a stochastic analysis approach and (2) the evaluation of alternative exploitation schemes for optimum development of a low enthalpy geothermal field using a multicriteria decision-making procedure. (Author)
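
    For the first problem type, a stochastic appraisal typically means Monte Carlo sampling of the uncertain costs and revenues and inspecting the resulting NPV distribution. A minimal Python sketch under invented figures follows; the capital cost, annual revenue and discount rate are assumptions, not the article's case data.

```python
import numpy as np

rng = np.random.default_rng(42)
n, years, rate = 10_000, 20, 0.08
capex = rng.normal(10.0, 1.5, n)             # M$, uncertain drilling cost
revenue = rng.normal(1.6, 0.3, (n, years))   # M$/yr, uncertain heat sales
discount = (1.0 + rate) ** -np.arange(1, years + 1)

npv = revenue @ discount - capex             # one NPV per Monte Carlo draw
print(f"mean NPV = {npv.mean():.2f} M$, P(NPV < 0) = {(npv < 0).mean():.1%}")
```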

  5. Exascale for Energy: The Role of Exascale Computing in Energy Security

    International Nuclear Information System (INIS)

    2010-01-01

    How will the United States satisfy energy demand in a tightening global energy marketplace while, at the same time, reducing greenhouse gas emissions? Exascale computing - expected to be available within the next eight to ten years - may play a crucial role in answering that question by enabling a paradigm shift from test-based to science-based design and engineering. Computational modeling of complete power generation systems and engines, based on scientific first principles, will accelerate the improvement of existing energy technologies and the development of new transformational technologies by pre-selecting the designs most likely to be successful for experimental validation, rather than relying on trial and error. The predictive understanding of complex engineered systems made possible by computational modeling will also reduce the construction and operations costs, optimize performance, and improve safety. Exascale computing will make possible fundamentally new approaches to quantifying the uncertainty of safety and performance engineering. This report discusses potential contributions of exascale modeling in four areas of energy production and distribution: nuclear power, combustion, the electrical grid, and renewable sources of energy, which include hydrogen fuel, bioenergy conversion, photovoltaic solar energy, and wind turbines.

  6. Soft computing in green and renewable energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Gopalakrishnan, Kasthurirangan [Iowa State Univ., Ames, IA (United States). Iowa Bioeconomy Inst.; US Department of Energy, Ames, IA (United States). Ames Lab; Kalogirou, Soteris [Cyprus Univ. of Technology, Limassol (Cyprus). Dept. of Mechanical Engineering and Materials Sciences and Engineering; Khaitan, Siddhartha Kumar (eds.) [Iowa State Univ. of Science and Technology, Ames, IA (United States). Dept. of Electrical Engineering and Computer Engineering

    2011-07-01

    Soft Computing in Green and Renewable Energy Systems provides a practical introduction to the application of soft computing techniques and hybrid intelligent systems for designing, modeling, characterizing, optimizing, forecasting, and performance prediction of green and renewable energy systems. Research is proceeding at jet speed on renewable energy (energy derived from natural resources such as sunlight, wind, tides, rain, geothermal heat, biomass, hydrogen, etc.) as policy makers, researchers, economists, and world agencies have joined forces in finding alternative sustainable energy solutions to current critical environmental, economic, and social issues. The innovative models, environmentally benign processes, data analytics, etc. employed in renewable energy systems are computationally-intensive, non-linear and complex as well as involve a high degree of uncertainty. Soft computing technologies, such as fuzzy sets and systems, neural science and systems, evolutionary algorithms and genetic programming, and machine learning, are ideal in handling the noise, imprecision, and uncertainty in the data, and yet achieve robust, low-cost solutions. As a result, intelligent and soft computing paradigms are finding increasing applications in the study of renewable energy systems. Researchers, practitioners, undergraduate and graduate students engaged in the study of renewable energy systems will find this book very useful. (orig.)

  7. Computing with memory for energy-efficient robust systems

    CERN Document Server

    Paul, Somnath

    2013-01-01

    This book analyzes energy and reliability as major challenges faced by designers of computing frameworks in the nanometer technology regime.  The authors describe the existing solutions to address these challenges and then reveal a new reconfigurable computing platform, which leverages high-density nanoscale memory for both data storage and computation to maximize the energy-efficiency and reliability. The energy and reliability benefits of this new paradigm are illustrated and the design challenges are discussed. Various hardware and software aspects of this exciting computing paradigm are de

  8. A mechanical energy analysis of gait initiation

    Science.gov (United States)

    Miller, C. A.; Verstraete, M. C.

    1999-01-01

    The analysis of gait initiation (the transient state between standing and walking) is an important diagnostic tool to study pathologic gait and to evaluate prosthetic devices. While past studies have quantified mechanical energy of the body during steady-state gait, to date no one has computed the mechanical energy of the body during gait initiation. In this study, gait initiation in seven normal male subjects was studied using a mechanical energy analysis to compute total body energy. The data showed three separate states: quiet standing, gait initiation, and steady-state gait. During gait initiation, the trends in the energy data for the individual segments were similar to those seen during steady-state gait (and in Winter DA, Quanbury AO, Reimer GD. Analysis of instantaneous energy of normal gait. J Biomech 1976;9:253-257), but diminished in amplitude. However, these amplitudes increased to those seen in steady-state during the gait initiation event (GIE), with the greatest increase occurring in the second step due to the push-off of the foundation leg. The baseline level of mechanical energy was due to the potential energy of the individual segments, while the cyclic nature of the data was indicative of the kinetic energy of the particular leg in swing phase during that step. The data presented showed differences in energy trends during gait initiation from those of steady state, thereby demonstrating the importance of this event in the study of locomotion.
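
    The segment energies summed in such an analysis follow the standard decomposition E = mgh + ½mv² + ½Iω² (potential plus translational and rotational kinetic terms), as in the Winter et al. formulation cited above. A small Python sketch with invented segment parameters:

```python
def segment_energy(m, h, v, inertia, omega, g=9.81):
    """Instantaneous mechanical energy of one body segment:
    potential + translational kinetic + rotational kinetic."""
    return m * g * h + 0.5 * m * v**2 + 0.5 * inertia * omega**2

# Total body energy = sum over segments at each sampled instant.
# Illustrative shank-like values: (mass kg, CoM height m, CoM speed m/s,
# moment of inertia kg*m^2, angular velocity rad/s).
segments = [(3.2, 0.25, 1.1, 0.04, 4.0),   # swing-side shank
            (3.2, 0.25, 0.3, 0.04, 1.0)]   # stance-side shank
total = sum(segment_energy(*s) for s in segments)
print(f"total mechanical energy of the two segments: {total:.1f} J")
```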

  9. Exascale for Energy: The Role of Exascale Computing in Energy Security

    Energy Technology Data Exchange (ETDEWEB)

    Authors, Various

    2010-07-15

    How will the United States satisfy energy demand in a tightening global energy marketplace while, at the same time, reducing greenhouse gas emissions? Exascale computing -- expected to be available within the next eight to ten years -- may play a crucial role in answering that question by enabling a paradigm shift from test-based to science-based design and engineering. Computational modeling of complete power generation systems and engines, based on scientific first principles, will accelerate the improvement of existing energy technologies and the development of new transformational technologies by pre-selecting the designs most likely to be successful for experimental validation, rather than relying on trial and error. The predictive understanding of complex engineered systems made possible by computational modeling will also reduce the construction and operations costs, optimize performance, and improve safety. Exascale computing will make possible fundamentally new approaches to quantifying the uncertainty of safety and performance engineering. This report discusses potential contributions of exascale modeling in four areas of energy production and distribution: nuclear power, combustion, the electrical grid, and renewable sources of energy, which include hydrogen fuel, bioenergy conversion, photovoltaic solar energy, and wind turbines. Examples of current research are taken from projects funded by the U.S. Department of Energy (DOE) Office of Science at universities and national laboratories, with a special focus on research conducted at Lawrence Berkeley National Laboratory.

  10. Computer code for qualitative analysis of gamma-ray spectra

    International Nuclear Information System (INIS)

    Yule, H.P.

    1979-01-01

    Computer code QLN1 provides complete analysis of gamma-ray spectra observed with Ge(Li) detectors and is used at both the National Bureau of Standards and the Environmental Protection Agency. It locates peaks, resolves multiplets, identifies component radioisotopes, and computes quantitative results. The qualitative-analysis (or component identification) algorithms feature thorough, self-correcting steps which provide accurate isotope identification in spite of errors in peak centroids, energy calibration, and other typical problems. The qualitative-analysis algorithm is described in this paper
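
    The first step of such an analysis, peak location, can be illustrated with a generic prominence-based peak search; the sketch below is not QLN1's algorithm, just a Python stand-in on a toy spectrum with an assumed linear energy calibration.

```python
import numpy as np
from scipy.signal import find_peaks

# Toy spectrum: two Gaussian photopeaks on a decaying background,
# with Poisson counting noise.
ch = np.arange(2048)
spec = 200.0 * np.exp(-0.002 * ch)
for c0, area in [(662, 5000.0), (1173, 3000.0)]:
    spec += area / (2.5 * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((ch - c0) / 2.5) ** 2)
counts = np.random.default_rng(0).poisson(spec)

# Locate peaks rising well above the local background, then convert
# channel to energy with an assumed calibration of 1 keV per channel.
peaks, _ = find_peaks(counts, prominence=100)
slope_kev_per_ch, offset_kev = 1.0, 0.0
print("peak energies (keV):", (offset_kev + slope_kev_per_ch * peaks).round(1))
```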

  11. National Energy Research Scientific Computing Center (NERSC): Advancing the frontiers of computational science and technology

    Energy Technology Data Exchange (ETDEWEB)

    Hules, J. [ed.]

    1996-11-01

    National Energy Research Scientific Computing Center (NERSC) provides researchers with high-performance computing tools to tackle science's biggest and most challenging problems. Founded in 1974 by DOE/ER, the Controlled Thermonuclear Research Computer Center was the first unclassified supercomputer center and was the model for those that followed. Over the years the center's name was changed to the National Magnetic Fusion Energy Computer Center and then to NERSC; it was relocated to LBNL. NERSC, one of the largest unclassified scientific computing resources in the world, is the principal provider of general-purpose computing services to DOE/ER programs: Magnetic Fusion Energy, High Energy and Nuclear Physics, Basic Energy Sciences, Health and Environmental Research, and the Office of Computational and Technology Research. NERSC users are a diverse community located throughout US and in several foreign countries. This brochure describes: the NERSC advantage, its computational resources and services, future technologies, scientific resources, and computational science of scale (interdisciplinary research over a decade or longer; examples: combustion in engines, waste management chemistry, global climate change modeling).

  12. Free energy minimization to predict RNA secondary structures and computational RNA design.

    Science.gov (United States)

    Churkin, Alexander; Weinbrand, Lina; Barash, Danny

    2015-01-01

    Determining the RNA secondary structure from sequence data by computational predictions is a long-standing problem. Its solution has been approached in two distinctive ways. If a multiple sequence alignment of a collection of homologous sequences is available, the comparative method uses phylogeny to determine conserved base pairs that are more likely to form as a result of billions of years of evolution than by chance. In the case of single sequences, recursive algorithms that compute free energy structures by using empirically derived energy parameters have been developed. This latter approach of RNA folding prediction by energy minimization is widely used to predict RNA secondary structure from sequence. For a significant number of RNA molecules, the secondary structure of the RNA molecule is indicative of its function and its computational prediction by minimizing its free energy is important for its functional analysis. A general method for free energy minimization to predict RNA secondary structures is dynamic programming, although other optimization methods have been developed as well along with empirically derived energy parameters. In this chapter, we introduce and illustrate by examples the approach of free energy minimization to predict RNA secondary structures.
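
    The recursive idea behind these algorithms can be shown with the classic Nussinov dynamic program, which maximizes base pairs; this is a deliberately simplified stand-in (equivalent to minimizing a toy energy of −1 per pair) for the nearest-neighbor free-energy minimization that production folders perform.

```python
def nussinov(seq, min_loop=3):
    """Toy dynamic program: maximum number of nested base pairs."""
    can_pair = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
                ("G", "U"), ("U", "G")}
    n = len(seq)
    dp = [[0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = dp[i][j - 1]                      # j left unpaired
            for k in range(i, j - min_loop):         # j pairs with k
                if (seq[k], seq[j]) in can_pair:
                    left = dp[i][k - 1] if k > i else 0
                    best = max(best, left + 1 + dp[k + 1][j - 1])
            dp[i][j] = best
    return dp[0][n - 1]

print("max base pairs:", nussinov("GGGAAAUCCAGCUUCGGCU"))
```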

  13. Computed Potential Energy Surfaces and Minimum Energy Pathways for Chemical Reactions

    Science.gov (United States)

    Walch, Stephen P.; Langhoff, S. R. (Technical Monitor)

    1994-01-01

    Computed potential energy surfaces are often required for computation of such parameters as rate constants as a function of temperature, product branching ratios, and other detailed properties. For some dynamics methods, global potential energy surfaces are required. In this case, it is necessary to obtain the energy at a complete sampling of all the possible arrangements of the nuclei, which are energetically accessible, and then a fitting function must be obtained to interpolate between the computed points. In other cases, characterization of the stationary points and the reaction pathway connecting them is sufficient. These properties may be readily obtained using analytical derivative methods. We have found that computation of the stationary points/reaction pathways using CASSCF/derivative methods, followed by use of the internally contracted CI method to obtain accurate energetics, gives useful results for a number of chemically important systems. The talk will focus on a number of applications including global potential energy surfaces, H + O2, H + N2, O(3P) + H2, and reaction pathways for complex reactions, including reactions leading to NO and soot formation in hydrocarbon combustion.
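
    The "fitting function" step can be illustrated in one dimension by fitting a Morse curve to pointwise energies, shown below with synthetic data; the real surfaces discussed above are multidimensional and use more elaborate functional forms.

```python
import numpy as np
from scipy.optimize import curve_fit

def morse(r, d_e, a, r_e):
    """Morse potential: D_e * (1 - exp(-a (r - r_e)))^2."""
    return d_e * (1.0 - np.exp(-a * (r - r_e))) ** 2

# Synthetic "ab initio" points with small noise (illustrative only).
r = np.linspace(0.6, 3.0, 25)
e = morse(r, 4.7, 1.9, 0.97) + np.random.default_rng(3).normal(0.0, 0.01, r.size)

popt, _ = curve_fit(morse, r, e, p0=[4.0, 2.0, 1.0])
print("fitted D_e, a, r_e:", popt.round(3))
```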

  14. Energy-Aware Computation Offloading of IoT Sensors in Cloudlet-Based Mobile Edge Computing.

    Science.gov (United States)

    Ma, Xiao; Lin, Chuang; Zhang, Han; Liu, Jianwei

    2018-06-15

    Mobile edge computing is proposed as a promising computing paradigm to relieve the excessive burden of data centers and mobile networks, which is induced by the rapid growth of Internet of Things (IoT). This work introduces the cloud-assisted multi-cloudlet framework to provision scalable services in cloudlet-based mobile edge computing. Due to the constrained computation resources of cloudlets and limited communication resources of wireless access points (APs), IoT sensors with identical computation offloading decisions interact with each other. To optimize the processing delay and energy consumption of computation tasks, theoretic analysis of the computation offloading decision problem of IoT sensors is presented in this paper. In more detail, the computation offloading decision problem of IoT sensors is formulated as a computation offloading game and the condition of Nash equilibrium is derived by introducing the tool of a potential game. By exploiting the finite improvement property of the game, the Computation Offloading Decision (COD) algorithm is designed to provide decentralized computation offloading strategies for IoT sensors. Simulation results demonstrate that the COD algorithm can significantly reduce the system cost compared with the random-selection algorithm and the cloud-first algorithm. Furthermore, the COD algorithm can scale well with increasing IoT sensors.
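
    The structure described, players repeatedly switching to their best response until no one can improve, is guaranteed to terminate in a potential game. The Python sketch below is a toy congestion version of that idea (fixed local cost per sensor, offload cost growing with the number of offloaders); the cost functions are invented, not the paper's model.

```python
import random

random.seed(0)
n = 8
local_cost = [random.uniform(3.0, 6.0) for _ in range(n)]  # per-sensor cost

def offload_cost(num_offloaders):
    """Shared-channel congestion: offloading gets dearer as more join."""
    return 1.0 + 1.2 * num_offloaders

choice = [0] * n          # 0 = compute locally, 1 = offload
changed = True
while changed:            # finite improvement property => terminates
    changed = False
    for i in range(n):
        others = sum(choice) - choice[i]
        best = 1 if offload_cost(others + 1) < local_cost[i] else 0
        if best != choice[i]:
            choice[i], changed = best, True

print("equilibrium strategies (1 = offload):", choice)
```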

  15. Large scale computing in the Energy Research Programs

    International Nuclear Information System (INIS)

    1991-05-01

    The Energy Research Supercomputer Users Group (ERSUG) comprises all investigators using resources of the Department of Energy Office of Energy Research supercomputers. At the December 1989 meeting held at Florida State University (FSU), the ERSUG executive committee determined that the continuing rapid advances in computational sciences and computer technology demanded a reassessment of the role computational science should play in meeting DOE's commitments. Initial studies were to be performed for four subdivisions: (1) Basic Energy Sciences (BES) and Applied Mathematical Sciences (AMS), (2) Fusion Energy, (3) High Energy and Nuclear Physics, and (4) Health and Environmental Research. The first two subgroups produced formal subreports that provided a basis for several sections of this report. Additional information provided in the AMS/BES subreport is included as Appendix C in an abridged form that eliminates most duplication. Additionally, each member of the executive committee was asked to contribute area-specific assessments; these assessments are included in the next section. In the following sections, brief assessments are given for specific areas, a conceptual model is proposed in which the entire computational effort for energy research is best viewed as one giant nation-wide computer, and specific recommendations are made for the appropriate evolution of the system.

  16. Energy Dissipation in Quantum Computers

    OpenAIRE

    Granik, A.; Chapline, G.

    2003-01-01

    A method is described for calculating the heat generated in a quantum computer due to loss of quantum phase information. Amazingly enough, this heat generation can take place at zero temperature, and may explain why it is impossible to extract energy from vacuum fluctuations. Implications for optical computers and quantum cosmology are also briefly discussed.

  17. CAISSE (Computer Aided Information System on Solar Energy) technical manual

    Energy Technology Data Exchange (ETDEWEB)

    Cantelon, P E; Beinhauer, F W

    1979-01-01

    The Computer Aided Information System on Solar Energy (CAISSE) was developed to provide the general public with information on solar energy and its potential uses and costs for domestic consumption. CAISSE is an interactive computing system which illustrates solar heating concepts through the use of 35 mm slides, text displays on a screen and a printed report. The user communicates with the computer by responding to questions about his home and heating requirements through a touch sensitive screen. The CAISSE system contains a solar heating simulation model which calculates the heating load capable of being supplied by a solar heating system and uses this information to illustrate installation costs, fuel savings and a 20 year life-cycle analysis of cost and benefits. The system contains several sets of radiation and weather data for Canada and USA. The selection of one of four collector models is based upon the requirements input during the computer session. Optimistic and pessimistic fuel cost forecasts are made for oil, natural gas, electricity, or propane; and the forecasted fuel cost is made the basis of the life cycle cost evaluation for the solar heating application chosen. This manual is organized so that each section describes one major aspect of the use of solar energy systems to provide energy for domestic consumption. The sources of data and technical information and the method of incorporating them into the CAISSE display system are described in the same order as the computer processing. Each section concludes with a list of future developments that could be included to make CAISSE outputs more regionally specific and more useful to designers. 19 refs., 1 tab.

  18. Energy efficient distributed computing systems

    CERN Document Server

    Lee, Young-Choon

    2012-01-01

    The energy consumption issue in distributed computing systems raises various monetary, environmental and system performance concerns. Electricity consumption in the US doubled from 2000 to 2005. From a financial and environmental standpoint, reducing the consumption of electricity is important, yet these reforms must not lead to performance degradation of the computing systems. These contradicting constraints create a suite of complex problems that need to be resolved in order to lead to 'greener' distributed computing systems. This book brings together a group of outstanding researchers in this area.

  19. High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science

    International Nuclear Information System (INIS)

    Pop, Florin

    2014-01-01

    Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios like subatomic dimensions, high energy, and lower absolute temperature are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high-accuracy analysis, experimental validation, and visualization. High performance computing support offers the possibility to make simulations at large scale, in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP), analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
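
    As a toy illustration of two of the listed ingredients, Monte Carlo sampling and kernel estimation, the following sketch generates an invented peak-over-background spectrum and smooths it with Gaussian kernels; all numbers are made up:

```python
import math, random

# Toy Monte Carlo in the HEP spirit: sample a "signal" peak over a flat
# background, then reconstruct the spectrum with Gaussian kernel
# estimation. All numbers are invented for illustration.
random.seed(42)
signal = [random.gauss(91.2, 2.5) for _ in range(500)]   # Z-like mass peak
background = [random.uniform(70.0, 110.0) for _ in range(500)]
data = signal + background

def kde(x, sample, h=1.0):
    """Gaussian kernel density estimate at point x with bandwidth h."""
    norm = 1.0 / (len(sample) * h * math.sqrt(2.0 * math.pi))
    return norm * sum(math.exp(-0.5 * ((x - s) / h) ** 2) for s in sample)

for m in (80, 85, 90, 91, 95, 100):
    print(f"mass {m:>3}  estimated density {kde(m, data):.4f}")
```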

  20. A directory of computer software applications: energy. Report for 1974--1976

    International Nuclear Information System (INIS)

    Grooms, D.W.

    1977-04-01

    The computer programs or the computer program documentation cited in this directory have been developed for a variety of applications in the field of energy. The cited computer software includes applications in solar energy, petroleum resources, batteries, electrohydrodynamic generators, magnetohydrodynamic generators, natural gas, nuclear fission, nuclear fusion, hydroelectric power production, and geothermal energy. The computer software cited has been used for simulation and modeling, calculations of future energy requirements, calculations of energy conservation measures, and computations of economic considerations of energy systems.

  1. Shielding Benchmark Computational Analysis

    International Nuclear Information System (INIS)

    Hunter, H.T.; Slater, C.O.; Holland, L.B.; Tracz, G.; Marshall, W.J.; Parsons, J.L.

    2000-01-01

    Over the past several decades, nuclear science has relied on experimental research to verify and validate information about shielding nuclear radiation for a variety of applications. These benchmarks are compared with results from computer code models and are useful for the development of more accurate cross-section libraries, the development of computer codes for radiation transport modeling, and the building of accurate tests for miniature shielding mockups of new nuclear facilities. When documenting measurements, one must describe many parts of the experimental results to allow a complete computational analysis. Both old and new benchmark experiments, by any definition, must provide a sound basis for modeling the more complex geometries required for quality assurance and cost savings in nuclear project development. Benchmarks may involve one or many materials and thicknesses, types of sources, and measurement techniques. In this paper, benchmark experiments of varying complexity are chosen to study the transport properties of some popular materials and thicknesses. These were analyzed using three-dimensional (3-D) models and continuous energy libraries of MCNP4B2, a Monte Carlo code developed at Los Alamos National Laboratory, New Mexico. A shielding benchmark library provided the experimental data and allowed a wide range of choices for source, geometry, and measurement data. The experimental data had often been used in previous analyses by reputable groups such as the Cross Section Evaluation Working Group (CSEWG) and the Organization for Economic Cooperation and Development/Nuclear Energy Agency Nuclear Science Committee (OECD/NEANSC).

  2. CHEP95: Computing in high energy physics. Abstracts

    International Nuclear Information System (INIS)

    1995-01-01

    These proceedings cover the technical papers on computation in High Energy Physics, including computer codes, computer devices, control systems, simulations, and data acquisition systems. New approaches to computer architectures are also discussed.

  3. Computational movement analysis

    CERN Document Server

    Laube, Patrick

    2014-01-01

    This SpringerBrief discusses the characteristics of spatiotemporal movement data, including uncertainty and scale. It investigates three core aspects of Computational Movement Analysis: conceptual modeling of movement and movement spaces, spatiotemporal analysis methods aiming at a better understanding of movement processes (with a focus on data mining for movement patterns), and using decentralized spatial computing methods in movement analysis. The author presents Computational Movement Analysis as an interdisciplinary umbrella for analyzing movement processes with methods from a range of fields.

  4. Bringing together high energy physicist and computer scientist

    International Nuclear Information System (INIS)

    Bock, R.K.

    1989-01-01

    The Oxford Conference on Computing in High Energy Physics approached the physics and computing issues with the question, "Can computer science help?" always in mind. This summary is a personal recollection of what I considered to be the highlights of the conference: the parts which contributed to my own learning experience. It can be used as a general introduction to the following papers, or as a brief overview of the current state of computer science within high energy physics. (orig.)

  5. Energy-density field approach for low- and medium-frequency vibroacoustic analysis of complex structures using a statistical computational model

    Science.gov (United States)

    Kassem, M.; Soize, C.; Gagliardini, L.

    2009-06-01

    In this paper, an energy-density field approach applied to the vibroacoustic analysis of complex industrial structures in the low- and medium-frequency ranges is presented. This approach uses a statistical computational model. The analyzed system consists of an automotive vehicle structure coupled with its internal acoustic cavity. The objective of this paper is to make use of the statistical properties of the frequency response functions of the vibroacoustic system observed from previous experimental and numerical work. The frequency response functions are expressed in terms of a dimensionless matrix which is estimated using the proposed energy approach. Using this dimensionless matrix, a simplified vibroacoustic model is proposed.

  6. Aiding Design of Wave Energy Converters via Computational Simulations

    Science.gov (United States)

    Jebeli Aqdam, Hejar; Ahmadi, Babak; Raessi, Mehdi; Tootkaboni, Mazdak

    2015-11-01

    With the increasing interest in renewable energy sources, wave energy converters will continue to gain attention as a viable alternative to current electricity production methods. It is therefore crucial to develop computational tools for the design and analysis of wave energy converters. A successful design requires balance between the design performance and cost. Here an analytical solution is used for the approximate analysis of interactions between a flap-type wave energy converter (WEC) and waves. The method is verified using other flow solvers and experimental test cases. Then the model is used in conjunction with a powerful heuristic optimization engine, Charged System Search (CSS), to explore the WEC design space. CSS is inspired by the behavior of charged particles: it searches the design space by treating candidate answers as charged particles and moving them according to Coulomb's law of electrostatics and Newton's laws of motion to find the global optimum. Finally, the impacts of changes in different design parameters on the power take-off of the superior WEC designs are investigated. National Science Foundation, CBET-1236462.
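
    A minimal CSS sketch on a generic test function, assuming fitness-derived charges and a simple linear (inside-sphere) attraction term; this is not the authors' WEC-specific implementation, and all parameters are illustrative:

```python
import random

# Simplified Charged System Search (CSS) on a generic test function.
# Particles carry fitness-derived "charges"; each is attracted to the
# others in proportion to their charge (the linear, inside-sphere form
# of the Coulomb term) and moves by a damped Newtonian update.
random.seed(0)

def fitness(x):                       # sphere function, minimum at origin
    return sum(v * v for v in x)

DIM, N, ITERS = 2, 12, 300
pos = [[random.uniform(-5, 5) for _ in range(DIM)] for _ in range(N)]
vel = [[0.0] * DIM for _ in range(N)]

for _ in range(ITERS):
    fits = [fitness(p) for p in pos]
    worst, best = max(fits), min(fits)
    charges = [(worst - f) / (worst - best + 1e-12) for f in fits]
    for i in range(N):
        force = [sum(charges[j] * (pos[j][k] - pos[i][k])
                     for j in range(N) if j != i) for k in range(DIM)]
        for k in range(DIM):
            vel[i][k] = 0.6 * vel[i][k] + 0.02 * force[k]
            pos[i][k] += vel[i][k]

print(min(fitness(p) for p in pos))   # best fitness shrinks toward 0
```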

  7. Exascale for Energy: The Role of Exascale Computing in Energy Security

    OpenAIRE

    Authors, Various

    2010-01-01

    How will the United States satisfy energy demand in a tightening global energy marketplace while, at the same time, reducing greenhouse gas emissions? Exascale computing -- expected to be available within the next eight to ten years -- may play a crucial role in answering that question by enabling a paradigm shift from test-based to science-based design and engineering. Computational modeling of complete power generation systems and engines, based on scientific first principles, will accelerate...

  8. A review of computer tools for analysing the integration of renewable energy into various energy systems

    DEFF Research Database (Denmark)

    Connolly, D.; Lund, Henrik; Mathiesen, Brian Vad

    2010-01-01

    This paper includes a review of the different computer tools that can be used to analyse the integration of renewable energy. Initially 68 tools were considered, but 37 were included in the final analysis, which was carried out in collaboration with the tool developers or recommended points of contact. The results in this paper provide the information necessary to identify a suitable energy tool for analysing the integration of renewable energy into various energy-systems under different objectives. It is evident from this paper that there is no energy tool that addresses all issues related to integrating renewable energy; instead, the ‘ideal’ energy tool is highly dependent on the specific objectives that must be fulfilled. The typical applications for the 37 tools reviewed (from analysing single-building systems to national energy-systems), combined with numerous other factors, inform this choice.

  9. Analysis of the energy development variants

    International Nuclear Information System (INIS)

    Tsvetanov, P.

    1990-01-01

    The analysis of energy development variants is the third stage of a procedure for studying the dynamics of energy-economy interrelations, the other two stages being scenario description and variant formulation. This stage includes research on the dimensions and dynamics of resource demands and on the general features and trends of national energy development. A comparative analysis of the variants is presented in terms of economic indices and energy values computed by the IMPACT-B model. A resource evaluation of the development variants is given in terms of investments and the direct, indirect, and total demands the energy system places on limited national resources. The trends of national energy development discussed are: trends characterizing changes in the structure of energy consumption resulting from changes in the economy; trends of the energy system's impact on the productivity of labor; and general trends of proportionality in the development of the industrial, household, and services sectors. 16 refs., 16 figs., 4 tabs. (R.Ts.)

  10. Intelligent computing for sustainable energy and environment

    Energy Technology Data Exchange (ETDEWEB)

    Li, Kang [Queen' s Univ. Belfast (United Kingdom). School of Electronics, Electrical Engineering and Computer Science; Li, Shaoyuan; Li, Dewei [Shanghai Jiao Tong Univ., Shanghai (China). Dept. of Automation; Niu, Qun (eds.) [Shanghai Univ. (China). School of Mechatronic Engineering and Automation

    2013-07-01

    This book constitutes the refereed proceedings of the Second International Conference on Intelligent Computing for Sustainable Energy and Environment, ICSEE 2012, held in Shanghai, China, in September 2012. The 60 full papers presented were carefully reviewed and selected from numerous submissions and present theories and methodologies as well as the emerging applications of intelligent computing in sustainable energy and environment.

  11. Visual Cluster Analysis for Computing Tasks at Workflow Management System of the ATLAS Experiment

    CERN Document Server

    Grigoryeva, Maria; The ATLAS collaboration

    2018-01-01

    Hundreds of petabytes of experimental data in high energy and nuclear physics (HENP) have already been obtained by unique scientific facilities such as LHC, RHIC, and KEK. As the accelerators are modernized (energy and luminosity are increased), data volumes are rapidly growing and have reached the exabyte scale; this also increases the number of analysis and data processing tasks, which compete continuously for computational resources. The growth in the number of processing tasks is met by increasing the capacity of the computing environment through the involvement of high-performance computing resources, forming a heterogeneous distributed computing environment (hundreds of distributed computing centers). In addition, errors occur while executing tasks for data analysis and processing, caused by software and hardware failures. With a distributed model of data processing and analysis, the optimization of data management and workload systems becomes a fundamental task, and the ...

  12. Energy-Water Modeling and Analysis | Energy Analysis | NREL

    Science.gov (United States)

    NREL's energy-water modeling and analysis examines energy-sector vulnerabilities arising from various factors, including water. Example projects include the Renewable Electricity Futures Study (ReEDS model analysis) and U.S. Energy Sector Vulnerabilities to Climate Change and Extreme Weather.

  13. Comparison of energy expenditure in adolescents when playing new generation and sedentary computer games: cross sectional study.

    Science.gov (United States)

    Graves, Lee; Stratton, Gareth; Ridgers, N D; Cable, N T

    2007-12-22

    To compare the energy expenditure of adolescents when playing sedentary and new generation active computer games. Cross sectional comparison of four computer games. Research laboratories. Six boys and five girls aged 13-15 years. Procedure: Participants were fitted with a monitoring device validated to predict energy expenditure. They played four computer games for 15 minutes each. One of the games was sedentary (XBOX 360) and the other three were active (Wii Sports). Predicted energy expenditure, compared using repeated measures analysis of variance. Mean (standard deviation) predicted energy expenditure when playing Wii Sports bowling (190.6 (22.2) kJ/kg/min), tennis (202.5 (31.5) kJ/kg/min), and boxing (198.1 (33.9) kJ/kg/min) was significantly greater than when playing sedentary games (125.5 (13.7) kJ/kg/min) (P<0.001). Predicted energy expenditure was at least 65.1 (95% confidence interval 47.3 to 82.9) kJ/kg/min greater when playing active rather than sedentary games. Playing new generation active computer games uses significantly more energy than playing sedentary computer games but not as much energy as playing the sport itself. The energy used when playing active Wii Sports games was not of high enough intensity to contribute towards the recommended daily amount of exercise in children.
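
    A hedged sketch of the within-subject comparison such a design implies, using invented data matched to the reported means and a paired t-test as a simpler stand-in for repeated measures ANOVA:

```python
import numpy as np
from scipy import stats

# Invented within-subject data matched to the reported group means; a
# paired t-test stands in for the study's repeated measures ANOVA.
rng = np.random.default_rng(0)
n = 11                                     # six boys + five girls
sedentary = rng.normal(125.5, 13.7, n)     # kJ/kg/min, per reported means
wii_tennis = rng.normal(202.5, 31.5, n)

t_stat, p_value = stats.ttest_rel(wii_tennis, sedentary)
diff = np.mean(wii_tennis - sedentary)
print(f"mean difference = {diff:.1f} kJ/kg/min, p = {p_value:.2g}")
```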

  14. Shadow Replication: An Energy-Aware, Fault-Tolerant Computational Model for Green Cloud Computing

    Directory of Open Access Journals (Sweden)

    Xiaolong Cui

    2014-08-01

    As the demand for cloud computing continues to increase, cloud service providers face the daunting challenge of meeting the negotiated SLA, in terms of reliability and timely performance, while achieving cost-effectiveness. This challenge is compounded by the increasing likelihood of failure in large-scale clouds and the rising impact of energy consumption and CO2 emission on the environment. This paper proposes Shadow Replication, a novel fault-tolerance model for cloud computing, which seamlessly addresses failure at scale while minimizing energy consumption and reducing its impact on the environment. The basic tenet of the model is to associate a suite of shadow processes that execute concurrently with the main process, but initially at a much reduced execution speed, to overcome failures as they occur. Two computationally feasible schemes are proposed to achieve Shadow Replication. A performance evaluation framework is developed to analyze these schemes and compare their performance to traditional replication-based fault tolerance methods, focusing on the inherent tradeoff between fault tolerance, the specified SLA and profit maximization. The results show that Shadow Replication leads to significant energy reduction and is better suited for compute-intensive execution models, where up to 30% more profit increase can be achieved due to reduced energy consumption.
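
    A back-of-the-envelope sketch of the shadow idea under common assumptions (dynamic power roughly cubic in speed, a fixed deadline); the paper's actual optimization is more elaborate:

```python
# Back-of-the-envelope Shadow Replication energy model. Assumptions
# (not the paper's exact formulation): dynamic power ~ speed^3, one
# unit of work W, deadline D, and a shadow that accelerates only if
# the main process fails at time t_fail.

def energy(speed, time):
    return (speed ** 3) * time

W, D = 1.0, 2.0
MAIN_SPEED = 1.0                      # main finishes at t = W / 1.0 = 1.0
SHADOW_SPEED = 0.3                    # shadow's initial (slow) speed

def shadow_replication_energy(t_fail=None):
    if t_fail is None:                # main succeeds; shadow then stops
        return (energy(MAIN_SPEED, W / MAIN_SPEED)
                + energy(SHADOW_SPEED, W / MAIN_SPEED))
    done = SHADOW_SPEED * t_fail      # work the shadow already finished
    catch_up = (W - done) / (D - t_fail)    # speed needed to meet deadline
    return (energy(MAIN_SPEED, t_fail)          # main until failure
            + energy(SHADOW_SPEED, t_fail)      # shadow while slow
            + energy(catch_up, D - t_fail))     # shadow after promotion

traditional = 2 * energy(MAIN_SPEED, W / MAIN_SPEED)   # two full replicas
print(round(shadow_replication_energy(), 3), "vs", traditional)      # no failure
print(round(shadow_replication_energy(0.5), 3), "vs", traditional)   # fail at t=0.5
```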

  15. Energy Efficiency in Computing (1/2)

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    As manufacturers improve the silicon process, truly low energy computing is becoming a reality - both in servers and in the consumer space. This series of lectures covers a broad spectrum of aspects related to energy efficient computing - from circuits to datacentres. We will discuss common trade-offs and basic components, such as processors, memory and accelerators. We will also touch on the fundamentals of modern datacenter design and operation. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP and Google), as well as international research institutes, such as EPFL. Currently, Andrzej acts as a consultant on technology and innovation with TIK Services (http://tik.services), and runs a peer-to-peer lending start-up.

  16. The impact of optimized solar radiation received and energy disposal on architectural design results using computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Rezaei, Davood; Farajzadeh Khosroshahi, Samaneh; Sadegh Falahat, Mohammad [Zanjan University (Iran, Islamic Republic of)], email: d_rezaei@znu.ac.ir, email: ronas_66@yahoo.com, email: Safalahat@yahoo.com

    2011-07-01

    In order to minimize the energy consumption of a building, it is important to make optimal use of solar energy. The aim of this paper is to introduce the use of computer modeling in the early stages of design to optimize the solar radiation received and the energy disposal in an architectural design. Computer modeling was performed on two different projects located in Los Angeles, USA, using ECOTECT software. Changes were made to the designs following analysis of the modeling results, and a subsequent analysis was carried out on the optimized designs. Results showed that computer simulation allows the designer to set the analysis criteria and improve the energy performance of a building before it is constructed; moreover, it can be used for a wide range of optimization levels. This study pointed out that computer simulation should be performed in the design stage to optimize a building's energy performance.

  17. Thermodynamic analysis of environmental problems of energy

    Directory of Open Access Journals (Sweden)

    Kaganovich Boris M.

    2017-01-01

    The paper discusses the problems of the ecological analysis of physicochemical processes in power units and the impact of energy systems on nature in large territorial regions. The model of extreme intermediate states, developed at the Energy Systems Institute on the basis of the principles of classical equilibrium thermodynamics, was chosen to devise specific computational methods. The results of the conducted studies are presented and directions for further work are outlined.

  18. Compound analysis of gallstones using dual energy computed tomography-Results in a phantom model

    Energy Technology Data Exchange (ETDEWEB)

    Bauer, Ralf W., E-mail: ralfwbauer@aol.com [Department of Diagnostic and Interventional Radiology, Clinic of the Goethe University Frankfurt, Theodor-Stern-Kai 7, 60596 Frankfurt (Germany); Schulz, Julian R., E-mail: julian.schulz@t-online.de [Department of Diagnostic and Interventional Radiology, Clinic of the Goethe University Frankfurt, Theodor-Stern-Kai 7, 60596 Frankfurt (Germany); Zedler, Barbara, E-mail: zedler@em.uni-frankfurt.de [Department of Forensic Medicine, Clinic of the Goethe University Frankfurt, Kennedyallee 104, 60596 Frankfurt (Germany); Graf, Thomas G., E-mail: thomas.gt.graf@siemens.com [Siemens AG Healthcare Sector, Computed Tomography, Physics and Applications, Siemensstrasse 1, 91313 Forchheim (Germany); Vogl, Thomas J., E-mail: t.vogl@em.uni-frankfurt.de [Department of Diagnostic and Interventional Radiology, Clinic of the Goethe University Frankfurt, Theodor-Stern-Kai 7, 60596 Frankfurt (Germany)

    2010-07-15

    Purpose: The potential of dual energy computed tomography (DECT) for the analysis of gallstone compounds was investigated. The main goal was to find parameters that can reliably define high-percentage (>70%) cholesterol stones without calcium components. Materials and methods: 35 gallstones were analyzed with DECT using a phantom model. Stone samples were put into specimen containers filled with formalin. Containers were put into a water-filled cylindrical acrylic glass phantom. DECT scans were performed using a tube voltage/current of 140 kV/83 mAs (tube A) and 80 kV/340 mAs (tube B). ROI-measurements to determine CT attenuation of each sector of the stones that had a different appearance on the CT images were performed. Finally, semi-quantitative infrared spectroscopy (FTIR) of these sectors was performed for chemical analysis. Results: ROI-measurements were performed in 45 different sectors in 35 gallstones. Sectors containing >70% cholesterol and no calcium component (n = 20) on FTIR could be identified with 95% sensitivity and 100% specificity on DECT. These sectors showed typical attenuation of -8 ± 4 HU at 80 kV and +22 ± 3 HU at 140 kV. Even the presence of a small calcium component (<10%) hindered the reliable identification of cholesterol components as such. Conclusion: Dual energy CT allows for reliable identification of gallstones containing a high percentage of cholesterol and no calcium component in this pre-clinical phantom model. Results from in vivo or anthropomorphic phantom trials will have to confirm these results. This may enable the identification of patients eligible for non-surgical treatment options in the future.

  20. Computational Modelling of Materials for Wind Turbine Blades: Selected DTU Wind Energy Activities.

    Science.gov (United States)

    Mikkelsen, Lars Pilgaard; Mishnaevsky, Leon

    2017-11-08

    Computational and analytical studies of degradation of wind turbine blade materials at the macro-, micro-, and nanoscale carried out by the modelling team of the Section Composites and Materials Mechanics, Department of Wind Energy, DTU, are reviewed. Examples of the analysis of the microstructural effects on the strength and fatigue life of composites are shown. Computational studies of degradation mechanisms of wind blade composites under tensile and compressive loading are presented. The effect of hybrid and nanoengineered structures on the performance of the composite was studied in computational experiments as well.

  1. Development of analysis methodology for hot leg break mass and energy release

    International Nuclear Information System (INIS)

    Song, Jin Ho; Kim, Cheol Woo; Kwon, Young Min; Kim, Sook Kwan

    1995-04-01

    A study for the development of an analysis methodology for hot leg break mass and energy release is performed. For the blowdown period, a modified CEFLASH-4A methodology is suggested. For the post-blowdown period, a modified CONTRAST boil-off model is suggested. By using these computer codes, improved mass and energy release data are generated. In addition, a RELAP5/MOD3 analysis was performed, and finally the FLOOD-3 computer code has been modified for use in the analysis of the hot leg break. The results of the analysis using the modified FLOOD-3 are reasonable, as expected, and their trends are good. 66 figs., 8 tabs. (Author)

  2. Architectural analysis for wirelessly powered computing platforms

    NARCIS (Netherlands)

    Kapoor, A.; Pineda de Gyvez, J.

    2013-01-01

    We present a design framework for wirelessly powered generic computing platforms that takes into account various system parameters in response to a time-varying energy source. These parameters are the charging profile of the energy source, computing speed (fclk), digital supply voltage (VDD), energy

  3. Computational Fluid Dynamics and Building Energy Performance Simulation

    DEFF Research Database (Denmark)

    Nielsen, Peter V.; Tryggvason, Tryggvi

    An interconnection between a building energy performance simulation program and a Computational Fluid Dynamics program (CFD) for room air distribution will be introduced for improvement of the predictions of both the energy consumption and the indoor environment.

  4. Spin-neurons: A possible path to energy-efficient neuromorphic computers

    Energy Technology Data Exchange (ETDEWEB)

    Sharad, Mrigank; Fan, Deliang; Roy, Kaushik [School of Electrical and Computer Engineering, Purdue University, West Lafayette, Indiana 47907 (United States)

    2013-12-21

    Recent years have witnessed growing interest in the field of brain-inspired computing based on neural-network architectures. In order to translate the related algorithmic models into powerful, yet energy-efficient cognitive-computing hardware, computing devices beyond CMOS may need to be explored. The suitability of such devices to this field of computing would strongly depend upon how closely their physical characteristics match the essential computing primitives employed in such models. In this work, we discuss the rationale of applying emerging spin-torque devices for bio-inspired computing. Recent spin-torque experiments have shown the path to low-current, low-voltage, and high-speed magnetization switching in nano-scale magnetic devices. Such magneto-metallic, current-mode spin-torque switches can mimic the analog summing and “thresholding” operation of an artificial neuron with high energy-efficiency. Comparison with a CMOS-based analog circuit model of a neuron shows that “spin-neurons” (spin-based circuit models of neurons) can achieve more than two orders of magnitude lower energy and beyond three orders of magnitude reduction in energy-delay product. Spin-neurons can therefore be an attractive option for neuromorphic computers of the future.
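
    A toy model of the summing-and-thresholding behavior described above; the threshold current and weights are illustrative, not device-calibrated values:

```python
# Toy current-mode "spin-neuron": input spin currents sum in a common
# channel and the magnet switches when the net current crosses a
# threshold, mimicking a neuron's weighted sum plus step activation.
# The critical current and weights are invented for illustration.

def spin_neuron(inputs, weights, i_critical=50e-6):
    net_current = sum(w * i for w, i in zip(weights, inputs))  # analog summing
    return 1 if net_current >= i_critical else 0               # magnet switches

inputs = [20e-6, 35e-6, 10e-6]       # input currents (amperes)
weights = [1.0, 1.0, -0.5]           # sign set by injection polarity
print(spin_neuron(inputs, weights))  # -> 1 (50 uA threshold reached)
```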

  5. Computational Modelling of Materials for Wind Turbine Blades: Selected DTUWind Energy Activities

    DEFF Research Database (Denmark)

    Mikkelsen, Lars Pilgaard; Mishnaevsky, Leon

    2017-01-01

    Computational and analytical studies of degradation of wind turbine blade materials at the macro-, micro-, and nanoscale carried out by the modelling team of the Section Composites and Materials Mechanics, Department of Wind Energy, DTU, are reviewed. Examples of the analysis of the microstructural effects on the strength and fatigue life of composites are shown. Computational studies of degradation mechanisms of wind blade composites under tensile and compressive loading are presented. The effect of hybrid and nanoengineered structures on the performance of the composite was studied in computational experiments as well.

  6. Status of computer codes available in AEOI for reactor physics analysis

    International Nuclear Information System (INIS)

    Karbassiafshar, M.

    1986-01-01

    Many of the nuclear computer codes available in the Atomic Energy Organization of Iran (AEOI) can be used for physics analysis of an operating reactor or for design purposes. A grasp of the various methods involved and practical experience with these codes would be the starting point for interesting design studies or analysis of the operating conditions of presently existing and future reactors. A review of the objectives and flowchart of commonly practiced procedures in reactor physics analysis of LWRs and the related computer codes was made, with reference to the nationally and internationally available resources. Finally, effective utilization of the existing facilities is discussed and called for.

  7. Diagnostic accuracy of dual-energy computed tomography in patients with gout: A meta-analysis.

    Science.gov (United States)

    Lee, Young Ho; Song, Gwan Gyu

    2017-08-01

    This study aimed to evaluate the diagnostic performance of dual-energy computed tomography (DECT) for patients with gout. We searched the Medline, Embase, and Cochrane Library databases, and performed a meta-analysis on the diagnostic accuracy of DECT in patients with gout. A total of eight studies including 510 patients with gout and 268 controls (patients with non-gout inflammatory arthritis) were available for the meta-analysis. The pooled sensitivity and specificity of DECT were 84.7% (95% confidence interval [CI]: 81.3-87.7) and 93.7% (95% CI: 93.0-96.3), respectively. The positive likelihood ratio, negative likelihood ratio, and diagnostic odds ratio were 9.882 (6.122-15.95), 0.163 (0.097-0.272), and 78.10 (31.14-195.84), respectively. The area under the curve of DECT was 0.956 and the Q* index was 0.889, indicating a high diagnostic accuracy. Some between-study heterogeneity was found in the meta-analyses; however, there was no evidence of a threshold effect (Spearman correlation coefficient = 0.419; p = 0.035). In addition, meta-regression showed that the sample size, study design, and diagnostic criteria were not sources of heterogeneity, and subgroup meta-analyses did not change the overall diagnostic accuracy. Our meta-analysis of published studies demonstrates that DECT has high diagnostic accuracy and plays an important role in the diagnosis of gout.
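
    For reference, the accuracy measures quoted above relate to a 2x2 table as follows; the sketch uses one invented table, whereas the meta-analysis pools per-study estimates with a fixed- or random-effects model:

```python
# Diagnostic-accuracy arithmetic on a single hypothetical 2x2 table.
tp, fn, fp, tn = 432, 78, 17, 251    # invented counts, not study data

sens = tp / (tp + fn)                # sensitivity
spec = tn / (tn + fp)                # specificity
lr_pos = sens / (1 - spec)           # positive likelihood ratio
lr_neg = (1 - sens) / spec           # negative likelihood ratio
dor = lr_pos / lr_neg                # diagnostic odds ratio

print(f"sensitivity {sens:.3f}, specificity {spec:.3f}")
print(f"LR+ {lr_pos:.2f}, LR- {lr_neg:.3f}, DOR {dor:.1f}")
```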

  8. Computer aided analysis and design of industrial energy systems; Rechnergestuetzte Analyse und Konzeption industrieller Energiesysteme

    Energy Technology Data Exchange (ETDEWEB)

    Augenstein, Eckardt Marc Guenter

    2009-03-02

    engineering tasks can be created. The modules implemented up to now form a solution for the computational support of industrial energy audits. Nevertheless, the framework could be used as a basis for other fields of engineering. As an example of a complex module, a simulator for the calculation of industrial energy supply systems is presented. This module allows modelling of supply systems with low effort in order to calculate the annual costs, system efficiency and emissions. Besides technical components for the conversion, storage and transport of energy, other decisive elements such as energy tariffs can be modelled. As input for the simulation, time series of the different target energy demands are needed. As detailed design data of the components are usually not available, the model parameters are typically restricted to the data found in technical data sheets. Moreover, typical sample times of energy demand time series will be 15 minutes or higher, so that dynamic effects below this time interval are neglected. For the purpose of analysis, it turns out to be advantageous to assess a supply system without the influences of the concrete system control strategy, which means running the simulation under the regime of an optimal control strategy. Even in cases where the control strategy is to be taken into account, this approach allows a simpler modelling of the system control, as aspects with little impact on the system efficiency can be left to the optimizer instead of formulating appropriate rules. In order to allow an operation optimization of machines whose efficiency depends on temperatures in addition to the part-load state (e.g. chillers), an optimization method which allows for quadratic constraints was selected. In order to achieve a method that is robust towards the large variety of system structures to be handled, a combination of evolutionary algorithms and mixed integer linear programming was chosen. In order to create supply system models, component models can be

  9. Comparison of energy expenditure in adolescents when playing new generation and sedentary computer games: cross sectional study

    Science.gov (United States)

    2007-01-01

    Objective: To compare the energy expenditure of adolescents when playing sedentary and new generation active computer games. Design: Cross sectional comparison of four computer games. Setting: Research laboratories. Participants: Six boys and five girls aged 13-15 years. Procedure: Participants were fitted with a monitoring device validated to predict energy expenditure. They played four computer games for 15 minutes each. One of the games was sedentary (XBOX 360) and the other three were active (Wii Sports). Main outcome measure: Predicted energy expenditure, compared using repeated measures analysis of variance. Results: Mean (standard deviation) predicted energy expenditure when playing Wii Sports bowling (190.6 (22.2) kJ/kg/min), tennis (202.5 (31.5) kJ/kg/min), and boxing (198.1 (33.9) kJ/kg/min) was significantly greater than when playing sedentary games (125.5 (13.7) kJ/kg/min) (P<0.001). Predicted energy expenditure was at least 65.1 (95% confidence interval 47.3 to 82.9) kJ/kg/min greater when playing active rather than sedentary games. Conclusions: Playing new generation active computer games uses significantly more energy than playing sedentary computer games but not as much energy as playing the sport itself. The energy used when playing active Wii Sports games was not of high enough intensity to contribute towards the recommended daily amount of exercise in children. PMID:18156227

  10. Parallel computing for event reconstruction in high-energy physics

    International Nuclear Information System (INIS)

    Wolbers, S.

    1993-01-01

    Parallel computing has been recognized as a solution to large computing problems. In High Energy Physics, offline event reconstruction of detector data is a very large computing problem that has been solved with parallel computing techniques. A review of the parallel programming package CPS (Cooperative Processes Software), developed and used at Fermilab for offline reconstruction of terabytes of data requiring the delivery of hundreds of VAX-years per experiment, is given. The Fermilab UNIX farms, consisting of 180 Silicon Graphics workstations and 144 IBM RS6000 workstations, are used to provide the computing power for the experiments. Fermilab has had a long history of providing production parallel computing, starting with the ACP (Advanced Computer Project) farms in 1986. The Fermilab UNIX farms have been in production for over two years with 24 hour/day service to experimental user groups. Additional tools for managing, controlling, and monitoring these large systems will be described. Possible future directions for parallel computing in High Energy Physics will be given.

  11. Analytic computation of average energy of neutrons inducing fission

    International Nuclear Information System (INIS)

    Clark, Alexander Rich

    2016-01-01

    The objective of this report is to describe how I analytically computed the average energy of neutrons that induce fission in the bare BeRP ball. The motivation of this report is to resolve a discrepancy between the average energy computed via the FMULT and F4/FM cards in MCNP6 by comparison to the analytic results.
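
    The quantity being computed is presumably the fission-rate-weighted mean of the neutron energy; under that assumption (with phi the scalar flux and sigma_f the fission cross section of the target), the estimator reads:

```latex
% Average energy of fission-inducing neutrons: the neutron energy
% weighted by the fission reaction rate (assumed form of the estimator).
\bar{E} = \frac{\displaystyle\int_0^\infty E \,\sigma_f(E)\,\phi(E)\,\mathrm{d}E}
               {\displaystyle\int_0^\infty \sigma_f(E)\,\phi(E)\,\mathrm{d}E}
```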

  12. Computer-Aided Sustainable Process Synthesis-Design and Analysis

    DEFF Research Database (Denmark)

    Kumar Tula, Anjan

    Process synthesis involves the investigation of chemical reactions needed to produce the desired product, selection of the separation techniques needed for downstream processing, as well as taking decisions on sequencing the involved separation operations. For an effective, efficient and flexible design, this work focuses on the development and application of a computer-aided framework for sustainable synthesis-design and analysis of process flowsheets by generating feasible alternatives covering the entire search space, and it includes analysis tools for sustainability, LCA and economics. The synthesis method is based on process-groups; the advantage of process-groups is that the performance of the entire process can be evaluated from the contributions of the individual process-groups towards the selected flowsheet property (for example, energy consumed). The developed flowsheet property models include, among others, energy consumption, carbon footprint and product recovery.

  13. Computed Potential Energy Surfaces and Minimum Energy Pathway for Chemical Reactions

    Science.gov (United States)

    Walch, Stephen P.; Langhoff, S. R. (Technical Monitor)

    1994-01-01

    Computed potential energy surfaces are often required for computation of such observables as rate constants as a function of temperature, product branching ratios, and other detailed properties. We have found that computation of the stationary points/reaction pathways using CASSCF/derivative methods, followed by use of the internally contracted CI method with the Dunning correlation consistent basis sets to obtain accurate energetics, gives useful results for a number of chemically important systems. Applications to complex reactions leading to NO and soot formation in hydrocarbon combustion are discussed.

  14. Alternative energy technologies an introduction with computer simulations

    CERN Document Server

    Buxton, Gavin

    2014-01-01

    Contents: Introduction to Alternative Energy Sources; Global Warming; Pollution; Solar Cells; Wind Power; Biofuels; Hydrogen Production and Fuel Cells; Introduction to Computer Modeling; Brief History of Computer Simulations; Motivation and Applications of Computer Models; Using Spreadsheets for Simulations; Typing Equations into Spreadsheets; Functions Available in Spreadsheets; Random Numbers; Plotting Data; Macros and Scripts; Interpolation and Extrapolation; Numerical Integration and Diffe...

  15. Computational advances in transition phase analysis

    International Nuclear Information System (INIS)

    Morita, K.; Kondo, S.; Tobita, Y.; Shirakawa, N.; Brear, D.J.; Fischer, E.A.

    1994-01-01

    In this paper, historical perspective and recent advances are reviewed on computational technologies to evaluate the transition phase of core disruptive accidents in liquid-metal fast reactors. An analysis of the transition phase requires treatment of multi-phase, multi-component thermohydraulics coupled with space- and energy-dependent neutron kinetics. Such a comprehensive modeling effort began when development of the SIMMER series of computer codes was initiated in the late 1970s in the USA. Successful application of the latest SIMMER-II in the USA, western Europe and Japan has proved its effectiveness, but, at the same time, several areas that require further research have been identified. Based on the experience and lessons learned from SIMMER-II applications through the 1980s, a new project of SIMMER-III development is underway at the Power Reactor and Nuclear Fuel Development Corporation (PNC), Japan. The models and methods of SIMMER-III are briefly described, with emphasis on recent advances in multi-phase, multi-component fluid dynamics technologies and their expected implications for a future reliable transition phase analysis. (author)

  16. Development of integrated models for energy-economy systems analysis at JAERI

    International Nuclear Information System (INIS)

    Yasukawa, Shigeru; Mankin, Shuichi; Sato, Osamu; Yonese, Hiromi

    1984-08-01

    This report, a revision of the preprint distributed to participants at the IEA/ETSAP Workshop held at JAERI, Tokyo, in March 1984, describes the concept of the integrated models for energy-economy systems analysis now being developed at JAERI. The model system contains four different categories of computer codes. The first is a series of computer codes collectively named E3-SD, which are utilized to generate dynamic scenarios of long-term energy-economy evolution. The second, whose main constituents are MARKAL, an optimal energy flow analyzer, and TRANS-I/O, a multi-sectoral economy analyzer, has been developed for the analysis of structural characteristics embodied in our energy-economy system. The third is for strategy analysis of nuclear power reactor installation and fuel cycle development, and its main constituent is JALTES. The fourth is for cost-benefit-risk analysis and includes various kinds of data bases. Although the model system is still under development, its application to such problems as "the role of the HTGR in the prospects of future energy supply" is also explained in the report. (author)

  17. Energy-efficient computing and networking. Revised selected papers

    Energy Technology Data Exchange (ETDEWEB)

    Hatziargyriou, Nikos; Dimeas, Aris [Ethnikon Metsovion Polytechneion, Athens (Greece); Weidlich, Anke (eds.) [SAP Research Center, Karlsruhe (Germany); Tomtsi, Thomai

    2011-07-01

    This book constitutes the postproceedings of the First International Conference on Energy-Efficient Computing and Networking, E-Energy, held in Passau, Germany in April 2010. The 23 revised papers presented were carefully reviewed and selected for inclusion in the post-proceedings. The papers are organized in topical sections on energy market and algorithms, ICT technology for the energy market, implementation of smart grid and smart home technology, microgrids and energy management, and energy efficiency through distributed energy management and buildings. (orig.)

  18. Theoretical basis of the DOE-2 building energy use analysis program

    Science.gov (United States)

    Curtis, R. B.

    1981-04-01

    A user-oriented, public domain computer program was developed that enables architects and engineers to perform design and retrofit studies of the energy use of buildings under realistic weather conditions. DOE-2.1A has been named by the US DOE as the standard evaluation technique for the Congressionally mandated building energy performance standards (BEPS). A number of program design decisions were made that determine the breadth of applicability of DOE-2.1. Such design decisions are intrinsic to all building energy use analysis computer programs and determine the types of buildings or the kinds of HVAC systems that can be modeled. In particular, the weighting factor method used in DOE-2 has both advantages and disadvantages relative to other computer programs.
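
    A sketch of the weighting-factor idea mentioned above: the space load at each hour is a weighted combination of current and past heat gains plus feedback from past loads. The coefficients below are illustrative, not DOE-2's tabulated weighting factors:

```python
# Weighting-factor (transfer-function) load calculation of the kind
# DOE-2 uses: Q_t = sum_i v_i * q_{t-i} - sum_j w_j * Q_{t-1-j}.
v = [0.3, 0.1]          # weights on heat gains q_t, q_{t-1} (illustrative)
w = [-0.6]              # feedback weight on previous load Q_{t-1}
# long-run energy balance: sum(v) / (1 + sum(w)) = 0.4 / 0.4 = 1

def hourly_loads(gains):
    loads = []
    for t in range(len(gains)):
        q_part = sum(v[i] * gains[t - i] for i in range(len(v)) if t - i >= 0)
        fb_part = sum(w[j] * loads[t - 1 - j]
                      for j in range(len(w)) if t - 1 - j >= 0)
        loads.append(q_part - fb_part)
    return loads

gains = [0, 0, 1000, 1000, 1000, 0, 0]          # W of instantaneous heat gain
print([round(x) for x in hourly_loads(gains)])  # load lags and decays: storage
```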

  19. Computational Music Analysis

    DEFF Research Database (Denmark)

    This book provides an in-depth introduction and overview of current research in computational music analysis. Its seventeen chapters, written by leading researchers, collectively represent the diversity as well as the technical and philosophical sophistication of the work being done today. The chapters build on well-established theories in music theory and analysis, such as Forte's pitch-class set theory, Schenkerian analysis, the methods of semiotic analysis developed by Ruwet and Nattiez, and Lerdahl and Jackendoff's Generative Theory of Tonal Music. The book is divided into six parts, covering a broad range of approaches to music analysis. The book provides an invaluable resource for researchers, teachers and students in music theory and analysis, computer science, music information retrieval and related disciplines. It also provides a state-of-the-art reference for practitioners in the music technology industry.

  20. Use of energy analysis to evaluate the parameters of wave fields

    Energy Technology Data Exchange (ETDEWEB)

    Soldatov, V.N.; Sinitsyn, Ye.S.

    1984-01-01

    Algorithms for the energy analysis of wave fields are proposed and studied. A comparative evaluation is made of the resolution of the energy analysis methods. A method is examined for automated processing of the energograms that estimates the parameters while significantly accelerating the computer calculations and saving working storage through multipurpose data-processing algorithms.
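
    A minimal sketch of one plausible energogram computation, a sliding-window short-time energy of a trace; the trace and window length are invented:

```python
import math

# Sliding-window "energogram": short-time energy of a seismic-style
# trace, a common basis for energy analysis of wave fields.
def short_time_energy(trace, window=16, step=8):
    return [sum(v * v for v in trace[i:i + window])
            for i in range(0, len(trace) - window + 1, step)]

# Synthetic trace: low-level oscillation with a wavelet arriving mid-record.
trace = [0.05 * math.sin(0.7 * n) for n in range(128)]
for n in range(60, 76):
    trace[n] += math.exp(-0.1 * (n - 68) ** 2) * math.cos(1.5 * n)

energies = short_time_energy(trace)
peak = max(range(len(energies)), key=energies.__getitem__)
print(peak)   # index of the peak-energy window (the wavelet arrival)
```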

  1. Wireless-Uplinks-Based Energy-Efficient Scheduling in Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Xing Liu

    2015-01-01

    Mobile cloud computing (MCC) combines cloud computing and the mobile internet to improve the computational capabilities of resource-constrained mobile devices (MDs). In MCC, mobile users can not only improve the computational capability of MDs but also reduce energy consumption by offloading mobile applications to the cloud. However, MCC faces the problem of energy efficiency because of time-varying channels when the offloading is being executed. In this paper, we address the issue of energy-efficient scheduling for the wireless uplink in MCC. By introducing Lyapunov optimization, we first propose a scheduling algorithm that can dynamically choose the channel to transmit data based on queue backlog and channel statistics. Then, we show that the proposed scheduling algorithm can make a tradeoff between queue backlog and energy consumption in a channel-aware MCC system. Simulation results show that the proposed scheduling algorithm can reduce the time-average energy consumption for offloading compared to the existing algorithm.
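
    One drift-plus-penalty step of a Lyapunov-style scheduler can be sketched as follows; the channel parameters and the tradeoff weight V are invented, and the paper's exact cost terms may differ:

```python
import random

# One drift-plus-penalty step: pick the channel minimizing
# V * power - Q * rate, trading transmit energy against queue backlog.
random.seed(3)
V, Q = 10.0, 500.0                    # tradeoff weight, queue backlog (bits)

def choose_channel(channels):
    # smaller score is better: energy penalty minus backlog-weighted service
    return min(channels, key=lambda c: V * c["power"] - Q * c["rate"])

channels = [{"id": k,
             "rate": random.uniform(0.5, 3.0),     # Mbit/s
             "power": random.uniform(0.5, 2.0)}    # watts
            for k in range(4)]
best = choose_channel(channels)
print(f"transmit on channel {best['id']} "
      f"(rate {best['rate']:.2f} Mb/s, power {best['power']:.2f} W)")
```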

  2. Description and application of the EAP computer program for calculating life-cycle energy use and greenhouse gas emissions of household consumption items

    NARCIS (Netherlands)

    Benders, R.M.J.; Wilting, H.C.; Kramer, K.J.; Moll, H.C.

    2001-01-01

    Focusing on reduction in energy use and greenhouse gas emissions, a life-cycle-based analysis tool has been developed. The energy analysis program (EAP) is a computer program for determining energy use and greenhouse gas emissions related to household consumption items, using a hybrid calculation method.

  3. Asymmetric energy flow in liquid alkylbenzenes: A computational study

    International Nuclear Information System (INIS)

    Leitner, David M.; Pandey, Hari Datt

    2015-01-01

    Ultrafast IR-Raman experiments on substituted benzenes [B. C. Pein et al., J. Phys. Chem. B 117, 10898–10904 (2013)] reveal that energy can flow more efficiently in one direction along a molecule than in others. We carry out a computational study of energy flow in the three alkylbenzenes, toluene, isopropylbenzene, and t-butylbenzene, studied in these experiments, and find an asymmetry in the flow of vibrational energy between the two chemical groups of the molecule due to quantum mechanical vibrational relaxation bottlenecks, which give rise to a preferred direction of energy flow. We compare energy flow computed for all modes of the three alkylbenzenes over the relaxation time into the liquid with energy flow through the subset of modes monitored in the time-resolved Raman experiments, and find qualitatively similar results when using the subset compared to all the modes.

  4. Energy efficient hybrid computing systems using spin devices

    Science.gov (United States)

    Sharad, Mrigank

    Emerging spin-devices like magnetic tunnel junctions (MTJs), spin-valves and domain wall magnets (DWM) have opened new avenues for spin-based logic design. This work explored potential computing applications which can exploit such devices for higher energy-efficiency and performance. The proposed applications involve hybrid design schemes, where charge-based devices supplement the spin-devices, to gain large benefits at the system level. As an example, lateral spin valves (LSV) involve switching of nanomagnets using spin-polarized current injection through a metallic channel such as Cu. Such spin-torque based devices possess several interesting properties that can be exploited for ultra-low power computation. The analog characteristic of spin current facilitates non-Boolean computation like majority evaluation that can be used to model a neuron. The magneto-metallic neurons can operate at an ultra-low terminal voltage of ~20 mV, thereby resulting in small computation power. Moreover, since nano-magnets inherently act as memory elements, these devices can facilitate integration of logic and memory in interesting ways. The spin-based neurons can be integrated with CMOS and other emerging devices, leading to different classes of neuromorphic/non-Von-Neumann architectures. The spin-based designs involve 'mixed-mode' processing and hence can provide very compact and ultra-low energy solutions for complex computation blocks, both digital as well as analog. Such low-power, hybrid designs can be suitable for various data processing applications like cognitive computing, associative memory, and current-mode on-chip global interconnects. Simulation results for these applications based on a device-circuit co-simulation framework predict more than ~100x improvement in computation energy as compared to state of the art CMOS design, for optimal spin-device parameters.

  5. Cloud computing platform for real-time measurement and verification of energy performance

    International Nuclear Information System (INIS)

    Ke, Ming-Tsun; Yeh, Chia-Hung; Su, Cheng-Jie

    2017-01-01

    Highlights: • Application of PSO algorithm can improve the accuracy of the baseline model. • M&V cloud platform automatically calculates energy performance. • M&V cloud platform can be applied in all energy conservation measures. • Real-time operational performance can be monitored through the proposed platform. • M&V cloud platform facilitates the development of EE programs and ESCO industries. - Abstract: Nations worldwide are vigorously promoting policies to improve energy efficiency. The use of measurement and verification (M&V) procedures to quantify energy performance is an essential topic in this field. Currently, energy performance M&V is accomplished via a combination of short-term on-site measurements and engineering calculations, which requires extensive amounts of time and labor and can result in a discrepancy between actual energy savings and calculated results. In addition, because the M&V period typically lasts for several months or up to a year, failure to immediately detect abnormal energy performance not only decreases energy performance but also prevents timely correction and misses the best opportunity to adjust or repair equipment and systems. In this study, a cloud computing platform for the real-time M&V of energy performance is developed. On this platform, particle swarm optimization and multivariate regression analysis are used to construct accurate baseline models. Instantaneous and automatic calculation of energy performance, and access to long-term cumulative information about energy performance, are provided via a feature that allows direct uploads of energy consumption data. Finally, the feasibility of this real-time M&V cloud platform is tested in a case study involving improvements to a cold storage system in a hypermarket. The platform is applicable to any industry and energy conservation measure, and with it real-time operational performance can be monitored and abnormal performance corrected in time.
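
    A minimal sketch of PSO fitting a baseline model, here a two-parameter linear regression of energy on temperature; the data, bounds, and PSO constants are invented:

```python
import random

# Minimal particle swarm optimization fitting a baseline E = a*T + b
# (energy vs. outdoor temperature), standing in for the platform's
# PSO-tuned multivariate regression.
random.seed(7)
data = [(t, 4.2 * t + 120 + random.gauss(0, 5)) for t in range(10, 35)]

def sse(p):                             # sum of squared errors
    a, b = p
    return sum((a * t + b - e) ** 2 for t, e in data)

N = 20
pos = [[random.uniform(0, 10), random.uniform(0, 300)] for _ in range(N)]
vel = [[0.0, 0.0] for _ in range(N)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=sse)

for _ in range(200):
    for i in range(N):
        for d in range(2):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (0.72 * vel[i][d]
                         + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                         + 1.5 * r2 * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if sse(pos[i]) < sse(pbest[i]):
            pbest[i] = pos[i][:]
    gbest = min(pbest, key=sse)

print(f"fitted a = {gbest[0]:.2f}, b = {gbest[1]:.1f}")   # ~ (4.2, 120)
```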

  6. Energy Use and Power Levels in New Monitors and Personal Computers; TOPICAL

    International Nuclear Information System (INIS)

    Roberson, Judy A.; Homan, Gregory K.; Mahajan, Akshay; Nordman, Bruce; Webber, Carrie A.; Brown, Richard E.; McWhinney, Marla; Koomey, Jonathan G.

    2002-01-01

    Our research was conducted in support of the EPA ENERGY STAR Office Equipment program, whose goal is to reduce the amount of electricity consumed by office equipment in the U.S. The most energy-efficient models in each office equipment category are eligible for the ENERGY STAR label, which consumers can use to identify and select efficient products. As the efficiency of each category improves over time, the ENERGY STAR criteria need to be revised accordingly. The purpose of this study was to provide reliable data on the energy consumption of the newest personal computers and monitors that the EPA can use to evaluate revisions to current ENERGY STAR criteria as well as to improve the accuracy of ENERGY STAR program savings estimates. We report the results of measuring the power consumption and power management capabilities of a sample of new monitors and computers. These results will be used to improve estimates of program energy savings and carbon emission reductions, and to inform revisions of the ENERGY STAR criteria for these products. Our sample consists of 35 monitors and 26 computers manufactured between July 2000 and October 2001; it includes cathode ray tube (CRT) and liquid crystal display (LCD) monitors, Macintosh and Intel-architecture computers, desktop and laptop computers, and integrated computer systems, in which power consumption of the computer and monitor cannot be measured separately. For each machine we measured power consumption when off, on, and in each low-power level. We identify trends in and opportunities to reduce power consumption in new personal computers and monitors. Our results include a trend among monitor manufacturers to provide a single very low low-power level, well below the current ENERGY STAR criteria for sleep power consumption. These very low sleep power results mean that energy consumed when monitors are off or in active use has become more important in terms of contribution to the overall unit energy consumption (UEC

  7. Computer technology: its potential for industrial energy conservation. A technology applications manual

    Energy Technology Data Exchange (ETDEWEB)

    None

    1979-01-01

    Today, computer technology is within the reach of practically any industrial corporation regardless of product size. This manual highlights a few of the many applications of computers in the process industry and provides the technical reader with a basic understanding of computer technology, terminology, and the interactions among the various elements of a process computer system. The manual has been organized to separate process applications and economics from computer technology. Chapter 1 introduces the present status of process computer technology and describes the four major applications - monitoring, analysis, control, and optimization. The basic components of a process computer system also are defined. Energy-saving applications in the four major categories defined in Chapter 1 are discussed in Chapter 2. The economics of process computer systems is the topic of Chapter 3, where the historical trend of process computer system costs is presented. Evaluating a process for the possible implementation of a computer system requires a basic understanding of computer technology as well as familiarity with the potential applications; Chapter 4 provides enough technical information for an evaluation. Computer and associated peripheral costs and the logical sequence of steps in the development of a microprocessor-based process control system are covered in Chapter 5.

  8. Parallel Computing:. Some Activities in High Energy Physics

    Science.gov (United States)

    Willers, Ian

    This paper examines some activities in High Energy Physics that utilise parallel computing. The topic includes all computing from the proposed SIMD front end detectors, the farming applications, high-powered RISC processors and the large machines in the computer centers. We start by looking at the motivation behind using parallelism for general purpose computing. The developments around farming are then described from its simplest form to the more complex system in Fermilab. Finally, there is a list of some developments that are happening close to the experiments.

  9. Computed tomography in severe protein energy malnutrition.

    OpenAIRE

    Househam, K C; de Villiers, J F

    1987-01-01

    Computed tomography of the brain was performed on eight children aged 1 to 4 years with severe protein energy malnutrition. Clinical features typical of kwashiorkor were present in all the children studied. Severe cerebral atrophy or brain shrinkage according to standard radiological criteria was present in every case. The findings of this study suggest considerable cerebral insult associated with severe protein energy malnutrition.

  10. Data processing of X-ray fluorescence analysis using an electronic computer

    International Nuclear Information System (INIS)

    Yakubovich, A.L.; Przhiyalovskij, S.M.; Tsameryan, G.N.; Golubnichij, G.V.; Nikitin, S.A.

    1979-01-01

    Problems in the data processing of multi-element (17-element) X-ray fluorescence analysis of tungsten and molybdenum ores are considered. The analysis was carried out using a silicon-lithium spectrometer with an energy resolution of about 300 eV and a 1024-channel analyzer. Characteristic radiation of the elements was excited with two 109 Cd radioisotope sources, their total activity being 10 mCi. The measurement time was 400 s. The data obtained were processed with a computer using the ''Proba-1'' and ''Proba-2'' programs. Data processing algorithms and computer calculation results are presented

  11. Computer simulation study of the displacement threshold-energy surface in Cu

    International Nuclear Information System (INIS)

    King, W.E.; Benedek, R.

    1981-01-01

    Computer simulations were performed using the molecular-dynamics technique to determine the directional dependence of the threshold energy for production of stable Frenkel pairs in copper. Sharp peaks were observed in the simulated threshold-energy surface in between the low-index directions. Threshold energies ranged from approximately 25 eV for directions near the low-index axes to 180 eV at the position of the peak between them. The general topographical features of the simulated threshold-energy surface are in good agreement with those determined from an analysis of recent experiments by King et al. on the basis of a Frenkel-pair resistivity ρ_F = 2.85 × 10⁻⁴ Ω cm. Evidence is presented in favor of this number as opposed to the usually assumed value, ρ_F = 2.00 × 10⁻⁴ Ω cm. The energy dependence of defect production in a number of directions was investigated to determine the importance of nonproductive events above threshold

  12. Energy and life-cycle cost analysis of a six-story office building

    Science.gov (United States)

    Turiel, I.

    1981-10-01

    An energy analysis computer program, DOE-2, was used to compute annual energy use for a typical office building as originally designed and with several energy conserving design modifications. The largest energy use reductions were obtained with the incorporation of daylighting techniques, the use of double pane windows, night temperature setback, and the reduction of artificial lighting levels. A life-cycle cost model was developed to assess the cost-effectiveness of the design modifications discussed. The model incorporates such features as inclusion of taxes, depreciation, and financing of conservation investments. The energy conserving strategies are ranked according to economic criteria such as net present benefit, discounted payback period, and benefit to cost ratio.
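
    The ranking criteria named above follow from standard discounted cash-flow arithmetic. A minimal sketch, assuming a single constant annual saving and discount rate, and ignoring the taxes, depreciation and financing that the paper's full model includes; all numbers are illustrative:

      def npv(annual_saving, years, rate, investment):
          # Net present benefit of a conservation investment.
          pv = sum(annual_saving / (1 + rate) ** t for t in range(1, years + 1))
          return pv - investment

      def discounted_payback(annual_saving, rate, investment, max_years=50):
          # First year in which cumulative discounted savings cover the cost.
          cum = 0.0
          for t in range(1, max_years + 1):
              cum += annual_saving / (1 + rate) ** t
              if cum >= investment:
                  return t
          return None   # never pays back within max_years

      investment, saving, rate, life = 12000.0, 1800.0, 0.05, 25
      benefit = npv(saving, life, rate, investment)
      print("net present benefit:", round(benefit, 2))
      print("discounted payback :", discounted_payback(saving, rate, investment))
      print("benefit/cost ratio :", round((benefit + investment) / investment, 2))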

  13. Energy system analysis of 100% renewable energy systems-The case of Denmark in years 2030 and 2050

    DEFF Research Database (Denmark)

    Lund, Henrik; Mathiesen, Brian Vad

    2009-01-01

    This paper presents the methodology and results of the overall energy system analysis of a 100% renewable energy system. The input for the systems is the result of a project of the Danish Association of Engineers, in which 1600 participants during more than 40 seminars discussed and designed a model for the future energy system of Denmark. The energy system analysis methodology includes hour-by-hour computer simulations leading to the design of flexible energy systems with the ability to balance the electricity supply and demand. The analysis covers two energy target years: year 2050 with 100% renewable energy from biomass and combinations of wind, wave and solar power; and year 2030 with 50% renewable energy, emphasising the first important steps on the way. The results are detailed system designs and energy balances. The conclusion is that a 100% renewable energy supply based on domestic resources...

  14. Research and development of grid computing technology in center for computational science and e-systems of Japan Atomic Energy Agency

    International Nuclear Information System (INIS)

    Suzuki, Yoshio

    2007-01-01

    The Center for Computational Science and E-systems of the Japan Atomic Energy Agency (CCSE/JAEA) has carried out R and D of grid computing technology. Since 1995, R and D to realize computational assistance for researchers, called Seamless Thinking Aid (STA), and then to share intellectual resources, called Information Technology Based Laboratory (ITBL), have been conducted, leading to the construction of an intelligent infrastructure for atomic energy research called the Atomic Energy Grid InfraStructure (AEGIS) under the Japanese national project 'Development and Applications of Advanced High-Performance Supercomputer'. It aims to enable the synchronization of three themes: 1) Computer-Aided Research and Development (CARD) to realize an environment for STA, 2) Computer-Aided Engineering (CAEN) to establish Multi Experimental Tools (MEXT), and 3) Computer-Aided Science (CASC) to promote Atomic Energy Research and Investigation (AERI). This article reviews the achievements of this grid computing R and D to date. (T. Tanaka)

  15. Computational study of energy transfer in two-dimensional J-aggregates

    International Nuclear Information System (INIS)

    Gallos, Lazaros K.; Argyrakis, Panos; Lobanov, A.; Vitukhnovsky, A.

    2004-01-01

    We perform a computational analysis of the intra- and interband energy transfer in two-dimensional J-aggregates. Each aggregate is represented as a two-dimensional array (LB-film or self-assembled film) of two kinds of cyanine dyes. We consider the J-aggregate whose J-band is located at a shorter wavelength to be a donor, and an aggregate or a small impurity with a longer wavelength to be an acceptor. Light absorption in the blue wing of the donor aggregate gives rise to the population of its excitonic states. The depopulation of these states is possible by (a) radiative transfer to the ground state, (b) intraband energy transfer, and (c) interband energy transfer to the acceptor. We study the dependence of energy transfer on properties such as the energy gap, the diagonal disorder, and the exciton-phonon interaction strength. Experimentally observable parameters, such as the position and form of the luminescence spectrum and the results of kinetic spectroscopy measurements, strongly depend upon the density of states in the excitonic bands, the rates of energy exchange between states, and the oscillator strengths of the luminescent transitions originating from these states

  16. High performance computing in power and energy systems

    Energy Technology Data Exchange (ETDEWEB)

    Khaitan, Siddhartha Kumar [Iowa State Univ., Ames, IA (United States); Gupta, Anshul (eds.) [IBM Watson Research Center, Yorktown Heights, NY (United States)

    2013-07-01

    The twin challenge of meeting global energy demands in the face of growing economies and populations and restricting greenhouse gas emissions is one of the most daunting ones that humanity has ever faced. Smart electrical generation and distribution infrastructure will play a crucial role in meeting these challenges. We will need to develop capabilities to handle the large volumes of data generated by power system components like PMUs, DFRs and other data acquisition devices, as well as the capacity to process these data at high resolution via multi-scale and multi-period simulations, cascading and security analysis, interaction between hybrid systems (electric, transport, gas, oil, coal, etc.) and so on, to get meaningful information in real time to ensure a secure, reliable and stable power system grid. Advanced research on the development and implementation of market-ready, leading-edge, high-speed enabling technologies and algorithms for solving real-time, dynamic, resource-critical problems will be required for dynamic security analysis targeted towards successful implementation of Smart Grid initiatives. This book aims to bring together some of the latest research developments as well as thoughts on the future research directions of high performance computing applications in electric power systems planning, operations, security, markets, and grid integration of alternate sources of energy.

  17. Computational and Physical Analysis of Catalytic Compounds

    Science.gov (United States)

    Wu, Richard; Sohn, Jung Jae; Kyung, Richard

    2015-03-01

    Nanoparticles exhibit unique physical and chemical properties depending on their geometrical properties. For this reason, synthesis of nanoparticles with controlled shape and size is important for exploiting their unique properties. Catalyst supports are usually made of high-surface-area porous oxides or carbon nanomaterials. These support materials stabilize metal catalysts against sintering at high reaction temperatures. Many studies have demonstrated large enhancements of catalytic behavior due to the role of the oxide-metal interface. In this paper, the catalytic ability of supported nano metal oxides, such as silicon oxide and titanium oxide compounds, has been analyzed using computational chemistry methods. Computational programs such as Gamess and Chemcraft have been used to compute the efficiencies of the catalytic compounds and the bonding energy changes during optimization convergence. The results illustrate how the metal oxides stabilize and the steps this takes. The plot of computation step (N) versus energy (kcal/mol) shows that the energy of the titania converges faster, at the 7th iteration of the calculation, whereas the silica converges at the 9th iteration.

  18. Advanced computational simulations of water waves interacting with wave energy converters

    Science.gov (United States)

    Pathak, Ashish; Freniere, Cole; Raessi, Mehdi

    2017-03-01

    Wave energy converter (WEC) devices harness the renewable ocean wave energy and convert it into useful forms of energy, e.g. mechanical or electrical. This paper presents an advanced 3D computational framework to study the interaction between water waves and WEC devices. The computational tool solves the full Navier-Stokes equations and considers all important effects impacting the device performance. To enable large-scale simulations in fast turnaround times, the computational solver was developed in an MPI parallel framework. A fast multigrid preconditioned solver is introduced to solve the computationally expensive pressure Poisson equation. The computational solver was applied to two surface-piercing WEC geometries: bottom-hinged cylinder and flap. Their numerically simulated response was validated against experimental data. Additional simulations were conducted to investigate the applicability of Froude scaling in predicting full-scale WEC response from the model experiments.
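
    The Froude-scaling question examined above has a simple arithmetic core: holding the Froude number U/sqrt(gL) constant fixes how every quantity maps from model to full scale. A minimal sketch, with an assumed 1:25 scale and illustrative model-scale values (not figures from the paper's experiments):

      import math

      def froude_scale(scale):
          # Multipliers from model to full scale at geometric ratio `scale`,
          # assuming the same fluid density at both scales.
          return {
              "length":   scale,
              "time":     math.sqrt(scale),
              "velocity": math.sqrt(scale),
              "force":    scale ** 3,
              "power":    scale ** 3.5,
          }

      f = froude_scale(25.0)                 # 1:25 model
      model_period, model_power = 1.4, 3.2   # s, W at model scale
      print("full-scale period:", model_period * f["time"], "s")
      print("full-scale power :", model_power * f["power"], "W")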

  19. Optimisation of the energy efficiency of bread-baking ovens using a combined experimental and computational approach

    International Nuclear Information System (INIS)

    Khatir, Zinedine; Paton, Joe; Thompson, Harvey; Kapur, Nik; Toropov, Vassili

    2013-01-01

    Highlights: ► A scientific framework for optimising oven operating conditions is presented. ► Experiments measuring the local convective heat transfer coefficient are undertaken. ► An energy efficiency model is developed with experimentally calibrated CFD analysis. ► Designing ovens with optimum heat transfer coefficients reduces energy use. ► Results demonstrate a strong case to design and manufacture energy-optimised ovens. - Abstract: Changing legislation and rising energy costs are bringing the need for efficient baking processes into much sharper focus. High-speed air impingement bread-baking ovens are complex systems using air flow to transfer heat to the product. In this paper, computational fluid dynamics (CFD) is combined with experimental analysis to develop a rigorous scientific framework for the rapid generation of forced convection oven designs. A design parameterisation of a three-dimensional generic oven model is carried out for a wide range of oven sizes and flow conditions to optimise desirable features such as temperature uniformity throughout the oven, energy efficiency and manufacturability. Coupled with the computational model, a series of experiments measuring the local convective heat transfer coefficient (h_c) is undertaken. The facility used for the heat transfer experiments is representative of a scaled-down production oven where the air temperature and velocity, as well as important physical constraints such as nozzle dimensions and nozzle-to-surface distance, can be varied. An efficient energy model is developed using a CFD analysis calibrated using experimentally determined inputs. Results from a range of oven designs are presented together with the ensuing energy usage and savings.

  20. Visualization of flaws within heavy section ultrasonic test blocks using high energy computed tomography

    International Nuclear Information System (INIS)

    House, M.B.; Ross, D.M.; Janucik, F.X.; Friedman, W.D.; Yancey, R.N.

    1996-05-01

    The feasibility of high energy computed tomography (9 MeV) to detect volumetric and planar discontinuities in large pressure vessel mock-up blocks was studied. The data supplied by the manufacturer of the test blocks on the intended flaw geometry were compared to manual, contact ultrasonic test and computed tomography test data. Subsequently, a visualization program was used to construct fully three-dimensional morphological information enabling interactive data analysis on the detected flaws. Density isosurfaces show the relative shape and location of the volumetric defects within the mock-up blocks. Such a technique may be used to qualify personnel or newly developed ultrasonic test methods without the associated high cost of destructive evaluation. Data is presented showing the capability of the volumetric data analysis program to overlay the computed tomography and destructive evaluation (serial metallography) data for a direct, three-dimensional comparison

  1. Dual-energy computed tomography for characterizing urinary calcified calculi and uric acid calculi: A meta-analysis

    International Nuclear Information System (INIS)

    Zheng, Xingju; Liu, Yuanyuan; Li, Mou; Wang, Qiyan; Song, Bin

    2016-01-01

    Objective: A meta-analysis was conducted to determine the accuracy of dual-energy computed tomography (DECT) for differentiating urinary uric acid and calcified calculi. Methods: The databases PubMed, EMBASE, Web of Science, and the Cochrane Library were searched up to May 2016 for relevant original studies. Data were extracted to calculate the pooled sensitivity, specificity, diagnostic odds ratio (OR), positive and negative likelihood ratios (PLR and NLR), and areas under summary receiver operating characteristic (AUROC) curves for analysis. Results: Nine studies (609 stones in 415 patients) were included. For differentiating uric acid (UA) and non-UA calculi with DECT, the analysis indicated: pooled weighted sensitivity, 0.955 (95% CI, 0.888–0.987); specificity, 0.985 (95% CI, 0.970–0.993); PLR, 33.327 (95% CI, 18.516–59.985); NLR, 0.084 (95% CI, 0.041–0.170); and diagnostic OR, 538.18 (95% CI, 195.50–1478.5). The AUROC value was 0.9901. For calcified stones, the analysis indicated: pooled weighted sensitivity, 0.994 (95% CI, 0.969–1); specificity, 0.973 (95% CI, 0.906–0.997); PLR, 11.200 (95% CI, 4.922–25.486); NLR, 0.027 (95% CI, 0.010–0.072); and diagnostic OR, 654.89 (95% CI, 151.31–2834.4). The AUROC value was 0.9915. Conclusion: This meta-analysis found that DECT is a highly accurate noninvasive method for characterizing urinary uric acid and calcified calculi.
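
    The pooled quantities reported above are related by elementary definitions: PLR = sensitivity/(1 - specificity), NLR = (1 - sensitivity)/specificity, and the diagnostic odds ratio is their quotient. A minimal sketch (the paper's pooled values come from bivariate meta-analytic models, so these direct ratios only approximate the reported figures):

      def diagnostic_ratios(sens, spec):
          plr = sens / (1 - spec)    # positive likelihood ratio, large is good
          nlr = (1 - sens) / spec    # negative likelihood ratio, small is good
          dor = plr / nlr            # diagnostic odds ratio
          return plr, nlr, dor

      # Pooled UA-stone values from the abstract above
      plr, nlr, dor = diagnostic_ratios(0.955, 0.985)
      print(f"PLR={plr:.1f}  NLR={nlr:.3f}  DOR={dor:.0f}")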

  2. Dual-energy computed tomography for characterizing urinary calcified calculi and uric acid calculi: A meta-analysis

    Energy Technology Data Exchange (ETDEWEB)

    Zheng, Xingju; Liu, Yuanyuan; Li, Mou; Wang, Qiyan; Song, Bin, E-mail: binsong65@yahoo.com

    2016-10-15

    Objective: A meta-analysis was conducted to determine the accuracy of dual-energy computed tomography (DECT) for differentiating urinary uric acid and calcified calculi. Methods: The databases PubMed, EMBASE, Web of Science, and the Cochrane Library were searched up to May 2016 for relevant original studies. Data were extracted to calculate the pooled sensitivity, specificity, diagnostic odds ratio (OR), positive and negative likelihood ratios (PLR and NLR), and areas under summary receiver operating characteristic (AUROC) curves for analysis. Results: Nine studies (609 stones in 415 patients) were included. For differentiating uric acid (UA) and non-UA calculi with DECT, the analysis indicated: pooled weighted sensitivity, 0.955 (95% CI, 0.888–0.987); specificity, 0.985 (95% CI, 0.970–0.993); PLR, 33.327 (95% CI, 18.516–59.985); NLR, 0.084 (95% CI, 0.041–0.170); and diagnostic OR, 538.18 (95% CI, 195.50–1478.5). The AUROC value was 0.9901. For calcified stones, the analysis indicated: pooled weighted sensitivity, 0.994 (95% CI, 0.969–1); specificity, 0.973 (95% CI, 0.906–0.997); PLR, 11.200 (95% CI, 4.922–25.486); NLR, 0.027 (95% CI, 0.010–0.072); and diagnostic OR, 654.89 (95% CI, 151.31–2834.4). The AUROC value was 0.9915. Conclusion: This meta-analysis found that DECT is a highly accurate noninvasive method for characterizing urinary uric acid and calcified calculi.

  3. Energy Cascade Analysis: from Subscale Eddies to Mean Flow

    Science.gov (United States)

    Cheikh, Mohamad Ibrahim; Wonnell, Louis; Chen, James

    2017-11-01

    Understanding the energy transfer between eddies and the mean flow can provide insights into the energy cascade process. Much work has been done to investigate the energy cascade at the level of the smallest eddies using different numerical techniques derived from the Navier-Stokes equations. These methodologies, however, prove to be computationally inefficient when producing energy spectra for a wide range of length scales. In this regard, Morphing Continuum Theory (MCT) resolves the length-scale issues by assuming the fluid continuum to be composed of inner structures that play the role of subscale eddies. The current study showcases the capabilities of MCT in capturing the dynamics of the energy cascade at the level of subscale eddies, through a supersonic turbulent flow at Mach 2.93 over an 8° compression ramp. Analysis of the results using a statistical averaging procedure shows the existence of a statistical coupling of the internal and translational kinetic energy fluctuations with the corresponding rotational kinetic energy of the subscale eddies, indicating a multiscale transfer of energy. The results show that MCT gives a new characterization of the energy cascade within compressible turbulence without the use of excessive computational resources. This material is based upon work supported by the Air Force Office of Scientific Research under Award Number FA9550-17-1-0154.

  4. Symbolic derivation of high-order Rayleigh-Schroedinger perturbation energies using computer algebra: Application to vibrational-rotational analysis of diatomic molecules

    Energy Technology Data Exchange (ETDEWEB)

    Herbert, John M. [Kansas State Univ., Manhattan, KS (United States). Dept. of Chemistry

    1997-01-01

    Rayleigh-Schroedinger perturbation theory is an effective and popular tool for describing low-lying vibrational and rotational states of molecules. This method, in conjunction with ab initio techniques for computation of electronic potential energy surfaces, can be used to calculate first-principles molecular vibrational-rotational energies to successive orders of approximation. Because of mathematical complexities, however, such perturbation calculations are rarely extended beyond the second order of approximation, although recent work by Herbert has provided a formula for the nth-order energy correction. This report extends that work and furnishes the remaining theoretical details (including a general formula for the Rayleigh-Schroedinger expansion coefficients) necessary for calculation of energy corrections to arbitrary order. The commercial computer algebra software Mathematica is employed to perform the prohibitively tedious symbolic manipulations necessary for derivation of generalized energy formulae in terms of universal constants, molecular constants, and quantum numbers. As a pedagogical example, a Hamiltonian operator tailored specifically to diatomic molecules is derived, and the perturbation formulae obtained from this Hamiltonian are evaluated for a number of such molecules. This work provides a foundation for future analyses of polyatomic molecules, since it demonstrates that arbitrary-order perturbation theory can successfully be applied with the aid of commercially available computer algebra software.
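
    For context, the first two standard Rayleigh-Schroedinger corrections that the report generalizes to arbitrary order are, in the usual textbook notation (V the perturbation, superscripts the order):

      E_n^{(1)} = \langle \psi_n^{(0)} | V | \psi_n^{(0)} \rangle,
      \qquad
      E_n^{(2)} = \sum_{k \neq n}
          \frac{\left| \langle \psi_k^{(0)} | V | \psi_n^{(0)} \rangle \right|^2}
               {E_n^{(0)} - E_k^{(0)}}

    The symbolic work described above automates the far lengthier algebra that appears beyond second order.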

  5. A computational study of inviscid hypersonic flows using energy relaxation method

    International Nuclear Information System (INIS)

    Nagdewe, Suryakant; Kim, H. D.; Shevare, G. R.

    2008-01-01

    Reasonable analysis of hypersonic flows requires a thermodynamic non-equilibrium model to properly simulate strong shock waves or high pressure and temperature states in the flow field. The energy relaxation method (ERM) has been used to model such non-equilibrium effects, which are generally expressed as a hyperbolic system of equations with a stiff relaxation source term. The relaxation time multiplying the source terms governs the degree of non-equilibrium in the system. In the present study, a numerical analysis has been carried out with varying values of the relaxation time for several hypersonic flows, with AUSM (advection upstream splitting method) as the numerical scheme. Vibrational modes of thermodynamic non-equilibrium are considered. The results showed that as the relaxation time approaches zero the solution marches toward equilibrium, while non-equilibrium effects appear as the relaxation time increases. The present computations predicted the experimental results for hypersonic flows with good accuracy. The work carried out suggests that the present energy relaxation method is robust for the analysis of hypersonic flows

  6. Analysis of Solar Energy Use for Multi-Flat Buildings Renovation

    Directory of Open Access Journals (Sweden)

    Kęstutis Valančius

    2016-10-01

    The paper analyses the energy and financial possibilities of installing renewable energy sources (solar energy generating systems) when renovating multi-flat buildings. The aim is to analyse solar energy system possibilities for the modernization of multi-flat buildings (5-storey, 9-storey and 16-storey), providing detailed conclusions about the appropriateness of the energy systems and their financial aspects. It is also intended to determine the optimal technological combinations and solutions to reach the maximum energy benefits. For the research, the computer simulation tools "EnergyPRO" and "PV*SOL Premium" are chosen. Actual metered heat and electricity consumption data are also used for the analysis.

  7. Computer code to predict the heat of explosion of high energy materials

    International Nuclear Information System (INIS)

    Muthurajan, H.; Sivabalan, R.; Pon Saravanan, N.; Talawar, M.B.

    2009-01-01

    The computational approach to the thermochemical changes involved in the explosion of high energy materials (HEMs), vis-a-vis their molecular structure, aids HEMs chemists/engineers in predicting important thermodynamic parameters such as the heat of explosion. Such computer-aided design is useful both in predicting the performance of a given HEM and in conceiving futuristic high energy molecules with significant potential in the field of explosives and propellants. The software code LOTUSES, developed by the authors, predicts various characteristics of HEMs such as explosion products (including balanced explosion reactions), density, velocity of detonation, CJ pressure, etc. The new computational approach described in this paper allows the prediction of the heat of explosion (ΔH_e) without any experimental data for different HEMs, with results comparable to experimental values reported in the literature. The new algorithm, which does not require any complex input parameters, is incorporated in LOTUSES (version 1.5) and the results are presented in this paper. Linear regression analysis of all data points yields the correlation coefficient R² = 0.9721 with the linear equation y = 0.9262x + 101.45. The correlation coefficient of 0.9721 reveals that the computed values are in good agreement with experimental values and are useful for rapid hazard assessment of energetic materials

  8. An efficient and accurate method for computation of energy release rates in beam structures with longitudinal cracks

    DEFF Research Database (Denmark)

    Blasques, José Pedro Albergaria Amaral; Bitsche, Robert

    2015-01-01

    This paper proposes a novel, efficient, and accurate framework for fracture analysis of beam structures with longitudinal cracks. The three-dimensional local stress field is determined using a high-fidelity beam model incorporating a finite element based cross section analysis tool. The Virtual Crack Closure Technique is used for computation of strain energy release rates. The devised framework was employed for analysis of cracks in beams with different cross section geometries. The results show that the accuracy of the proposed method is comparable to that of conventional three-dimensional solid finite element models while using only a fraction of the computation time.

  9. Computer-controlled system for rapid soil analysis of 226Ra

    International Nuclear Information System (INIS)

    Doane, R.W.; Berven, B.A.; Blair, M.S.

    1984-01-01

    A computer-controlled multichannel analysis system has been developed by the Radiological Survey Activities Group at Oak Ridge National Laboratory (ORNL) for the Department of Energy (DOE) in support of the DOE's remedial action programs. The purpose of this system is to provide a rapid estimate of the 226 Ra concentration in soil samples using a 6 x 9-in. NaI(Tl) crystal containing a 3.25-in. deep by 3.5-in. diameter well. This gamma detection system is controlled by a mini-computer with a dual floppy disk storage medium. A two-chip interface was also designed at ORNL which handles all control signals generated from the computer keyboard. These computer-generated control signals are processed in machine language for rapid data transfer and BASIC language is used for data processing

  10. EnergyPlus Run Time Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Tianzhen; Buhl, Fred; Haves, Philip

    2008-09-20

    EnergyPlus is a new-generation building performance simulation program offering many new modeling capabilities and more accurate performance calculations that integrate building components in sub-hourly time steps. However, EnergyPlus runs much slower than the current generation of simulation programs, which has become a major barrier to its widespread adoption by the industry. This paper analyzed EnergyPlus run time from comprehensive perspectives to identify key issues and challenges in speeding up EnergyPlus: studying the historical trends of EnergyPlus run time based on the advancement of computers and code improvements to EnergyPlus, comparing EnergyPlus with DOE-2 to understand and quantify the run time differences, identifying key simulation settings and model features that have significant impacts on run time, and performing code profiling to identify which EnergyPlus subroutines consume the most run time. This paper provides recommendations to improve EnergyPlus run time from the modeler's perspective and recommends adequate computing platforms. Suggestions for software code and architecture changes to improve EnergyPlus run time, based on the code profiling results, are also discussed.
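
    The profiling step described above, finding which subroutines consume the most run time, is worth a small illustration. EnergyPlus itself would be profiled with native tools; the sketch below merely demonstrates the idea on a hypothetical stand-in routine, using Python's built-in profiler:

      import cProfile, io, pstats

      def zone_heat_balance(steps):
          # Hypothetical stand-in for an expensive simulation inner loop.
          t = 20.0
          for _ in range(steps):
              t += 0.001 * (25.0 - t)
          return t

      pr = cProfile.Profile()
      pr.enable()
      zone_heat_balance(1_000_000)
      pr.disable()

      out = io.StringIO()
      pstats.Stats(pr, stream=out).sort_stats("cumulative").print_stats(5)
      print(out.getvalue())   # top-5 routines by cumulative time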

  11. Methodology for Validating Building Energy Analysis Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Judkoff, R.; Wortman, D.; O' Doherty, B.; Burch, J.

    2008-04-01

    The objective of this report was to develop a validation methodology for building energy analysis simulations, collect high-quality, unambiguous empirical data for validation, and apply the validation methodology to the DOE-2.1, BLAST-2MRT, BLAST-3.0, DEROB-3, DEROB-4, and SUNCAT 2.4 computer programs. This report covers background information, literature survey, validation methodology, comparative studies, analytical verification, empirical validation, comparative evaluation of codes, and conclusions.

  12. Available computer codes and data for radiation transport analysis

    International Nuclear Information System (INIS)

    Trubey, D.K.; Maskewitz, B.F.; Roussin, R.W.

    1975-01-01

    The Radiation Shielding Information Center (RSIC), sponsored and supported by the Energy Research and Development Administration (ERDA) and the Defense Nuclear Agency (DNA), is a technical institute serving the radiation transport and shielding community. It acquires, selects, stores, retrieves, evaluates, analyzes, synthesizes, and disseminates information on shielding and ionizing radiation transport. The major activities include: (1) operating a computer-based information system and answering inquiries on radiation analysis, (2) collecting, checking out, packaging, and distributing large computer codes, and evaluated and processed data libraries. The data packages include multigroup coupled neutron-gamma-ray cross sections and kerma coefficients, other nuclear data, and radiation transport benchmark problem results

  13. Energy Efficiency in Computing (2/2)

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    We will start the second day of our energy efficient computing series with a brief discussion of software and the impact it has on energy consumption. A second major point of this lecture will be the current state of research and a few future technologies, ranging from mainstream (e.g. the Internet of Things) to exotic. Lecturer's short bio: Andrzej Nowak has 10 years of experience in computing technologies, primarily from CERN openlab and Intel. At CERN, he managed a research lab collaborating with Intel and was part of the openlab Chief Technology Office. Andrzej also worked closely and initiated projects with the private sector (e.g. HP and Google), as well as international research institutes, such as EPFL. Currently, Andrzej acts as a consultant on technology and innovation with TIK Services (http://tik.services), and runs a peer-to-peer lending start-up. NB! All Academic Lectures are recorded. No webcast! Because of a problem of the recording equipment, this lecture will be repeated for recording pu...

  14. A computer-controlled system for rapid soil analysis of 226Ra

    International Nuclear Information System (INIS)

    Doane, R.W.; Berven, B.A.; Blair, M.S.

    1984-01-01

    A computer-controlled multichannel analysis system has been developed by the Radiological Survey Activities (RASA) Group at Oak Ridge National Laboratory (ORNL) for the Department of Energy (DOE) in support of the DOE's remedial action programs. The purpose of this system is to provide a rapid estimate of the 226 Ra concentration in soil samples using a 6 x 9 inch NaI(Tl) crystal containing a 3.25 inch deep by 3.5 inch diameter well. This gamma detection system is controlled by a minicomputer with a dual floppy disk storage medium, line printer, and optional X-Y plotter. A two-chip interface was also designed at ORNL which handles all control signals generated from the computer keyboard. These computer-generated control signals are processed in machine language for rapid data transfer, and BASIC language is used for data processing. The computer system is a Commodore Business Machines (CBM) Model 8032 personal computer with CBM peripherals. Control and data signals are passed via the parallel user's port to the interface unit. The analog-to-digital converter (ADC) is controlled in machine language, bootstrapped to high memory, and is addressed through the BASIC program. The BASIC program is designed to be ''user friendly'' and provides the operator with several modes of operation such as background and analysis acquisition. Any number of energy regions-of-interest (ROI) may be analyzed with automatic background subtraction. Also employed in the BASIC program are the 226 Ra algorithms, which utilize linear and polynomial regression equations for data conversion and look-up tables for radon equilibration coefficients. The optional X-Y plotter may be used with two- or three-dimensional curve programs to enhance data analysis and presentation. A description of the system is presented and typical applications are discussed
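
    The ROI analysis with automatic background subtraction described above reduces, in essence, to a few lines of arithmetic. A minimal sketch, assuming a simple linear calibration from net count rate to 226 Ra concentration; the channel window, count time and calibration constant are illustrative, not the ORNL system's:

      def roi_net_counts(spectrum, lo, hi, background):
          # Net counts in channels lo..hi after background subtraction.
          gross = sum(spectrum[lo:hi + 1])
          bkg = sum(background[lo:hi + 1])
          return gross - bkg

      def ra226_concentration(net_counts, count_time_s, cal_cps_per_pci_g):
          # Linear calibration: net count rate divided by counts-per-second
          # per unit concentration gives pCi/g.
          return (net_counts / count_time_s) / cal_cps_per_pci_g

      spectrum   = [12, 15, 40, 90, 130, 85, 30, 14]   # toy channel counts
      background = [10, 11, 12, 12, 13, 12, 11, 10]
      net = roi_net_counts(spectrum, 2, 6, background)
      print("226Ra ~", round(ra226_concentration(net, 400, 0.05), 1), "pCi/g")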

  15. Recovery Act - CAREER: Sustainable Silicon -- Energy-Efficient VLSI Interconnect for Extreme-Scale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Chiang, Patrick [Oregon State Univ., Corvallis, OR (United States)

    2014-01-31

    The research goal of this CAREER proposal is to develop energy-efficient, VLSI interconnect circuits and systems that will facilitate future massively-parallel, high-performance computing. Extreme-scale computing will exhibit massive parallelism on multiple vertical levels, from thousands of computational units on a single processor to thousands of processors in a single data center. Unfortunately, the energy required to communicate between these units at every level (on-chip, off-chip, off-rack) will be the critical limitation to energy efficiency. Therefore, the PI's career goal is to become a leading researcher in the design of energy-efficient VLSI interconnect for future computing systems.

  16. Opportunities for discovery: Theory and computation in Basic Energy Sciences

    Energy Technology Data Exchange (ETDEWEB)

    Harmon, Bruce; Kirby, Kate; McCurdy, C. William

    2005-01-11

    New scientific frontiers, recent advances in theory, and rapid increases in computational capabilities have created compelling opportunities for theory and computation to advance the scientific mission of the Office of Basic Energy Sciences (BES). The prospects for success in the experimental programs of BES will be enhanced by pursuing these opportunities. This report makes the case for an expanded research program in theory and computation in BES. The Subcommittee on Theory and Computation of the Basic Energy Sciences Advisory Committee was charged with identifying current and emerging challenges and opportunities for theoretical research within the scientific mission of BES, paying particular attention to how computing will be employed to enable that research. A primary purpose of the Subcommittee was to identify those investments that are necessary to ensure that theoretical research will have maximum impact in the areas of importance to BES, and to assure that BES researchers will be able to exploit the entire spectrum of computational tools, including leadership-class computing facilities. The Subcommittee's Findings and Recommendations are presented in Section VII of this report.

  17. Convolutional networks for fast, energy-efficient neuromorphic computing.

    Science.gov (United States)

    Esser, Steven K; Merolla, Paul A; Arthur, John V; Cassidy, Andrew S; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J; McKinstry, Jeffrey L; Melano, Timothy; Barch, Davis R; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D; Modha, Dharmendra S

    2016-10-11

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.

  18. Urban energy consumption: Different insights from energy flow analysis, input–output analysis and ecological network analysis

    International Nuclear Information System (INIS)

    Chen, Shaoqing; Chen, Bin

    2015-01-01

    Highlights: • Urban energy consumption was assessed from three different perspectives. • A new concept called controlled energy was developed from network analysis. • Embodied energy and controlled energy consumption of Beijing were compared. • The integration of all three perspectives will elucidate sustainable energy use. - Abstract: Energy consumption has always been a central issue for sustainable urban assessment and planning. Different forms of energy analysis can provide various insights for energy policy making. This paper brought together three approaches for energy consumption accounting, i.e., energy flow analysis (EFA), input–output analysis (IOA) and ecological network analysis (ENA), and compared their different perspectives and policy implications for urban energy use. Beijing was used to exemplify the different energy analysis processes, and the 42 economic sectors of the city were aggregated into seven components. It was determined that EFA quantifies both the primary and final energy consumption of the urban components by tracking the different types of fuel used by the urban economy. IOA accounts for the embodied energy consumption (direct and indirect) used to produce goods and services in the city, whereas the control analysis of ENA quantifies the specific embodied energy that is regulated by activities within the city's boundary. The network control analysis can also be applied to determining which economic sectors drive the energy consumption and to what extent these sectors are dependent on each other for energy. So-called "controlled energy" is a new concept that adds to the analysis of urban energy consumption, indicating the adjustable energy consumed by sectors. The integration of insights from all three accounting perspectives furthers our understanding of sustainable energy use in cities
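
    The IOA step mentioned above has a compact linear-algebra form: with direct energy intensities e, technical coefficient matrix A and final demand y, the total (direct plus indirect) intensities are e(I - A)^-1 and the embodied consumption is their product with y. A minimal sketch with an illustrative 3-sector economy (not Beijing's 42-sector table):

      import numpy as np

      A = np.array([[0.10, 0.05, 0.02],   # inter-sector requirements
                    [0.20, 0.15, 0.10],
                    [0.05, 0.10, 0.08]])
      e = np.array([2.0, 0.8, 0.3])       # direct energy per unit output
      y = np.array([50.0, 120.0, 200.0])  # final demand by sector

      L = np.linalg.inv(np.eye(3) - A)    # Leontief inverse
      eps = e @ L                         # total embodied intensities
      print("embodied energy by sector:", eps * y)
      print("total embodied energy    :", float(eps @ y))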

  19. Energy- and cost-efficient lattice-QCD computations using graphics processing units

    Energy Technology Data Exchange (ETDEWEB)

    Bach, Matthias

    2014-07-01

    Quarks and gluons are the building blocks of all hadronic matter, like protons and neutrons. Their interaction is described by Quantum Chromodynamics (QCD), a theory under test by large scale experiments like the Large Hadron Collider (LHC) at CERN and in the future at the Facility for Antiproton and Ion Research (FAIR) at GSI. However, perturbative methods can only be applied to QCD for high energies. Studies from first principles are possible via a discretization onto an Euclidean space-time grid. This discretization of QCD is called Lattice QCD (LQCD) and is the only ab-initio option outside of the high-energy regime. LQCD is extremely compute and memory intensive. In particular, it is by definition always bandwidth limited. Thus - despite the complexity of LQCD applications - it led to the development of several specialized compute platforms and influenced the development of others. However, in recent years General-Purpose computation on Graphics Processing Units (GPGPU) came up as a new means for parallel computing. Contrary to machines traditionally used for LQCD, graphics processing units (GPUs) are a mass-market product. This promises advantages in both the pace at which higher-performing hardware becomes available and its price. CL2QCD is an OpenCL based implementation of LQCD using Wilson fermions that was developed within this thesis. It operates on GPUs by all major vendors as well as on central processing units (CPUs). On the AMD Radeon HD 7970 it provides the fastest double-precision D kernel for a single GPU, achieving 120GFLOPS. D - the most compute intensive kernel in LQCD simulations - is commonly used to compare LQCD platforms. This performance is enabled by an in-depth analysis of optimization techniques for bandwidth-limited codes on GPUs. Further, analysis of the communication between GPU and CPU, as well as between multiple GPUs, enables high-performance Krylov space solvers and linear scaling to multiple GPUs within a single system. LQCD

  20. Energy- and cost-efficient lattice-QCD computations using graphics processing units

    International Nuclear Information System (INIS)

    Bach, Matthias

    2014-01-01

    Quarks and gluons are the building blocks of all hadronic matter, like protons and neutrons. Their interaction is described by Quantum Chromodynamics (QCD), a theory under test by large scale experiments like the Large Hadron Collider (LHC) at CERN and in the future at the Facility for Antiproton and Ion Research (FAIR) at GSI. However, perturbative methods can only be applied to QCD for high energies. Studies from first principles are possible via a discretization onto an Euclidean space-time grid. This discretization of QCD is called Lattice QCD (LQCD) and is the only ab-initio option outside of the high-energy regime. LQCD is extremely compute and memory intensive. In particular, it is by definition always bandwidth limited. Thus - despite the complexity of LQCD applications - it led to the development of several specialized compute platforms and influenced the development of others. However, in recent years General-Purpose computation on Graphics Processing Units (GPGPU) came up as a new means for parallel computing. Contrary to machines traditionally used for LQCD, graphics processing units (GPUs) are a mass-market product. This promises advantages in both the pace at which higher-performing hardware becomes available and its price. CL2QCD is an OpenCL based implementation of LQCD using Wilson fermions that was developed within this thesis. It operates on GPUs by all major vendors as well as on central processing units (CPUs). On the AMD Radeon HD 7970 it provides the fastest double-precision D kernel for a single GPU, achieving 120GFLOPS. D - the most compute intensive kernel in LQCD simulations - is commonly used to compare LQCD platforms. This performance is enabled by an in-depth analysis of optimization techniques for bandwidth-limited codes on GPUs. Further, analysis of the communication between GPU and CPU, as well as between multiple GPUs, enables high-performance Krylov space solvers and linear scaling to multiple GPUs within a single system. LQCD

  1. Gas analysis by computer-controlled microwave rotational spectrometry

    International Nuclear Information System (INIS)

    Hrubesh, L.W.

    1978-01-01

    Microwave rotational spectrometry has inherently high resolution and is thus nearly ideal for qualitative gas mixture analysis. Quantitative gas analysis is also possible by a simplified method which utilizes the ease with which molecular rotational transitions can be saturated at low microwave power densities. This article describes a computer-controlled microwave spectrometer which is used to demonstrate for the first time a totally automated analysis of a complex gas mixture. Examples are shown for a complete qualitative and quantitative analysis, in which a search of over 100 different compounds is made in less than 7 min, with sensitivity for most compounds in the 10 to 100 ppm range. This technique is expected to find increased use in view of the reduced complexity and increased reliability of microwave spectrometers and because of new energy-related applications for the analysis of mixtures of small molecules

  2. Optimum energies for dual-energy computed tomography

    International Nuclear Information System (INIS)

    Talbert, A.J.; Brooks, R.A.; Morgenthaler, D.G.

    1980-01-01

    By performing a dual-energy scan, separate information can be obtained on the Compton and photoelectric components of attenuation for an unknown material. This procedure has been analysed for the optimum energies and for the optimum dose distribution between the two scans. It was found that an equal dose at both energies was a good compromise compared with optimising the dose distribution for either the Compton or photoelectric component individually. For monoenergetic beams, it was found that a low energy of 40 keV produced minimum noise when used with high-energy beams of 80 to 100 keV. This was true whether one maintained constant integral dose or constant surface dose. A low energy of 50 keV, which is more nearly attainable in practice, produced almost as good accuracy. The analysis can be extended to polyenergetic beams by the inclusion of a noise factor. The above results were qualitatively unchanged, although the noise was increased by about 20% with integral dose equivalence and 50% with surface dose equivalence. It is very important to make the spectra as narrow as possible, especially at the low energy, in order to minimise the noise. (author)
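
    The decomposition discussed above amounts to solving one small linear system per measurement: the attenuation at each energy is modelled as a Compton term (Klein-Nishina energy dependence) plus a photoelectric term (roughly proportional to E^-3). A minimal sketch; the basis values and measured attenuations are illustrative assumptions, not figures from the paper:

      import numpy as np

      def decompose(mu_lo, mu_hi, E_lo, E_hi, kn_lo, kn_hi):
          # Two measurements, two unknowns: the Compton coefficient a_c
          # and the photoelectric coefficient a_p.
          M = np.array([[kn_lo, E_lo ** -3],
                        [kn_hi, E_hi ** -3]])
          return np.linalg.solve(M, np.array([mu_lo, mu_hi]))

      a_c, a_p = decompose(mu_lo=0.268, mu_hi=0.206,
                           E_lo=40.0, E_hi=80.0,
                           kn_lo=0.58, kn_hi=0.49)
      print("Compton coefficient      :", a_c)
      print("photoelectric coefficient:", a_p)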

  3. Discovering Unique, Low-Energy Transition States Using Evolutionary Molecular Memetic Computing

    DEFF Research Database (Denmark)

    Ellabaan, Mostafa M Hashim; Ong, Y.S.; Handoko, S.D.

    2013-01-01

    In the last few decades, the identification of transition states has experienced significant growth in research interest from various scientific communities. As per transition state theory, reaction paths and landscape analysis, as well as many thermodynamic properties of biochemical systems, can be accurately identified through the transition states. Transition states describe the paths of molecular systems in transiting across stable states. In this article, we present the discovery of unique, low-energy transition states and showcase the efficacy of their identification using the memetic computing paradigm under a Molecular Memetic Computing (MMC) framework. In essence, the MMC is equipped with the tree-based representation of non-cyclic molecules and the covalent-bond-driven evolutionary operators, in addition to the typical backbone of memetic algorithms. Herein, we employ genetic algorithm

  4. Computer simulation of high energy displacement cascades

    International Nuclear Information System (INIS)

    Heinisch, H.L.

    1990-01-01

    A methodology developed for modeling many aspects of high energy displacement cascades with molecular level computer simulations is reviewed. The initial damage state is modeled in the binary collision approximation (using the MARLOWE computer code), and the subsequent disposition of the defects within a cascade is modeled with a Monte Carlo annealing simulation (the ALSOME code). There are few adjustable parameters, and none are set to physically unreasonable values. The basic configurations of the simulated high energy cascades in copper, i.e., the number, size and shape of damage regions, compare well with observations, as do the measured numbers of residual defects and the fractions of freely migrating defects. The success of these simulations is somewhat remarkable, given the relatively simple models of defects and their interactions that are employed. The reason for this success is that the behavior of the defects is very strongly influenced by their initial spatial distributions, which the binary collision approximation adequately models. The MARLOWE/ALSOME system, with input from molecular dynamics and experiments, provides a framework for investigating the influence of high energy cascades on microstructure evolution. (author)

  5. Computer Simulation in Predicting Biochemical Processes and Energy Balance at WWTPs

    Science.gov (United States)

    Drewnowski, Jakub; Zaborowska, Ewa; Hernandez De Vega, Carmen

    2018-02-01

    Nowadays, the use of mathematical models and computer simulation allows analysis of many different technological solutions, as well as testing of various scenarios, in a short time and at low financial cost, in order to simulate a scenario under typical conditions for the real system and help find the best solution in the design or operation process. The aim of the study was to evaluate different concepts of biochemical process and energy balance modelling using the simulation platform GPS-x and the comprehensive model Mantis2. The paper presents an example of the calibration and validation processes in the biological reactor, as well as scenarios showing the influence of operational parameters on the WWTP energy balance. The results of batch tests and a full-scale campaign obtained in former work were used to predict biochemical and operational parameters in a newly developed plant model. The model was extended with sludge treatment devices, including an anaerobic digester. Primary sludge removal efficiency was found to be a significant factor determining biogas production and hence renewable energy production in cogeneration. Water and wastewater utilities, which run and control WWTPs, are interested in optimizing the process in order to protect the environment, save their budget and decrease pollutant emissions to water and air. In this context, computer simulation can be the easiest and most useful tool to improve efficiency without interfering with the actual process performance.

  6. Computer Simulation in Predicting Biochemical Processes and Energy Balance at WWTPs

    Directory of Open Access Journals (Sweden)

    Drewnowski Jakub

    2018-01-01

    Nowadays, the use of mathematical models and computer simulation allows analysis of many different technological solutions, as well as testing of various scenarios, in a short time and at low financial cost, in order to simulate a scenario under typical conditions for the real system and help find the best solution in the design or operation process. The aim of the study was to evaluate different concepts of biochemical process and energy balance modelling using the simulation platform GPS-x and the comprehensive model Mantis2. The paper presents an example of the calibration and validation processes in the biological reactor, as well as scenarios showing the influence of operational parameters on the WWTP energy balance. The results of batch tests and a full-scale campaign obtained in former work were used to predict biochemical and operational parameters in a newly developed plant model. The model was extended with sludge treatment devices, including an anaerobic digester. Primary sludge removal efficiency was found to be a significant factor determining biogas production and hence renewable energy production in cogeneration. Water and wastewater utilities, which run and control WWTPs, are interested in optimizing the process in order to protect the environment, save their budget and decrease pollutant emissions to water and air. In this context, computer simulation can be the easiest and most useful tool to improve efficiency without interfering with the actual process performance.

  7. Investigation on structural analysis computer program of spent nuclear fuel shipping cask

    International Nuclear Information System (INIS)

    Yagawa, Ganki; Ikushima, Takeshi.

    1987-10-01

    This report describes the results of work done by the Sub-Committee of the Research Cooperation Committee (RC-62) of the Japan Society of Mechanical Engineers, as entrusted by the Japan Atomic Energy Research Institute. The principal accomplishments are summarized as follows: (1) In a survey of structural analysis methods for spent fuel shipping casks, several documents were reviewed which explain the features and applications of dedicated computer programs for impact analysis based on 2- or 3-dimensional finite element or finite difference methods, such as HONDO, STEALTH and DYNA-3D. (2) For comparative evaluation of the existing computer programs, common benchmark test problems were adopted for the 9 m vertical drop impact of an axisymmetric lead cylinder with and without stainless steel cladding, and calculational evaluations taking the strain rate effect into account were carried out. (3) The impact analysis algorithms of the computer programs were evaluated, and the requirements for computer programs to be developed in future, as well as an index for further studies, were clarified. (author)

  8. PREFACE: 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT2013)

    Science.gov (United States)

    Wang, Jianxiong

    2014-06-01

    This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2013), which took place on 16-21 May 2013 at the Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China. The workshop series brings together computer science researchers and practitioners, and researchers from particle physics and related fields, to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. This year's edition of the workshop brought together over 120 participants from all over the world. Eighteen invited speakers presented key topics ranging from the universe in the computer and computing in the Earth sciences to multivariate data analysis and automated computation in quantum field theory, as well as computing and data analysis challenges in many fields. Over 70 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. The round-table discussions on open source, knowledge sharing and scientific collaboration stimulated reflection on these issues in the respective areas. ACAT 2013 was generously sponsored by the Chinese Academy of Sciences (CAS), the National Natural Science Foundation of China (NSFC), Brookhaven National Laboratory in the USA (BNL), Peking University (PKU), the Theoretical Physics Center for Science Facilities of CAS (TPCSF-CAS) and Sugon. We would like to thank all the participants for their scientific contributions and for their enthusiastic participation in all the activities of the workshop. Further information on ACAT 2013 can be found at http://acat2013.ihep.ac.cn. Professor Jianxiong Wang, Institute of High Energy Physics, Chinese Academy of Sciences. Details of committees and sponsors are available in the PDF.

  9. Computational methods for high-energy source shielding

    International Nuclear Information System (INIS)

    Armstrong, T.W.; Cloth, P.; Filges, D.

    1983-01-01

    The computational methods for high-energy radiation transport related to shielding of the SNQ spallation source are outlined. The basic approach is to couple radiation-transport computer codes which use Monte Carlo methods and discrete ordinates methods. A code system is suggested that incorporates state-of-the-art radiation-transport techniques. The stepwise verification of that system is briefly summarized. The complexity of the resulting code system suggests a more straightforward code specially tailored for thick-shield calculations. A short guideline for the future development of such a Monte Carlo code is given.

  10. Experimental high energy physics and modern computer architectures

    International Nuclear Information System (INIS)

    Hoek, J.

    1988-06-01

    The paper examines how experimental High Energy Physics can use modern computer architectures efficiently. In this connection parallel and vector architectures are investigated, and the types available at the moment for general use are discussed. A separate section briefly describes some architectures that are either a combination of both, or exemplify other architectures. In an appendix some directions in which computing seems to be developing in the USA are mentioned. (author)

  11. Large Scale Computing and Storage Requirements for Fusion Energy Sciences: Target 2017

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard

    2014-05-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,500 users working on some 650 projects that involve nearly 600 codes in a wide variety of scientific disciplines. In March 2013, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Fusion Energy Sciences (FES) held a review to characterize High Performance Computing (HPC) and storage requirements for FES research through 2017. This report is the result.

  12. Norwegian computers in European energy research project

    International Nuclear Information System (INIS)

    Anon.

    1979-01-01

    16 NORD computers have been ordered for the JET data acquisition and storage system. The computers will be arranged in a 'double star' configuration developed by CERN. Two control consoles each have their own computer. All computers for communication, control, diagnostics, consoles and testing are NORD-100s, while the computer for data storage and analysis is a NORD-500. The operating system is SINTRAN; a CAMAC serial highway with fibre optics is to be used for long communication paths. The programming languages FORTRAN, NODAL, NORD PL, PASCAL and BASIC may be used. The JET project and TOKAMAK-type machines are briefly described. (JIW)

  13. Morphing continuum analysis of energy transfer in compressible turbulence

    Science.gov (United States)

    Cheikh, Mohamad Ibrahim; Wonnell, Louis B.; Chen, James

    2018-02-01

    A shock-preserving finite volume solver with the generalized Lax-Friedrichs splitting flux for morphing continuum theory (MCT) is presented and verified. The numerical MCT solver is showcased in a supersonic turbulent flow at Mach 2.93 over an 8° compression ramp. The simulation results validate MCT against experiments as an alternative for modeling compressible turbulence. The required size of the smallest mesh cell for the MCT simulation is shown to be almost an order of magnitude larger than that in a similar direct numerical simulation study; this comparison shows that MCT is a much more computationally friendly theory than the classical Navier-Stokes equations. The dynamics of the energy cascade at the length scale of individual eddies is illuminated through the subscale rotation introduced by MCT. In this regard, MCT provides a statistical averaging procedure for capturing energy transfer in compressible turbulence that is not found in classical fluid theories. Analysis of the MCT results shows the existence of a statistical coupling of the internal and translational kinetic energy fluctuations with the corresponding eddy rotational energy fluctuations, indicating a multiscale transfer of energy. In conclusion, MCT gives a new characterization of the energy cascade within compressible turbulence without the use of excessive computational resources.

  14. Generalized Energy Flow Analysis Considering Electricity Gas and Heat Subsystems in Local-Area Energy Systems Integration

    Directory of Open Access Journals (Sweden)

    Jiaqi Shi

    2017-04-01

    Full Text Available To alleviate environmental pollution and improve the efficiency of energy use, energy systems integration (ESI), covering electric power systems, heat systems and natural gas systems, has become an important trend in energy utilization. Traditional power flow calculation methods, whose object is the power system alone, have difficulty meeting the requirements of coupled energy flow analysis. This paper proposes a generalized energy flow (GEF) analysis method suitable for an ESI containing electricity, heat and gas subsystems. First, models of the electricity, heat and natural gas networks in the ESI are established. In view of the complexity of conventional methods for solving a gas network that includes compressors, an improved practical equivalent method was adopted based on the different compressor control modes. On this basis, a hybrid method combining homotopy and the Newton-Raphson algorithm was used to solve the nonlinear GEF equations, and the Jacobian matrix reflecting the coupling of the multiple energy carriers was derived for both the grid-connected and islanded modes of the power system in the ESI. Finally, the validity of the proposed method in multi-energy flow calculation and in the analysis of interaction characteristics was verified using practical cases.
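
    A minimal sketch of the Newton-Raphson core of such a solver is shown below on a toy two-equation system coupling an "electrical" and a "gas" residual; the real GEF Jacobian couples full network models, and the homotopy continuation used for difficult starting points is omitted here.

```python
# Newton-Raphson on a toy coupled residual system (stand-in for GEF equations).
import numpy as np

def residuals(x):
    p, q = x  # p: toy electrical state, q: toy gas state
    return np.array([p**2 + 0.1 * q - 1.2,    # electrical mismatch, coupled via q
                     0.2 * p + q**2 - 0.9])   # gas mismatch, coupled via p

def jacobian(x):
    p, q = x
    return np.array([[2 * p, 0.1],
                     [0.2, 2 * q]])

x = np.array([1.0, 1.0])  # flat start
for it in range(20):
    f = residuals(x)
    if np.max(np.abs(f)) < 1e-10:
        break
    x = x - np.linalg.solve(jacobian(x), f)   # Newton step
print(f"converged after {it} iterations: x = {x}")
```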

  15. Computational fluid dynamics application: slosh analysis of a fuel tank model

    International Nuclear Information System (INIS)

    Iu, H.S.; Cleghorn, W.L.; Mills, J.K.

    2004-01-01

    This paper presents an analysis of fluid slosh behaviour inside a fuel tank model. The fuel tank model was a simplified version of a stock fuel tank that has a sloshing noise problem. The commercial CFD software FLOW-3D was used to simulate the slosh behaviour. Slosh experiments were performed to verify the computer simulation results. High-speed video equipment, enhanced with a data acquisition system, was used to record the slosh experiments and to obtain the instantaneous sound level for each video frame. Five baffle configurations, including the no-baffle configuration, were considered in the computer simulations and the experiments. The simulation results showed that the best baffle configuration can reduce the mean kinetic energy by 80% relative to the no-baffle configuration in a certain slosh situation. The experimental results showed that a 15 dB(A) noise reduction can be achieved by the best baffle configuration. The correlation analysis between the mean kinetic energy and the noise level showed that high mean kinetic energy of the fluid does not always correspond to high sloshing noise; high correlation between them occurs only in slosh situations where the fluid hits the top of the tank and creates noise. (author)
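
    The correlation step can be reproduced in a few lines: compare the simulated mean kinetic energy against the measured per-frame sound level. The arrays below are made-up samples, not data from the paper.

```python
# Pearson correlation between simulated kinetic energy and measured noise.
import numpy as np

mean_kinetic_energy = np.array([0.8, 1.4, 2.1, 2.9, 3.6, 4.2])  # J (assumed)
sound_level = np.array([52.0, 55.5, 59.0, 61.5, 64.0, 66.0])    # dB(A) (assumed)

r = np.corrcoef(mean_kinetic_energy, sound_level)[0, 1]
print(f"Pearson correlation: {r:.3f}")
# Per the paper's finding, a high r is expected only in slosh situations
# where the fluid actually impacts the tank top.
```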

  16. Energy retrofit analysis toolkits for commercial buildings: A review

    International Nuclear Information System (INIS)

    Lee, Sang Hoon; Hong, Tianzhen; Piette, Mary Ann; Taylor-Lange, Sarah C.

    2015-01-01

    Retrofit analysis toolkits can be used to optimize energy or cost savings from retrofit strategies, accelerating the adoption of ECMs (energy conservation measures) in buildings. This paper provides an up-to-date review of the features and capabilities of 18 energy retrofit toolkits, including ECMs and the calculation engines. The fidelity of the calculation techniques, a driving component of retrofit toolkits, was evaluated. An evaluation of the issues that hinder effective retrofit analysis (accessibility, usability, data requirements, and the application of efficiency measures) provides valuable insights for advancing the field. From this review, the following general observations were drawn: (1) toolkits developed primarily in the private sector use empirical, data-driven methods or benchmarking to provide ease of use; (2) almost all of the toolkits that used EnergyPlus or DOE-2 were freely accessible, but suffered from complexity and longer data-input and simulation run times; (3) in general, there appeared to be a fine line between having too much detail, resulting in a long analysis time, and too little detail, which sacrificed modeling fidelity. These insights provide an opportunity to enhance the design and development of existing and new retrofit toolkits in the future. - Highlights: • Retrofit analysis toolkits can accelerate the adoption of energy efficiency measures. • A comprehensive review of 19 retrofit analysis toolkits was conducted. • Retrofit toolkits have diverse features, data requirements and computing methods. • Empirical data-driven, normative and detailed energy modeling methods are used. • Identified immediate areas for improvement for retrofit analysis toolkits

  17. Convolutional networks for fast, energy-efficient neuromorphic computing

    Science.gov (United States)

    Esser, Steven K.; Merolla, Paul A.; Arthur, John V.; Cassidy, Andrew S.; Appuswamy, Rathinakumar; Andreopoulos, Alexander; Berg, David J.; McKinstry, Jeffrey L.; Melano, Timothy; Barch, Davis R.; di Nolfo, Carmelo; Datta, Pallab; Amir, Arnon; Taba, Brian; Flickner, Myron D.; Modha, Dharmendra S.

    2016-01-01

    Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware’s underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer. PMID:27651489

  18. Significant decimal digits for energy representation on short-word computers

    International Nuclear Information System (INIS)

    Sartori, E.

    1989-01-01

    The general belief that single-precision floating point numbers always have at least seven significant decimal digits on short-word computers such as IBM machines is erroneous. Seven significant digits are, however, required for representing the energy variable in nuclear cross-section data sets containing sharp p-wave resonances at 0 Kelvin. It is suggested either that the energy variable be stored in double precision or that cross-section resonances be reconstructed to room temperature or higher on short-word computers.
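
    The point is easy to demonstrate: two energies that differ only in the eighth significant digit collapse to the same single-precision value. The energies below are hypothetical grid points, not actual resonance data.

```python
# float32 carries roughly 7 significant decimal digits; float64 about 15-16.
import numpy as np

e1 = 2123456.7   # eV, hypothetical resonance energy grid point
e2 = 2123456.8   # eV, differs only in the 8th significant digit

# Both round to the same float32 (2123456.75), so the grid points collide:
print(np.float32(e1) == np.float32(e2))   # True  (single precision)
print(np.float64(e1) == np.float64(e2))   # False (double precision resolves it)
```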

  19. Wireless-Uplinks-Based Energy-Efficient Scheduling in Mobile Cloud Computing

    OpenAIRE

    Xing Liu; Chaowei Yuan; Zhen Yang; Enda Peng

    2015-01-01

    Mobile cloud computing (MCC) combines cloud computing and the mobile internet to improve the computational capabilities of resource-constrained mobile devices (MDs). In MCC, mobile users can not only improve the computational capability of their MDs but also reduce energy consumption by offloading mobile applications to the cloud. However, MCC faces the problem of energy efficiency because of time-varying channels when the offloading is being executed. In this paper, we address the issue of ener...
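
    The basic trade-off such schedulers evaluate can be sketched as follows: offload a task when transmitting its input costs less energy than computing it locally. All parameters below are assumed illustrative values; the real problem is harder because the uplink rate varies over time.

```python
# Toy offloading decision: compare local compute energy with transmit energy.
def should_offload(cycles, data_bits, *,
                   energy_per_cycle=1e-9,   # J/cycle on the device (assumed)
                   tx_power=0.5,            # W radio transmit power (assumed)
                   uplink_rate=2e6):        # bit/s; time-varying in practice
    e_local = cycles * energy_per_cycle
    e_offload = (data_bits / uplink_rate) * tx_power
    return e_offload < e_local

print(should_offload(5e9, 1e6))   # heavy compute, small input -> True (offload)
print(should_offload(1e8, 8e7))   # light compute, big input  -> False (local)
```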

  20. The Role of Energy Reservoirs in Distributed Computing: Manufacturing, Implementing, and Optimizing Energy Storage in Energy-Autonomous Sensor Nodes

    Science.gov (United States)

    Cowell, Martin Andrew

    The world already hosts more internet-connected devices than people, and that ratio is only increasing. These devices seamlessly integrate with people's lives to collect rich data and give immediate feedback about complex systems in business, health care, transportation, and security. As every aspect of the global economy integrates distributed computing into its industrial systems, these systems benefit from rich datasets. Managing the power demands of these distributed computers will be paramount to ensure the continued operation of these networks, and is elegantly addressed by including local energy harvesting and storage on a per-node basis. By replacing non-rechargeable batteries with energy harvesting, wireless sensor nodes can increase their lifetimes by an order of magnitude. This work investigates the coupling of high-power energy storage with energy harvesting technologies to power wireless sensor nodes, with sections covering device manufacturing, system integration, and mathematical modeling. First, we consider the energy storage mechanisms of supercapacitors and batteries and identify favorable characteristics in both reservoir types. We then discuss the experimental methods used to manufacture high-power supercapacitors in our labs. We go on to detail the integration of our fabricated devices with those of collaborating labs to create functional sensor node demonstrations. With the practical knowledge gained through in-lab manufacturing and system integration, we build mathematical models to aid in device and system design. First, we model the mechanism of energy storage in porous graphene supercapacitors to aid in component architecture optimization. We then model the operation of entire sensor nodes for the purpose of optimally sizing the energy harvesting and energy reservoir components. In consideration of deploying these sensor nodes in real-world environments, we model the operation of our energy harvesting and power management systems subject to
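
    The reservoir-sizing models described above reduce to stepping a harvest/consumption trace and tracking the storage state of charge; the sketch below shows the idea with an invented solar-like profile and made-up capacities.

```python
# Toy energy-neutrality check for one harvester/reservoir sizing candidate.
import numpy as np

hours = np.arange(48)
harvest = np.maximum(0, np.sin((hours % 24 - 6) * np.pi / 12)) * 50.0  # mWh/h (assumed)
load = np.full_like(harvest, 12.0)   # mWh/h steady sense+transmit duty (assumed)
capacity = 400.0                     # mWh candidate supercapacitor reservoir

soc, brownouts = capacity / 2, 0
for h, l in zip(harvest, load):
    soc = min(capacity, soc + h)     # charge, clipped at reservoir capacity
    if soc >= l:
        soc -= l                     # node runs this hour
    else:
        brownouts += 1               # insufficient energy; node browns out
print(f"brownout hours over 2 days: {brownouts}")
```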

  1. Piping stress analysis with personal computers

    International Nuclear Information System (INIS)

    Revesz, Z.

    1987-01-01

    The growing market for personal computers is providing an increasing number of professionals with unprecedented and surprisingly inexpensive computing capacity which, if used with powerful software, can enhance the engineer's capabilities immensely. This paper focuses on the possibilities which the widespread distribution of personal computers has opened in piping stress analysis, on the necessary changes in the software, and on the limitations of using personal computers for engineering design and analysis. Reliability and quality assurance aspects of using personal computers for nuclear applications are also mentioned. The paper closes with the personal views of the author and experience gained during interactive graphics piping software development for personal computers. (orig./GL)

  2. Instrumental aspects of tube-excited energy-dispersive X-ray fluorescence analysis

    International Nuclear Information System (INIS)

    Adams, F.; Nullens, H.; Espen, P. van

    1983-01-01

    Energy-dispersive X-ray fluorescence spectrometry is an attractive and widely used method for sensitive multi-element analysis. The method suffers from an extreme density of spectral components in a rather limited energy range, which implies the need for computer-based spectrum analysis. Iterative least-squares analysis is the most powerful tool for this; it requires a systematic and accurate description of the spectral features. Other important prerequisites for accurate analysis are calibration of the spectrometer and correction for matrix absorption effects in the sample; these can be calculated from available physical constants. Our procedures, and similar ones, prove that semi-automatic analyses are possible with an accuracy of the order of 5%. (author)
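
    The iterative least-squares idea can be sketched as fitting Gaussian peaks plus a background to channel counts. The synthetic spectrum and peak parameters below are illustrative assumptions; a production code would add terms for escape peaks, tailing and pile-up.

```python
# Nonlinear least-squares fit of two overlapping peaks on a linear background.
import numpy as np
from scipy.optimize import curve_fit

def model(x, a1, mu1, a2, mu2, sigma, b0, b1):
    g = lambda a, mu: a * np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    return g(a1, mu1) + g(a2, mu2) + b0 + b1 * x

x = np.linspace(0, 20, 400)                            # keV energy axis
rng = np.random.default_rng(1)
truth = model(x, 900, 6.40, 350, 7.06, 0.16, 40, -1.0) # overlapping lines (assumed)
counts = rng.poisson(np.maximum(truth, 0)).astype(float)

p0 = [800, 6.3, 300, 7.1, 0.2, 30, 0]                  # starting guesses
popt, _ = curve_fit(model, x, counts, p0=p0)
print("fitted peak centres (keV):", popt[1], popt[3])
```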

  3. Large Scale Computing and Storage Requirements for High Energy Physics

    International Nuclear Information System (INIS)

    Gerber, Richard A.; Wasserman, Harvey

    2010-01-01

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes

  4. 16th International workshop on Advanced Computing and Analysis Techniques in physics (ACAT)

    CERN Document Server

    Lokajicek, M; Tumova, N

    2015-01-01

    16th International workshop on Advanced Computing and Analysis Techniques in physics (ACAT). The ACAT workshop series, formerly AIHENP (Artificial Intelligence in High Energy and Nuclear Physics), was created back in 1990. Its main purpose is to gather researchers involved with computing in physics research, from both the physics and computer science sides, and give them a chance to communicate with each other. It has established bridges between physics and computer science research, facilitating advances in our understanding of the Universe at its smallest and largest scales. With the Large Hadron Collider and many astronomy and astrophysics experiments collecting larger and larger amounts of data, such bridges are needed now more than ever. The 16th edition of ACAT aims to bring related researchers together, once more, to explore and confront the boundaries of computing, automatic data analysis and theoretical calculation technologies. It will create a forum for exchanging ideas among the fields an...

  5. Computational materials design for energy applications

    Science.gov (United States)

    Ozolins, Vidvuds

    2013-03-01

    General adoption of sustainable energy technologies depends on the discovery and development of new high-performance materials. For instance, waste heat recovery and electricity generation via the solar thermal route require bulk thermoelectrics with a high figure of merit (ZT) and thermal stability at high-temperatures. Energy recovery applications (e.g., regenerative braking) call for the development of rapidly chargeable systems for electrical energy storage, such as electrochemical supercapacitors. Similarly, use of hydrogen as vehicular fuel depends on the ability to store hydrogen at high volumetric and gravimetric densities, as well as on the ability to extract it at ambient temperatures at sufficiently rapid rates. We will discuss how first-principles computational methods based on quantum mechanics and statistical physics can drive the understanding, improvement and prediction of new energy materials. We will cover prediction and experimental verification of new earth-abundant thermoelectrics, transition metal oxides for electrochemical supercapacitors, and kinetics of mass transport in complex metal hydrides. Research has been supported by the US Department of Energy under grant Nos. DE-SC0001342, DE-SC0001054, DE-FG02-07ER46433, and DE-FC36-08GO18136.

  6. Operating Wireless Sensor Nodes without Energy Storage: Experimental Results with Transient Computing

    Directory of Open Access Journals (Sweden)

    Faisal Ahmed

    2016-12-01

    Full Text Available Energy harvesting is increasingly used for powering wireless sensor network nodes. Recently, it has been suggested to combine it with the concept of transient computing, whereby the wireless sensor nodes operate without energy storage capabilities. This new combined approach brings benefits, for instance ultra-low-power nodes and reduced maintenance, but also raises new challenges, foremost dealing with nodes that may be left without power for various time periods. Although transient computing has been demonstrated on microcontrollers, reports on experiments with wireless sensor nodes are still scarce in the literature. In this paper, we describe our experiments with solar, thermal, and RF energy harvesting sources that are used to power sensor nodes (including wireless ones) without energy storage but with transient computing capabilities. The results show that the selected solar and thermal energy sources can operate both the wired and wireless nodes without energy storage, whereas in our specific implementation, the developed RF energy source can only be used for the selected nodes without wireless connectivity.
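
    The essence of transient computing is persisting progress to non-volatile memory so that a node which loses power resumes instead of restarting. The sketch below mimics this with a file standing in for FRAM/flash on a real microcontroller; the task and the checkpoint interval are invented.

```python
# Conceptual checkpoint/restore loop for a storage-less, transiently powered node.
import json, os

NVM = "checkpoint.json"   # stands in for FRAM/flash on a real node

def load_state():
    if os.path.exists(NVM):
        with open(NVM) as f:
            return json.load(f)
    return {"next_sample": 0, "acc": 0}

def sample(i):
    return (i * 37) % 101   # placeholder for an ADC reading

state = load_state()                      # resume wherever power last failed
for i in range(state["next_sample"], 1000):
    state["acc"] += sample(i)
    state["next_sample"] = i + 1
    if i % 50 == 0:                       # checkpoint every 50 samples
        with open(NVM, "w") as f:
            json.dump(state, f)           # power loss now costs at most 50 samples
print(state["acc"])
```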

  7. Energy Aware Computing in Cooperative Wireless Networks

    DEFF Research Database (Denmark)

    Olsen, Anders Brødløs; Fitzek, Frank H. P.; Koch, Peter

    2005-01-01

    In this work the idea of cooperation is applied to wireless communication systems. It is generally accepted that energy consumption is a significant design constraint for mobile handheld systems. We propose a novel method of cooperative task computing that distributes tasks among terminals over the unreliable wireless link. Principles of multi-processor energy-aware task scheduling are used, exploiting performance-scalable technologies such as Dynamic Voltage Scaling (DVS). We introduce a novel mechanism referred to as D2VS, and it is shown by means of simulation that savings of 40% can be achieved.
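
    The reason DVS saves energy is that the dynamic energy of a CMOS task scales roughly with the square of the supply voltage, so running slower at lower voltage wins whenever the deadline allows. A worked example with generic textbook numbers, not values from the paper:

```python
# Dynamic energy per task: E = C * V^2 * N (switched capacitance x cycles).
def task_energy(cycles, voltage, capacitance=1e-9):   # generic values
    return capacitance * voltage**2 * cycles

cycles = 1e8
fast = task_energy(cycles, 1.2)   # full speed at nominal voltage
slow = task_energy(cycles, 0.8)   # reduced frequency/voltage pair (assumed)
print(f"energy at 1.2 V: {fast:.3f} J, at 0.8 V: {slow:.3f} J "
      f"({100 * (1 - slow / fast):.0f}% saving)")
```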

  8. Batch Computed Tomography Analysis of Projectiles

    Science.gov (United States)

    2016-05-01

    ARL-TR-7681, May 2016, US Army Research Laboratory: Batch Computed Tomography Analysis of Projectiles, by Michael C Golt, Chris M... and Matthew S Bratcher, Weapons and Materials Research... [Report documentation page; only an abstract fragment survives:] ...values to account for projectile variability in the ballistic evaluation of armor. Subject terms: computed tomography, CT, BS41, projectiles

  9. Large Scale Computing and Storage Requirements for Basic Energy Sciences Research

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard; Wasserman, Harvey

    2011-03-31

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility supporting research within the Department of Energy's Office of Science. NERSC provides high-performance computing (HPC) resources to approximately 4,000 researchers working on about 400 projects. In addition to hosting large-scale computing facilities, NERSC provides the support and expertise scientists need to effectively and efficiently use HPC systems. In February 2010, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR) and DOE's Office of Basic Energy Sciences (BES) held a workshop to characterize HPC requirements for BES research through 2013. The workshop was part of NERSC's legacy of anticipating users' future needs and deploying the necessary resources to meet these demands. Workshop participants reached a consensus on several key findings, in addition to achieving the workshop's goal of collecting and characterizing computing requirements. The key requirements for scientists conducting research in BES are: (1) Larger allocations of computational resources; (2) Continued support for standard application software packages; (3) Adequate job turnaround time and throughput; and (4) Guidance and support for using future computer architectures. This report expands upon these key points and presents others. Several 'case studies' are included as significant representative samples of the needs of science teams within BES. Research teams' scientific goals, computational methods of solution, current and 2013 computing requirements, and special software and support needs are summarized in these case studies. Also included are researchers' strategies for computing in the highly parallel, 'multi-core' environment that is expected to dominate HPC architectures over the next few years. NERSC has strategic plans and initiatives already underway that address key workshop findings. This report includes a

  10. A review of residential computer oriented energy control systems

    Energy Technology Data Exchange (ETDEWEB)

    North, Greg

    2000-07-01

    The purpose of this report is to bring together as much information on Residential Computer Oriented Energy Control Systems as possible within a single document. This report identifies the main elements of the system and is intended to provide many technical options for the design and implementation of various energy related services.

  11. Planning for a program design for energy environmental analysis. Final report, draft

    Energy Technology Data Exchange (ETDEWEB)

    Denton, J; Saaty, T; Blair, P; Ma, F; Buneman, P

    1976-04-01

    The objective of the work reported here is to assist BER/ERDA in program planning with respect to a regional assessment study program for energy environmental analysis. The focus of the work was to examine the use of operational gaming for regional assessment studies. Specific concerns were gaming applications (1) in regional assessment or in the management and direction of regional assessments; (2) for achieving a higher level of public understanding of environmental, health, and safety problems of energy; (3) with respect to the supply of adequately trained manpower for energy; (4) with respect to computational requirements; and (5) with respect to the current state of the art in computer simulation. In order to investigate these concerns and examine the feasibility of using operational gaming in a regional assessment study program, a Regional Energy Environment Game (REEG) was designed and implemented on an IBM 370/168 digital computer employing APL (A Programming Language). The applicability of interactive operational gaming has been demonstrated by the REEG as applied to a region consisting of Delaware, Maryland, New Jersey, Pennsylvania, and the District of Columbia.

  12. New Challenges for Computing in High Energy Physics

    International Nuclear Information System (INIS)

    Santoro, Alberto

    2003-01-01

    In view of the new scientific programs established for the LHC (Large Hadron Collider) era, the way chosen to face the technological challenges in computing was to develop a new concept of GRID computing. We show some examples and, in particular, a proposal for high energy physicists in countries like Brazil. Due to the large amount of data and the need for close collaboration, it will be impossible to work in research centers and universities very far from Fermilab or CERN unless a GRID architecture is built. An important effort is being made by the international community to update its computing infrastructure and networks.

  13. Intelligent analysis of energy consumption in school buildings

    International Nuclear Information System (INIS)

    Raatikainen, Mika; Skön, Jukka-Pekka; Leiviskä, Kauko; Kolehmainen, Mikko

    2016-01-01

    Highlights: • Electricity and heating energy consumptions of six school buildings were compared. • Complex multivariate data was analysed using modern computational methods. • Variation in electricity consumption cost is considerably low between study schools. • District heating variation is very slight in two new study schools. • District heating cost describes energy efficiency and state of building automation. - Abstract: Even though industry consumes nearly half of total energy production, the relative share of total energy consumption related to heating and operating buildings is growing constantly. The motivation for this study was to reveal the differences in electricity use and district heating consumption in school buildings of various ages, during the working day and also during the night when human-based consumption is low. The overall aim of this study is to compare the energy (electricity and heating) consumption of six school buildings in Kuopio, Eastern Finland. The selected school buildings were built in different decades, and their ventilation and building automation systems also differ. The hourly energy consumption data was received from Kuopion Energia, the local energy supply company. In this paper, the results of data analysis on the energy consumption in these school buildings are presented. Preliminary results show that, generally speaking, new school buildings are more energy-efficient than older ones. However, concerning energy efficiency, two very new schools were exceptional because ventilation was kept on day and night in order to dry the building materials in the construction. The novelty of this study is that it makes use of hourly smart metering consumption data on electricity and district heating, using modern computational methods to analyse complex multivariate data in order to increase knowledge of the buildings' consumption profiles and energy efficiency.
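
    The profiling such an analysis starts from can be sketched with hourly meter data: split the readings into working-day and night windows per building and compare. The synthetic data below stands in for the hourly smart-metering data from the utility.

```python
# Day/night consumption profiles per building from hourly meter readings.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
idx = pd.date_range("2015-01-05", periods=24 * 7, freq="h")
occupied = (idx.hour >= 7) & (idx.hour < 17)
df = pd.DataFrame({
    "school_A": 20 + 30 * occupied + rng.normal(0, 2, len(idx)),  # kWh/h (synthetic)
    "school_B": 35 + 20 * occupied + rng.normal(0, 2, len(idx)),  # kWh/h (synthetic)
}, index=idx)

day = df.between_time("07:00", "17:00").mean()
night = df.between_time("22:00", "06:00").mean()
print(pd.DataFrame({"day_kWh": day, "night_kWh": night, "night_share": night / day}))
```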

  14. HEPLIB '91: International users meeting on the support and environments of high energy physics computing

    International Nuclear Information System (INIS)

    Johnstad, H.

    1991-01-01

    The purpose of this meeting is to discuss the current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the Meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software, Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, data base systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, data base systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards

  15. Contingency Analysis Post-Processing With Advanced Computing and Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Yousu; Glaesemann, Kurt; Fitzhenry, Erin

    2017-07-01

    Contingency analysis is a critical function widely used in energy management systems to assess the impact of power system component failures. Its outputs are important for power system operation for improved situational awareness, power system planning studies, and power market operations. With the increased complexity of power system modeling and simulation caused by increased energy production and demand, the penetration of renewable energy and fast deployment of smart grid devices, and the trend of operating grids closer to their capacity for better efficiency, more and more contingencies must be executed and analyzed quickly in order to ensure grid reliability and accuracy for the power market. Currently, many researchers have proposed different techniques to accelerate the computational speed of contingency analysis, but not much work has been published on how to post-process the large volume of contingency outputs quickly. This paper proposes a parallel post-processing function that can analyze contingency analysis outputs faster and display them in a web-based visualization tool, helping power engineers improve their work efficiency through fast information digestion. Case studies using an ESCA-60 bus system and a WECC planning system are presented to demonstrate the functionality of the parallel post-processing technique and the web-based visualization tool.
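
    The parallel post-processing idea can be sketched with a worker pool scanning each contingency's output for limit violations. The record format and the 100%-of-rating threshold below are assumed placeholders, not the tool's actual interface.

```python
# Scan contingency outputs for branch overloads in parallel worker processes.
from multiprocessing import Pool

def violations(case):
    cid, loadings = case   # (contingency id, branch loadings in % of rating)
    return cid, [x for x in loadings if x > 100.0]

if __name__ == "__main__":
    cases = [(f"ctg-{i}", [90.0 + (i * j) % 25 for j in range(5)])
             for i in range(1000)]                  # synthetic outputs
    with Pool() as pool:
        flagged = [(cid, v) for cid, v in pool.map(violations, cases) if v]
    print(f"{len(flagged)} contingencies with overloads")
```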

  16. Energy Sector Market Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Arent, D.; Benioff, R.; Mosey, G.; Bird, L.; Brown, J.; Brown, E.; Vimmerstedt, L.; Aabakken, J.; Parks, K.; Lapsa, M.; Davis, S.; Olszewski, M.; Cox, D.; McElhaney, K.; Hadley, S.; Hostick, D.; Nicholls, A.; McDonald, S.; Holloman, B.

    2006-10-01

    This paper presents the results of energy market analysis sponsored by the Department of Energy's (DOE) Weatherization and International Program (WIP) within the Office of Energy Efficiency and Renewable Energy (EERE). The analysis was conducted by a team of DOE laboratory experts from the National Renewable Energy Laboratory (NREL), Oak Ridge National Laboratory (ORNL), and Pacific Northwest National Laboratory (PNNL), with additional input from Lawrence Berkeley National Laboratory (LBNL). The analysis was structured to identify those markets and niches where government can create the biggest impact by informing management decisions in the private and public sectors. The analysis identifies those markets and niches where opportunities exist for increasing energy efficiency and renewable energy use.

  17. Image analysis and modeling in medical image computing. Recent developments and advances.

    Science.gov (United States)

    Handels, H; Deserno, T M; Meinzer, H-P; Tolxdorff, T

    2012-01-01

    Medical image computing is of growing importance in medical diagnostics and image-guided therapy. Nowadays, image analysis systems integrating advanced image computing methods are used in practice, e.g., to extract quantitative image parameters or to support the surgeon during a navigated intervention. However, the degree of automation, accuracy, reproducibility and robustness of medical image computing methods has to be increased to meet the requirements of clinical routine. In the focus theme, recent developments and advances in the field of modeling and model-based image analysis are described. The introduction of models into the image analysis process enables improvements of image analysis algorithms in terms of automation, accuracy, reproducibility and robustness. Furthermore, model-based image computing techniques open up new perspectives for the prediction of organ changes and risk analysis of patients. Selected contributions are assembled to present the latest advances in the field. The authors were invited to present their recent work and results based on their outstanding contributions to the Conference on Medical Image Computing BVM 2011 held at the University of Lübeck, Germany. All manuscripts had to pass a comprehensive peer review. Modeling approaches and model-based image analysis methods showing new trends and perspectives in model-based medical image computing are described. Complex models are used in different medical applications, and medical images such as radiographic images, dual-energy CT images, MR images, diffusion tensor images and microscopic images are analyzed. The applications emphasize the high potential and the wide application range of these methods. The use of model-based image analysis methods can improve segmentation quality as well as the accuracy and reproducibility of quantitative image analysis. Furthermore, image-based models enable new insights and can lead to a deeper understanding of complex dynamic mechanisms in the human body.

  18. Developmental dysplasia of the hip: A computational biomechanical model of the path of least energy for closed reduction.

    Science.gov (United States)

    Zwawi, Mohammed A; Moslehy, Faissal A; Rose, Christopher; Huayamave, Victor; Kassab, Alain J; Divo, Eduardo; Jones, Brendan J; Price, Charles T

    2017-08-01

    This study utilized a computational biomechanical model and applied the least energy path principle to investigate two pathways for closed reduction of high-grade infantile hip dislocation. The principle of least energy, when applied to moving the femoral head from an initial to a final position, considers all possible paths that connect them and identifies the path of least resistance. Clinical reports of severe hip dysplasia have concluded that reduction of the femoral head into the acetabulum may occur by a direct pathway over the posterior rim of the acetabulum when using the Pavlik harness, or by an indirect pathway with reduction through the acetabular notch when using the modified Hoffman-Daimler method. This computational study also compared the energy requirements for both pathways. The anatomical and muscular aspects of the model were derived using a combination of MRI and OpenSim data. Results of this study indicate that the path of least energy closely approximates the indirect pathway of the modified Hoffman-Daimler method; the direct pathway over the posterior rim of the acetabulum required more energy for reduction. This biomechanical analysis confirms the clinical observations of the two pathways for closed reduction of severe hip dysplasia. Further study of the modified Hoffman-Daimler method for reduction of severe hip dysplasia may be warranted based on this computational biomechanical analysis. © 2016 The Authors. Journal of Orthopaedic Research published by Wiley Periodicals, Inc. on behalf of the Orthopaedic Research Society. J Orthop Res 35:1799-1805, 2017.

  19. Soft computing analysis of the possible correlation between temporal and energy release patterns in seismic activity

    Science.gov (United States)

    Konstantaras, Anthony; Katsifarakis, Emmanouil; Artzouxaltzis, Xristos; Makris, John; Vallianatos, Filippos; Varley, Martin

    2010-05-01

    This paper is a preliminary investigation of the possible correlation of temporal and energy release patterns of seismic activity involving the preparation processes of consecutive sizeable seismic events [1,2]. The background idea is that during periods of low-level seismic activity, stress processes in the crust accumulate energy at the seismogenic area, whilst larger seismic events act as a decongesting mechanism releasing considerable energy [3,4]. A dynamic algorithm is being developed aiming to identify and cluster pre- and post-seismic events relative to the main earthquake, following on from research carried out by Zubkov [5] and Dobrovolsky [6,7]. This clustering technique, along with energy release equations dependent on the Richter scale [8,9], allows an estimate to be drawn of the amount of energy released by the seismic sequence. The above approach is being implemented as a monitoring tool to investigate the behaviour of the underlying energy management system by introducing this information to various neural [10,11] and soft computing models [1,12,13,14]. The incorporation of intelligent systems aims towards the detection and simulation of the possible relationship between energy release patterns and time intervals among consecutive sizeable earthquakes [1,15]. Anticipated successful training of the imported intelligent systems may result in a real-time, on-line processing methodology [1,16] capable of dynamically approximating the time interval between the latest and the next forthcoming sizeable seismic event by monitoring the energy release process in a specific seismogenic area. Indexing terms: pattern recognition, long-term earthquake precursors, neural networks, soft computing, earthquake occurrence intervals. References: [1] Konstantaras A., Vallianatos F., Varley M.R. and Makris J. P.: 'Soft computing modelling of seismicity in the southern Hellenic arc', IEEE Geoscience and Remote Sensing Letters, vol. 5 (3), pp. 323-327, 2008 [2] Eneva M. and
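
    The energy release estimate invoked above is conventionally obtained from the Gutenberg-Richter energy-magnitude relation; a standard form (with the energy E in joules and M the magnitude) is

```latex
\log_{10} E = 4.8 + 1.5\,M
\qquad\Longrightarrow\qquad
E_{\mathrm{total}} = \sum_{i} 10^{\,4.8 + 1.5 M_i}\ \mathrm{J},
```

    so one magnitude unit corresponds to roughly a 32-fold increase in released energy, which is why the largest events of a sequence dominate the decongestion of a seismogenic area.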

  20. Investigation on structural analysis computer program of spent nuclear fuel shipping cask, (2)

    International Nuclear Information System (INIS)

    Yagawa, Ganki; Ikushima, Takeshi.

    1987-10-01

    This report describes the results (II) of work done by the Sub-Committee of the Research Cooperation Committee (RC-62) of the Japan Society of Mechanical Engineers, as entrusted by the Japan Atomic Energy Research Institute. The principal accomplishments are summarized as follows: (1) In a survey of structural analysis methods for spent fuel shipping casks, several documents were reviewed which explain the features and applications of dedicated computer programs for impact analysis based on 2- or 3-dimensional finite element or finite difference methods. (2) For comparative evaluation of the existing computer programs, common benchmark test problems for the drop impact of an axisymmetric cylinder and a plate were adopted, and calculational evaluations taking into account the strain rate effect of material properties, the effect of artificial viscosity, and the effect of time integration step size were carried out. (3) The impact analysis algorithms of the computer programs were evaluated, and the requirements for computer programs to be developed in future, as well as an index for further studies, were clarified. (author)

  1. Basic Energy Sciences Exascale Requirements Review. An Office of Science review sponsored jointly by Advanced Scientific Computing Research and Basic Energy Sciences, November 3-5, 2015, Rockville, Maryland

    Energy Technology Data Exchange (ETDEWEB)

    Windus, Theresa [Ames Lab., Ames, IA (United States); Banda, Michael [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Devereaux, Thomas [SLAC National Accelerator Lab., Menlo Park, CA (United States); White, Julia C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Antypas, Katie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Coffey, Richard [Argonne National Lab. (ANL), Argonne, IL (United States); Dart, Eli [Energy Sciences Network (ESNet), Berkeley, CA (United States); Dosanjh, Sudip [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Gerber, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hack, James [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Monga, Inder [Energy Sciences Network (ESNet), Berkeley, CA (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Riley, Katherine [Argonne National Lab. (ANL), Argonne, IL (United States); Rotman, Lauren [Energy Sciences Network (ESNet), Berkeley, CA (United States); Straatsma, Tjerk [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wells, Jack [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Baruah, Tunna [Univ. of Texas, El Paso, TX (United States); Benali, Anouar [Argonne National Lab. (ANL), Argonne, IL (United States); Borland, Michael [Argonne National Lab. (ANL), Argonne, IL (United States); Brabec, Jiri [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Carter, Emily [Princeton Univ., NJ (United States); Ceperley, David [Univ. of Illinois, Urbana-Champaign, IL (United States); Chan, Maria [Argonne National Lab. (ANL), Argonne, IL (United States); Chelikowsky, James [Univ. of Texas, Austin, TX (United States); Chen, Jackie [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Cheng, Hai-Ping [Univ. of Florida, Gainesville, FL (United States); Clark, Aurora [Washington State Univ., Pullman, WA (United States); Darancet, Pierre [Argonne National Lab. (ANL), Argonne, IL (United States); DeJong, Wibe [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Deslippe, Jack [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Dixon, David [Univ. of Alabama, Tuscaloosa, AL (United States); Donatelli, Jeffrey [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dunning, Thomas [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Fernandez-Serra, Marivi [Stony Brook Univ., NY (United States); Freericks, James [Georgetown Univ., Washington, DC (United States); Gagliardi, Laura [Univ. of Minnesota, Minneapolis, MN (United States); Galli, Giulia [Univ. of Chicago, IL (United States); Garrett, Bruce [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Glezakou, Vassiliki-Alexandra [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gordon, Mark [Iowa State Univ., Ames, IA (United States); Govind, Niri [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Gray, Stephen [Argonne National Lab. (ANL), Argonne, IL (United States); Gull, Emanuel [Univ. of Michigan, Ann Arbor, MI (United States); Gygi, Francois [Univ. of California, Davis, CA (United States); Hexemer, Alexander [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Isborn, Christine [Univ. 
of California, Merced, CA (United States); Jarrell, Mark [Louisiana State Univ., Baton Rouge, LA (United States); Kalia, Rajiv K. [Univ. of Southern California, Los Angeles, CA (United States); Kent, Paul [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Klippenstein, Stephen [Argonne National Lab. (ANL), Argonne, IL (United States); Kowalski, Karol [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Krishnamurthy, Hulikal [Indian Inst. of Science, Bangalore (India); Kumar, Dinesh [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lena, Charles [Univ. of Texas, Austin, TX (United States); Li, Xiaosong [Univ. of Washington, Seattle, WA (United States); Maier, Thomas [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Markland, Thomas [Stanford Univ., CA (United States); McNulty, Ian [Argonne National Lab. (ANL), Argonne, IL (United States); Millis, Andrew [Columbia Univ., New York, NY (United States); Mundy, Chris [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Nakano, Aiichiro [Univ. of Southern California, Los Angeles, CA (United States); Niklasson, A.M.N. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Panagiotopoulos, Thanos [Princeton Univ., NJ (United States); Pandolfi, Ron [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Parkinson, Dula [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Pask, John [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Perazzo, Amedeo [SLAC National Accelerator Lab., Menlo Park, CA (United States); Rehr, John [Univ. of Washington, Seattle, WA (United States); Rousseau, Roger [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Sankaranarayanan, Subramanian [Argonne National Lab. (ANL), Argonne, IL (United States); Schenter, Greg [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Selloni, Annabella [Princeton Univ., NJ (United States); Sethian, Jamie [Univ. of California, Berkeley, CA (United States); Siepmann, Ilja [Univ. of Minnesota, Minneapolis, MN (United States); Slipchenko, Lyudmila [Purdue Univ., West Lafayette, IN (United States); Sternberg, Michael [Argonne National Lab. (ANL), Argonne, IL (United States); Stevens, Mark [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Summers, Michael [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Sumpter, Bobby [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Sushko, Peter [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Thayer, Jana [SLAC National Accelerator Lab., Menlo Park, CA (United States); Toby, Brian [Argonne National Lab. (ANL), Argonne, IL (United States); Tull, Craig [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Valeev, Edward [Virginia Polytechnic Inst. and State Univ. (Virginia Tech), Blacksburg, VA (United States); Vashishta, Priya [Univ. of Southern California, Los Angeles, CA (United States); Venkatakrishnan, V. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yang, C. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yang, Ping [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Zwart, Peter H. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2017-02-03

    Computers have revolutionized every aspect of our lives. Yet in science, the most tantalizing applications of computing lie just beyond our reach. The current quest to build an exascale computer with one thousand times the capability of today’s fastest machines (and more than a million times that of a laptop) will take researchers over the next horizon. The field of materials, chemical reactions, and compounds is inherently complex. Imagine millions of new materials with new functionalities waiting to be discovered — while researchers also seek to extend those materials that are known to a dizzying number of new forms. We could translate massive amounts of data from high precision experiments into new understanding through data mining and analysis. We could have at our disposal the ability to predict the properties of these materials, to follow their transformations during reactions on an atom-by-atom basis, and to discover completely new chemical pathways or physical states of matter. Extending these predictions from the nanoscale to the mesoscale, from the ultrafast world of reactions to long-time simulations to predict the lifetime performance of materials, and to the discovery of new materials and processes will have a profound impact on energy technology. In addition, discovery of new materials is vital to move computing beyond Moore’s law. To realize this vision, more than hardware is needed. New algorithms to take advantage of the increase in computing power, new programming paradigms, and new ways of mining massive data sets are needed as well. This report summarizes the opportunities and the requisite computing ecosystem needed to realize the potential before us. In addition to pursuing new and more complete physical models and theoretical frameworks, this review found that the following broadly grouped areas relevant to the U.S. Department of Energy (DOE) Office of Advanced Scientific Computing Research (ASCR) would directly affect the Basic Energy

  2. Symbolic computation and its application to high energy physics

    International Nuclear Information System (INIS)

    Hearn, A.C.

    1981-01-01

    It is clear that we are in the middle of an electronic revolution whose effect will be as profound as that of the industrial revolution. The continuing advances in computing technology will provide us with devices which will make present-day computers appear primitive. In this environment, the algebraic and other non-numerical capabilities of such devices will become increasingly important. These lectures will review the present state of the field of algebraic computation and its potential for problem solving in high energy physics and related areas. We shall begin with a brief description of the available systems and examine the data objects which they consider. As an example of the facilities which these systems can offer, we shall then consider the problem of analytic integration, since this is so fundamental to many of the calculational techniques used by high energy physicists. Finally, we shall study the implications which the current developments in hardware technology hold for scientific problem solving. (orig.)
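
    As a present-day illustration of the analytic integration facilities discussed in these lectures, the sketch below uses the open-source computer algebra system SymPy; the lectures themselves predate it and concern systems such as REDUCE.

```python
# Symbolic (analytic) integration of a propagator-like integrand with SymPy.
import sympy as sp

x, m = sp.symbols("x m", positive=True)
expr = 1 / (x**2 + m**2)

antideriv = sp.integrate(expr, x)              # -> atan(x/m)/m
definite = sp.integrate(expr, (x, 0, sp.oo))   # -> pi/(2*m)
print(antideriv, definite)
```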

  3. Impact of office productivity cloud computing on energy consumption and greenhouse gas emissions.

    Science.gov (United States)

    Williams, Daniel R; Tang, Yinshan

    2013-05-07

Cloud computing is usually regarded as being energy efficient and thus emitting fewer greenhouse gases (GHG) than traditional forms of computing. When the energy consumption of Microsoft's cloud computing Office 365 (O365) and traditional Office 2010 (O2010) software suites was tested and modeled, some cloud services were found to consume more energy than the traditional form. The model developed in this research took into consideration the energy consumption at the three main stages of data transmission: data center, network, and end-user device. Comparable products from each suite were selected and activities were defined for each product to represent a different computing type. Microsoft provided highly confidential data for the data center stage, while the networking and user device stages were measured directly. A new measurement and software apportionment approach was defined and utilized, allowing the power consumption of cloud services to be directly measured for the user device stage. Results indicated that cloud computing is more energy efficient for Excel and Outlook, which consumed less energy and emitted less GHG than their standalone counterparts. The power consumption of the cloud-based Outlook (8%) and Excel (17%) was lower than that of their traditional counterparts. However, the power consumption of the cloud version of Word was 17% higher than its traditional equivalent. A third, mixed access method was also measured for Word, which emitted 5% more GHG than the traditional version. It is evident that cloud computing may not provide a unified way forward to reduce energy consumption and GHG. Direct conversion from the standalone package to the cloud provision platform can now consider energy and GHG emissions at the software development and cloud service design stage using the methods described in this research.
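
    The three-stage bookkeeping described above reduces to a simple sum per activity. A minimal sketch of that accounting in Python, where every figure and function name is an illustrative placeholder rather than the paper's (partly confidential) data:

```python
# Minimal sketch of a three-stage energy model for a cloud service,
# following the data-center / network / end-user-device split described
# above. All numbers are illustrative placeholders, not the paper's data.

def service_energy_kwh(datacenter_kwh, network_kwh, device_kwh):
    """Total energy attributed to one activity across all three stages."""
    return datacenter_kwh + network_kwh + device_kwh

def ghg_kg(energy_kwh, grid_intensity_kg_per_kwh=0.5):
    """Convert energy to greenhouse-gas emissions for a given grid mix."""
    return energy_kwh * grid_intensity_kg_per_kwh

# Compare a hypothetical cloud activity with its standalone counterpart.
cloud = service_energy_kwh(datacenter_kwh=0.010, network_kwh=0.004, device_kwh=0.020)
local = service_energy_kwh(datacenter_kwh=0.0, network_kwh=0.0, device_kwh=0.040)
print(f"cloud: {ghg_kg(cloud):.4f} kg CO2e, local: {ghg_kg(local):.4f} kg CO2e")
```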

  4. Computational Fluid Dynamics and Building Energy Performance Simulation

    DEFF Research Database (Denmark)

    Nielsen, Peter Vilhelm; Tryggvason, T.

    1998-01-01

An interconnection between a building energy performance simulation program and a Computational Fluid Dynamics program (CFD) for room air distribution will be introduced for improvement of the predictions of both the energy consumption and the indoor environment. The building energy performance simulation program requires a detailed description of the energy flow in the air movement, which can be obtained by a CFD program. The paper describes an energy consumption calculation in a large building, where the building energy simulation program is modified by CFD predictions of the flow between three zones connected by open areas with pressure- and buoyancy-driven air flow. The two programs are interconnected in an iterative procedure. The paper also shows an evaluation of the air quality in the main area of the buildings based on CFD predictions. It is shown that an interconnection between a CFD...

  5. Energy consumption analysis for various memristive networks under different learning strategies

    Science.gov (United States)

    Deng, Lei; Wang, Dong; Zhang, Ziyang; Tang, Pei; Li, Guoqi; Pei, Jing

    2016-02-01

Recently, various memristive systems have emerged to emulate the efficient computing paradigm of the brain cortex; however, how to make them energy efficient remains unclear, especially from an overall perspective. Here, a systematic, bottom-up energy consumption analysis is demonstrated, spanning the memristor device level and the network learning level. We propose an energy estimating methodology for modulating the memristive synapses, which is simulated in three typical neural networks with different synaptic structures and learning strategies for both offline and online learning. These results provide in-depth insight for creating energy-efficient brain-inspired neuromorphic devices in the future.

  6. Energy consumption analysis of the Venus Deep Space Station (DSS-13)

    Science.gov (United States)

    Hayes, N. V.

    1983-01-01

This report continues the energy consumption analysis and verification study of the tracking stations of the Goldstone Deep Space Communications Complex, and presents an audit of the Venus Deep Space Station (DSS 13). Because of the non-continuous radio astronomy research and development operations at the station, estimates of energy usage were employed in the energy consumption simulation of both the 9-meter and 26-meter antenna buildings. A 17.9% decrease in station energy consumption was observed over the 1979-1981 study period. A comparison of the ECP computer simulations and the station's main watt-hour meter readings showed good agreement.

  7. Computer Architecture for Energy Efficient SFQ

    Science.gov (United States)

    2014-08-27

This abstract summarizes work accomplished during this ARO-sponsored project at IBM Research (T.J. Watson Research Laboratory, Yorktown Heights, NY) to identify and model an energy-efficient SFQ-based computer architecture, IBM Windsor Blue (WB), illustrated schematically in Figure 2 of the report. The basic building block of WB is a "tile" comprised of a 64-bit arithmetic logic unit...

  8. Computer usage and national energy consumption: Results from a field-metering study

    Energy Technology Data Exchange (ETDEWEB)

    Desroches, Louis-Benoit [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis & Environmental Impacts Dept., Environmental Energy Technologies Division; Fuchs, Heidi [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis & Environmental Impacts Dept., Environmental Energy Technologies Division; Greenblatt, Jeffery [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis & Environmental Impacts Dept., Environmental Energy Technologies Division; Pratt, Stacy [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis & Environmental Impacts Dept., Environmental Energy Technologies Division; Willem, Henry [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis & Environmental Impacts Dept., Environmental Energy Technologies Division; Claybaugh, Erin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis & Environmental Impacts Dept., Environmental Energy Technologies Division; Beraki, Bereket [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis & Environmental Impacts Dept., Environmental Energy Technologies Division; Nagaraju, Mythri [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis & Environmental Impacts Dept., Environmental Energy Technologies Division; Price, Sarah [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis & Environmental Impacts Dept., Environmental Energy Technologies Division; Young, Scott [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis & Environmental Impacts Dept., Environmental Energy Technologies Division

    2014-12-01

    The electricity consumption of miscellaneous electronic loads (MELs) in the home has grown in recent years, and is expected to continue rising. Consumer electronics, in particular, are characterized by swift technological innovation, with varying impacts on energy use. Desktop and laptop computers make up a significant share of MELs electricity consumption, but their national energy use is difficult to estimate, given uncertainties around shifting user behavior. This report analyzes usage data from 64 computers (45 desktop, 11 laptop, and 8 unknown) collected in 2012 as part of a larger field monitoring effort of 880 households in the San Francisco Bay Area, and compares our results to recent values from the literature. We find that desktop computers are used for an average of 7.3 hours per day (median = 4.2 h/d), while laptops are used for a mean 4.8 hours per day (median = 2.1 h/d). The results for laptops are likely underestimated since they can be charged in other, unmetered outlets. Average unit annual energy consumption (AEC) for desktops is estimated to be 194 kWh/yr (median = 125 kWh/yr), and for laptops 75 kWh/yr (median = 31 kWh/yr). We estimate national annual energy consumption for desktop computers to be 20 TWh. National annual energy use for laptops is estimated to be 11 TWh, markedly higher than previous estimates, likely reflective of laptops drawing more power in On mode in addition to greater market penetration. This result for laptops, however, carries relatively higher uncertainty compared to desktops. Different study methodologies and definitions, changing usage patterns, and uncertainty about how consumers use computers must be considered when interpreting our results with respect to existing analyses. Finally, as energy consumption in On mode is predominant, we outline several energy savings opportunities: improved power management (defaulting to low-power modes after periods of inactivity as well as power scaling), matching the rated power
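
    The unit AEC arithmetic behind figures like these is straightforward: hours per day in each power mode, times that mode's draw, summed over a year. A small sketch, where the power draws are assumed values rather than the study's measurements:

```python
# Sketch of the unit annual energy consumption (AEC) arithmetic implied
# above: daily hours in each power mode times mode power, summed over a
# year. The wattages below are illustrative assumptions, not metered data.

HOURS_PER_YEAR = 8760

def annual_energy_kwh(on_hours_per_day, on_watts, idle_watts):
    """AEC in kWh/yr given daily active use and an idle draw otherwise."""
    on_h = on_hours_per_day * 365
    idle_h = HOURS_PER_YEAR - on_h
    return (on_h * on_watts + idle_h * idle_watts) / 1000.0

# With a 65 W active / 4 W idle desktop and the 7.3 h/d usage reported
# above, this lands near the ~194 kWh/yr mean quoted in the abstract.
print(f"desktop AEC ~ {annual_energy_kwh(7.3, 65, 4):.0f} kWh/yr")
```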

  9. Plastic collapse and energy absorption of circular filled tubes under quasi-static loads by computational analysis

    Energy Technology Data Exchange (ETDEWEB)

    Beng, Yeo Kiam; Tzeng, Woo Wen [Universiti Malaysia Sabah, Sabah (Malaysia)

    2017-02-15

This study presents a finite element analysis of the plastic collapse and energy absorption of polyurethane-filled aluminium circular tubes under quasi-static transverse loading. Increasing focus has been given to impact damage of structures, where the energy absorbed during impact can be controlled to avoid total collapse of energy absorbers and devices designed to dissipate energy. The ABAQUS finite element analysis application was used to model and simulate the polyurethane-filled aluminium tubes, with different sets of diameter-to-thickness ratios and span lengths, subjected to a transverse three-point bending load. The failure modes and mechanisms of the filled tubes, and their capability as energy absorbers to further improve and strengthen the empty tube, were also identified. The results showed that the plastic deformation response was affected by the geometric constraints and parameters of the specimens. The diameter-to-thickness ratio and span length were shown to play a crucial role in optimizing the PU-filled tube as an energy absorber.

  10. Evaluation of reinitialization-free nonvolatile computer systems for energy-harvesting Internet of things applications

    Science.gov (United States)

    Onizawa, Naoya; Tamakoshi, Akira; Hanyu, Takahiro

    2017-08-01

In this paper, reinitialization-free nonvolatile computer systems are designed and evaluated for energy-harvesting Internet of Things (IoT) applications. In energy-harvesting applications, power supplies generated from renewable sources cause frequent power failures, so the data being processed need to be backed up whenever power fails. Unless data are safely backed up before the power supply diminishes, reinitialization is required when power is restored, which results in low energy efficiency and slow operation. Using nonvolatile devices in processors and memories enables a faster backup than a conventional volatile computer system, leading to higher energy efficiency. To evaluate the energy efficiency under frequent power failures, typical computer systems including processors and memories are designed using 90 nm CMOS or CMOS/magnetic tunnel junction (MTJ) technologies. Nonvolatile ARM Cortex-M0 processors with 4 kB MRAMs are evaluated using a typical computing benchmark program, Dhrystone, which shows energy reductions of a few orders of magnitude in comparison with a volatile processor with SRAM.

  11. The implementation of CP1 computer code in the Honeywell Bull computer in Brazilian Nuclear Energy Commission (CNEN)

    International Nuclear Information System (INIS)

    Couto, R.T.

    1987-01-01

The implementation of the CP1 computer code on the Honeywell Bull computer at the Brazilian Nuclear Energy Commission is presented. CP1 is a computer code used to solve the point kinetics equations with Doppler feedback from the system temperature variation, based on Newton's cooling equation. [pt]
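
    For orientation, the model class named here (point kinetics with Doppler feedback plus a Newton cooling law) can be sketched in a few lines; everything below, from the single delayed-neutron group to every coefficient, is an illustrative assumption rather than CP1's actual formulation:

```python
# Hedged sketch: point kinetics with one delayed-neutron group, Doppler
# reactivity feedback, and Newton cooling of the fuel temperature,
# integrated with explicit Euler. All parameter values are illustrative.

beta, lam, Lambda = 0.0065, 0.08, 1e-4   # delayed fraction, decay const (1/s), gen. time (s)
alpha_D, T0 = -2e-5, 300.0               # Doppler coefficient (1/K), reference temp (K)
kP, h, Cp = 50.0, 0.1, 500.0             # power per unit n (W), cooling const (1/s), heat cap. (J/K)

n, c, T = 1.0, beta / (lam * Lambda), T0  # start from the zero-reactivity equilibrium
rho_ext, dt = 0.001, 1e-4                 # external reactivity step, time step (s)

for _ in range(int(5.0 / dt)):            # integrate 5 s
    rho = rho_ext + alpha_D * (T - T0)    # net reactivity including Doppler feedback
    dn = ((rho - beta) / Lambda) * n + lam * c
    dc = (beta / Lambda) * n - lam * c
    dT = kP * n / Cp - h * (T - T0)       # Newton cooling toward the coolant at T0
    n, c, T = n + dt * dn, c + dt * dc, T + dt * dT

print(f"n = {n:.3f}, T = {T:.1f} K after 5 s")
```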

  12. Large Scale Computing and Storage Requirements for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years.

  13. Science panel to study mega-computers to assess potential energy contributions

    CERN Multimedia

    Jones, D

    2003-01-01

    "Energy Department advisers plan to examine high-end computing in the coming year and assess how computing power could be used to further DOE's basic research agenda on combustion, fusion and other topics" (1 page).

  14. Wind energy analysis system

    OpenAIRE

    2014-01-01

M.Ing. (Electrical & Electronic Engineering) One of the most important steps to be taken before a site is selected for the extraction of wind energy is the analysis of the energy within the wind on that particular site. No wind energy analysis system exists for the measurement and analysis of wind power. This dissertation documents the design and development of a Wind Energy Analysis System (WEAS). Using a micro-controller based design in conjunction with sensors, WEAS measures, calcu...

  15. Building energy analysis tool

    Science.gov (United States)

    Brackney, Larry; Parker, Andrew; Long, Nicholas; Metzger, Ian; Dean, Jesse; Lisell, Lars

    2016-04-12

    A building energy analysis system includes a building component library configured to store a plurality of building components, a modeling tool configured to access the building component library and create a building model of a building under analysis using building spatial data and using selected building components of the plurality of building components stored in the building component library, a building analysis engine configured to operate the building model and generate a baseline energy model of the building under analysis and further configured to apply one or more energy conservation measures to the baseline energy model in order to generate one or more corresponding optimized energy models, and a recommendation tool configured to assess the one or more optimized energy models against the baseline energy model and generate recommendations for substitute building components or modifications.

  16. ASAS: Computational code for Analysis and Simulation of Atomic Spectra

    Directory of Open Access Journals (Sweden)

    Jhonatha R. dos Santos

    2017-01-01

The laser isotopic separation process is based on the selective photoionization principle and, because of this, it is necessary to know the absorption spectrum of the desired atom. Computational resources have become indispensable for the planning of experiments and the analysis of acquired data. The ASAS (Analysis and Simulation of Atomic Spectra) software presented here is a helpful tool for studies involving atomic spectroscopy. The input for the simulations is user-friendly and essentially requires a database containing the energy levels and spectral lines of the atoms to be studied.

  17. Development of a computer code for thermohydraulic analysis of a heated channel in transients

    International Nuclear Information System (INIS)

    Jafari, J.; Kazeminejad, H.; Davilu, H.

    2004-01-01

This paper discusses the thermohydraulic analysis of a heated channel of a nuclear reactor in transients by a computer code that has been developed by the author. The considered geometry is a channel of a nuclear reactor with cylindrical or planar fuel rods. The coolant is water and flows along the outer surface of the fuel rod. To model the heat transfer in the fuel rod, the two-dimensional time-dependent conduction equation has been solved by a combination of numerical methods: the orthogonal collocation method in the radial direction and the finite difference method in the axial direction. For coolant modelling, the single-phase time-dependent energy equation has been used and solved by the finite difference method. The combination of the first module, which solves the conduction in the fuel rod, and a second one, which solves the energy balance in the coolant region, constitutes the computer code (Thyc-1) for thermohydraulic analysis of a heated channel in transients. The orthogonal collocation method maintains the accuracy and computing time of conventional finite difference methods, while the computer storage is reduced by a factor of two. The same problem has been modelled with the RELAP5/M3 system code to assess the validity of the Thyc-1 code. The good agreement of the results qualifies the developed code.
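
    To make the coupling concrete, here is a much-reduced sketch of the same structure: a lumped fuel node per axial position (standing in for the two-dimensional conduction solve) coupled to the single-phase coolant energy balance, both advanced with explicit finite differences. All geometry, properties, and power levels are illustrative assumptions, not Thyc-1's:

```python
import math

NZ, dt = 20, 0.005                  # axial nodes, time step (s)
mdot, cp, T_in = 0.3, 4200.0, 560.0 # coolant flow (kg/s), heat capacity (J/kg/K), inlet T (K)
m_node = 0.1                        # coolant mass per axial node (kg)
hA, C_fuel = 2000.0, 5000.0         # rod-to-coolant conductance (W/K), fuel heat capacity (J/K)

# chopped-sine axial power profile, W per node
q = [5e3 * math.sin(math.pi * (i + 0.5) / NZ) for i in range(NZ)]
Tf, Tc = [600.0] * NZ, [T_in] * NZ  # initial fuel and coolant temperatures (K)

for _ in range(2000):               # 10 s transient, explicit Euler
    Tc_up = [T_in] + Tc[:-1]        # upstream coolant temperature for each node
    # fuel node: generated power minus heat transferred to the coolant
    Tf = [tf + dt * (qi - hA * (tf - tc)) / C_fuel
          for tf, tc, qi in zip(Tf, Tc, q)]
    # coolant node: advection from upstream plus heat picked up from the rod
    Tc = [tc + dt * (mdot * cp * (tu - tc) + hA * (Tf[i] - tc)) / (m_node * cp)
          for i, (tc, tu) in enumerate(zip(Tc, Tc_up))]

print(f"outlet coolant T = {Tc[-1]:.1f} K, peak fuel T = {max(Tf):.1f} K")
```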

  18. Comparative analysis of bone mineral contents with dual-energy quantitative computed tomography

    International Nuclear Information System (INIS)

    Choi, T. J.; Yoon, S. M.; Kim, O. B.; Lee, S. M.; Suh, S. J.

    1997-01-01

Dual-energy quantitative computed tomography (DEQCT) was compared with a bone-equivalent K2HPO4 standard solution and the ash weight of animal cadaveric trabecular bone in the measurement of bone mineral content (BMC). The attenuation coefficient of tissues depends strongly on the radiation energy and on the density and effective atomic number of the composition. The bone mineral content from DEQCT in these experiments was determined from empirical constants and the mass attenuation coefficients of bone, fat, and soft-tissue-equivalent solutions in two photon spectra. In these experiments, the BMC from DEQCT with 80 and 120 kVp X-rays was compared to the ash weight of animal trabecular bone. We obtained mass attenuation coefficients of 0.2409, 0.5608, and 0.2206 cm²/g at 80 kVp, and 0.2046, 0.3273, and 0.1971 cm²/g at 120 kVp, for water, bone, and fat equivalent materials, respectively. The BMC from DEQCT, computed with the empirical constants K1=0.3232 and K2=0.2450 and these mass attenuation coefficients, agreed very closely with the ash weight of animal trabecular bone. The BMC of the empirical DEQCT and that of the manufacturer's DEQCT were correlated with ash weight with correlation coefficients r=0.998 and r=0.996, respectively. (author)
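
    The two-spectrum measurement reduces to a 2x2 linear system: each measured attenuation is modeled as a mix of bone-mineral and soft-tissue contributions, with the coefficient matrix built from the mass attenuation coefficients quoted above. A minimal sketch (the empirical constants K1 and K2 would enter as an additional calibration, omitted here, and the input measurements are made up):

```python
# Mass attenuation coefficients (cm^2/g) quoted in the record:
# columns = (bone, water as soft-tissue equivalent), rows = 80 / 120 kVp.
MU_80  = (0.5608, 0.2409)
MU_120 = (0.3273, 0.2046)

def decompose(att_80, att_120):
    """Solve att_E = mu_bone(E)*m_bone + mu_water(E)*m_water for the two
    area densities (g/cm^2) from measurements at the two tube voltages."""
    a, b = MU_80
    c, d = MU_120
    det = a * d - b * c
    m_bone  = (d * att_80 - b * att_120) / det
    m_water = (a * att_120 - c * att_80) / det
    return m_bone, m_water

# Illustrative (made-up) attenuation measurements:
m_bone, m_water = decompose(att_80=0.30, att_120=0.25)
print(f"bone mineral: {m_bone:.3f} g/cm^2, soft tissue: {m_water:.3f} g/cm^2")
```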

  19. Analysis of computer programming languages

    International Nuclear Information System (INIS)

    Risset, Claude Alain

    1967-01-01

This research thesis aims to identify methods of syntax analysis which can be used for computer programming languages, while setting aside the computing hardware that influences the choice of programming language and of the methods of analysis and compilation. In the first part, the author proposes attempts at formalization of Chomsky grammar languages. In the second part, he studies analytical grammars, and then a compiler or analytic grammar for the Fortran language.

  20. Department of Energy Mathematical, Information, and Computational Sciences Division: High Performance Computing and Communications Program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-11-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, The DOE Program in HPCC), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW).

  1. An integrated economic and distributional analysis of energy policies

    Energy Technology Data Exchange (ETDEWEB)

    Labandeira, Xavier [Facultade de CC. Economicas, University of Vigo, 36310 Vigo (Spain); Labeaga, Jose M. [Instituto de Estudios Fiscales, Avda. Cardenal Herrera Oria 378, 28035 Madrid (Spain); Rodriguez, Miguel [Facultade de CC. Empresariais e Turismo, University of Vigo, 32004 Ourense (Spain)

    2009-12-15

Most public policies, particularly those in the energy sphere, have not only efficiency but also distributional effects. However, there is a trade-off between the modelling approaches suited to calculating each kind of impact: for efficiency, most studies have been conducted with general equilibrium models, whereas partial equilibrium models represent the main approach for distributional analysis. This paper proposes a methodology to simultaneously carry out an analysis of the distributional and efficiency consequences of changes in energy taxation. To do so, we have integrated a microeconomic household demand model and a computable general equilibrium model for the Spanish economy. We illustrate the advantages of this approach by simulating a revenue-neutral reform of Spanish indirect taxation, with a large increase in energy taxes that serve an environmental purpose. The results show that the reforms bring about significant efficiency and distributional effects, in some cases counterintuitive, and demonstrate the academic and social utility of this approach. (author)

  2. An integrated economic and distributional analysis of energy policies

    International Nuclear Information System (INIS)

    Labandeira, Xavier; Labeaga, Jose M.; Rodriguez, Miguel

    2009-01-01

Most public policies, particularly those in the energy sphere, have not only efficiency but also distributional effects. However, there is a trade-off between the modelling approaches suited to calculating each kind of impact: for efficiency, most studies have been conducted with general equilibrium models, whereas partial equilibrium models represent the main approach for distributional analysis. This paper proposes a methodology to simultaneously carry out an analysis of the distributional and efficiency consequences of changes in energy taxation. To do so, we have integrated a microeconomic household demand model and a computable general equilibrium model for the Spanish economy. We illustrate the advantages of this approach by simulating a revenue-neutral reform of Spanish indirect taxation, with a large increase in energy taxes that serve an environmental purpose. The results show that the reforms bring about significant efficiency and distributional effects, in some cases counterintuitive, and demonstrate the academic and social utility of this approach. (author)

  3. Soft computing based on hierarchical evaluation approach and criteria interdependencies for energy decision-making problems: A case study

    International Nuclear Information System (INIS)

    Gitinavard, Hossein; Mousavi, S. Meysam; Vahdani, Behnam

    2017-01-01

In numerous real-world energy decision problems, decision makers encounter complex environments in which imprecise data and uncertain information must be handled to reach an appropriate decision. In this paper, a new soft computing group decision-making approach is introduced, based on a novel compromise ranking method and interval-valued hesitant fuzzy sets (IVHFSs), for energy decision-making problems under multiple criteria. In the proposed approach, the assessment information is provided by energy experts or decision makers as interval-valued hesitant fuzzy elements under incomplete criteria weights. A new ranking index based on the interval-valued hesitant fuzzy Hamming distance measure is presented to prioritize energy candidates, and criteria weights are computed with an extended maximizing deviation method that takes into account the experts' judgments about the relative importance of each criterion. Also, a decision making trial and evaluation laboratory (DEMATEL) method is extended to the IVHF environment to compute the interdependencies between and within the selected criteria in the hierarchical structure. To demonstrate the applicability of the presented approach, a case study and a practical example are provided concerning hierarchical structure and criteria interdependencies for renewable energy and energy policy selection problems. The computational results are compared with a fuzzy decision-making method from the recent literature on several comparison parameters to show the advantages and constraints of the proposed approach. Finally, a sensitivity analysis indicates the effect of different criteria weights on the ranking results, showing the robustness or sensitivity of the proposed soft computing approach with respect to the relative importance of criteria. - Highlights: • Introducing a novel interval-valued hesitant fuzzy compromise ranking method. • Presenting

  4. Impact of energy conservation policy measures on innovation, investment and long-term development of the Swiss economy. Results from the computable induced technical change and energy (CITE) model - Final report

    Energy Technology Data Exchange (ETDEWEB)

    Bretschger, L.; Ramer, R.; Schwark, F.

    2010-09-15

This comprehensive final report for the Swiss Federal Office of Energy (SFOE) presents the results of a study made with the Computable Induced Technical Change and Energy (CITE) model. The authors note that, in the past two centuries, the Swiss economy experienced an unprecedented increase in living standards. At the same time, the stock of various natural resources declined and the environmental conditions changed substantially. The evaluation of the sustainability of a low-energy and low-carbon society, as well as an optimum transition to this state, is discussed. An economic analysis is made, and the CITE and CGE (computable general equilibrium) numerical simulation models are discussed. The results obtained are presented and discussed.

  5. Energy efficiency of computer power supply units - Final report

    Energy Technology Data Exchange (ETDEWEB)

    Aebischer, B. [cepe - Centre for Energy Policy and Economics, Swiss Federal Institute of Technology Zuerich, Zuerich (Switzerland); Huser, H. [Encontrol GmbH, Niederrohrdorf (Switzerland)

    2002-11-15

    This final report for the Swiss Federal Office of Energy (SFOE) takes a look at the efficiency of computer power supply units, which decreases rapidly during average computer use. The background and the purpose of the project are examined. The power supplies for personal computers are discussed and the testing arrangement used is described. Efficiency, power-factor and operating points of the units are examined. Potentials for improvement and measures to be taken are discussed. Also, action to be taken by those involved in the design and operation of such power units is proposed. Finally, recommendations for further work are made.

  6. Energy conservation in ICT-businesses. Green computing in the USA; Energiereductie topprioriteit ICT-bedrijven. Green computing is hot in de USA

    Energy Technology Data Exchange (ETDEWEB)

    Hulsebos, M.

    2007-09-15

A brief overview of the initiatives in ICT-businesses in the USA to save energy, also known as 'green computing'.

  7. Process energy analysis

    International Nuclear Information System (INIS)

    Kaiser, V.

    1993-01-01

In Chapter 2, process energy cost analysis for chemical processing is treated in a general way, independent of the specific form of energy and power production. In particular, energy data collection and data treatment, energy accounting (metering, balance setting), specific energy input, and utility energy costs and prices are discussed. (R.P.) 14 refs., 4 figs., 16 tabs

  8. Quantum computing applied to calculations of molecular energies

    Czech Academy of Sciences Publication Activity Database

    Pittner, Jiří; Veis, L.

    2011-01-01

Roč. 241 (2011), 151-phys. ISSN 0065-7727. [National Meeting and Exposition of the American Chemical Society (ACS) /241./, Anaheim, 27.03.2011-31.03.2011.] Institutional research plan: CEZ:AV0Z40400503. Keywords: molecular energies * quantum computers. Subject RIV: CF - Physical; Theoretical Chemistry

  9. Applied & Computational MathematicsChallenges for the Design and Control of Dynamic Energy Systems

    Energy Technology Data Exchange (ETDEWEB)

    Brown, D L; Burns, J A; Collis, S; Grosh, J; Jacobson, C A; Johansen, H; Mezic, I; Narayanan, S; Wetter, M

    2011-03-10

The Energy Independence and Security Act of 2007 (EISA) was passed with the goal 'to move the United States toward greater energy independence and security.' Energy security and independence cannot be achieved unless the United States addresses the issue of energy consumption in the building sector and significantly reduces energy consumption in buildings. Commercial and residential buildings account for approximately 40% of U.S. energy consumption and emit 50% of CO2 emissions in the U.S., which is more than twice the total energy consumption of the entire U.S. automobile and light truck fleet. A 50%-80% improvement in building energy efficiency in both new construction and in retrofitting existing buildings could significantly reduce U.S. energy consumption and mitigate climate change. Reaching these aggressive building efficiency goals will not happen without significant Federal investments in areas of computational and mathematical sciences. Applied and computational mathematics are required to enable the development of algorithms and tools to design, control and optimize energy efficient buildings. The challenge has been issued by the U.S. Secretary of Energy, Dr. Steven Chu (emphasis added): 'We need to do more transformational research at DOE including computer design tools for commercial and residential buildings that enable reductions in energy consumption of up to 80 percent with investments that will pay for themselves in less than 10 years.' On July 8-9, 2010 a team of technical experts from industry, government and academia was assembled in Arlington, Virginia to identify the challenges associated with developing and deploying new computational methodologies and tools that will address building energy efficiency. These experts concluded that investments in fundamental applied and computational mathematics will be required to build enabling technology that can be used to realize the target of 80% reductions in energy

  10. Computational methods for analyzing the transmission characteristics of a beta particle magnetic analysis system

    Science.gov (United States)

    Singh, J. J.

    1979-01-01

Computational methods were developed to study the trajectories of beta particles (positrons) through a magnetic analysis system as a function of the spatial distribution of the radionuclides in the beta source, the size and shape of the source collimator, and the strength of the analyzer magnetic field. On the basis of these methods, the particle flux, energy spectrum, and source-to-target transit times have been calculated for Na-22 positrons as a function of the analyzer magnetic field and the size and location of the target. These data are used in studies requiring parallel beams of positrons of uniform energy, such as measurement of the moisture distribution in composite materials. Computer programs for obtaining the various trajectories are included.
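
    The basic relation behind such trajectory calculations is the gyroradius r = p/(qB) of a charged particle in a magnetic field. A small sketch (the 0.546 MeV endpoint of the Na-22 positron spectrum is a standard physical value; the field strength and kinetic energy below are arbitrary illustrations):

```python
# Sketch of the bending-radius relation underlying the trajectory codes:
# a positron of kinetic energy E moves on a circle of radius r = p/(qB).
# Na-22 positrons have an endpoint energy of ~0.546 MeV.

import math

M_E = 0.511               # electron rest energy, MeV
C = 299792458.0           # speed of light, m/s
Q_E = 1.602176634e-19     # elementary charge, C

def momentum_mev_c(kinetic_mev):
    """Relativistic momentum (MeV/c) from kinetic energy."""
    return math.sqrt(kinetic_mev**2 + 2.0 * kinetic_mev * M_E)

def gyroradius_m(kinetic_mev, b_tesla):
    """Bending radius in metres: r = p / (q B), with p converted to SI."""
    p_si = momentum_mev_c(kinetic_mev) * 1e6 * Q_E / C  # kg m/s
    return p_si / (Q_E * b_tesla)

print(f"r = {gyroradius_m(0.3, 0.05):.3f} m at 0.3 MeV in a 0.05 T field")
```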

  11. Solving difficult problems creatively: A role for energy optimised deterministic/stochastic hybrid computing

    Directory of Open Access Journals (Sweden)

Tim Palmer

    2015-10-01

How is the brain configured for creativity? What is the computational substrate for ‘eureka’ moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.

  12. Solving difficult problems creatively: a role for energy optimised deterministic/stochastic hybrid computing.

    Science.gov (United States)

    Palmer, Tim N; O'Shea, Michael

    2015-01-01

    How is the brain configured for creativity? What is the computational substrate for 'eureka' moments of insight? Here we argue that creative thinking arises ultimately from a synergy between low-energy stochastic and energy-intensive deterministic processing, and is a by-product of a nervous system whose signal-processing capability per unit of available energy has become highly energy optimised. We suggest that the stochastic component has its origin in thermal (ultimately quantum decoherent) noise affecting the activity of neurons. Without this component, deterministic computational models of the brain are incomplete.

  13. Computer code MLCOSP for multiple-correlation and spectrum analysis with a hybrid computer

    International Nuclear Information System (INIS)

    Oguma, Ritsuo; Fujii, Yoshio; Usui, Hozumi; Watanabe, Koichi

    1975-10-01

The usage of the computer code MLCOSP (Multiple Correlation and Spectrum), developed for the hybrid computer installed at JAERI, is described. Functions of the hybrid computer and its terminal devices are used ingeniously in the code to reduce the complexity of the data handling that occurs in analysis of multivariable experimental data and to perform the analysis in perspective. Features of the code are as follows: experimental data can be fed to the digital computer through the analog part of the hybrid computer by connecting a data recorder; the computed results are displayed in figures, and hardcopies are taken when necessary; series messages from the code are shown on the terminal, so man-machine communication is possible; and, further, data can be entered through a keyboard, so case studies based on the results of the analysis are possible. (auth.)
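
    As a modern point of reference, the core correlation-and-spectrum computation can be expressed in a few lines of NumPy; the segment count, windowing, and all names below are illustrative choices, not MLCOSP's actual interface:

```python
# Cross-spectral density of two signals with segment averaging (Welch-style),
# the kind of multiple-correlation/spectrum estimate the code computes.

import numpy as np

def cross_spectrum(x, y, fs, nseg=8):
    """Averaged cross-spectral density of x and y sampled at fs (Hz)."""
    n = len(x) // nseg
    acc = np.zeros(n // 2 + 1, dtype=complex)
    for k in range(nseg):
        xs = x[k * n:(k + 1) * n] * np.hanning(n)  # windowed segment of x
        ys = y[k * n:(k + 1) * n] * np.hanning(n)  # windowed segment of y
        acc += np.conj(np.fft.rfft(xs)) * np.fft.rfft(ys)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, acc / nseg

rng = np.random.default_rng(0)
t = np.arange(4096) / 100.0                        # 100 Hz sampling
x = np.sin(2 * np.pi * 5 * t) + rng.normal(size=t.size)
y = np.sin(2 * np.pi * 5 * t + 0.7) + rng.normal(size=t.size)
freqs, sxy = cross_spectrum(x, y, fs=100.0)
print(f"peak coupling near {freqs[np.argmax(np.abs(sxy))]:.1f} Hz")
```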

  14. Nuclear Computational Low Energy Initiative (NUCLEI)

    Energy Technology Data Exchange (ETDEWEB)

    Reddy, Sanjay K. [University of Washington

    2017-08-14

This is the final report for the University of Washington for the NUCLEI SciDAC-3 project. The NUCLEI project, as defined by the scope of work, will develop, implement, and run codes for large-scale computations of many topics in low-energy nuclear physics. The physics to be studied includes the properties of nuclei and nuclear decays, nuclear structure and reactions, and the properties of nuclear matter. The computational techniques to be used include Quantum Monte Carlo, Configuration Interaction, Coupled Cluster, and Density Functional methods. The research program will emphasize areas of high interest to current and possible future DOE nuclear physics facilities, including ATLAS and FRIB (nuclear structure and reactions, and nuclear astrophysics), TJNAF (neutron distributions in nuclei, few-body systems, and electroweak processes), NIF (thermonuclear reactions), MAJORANA and FNPB (neutrino-less double-beta decay and physics beyond the Standard Model), and LANSCE (fission studies).

  15. Error Mitigation in Computational Design of Sustainable Energy Materials

    DEFF Research Database (Denmark)

    Christensen, Rune

... by individual C=O bonds. Energy corrections applied to C=O bonds significantly reduce systematic errors and can be extended to adsorbates. A similar study is performed for intermediates in the oxygen evolution and oxygen reduction reactions. An identified systematic error on peroxide bonds is found to also be present in the OOH* adsorbate. However, the systematic error will almost be canceled by inclusion of the van der Waals energy. The energy difference between key adsorbates is thus similar to that previously found. Finally, a method is developed for error estimation in computationally inexpensive neural...

  16. Dual-Energy Computed Tomography: Image Acquisition, Processing, and Workflow.

    Science.gov (United States)

    Megibow, Alec J; Kambadakone, Avinash; Ananthakrishnan, Lakshmi

    2018-07-01

Dual-energy computed tomography has been available for more than 10 years; however, it is currently on the cusp of widespread clinical use. The way dual-energy data are acquired and assembled must be appreciated at the clinical level so that the various reconstruction types can extend its diagnostic power. The type of scanner present in a given practice dictates the way in which the dual-energy data can be presented and used. This article compares and contrasts how dual-source, rapid kV-switching, and spectral technologies acquire and present dual-energy reconstructions to practicing radiologists.

  17. ASTEC: Controls analysis for personal computers

    Science.gov (United States)

    Downing, John P.; Bauer, Frank H.; Thorpe, Christopher J.

    1989-01-01

    The ASTEC (Analysis and Simulation Tools for Engineering Controls) software is under development at Goddard Space Flight Center (GSFC). The design goal is to provide a wide selection of controls analysis tools at the personal computer level, as well as the capability to upload compute-intensive jobs to a mainframe or supercomputer. The project is a follow-on to the INCA (INteractive Controls Analysis) program that has been developed at GSFC over the past five years. While ASTEC makes use of the algorithms and expertise developed for the INCA program, the user interface was redesigned to take advantage of the capabilities of the personal computer. The design philosophy and the current capabilities of the ASTEC software are described.

  18. Energy-1: a computer code for thermohydraulic analysis of a LMBFR rod bundles, in a mixed convection regime

    International Nuclear Information System (INIS)

    Braz Filho, F.A.

    1987-01-01

A code was developed in which the velocity, temperature, and pressure distributions are calculated, using the porous body model, for a rod bundle in which the mixed convection regime plays an important role. Results show satisfactory agreement with experimental data, as well as a reduction in computational time when compared to the ENERGY-III code. (author) [pt]

  19. Distributed computing and nuclear reactor analysis

    International Nuclear Information System (INIS)

    Brown, F.B.; Derstine, K.L.; Blomquist, R.N.

    1994-01-01

    Large-scale scientific and engineering calculations for nuclear reactor analysis can now be carried out effectively in a distributed computing environment, at costs far lower than for traditional mainframes. The distributed computing environment must include support for traditional system services, such as a queuing system for batch work, reliable filesystem backups, and parallel processing capabilities for large jobs. All ANL computer codes for reactor analysis have been adapted successfully to a distributed system based on workstations and X-terminals. Distributed parallel processing has been demonstrated to be effective for long-running Monte Carlo calculations

  20. Introduction to massively-parallel computing in high-energy physics

    CERN Document Server

    AUTHOR|(CDS)2083520

    1993-01-01

Ever since computers were first used for scientific and numerical work, there has existed an "arms race" between the technical development of faster computing hardware and the desires of scientists to solve larger problems in shorter time-scales. However, the vast leaps in processor performance achieved through advances in semiconductor science have reached a hiatus as the technology comes up against the physical limits of the speed of light and quantum effects. This has led all high performance computer manufacturers to turn towards a parallel architecture for their new machines. In these lectures we will introduce the history and concepts behind parallel computing, and review the various parallel architectures and software environments currently available. We will then introduce programming methodologies that allow efficient exploitation of parallel machines, and present case studies of the parallelization of typical High Energy Physics codes for the two main classes of parallel computing architecture (S...

  1. A Computer Program for Modeling the Conversion of Organic Waste to Energy

    Directory of Open Access Journals (Sweden)

    Pragasen Pillay

    2011-11-01

This paper presents a tool for the analysis of the conversion of organic waste into energy. The tool is a program that uses waste characterization parameters and mass flow rates at each stage of the waste treatment process to predict the given products. The specific waste treatment process analysed in this paper is anaerobic digestion. The waste treatment stages of the anaerobic digestion process are: conditioning of input waste, secondary treatment, drying of sludge, conditioning of digestate, treatment of digestate, storage of liquid and solid effluent, disposal of liquid and solid effluents, and purification, utilization and storage of combustible gas. The program uses mass balance equations to compute the amounts of CH4, NH3, CO2 and H2S produced from anaerobic digestion of organic waste, and hence the energy available. Case studies are also presented.
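
    A common form for such mass balances is the Boyle extension of the Buswell equation, which converts an empirical substrate formula CaHbOcNdSe into stoichiometric yields of CH4, CO2, NH3 and H2S; whether the paper uses exactly this form is an assumption. A sketch:

```python
# Boyle-extended Buswell stoichiometry (an assumed stand-in for the
# paper's mass balance equations, which are not reproduced in the record).

def buswell_boyle(a, b, c, d, e):
    """Moles of products per mole of substrate C_a H_b O_c N_d S_e."""
    ch4 = a / 2 + b / 8 - c / 4 - 3 * d / 8 - e / 4
    co2 = a / 2 - b / 8 + c / 4 + 3 * d / 8 + e / 4
    return {"CH4": ch4, "CO2": co2, "NH3": d, "H2S": e}

def methane_energy_mj(moles_ch4):
    """Chemical energy of the methane yield (LHV of CH4 ~ 0.802 MJ/mol)."""
    return 0.802 * moles_ch4

# Glucose, C6H12O6, digests to 3 CH4 + 3 CO2:
yields = buswell_boyle(6, 12, 6, 0, 0)
print(yields, f"-> {methane_energy_mj(yields['CH4']):.2f} MJ per mol substrate")
```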

  2. Multi-attribute criteria applied to electric generation energy system analysis LDRD.

    Energy Technology Data Exchange (ETDEWEB)

    Kuswa, Glenn W.; Tsao, Jeffrey Yeenien; Drennen, Thomas E.; Zuffranieri, Jason V.; Paananen, Orman Henrie; Jones, Scott A.; Ortner, Juergen G. (DLR, German Aerospace, Cologne); Brewer, Jeffrey D.; Valdez, Maximo M.

    2005-10-01

    This report began with a Laboratory-Directed Research and Development (LDRD) project to improve Sandia National Laboratories multidisciplinary capabilities in energy systems analysis. The aim is to understand how various electricity generating options can best serve needs in the United States. The initial product is documented in a series of white papers that span a broad range of topics, including the successes and failures of past modeling studies, sustainability, oil dependence, energy security, and nuclear power. Summaries of these projects are included here. These projects have provided a background and discussion framework for the Energy Systems Analysis LDRD team to carry out an inter-comparison of many of the commonly available electric power sources in present use, comparisons of those options, and efforts needed to realize progress towards those options. A computer aid has been developed to compare various options based on cost and other attributes such as technological, social, and policy constraints. The Energy Systems Analysis team has developed a multi-criteria framework that will allow comparison of energy options with a set of metrics that can be used across all technologies. This report discusses several evaluation techniques and introduces the set of criteria developed for this LDRD.

  3. Methodology and computational framework used for the US Department of Energy Environmental Restoration and Waste Management Programmatic Environmental Impact Statement accident analysis

    International Nuclear Information System (INIS)

    Mueller, C.; Roglans-Ribas, J.; Folga, S.; Huttenga, A.; Jackson, R.; TenBrook, W.; Russell, J.

    1994-01-01

    A methodology, computational framework, and integrated PC-based database have been developed to assess the risks of facility accidents in support of the US Department of Energy (DOE) Environmental Restoration and Waste Management Programmatic Environmental Impact Statement. The methodology includes the following interrelated elements: (1) screening of storage and treatment processes and related waste inventories to determine risk-dominant facilities across the DOE complex, (2) development and frequency estimation of the risk-dominant sequences of accidents, and (3) determination of the evolution of and final compositions of radiological or chemically hazardous source terms predicted to be released as a function of the storage inventory or treatment process throughput. The computational framework automates these elements to provide source term input for the second part of the analysis which includes (1) development or integration of existing site-specific demographics and meteorological data and calculation of attendant unit-risk factors and (2) assessment of the radiological or toxicological consequences of accident releases to the general public and to the occupational work force

  4. Analysis of a Model for Computer Virus Transmission

    Directory of Open Access Journals (Sweden)

    Peng Qin

    2015-01-01

Computer viruses remain a significant threat to computer networks. In this paper, the addition of new computers to the network and the removal of old computers from the network are considered, and the computers on the network are assumed to be equipped with antivirus software. A computer virus model is established. Through analysis of the model, the disease-free and endemic equilibrium points are calculated, and the stability conditions of the equilibria are derived. To illustrate the theoretical analysis, some numerical simulations are also included. The results provide a theoretical basis for controlling the spread of computer viruses.
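
    A minimal sketch of the class of model analyzed here: an SIR-type compartment model with recruitment of new computers (rate b) and removal of old ones (rate mu), integrated with explicit Euler. All parameter values are illustrative, not the paper's:

```python
# Susceptible / Infected / Recovered compartments on a network with
# inflow of new machines and retirement of old ones.

b, mu = 0.05, 0.05      # inflow of new computers, removal rate
beta, gamma = 0.4, 0.1  # infection rate, cure rate (antivirus)

S, I, R = 0.95, 0.05, 0.0
dt = 0.01
for _ in range(100000):  # integrate long enough to approach equilibrium
    dS = b - beta * S * I - mu * S
    dI = beta * S * I - (gamma + mu) * I
    dR = gamma * I - mu * R
    S, I, R = S + dt * dS, I + dt * dI, R + dt * dR

# The basic reproduction number decides which equilibrium is reached:
# R0 < 1 -> virus-free, R0 > 1 -> endemic.
R0 = beta * (b / mu) / (gamma + mu)
print(f"R0 = {R0:.2f}, endemic I = {I:.3f}")
```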

  5. Commercial Building Energy Saver: An energy retrofit analysis toolkit

    International Nuclear Information System (INIS)

    Hong, Tianzhen; Piette, Mary Ann; Chen, Yixing; Lee, Sang Hoon; Taylor-Lange, Sarah C.; Zhang, Rongpeng; Sun, Kaiyu; Price, Phillip

    2015-01-01

    Highlights: • Commercial Building Energy Saver is a powerful toolkit for energy retrofit analysis. • CBES provides benchmarking, load shape analysis, and model-based retrofit assessment. • CBES covers 7 building types, 6 vintages, 16 climates, and 100 energy measures. • CBES includes a web app, API, and a database of energy efficiency performance. • CBES API can be extended and integrated with third party energy software tools. - Abstract: Small commercial buildings in the United States consume 47% of the total primary energy of the buildings sector. Retrofitting small and medium commercial buildings poses a huge challenge for owners because they usually lack the expertise and resources to identify and evaluate cost-effective energy retrofit strategies. This paper presents the Commercial Building Energy Saver (CBES), an energy retrofit analysis toolkit, which calculates the energy use of a building, identifies and evaluates retrofit measures in terms of energy savings, energy cost savings and payback. The CBES Toolkit includes a web app (APP) for end users and the CBES Application Programming Interface (API) for integrating CBES with other energy software tools. The toolkit provides a rich set of features including: (1) Energy Benchmarking providing an Energy Star score, (2) Load Shape Analysis to identify potential building operation improvements, (3) Preliminary Retrofit Analysis which uses a custom developed pre-simulated database and, (4) Detailed Retrofit Analysis which utilizes real-time EnergyPlus simulations. CBES includes 100 configurable energy conservation measures (ECMs) that encompass IAQ, technical performance and cost data, for assessing 7 different prototype buildings in 16 climate zones in California and 6 vintages. A case study of a small office building demonstrates the use of the toolkit for retrofit analysis. The development of CBES provides a new contribution to the field by providing a straightforward and uncomplicated decision

  6. The impact of increased efficiency in the industrial use of energy: A computable general equilibrium analysis for the United Kingdom

    International Nuclear Information System (INIS)

    Allan, Grant; Hanley, Nick; McGregor, Peter; Swales, Kim; Turner, Karen

    2007-01-01

The conventional wisdom is that improving energy efficiency will lower energy use. However, there is an extensive debate in the energy economics/policy literature concerning 'rebound' effects. These occur because an improvement in energy efficiency produces a fall in the effective price of energy services. The response of the economic system to this price fall at least partially offsets the expected beneficial impact of the energy efficiency gain. In this paper we use an economy-energy-environment computable general equilibrium (CGE) model for the UK to measure the impact of a 5% across-the-board improvement in the efficiency of energy use in all production sectors. We identify rebound effects of the order of 30-50%, but no backfire (no increase in energy use). However, these results are sensitive to the assumed structure of the labour market, key production elasticities, the time period under consideration, and the mechanism through which increased government revenues are recycled back to the economy.
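
    For readers unfamiliar with the metric, a rebound figure such as the 30-50% quoted above is the share of the engineering energy saving eroded by the economy's response. A small sketch of that definition (numbers illustrative):

```python
# Rebound = 1 - (actual saving / expected saving), expressed in percent.

def rebound_pct(efficiency_gain_pct, actual_energy_change_pct):
    """efficiency_gain_pct: engineering improvement (expected saving), e.g. 5.
    actual_energy_change_pct: realized change in energy use (negative = fall)."""
    expected_saving = efficiency_gain_pct
    actual_saving = -actual_energy_change_pct
    return 100.0 * (1.0 - actual_saving / expected_saving)

# A 5% efficiency gain that only cuts energy use by 3% shows 40% rebound;
# an *increase* in energy use (backfire) would give a rebound above 100%.
print(f"{rebound_pct(5.0, -3.0):.0f}% rebound")
```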

  7. Analytical Computation of Energy-Energy Correlation at Next-to-Leading Order in QCD.

    Science.gov (United States)

    Dixon, Lance J; Luo, Ming-Xing; Shtabovenko, Vladyslav; Yang, Tong-Zhi; Zhu, Hua Xing

    2018-03-09

    The energy-energy correlation (EEC) between two detectors in e^{+}e^{-} annihilation was computed analytically at leading order in QCD almost 40 years ago, and numerically at next-to-leading order (NLO) starting in the 1980s. We present the first analytical result for the EEC at NLO, which is remarkably simple, and facilitates analytical study of the perturbative structure of the EEC. We provide the expansion of the EEC in the collinear and back-to-back regions through next-to-leading power, information which should aid resummation in these regions.
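
    For reference, the observable is conventionally defined as follows (normalization conventions vary across the literature):

```latex
\frac{d\Sigma}{d\cos\chi} \;=\; \sum_{i,j} \int d\sigma\,
  \frac{E_i\,E_j}{Q^2}\,\delta\!\left(\cos\chi - \cos\theta_{ij}\right)
```

    where the sum runs over pairs of final-state particles with energies E_i and E_j, theta_ij is the angle between them, and Q is the total center-of-mass energy.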

  8. KEYNOTE: Simulation, computation, and the Global Nuclear Energy Partnership

    Science.gov (United States)

    Reis, Victor, Dr.

    2006-01-01

    Dr. Victor Reis delivered the keynote talk at the closing session of the conference. The talk was forward looking and focused on the importance of advanced computing for large-scale nuclear energy goals such as Global Nuclear Energy Partnership (GNEP). Dr. Reis discussed the important connections of GNEP to the Scientific Discovery through Advanced Computing (SciDAC) program and the SciDAC research portfolio. In the context of GNEP, Dr. Reis talked about possible fuel leasing configurations, strategies for their implementation, and typical fuel cycle flow sheets. A major portion of the talk addressed lessons learnt from ‘Science Based Stockpile Stewardship’ and the Accelerated Strategic Computing Initiative (ASCI) initiative and how they can provide guidance for advancing GNEP and SciDAC goals. Dr. Reis’s colorful and informative presentation included international proverbs, quotes and comments, in tune with the international flavor that is part of the GNEP philosophy and plan. He concluded with a positive and motivating outlook for peaceful nuclear energy and its potential to solve global problems. An interview with Dr. Reis, addressing some of the above issues, is the cover story of Issue 2 of the SciDAC Review and available at http://www.scidacreview.org This summary of Dr. Reis’s PowerPoint presentation was prepared by Institute of Physics Publishing, the complete PowerPoint version of Dr. Reis’s talk at SciDAC 2006 is given as a multimedia attachment to this summary.

  9. Feasibility of dual-energy computed tomography in radiation therapy planning

    Science.gov (United States)

    Sheen, Heesoon; Shin, Han-Back; Cho, Sungkoo; Cho, Junsang; Han, Youngyih

    2017-12-01

In this study, the noise level, effective atomic number (Zeff), accuracy of the computed tomography (CT) number, and the CT number to relative electron density (RED) conversion curve were estimated for virtual monochromatic energy and polychromatic energy. These values were compared to the theoretically predicted values to investigate the feasibility of using dual-energy CT in routine radiation therapy planning. The accuracies of the parameters were within the range of acceptability. These results can serve as a stepping stone toward the routine use of dual-energy CT in radiotherapy planning.
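
    The CT-number-to-RED step evaluated here is typically a piecewise-linear calibration curve sampled from phantom inserts. A sketch of that interpolation, with calibration points that are illustrative rather than the study's:

```python
# Piecewise-linear HU -> relative electron density calibration.
# The (HU, RED) pairs below are illustrative placeholders.

CALIB = [(-1000, 0.00), (0, 1.00), (60, 1.06), (1500, 1.85)]

def relative_electron_density(hu):
    """Linearly interpolate the HU -> RED calibration curve."""
    pts = sorted(CALIB)
    if hu <= pts[0][0]:
        return pts[0][1]
    for (h0, r0), (h1, r1) in zip(pts, pts[1:]):
        if hu <= h1:
            return r0 + (r1 - r0) * (hu - h0) / (h1 - h0)
    return pts[-1][1]

print(f"RED at 300 HU: {relative_electron_density(300):.3f}")
```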

  10. Review of the Fusion Theory and Computing Program. Fusion Energy Sciences Advisory Committee (FESAC)

    International Nuclear Information System (INIS)

    Antonsen, Thomas M.; Berry, Lee A.; Brown, Michael R.; Dahlburg, Jill P.; Davidson, Ronald C.; Greenwald, Martin; Hegna, Chris C.; McCurdy, William; Newman, David E.; Pellegrini, Claudio; Phillips, Cynthia K.; Post, Douglass E.; Rosenbluth, Marshall N.; Sheffield, John; Simonen, Thomas C.; Van Dam, James

    2001-01-01

At the November 14-15, 2000, meeting of the Fusion Energy Sciences Advisory Committee, a Panel was set up to address questions about the Theory and Computing program posed in a charge from the Office of Fusion Energy Sciences (see Appendix A). This area of theory and computing/simulations had been considered at the FESAC Knoxville meeting of 1999 and in the deliberations of the Integrated Program Planning Activity (IPPA) in 2000. A National Research Council committee provided a detailed review of the scientific quality of the fusion energy sciences program, including theory and computing, in 2000.

  11. Complexity vs energy: theory of computation and theoretical physics

    International Nuclear Information System (INIS)

    Manin, Y I

    2014-01-01

    This paper is a survey based upon the talk at the satellite QQQ conference to ECM6, 3Quantum: Algebra Geometry Information, Tallinn, July 2012. It is dedicated to the analogy between the notions of complexity in theoretical computer science and energy in physics. This analogy is not metaphorical: I describe three precise mathematical contexts, suggested recently, in which mathematics related to (un)computability is inspired by and to a degree reproduces formalisms of statistical physics and quantum field theory.

  12. Department of Energy research in utilization of high-performance computers

    International Nuclear Information System (INIS)

    Buzbee, B.L.; Worlton, W.J.; Michael, G.; Rodrigue, G.

    1980-08-01

    Department of Energy (DOE) and other Government research laboratories depend on high-performance computer systems to accomplish their programmatic goals. As the most powerful computer systems become available, they are acquired by these laboratories so that advances can be made in their disciplines. These advances are often the result of added sophistication to numerical models, the execution of which is made possible by high-performance computer systems. However, high-performance computer systems have become increasingly complex, and consequently it has become increasingly difficult to realize their potential performance. The result is a need for research on issues related to the utilization of these systems. This report gives a brief description of high-performance computers, and then addresses the use of and future needs for high-performance computers within DOE, the growing complexity of applications within DOE, and areas of high-performance computer systems warranting research. 1 figure

  13. Computer-Aided Modelling and Analysis of PV Systems: A Comparative Study

    Directory of Open Access Journals (Sweden)

    Charalambos Koukouvaos

    2014-01-01

    Full Text Available Modern scientific advances have enabled remarkable efficacy for photovoltaic systems with regard to the exploitation of solar energy, boosting them into a rapidly growing position among the systems developed for the production of renewable energy. However, in many cases the design, analysis, and control of photovoltaic systems are tasks which are quite complex and thus difficult to carry out. In order to cope with such problems, appropriate software tools have been developed, either as standalone products or as parts of general-purpose software platforms used to model and simulate the generation, transmission, and distribution of solar energy. The utilization of such software tools may be extremely helpful for the successful performance evaluation of energy systems with maximum accuracy and minimum cost in time and effort. The work presented in this paper aims, on a first level, at the performance analysis of various configurations of photovoltaic systems through computer-aided modelling. On a second level, it provides a comparative evaluation of the credibility of two of the most advanced graphical programming environments, namely Simulink and LabVIEW, with regard to their application to photovoltaic systems.
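
    As a complement to the Simulink/LabVIEW comparison, the core of most photovoltaic-system models is the single-diode equation, which is implicit in the cell current. The sketch below solves it numerically in Python; all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import brentq

# Single-diode PV cell model: I = Iph - I0*(exp((V + I*Rs)/(n*Vt)) - 1) - (V + I*Rs)/Rsh
# All parameter values below are illustrative placeholders.
Iph, I0 = 5.0, 1e-9       # photocurrent [A], diode saturation current [A]
Rs, Rsh = 0.02, 200.0     # series and shunt resistance [ohm]
n, Vt = 1.3, 0.02585      # ideality factor, thermal voltage [V]

def current(V):
    """Solve the implicit diode equation for the cell current at voltage V."""
    f = lambda I: Iph - I0 * (np.exp((V + I * Rs) / (n * Vt)) - 1) - (V + I * Rs) / Rsh - I
    return brentq(f, -1.0, Iph + 1.0)

for V in np.linspace(0.0, 0.6, 7):
    I = current(V)
    print(f"V = {V:4.2f} V, I = {I:5.2f} A, P = {V * I:5.2f} W")
```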

  14. Three numerical methods for the computation of the electrostatic energy

    International Nuclear Information System (INIS)

    Poenaru, D.N.; Galeriu, D.

    1975-01-01

    The FORTRAN programs for computation of the electrostatic energy of a body with axial symmetry by the Lawrence, Hill-Wheeler and Beringer methods are presented in detail. The accuracy, time of computation and the required memory of these methods are tested at various deformations for two simple parametrisations: two overlapping identical spheres and a spheroid. On this basis the field of application of each method is recommended.
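
    For orientation, the quantity such programs compute is the classical Coulomb self-energy U = (1/2) integral of rho(r) rho(r') / |r - r'| over both volumes. A crude Monte Carlo sketch for the simplest test case, a uniformly charged sphere (exact value 3Q^2/5R), is shown below; it is a toy check, not one of the three methods compared in the report.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_in_sphere(n):
    """Uniform random points inside a unit sphere (rejection sampling)."""
    pts = []
    while len(pts) < n:
        p = rng.uniform(-1.0, 1.0, size=(n, 3))
        p = p[np.einsum('ij,ij->i', p, p) <= 1.0]
        pts.extend(p)
    return np.array(pts[:n])

# Monte Carlo estimate of U = (1/2) * integral rho rho' / |r - r'| dV dV'
# for a uniformly charged unit sphere with total charge Q = 1 (Gaussian units),
# where the exact result is U = 3/5 * Q^2 / R = 0.6.
n = 20000
a, b = sample_in_sphere(n), sample_in_sphere(n)
inv_dist = 1.0 / np.linalg.norm(a - b, axis=1)
U = 0.5 * inv_dist.mean()  # (1/2) Q^2 <1/|r - r'|> over independent uniform pairs
print("Monte Carlo:", U, " exact:", 0.6)
```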

  15. Building Energy Assessment and Computer Simulation Applied to Social Housing in Spain

    Directory of Open Access Journals (Sweden)

    Juan Aranda

    2018-01-01

    Full Text Available The actual energy consumption and simulated energy performance of a building usually differ. This gap widens in social housing, owing to the characteristics of these buildings and the consumption patterns of economically vulnerable households affected by energy poverty. The aim of this work is to characterise the energy poverty of households that are representative of those residing in social housing, specifically in blocks of apartments in Southern Europe. The main variables that affect energy consumption and costs are analysed, and the models developed for software energy-performance simulations (which are applied to predict energy consumption in social housing) are validated against actual energy-consumption values. The results demonstrate that this type of household usually lives in surroundings at a temperature below the average thermal comfort level. We have taken into account that assuming a standard thermal comfort level may lead to significant differences between computer-aided building energy simulation and actual consumption data (which are 40–140% lower than simulated consumption). This fact is of integral importance, as we use computer simulation to predict building energy performance in social housing.

  16. Analysis of Project Finance | Energy Analysis | NREL

    Science.gov (United States)

    NREL analysis helps potential renewable energy developers and investors gain insights into the complex world of project finance. Renewable energy project finance is complex, requiring knowledge of federal tax credits, state-level incentives, renewable

  17. New computing systems, future computing environment, and their implications on structural analysis and design

    Science.gov (United States)

    Noor, Ahmed K.; Housner, Jerrold M.

    1993-01-01

    Recent advances in computer technology that are likely to impact structural analysis and design of flight vehicles are reviewed. A brief summary is given of the advances in microelectronics, networking technologies, and in the user-interface hardware and software. The major features of new and projected computing systems, including high performance computers, parallel processing machines, and small systems, are described. Advances in programming environments, numerical algorithms, and computational strategies for new computing systems are reviewed. The impact of the advances in computer technology on structural analysis and the design of flight vehicles is described. A scenario for future computing paradigms is presented, and the near-term needs in the computational structures area are outlined.

  18. Computation of the average energy for LXY electrons

    International Nuclear Information System (INIS)

    Grau Carles, A.; Grau, A.

    1996-01-01

    The application of an atomic rearrangement model, in which we only consider the three shells K, L and M, to compute the counting efficiency for electron-capture nuclides requires an averaged energy value for LMN electrons. In this report, we illustrate the procedure with two examples, 125I and 109Cd. (Author) 4 refs

  19. Uncertainty analysis in Monte Carlo criticality computations

    International Nuclear Information System (INIS)

    Qi Ao

    2011-01-01

    Highlights: ► Two types of uncertainty methods for k_eff Monte Carlo computations are examined. ► The sampling method has the fewest restrictions on perturbations but the greatest demand on computing resources. ► The analytical method is limited to small perturbations of material properties. ► Practicality relies on efficiency, multiparameter applicability and data availability. - Abstract: Uncertainty analysis is imperative for nuclear criticality risk assessments when using Monte Carlo neutron transport methods to predict the effective neutron multiplication factor (k_eff) for fissionable material systems. For the validation of Monte Carlo codes for criticality computations against benchmark experiments, code accuracy and precision are measured by both the computational bias and the uncertainty in the bias. The uncertainty in the bias accounts for known or quantified experimental, computational and model uncertainties. For the application of Monte Carlo codes to criticality analysis of fissionable material systems, an administrative margin of subcriticality must be imposed to provide additional assurance of subcriticality for any unknown or unquantified uncertainties. Because of the substantial impact of the administrative margin of subcriticality on the economics and safety of nuclear fuel cycle operations, recently increasing interest in reducing this margin makes uncertainty analysis in criticality safety computations more risk-significant. This paper provides an overview of the two most popular k_eff uncertainty analysis methods for Monte Carlo criticality computations: (1) sampling-based methods, and (2) analytical methods. Examples are given to demonstrate their usage in k_eff uncertainty analysis due to uncertainties in both neutronic and non-neutronic parameters of fissionable material systems.
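
    To make the sampling-based method concrete, the toy sketch below perturbs two cross-section inputs and propagates them through a stand-in k_eff formula; a real analysis would rerun the Monte Carlo transport code for each sample. All numbers are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy surrogate for a criticality calculation: k_eff as a simple function of
# two uncertain inputs (a fission and a capture cross section). A real
# analysis would rerun a Monte Carlo transport code for each sample.
def k_eff(sigma_f, sigma_c):
    return 2.43 * sigma_f / (sigma_f + sigma_c)  # nu * fission / absorption

# Nominal values and relative standard deviations (illustrative numbers).
sig_f0, sig_c0 = 1.20, 1.65
rel_sd = 0.02

samples = 5000
sig_f = rng.normal(sig_f0, rel_sd * sig_f0, samples)
sig_c = rng.normal(sig_c0, rel_sd * sig_c0, samples)
k = k_eff(sig_f, sig_c)

print(f"k_eff = {k.mean():.4f} +/- {k.std(ddof=1):.4f} (1 sigma, sampling method)")
```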

  20. Surprisal analysis and probability matrices for rotational energy transfer

    International Nuclear Information System (INIS)

    Levine, R.D.; Bernstein, R.B.; Kahana, P.; Procaccia, I.; Upchurch, E.T.

    1976-01-01

    The information-theoretic approach is applied to the analysis of state-to-state rotational energy transfer cross sections. The rotational surprisal is evaluated in the usual way, in terms of the deviance of the cross sections from their reference ('prior') values. The surprisal is found to be an essentially linear function of the energy transferred. This behavior accounts for the experimentally observed exponential gap law for the hydrogen halide systems. The data base here analyzed (taken from the literature) is largely computational in origin: quantal calculations for the hydrogenic systems H2 + H, He, Li+; HD + He; D2 + H and for the N2 + Ar system; and classical trajectory results for H2 + Li+; D2 + Li+ and N2 + Ar. The surprisal analysis not only serves to compact a large body of data but also aids in the interpretation of the results. A single surprisal parameter theta_R suffices to account for the (relative) magnitude of all state-to-state inelastic cross sections at a given energy
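
    The surprisal itself is simply I(dE) = -ln(sigma / sigma_prior), and the exponential gap law corresponds to I being linear in the energy transferred. The sketch below fits the surprisal parameter theta_R to illustrative cross-section values, not the data analyzed in the paper.

```python
import numpy as np

# Illustrative state-to-state cross sections sigma(j -> j') and their
# statistical "prior" values; the numbers are placeholders, not data
# from the systems analyzed in the paper.
delta_E = np.array([50., 100., 150., 200., 250.])   # energy transferred [cm^-1]
sigma = np.array([4.0, 1.9, 0.85, 0.40, 0.19])      # observed cross sections
sigma_prior = np.array([4.5, 3.8, 3.1, 2.6, 2.2])   # prior (statistical) values

# Surprisal I = -ln(sigma / sigma_prior); a linear dependence on delta_E
# is the exponential gap law, with slope theta_R.
surprisal = -np.log(sigma / sigma_prior)
theta_R, intercept = np.polyfit(delta_E, surprisal, 1)
print(f"theta_R = {theta_R:.4f} per cm^-1, intercept = {intercept:.3f}")
```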

  1. School of Analytic Computing in Theoretical High-Energy Physics

    CERN Document Server

    2015-01-01

    In recent years, huge progress has been made on computing rates for production processes of direct relevance to experiments at the Large Hadron Collider (LHC). Crucial to that remarkable advance has been our understanding of, and ability to compute, scattering amplitudes and cross sections. The aim of the School is to bring together young theorists working on the phenomenology of LHC physics with those working in more formal areas, and to provide them with the analytic tools to compute amplitudes in gauge theories. The school is addressed to Ph.D. students and post-docs in Theoretical High-Energy Physics. 30 hours of lectures and 4 hours of tutorials will be delivered over the 6 days of the School.

  2. Systems analysis and the computer

    Energy Technology Data Exchange (ETDEWEB)

    Douglas, A S

    1983-08-01

    The words 'systems analysis' are used in at least two senses. Whilst the general nature of the topic is well understood in the OR community, the nature of the term as used by computer scientists is less familiar. In this paper, the nature of systems analysis as it relates to computer-based systems is examined from the point of view that the computer system is an automaton embedded in a human system, and some facets of this are explored. It is concluded that OR analysts and computer analysts have things to learn from each other and that this ought to be reflected in their education. The important role played by change in the design of systems is also highlighted, and it is concluded that, whilst the application of techniques developed in the artificial intelligence field has considerable relevance to constructing automata able to adapt to change in the environment, the study of the human factors affecting the overall systems within which the automata are embedded has an even more important role. 19 references.

  3. FINCRUSH: a computer program for impact analysis of radioactive material transport cask with fins

    International Nuclear Information System (INIS)

    Ikushima, Takeshi

    1997-05-01

    In drop impact analyses for radioactive material transport casks with cooling fins, the relationship between fin plastic deformation and fin energy absorption is used. This relationship was obtained in experiments by ORNL and by MONSER Co. in Canada. Based on the ORNL experiments, a computer program FINCRUSH has been developed for rapid safety analysis of cask drop impact, to obtain the maximum impact acceleration and the maximum fin deformation. The main features of FINCRUSH are as follows: (1) annulus fins on a cylindrical shell and plate fins on a disk can be treated, (2) it is capable of graphical representation of calculation results and fin absorption energy data, and (3) not only mainframe computers but also workstations (UNIX) and personal computers (Windows) can run FINCRUSH. In the paper, a brief description of the calculation method of FINCRUSH is presented. The second section presents comparisons between FINCRUSH and experimental results. The third section provides a user's guide for FINCRUSH. (author)

  4. FINCRUSH: a computer program for impact analysis of radioactive material transport cask with fins

    Energy Technology Data Exchange (ETDEWEB)

    Ikushima, Takeshi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1997-05-01

    In drop impact analyses for radioactive material transport casks with cooling fins, the relationship between fin plastic deformation and fin energy absorption is used. This relationship was obtained in experiments by ORNL and by MONSER Co. in Canada. Based on the ORNL experiments, a computer program FINCRUSH has been developed for rapid safety analysis of cask drop impact, to obtain the maximum impact acceleration and the maximum fin deformation. The main features of FINCRUSH are as follows: (1) annulus fins on a cylindrical shell and plate fins on a disk can be treated, (2) it is capable of graphical representation of calculation results and fin absorption energy data, and (3) not only mainframe computers but also workstations (UNIX) and personal computers (Windows) can run FINCRUSH. In the paper, a brief description of the calculation method of FINCRUSH is presented. The second section presents comparisons between FINCRUSH and experimental results. The third section provides a user's guide for FINCRUSH. (author)

  5. A computational description of simple mediation analysis

    Directory of Open Access Journals (Sweden)

    Caron, Pier-Olivier

    2018-04-01

    Full Text Available Simple mediation analysis is an increasingly popular statistical analysis in psychology and in other social sciences. However, there are very few detailed accounts of the computations within the model. Articles more often focus on explaining mediation analysis conceptually rather than mathematically. Thus, the purpose of the current paper is to introduce the computational modelling within simple mediation analysis, accompanied by examples in R. Firstly, mediation analysis will be described. Then, the method to simulate data in R (with standardized coefficients) will be presented. Finally, the bootstrap method, the Sobel test and the Baron and Kenny test, all used to evaluate mediation (i.e., the indirect effect), will be developed. The R code to implement the computations presented is offered, as well as a script to carry out a power analysis and a complete example.
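
    The paper works in R; purely for illustration, the sketch below reproduces the same computational steps in Python: simulate data with standardized coefficients, estimate the indirect effect a*b from the two regressions, and bootstrap a percentile confidence interval.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate data for a simple mediation model X -> M -> Y with standardized
# coefficients a = b = 0.4 (illustrative values).
n = 300
X = rng.normal(size=n)
M = 0.4 * X + rng.normal(scale=np.sqrt(1 - 0.4**2), size=n)
Y = 0.4 * M + rng.normal(scale=np.sqrt(1 - 0.4**2), size=n)

def indirect_effect(X, M, Y):
    """Product a*b from the two regressions M ~ X and Y ~ X + M."""
    a = np.polyfit(X, M, 1)[0]
    design = np.column_stack([np.ones_like(X), X, M])
    b = np.linalg.lstsq(design, Y, rcond=None)[0][2]
    return a * b

# Percentile bootstrap confidence interval for the indirect effect.
boot = np.empty(2000)
for i in range(boot.size):
    idx = rng.integers(0, n, n)
    boot[i] = indirect_effect(X[idx], M[idx], Y[idx])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect = {indirect_effect(X, M, Y):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
```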

  6. Computer-Assisted Linguistic Analysis of the Peshitta

    NARCIS (Netherlands)

    Roorda, D.; Talstra, Eep; Dyk, Janet; van Keulen, Percy; Sikkel, Constantijn; Bosman, H.J.; Jenner, K.D.; Bakker, Dirk; Volkmer, J.A.; Gutman, Ariel; van Peursen, Wido Th.

    2014-01-01

    CALAP (Computer-Assisted Linguistic Analysis of the Peshitta) was a joint research project of the Peshitta Institute Leiden and the Werkgroep Informatica at the Vrije Universiteit Amsterdam (1999-2005). CALAP concerned the computer-assisted analysis of the Peshitta to Kings (Janet Dyk and Percy van

  7. Computer aided safety analysis 1989

    International Nuclear Information System (INIS)

    1990-04-01

    The meeting was conducted in a workshop style, to encourage involvement of all participants during the discussions. Forty-five (45) experts from 19 countries, plus 22 experts from the GDR, participated in the meeting. A list of participants can be found at the end of this volume. Forty-two (42) papers were presented and discussed during the meeting. Additionally, an open discussion was held on the possible directions of the IAEA programme on Computer Aided Safety Analysis. A summary of the conclusions of these discussions is presented in the publication. The remainder of this proceedings volume comprises the transcripts of selected technical papers (22) presented at the meeting. It is the intention of the IAEA that the publication of these proceedings will extend the benefits of the discussions held during the meeting to a larger audience throughout the world. The Technical Committee/Workshop on Computer Aided Safety Analysis was organized by the IAEA in cooperation with the National Board for Safety and Radiological Protection (SAAS) of the German Democratic Republic in Berlin. The purpose of the meeting was to provide an opportunity for discussions on experiences in the use of computer codes for safety analysis of nuclear power plants. In particular, it was intended to provide a forum for the exchange of information among experts using computer codes for safety analysis under the Technical Cooperation Programme on Safety of WWER Type Reactors (RER/9/004) and other experts throughout the world. A separate abstract was prepared for each of the 22 selected papers. Refs, figs, tabs and pictures

  8. Use of computer codes for system reliability analysis

    International Nuclear Information System (INIS)

    Sabek, M.; Gaafar, M.; Poucet, A.

    1988-01-01

    This paper gives a collective summary of the studies performed at the JRC, ISPRA on the use of computer codes for complex systems analysis. The computer codes dealt with are: the CAFTS-SALP software package, FRANTIC, FTAP, the computer code package RALLY, and the BOUNDS codes. Two reference study cases were executed by each code. The results obtained from the logic/probabilistic analysis, as well as the computation times, are compared.

  9. Challenges in reducing the computational time of QSTS simulations for distribution system analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Deboever, Jeremiah [Georgia Inst. of Technology, Atlanta, GA (United States); Zhang, Xiaochen [Georgia Inst. of Technology, Atlanta, GA (United States); Reno, Matthew J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Broderick, Robert Joseph [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Grijalva, Santiago [Georgia Inst. of Technology, Atlanta, GA (United States); Therrien, Francis [CME International T&D, St. Bruno, QC (Canada)

    2017-06-01

    The rapid increase in penetration of distributed energy resources on the electric power distribution system has created a need for more comprehensive interconnection modelling and impact analysis. Unlike conventional scenario-based studies, quasi-static time-series (QSTS) simulations can realistically model time-dependent voltage controllers and the diversity of potential impacts that can occur at different times of year. However, to accurately model a distribution system with all its controllable devices, a yearlong simulation at 1-second resolution is often required, which could take conventional computers a computational time of 10 to 120 hours when an actual unbalanced distribution feeder is modeled. This computational burden is a clear limitation to the adoption of QSTS simulations in interconnection studies and for determining optimal control solutions for utility operations. Our ongoing research to improve the speed of QSTS simulation has revealed many unique aspects of distribution system modelling and sequential power flow analysis that make fast QSTS a very difficult problem to solve. In this report, the most relevant challenges in reducing the computational time of QSTS simulations are presented: the number of power flows to solve, circuit complexity, time dependence between time steps, multiple valid power flow solutions, controllable element interactions, and extensive accurate simulation analysis.
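
    The sequential nature of the problem is the crux: each power flow depends on controller state left by the previous step. The skeleton below shows that structure; solve_power_flow and update_controllers are hypothetical placeholders, not the API of any particular distribution-system tool.

```python
# Sketch of the sequential structure that makes QSTS expensive: one power
# flow per time step, with controller state carried between steps.

def solve_power_flow(network, loads, controller_state):
    """Placeholder: would run one unbalanced power flow and return voltages."""
    return {bus: 1.0 for bus in network}

def update_controllers(state, voltages):
    """Placeholder: would advance time-dependent controls (e.g., regulators)."""
    return state

network = ["bus1", "bus2", "bus3"]
state = {}
seconds_per_year = 365 * 24 * 3600

# A year at 1-second resolution means ~31.5 million sequential solves;
# each step depends on the controller state left by the previous one,
# which is why the loop cannot simply be run in parallel.
for step in range(seconds_per_year):
    loads = {}                        # would be read from the load time series
    voltages = solve_power_flow(network, loads, state)
    state = update_controllers(state, voltages)
    if step == 2:                     # truncate the demo after a few steps
        break
```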

  10. Computing energy budget within a crop canopy from Penmann's ...

    Indian Academy of Sciences (India)

    Computing energy budget within a crop canopy from Penmann's formulae. Mahendra Mohan (Radio and Atmospheric Science Division, National Physical Laboratory, New Delhi 110012, India) and K K Srivastava (Department of Chemical Engineering, Institute of Technology, Banaras Hindu University, Varanasi).

  11. Thermal energy systems design and analysis

    CERN Document Server

    Penoncello, Steven G

    2015-01-01

    Introduction; Thermal Energy Systems Design and Analysis; Software; Thermal Energy System Topics; Units and Unit Systems; Thermophysical Properties; Engineering Design; Engineering Economics; Introduction; Common Engineering Economics Nomenclature; Economic Analysis Tool: The Cash Flow Diagram; Time Value of Money; Time Value of Money Examples; Using Software to Calculate Interest Factors; Economic Decision Making; Depreciation and Taxes; Problems; Analysis of Thermal Energy Systems; Introduction; Nomenclature; Thermophysical Properties of Substances; Suggested Thermal Energy Systems Analysis Procedure; Conserved and Balanced Quantities; Conservation of Mass; Conservation of Energy (The First Law of Thermodynamics); Entropy Balance (The Second Law of Thermodynamics); Exergy Balance: The Combined Law; Energy and Exergy Analysis of Thermal Energy Cycles; Detailed Analysis of Thermal Energy Cycles; Problems; Fluid Transport in Thermal Energy Systems; Introduction; Piping and Tubing Standards; Fluid Flow Fundamentals; Valves and Fittings; Design and Analysis of Pipe Networks; Economi...

  12. Utility of single-energy and dual-energy computed tomography in clot characterization: An in-vitro study.

    Science.gov (United States)

    Brinjikji, Waleed; Michalak, Gregory; Kadirvel, Ramanathan; Dai, Daying; Gilvarry, Michael; Duffy, Sharon; Kallmes, David F; McCollough, Cynthia; Leng, Shuai

    2017-06-01

    Background and purpose: Because computed tomography (CT) is the most commonly used imaging modality for the evaluation of acute ischemic stroke patients, developing CT-based techniques for improving clot characterization could prove useful. The purpose of this in-vitro study was to determine which single-energy or dual-energy CT techniques provided optimum discrimination between red blood cell (RBC) and fibrin-rich clots. Materials and methods: Seven clot types with varying fibrin and RBC densities were made (90% RBC, 99% RBC, 63% RBC, 36% RBC, 18% RBC and 0% RBC with high and low fibrin density) and their composition was verified histologically. Ten of each clot type were created and scanned with a second-generation dual-source scanner using three single-energy (80 kV, 100 kV, 120 kV) and two dual-energy protocols (80/Sn 140 kV and 100/Sn 140 kV). A region of interest (ROI) was placed over each clot and the mean attenuation was measured. Receiver operating characteristic curves were calculated at each energy level to determine the accuracy of differentiating RBC-rich clots from fibrin-rich clots. Results: Clot attenuation increased with RBC content at all energy levels. Single-energy at 80 kV and 120 kV and the dual-energy 80/Sn 140 kV protocol allowed for distinguishing between all clot types, with the exception of 36% RBC and 18% RBC. On receiver operating characteristic curve analysis, the 80/Sn 140 kV dual-energy protocol had the highest area under the curve for distinguishing between fibrin-rich and RBC-rich clots (area under the curve 0.99). Conclusions: Dual-energy CT with 80/Sn 140 kV had the highest accuracy for differentiating RBC-rich and fibrin-rich in-vitro thrombi. Further studies are needed to study the utility of non-contrast dual-energy CT in thrombus characterization in acute ischemic stroke.

  13. An accessibility solution of cloud computing by solar energy

    Directory of Open Access Journals (Sweden)

    Zuzana Priščáková

    2013-01-01

    Full Text Available Cloud computing is a modern innovative technology for solving problems of data storage, data processing, company infrastructure building and so on. Many companies worry about the changes involved in implementing this solution, because these changes could have a negative impact on the company or, in the case of newly established companies, the worry results from an unfamiliar environment. Data accessibility, integrity and security are among the basic problems of cloud computing. The aim of this paper is to offer, and scientifically confirm, a proposal for ensuring the accessibility of a cloud by implementing solar energy as its primary power source. Accessibility problems arise from power failures, during which data may be stolen or lost. Since a cloud is typically served from servers, their dependence on power is strong. Modern conditions offer a more innovative solution that is both ecological and economical for the company. The Sun, as a steady source of energy, offers the possibility of producing the necessary energy with a solar technique: solar panels. Connecting a solar panel as the primary source of energy for a server would remove its dependence on grid power, and hence the associated failures; grid power would remain as a secondary source. Such an ecological solution would also influence the economics of the company, because energy consumption costs would be lower. Besides the proposal of an accessibility solution, this paper includes a physical and mathematical treatment of the calculation of solar energy incident on the Earth, a calculation of the panel size by the cosine method, and a simulation of these calculations in MATLAB.
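
    The calculation of solar energy incident on a surface reduces to integrating irradiance weighted by the cosine of the solar zenith angle over the day. A Python sketch of that calculation (the paper uses MATLAB) is given below; the latitude, declination, panel area and efficiency are illustrative assumptions.

```python
import numpy as np

# Sketch: daily energy collected by a horizontal solar panel, using the
# cosine of the solar zenith angle. Latitude, declination, panel size and
# efficiency are illustrative placeholders.
S0 = 1000.0                 # clear-sky irradiance at the surface [W/m^2]
lat = np.radians(48.0)      # site latitude
decl = np.radians(10.0)     # solar declination for the chosen day
area, eff = 2.0, 0.18       # panel area [m^2] and conversion efficiency

t = np.linspace(0.0, 24.0, 24 * 60)            # time of day [h]
hour_angle = np.radians(15.0 * (t - 12.0))     # 15 degrees per hour
cos_zenith = (np.sin(lat) * np.sin(decl)
              + np.cos(lat) * np.cos(decl) * np.cos(hour_angle))
power = S0 * np.clip(cos_zenith, 0.0, None) * area * eff   # W

dt_h = t[1] - t[0]                             # time step in hours
energy_Wh = float(power.sum() * dt_h)          # Riemann sum over the day
print(f"daily yield: {energy_Wh / 1000:.2f} kWh")
```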

  14. Computational System For Rapid CFD Analysis In Engineering

    Science.gov (United States)

    Barson, Steven L.; Ascoli, Edward P.; Decroix, Michelle E.; Sindir, Munir M.

    1995-01-01

    Computational system comprising modular hardware and software sub-systems developed to accelerate and facilitate use of techniques of computational fluid dynamics (CFD) in engineering environment. Addresses integration of all aspects of CFD analysis process, including definition of hardware surfaces, generation of computational grids, CFD flow solution, and postprocessing. Incorporates interfaces for integration of all hardware and software tools needed to perform complete CFD analysis. Includes tools for efficient definition of flow geometry, generation of computational grids, computation of flows on grids, and postprocessing of flow data. System accepts geometric input from any of three basic sources: computer-aided design (CAD), computer-aided engineering (CAE), or definition by user.

  15. Analysis of energy flow during playground surface impacts.

    Science.gov (United States)

    Davidson, Peter L; Wilson, Suzanne J; Chalmers, David J; Wilson, Barry D; Eager, David; McIntosh, Andrew S

    2013-10-01

    The amount of energy dissipated away from or returned to a child falling onto a surface will influence fracture risk, but is not considered in current standards for playground impact-attenuating surfaces. A two-mass rheological computer simulation was used to model energy flow within the wrist and surface during hand impact with playground surfaces, and the potential of this approach to provide insights into such impacts and predict injury risk was examined. Acceleration data collected on-site from typical playground surfaces, and previously obtained data from children performing an exercise involving freefalling with a fully extended arm, provided the input. The model identified differences in energy-flow properties between playground surfaces and two potentially harmful surface characteristics: more energy was absorbed by (work done on) the wrist during both impact and rebound on rubber surfaces than on bark, and rubber surfaces started to rebound (return energy to the wrist) while the upper limb was still moving downward. Energy flow analysis thus provides information on playground surface characteristics and the impact process, and has the potential to identify fracture risks, inform the development of safer impact-attenuating surfaces, and contribute to the development of new energy-based arm fracture injury criteria and tests for use in conjunction with current methods.
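
    A two-mass rheological model of this kind is essentially two coupled spring-damper equations integrated in time, with the energy flow obtained by accumulating the contact-force power. The sketch below is a crude linear (bilateral-contact) version with illustrative parameters, not the model identified in the study.

```python
# Minimal two-mass spring-damper sketch of a hand/wrist (m1) impacting a
# playground surface (m2 with its own stiffness and damping). All parameter
# values are illustrative, not those identified in the study.
m1, m2 = 0.6, 0.2            # kg: effective hand mass, surface mass
k1, c1 = 2.0e4, 60.0         # wrist stiffness [N/m] and damping [N s/m]
k2, c2 = 1.0e5, 300.0        # surface stiffness and damping
v0 = 3.0                     # impact velocity [m/s]

dt, t_end = 1e-5, 0.05
x1, v1, x2, v2 = 0.0, -v0, 0.0, 0.0
work_on_wrist = 0.0

for _ in range(int(t_end / dt)):
    f_contact = k1 * (x2 - x1) + c1 * (v2 - v1)   # force surface exerts on m1
    f_ground = -k2 * x2 - c2 * v2                 # surface restoring force
    a1 = f_contact / m1
    a2 = (-f_contact + f_ground) / m2
    work_on_wrist += f_contact * v1 * dt          # energy flow into/out of m1
    v1 += a1 * dt
    x1 += v1 * dt
    v2 += a2 * dt
    x2 += v2 * dt

print(f"net work done on the falling mass: {work_on_wrist:.2f} J")
```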

  16. Computer programs simplify optical system analysis

    Science.gov (United States)

    1965-01-01

    The optical ray-trace computer program performs geometrical ray tracing. The energy-trace program calculates the relative monochromatic flux density on a specific target area. This program uses the ray-trace program as a subroutine to generate a representation of the optical system.

  17. Energy Consumption and Indoor Environment Predicted by a Combination of Computational Fluid Dynamics and Building Energy Performance Simulation

    DEFF Research Database (Denmark)

    Nielsen, Peter Vilhelm

    2003-01-01

    An interconnection between a building energy performance simulation program and a Computational Fluid Dynamics (CFD) program for room air distribution is introduced for improvement of the predictions of both the energy consumption and the indoor environment. The article describes a calculation

  18. Computer-controlled system for plasma ion energy auto-analyzer

    International Nuclear Information System (INIS)

    Wu Xianqiu; Chen Junfang; Jiang Zhenmei; Zhong Qinghua; Xiong Yuying; Wu Kaihua

    2003-01-01

    A computer-controlled system for a plasma ion energy auto-analyzer was technically studied for rapid and online measurement of the plasma ion energy distribution. The system intelligently controls all the equipment via an RS-232 port, a printer port and a home-built circuit. The software, designed in the LabVIEW G language, automatically fulfils all of the tasks, such as system initialization, adjustment of the scanning voltage, measurement of weak currents, data processing, graphic export, etc. Using the system, only a few minutes are needed to acquire the whole ion energy distribution, which rapidly provides important parameters of plasma process techniques for semiconductor devices and microelectronics

  19. BigData and computing challenges in high energy and nuclear physics

    Science.gov (United States)

    Klimentov, A.; Grigorieva, M.; Kiryanov, A.; Zarochentsev, A.

    2017-06-01

    In this contribution we discuss the various aspects of the computing resource needs of experiments in High Energy and Nuclear Physics, in particular at the Large Hadron Collider. These needs will evolve in the future when moving from the LHC to the HL-LHC in ten years from now, when the already exascale levels of data we are processing could increase by a further order of magnitude. The distributed computing environment has been a great success, and the inclusion of new supercomputing facilities, cloud computing and volunteer computing in the future is a big challenge, which we are successfully mastering with a considerable contribution from many supercomputing centres around the world, and from academic and commercial cloud providers. We also discuss R&D computing projects started recently in the National Research Center "Kurchatov Institute"

  20. BigData and computing challenges in high energy and nuclear physics

    International Nuclear Information System (INIS)

    Klimentov, A.; Grigorieva, M.; Kiryanov, A.; Zarochentsev, A.

    2017-01-01

    In this contribution we discuss the various aspects of the computing resource needs of experiments in High Energy and Nuclear Physics, in particular at the Large Hadron Collider. These needs will evolve in the future when moving from the LHC to the HL-LHC in ten years from now, when the already exascale levels of data we are processing could increase by a further order of magnitude. The distributed computing environment has been a great success, and the inclusion of new supercomputing facilities, cloud computing and volunteer computing in the future is a big challenge, which we are successfully mastering with a considerable contribution from many supercomputing centres around the world, and from academic and commercial cloud providers. We also discuss R&D computing projects started recently in the National Research Center "Kurchatov Institute"

  1. 1988 CERN school of computing

    International Nuclear Information System (INIS)

    Verkerk, C.

    1989-01-01

    These Proceedings contain written versions of most of the lectures delivered at the 1988 CERN School of Computing. Five lecture series concerned different aspects of parallel and vector processing: advanced computer architectures; parallel architectures for neurocomputers; Occam and transputers; vectorization of Monte Carlo code; and vectorization of high-energy physics code. Software engineering was the topic of three series of lectures: formal methods for program design; introduction to software engineering; and tutorial lectures on structured analysis and structured design. Lectures on data acquisition and recording were followed by lectures on new techniques for data analysis in high-energy physics. Computer-assisted design of electronic systems, and silicon compilation and design synthesis for digital systems, were the topics of two other, closely related, lecture series. Lectures on accelerator controls and on robotics are also recorded in these Proceedings. Various other aspects of computing were covered in lectures on high-speed networks, document preparation systems, interpersonal communication using computers, and Fortran 8x. Two general lectures gave an introduction to high-energy physics at CERN. (orig.)

  2. Affective Computing and Sentiment Analysis

    CERN Document Server

    Ahmad, Khurshid

    2011-01-01

    This volume maps the watershed areas between two 'holy grails' of computer science: the identification and interpretation of affect, including sentiment and mood. The expression of sentiment and mood involves the use of metaphors, especially in emotive situations. Affect computing is rooted in hermeneutics, philosophy, political science and sociology, and is now a key area of research in computer science. The 24/7 news sites and blogs facilitate the expression and shaping of opinion locally and globally. Sentiment analysis, based on text and data mining, is being used in looking at news

  3. Soft-error tolerance and energy consumption evaluation of embedded computer with magnetic random access memory in practical systems using computer simulations

    Science.gov (United States)

    Nebashi, Ryusuke; Sakimura, Noboru; Sugibayashi, Tadahiko

    2017-08-01

    We evaluated the soft-error tolerance and energy consumption of an embedded computer with magnetic random access memory (MRAM) using two computer simulators. One is a central processing unit (CPU) simulator of a typical embedded computer system. We simulated the radiation-induced single-event-upset (SEU) probability in a spin-transfer-torque MRAM cell and also the failure rate of a typical embedded computer due to SEU errors in its main memory. The other is a delay-tolerant network (DTN) system simulator. It simulates the power dissipation of the wireless sensor network nodes of the system using a revised CPU simulator and a network simulator. We demonstrated that the SEU effect on an embedded computer with 1 Gbit of MRAM-based working memory is less than 1 failure in time (FIT). We also demonstrated that the energy consumption of a DTN sensor node with MRAM-based working memory can be reduced to 1/11. These results indicate that MRAM-based working memory enhances the disaster tolerance of embedded computers.

  4. Research in applied mathematics, numerical analysis, and computer science

    Science.gov (United States)

    1984-01-01

    Research conducted at the Institute for Computer Applications in Science and Engineering (ICASE) in applied mathematics, numerical analysis, and computer science is summarized and abstracts of published reports are presented. The major categories of the ICASE research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers.

  5. Cloud computing for energy management in smart grid - an application survey

    International Nuclear Information System (INIS)

    Naveen, P; Ing, Wong Kiing; Danquah, Michael Kobina; Sidhu, Amandeep S; Abu-Siada, Ahmed

    2016-01-01

    The smart grid is an emerging energy system in which information technology tools and techniques are applied to make the grid run more efficiently. It possesses demand-response capacity to help balance electrical consumption with supply. The challenges and opportunities of emerging and future smart grids can be addressed by cloud computing. To address these requirements, we provide an in-depth survey of different cloud computing applications for energy management in the smart grid architecture. In this survey, we present an outline of the current state of research on smart grid development. We also propose a model of cloud-based economic power dispatch for the smart grid. (paper)

  6. Octopus: embracing the energy efficiency of handheld multimedia computers

    NARCIS (Netherlands)

    Havinga, Paul J.M.; Smit, Gerardus Johannes Maria

    1999-01-01

    In the MOBY DICK project we develop and define the architecture of a new generation of mobile hand-held computers called Mobile Digital Companions. The Companions must meet several major requirements: high performance, energy efficient, a notion of Quality of Service (QoS), small size, and low

  7. On the energy benefit of compute-and-forward on the hexagonal lattice

    NARCIS (Netherlands)

    Ren, Zhijie; Goseling, Jasper; Weber, Jos; Gastpar, Michael; Skoric, B.; Ignatenko, T.

    2014-01-01

    We study the energy benefit of applying compute-and-forward on a wireless hexagonal lattice network with multiple unicast sessions with a specific session placement. Two compute-and-forward based transmission schemes are proposed, which allow the relays to exploit both the broadcast and

  8. Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.

    Science.gov (United States)

    Heald, Emerson F.

    1978-01-01

    Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. The method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems, and applications of the method are also included. (HM)
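
    The approach amounts to minimizing the total Gibbs free energy over species amounts subject to element (material) balance constraints. A minimal sketch for a single gas-phase reaction follows; the standard chemical potentials are placeholder values, not data from the article.

```python
import numpy as np
from scipy.optimize import minimize

# Sketch: equilibrium of N2O4 <-> 2 NO2 at fixed T and P by minimizing the
# total Gibbs free energy subject to a nitrogen material balance.
# mu0 values (dimensionless, mu0/RT) are illustrative placeholders.
mu0 = np.array([0.0, 2.0])      # [N2O4, NO2] standard chemical potentials / RT
nN_total = 2.0                  # moles of N atoms (e.g., start from 1 mol N2O4)
n_atoms = np.array([2.0, 1.0])  # N atoms per molecule of each species

def gibbs(n):
    n = np.clip(n, 1e-12, None)                         # keep logs defined
    ntot = n.sum()
    return float(np.sum(n * (mu0 + np.log(n / ntot))))  # ideal-gas mixture, P = P0

cons = {"type": "eq", "fun": lambda n: n @ n_atoms - nN_total}
res = minimize(gibbs, x0=[0.5, 1.0], bounds=[(1e-10, None)] * 2, constraints=cons)
print("equilibrium moles [N2O4, NO2]:", res.x)
```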

  9. Energy consumption analysis for various memristive networks under different learning strategies

    Energy Technology Data Exchange (ETDEWEB)

    Deng, Lei; Wang, Dong [Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing 100084 (China); Zhang, Ziyang; Tang, Pei [Optical Memory National Engineering Research Center, Department of Precision Instrument, Tsinghua University, Beijing 100084 (China); Li, Guoqi, E-mail: liguoqi@mail.tsinghua.edu.cn [Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing 100084 (China); Pei, Jing, E-mail: peij@mail.tsinghua.edu.cn [Center for Brain Inspired Computing Research (CBICR), Department of Precision Instrument, Tsinghua University, Beijing 100084 (China); Optical Memory National Engineering Research Center, Department of Precision Instrument, Tsinghua University, Beijing 100084 (China)

    2016-02-22

    Highlights: • An estimation methodology for the energy consumed by memristors is established. • Energy comparisons for different learning strategies in various networks are touched on. • Less-pulses and low-power-first modulation methods are energy efficient. • Properly decreasing the memristor modulation precision reduces the energy consumption. • Helpful solutions for power improvement in memristive systems are proposed. - Abstract: Recently, various memristive systems have emerged to emulate the efficient computing paradigm of the brain cortex; however, how to make them energy efficient still remains unclear, especially from an overall perspective. Here, a systematic and bottom-up energy consumption analysis is demonstrated, covering both the memristor device level and the network learning level. We propose an energy estimation methodology for modulating the memristive synapses, which is simulated in three typical neural networks with different synaptic structures and learning strategies for both offline and online learning. These results provide an in-depth insight into creating energy-efficient brain-inspired neuromorphic devices in the future.

  10. Energy consumption analysis for various memristive networks under different learning strategies

    International Nuclear Information System (INIS)

    Deng, Lei; Wang, Dong; Zhang, Ziyang; Tang, Pei; Li, Guoqi; Pei, Jing

    2016-01-01

    Highlights: • An estimation methodology for the energy consumed by memristors is established. • Energy comparisons for different learning strategies in various networks are touched on. • Less-pulses and low-power-first modulation methods are energy efficient. • Properly decreasing the memristor modulation precision reduces the energy consumption. • Helpful solutions for power improvement in memristive systems are proposed. - Abstract: Recently, various memristive systems have emerged to emulate the efficient computing paradigm of the brain cortex; however, how to make them energy efficient still remains unclear, especially from an overall perspective. Here, a systematic and bottom-up energy consumption analysis is demonstrated, covering both the memristor device level and the network learning level. We propose an energy estimation methodology for modulating the memristive synapses, which is simulated in three typical neural networks with different synaptic structures and learning strategies for both offline and online learning. These results provide an in-depth insight into creating energy-efficient brain-inspired neuromorphic devices in the future.

  11. High Energy Physics Exascale Requirements Review. An Office of Science review sponsored jointly by Advanced Scientific Computing Research and High Energy Physics, June 10-12, 2015, Bethesda, Maryland

    Energy Technology Data Exchange (ETDEWEB)

    Habib, Salman [Argonne National Lab. (ANL), Argonne, IL (United States); Roser, Robert [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Gerber, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Antypas, Katie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Dart, Eli [Esnet, Berkeley, CA (United States); Dosanjh, Sudip [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hack, James [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Monga, Inder [Esnet, Berkeley, CA (United States); Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States); Riley, Katherine [Argonne National Lab. (ANL), Argonne, IL (United States); Rotman, Lauren [Esnet, Berkeley, CA (United States); Straatsma, Tjerk [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wells, Jack [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Williams, Tim [Argonne National Lab. (ANL), Argonne, IL (United States); Almgren, A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Amundson, J. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Bailey, Stephen [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bard, Deborah [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bloom, Ken [Univ. of Nebraska, Lincoln, NE (United States); Bockelman, Brian [Univ. of Nebraska, Lincoln, NE (United States); Borgland, Anders [SLAC National Accelerator Lab., Menlo Park, CA (United States); Borrill, Julian [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Boughezal, Radja [Argonne National Lab. (ANL), Argonne, IL (United States); Brower, Richard [Boston Univ., MA (United States); Cowan, Benjamin [SLAC National Accelerator Lab., Menlo Park, CA (United States); Finkel, Hal [Argonne National Lab. (ANL), Argonne, IL (United States); Frontiere, Nicholas [Argonne National Lab. (ANL), Argonne, IL (United States); Fuess, Stuart [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Ge, Lixin [SLAC National Accelerator Lab., Menlo Park, CA (United States); Gnedin, Nick [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Gottlieb, Steven [Indiana Univ., Bloomington, IN (United States); Gutsche, Oliver [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Han, T. [Indiana Univ., Bloomington, IN (United States); Heitmann, Katrin [Argonne National Lab. (ANL), Argonne, IL (United States); Hoeche, Stefan [SLAC National Accelerator Lab., Menlo Park, CA (United States); Ko, Kwok [SLAC National Accelerator Lab., Menlo Park, CA (United States); Kononenko, Oleksiy [SLAC National Accelerator Lab., Menlo Park, CA (United States); LeCompte, Thomas [Argonne National Lab. (ANL), Argonne, IL (United States); Li, Zheng [SLAC National Accelerator Lab., Menlo Park, CA (United States); Lukic, Zarija [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Mori, Warren [Univ. of California, Los Angeles, CA (United States); Ng, Cho-Kuen [SLAC National Accelerator Lab., Menlo Park, CA (United States); Nugent, Peter [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Oleynik, Gene [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); O’Shea, Brian [Michigan State Univ., East Lansing, MI (United States); Padmanabhan, Nikhil [Yale Univ., New Haven, CT (United States); Petravick, Donald [Univ. of Illinois, Urbana, IL (United States). 
National Center for Supercomputing Applications; Petriello, Frank J. [Argonne National Lab. (ANL), Argonne, IL (United States); Pope, Adrian [Argonne National Lab. (ANL), Argonne, IL (United States); Power, John [Argonne National Lab. (ANL), Argonne, IL (United States); Qiang, Ji [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Reina, Laura [Florida State Univ., Tallahassee, FL (United States); Rizzo, Thomas Gerard [SLAC National Accelerator Lab., Menlo Park, CA (United States); Ryne, Robert [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Schram, Malachi [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Spentzouris, P. [Fermi National Accelerator Lab. (FNAL), Batavia, IL (United States); Toussaint, Doug [Univ. of Arizona, Tucson, AZ (United States); Vay, Jean Luc [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Viren, B. [Brookhaven National Lab. (BNL), Upton, NY (United States); Wuerthwein, Frank [Univ. of California, San Diego, CA (United States); Xiao, Liling [SLAC National Accelerator Lab., Menlo Park, CA (United States); Coffey, Richard [Argonne National Lab. (ANL), Argonne, IL (United States)

    2016-11-29

    The U.S. Department of Energy (DOE) Office of Science (SC) Offices of High Energy Physics (HEP) and Advanced Scientific Computing Research (ASCR) convened a programmatic Exascale Requirements Review on June 10–12, 2015, in Bethesda, Maryland. This report summarizes the findings, results, and recommendations derived from that meeting. The high-level findings and observations are as follows. Larger, more capable computing and data facilities are needed to support HEP science goals in all three frontiers: Energy, Intensity, and Cosmic. The expected scale of the demand at the 2025 timescale is at least two orders of magnitude — and in some cases greater — than that available currently. The growth rate of data produced by simulations is overwhelming the current ability of both facilities and researchers to store and analyze it. Additional resources and new techniques for data analysis are urgently needed. Data rates and volumes from experimental facilities are also straining the current HEP infrastructure in its ability to store and analyze large and complex data volumes. Appropriately configured leadership-class facilities can play a transformational role in enabling scientific discovery from these datasets. A close integration of high-performance computing (HPC) simulation and data analysis will greatly aid in interpreting the results of HEP experiments. Such an integration will minimize data movement and facilitate interdependent workflows. Long-range planning between HEP and ASCR will be required to meet HEP’s research needs. To best use ASCR HPC resources, the experimental HEP program needs (1) an established, long-term plan for access to ASCR computational and data resources, (2) the ability to map workflows to HPC resources, (3) the ability for ASCR facilities to accommodate workflows run by collaborations potentially comprising thousands of individual members, (4) to transition codes to the next-generation HPC platforms that will be available at ASCR

  12. A Research Roadmap for Computation-Based Human Reliability Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald [Idaho National Lab. (INL), Idaho Falls, ID (United States); Mandelli, Diego [Idaho National Lab. (INL), Idaho Falls, ID (United States); Joe, Jeffrey [Idaho National Lab. (INL), Idaho Falls, ID (United States); Smith, Curtis [Idaho National Lab. (INL), Idaho Falls, ID (United States); Groth, Katrina [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-08-01

    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermal-hydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

  13. A Research Roadmap for Computation-Based Human Reliability Analysis

    International Nuclear Information System (INIS)

    Boring, Ronald; Mandelli, Diego; Joe, Jeffrey; Smith, Curtis; Groth, Katrina

    2015-01-01

    The United States (U.S.) Department of Energy (DOE) is sponsoring research through the Light Water Reactor Sustainability (LWRS) program to extend the life of the currently operating fleet of commercial nuclear power plants. The Risk Informed Safety Margin Characterization (RISMC) research pathway within LWRS looks at ways to maintain and improve the safety margins of these plants. The RISMC pathway includes significant developments in the area of thermal-hydraulics code modeling and the development of tools to facilitate dynamic probabilistic risk assessment (PRA). PRA is primarily concerned with the risk of hardware systems at the plant; yet, hardware reliability is often secondary in overall risk significance to human errors that can trigger or compound undesirable events at the plant. This report highlights ongoing efforts to develop a computation-based approach to human reliability analysis (HRA). This computation-based approach differs from existing static and dynamic HRA approaches in that it: (i) interfaces with a dynamic computation engine that includes a full scope plant model, and (ii) interfaces with a PRA software toolset. The computation-based HRA approach presented in this report is called the Human Unimodels for Nuclear Technology to Enhance Reliability (HUNTER) and incorporates in a hybrid fashion elements of existing HRA methods to interface with new computational tools developed under the RISMC pathway. The goal of this research effort is to model human performance more accurately than existing approaches, thereby minimizing modeling uncertainty found in current plant risk models.

  14. TRANGE: computer code to calculate the energy beam degradation in target stack

    International Nuclear Information System (INIS)

    Bellido, Luis F.

    1995-07-01

    A computer code to calculate the projectile energy degradation along a target stack was developed for an IBM or compatible personal microcomputer. A comparison of protons and deuterons bombarding uranium and aluminium targets was made. The results showed that the data obtained with TRANGE were in agreement with other computer codes such as TRIM and EDP, and also with calculations using the Williamson and Janni range and stopping-power tables. TRANGE can be used for any charged-particle ion, for energies between 1 and 100 MeV, in metal foils and solid compound targets. (author). 8 refs., 2 tabs
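
    Energy degradation through a stack is computed by integrating the stopping power dE/dx foil by foil. The sketch below shows that loop with a crude 1/E toy stopping-power model; real codes such as TRANGE use tabulated range and stopping-power data, and all numbers here are illustrative.

```python
# Sketch of degrading a projectile's energy through a target stack by
# integrating dE/dx foil by foil. The stopping-power model below is a crude
# 1/E toy (Bethe-like energy dependence), not the tables used by TRANGE.
def stopping_power(E, k):
    """Toy stopping power dE/dx [MeV/um] for a projectile of energy E [MeV]."""
    return k / max(E, 0.1)

# (material constant k, thickness in um) for each foil -- illustrative values.
stack = [(0.05, 50.0), (0.02, 100.0), (0.05, 25.0)]

E = 20.0          # incident energy [MeV]
dx = 0.1          # integration step [um]
for i, (k, thickness) in enumerate(stack, 1):
    for _ in range(int(thickness / dx)):
        E -= stopping_power(E, k) * dx
        if E <= 0.0:
            E = 0.0
            break
    print(f"energy after foil {i}: {E:6.2f} MeV")
```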

  15. ENERGY USE ANALYSIS FOR RICE PRODUCTION IN NASARAWA STATE, NIGERIA

    Directory of Open Access Journals (Sweden)

    Hussaini Yusuf Ibrahim

    2012-12-01

    Full Text Available The study was conducted to analyze energy use in rice production in Nasarawa State, Nigeria, using a sample of 120 randomly selected rice farmers. Energy productivity, energy efficiency and specific energy were computed, and simple descriptive statistics were used for data analysis. The energy use pattern shows that rice production consumed an average total energy of 12906.8 MJ/ha, with the herbicide energy input contributing the largest share (53.55%). Human labour had the least share (0.74%) of the total energy input used. The energy productivity, specific energy and energy efficiency were 0.3 kg/MJ, 3.6 MJ/kg and 4.1, respectively. A total of 10925.0 MJ of energy was used in the form of indirect energy and 1981.8 MJ was in the direct form of energy. Non-renewable energy forms contributed the largest share (80.63%) of the total energy input used for rice production in the study area. Rice production in the study area was observed to be mainly dependent on non-renewable and indirect energy inputs, especially herbicide. Thus, the study recommends the introduction of an integrated weed management system in order to reduce cost and dependence on a non-renewable input for weed control.
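
    For reference, the three indicators quoted above have standard definitions: energy productivity is output mass per unit energy input, specific energy is energy input per unit output mass, and energy efficiency (the energy ratio) is energy output over energy input. The sketch below approximately reproduces the reported values from the total energy input together with assumed yield and output-energy figures; neither is stated in the abstract.

```python
# Sketch of the energy-use indicators reported in the study, computed from
# illustrative input/output figures (the yield and output-energy values
# below are assumptions, not taken from the paper).
energy_input = 12906.8          # total energy input [MJ/ha], from the abstract
yield_kg = 3600.0               # assumed paddy yield [kg/ha]
energy_output = 52000.0         # assumed energy content of output [MJ/ha]

energy_productivity = yield_kg / energy_input      # kg obtained per MJ
specific_energy = energy_input / yield_kg          # MJ spent per kg
energy_efficiency = energy_output / energy_input   # output/input energy ratio

print(f"energy productivity: {energy_productivity:.2f} kg/MJ")
print(f"specific energy:     {specific_energy:.2f} MJ/kg")
print(f"energy efficiency:   {energy_efficiency:.2f}")
```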

  16. Proceeding of 29th domestic symposium on computational science and nuclear energy in the 21st century

    International Nuclear Information System (INIS)

    2001-10-01

    As the 29th domestic symposium of the Atomic Energy Research Committee of the Japan Welding Engineering Society, the symposium was held under the title 'Computational science and nuclear energy in the 21st century'. The keynote speech was delivered under the title 'Nuclear power plant safety secured by computational science in the 21st century'. Three speakers gave lectures titled 'Materials design and computational science', 'Development of advanced reactors in the 21st century' and 'Application of computational science to operation and maintenance management of plants'. The lectures were followed by a panel discussion titled 'Computational science and nuclear energy in the 21st century'. (T. Tanaka)

  17. Money for Research, Not for Energy Bills: Finding Energy and Cost Savings in High Performance Computer Facility Designs

    Energy Technology Data Exchange (ETDEWEB)

    Drewmark Communications; Sartor, Dale; Wilson, Mark

    2010-07-01

    High-performance computing facilities in the United States consume an enormous amount of electricity, cutting into research budgets and challenging public- and private-sector efforts to reduce energy consumption and meet environmental goals. However, these facilities can greatly reduce their energy demand through energy-efficient design of the facility itself. Using a case study of a facility under design, this article discusses strategies and technologies that can be used to help achieve energy reductions.

  18. COMPUTATIONAL MODELS USED FOR MINIMIZING THE NEGATIVE IMPACT OF ENERGY ON THE ENVIRONMENT

    Directory of Open Access Journals (Sweden)

    Oprea D.

    2012-04-01

    Full Text Available Optimizing an energy system is a problem that has been studied extensively for many years by scientists. The problem can be approached from different viewpoints and with different computer programs. This work characterizes one of the calculation methods used in Europe for modelling and optimizing power systems; the method is based on reducing the impact of the energy system on the environment. The computer program used and characterized in this article is GEMIS.

  19. Safety analysis of control rod drive computers

    International Nuclear Information System (INIS)

    Ehrenberger, W.; Rauch, G.; Schmeil, U.; Maertz, J.; Mainka, E.U.; Nordland, O.; Gloee, G.

    1985-01-01

    The analysis of the most significant user programmes revealed no errors in these programmes. The evaluation of approximately 82 cumulated years of operation demonstrated that the operating system of the control rod positioning processor has a reliability that is sufficiently good for the tasks this computer has to fulfil. Computers can be used for safety-relevant tasks. The experience gained with the control rod positioning processor confirms that computers are not less reliable than conventional instrumentation and control systems for comparable tasks. The examination and evaluation of computers for safety-relevant tasks can be done with programme analysis or statistical evaluation of the operating experience. Programme analysis is recommended for seldom-used and well-structured programmes. For programmes with a long cumulated operating time, a statistical evaluation is more advisable. The effort for examination and evaluation is not greater than the corresponding effort for conventional instrumentation and control systems. This project has also revealed that, where it is technologically sensible, process-controlling computers or microprocessors can be qualified for safety-relevant tasks without undue effort. (orig./HP) [de

  20. Computer aided plant engineering: An analysis and suggestions for computer use

    International Nuclear Information System (INIS)

    Leinemann, K.

    1979-09-01

    To derive guidelines and boundary conditions for computer use in plant engineering, an analysis of the engineering process was carried out. The structure of plant engineering is represented by a network of subtasks and subsets of data which are to be manipulated. The main tool for the integration of CAD subsystems in plant engineering should be a central database, which is described here by its characteristic requirements and a possible simple conceptual schema. The main features of an interactive system for computer-aided plant engineering are briefly illustrated by two examples. The analysis leads to the conclusion that an interactive graphic system for the manipulation of net-like structured data, usable for various subtasks, should be the basis for computer-aided plant engineering. (orig.) [de

  1. MeReg: Managing Energy-SLA Tradeoff for Green Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Rahul Yadav

    2017-01-01

    Full Text Available Mobile cloud computing (MCC) provides various cloud computing services to mobile users. The rapid growth of MCC users requires large-scale MCC data centers to provide them with data processing and storage services. The growth of these data centers directly impacts electrical energy consumption, which affects businesses as well as the environment through carbon dioxide (CO2) emissions. Moreover, a large amount of energy is wasted keeping servers running during low workload. To reduce the energy consumption of mobile cloud data centers, an energy-aware host overload detection algorithm and virtual machine (VM) selection algorithms are required for VM consolidation when host underload or overload is detected. After allocating resources to all VMs, underloaded hosts are put into an energy-saving mode in order to minimize power consumption. To address this issue, we propose an adaptive heuristic energy-aware algorithm, which creates an upper CPU utilization threshold using recent CPU utilization history to detect overloaded hosts, and dynamic VM selection algorithms to consolidate the VMs from overloaded or underloaded hosts. The goal is to minimize total energy consumption and maximize Quality of Service, including the reduction of service level agreement (SLA) violations. The CloudSim simulator is used to validate the algorithm, and simulations are conducted on real workload traces from 10 different days, as provided by PlanetLab.
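
    One plausible realization of such an adaptive upper threshold (a sketch, not necessarily the paper's exact rule) derives it from the dispersion of recent utilization samples, e.g. via the median absolute deviation, in the spirit of the consolidation policies distributed with CloudSim:

    ```python
    # Sketch of an adaptive host-overload detector: the upper CPU threshold
    # shrinks when recent utilization is volatile. MAD-based rule similar to
    # common CloudSim policies; not necessarily the paper's exact algorithm.
    import statistics

    def upper_threshold(history, safety=2.5):
        """history: recent CPU utilization samples in [0, 1]."""
        med = statistics.median(history)
        mad = statistics.median(abs(u - med) for u in history)
        return max(0.0, 1.0 - safety * mad)

    def is_overloaded(history, current_utilization):
        return current_utilization > upper_threshold(history)

    history = [0.55, 0.60, 0.58, 0.72, 0.66, 0.61, 0.59, 0.64]
    print(upper_threshold(history))          # ~0.93 for this history
    print(is_overloaded(history, 0.95))      # True -> migrate a VM away
    ```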

  2. NREL Analysis: Reimagining What's Possible for Clean Energy, Continuum Magazine, Summer 2015 / Issue 8; NREL (National Renewable Energy Laboratory)

    Energy Technology Data Exchange (ETDEWEB)

    None

    2015-08-01

    This issue of Continuum Magazine covers the depth and breadth of NREL's ever-expanding analytical capabilities. For example, in one project we are leading national efforts to create a computer model of one of the most complex systems ever built. This system, the eastern part of the North American power grid, will likely host an increasing percentage of renewable energy in years to come. Understanding how this system will work is important to its success - and NREL analysis is playing a major role. We are also identifying the connections among energy, the environment and the economy through analysis that will point us toward a 'water smart' future.

  3. A novel dual energy method for enhanced quantitative computed tomography

    Science.gov (United States)

    Emami, A.; Ghadiri, H.; Rahmim, A.; Ay, M. R.

    2018-01-01

    Accurate assessment of bone mineral density (BMD) is critically important in clinical practice, and conveniently enabled via quantitative computed tomography (QCT). Meanwhile, dual-energy QCT (DEQCT) enables enhanced detection of small changes in BMD relative to single-energy QCT (SEQCT). In the present study, we aimed to investigate the accuracy of QCT methods, with particular emphasis on a new dual-energy approach, in comparison to single-energy and conventional dual-energy techniques. We used a sinogram-based analytical CT simulator to model the complete chain of CT data acquisitions, and assessed performance of SEQCT and different DEQCT techniques in quantification of BMD. We demonstrate a 120% reduction in error when using a proposed dual-energy Simultaneous Equation by Constrained Least-squares method, enabling more accurate bone mineral measurements.
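
    At the heart of any dual-energy decomposition is a small linear system: the measured attenuation at the two energies is expressed as a mixture of basis materials and solved for the basis densities, typically under non-negativity constraints as in the constrained least-squares method named above. A minimal sketch; the basis attenuation coefficients and the measurement are invented placeholders, not calibrated values.

    ```python
    # Dual-energy decomposition as constrained least squares: solve
    #   mu(E_low)  = a_bone*x_bone + a_soft*x_soft
    #   mu(E_high) = b_bone*x_bone + b_soft*x_soft
    # for non-negative basis densities. All numbers are placeholders.
    import numpy as np
    from scipy.optimize import nnls

    A = np.array([[0.573, 0.227],    # [bone, soft] attenuation at low kVp
                  [0.310, 0.184]])   # [bone, soft] attenuation at high kVp
    mu_measured = np.array([0.291, 0.207])   # measured mu at the two energies

    x, residual = nnls(A, mu_measured)
    print("bone-equivalent density:", x[0])
    print("soft-tissue density:   ", x[1])
    ```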

  4. Computer system for environmental sample analysis and data storage and analysis

    International Nuclear Information System (INIS)

    Brauer, F.P.; Fager, J.E.

    1976-01-01

    A minicomputer-based environmental sample analysis and data storage system has been developed. The system is used for analytical data acquisition, computation, storage of analytical results, and tabulation of selected or derived results for data analysis, interpretation and reporting. This paper discusses the structure, performance and applications of the system

  5. Energy Finite Element Analysis Developments for Vibration Analysis of Composite Aircraft Structures

    Science.gov (United States)

    Vlahopoulos, Nickolas; Schiller, Noah H.

    2011-01-01

    The Energy Finite Element Analysis (EFEA) has been utilized successfully for modeling complex structural-acoustic systems with isotropic structural material properties. In this paper, a formulation for modeling structures made out of composite materials is presented. An approach based on spectral finite element analysis is utilized first for developing the equivalent material properties for the composite material. These equivalent properties are employed in the EFEA governing differential equations for representing the composite materials and deriving the element level matrices. The power transmission characteristics at connections between members made out of non-isotropic composite material are considered for deriving suitable power transmission coefficients at junctions of interconnected members. These coefficients are utilized for computing the joint matrix that is needed to assemble the global system of EFEA equations. The global system of EFEA equations is solved numerically and the vibration levels within the entire system can be computed. The new EFEA formulation for modeling composite laminate structures is validated through comparison to test data collected from a representative composite aircraft fuselage that is made out of a composite outer shell and composite frames and stiffeners. NASA Langley constructed the composite cylinder and conducted the test measurements utilized in this work.

  6. Statistical Energy Analysis (SEA) and Energy Finite Element Analysis (EFEA) Predictions for a Floor-Equipped Composite Cylinder

    Science.gov (United States)

    Grosveld, Ferdinand W.; Schiller, Noah H.; Cabell, Randolph H.

    2011-01-01

    Comet Enflow is a commercially available, high frequency vibroacoustic analysis software founded on Energy Finite Element Analysis (EFEA) and Energy Boundary Element Analysis (EBEA). Energy Finite Element Analysis (EFEA) was validated on a floor-equipped composite cylinder by comparing EFEA vibroacoustic response predictions with Statistical Energy Analysis (SEA) and experimental results. Statistical Energy Analysis (SEA) predictions were made using the commercial software program VA One 2009 from ESI Group. The frequency region of interest for this study covers the one-third octave bands with center frequencies from 100 Hz to 4000 Hz.

  7. Design and analysis of tubular permanent magnet linear generator for small-scale wave energy converter

    Science.gov (United States)

    Kim, Jeong-Man; Koo, Min-Mo; Jeong, Jae-Hoon; Hong, Keyyong; Cho, Il-Hyoung; Choi, Jang-Young

    2017-05-01

    This paper reports the design and analysis of a tubular permanent magnet linear generator (TPMLG) for a small-scale wave-energy converter. The analytical field computation is performed by applying a magnetic vector potential and a 2-D analytical model to determine design parameters. Based on analytical solutions, parametric analysis is performed to meet the design specifications of a wave-energy converter (WEC). Then, 2-D FEA is employed to validate the analytical method. Finally, the experimental result confirms the predictions of the analytical and finite element analysis (FEA) methods under regular and irregular wave conditions.

  8. CDF GlideinWMS usage in Grid computing of high energy physics

    International Nuclear Information System (INIS)

    Zvada, Marian; Sfiligoi, Igor; Benjamin, Doug

    2010-01-01

    Many members of large science collaborations already have specialized grids available to advance their research, driven by the need for more computing resources for data analysis. This has forced the Collider Detector at Fermilab (CDF) collaboration to move beyond the usage of dedicated resources and start exploiting Grid resources. Nowadays, the CDF experiment is increasingly relying on glidein-based computing pools for data reconstruction, Monte Carlo production and user data analysis, serving over 400 users through the central analysis farm middleware (CAF) on top of the Condor batch system and the CDF Grid infrastructure. Condor is designed as a distributed architecture, and its glidein mechanism of pilot jobs is ideal for abstracting the Grid by creating a virtual private computing pool. We present the first production use of the generic pilot-based Workload Management System (glideinWMS), an implementation of the pilot mechanism based on the Condor distributed infrastructure. CDF Grid computing uses glideinWMS for its data reconstruction on the FNAL campus Grid, and for user analysis and Monte Carlo production across the Open Science Grid (OSG). We review this computing model and the setup used, including the CDF-specific configuration within the glideinWMS system, which provides powerful scalability and makes Grid computing work like a local batch environment, with the ability to handle more than 10000 running jobs at a time.

  9. Can a dual-energy computed tomography predict unsuitable stone components for extracorporeal shock wave lithotripsy?

    Science.gov (United States)

    Ahn, Sung Hoon; Oh, Tae Hoon; Seo, Ill Young

    2015-09-01

    To assess the potential of dual-energy computed tomography (DECT) to identify urinary stone components, particularly uric acid and calcium oxalate monohydrate, which are unsuitable for extracorporeal shock wave lithotripsy (ESWL). This clinical study included 246 patients who underwent removal of urinary stones and an analysis of stone components between November 2009 and August 2013. All patients received preoperative DECT using two energy values (80 kVp and 140 kVp). Hounsfield units (HU) were measured and matched to the stone component. Significant differences in HU values were observed between uric acid and non-uric acid stones, and between calcium oxalate monohydrate and other calcium stones, at both energy values (p<0.001). DECT improved the characterization of urinary stone components and was a useful method for identifying uric acid and calcium oxalate monohydrate stones, which are unsuitable for ESWL.

  10. Transition towards a low carbon economy: A computable general equilibrium analysis for Poland

    International Nuclear Information System (INIS)

    Böhringer, Christoph; Rutherford, Thomas F.

    2013-01-01

    In the transition to sustainable economic structures the European Union assumes a leading role with its climate and energy package, which sets ambitious greenhouse gas emission reduction targets by 2020. Among EU Member States, Poland, with its energy system's heavy reliance on coal, is particularly worried about the pending trade-offs between emission regulation and economic growth. In our computable general equilibrium analysis of the EU climate and energy package we show that economic adjustment costs for Poland hinge crucially on restrictions to where-flexibility of emission abatement, revenue recycling, and technological options in the power system. We conclude that more comprehensive flexibility provisions at the EU level and a diligent policy implementation at the national level could achieve the transition towards a low carbon economy at little cost, thereby broadening societal support. - Highlights: ► Economic impact assessment of the EU climate and energy package for Poland. ► Sensitivity analysis on where-flexibility, revenue recycling and technology choice. ► Application of a hybrid bottom-up, top-down CGE model

  11. Computational methods in power system analysis

    CERN Document Server

    Idema, Reijer

    2014-01-01

    This book treats state-of-the-art computational methods for power flow studies and contingency analysis. In the first part the authors present the relevant computational methods and mathematical concepts. In the second part, power flow and contingency analysis are treated. Furthermore, traditional methods to solve such problems are compared to modern solvers, developed using the knowledge of the first part of the book. Finally, these solvers are analyzed both theoretically and experimentally, clearly showing the benefits of the modern approach.

  12. Connecting free energy surfaces in implicit and explicit solvent: an efficient method to compute conformational and solvation free energies.

    Science.gov (United States)

    Deng, Nanjie; Zhang, Bin W; Levy, Ronald M

    2015-06-09

    The ability to accurately model solvent effects on free energy surfaces is important for understanding many biophysical processes including protein folding and misfolding, allosteric transitions, and protein–ligand binding. Although all-atom simulations in explicit solvent can provide an accurate model for biomolecules in solution, explicit solvent simulations are hampered by the slow equilibration on rugged landscapes containing multiple basins separated by barriers. In many cases, implicit solvent models can be used to significantly speed up the conformational sampling; however, implicit solvent simulations do not fully capture the effects of a molecular solvent, and this can lead to loss of accuracy in the estimated free energies. Here we introduce a new approach to compute free energy changes in which the molecular details of explicit solvent simulations are retained while also taking advantage of the speed of the implicit solvent simulations. In this approach, the slow equilibration in explicit solvent, due to the long waiting times before barrier crossing, is avoided by using a thermodynamic cycle which connects the free energy basins in implicit solvent and explicit solvent using a localized decoupling scheme. We test this method by computing conformational free energy differences and solvation free energies of the model system alanine dipeptide in water. The free energy changes between basins in explicit solvent calculated using fully explicit solvent paths agree with the corresponding free energy differences obtained using the implicit/explicit thermodynamic cycle to within 0.3 kcal/mol out of ∼3 kcal/mol at only ∼8% of the computational cost. We note that WHAM methods can be used to further improve the efficiency and accuracy of the implicit/explicit thermodynamic cycle.
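
    The bookkeeping behind such a thermodynamic cycle is simple: the explicit-solvent free energy difference between basins A and B is recovered from the implicit-solvent difference plus the implicit-to-explicit coupling legs at the two end points. A sketch of the arithmetic, with the sign convention stated in the comments; the numbers are invented for illustration.

    ```python
    # Free energy cycle connecting implicit- and explicit-solvent basins:
    #   A_exp -> A_imp -> B_imp -> B_exp
    # dG_couple(X): free energy of switching basin X from implicit to
    # explicit solvent. Values in kcal/mol; numbers below are invented.

    def explicit_dG(dG_implicit, dG_couple_A, dG_couple_B):
        """dG(A->B) in explicit solvent via the implicit-solvent path."""
        return -dG_couple_A + dG_implicit + dG_couple_B

    print(explicit_dG(dG_implicit=2.1, dG_couple_A=-8.4, dG_couple_B=-7.5))
    # -> 3.0 kcal/mol, a magnitude comparable to the alanine dipeptide case
    ```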

  13. Analysis of Energy Consumption for Ad Hoc Wireless Sensor Networks Using a Bit-Meter-per-Joule Metric

    Science.gov (United States)

    Gao, J. L.

    2002-04-01

    In this article, we present a system-level characterization of the energy consumption for sensor network application scenarios. We compute a power efficiency metric -- average watt-per-meter -- for each radio transmission and extend this local metric to find the global energy consumption. This analysis shows how overall energy consumption varies with transceiver characteristics, node density, data traffic distribution, and base-station location.
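
    The metric can be illustrated with the standard first-order radio energy model, in which transmitting b bits over distance d costs E = b(e_elec + e_amp d^n). The constants below are typical textbook values, not the article's parameters.

    ```python
    # First-order radio energy model (textbook constants, not the article's):
    #   E_tx(b, d) = b * (E_ELEC + E_AMP * d**N)
    # The bit-meter-per-joule figure of merit rewards data moved per joule.

    E_ELEC = 50e-9        # electronics energy, J/bit
    E_AMP = 100e-12       # amplifier energy, J/bit/m^2
    N = 2                 # path-loss exponent

    def tx_energy(bits, meters):
        return bits * (E_ELEC + E_AMP * meters**N)

    def bit_meters_per_joule(bits, meters):
        return (bits * meters) / tx_energy(bits, meters)

    print(bit_meters_per_joule(bits=1024 * 8, meters=80.0))  # ~1.2e8 bit*m/J
    ```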

  14. Energy analysis program. 1994 annual report

    Energy Technology Data Exchange (ETDEWEB)

    Levine, M.D.

    1995-04-01

    This report provides an energy analysis overview. The following topics are described: building energy analysis; urban and energy environmental issues; appliance energy efficiency standards; utility planning and policy; energy efficiency, economics, and policy issues; and international energy and environmental issues.

  15. Computer-assisted qualitative data analysis software.

    Science.gov (United States)

    Cope, Diane G

    2014-05-01

    Advances in technology have provided new approaches for data collection and analysis for researchers. Data collection is no longer limited to paper-and-pencil format, and numerous methods are now available through Internet and electronic resources. With these techniques, researchers are not burdened with entering data manually, and data analysis is facilitated by software programs. Quantitative research is supported by the use of computer software and provides ease in the management of large data sets and rapid analysis of numeric statistical methods. New technologies are emerging to support qualitative research with the availability of computer-assisted qualitative data analysis software (CAQDAS). CAQDAS will be presented with a discussion of advantages, limitations, controversial issues, and recommendations for this type of software use.

  16. Modified energy-deposition model, for the computation of the stopping-power ratio for small cavity sizes

    International Nuclear Information System (INIS)

    Janssens, A.C.A.

    1981-01-01

    This paper presents a modification to the Spencer-Attix theory, which allows application of the theory to larger cavity sizes. The modified theory is in better agreement with the actual process of energy deposition by delta rays. In the first part of the paper it is recalled how the Spencer-Attix theory can be derived from basic principles, which allows a physical interpretation of the theory in terms of a function describing the space and direction average of the deposited energy. A realistic model for the computation of this function is described and the resulting expression for the stopping-power ratio is calculated. For the comparison between the Spencer-Attix theory and this modified expression a correction factor to the 'Bragg-Gray inhomogeneous term' has been defined. This factor has been computed as a function of cavity size for different source energies and mean excitation energies; thus, general properties of this factor have been elucidated. The computations have been extended to include the density effect. It has been shown that the computation of the inhomogeneous term can be performed for any expression describing the energy loss per unit distance of the electrons as a function of their energy. Thus an expression has been calculated which is in agreement with a quadratic range-energy relationship. In conclusion, the concrete procedure for computing the stopping-power ratio is reviewed

  17. Dual-energy X-ray analysis using synchrotron computed tomography at 35 and 60 keV for the estimation of photon interaction coefficients describing attenuation and energy absorption.

    Science.gov (United States)

    Midgley, Stewart; Schleich, Nanette

    2015-05-01

    A novel method for dual-energy X-ray analysis (DEXA) is tested using measurements of the X-ray linear attenuation coefficient μ. The key is a mathematical model that describes elemental cross sections using a polynomial in atomic number. The model is combined with the mixture rule to describe μ for materials, using the same polynomial coefficients. Materials are characterized by their electron density Ne and statistical moments Rk describing their distribution of elements, analogous to the concept of effective atomic number. In an experiment with materials of known density and composition, measurements of μ are written as a system of linear simultaneous equations, which is solved for the polynomial coefficients. DEXA itself involves computed tomography (CT) scans at two energies to provide a system of non-linear simultaneous equations that are solved for Ne and the fourth statistical moment R4. Results are presented for phantoms containing dilute salt solutions and for a biological specimen. The experiment identifies 1% systematic errors in the CT measurements, arising from third-harmonic radiation, and 20-30% noise, which is reduced to 3-5% by pre-processing with the median filter and careful choice of reconstruction parameters. DEXA accuracy is quantified for the phantom as the mean absolute differences for Ne and R4: 0.8% and 1.0% for soft tissue and 1.2% and 0.8% for bone-like samples, respectively. The DEXA results for the biological specimen are combined with model coefficients obtained from the tabulations to predict μ and the mass energy absorption coefficient at energies of 10 keV to 20 MeV.
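
    The calibration step described here reduces to linear least squares: each material of known density and composition contributes one equation that is linear in the unknown polynomial coefficients. A sketch under simplified assumptions (a single energy, a low-order polynomial in atomic number, and invented moments and measurements; the real method fits tabulated cross sections):

    ```python
    # Calibration sketch: fit polynomial coefficients c_k so that
    #   mu / N_e ~ sum_k c_k * R_k, with R_k moments of the Z distribution.
    # All data below are invented for illustration.
    import numpy as np

    # Rows: calibration materials; columns: statistical moments R_0..R_3
    R = np.array([[1.0, 7.42, 55.9, 428.0],     # water-like
                  [1.0, 7.64, 60.2, 489.0],     # soft-tissue-like
                  [1.0, 12.3, 165.0, 2370.0],   # bone-like
                  [1.0, 8.11, 68.9, 607.0]])    # salt solution
    mu_over_Ne = np.array([0.532, 0.538, 0.702, 0.551])  # invented values

    coeffs, *_ = np.linalg.lstsq(R, mu_over_Ne, rcond=None)
    print("fitted polynomial coefficients:", coeffs)
    ```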

  18. PREFACE: International Conference on Computing in High Energy and Nuclear Physics (CHEP'09)

    Science.gov (United States)

    Gruntorad, Jan; Lokajicek, Milos

    2010-11-01

    The 17th International Conference on Computing in High Energy and Nuclear Physics (CHEP) was held on 21-27 March 2009 in Prague, Czech Republic. CHEP is a major series of international conferences for physicists and computing professionals from the worldwide High Energy and Nuclear Physics community, Computer Science, and Information Technology. The CHEP conference provides an international forum to exchange information on computing experience and needs for the community, and to review recent, ongoing and future activities. Recent conferences were held in Victoria, Canada in 2007, Mumbai, India in 2006, Interlaken, Switzerland in 2004, San Diego, USA in 2003, Beijing, China in 2001, and Padua, Italy in 2000. The CHEP'09 conference had 600 attendees with a program that included plenary sessions of invited oral presentations, a number of parallel sessions comprising 200 oral and 300 poster presentations, and an industrial exhibition. We thank all the presenters for the excellent scientific content of their contributions to the conference. Conference tracks covered topics on Online Computing, Event Processing, Software Components, Tools and Databases, Hardware and Computing Fabrics, Grid Middleware and Networking Technologies, Distributed Processing and Analysis, and Collaborative Tools. The conference included excursions to Prague and other Czech cities and castles, and a banquet held at the Zofin palace in Prague. The next CHEP conference will be held in Taipei, Taiwan on 18-22 October 2010. We would like to thank the Ministry of Education, Youth and Sports of the Czech Republic and the EU ACEOLE project for the conference support, as well as the commercial sponsors, the International Advisory Committee, and the Local Organizing Committee members representing the five collaborating Czech institutions: Jan Gruntorad (co-chair), CESNET, z.s.p.o., Prague; Andrej Kugler, Nuclear Physics Institute AS CR v.v.i., Rez; Rupert Leitner, Charles University in Prague, Faculty of Mathematics and

  19. Trajectory resolved analysis of LEIS energy spectra: Neutralization and surface structure

    International Nuclear Information System (INIS)

    Beikler, Robert; Taglauer, Edmund

    2001-01-01

    For a quantitative evaluation of low-energy ion scattering (LEIS) data with respect to surface composition and structure, a detailed analysis of the energy spectra is required. This includes the identification of multiple scattering processes and the determination of ion survival probabilities. We analyzed scattered-ion energy spectra using the computer code MARLOWE, for which we developed a new analysis routine that allows energy distributions to be recorded as a function of the number of projectile-target atom collisions, of the distance of closest approach, or of the scattering crystalline layer. This procedure also permits the determination of ion survival probabilities by applying simple collision-dependent neutralization models. Experimental energy spectra for various projectile (He+, Ne+, Na+) and target (transition metals, oxides) combinations are well reproduced, and quantitative results for ion survival probabilities are obtained. These are largely in agreement with results obtained in a different way for bimetallic crystal surfaces. Such MARLOWE calculations are also useful for the identification of structure-relevant processes. This is shown, as an example, for the reconstructed Au(1 1 0) surface, including a possibility to determine the (1x2)→(1x1) transition temperature

  20. Industry-level total-factor energy efficiency in developed countries: A Japan-centered analysis

    International Nuclear Information System (INIS)

    Honma, Satoshi; Hu, Jin-Li

    2014-01-01

    Highlights: • This study compares Japan with other developed countries for energy efficiency at the industry level. • We compute the total-factor energy efficiency (TFEE) for industries in 14 developed countries in 1995–2005. • Energy conservation can be further optimized in Japan’s industry sector. • Japan experienced a slight decrease in the weighted TFEE from 0.986 in 1995 to 0.927 in 2005. • Japan should adapt energy conservation technologies from the primary benchmark countries: Germany, UK, and USA. - Abstract: Japan’s energy security is more vulnerable today than it was before the Fukushima Daiichi nuclear power plant accident in March 2011. To alleviate its energy vulnerability, Japan has no choice but to improve energy efficiency. To aid in this improvement, this study compares Japan’s energy efficiency at the industry level with that of other developed countries. We compute the total-factor energy efficiency (TFEE) of industries in 14 developed countries for 1995–2005 using data envelopment analysis. We use four inputs: labor, capital stock, energy, and non-energy intermediate inputs. Value added is the only relevant output. Results indicate that Japan can further optimize energy conservation because it experienced only a marginal decrease in the weighted TFEE, from 0.986 in 1995 to 0.927 in 2005. To improve inefficient industries, Japan should adapt energy conservation technologies from benchmark countries such as Germany, the United Kingdom, and the United States
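
    Scores like the TFEE are computed with data envelopment analysis (DEA). The sketch below sets up the standard input-oriented, constant-returns-to-scale DEA linear program on invented data; in the Hu-Wang formulation the TFEE is then the ratio of the target (frontier) energy input to the actual energy input derived from such a solution. This is generic DEA machinery, not the paper's exact specification.

    ```python
    # Input-oriented, constant-returns DEA: the machinery behind TFEE-style
    # scores (generic sketch on invented data, not the paper's setup).
    # For each unit o: minimize theta such that a composite of all units
    # uses at most theta * x_o of every input while producing at least y_o.
    import numpy as np
    from scipy.optimize import linprog

    X = np.array([[4.0, 6.0, 9.0, 5.0],      # input 1 (e.g. labor)
                  [3.0, 2.0, 7.0, 6.0]])     # input 2 (e.g. energy)
    Y = np.array([[10.0, 12.0, 14.0, 9.0]])  # output (e.g. value added)
    m, n = X.shape
    s = Y.shape[0]

    for o in range(n):
        c = np.zeros(1 + n)
        c[0] = 1.0                            # minimize theta
        A_ub = np.zeros((m + s, 1 + n))
        b_ub = np.zeros(m + s)
        A_ub[:m, 0] = -X[:, o]                # -theta * x_o ...
        A_ub[:m, 1:] = X                      # ... + sum(lam * x) <= 0
        A_ub[m:, 1:] = -Y                     # sum(lam * y) >= y_o
        b_ub[m:] = -Y[:, o]
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (1 + n))
        print(f"unit {o}: radial efficiency theta = {res.x[0]:.3f}")
    ```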

  1. A non-oscillatory energy-splitting method for the computation of compressible multi-fluid flows

    Science.gov (United States)

    Lei, Xin; Li, Jiequan

    2018-04-01

    This paper proposes a new non-oscillatory energy-splitting conservative algorithm for computing multi-fluid flows in the Eulerian framework. In comparison with existing multi-fluid algorithms in the literature, it is shown that the mass fraction model with isobaric hypothesis is a plausible choice for designing numerical methods for multi-fluid flows. Then we construct a conservative Godunov-based scheme with the high order accurate extension by using the generalized Riemann problem solver, through the detailed analysis of kinetic energy exchange when fluids are mixed under the hypothesis of isobaric equilibrium. Numerical experiments are carried out for the shock-interface interaction and shock-bubble interaction problems, which display the excellent performance of this type of schemes and demonstrate that nonphysical oscillations are suppressed around material interfaces substantially.

  2. Department of Energy: MICS (Mathematical, Information, and Computational Sciences Division). High performance computing and communications program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-06-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, "The DOE Program in HPCC"), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via uniform resource locators (URLs) on the World Wide Web (WWW). The information pointed to by the URLs is updated frequently, and the interested reader is urged to access the WWW for the latest information.

  3. Accident sequence analysis of human-computer interface design

    International Nuclear Information System (INIS)

    Fan, C.-F.; Chen, W.-H.

    2000-01-01

    It is important to predict potential accident sequences of human-computer interaction in a safety-critical computing system so that vulnerable points can be disclosed and removed. We address this issue by proposing a Multi-Context human-computer interaction Model along with its analysis techniques, an Augmented Fault Tree Analysis, and a Concurrent Event Tree Analysis. The proposed augmented fault tree can identify the potential weak points in software design that may induce unintended software functions or erroneous human procedures. The concurrent event tree can enumerate possible accident sequences due to these weak points

  4. SIVEH: Numerical Computing Simulation of Wireless Energy-Harvesting Sensor Nodes

    Directory of Open Access Journals (Sweden)

    Pedro Yuste

    2013-09-01

    Full Text Available The paper presents a numerical energy-harvesting model for sensor nodes, SIVEH (Simulator I–V for EH), based on I–V hardware tracking. I–V tracking is demonstrated to be more accurate than traditional energy modeling techniques when some of the components present different power dissipation at different operating voltages or drawn currents. SIVEH numerical computing allows fast simulation of long periods of time—days, weeks, months or years—using real solar radiation curves. Moreover, SIVEH modeling has been enhanced with dynamic adjustment of the sleep time rate, while seeking energy-neutral operation. This paper presents the model description, a functional verification and a critical comparison with the classic energy approach.
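
    At its core, such a simulator is a time-stepped energy balance whose duty cycle is adjusted toward energy-neutral operation. The toy sketch below replaces real I-V hardware tracking with a fixed power draw per state and uses an invented sinusoidal harvest profile; all constants are illustrative.

    ```python
    # Toy energy-harvesting node simulation: battery balance per time step,
    # with the sleep/active duty cycle nudged toward energy-neutral
    # operation. Constants and the day/night harvest profile are invented.
    import math

    P_ACTIVE, P_SLEEP = 60e-3, 50e-6      # node power draw, watts
    CAPACITY_J = 3.7 * 2.0 * 3600         # 2 Ah battery at 3.7 V, joules
    DT = 60.0                             # time step: one minute

    def harvested_power(t_s):
        """Crude day/night solar profile, watts."""
        return max(0.0, 80e-3 * math.sin(2 * math.pi * t_s / 86400.0))

    battery = 0.5 * CAPACITY_J
    duty = 0.10                           # fraction of each step spent active
    for step in range(7 * 24 * 60):       # one week, minute resolution
        t = step * DT
        load = duty * P_ACTIVE + (1 - duty) * P_SLEEP
        battery += (harvested_power(t) - load) * DT
        battery = min(max(battery, 0.0), CAPACITY_J)
        # adapt duty cycle: work harder when charge is high, rest when low
        duty = min(0.5, max(0.01, duty + 1e-4 * (battery / CAPACITY_J - 0.5)))

    print(f"state of charge after one week: {battery / CAPACITY_J:.1%}")
    ```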

  5. Energy models: methods and trends

    Energy Technology Data Exchange (ETDEWEB)

    Reuter, A [Division of Energy Management and Planning, Verbundplan, Klagenfurt (Austria); Kuehner, R [IER Institute for Energy Economics and the Rational Use of Energy, University of Stuttgart, Stuttgart (Germany); Wohlgemuth, N [Department of Economy, University of Klagenfurt, Klagenfurt (Austria)

    1997-12-31

    Energy, environmental and economic systems do not allow for experimentation, since this would be dangerous, too expensive or even impossible. Instead, mathematical models are applied for energy planning. Experimenting is replaced by varying the structure and some parameters of 'energy models', computing the values of dependent parameters, comparing variations, and interpreting their outcomes. Energy models are as old as computers. In this article the major new developments in energy modeling are pointed out. We distinguish between 3 drivers of new developments: progress in computer technology, methodological progress and novel tasks of energy system analysis and planning. 2 figs., 19 refs.

  6. Energy models: methods and trends

    International Nuclear Information System (INIS)

    Reuter, A.; Kuehner, R.; Wohlgemuth, N.

    1996-01-01

    Energy, environmental and economic systems do not allow for experimentation, since this would be dangerous, too expensive or even impossible. Instead, mathematical models are applied for energy planning. Experimenting is replaced by varying the structure and some parameters of 'energy models', computing the values of dependent parameters, comparing variations, and interpreting their outcomes. Energy models are as old as computers. In this article the major new developments in energy modeling are pointed out. We distinguish between 3 drivers of new developments: progress in computer technology, methodological progress and novel tasks of energy system analysis and planning

  7. Computing at the leading edge: Research in the energy sciences

    Energy Technology Data Exchange (ETDEWEB)

    Mirin, A.A.; Van Dyke, P.T. [eds.

    1994-02-01

    The purpose of this publication is to highlight selected scientific challenges that have been undertaken by the DOE Energy Research community. The high quality of the research reflected in these contributions underscores the growing importance both to the Grand Challenge scientific efforts sponsored by DOE and of the related supporting technologies that the National Energy Research Supercomputer Center (NERSC) and other facilities are able to provide. The continued improvement of the computing resources available to DOE scientists is prerequisite to ensuring their future progress in solving the Grand Challenges. Titles of articles included in this publication include: the numerical tokamak project; static and animated molecular views of a tumorigenic chemical bound to DNA; toward a high-performance climate systems model; modeling molecular processes in the environment; lattice Boltzmann models for flow in porous media; parallel algorithms for modeling superconductors; parallel computing at the Superconducting Super Collider Laboratory; the advanced combustion modeling environment; adaptive methodologies for computational fluid dynamics; lattice simulations of quantum chromodynamics; simulating high-intensity charged-particle beams for the design of high-power accelerators; electronic structure and phase stability of random alloys.

  8. Computing at the leading edge: Research in the energy sciences

    International Nuclear Information System (INIS)

    Mirin, A.A.; Van Dyke, P.T.

    1994-01-01

    The purpose of this publication is to highlight selected scientific challenges that have been undertaken by the DOE Energy Research community. The high quality of the research reflected in these contributions underscores the growing importance both to the Grand Challenge scientific efforts sponsored by DOE and of the related supporting technologies that the National Energy Research Supercomputer Center (NERSC) and other facilities are able to provide. The continued improvement of the computing resources available to DOE scientists is prerequisite to ensuring their future progress in solving the Grand Challenges. Titles of articles included in this publication include: the numerical tokamak project; static and animated molecular views of a tumorigenic chemical bound to DNA; toward a high-performance climate systems model; modeling molecular processes in the environment; lattice Boltzmann models for flow in porous media; parallel algorithms for modeling superconductors; parallel computing at the Superconducting Super Collider Laboratory; the advanced combustion modeling environment; adaptive methodologies for computational fluid dynamics; lattice simulations of quantum chromodynamics; simulating high-intensity charged-particle beams for the design of high-power accelerators; electronic structure and phase stability of random alloys

  9. Economic analysis of energy supply and national economy on the basis of general equilibrium models. Applications of the input-output decomposition analysis and the Computable General Equilibrium models shown by the example of Korea

    International Nuclear Information System (INIS)

    Ko, Jong-Hwan.

    1993-01-01

    Firstly, this study investigates the causes of sectoral growth and structural change in the Korean economy. Secondly, it develops the framework of a consistent economic model in order to investigate simultaneously the different impacts of changes in energy and in the domestic economy. This is done with both an input-output decomposition analysis and a Computable General Equilibrium model (CGE model). The CGE model eliminates the disadvantages of the IO model and allows the investigation of the interdependence of the various energy sectors with the rest of the economy. A Social Accounting Matrix serves as the data basis of the CGE model. Simulation experiments have been carried out with the help of the CGE model, indicating the likely impact of an oil price shock on the economy, both sectorally and in aggregate. (orig.) [de

  10. Computing conformational free energy differences in explicit solvent: An efficient thermodynamic cycle using an auxiliary potential and a free energy functional constructed from the end points.

    Science.gov (United States)

    Harris, Robert C; Deng, Nanjie; Levy, Ronald M; Ishizuka, Ryosuke; Matubayasi, Nobuyuki

    2017-06-05

    Many biomolecules undergo conformational changes associated with allostery or ligand binding. Observing these changes in computer simulations is difficult if their timescales are long. These calculations can be accelerated by observing the transition on an auxiliary free energy surface with a simpler Hamiltonian and connecting this free energy surface to the target free energy surface with free energy calculations. Here, we show that the free energy legs of the cycle can be replaced with energy representation (ER) density functional approximations. We compute: (1) the conformational free energy changes for alanine dipeptide transitioning from the right-handed free energy basin to the left-handed basin, and (2) the free energy difference between the open and closed conformations of β-cyclodextrin, a "host" molecule that serves as a model for molecular recognition in host-guest binding. β-cyclodextrin contains 147 atoms compared to 22 atoms for alanine dipeptide, making β-cyclodextrin a large molecule for which to compute solvation free energies by free energy perturbation or integration methods and the largest system for which the ER method has been compared to exact free energy methods. The ER method replaced the 28 simulations needed to compute each coupling free energy with two endpoint simulations, reducing the computational time for the alanine dipeptide calculation by about 70% and for β-cyclodextrin by more than 95%. The method works even when the distribution of conformations on the auxiliary free energy surface differs substantially from that on the target free energy surface, although some degree of overlap between the two surfaces is required. © 2016 Wiley Periodicals, Inc.

  11. High Energy Physics Computer Networking: Report of the HEPNET Review Committee

    International Nuclear Information System (INIS)

    1988-06-01

    This paper discusses the computer networks available to high energy physics facilities for transmission of data. Topics covered in this paper are: Existing and planned networks and HEPNET requirements

  12. Energy-Efficient Abundant-Data Computing: The N3XT 1,000X

    OpenAIRE

    Aly Mohamed M. Sabry; Gao Mingyu; Hills Gage; Lee Chi-Shuen; Pinter Greg; Shulaker Max M.; Wu Tony F.; Asheghi Mehdi; Bokor Jeff; Franchetti Franz; Goodson Kenneth E.; Kozyrakis Christos; Markov Igor; Olukotun Kunle; Pileggi Larry

    2015-01-01

    Next-generation information technologies will process unprecedented amounts of loosely structured data that overwhelm existing computing systems. N3XT improves the energy efficiency of abundant-data applications 1,000-fold by using new logic and memory technologies, 3D integration with fine-grained connectivity, and new architectures for computation immersed in memory.

  13. An energy-efficient failure detector for vehicular cloud computing.

    Science.gov (United States)

    Liu, Jiaxi; Wu, Zhibo; Dong, Jian; Wu, Jin; Wen, Dongxin

    2018-01-01

    Failure detectors are one of the fundamental components for maintaining the high availability of vehicular cloud computing. In vehicular cloud computing, many roadside units (RSUs) are deployed along the road to improve connectivity. Many of them are equipped with solar batteries due to the unavailability or excessive expense of wired electrical power, so it is important to reduce the battery consumption of RSUs. However, existing failure detection algorithms are not designed to save the battery consumption of RSUs. To solve this problem, a new energy-efficient failure detector, 2E-FD, has been proposed specifically for vehicular cloud computing. 2E-FD not only provides an acceptable failure detection service, but also saves the battery consumption of RSUs. Through comparative experiments, the results show that our failure detector has better performance in terms of speed, accuracy and battery consumption.
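
    Most detectors in this family are heartbeat-based: a monitor records the last liveness message from each node and suspects a node once the expected interval plus a safety margin elapses. The sketch below is that generic pattern, not the paper's 2E-FD algorithm; widening the margin trades detection speed for fewer false suspicions (and fewer radio wake-ups).

    ```python
    # Generic heartbeat failure detector (not the paper's 2E-FD algorithm):
    # suspect a node when no heartbeat arrives within the expected interval
    # plus a safety margin.
    import time

    class HeartbeatDetector:
        def __init__(self, expected_interval_s, margin_s):
            self.interval = expected_interval_s
            self.margin = margin_s
            self.last_seen = {}

        def heartbeat(self, node_id, now=None):
            self.last_seen[node_id] = now if now is not None else time.time()

        def is_suspected(self, node_id, now=None):
            now = now if now is not None else time.time()
            last = self.last_seen.get(node_id)
            return last is None or now - last > self.interval + self.margin

    fd = HeartbeatDetector(expected_interval_s=5.0, margin_s=2.0)
    fd.heartbeat("rsu-17", now=100.0)
    print(fd.is_suspected("rsu-17", now=104.0))   # False: within window
    print(fd.is_suspected("rsu-17", now=110.0))   # True: heartbeat overdue
    ```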

  14. A Practice-Oriented Bifurcation Analysis for Pulse Energy Converters: A Stability Margin

    Science.gov (United States)

    Kolokolov, Yury; Monovskaya, Anna

    The popularity of systems of pulse energy conversion (PEC systems) for practical applications is due to the heightened efficiency of energy conversion processes with comparatively simple realizations. Nevertheless, a PEC system represents a nonlinear object with a variable structure, and bifurcation analysis remains the basic tool to describe the evolution of PEC dynamics. The paper is devoted to the discussion of whether the scientific viewpoint on natural nonlinear dynamics evolution can be applied in practice. We focus on the problems connected with the stability boundaries of an operating regime. The results of both small-signal analysis and computational bifurcation analysis are considered in the parameter space, in comparison with the results of the experimental identification of the zonal heterogeneity of the operating process. This allows us to propose an adapted stability margin as a sufficiently safe distance before the point at which the operating process begins to lose stability. Such a stability margin can extend the permissible operating domain in the parameter space by using cause-and-effect relations in the context of the natural regularities of nonlinear dynamics. The reasoning and discussion are based on experimental and computational results for a synchronous buck converter with pulse-width modulation. The presented results can be useful, first of all, for PEC systems with significant variation of equivalent inductance and/or capacitance. We believe that the discussion supports a viewpoint by which the contemporary methods of computational and experimental bifurcation analysis possess both the analytical abilities and the experimental techniques needed for promising, practice-oriented solutions for PEC systems.

  15. Data intensive high energy physics analysis in a distributed cloud

    International Nuclear Information System (INIS)

    Charbonneau, A; Impey, R; Podaima, W; Agarwal, A; Anderson, M; Armstrong, P; Fransham, K; Gable, I; Harris, D; Leavett-Brown, C; Paterson, M; Sobie, R J; Vliet, M

    2012-01-01

    We show that distributed Infrastructure-as-a-Service (IaaS) compute clouds can be effectively used for the analysis of high energy physics data. We have designed a distributed cloud system that works with any application using large input data sets requiring a high throughput computing environment. The system uses IaaS-enabled science and commercial clusters in Canada and the United States. We describe the process in which a user prepares an analysis virtual machine (VM) and submits batch jobs to a central scheduler. The system boots the user-specific VM on one of the IaaS clouds, runs the jobs and returns the output to the user. The user application accesses a central database for calibration data during the execution of the application. Similarly, the data is located in a central location and streamed by the running application. The system can easily run one hundred simultaneous jobs in an efficient manner and should scale to many hundreds and possibly thousands of user jobs.

  16. Data intensive high energy physics analysis in a distributed cloud

    Science.gov (United States)

    Charbonneau, A.; Agarwal, A.; Anderson, M.; Armstrong, P.; Fransham, K.; Gable, I.; Harris, D.; Impey, R.; Leavett-Brown, C.; Paterson, M.; Podaima, W.; Sobie, R. J.; Vliet, M.

    2012-02-01

    We show that distributed Infrastructure-as-a-Service (IaaS) compute clouds can be effectively used for the analysis of high energy physics data. We have designed a distributed cloud system that works with any application using large input data sets requiring a high throughput computing environment. The system uses IaaS-enabled science and commercial clusters in Canada and the United States. We describe the process in which a user prepares an analysis virtual machine (VM) and submits batch jobs to a central scheduler. The system boots the user-specific VM on one of the IaaS clouds, runs the jobs and returns the output to the user. The user application accesses a central database for calibration data during the execution of the application. Similarly, the data is located in a central location and streamed by the running application. The system can easily run one hundred simultaneous jobs in an efficient manner and should scale to many hundreds and possibly thousands of user jobs.

  17. Analysis of Wigner energy release process in graphite stack of shut-down uranium-graphite reactor

    OpenAIRE

    Bespala, E. V.; Pavliuk, A. O.; Kotlyarevskiy, S. G.

    2015-01-01

    Data obtained during differential thermal analysis of irradiated graphite samples are presented. Results of computational modeling of the Wigner energy release process from an irradiated graphite stack are demonstrated. It is shown that spontaneous combustion of the graphite is possible only in the adiabatic case.

  18. Belle computing system

    International Nuclear Information System (INIS)

    Adachi, Ichiro; Hibino, Taisuke; Hinz, Luc; Itoh, Ryosuke; Katayama, Nobu; Nishida, Shohei; Ronga, Frederic; Tsukamoto, Toshifumi; Yokoyama, Masahiko

    2004-01-01

    We describe the present status of the computing system in the Belle experiment at the KEKB e+e- asymmetric-energy collider. So far, we have logged more than 160 fb-1 of data, corresponding to the world's largest data sample of 170M BB-bar pairs at the Υ(4S) energy region. A large amount of event data has to be processed to produce an analysis event sample in a timely fashion. In addition, Monte Carlo events have to be created to control systematic errors accurately. This requires stable and efficient usage of computing resources. Here, we review our computing model and then describe how we efficiently carry out DST/MC production in our system

  19. Development of analytical software for semi-quantitative analysis of x-ray spectrum acquired from energy-dispersive spectrometer

    International Nuclear Information System (INIS)

    Karim, A.; Rana, M.A.; Qamar, R.; Latif, A; Ahmad, M.; Farooq, M.A.; Ahmad, Z.

    2003-12-01

    A software package for elemental analysis of X-ray spectra obtained from an Energy Dispersive Spectrometer (EDS) attached to a Scanning Electron Microscope (SEM) has been developed. A Personal Computer Analyzer card, PCA-800, is used to acquire data from the EDS. The spectrum is obtained in binary format, which is transformed into ASCII format using the PCAII card software. The program is modular in construction and coded using Microsoft's QuickBASIC compiler/linker. An energy line library containing all element lines is created for analysis of the acquired characteristic X-ray spectrum. Two techniques of peak identification are provided. Statistical tools are employed for smoothing of a curve and for computing the area under the curve. Elemental concentration is calculated in weight % and in atomic %. (author)
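
    The two core spectrum operations described here, smoothing and peak identification with net-area computation, can be sketched with standard signal-processing tools. Everything below (the synthetic two-peak spectrum, the 10 eV/channel calibration, the crude linear background) is an invented illustration, not the PCA-800 workflow.

    ```python
    # Sketch of EDS spectrum processing: smooth, find peaks, estimate
    # net peak areas above a crude linear background. Synthetic data.
    import numpy as np
    from scipy.signal import find_peaks, savgol_filter

    channels = np.arange(1024)
    kev = channels * 0.01                  # assumed 10 eV/channel calibration
    spectrum = (200 * np.exp(-0.5 * ((kev - 1.74) / 0.06) ** 2) +  # Si K-alpha
                120 * np.exp(-0.5 * ((kev - 6.40) / 0.08) ** 2) +  # Fe K-alpha
                np.random.default_rng(0).poisson(20, channels.size))

    smooth = savgol_filter(spectrum, window_length=11, polyorder=3)
    peaks, _ = find_peaks(smooth, prominence=50)

    for p in peaks:
        lo, hi = max(p - 15, 0), min(p + 15, smooth.size)
        background = (smooth[lo] + smooth[hi - 1]) / 2 * (hi - lo)
        net_area = smooth[lo:hi].sum() - background
        print(f"peak at {kev[p]:.2f} keV, net area ~ {net_area:.0f} counts")
    ```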

  20. An approach to quantum-computational hydrologic inverse analysis.

    Science.gov (United States)

    O'Malley, Daniel

    2018-05-02

    Making predictions about flow and transport in an aquifer requires knowledge of the heterogeneous properties of the aquifer such as permeability. Computational methods for inverse analysis are commonly used to infer these properties from quantities that are more readily observable such as hydraulic head. We present a method for computational inverse analysis that utilizes a type of quantum computer called a quantum annealer. While quantum computing is in an early stage compared to classical computing, we demonstrate that it is sufficiently developed that it can be used to solve certain subsurface flow problems. We utilize a D-Wave 2X quantum annealer to solve 1D and 2D hydrologic inverse problems that, while small by modern standards, are similar in size and sometimes larger than hydrologic inverse problems that were solved with early classical computers. Our results and the rapid progress being made with quantum computing hardware indicate that the era of quantum-computational hydrology may not be too far in the future.
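
    Quantum annealers require the problem to be cast as a QUBO (quadratic unconstrained binary optimization). One common encoding for inverse problems makes each binary variable a high/low-permeability indicator for one grid cell, so the data misfit of a linear forward model becomes quadratic in the bits. The toy sketch below uses invented data and solves the objective by brute force; an annealer would sample the same objective.

    ```python
    # Toy QUBO-style encoding of a 1D inverse problem: each bit decides
    # whether a cell is high- or low-permeability; minimize misfit to the
    # observed heads. Brute force here; an annealer samples the same cost.
    import itertools
    import numpy as np

    observed = np.array([1.0, 0.4, 0.2])   # invented "head" observations

    def forward(bits):
        """Invented linear forward model: permeability flags -> heads."""
        G = np.array([[0.9, 0.1, 0.0],
                      [0.3, 0.5, 0.2],
                      [0.0, 0.2, 0.4]])
        return G @ np.array(bits, dtype=float)

    best = min(itertools.product([0, 1], repeat=3),
               key=lambda b: np.sum((forward(b) - observed) ** 2))
    print("best permeability pattern:", best)
    ```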

  1. Interface between computational fluid dynamics (CFD) and plant analysis computer codes

    International Nuclear Information System (INIS)

    Coffield, R.D.; Dunckhorst, F.F.; Tomlinson, E.T.; Welch, J.W.

    1993-01-01

    Computational fluid dynamics (CFD) can provide valuable input to the development of advanced plant analysis computer codes. The types of interfacing discussed in this paper will directly contribute to modeling and accuracy improvements throughout the plant system and should result in significant reduction of design conservatisms that have been applied to such analyses in the past

  2. Computational design of RNAs with complex energy landscapes.

    Science.gov (United States)

    Höner zu Siederdissen, Christian; Hammer, Stefan; Abfalter, Ingrid; Hofacker, Ivo L; Flamm, Christoph; Stadler, Peter F

    2013-12-01

    RNA has become an integral building material in synthetic biology. Dominated by their secondary structures, which can be computed efficiently, RNA molecules are amenable not only to in vitro and in vivo selection, but also to rational, computation-based design. While the inverse folding problem of constructing an RNA sequence with a prescribed ground-state structure has received considerable attention for nearly two decades, there have been few efforts to design RNAs that can switch between distinct prescribed conformations. We introduce a user-friendly tool for designing RNA sequences that fold into multiple target structures. The underlying algorithm makes use of a combination of graph coloring and heuristic local optimization to find sequences whose energy landscapes are dominated by the prescribed conformations. A flexible interface allows the specification of a wide range of design goals. We demonstrate that bi- and tri-stable "switches" can be designed easily with moderate computational effort for the vast majority of compatible combinations of desired target structures. RNAdesign is freely available under the GPL-v3 license. Copyright © 2013 Wiley Periodicals, Inc.

  3. Computational force, mass, and energy

    International Nuclear Information System (INIS)

    Numrich, R.W.

    1997-01-01

    This paper describes a correspondence between computational quantities commonly used to report computer performance measurements and mechanical quantities from classical Newtonian mechanics. It defines a set of three fundamental computational quantities that are sufficient to establish a system of computational measurement. From these quantities, it defines derived computational quantities that have analogous physical counterparts. These computational quantities obey three laws of motion in computational space. The solutions to the equations of motion, with appropriate boundary conditions, determine the computational mass of the computer. Computational forces, with magnitudes specific to each instruction and to each computer, overcome the inertia represented by this mass. The paper suggests normalizing the computational mass scale by picking the mass of a register on the CRAY-1 as the standard unit of mass

  4. Structural Modeling and Analysis of a Wave Energy Converter Applying Dynamical Substructuring Method

    DEFF Research Database (Denmark)

    Zurkinden, Andrew Stephen; Damkilde, Lars; Gao, Zhen

    2013-01-01

    This paper deals with structural modeling and analysis of a wave energy converter. The device, called Wavestar, is a bottom-fixed structure located in a shallow-water environment at the Danish northwest coast. The analysis is concentrated on a single float and its structural arm, which connects the WEC to a jack-up structure. The wave energy converter is characterized by having an operational and a survival mode. The survival mode drastically reduces the exposure to waves and therefore the wave loads. Structural response analysis of the Wavestar arm is carried out in this study. Due to the relatively stiff behavior of the arm, the calculation can be reduced to a quasi-static analysis; the hydrodynamic and the structural analyses are thus performed separately. In order to reduce the computational time of the finite element calculation, the main structure is modeled as a superelement.

  5. Computer-Aided Qualitative Data Analysis with Word

    Directory of Open Access Journals (Sweden)

    Bruno Nideröst

    2002-05-01

    Full Text Available Despite some fragmentary references in the literature about qualitative methods, it is fairly unknown that Word can be successfully used for computer-aided qualitative data analysis (QDA). Based on several standard Word operations, elementary QDA functions such as sorting data, code-and-retrieve and frequency counts can be realized. Word is particularly interesting for those users who wish to gain first experience with computer-aided analysis before investing time and money in a specialized QDA program. The well-known standard software could also be an option for those qualitative researchers who usually work with word processing but have certain reservations towards computer-aided analysis. The following article deals with the most important requirements and options of Word for computer-aided QDA. URN: urn:nbn:de:0114-fqs0202225

  6. Computational analysis of a multistage axial compressor

    Science.gov (United States)

    Mamidoju, Chaithanya

    Turbomachines are used extensively in the aerospace, power generation, and oil & gas industries. The efficiency of these machines is often an important factor and has led to continuous efforts to improve designs to achieve better efficiency. The axial flow compressor is a major component in a gas turbine, with the turbine's overall performance depending strongly on compressor performance. Traditional analysis of axial compressors involves throughflow calculations, isolated blade passage analysis, quasi-3D blade-to-blade analysis, single-stage (rotor-stator) analysis, and multi-stage analysis involving larger design cycles. In the current study, the detailed flow through a 15-stage axial compressor is analyzed using a 3-D Navier-Stokes CFD solver in a parallel computing environment. A methodology is described for steady-state (frozen rotor-stator) analysis of one blade passage per component. Various effects such as mesh type and density, boundary conditions, and tip clearance, and numerical issues such as turbulence model choice, advection model choice, and parallel processing performance, are analyzed. A high sensitivity of the predictions to the above was found. Physical explanations of the flow features observed in the computational study are given. The total pressure rise versus mass flow rate was computed.

  7. Surface computing and collaborative analysis work

    CERN Document Server

    Brown, Judith; Gossage, Stevenson; Hack, Chris

    2013-01-01

    Large surface computing devices (wall-mounted or tabletop) with touch interfaces, and their application to collaborative data analysis, an increasingly important and prevalent activity, are the primary topic of this book. Our goals are to outline the fundamentals of surface computing (a still maturing technology), review relevant work on collaborative data analysis, describe frameworks for understanding collaborative processes, and provide a better understanding of the opportunities for research and development. We describe surfaces as display technologies with which people can interact directly, and emphasize how interaction design changes when designing for large surfaces. We review efforts to use large displays, surfaces or mixed display environments to enable collaborative analytic activity. Collaborative analysis is important in many domains, but to provide concrete examples and a specific focus, we frequently consider analysis work in the security domain, and in particular the challenges security personnel...

  8. Energy-resolved computed tomography: first experimental results

    International Nuclear Information System (INIS)

    Shikhaliev, Polad M

    2008-01-01

    First experimental results with energy-resolved computed tomography (CT) are reported. The contrast-to-noise ratio (CNR) in CT has been improved with x-ray energy weighting for the first time. Further, x-ray energy weighting improved the CNR in material decomposition CT when applied to CT projections prior to dual-energy subtraction. Existing CT systems use an energy (charge) integrating x-ray detector that provides a signal proportional to the energy of the x-ray photon. Thus, x-ray photons with lower energies are scored less than those with higher energies. This underestimates the contribution of lower-energy photons, which would provide higher contrast. The highest CNR can be achieved if the x-ray photons are scored by a factor that increases as the x-ray energy decreases. This can be performed by detecting each x-ray photon separately and measuring its energy. The energy-selective CT data can then be saved, and any weighting factor can be applied digitally to a detected x-ray photon. The CT system includes a photon counting detector with linear arrays of pixels made from cadmium zinc telluride (CZT) semiconductor. A cylindrical phantom with 10.2 cm diameter made from tissue-equivalent material was used for CT imaging. The phantom included contrast elements representing calcifications, iodine, adipose and glandular tissue. The x-ray tube voltage was 120 kVp. The energy-selective CT data were acquired and used to generate energy-weighted and material-selective CT images. The energy-weighted and material decomposition CT images were generated using a single CT scan at a fixed x-ray tube voltage. For material decomposition the x-ray spectrum was digitally split into low- and high-energy parts and dual-energy subtraction was applied. The x-ray energy weighting resulted in CNR improvement of calcifications and iodine by factors of 1.40 and 1.63, respectively, as compared to conventional charge integrating CT. The x-ray energy weighting was also applied
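
    The weighting idea lends itself to a small numerical illustration. The sketch below assumes a toy five-bin photon-counting measurement and a w(E) ~ E^-3 weighting form, and compares the CNR of charge-integrating-like, flat (counting), and low-energy-favoring weights; none of the numbers are taken from the paper.

    ```python
    # Hedged sketch of photon-counting energy weighting: bins of recorded
    # photons are re-weighted so low-energy (higher-contrast) photons count
    # more. Bin energies, counts, and the w(E) form are assumptions.
    import numpy as np

    rng = np.random.default_rng(1)
    energies = np.array([30., 50., 70., 90., 110.])        # keV bin centers (assumed)
    counts_bg  = rng.poisson([900, 1400, 1200, 700, 300])  # background region
    counts_roi = rng.poisson([780, 1300, 1150, 685, 296])  # contrast element region

    def cnr(weights):
        """Contrast-to-noise ratio of the weighted bin sums (Poisson noise model)."""
        s_roi, s_bg = weights @ counts_roi, weights @ counts_bg
        noise = np.sqrt(weights**2 @ (counts_roi + counts_bg))
        return abs(s_bg - s_roi) / noise

    flat = np.ones_like(energies)                 # photon counting, equal weights
    integrating = energies / energies.mean()      # charge integrating ~ weight by E
    low_e = (energies / energies.min()) ** -3     # w(E) ~ E^-3 (assumed form)

    for name, w in [("integrating", integrating), ("counting", flat), ("E^-3", low_e)]:
        print(f"{name:12s} CNR = {cnr(w):.2f}")
    ```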

  9. 17th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2016)

    International Nuclear Information System (INIS)

    2016-01-01

    Preface The 2016 version of the International Workshop on Advanced Computing and Analysis Techniques in Physics Research took place on January 18-22, 2016, at the Universidad Técnica Federico Santa Maria -UTFSM- in Valparaiso, Chile. The present volume of IOP Conference Series is devoted to the selected scientific contributions presented at the workshop. In order to guarantee the scientific quality of the Proceedings, all papers were thoroughly peer-reviewed by an ad-hoc Editorial Committee with the help of many careful reviewers. The ACAT Workshop series has a long tradition starting in 1990 (Lyon, France), and takes place at intervals of a year and a half. Formerly these workshops were known under the name AIHENP (Artificial Intelligence for High Energy and Nuclear Physics). Each edition brings together experimental and theoretical physicists and computer scientists/experts from particle and nuclear physics, astronomy and astrophysics in order to exchange knowledge and experience in computing and data analysis in physics. Three tracks cover the main topics: computing technology (languages and system architectures), data analysis (algorithms and tools), and theoretical physics (techniques and methods). Although most contributions and discussions are related to particle physics and computing, other fields like condensed matter physics, earth physics and biophysics are often addressed in the hope of sharing our approaches and visions. The workshop created a forum for exchanging ideas among fields, exploring and promoting cutting-edge computing technologies and debating hot topics. (paper)

  10. Methods for Efficiently and Accurately Computing Quantum Mechanical Free Energies for Enzyme Catalysis.

    Science.gov (United States)

    Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L

    2016-01-01

    Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical/molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We also describe the use of these methods to calculate free energies associated with (1) relative properties and (2) reaction paths, using simple test cases with relevance to enzymes. © 2016 Elsevier Inc. All rights reserved.
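
    For orientation, the two methods named above build on standard free energy identities. The display below shows the textbook Zwanzig free energy perturbation formula for an MM-to-QM/MM level change and the Jarzynski nonequilibrium work equality; this is background material, not the authors' derivation.

    ```latex
    % Standard identities underlying QM/MM free energy corrections: the
    % Zwanzig perturbation formula between the low (MM) and high (QM/MM)
    % levels, and the Jarzynski equality used by nonequilibrium work methods.
    \Delta A_{\mathrm{MM}\to\mathrm{QM/MM}}
      = -k_B T \,\ln \left\langle
          e^{-\beta\,[\,U_{\mathrm{QM/MM}}(x) - U_{\mathrm{MM}}(x)\,]}
        \right\rangle_{\mathrm{MM}},
    \qquad
    e^{-\beta \Delta A} = \left\langle e^{-\beta W} \right\rangle,
    \qquad \beta = 1/k_B T .
    ```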

  11. Ferrofluids: Modeling, numerical analysis, and scientific computation

    Science.gov (United States)

    Tomas, Ignacio

    This dissertation presents some developments in the numerical analysis of partial differential equations (PDEs) describing the behavior of ferrofluids. The most widely accepted PDE model for ferrofluids is the micropolar model proposed by R.E. Rosensweig. The Micropolar Navier-Stokes Equations (MNSE) are a subsystem of PDEs within the Rosensweig model. Being a simplified version of the much bigger system of PDEs proposed by Rosensweig, the MNSE are a natural starting point of this thesis. The MNSE couple linear velocity u, angular velocity w, and pressure p. We propose and analyze a first-order semi-implicit fully-discrete scheme for the MNSE, which decouples the computation of the linear and angular velocities, is unconditionally stable, and delivers optimal convergence rates under assumptions analogous to those used for the Navier-Stokes equations. Moving on to the much more complex Rosensweig model, we provide a definition (approximation) for the effective magnetizing field h, and explain the assumptions behind this definition. Unlike previous definitions available in the literature, this new definition is able to accommodate the effect of external magnetic fields. Using this definition we set up the system of PDEs coupling linear velocity u, pressure p, angular velocity w, magnetization m, and magnetic potential ϕ. We show that this system is energy-stable and devise a numerical scheme that mimics the same stability property. We prove that solutions of the numerical scheme always exist and, under certain simplifying assumptions, that the discrete solutions converge. A notable outcome of the analysis of the numerical scheme for the Rosensweig model is the choice of finite element spaces that allow the construction of an energy-stable scheme. Finally, with the lessons learned from the Rosensweig model, we develop a diffuse-interface model describing the behavior of two-phase ferrofluid flows and present an energy-stable numerical scheme for this model. For a
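
    For reference, the MNSE coupling of u, w, and p mentioned above has the following standard textbook form (generic coefficients; this is the usual presentation of the micropolar Navier-Stokes equations, not the dissertation's notation):

    ```latex
    % Micropolar Navier-Stokes equations (standard form, e.g. Lukaszewicz):
    % linear velocity u, angular velocity w, pressure p; \nu_r is the vortex
    % viscosity, j the microinertia, c_1, c_2 angular viscosities.
    u_t + (u\cdot\nabla)u - (\nu + \nu_r)\Delta u + \nabla p
          = 2\nu_r\,\nabla\times w + f, \qquad \nabla\cdot u = 0,
    \\
    j\,\big(w_t + (u\cdot\nabla)w\big) - c_1\Delta w - c_2\,\nabla(\nabla\cdot w)
          + 4\nu_r\, w = 2\nu_r\,\nabla\times u + g .
    ```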

  12. Reduction of energy consumption peaks in a greenhouse by computer control

    Energy Technology Data Exchange (ETDEWEB)

    Amsen, M.G.; Froesig Nielsen, O.; Jacobsen, L.H. (Danish Research Service for Plant and Soil Science, Research Centre for Horticulture, Department of Horticultural Engineering, Aarslev (DK))

    1990-01-01

    The results of using a computer for environmental control in one greenhouse are compared in this paper with the results of using modified analogue control equipment in another. Energy consumption peaks can be almost entirely prevented by properly applying computer control and strategy. Both treatments were based upon negative DIF, i.e. low day and high night minimum set points (14 deg. C/22 deg. C) for room temperature. No difference in production time and quality was observed in six different pot plant species. Only Kalanchoe showed a significant increase in fresh weight and dry weight. The lack of flexibility of analogue control can be avoided by applying computer control, and a more accurate room temperature control can be obtained. (author).

  13. Computer simulation of 2D grain growth using a cellular automata model based on the lowest energy principle

    International Nuclear Information System (INIS)

    He Yizhu; Ding Hanlin; Liu Liufa; Shin, Keesam

    2006-01-01

    The morphology, topology and kinetics of normal grain growth in two dimensions were studied by computer simulation using a cellular automata (CA) model based on the lowest energy principle. Thermodynamic energy following Maxwell-Boltzmann statistics has been introduced into this model for the calculation of the energy change. The transition that can reduce the system energy to the lowest level is chosen to occur when there is more than one possible transition direction. The simulation results show that the kinetics of normal grain growth follows the Burke equation with the growth exponent m = 2. The analysis of topology further indicates that normal grain growth can be simulated fairly well by the present CA model. The vanishing of grains with different numbers of sides is discussed in the simulation
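
    A minimal sketch of such a lowest-energy-principle cellular automaton is given below. The lattice size, 4-cell neighborhood, tie-breaking, and update sweep are illustrative choices, not the paper's exact scheme; the energy here simply counts unlike neighbors.

    ```python
    # Toy lowest-energy cellular automaton for 2D grain growth: each site
    # may adopt a neighboring orientation, and the candidate giving the
    # lowest local boundary energy is chosen (accepted only if not worse).
    import random

    N = 64                       # lattice size (assumed)
    Q = 32                       # number of initial grain orientations (assumed)
    rng = random.Random(0)
    grid = [[rng.randrange(Q) for _ in range(N)] for _ in range(N)]

    def neighbors(i, j):
        """4-neighborhood with periodic boundaries."""
        return [grid[(i-1) % N][j], grid[(i+1) % N][j],
                grid[i][(j-1) % N], grid[i][(j+1) % N]]

    def site_energy(i, j, state):
        """Boundary energy: one unit per unlike neighbor."""
        return sum(1 for s in neighbors(i, j) if s != state)

    def sweep():
        for _ in range(N * N):
            i, j = rng.randrange(N), rng.randrange(N)
            e_here = site_energy(i, j, grid[i][j])
            # among neighboring orientations, take the lowest-energy one;
            # ties are broken at random
            best = min(set(neighbors(i, j)),
                       key=lambda s: (site_energy(i, j, s), rng.random()))
            if site_energy(i, j, best) <= e_here:
                grid[i][j] = best

    for step in range(50):
        sweep()
    num_grains = len({grid[i][j] for i in range(N) for j in range(N)})
    print("surviving orientations after 50 sweeps:", num_grains)
    ```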

  14. An urban energy performance evaluation system and its computer implementation.

    Science.gov (United States)

    Wang, Lei; Yuan, Guan; Long, Ruyin; Chen, Hong

    2017-12-15

    To improve the urban environment and effectively reflect and promote urban energy performance, an urban energy performance evaluation system was constructed, thereby strengthening urban environmental management capabilities. From the perspectives of internalization and externalization, a framework of evaluation indicators and key factors that determine urban energy performance and explore the reasons for differences in performance was proposed according to established theory and previous studies. Using the improved stochastic frontier analysis method, an urban energy performance evaluation and factor analysis model was built that brings performance evaluation and factor analysis into the same stage for study. According to data obtained for the Chinese provincial capitals from 2004 to 2013, the coefficients of the evaluation indicators and key factors were calculated by the urban energy performance evaluation and factor analysis model. These coefficients were then used to compile the program file. The urban energy performance evaluation system developed in this study was designed in three parts: a database, a distributed component server, and a human-machine interface. Its functions were designed as login, addition, edit, input, calculation, analysis, comparison, inquiry, and export. On the basis of these contents, an urban energy performance evaluation system was developed using Microsoft Visual Studio .NET 2015. The system can effectively reflect the status of and any changes in urban energy performance. Beijing was considered as an example to conduct an empirical study, which further verified the applicability and convenience of this evaluation system. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Building Energy Monitoring and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Tianzhen; Feng, Wei; Lu, Alison; Xia, Jianjun; Yang, Le; Shen, Qi; Im, Piljae; Bhandari, Mahabir

    2013-06-01

    This project aimed to develop a standard methodology for building energy data definition, collection, presentation, and analysis; apply the developed methods to a standardized energy monitoring platform, including hardware and software, to collect and analyze building energy use data; and compile offline statistical data and online real-time data in both countries for fully understanding the current status of building energy use. This helps decode the driving forces behind the discrepancy of building energy use between the two countries; identify gaps and deficiencies of current building energy monitoring, data collection, and analysis; and create knowledge and tools to collect and analyze good building energy data to provide valuable and actionable information for key stakeholders.

  16. Analysis of Japanese energy demand structure based on the interindustry-relations table

    International Nuclear Information System (INIS)

    Kanai, Akira; Kashihara, Toshinori

    1990-01-01

    Matching of the energy-supply system and the demand system is very important in dealing with the energy problem. The energy-demand system is especially important for determining the quantity and quality of energy demand. Energy demand is created by the activities of industry and human life. The best material describing these activity conditions is the interindustry-relations table, and the authors rely on this table as the basic data for the energy demand analysis of the energy system. The defect of this table is that the industrial classification differs between publication years, so the table lacks time-sequential consistency. Therefore we discuss a method to remedy this inconsistency. In addition, this report analyses the energy demand structure in Japan according to the improved method. The research is done by the following procedure: 1. unified common sector data are made so that the industrial classifications in the interindustry-relations tables become common; 2. the quantity of input energy in each section is extracted from the tables; 3. the input energy is converted into the characteristic indicator and the calorific indicator; 4. the sections are united using the common sector data; 5. the results are shown in tables or graphs; 6. the energy demand structure is analyzed based on the tables and the graphs. The interindustry-relations table is available on request in the form of magnetic tape, and all the data are processed by computer owing to the large amount of data. This report presents the idea of how to process the table rather than displaying the details. In addition, problems in the analysis of the table are pointed out as results of the analysis. This report describes the features of the 23-section classification in the analysis of the energy demand structure, and offers basic data for making energy scenarios to energy system analysts. (J.P.N.)

  17. Free energy analysis of cell spreading.

    Science.gov (United States)

    McEvoy, Eóin; Deshpande, Vikram S; McGarry, Patrick

    2017-10-01

    In this study we present a steady-state adaptation of the thermodynamically motivated stress fiber (SF) model of Vigliotti et al. (2015). We implement this steady-state formulation in a non-local finite element setting where we also consider global conservation of the total number of cytoskeletal proteins within the cell, global conservation of the number of binding integrins on the cell membrane, and adhesion limiting ligand density on the substrate surface. We present a number of simulations of cell spreading in which we consider a limited subset of the possible deformed spread-states assumed by the cell in order to examine the hypothesis that free energy minimization drives the process of cell spreading. Simulations suggest that cell spreading can be viewed as a competition between (i) decreasing cytoskeletal free energy due to strain induced assembly of cytoskeletal proteins into contractile SFs, and (ii) increasing elastic free energy due to stretching of the mechanically passive components of the cell. The computed minimum free energy spread area is shown to be lower for a cell on a compliant substrate than on a rigid substrate. Furthermore, a low substrate ligand density is found to limit cell spreading. The predicted dependence of cell spread area on substrate stiffness and ligand density is in agreement with the experiments of Engler et al. (2003). We also simulate the experiments of Théry et al. (2006), whereby initially circular cells deform and adhere to "V-shaped" and "Y-shaped" ligand patches. Analysis of a number of different spread states reveals that deformed configurations with the lowest free energy exhibit a SF distribution that corresponds to experimental observations, i.e. a high concentration of highly aligned SFs occurs along free edges, with lower SF concentrations in the interior of the cell. In summary, the results of this study suggest that cell spreading is driven by free energy minimization based on a competition between decreasing

  18. An Analysis of 27 Years of Research into Computer Education Published in Australian Educational Computing

    Science.gov (United States)

    Zagami, Jason

    2015-01-01

    Analysis of three decades of publications in Australian Educational Computing (AEC) provides insight into the historical trends in Australian educational computing, highlighting an emphasis on pedagogy, comparatively few articles on educational technologies, and strong research topic alignment with similar international journals. Analysis confirms…

  19. Data analysis through interactive computer animation method (DATICAM)

    International Nuclear Information System (INIS)

    Curtis, J.N.; Schwieder, D.H.

    1983-01-01

    DATICAM is an interactive computer animation method designed to aid in the analysis of nuclear research data. DATICAM was developed at the Idaho National Engineering Laboratory (INEL) by EG and G Idaho, Inc. INEL analysts use DATICAM to produce computer codes that are better able to predict the behavior of nuclear power reactors. In addition to increased code accuracy, DATICAM has saved manpower and computer costs. DATICAM has been generalized to assist in the data analysis of virtually any data-producing dynamic process

  20. A general purpose program system for high energy physics experiment data acquisition and analysis

    International Nuclear Information System (INIS)

    Li Shuren; Xing Yuguo; Jin Bingnian

    1985-01-01

    This paper introduces the functions, structure and system generation of a general purpose program system (Fermilab MULTI) for high energy physics experiment data acquisition and analysis. Work concerning the reconstruction of MULTI system level 0.5, which can be run on the PDP-11/23 computer, is also briefly introduced

  1. Automated uncertainty analysis methods in the FRAP computer codes

    International Nuclear Information System (INIS)

    Peck, S.O.

    1980-01-01

    A user-oriented, automated uncertainty analysis capability has been incorporated in the Fuel Rod Analysis Program (FRAP) computer codes. The FRAP codes have been developed for the analysis of light water reactor fuel rod behavior during steady state (FRAPCON) and transient (FRAP-T) conditions as part of the United States Nuclear Regulatory Commission's Water Reactor Safety Research Program. The objective of uncertainty analysis of these codes is to obtain estimates of the uncertainty in computed outputs of the codes as a function of known uncertainties in input variables. This paper presents the methods used to generate an uncertainty analysis of a large computer code, discusses the assumptions that are made, and shows techniques for testing them. An uncertainty analysis of FRAP-T calculated fuel rod behavior during a hypothetical loss-of-coolant transient is presented as an example and carried through the discussion to illustrate the various concepts

  2. Experience with a distributed computing system for magnetic field analysis

    International Nuclear Information System (INIS)

    Newman, M.J.

    1978-08-01

    The development of a general purpose computer system, THESEUS, is described; its initial use has been magnetic field analysis. The system involves several computers connected by data links. Some are small computers with interactive graphics facilities and limited analysis capabilities, and others are large computers for batch execution of analysis programs with heavy processor demands. The system is highly modular for easy extension and highly portable for transfer to different computers. It can easily be adapted for a completely different application. It provides a highly efficient and flexible interface between magnet designers and specialised analysis programs. Both the advantages and the problems experienced are highlighted, together with a mention of possible future developments. (U.K.)

  3. Computed potential energy surfaces for chemical reactions

    Science.gov (United States)

    Walch, Stephen P.

    1988-01-01

    The minimum energy path for the addition of a hydrogen atom to N2 is characterized in CASSCF/CCI calculations using the (4s3p2d1f/3s2p1d) basis set, with additional single point calculations at the stationary points of the potential energy surface using the (5s4p3d2f/4s3p2d) basis set. These calculations represent the most extensive set of ab initio calculations completed to date, yielding a zero-point corrected barrier for HN2 dissociation of approx. 8.5 kcal/mol. The lifetime of the HN2 species is estimated from the calculated geometries and energetics using both conventional Transition State Theory and a method which utilizes an Eckart barrier to compute one-dimensional quantum mechanical tunneling effects. It is concluded that the lifetime of the HN2 species is very short, greatly limiting its role in both termolecular recombination reactions and combustion processes.
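
    For context, the conventional Transition State Theory estimate referred to above takes the standard form below, where κ(T) is the one-dimensional Eckart tunneling correction and the zero-point corrected barrier of approx. 8.5 kcal/mol from the calculations enters the exponential; this is the textbook expression, not a formula reproduced from the paper.

    ```latex
    % Conventional TST rate with a tunneling correction \kappa(T);
    % Q^{\ddagger} and Q_{\mathrm{HN_2}} are transition-state and reactant
    % partition functions, and \tau is the resulting lifetime estimate.
    k(T) = \kappa(T)\,\frac{k_B T}{h}\,
           \frac{Q^{\ddagger}(T)}{Q_{\mathrm{HN_2}}(T)}\,
           e^{-\Delta E_0^{\ddagger}/k_B T},
    \qquad \tau = 1/k(T).
    ```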

  4. Computer Controlled Portable Greenhouse Climate Control System for Enhanced Energy Efficiency

    Science.gov (United States)

    Datsenko, Anthony; Myer, Steve; Petties, Albert; Hustek, Ryan; Thompson, Mark

    2010-04-01

    This paper discusses a student project at Kettering University focusing on the design and construction of an energy efficient greenhouse climate control system. In order to maintain acceptable temperatures and stabilize temperature fluctuations in a portable plastic greenhouse economically, a computer controlled climate control system was developed to capture and store thermal energy incident on the structure during daylight periods and release the stored thermal energy during dark periods. The thermal storage mass for the greenhouse system consisted of a water filled base unit. The heat exchanger consisted of a system of PVC tubing. The control system used a programmable LabView computer interface to meet functional specifications that minimized temperature fluctuations and recorded data during operation. The greenhouse was a portable sized unit with a 5' x 5' footprint. Control input sensors were temperature, water level, and humidity sensors and output control devices were fan actuating relays and water fill solenoid valves. A Graphical User Interface was developed to monitor the system, set control parameters, and to provide programmable data recording times and intervals.
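
    The control logic described (temperature-driven fan actuation plus a fill valve for the thermal storage water base) can be sketched in a few lines. The actual system used a LabView interface; the Python stand-in below, with assumed thresholds and stubbed sensors and actuators, only illustrates the structure of such a loop.

    ```python
    # Illustrative greenhouse control loop: hysteresis on temperature keeps
    # the fan relay from chattering, and the fill solenoid tops up the
    # water-filled thermal storage base. All values are assumptions.
    import time

    TEMP_HIGH, TEMP_LOW = 29.0, 26.0       # degC fan hysteresis band (assumed)
    WATER_MIN = 0.20                       # fraction of storage tank (assumed)

    def read_sensors():
        """Stand-in for temperature / water-level / humidity acquisition."""
        return {"temp_c": 27.5, "water_level": 0.35, "humidity": 0.60}

    def set_fan(on):      print("fan relay", "ON" if on else "OFF")
    def set_fill(open_):  print("fill solenoid", "OPEN" if open_ else "CLOSED")

    def control_step(state, fan_on):
        # hysteresis: turn on above the high threshold, off below the low one
        if state["temp_c"] >= TEMP_HIGH:
            fan_on = True
        elif state["temp_c"] <= TEMP_LOW:
            fan_on = False
        set_fan(fan_on)
        # top up the thermal-storage water base when it runs low
        set_fill(state["water_level"] < WATER_MIN)
        return fan_on

    if __name__ == "__main__":
        fan_on = False
        for _ in range(3):                 # a few iterations of the loop
            fan_on = control_step(read_sensors(), fan_on)
            time.sleep(0.1)                # a real system would log and wait longer
    ```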

  5. Performance Analysis of Cloud Computing Architectures Using Discrete Event Simulation

    Science.gov (United States)

    Stocker, John C.; Golomb, Andrew M.

    2011-01-01

    Cloud computing offers the economic benefit of on-demand resource allocation to meet changing enterprise computing needs. However, the flexibility of cloud computing is disadvantaged when compared to traditional hosting in providing predictable application and service performance. Cloud computing relies on resource scheduling in a virtualized network-centric server environment, which makes static performance analysis infeasible. We developed a discrete event simulation model to evaluate the overall effectiveness of organizations in executing their workflow in traditional and cloud computing architectures. The two-part model framework characterizes both the demand, using a probability distribution for each type of service request, and the enterprise computing resource constraints. Our simulations provide quantitative analysis to design and provision computing architectures that maximize overall mission effectiveness. We share our analysis of key resource constraints in cloud computing architectures and findings on the appropriateness of cloud computing in various applications.
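
    A two-part model of this kind (stochastic demand plus a server-count resource constraint) can be illustrated with a minimal discrete event simulation. The queueing discipline, exponential arrival and service distributions, and the server counts below are assumptions for the demo, not the authors' model.

    ```python
    # Minimal event-driven simulation of a service pool: requests arrive at
    # random, wait if all servers are busy, and the mean wait quantifies how
    # provisioning (number of servers) affects performance.
    import heapq, random

    def simulate(num_servers, arrival_rate, service_rate, n_requests=10000, seed=0):
        """Mean wait of an M/M/c-style service pool (toy cloud resource model)."""
        rng = random.Random(seed)
        events = [(rng.expovariate(arrival_rate), "arrival")]
        free, queue, waits, arrivals = num_servers, [], [], 0
        while events:
            t, kind = heapq.heappop(events)
            if kind == "arrival":
                arrivals += 1
                if arrivals < n_requests:   # schedule the next request
                    heapq.heappush(events, (t + rng.expovariate(arrival_rate), "arrival"))
                if free:                    # a server is available right away
                    free -= 1
                    waits.append(0.0)
                    heapq.heappush(events, (t + rng.expovariate(service_rate), "done"))
                else:
                    queue.append(t)         # request waits for a server
            else:                           # a service completed
                if queue:                   # hand the server to the next request
                    waits.append(t - queue.pop(0))
                    heapq.heappush(events, (t + rng.expovariate(service_rate), "done"))
                else:
                    free += 1
        return sum(waits) / len(waits)

    # compare a small fixed pool with a larger, cloud-like pool
    for servers in (4, 8):
        print(f"{servers} servers: mean wait = {simulate(servers, 3.0, 1.0):.3f}")
    ```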

  6. SALP-PC, a computer program for fault tree analysis on personal computers

    International Nuclear Information System (INIS)

    Contini, S.; Poucet, A.

    1987-01-01

    The paper presents the main characteristics of the SALP-PC computer code for fault tree analysis. The program has been developed in Fortran 77 on an Olivetti M24 personal computer (IBM compatible) in order to reach a high degree of portability. It is composed of six processors implementing the different phases of the analysis procedure. This particular structure presents some advantages, such as the restart facility and the possibility of developing an event tree analysis code. The set of allowed logical operators, i.e. AND, OR, NOT, K/N, XOR, INH, together with the possibility to define boundary conditions, makes the SALP-PC code a powerful tool for risk assessment. (orig.)
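
    To illustrate the operator set, the sketch below evaluates a small hypothetical fault tree over independent basic events by direct recursion. SALP-PC itself works via minimal cut sets; this is only a toy evaluation of the same gate types (the INH gate is omitted).

    ```python
    # Toy fault tree evaluation over AND, OR, NOT, XOR and K/N gates,
    # assuming independent basic events; tree and probabilities are invented.
    from itertools import combinations

    def prob(node, p):
        """Top-event probability by recursion over gates."""
        if isinstance(node, str):                 # basic event
            return p[node]
        op = node[0]
        if op == "AND":
            r = 1.0
            for child in node[1]: r *= prob(child, p)
            return r
        if op == "OR":
            r = 1.0
            for child in node[1]: r *= 1.0 - prob(child, p)
            return 1.0 - r
        if op == "NOT":
            return 1.0 - prob(node[1], p)
        if op == "XOR":                           # exactly one of two occurs
            a, b = prob(node[1][0], p), prob(node[1][1], p)
            return a * (1 - b) + b * (1 - a)
        if op == "KN":                            # at least k of n occur
            k, ps = node[1], [prob(c, p) for c in node[2]]
            n, total = len(ps), 0.0
            for m in range(k, n + 1):
                for idx in combinations(range(n), m):
                    term = 1.0
                    for i in range(n):
                        term *= ps[i] if i in idx else 1.0 - ps[i]
                    total += term
            return total
        raise ValueError(f"unknown gate {op}")

    # hypothetical tree: top fails if (A AND B) or at least 2 of {C, D, E}
    tree = ("OR", [("AND", ["A", "B"]), ("KN", 2, ["C", "D", "E"])])
    p = {"A": 0.01, "B": 0.02, "C": 0.05, "D": 0.05, "E": 0.05}
    print(f"top event probability = {prob(tree, p):.6f}")
    ```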

  7. Structural Loads Analysis for Wave Energy Converters: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    van Rij, Jennifer A [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Yu, Yi-Hsiang [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Guo, Yi [National Renewable Energy Laboratory (NREL), Golden, CO (United States)

    2017-08-09

    This study explores and verifies the generalized body-modes method for evaluating the structural loads on a wave energy converter (WEC). Historically, WEC design methodologies have focused primarily on accurately evaluating hydrodynamic loads, while methodologies for evaluating structural loads have yet to be fully considered and incorporated into the WEC design process. As wave energy technologies continue to advance, however, it has become increasingly evident that an accurate evaluation of the structural loads will enable an optimized structural design, as well as the potential utilization of composites and flexible materials, and hence reduce WEC costs. Although there are many computational fluid dynamics, structural analysis, and fluid-structure interaction (FSI) codes available, the application of these codes is typically too computationally intensive to be practical in the early stages of the WEC design process. The generalized body-modes method, however, is a reduced-order, linearized, frequency-domain FSI approach, performed in conjunction with the linear hydrodynamic analysis, with computation times that could realistically be incorporated into the WEC design process. The objective of this study is to verify the generalized body-modes approach in comparison to high-fidelity FSI simulations to accurately predict structural deflections and stress loads in a WEC. Two verification cases are considered, a free-floating barge and a fixed-bottom column. Details for both the generalized body-modes models and FSI models are first provided. Results for each of the models are then compared and discussed. Finally, based on the verification results obtained, future plans for incorporating the generalized body-modes method into the WEC simulation tool, WEC-Sim, and the overall WEC design process are discussed.

  8. A data acquisition computer for high energy physics applications DAFNE:- hardware manual

    International Nuclear Information System (INIS)

    Barlow, J.; Seller, P.; De-An, W.

    1983-07-01

    A high-performance stand-alone computer system based on the Motorola 68000 microprocessor has been built at the Rutherford Appleton Laboratory. Although the design was strongly influenced by the requirement to provide a compact data acquisition computer for the high energy physics environment, the system is sufficiently general to find applications in a wider area. It provides colour graphics and tape and disc storage, together with access to CAMAC systems. This report is the hardware manual of the data acquisition computer, DAFNE (Data Acquisition For Nuclear Experiments), and as such contains a full description of the hardware structure of the computer system. (author)

  9. Model for Analysis of Energy Demand (MAED-2)

    International Nuclear Information System (INIS)

    2007-01-01

    The IAEA has been supporting its Member States in the area of energy planning for sustainable development. Development and dissemination of appropriate methodologies and their computer codes are important parts of this support. This manual has been produced to facilitate the use of the MAED model: Model for Analysis of Energy Demand. The methodology of the MAED model was originally developed by B. Chateau and B. Lapillonne of the Institut Economique et Juridique de l'Energie (IEJE) of the University of Grenoble, France, and was presented as the MEDEE model. Since then the MEDEE model has been developed and adapted for modelling various energy demand systems. The IAEA adopted the MEDEE-2 model and incorporated important modifications to make it more suitable for application in developing countries, and it was renamed the MAED model. The first version of the MAED model was designed for DOS-based systems and was later converted for the Windows system. This manual presents the latest version of the MAED model. The most prominent feature of this version is its flexibility for representing the structure of energy consumption. The model now allows country-specific representations of energy consumption patterns using the MAED methodology. The user can now disaggregate energy consumption according to the needs and/or data availability in her/his country. As such, MAED has now become a powerful tool for modelling widely diverse energy consumption patterns. This manual presents the model in detail and provides guidelines for its application

  10. Free surface profiles in river flows: Can standard energy-based gradually-varied flow computations be pursued?

    Science.gov (United States)

    Cantero, Francisco; Castro-Orgaz, Oscar; Garcia-Marín, Amanda; Ayuso, José Luis; Dey, Subhasish

    2015-10-01

    Is the energy equation for gradually-varied flow the best approximation for free surface profile computations in river flows? Determination of flood inundation in rivers and natural waterways is based on the hydraulic computation of flow profiles. This is usually done using energy-based gradually-varied flow models, like HEC-RAS, that adopt a vertical division method for discharge prediction in compound channel sections. However, this discharge prediction method is not particularly accurate in light of the advancements made over the last three decades. This paper first presents a study of the impact of discharge prediction on gradually-varied flow computations by comparing thirteen different methods for compound channels, where both energy and momentum equations are applied. The discharge, velocity distribution coefficients, specific energy, momentum and flow profiles are determined. After the study of gradually-varied flow predictions, a new theory is developed to produce higher-order energy and momentum equations for rapidly-varied flow in compound channels. These generalized equations make it possible to describe the flow profiles with more generality than the gradually-varied flow computations. As an outcome, the results for gradually-varied flow provide realistic conclusions for computations of flow in compound channels, showing that momentum-based models are in general more accurate; whereas the new theory developed for rapidly-varied flow opens a new research direction, so far not investigated in flows through compound channels.
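
    The energy-based gradually-varied flow computation under discussion integrates the standard backwater equation, shown below in its textbook form (the paper's higher-order equations generalize this):

    ```latex
    % Standard energy-based gradually-varied flow (backwater) equation; S_0 is
    % the bed slope, S_f the friction slope, Q the discharge, A the flow area,
    % B the top width, and \alpha the kinetic energy correction coefficient
    % that becomes important in compound channels:
    \frac{\mathrm{d}y}{\mathrm{d}x}
      \;=\; \frac{S_0 - S_f}{1 - \mathrm{Fr}^2},
    \qquad
    \mathrm{Fr}^2 \;=\; \frac{\alpha\, Q^2 B}{g A^3}.
    ```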

  11. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing.

    Science.gov (United States)

    Cotes-Ruiz, Iván Tomás; Prado, Rocío P; García-Galán, Sebastián; Muñoz-Expósito, José Enrique; Ruiz-Reyes, Nicolás

    2017-01-01

    Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the consumed power of their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center and of special relevance is the adaptation of its performance to workload. Intensive computing applications in diverse areas of science generate complex workload called workflows, whose successful management in terms of energy saving is still at its beginning. WorkflowSim is currently one of the most advanced simulators for research on workflows processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new management strategies in energy saving considering computing, reconfiguration and networks costs as well as quality of service, and it incorporates the preeminent strategy for on host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertory of DVFS governors. Results showing the validity of the simulator in terms of resources utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as overlapped mechanism to the DVFS intra-host technique.
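
    The power model trade-off that DVFS exploits can be seen in a back-of-the-envelope sketch: dynamic power scales roughly as C·V²·f, so a lower voltage-frequency operating point stretches the runtime but can still reduce the energy of a task. The operating points and constants below are invented for illustration and are not WorkflowSim parameters.

    ```python
    # Toy DVFS energy comparison: for a fixed amount of work (CPU cycles),
    # evaluate runtime and energy at several hypothetical (f, V) points
    # using P = P_static + C_eff * V^2 * f.
    OPERATING_POINTS = [            # (frequency GHz, voltage V) -- hypothetical
        (1.0, 0.90),
        (1.6, 1.05),
        (2.4, 1.25),
    ]
    C_EFF = 10.0        # effective switched capacitance, arbitrary units
    P_STATIC = 4.0      # leakage + platform power, watts (assumed)
    WORK_CYCLES = 2.0e9 # cycles needed by the task (assumed)

    for f_ghz, v in OPERATING_POINTS:
        runtime = WORK_CYCLES / (f_ghz * 1e9)          # seconds
        p_dyn = C_EFF * v**2 * f_ghz                   # watts (model units)
        energy = (p_dyn + P_STATIC) * runtime          # joules
        print(f"f={f_ghz:.1f} GHz V={v:.2f}: t={runtime:.2f}s  E={energy:.1f}J")
    ```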

  12. Dynamic Voltage Frequency Scaling Simulator for Real Workflows Energy-Aware Management in Green Cloud Computing.

    Directory of Open Access Journals (Sweden)

    Iván Tomás Cotes-Ruiz

    Full Text Available Nowadays, the growing computational capabilities of Cloud systems rely on the reduction of the consumed power of their data centers to make them sustainable and economically profitable. The efficient management of computing resources is at the heart of any energy-aware data center and of special relevance is the adaptation of its performance to workload. Intensive computing applications in diverse areas of science generate complex workload called workflows, whose successful management in terms of energy saving is still at its beginning. WorkflowSim is currently one of the most advanced simulators for research on workflows processing, offering advanced features such as task clustering and failure policies. In this work, an expected power-aware extension of WorkflowSim is presented. This new tool integrates a power model based on a computing-plus-communication design to allow the optimization of new management strategies in energy saving considering computing, reconfiguration and networks costs as well as quality of service, and it incorporates the preeminent strategy for on host energy saving: Dynamic Voltage Frequency Scaling (DVFS). The simulator is designed to be consistent in different real scenarios and to include a wide repertory of DVFS governors. Results showing the validity of the simulator in terms of resources utilization, frequency and voltage scaling, power, energy and time saving are presented. Also, results achieved by the intra-host DVFS strategy with different governors are compared to those of the data center using a recent and successful DVFS-based inter-host scheduling strategy as overlapped mechanism to the DVFS intra-host technique.

  13. The use of symbolic computation in radiative, energy, and neutron transport calculations

    Science.gov (United States)

    Frankel, J. I.

    This investigation uses symbolic computation in developing analytical methods and general computational strategies for solving both linear and nonlinear, regular and singular, integral and integro-differential equations which appear in radiative and combined mode energy transport. This technical report summarizes the research conducted during the first nine months of the present investigation. The use of Chebyshev polynomials augmented with symbolic computation has clearly been demonstrated in problems involving radiative (or neutron) transport, and mixed-mode energy transport. Theoretical issues related to convergence, errors, and accuracy have also been pursued. Three manuscripts have resulted from the funded research. These manuscripts have been submitted to archival journals. At the present time, an investigation involving a conductive and radiative medium is underway. The mathematical formulation leads to a system of nonlinear, weakly-singular integral equations involving the unknown temperature and various Legendre moments of the radiative intensity in a participating medium. Some preliminary results are presented illustrating the direction of the proposed research.

  14. SHEAT for PC. A computer code for probabilistic seismic hazard analysis for personal computer, user's manual

    International Nuclear Information System (INIS)

    Yamada, Hiroyuki; Tsutsumi, Hideaki; Ebisawa, Katsumi; Suzuki, Masahide

    2002-03-01

    The SHEAT code developed at the Japan Atomic Energy Research Institute is for probabilistic seismic hazard analysis, which is one of the tasks needed for the seismic Probabilistic Safety Assessment (PSA) of a nuclear power plant. SHEAT was first developed as a large-sized computer version. In addition, a personal computer version was provided in 2001 to improve the operation efficiency and generality of the code. It is possible to perform the earthquake hazard analysis, display and print functions with the Graphical User Interface. With the SHEAT for PC code, seismic hazard, which is defined as an annual exceedance frequency of occurrence of earthquake ground motions at various levels of intensity at a given site, is calculated by the following two steps, as with the large-sized computer version. One is the modeling of earthquake generation around a site. Future earthquake generation (locations, magnitudes and frequencies of postulated earthquakes) is modeled based on the historical earthquake records, active fault data and expert judgment. The other is the calculation of the probabilistic seismic hazard at the site. An earthquake ground motion is calculated for each postulated earthquake using an attenuation model, taking into account its standard deviation. Then the seismic hazard at the site is calculated by summing the frequencies of ground motions from all the earthquakes. This document is the user's manual of the SHEAT for PC code. It includes: (1) an outline of the code, covering the overall concept, logical process, code structure, data files used and special characteristics of the code; (2) the functions of the subprograms and the analytical models in them; (3) guidance on input and output data; (4) a sample run result; and (5) an operational manual. (author)
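
    The two-step hazard calculation described above can be condensed into a short sketch: a list of postulated earthquakes with annual frequencies, a toy attenuation relation with lognormal scatter, and a sum over events giving the annual exceedance frequency. All source parameters and the attenuation form below are illustrative assumptions, not SHEAT's models.

    ```python
    # Toy probabilistic seismic hazard calculation: annual exceedance
    # frequency of peak ground acceleration (PGA) at a site, summed over
    # postulated earthquakes with lognormal ground-motion scatter.
    import math

    SOURCES = [  # (magnitude, distance km, annual frequency) -- hypothetical
        (6.0,  20.0, 0.01),
        (7.0,  50.0, 0.002),
        (7.5, 100.0, 0.0005),
    ]
    SIGMA_LN = 0.6   # lognormal standard deviation of the attenuation model

    def median_pga(m, r):
        """Toy attenuation relation: ln(PGA[g]) = a*M - b*ln(R + R0) + c."""
        return math.exp(0.9 * m - 1.2 * math.log(r + 10.0) - 4.0)

    def p_exceed(pga, m, r):
        """P(ground motion > pga) for one event, lognormal scatter."""
        z = (math.log(pga) - math.log(median_pga(m, r))) / SIGMA_LN
        return 0.5 * math.erfc(z / math.sqrt(2.0))

    def hazard(pga):
        """Annual exceedance frequency: sum over all postulated earthquakes."""
        return sum(f * p_exceed(pga, m, r) for m, r, f in SOURCES)

    for pga in (0.05, 0.1, 0.2, 0.4):
        print(f"PGA > {pga:4.2f} g : {hazard(pga):.2e} / year")
    ```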

  15. Limitations to depth resolution in high-energy, heavy-ion elastic recoil detection analysis

    International Nuclear Information System (INIS)

    Elliman, R.G.; Palmer, G.R.; Ophel, T.R.; Timmers, H.

    1998-01-01

    The depth resolution of heavy-ion elastic recoil detection analysis was examined for Al and Co thin films ranging in thickness from 100 to 400 nm. Measurements were performed with 154 MeV Au ions as the incident beam, and recoils were detected using a gas ionisation detector. Energy spectra were extracted for the Al and Co recoils and the depth resolution determined as a function of film thickness from the width of the high- and low-energy edges. These results were compared with theoretical estimates calculated using the computer program DEPTH. (authors)

  16. Intelligent battery energy management and control for vehicle-to-grid via cloud computing network

    International Nuclear Information System (INIS)

    Khayyam, Hamid; Abawajy, Jemal; Javadi, Bahman; Goscinski, Andrzej; Stojcevski, Alex; Bab-Hadiashar, Alireza

    2013-01-01

    Highlights: • The intelligent battery energy management substantially reduces the interactions of PEVs with parking lots. • The intelligent battery energy management improves the energy efficiency. • The intelligent battery energy management predicts the road load demand for vehicles. - Abstract: Plug-in Electric Vehicles (PEVs) provide new opportunities to reduce fuel consumption and exhaust emissions. PEVs need to draw and store energy from an electrical grid to supply propulsive energy for the vehicle. As a result, it is important to know when PEV batteries are available for charging and discharging. Furthermore, battery energy management and control are imperative for PEVs, as the vehicle operation and even the safety of passengers depend on the battery system. Thus, scheduling grid electricity with parking lots is needed for the efficient charging and discharging of PEV batteries. This paper proposes a new intelligent battery energy management and control scheduling service for charging that utilizes Cloud computing networks. The proposed intelligent vehicle-to-grid scheduling service offers the computational scalability required to make the decisions necessary to allow PEV battery energy management systems to operate efficiently when the number of PEVs and charging devices is large. Experimental analyses of the proposed scheduling service as compared to a traditional scheduling service are conducted through simulations. The results show that the proposed intelligent battery energy management scheduling service substantially reduces the required number of interactions of PEVs with parking lots and the grid, as well as predicting the load demand in advance with regard to their limitations. They also show that the intelligent scheduling service using a Cloud computing network is more efficient than the traditional scheduling service network for battery energy management and control

  17. A computer program for activation analysis

    International Nuclear Information System (INIS)

    Rantanen, J.; Rosenberg, R.J.

    1983-01-01

    A computer program for calculating the results of activation analysis is described. The program comprises two gamma spectrum analysis programs, STOAV and SAMPO and one program for calculating elemental concentrations, KVANT. STOAV is based on a simple summation of channels and SAMPO is based on fitting of mathematical functions. The programs are tested by analyzing the IAEA G-1 test spectra. In the determination of peak location SAMPO is somewhat better than STOAV and in the determination of peak area SAMPO is more than twice as accurate as STOAV. On the other hand, SAMPO is three times as expensive as STOAV with the use of a Cyber 170 computer. (author)

  18. Providing a computing environment for a high energy physics workshop

    International Nuclear Information System (INIS)

    Nicholls, J.

    1991-03-01

    Although computing facilities have been provided at conferences and workshops remote from the host institution for some years, the equipment provided has rarely been capable of providing for much more than simple editing and electronic mail over leased lines. This presentation describes the pioneering effort by the Computing Department/Division at Fermilab in providing a local computing facility with world-wide networking capability for the Physics at Fermilab in the 1990's workshop held in Breckenridge, Colorado, in August 1989, as well as the enhanced facilities provided for the 1990 Summer Study on High Energy Physics at Snowmass, Colorado, in June/July 1990. Issues discussed include the type and sizing of the facilities, advance preparations, shipping, on-site support, as well as an evaluation of the value of the facility to the workshop participants

  19. Plutonium Worlds. Fast Breeders, Systems Analysis and Computer Simulation in the Age of Hypotheticality

    Directory of Open Access Journals (Sweden)

    Sebastian Vehlken

    2014-09-01

    Full Text Available This article examines the media history of one of the hallmark civil nuclear energy programs in Western Germany – the development of Liquid Metal Fast Breeder Reactor (LMFBR) technology. Promoted as a kind of perpetuum mobile of the Atomic Age, the "German Manhattan Project" not only imported big science thinking. In its context, nuclear technology was also put forth as an avant-garde of scientific inquiry, dealing with the most complex and critical technological endeavors. In the face of the risks of nuclear technology, the German physicist Wolf Häfele thus announced a novel epistemology of "hypotheticality". In a context where traditional experimental engineering strategies became inappropriate, he called for the application of advanced media technologies: Computer Simulations (CS) and Systems Analysis (SA) generated computerized spaces for the production of knowledge. In the course of the German Fast Breeder program, such methods had a twofold impact. On the one hand, Häfele emphasized – as the "father of the German Fast Breeder" – the utilization of CS for the actual planning and construction of the novel reactor type. On the other, namely as the director of the department of Energy Systems at the International Institute for Applied Systems Analysis (IIASA), Häfele advised SA-based projections of energy consumption. These computerized scenarios provided the rationale for the conception of Fast Breeder programs as viable and necessary alternative energy sources in the first place. By focusing on the role of the involved CS techniques, the paper thus investigates the intertwined systems thinking of nuclear facilities' planning and construction and the design of large-scale energy consumption and production scenarios in the 1970s and 1980s, as well as their conceptual afterlives in our contemporary era of computer simulation.

  20. Synthetic wind speed scenarios generation for probabilistic analysis of hybrid energy systems

    International Nuclear Information System (INIS)

    Chen, Jun; Rabiti, Cristian

    2017-01-01

    Hybrid energy systems consisting of multiple energy inputs and multiple energy outputs have been proposed as an effective element to enable the ever increasing penetration of clean energy. In order to better understand the dynamic and probabilistic behavior of hybrid energy systems, this paper proposes a model combining Fourier series and autoregressive moving average (ARMA) to characterize historical weather measurements and to generate synthetic weather (e.g., wind speed) data. In particular, the Fourier series is used to characterize the seasonal trend in historical data, while ARMA is applied to capture the autocorrelation in the residue time series (i.e., measurements with seasonal trends subtracted). The generated synthetic wind speed data are then utilized to perform a probabilistic analysis of a particular hybrid energy system configuration, which consists of a nuclear power plant, wind farm, battery storage, natural gas boiler, and chemical plant. Requirements on component ramping rate, the economic and environmental impacts of hybrid energy systems, and the effects of deploying different sizes of batteries in smoothing renewable variability are all investigated. - Highlights: • Computational model to synthesize artificial wind speed data with characteristics consistent with the database. • Fourier series to capture seasonal trends in the database. • Monte Carlo simulation and probabilistic analysis of hybrid energy systems. • Investigation of the effect of batteries in smoothing the variability of wind power generation.
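
    The Fourier-plus-ARMA synthesis lends itself to a compact sketch. Below, a least-squares Fourier fit captures annual and diurnal trends, and an AR(1) process (a minimal stand-in for the paper's full ARMA residue model) is fitted to the residue and used to generate a synthetic series; the hourly "measurements" are fabricated for the demonstration.

    ```python
    # Sketch of synthetic wind speed generation: Fourier seasonal trend
    # plus an autoregressive residue, per the approach described above.
    import numpy as np

    rng = np.random.default_rng(42)
    hours = np.arange(24 * 365)
    true = 8 + 2*np.sin(2*np.pi*hours/(24*365)) + 1.5*np.sin(2*np.pi*hours/24)
    wind = true + rng.normal(0, 1, hours.size)            # pretend measurements

    # 1) least-squares Fourier fit of annual + diurnal harmonics
    def basis(t):
        cols = [np.ones_like(t, dtype=float)]
        for period in (24*365, 24):
            w = 2*np.pi*t/period
            cols += [np.sin(w), np.cos(w)]
        return np.column_stack(cols)

    coef, *_ = np.linalg.lstsq(basis(hours), wind, rcond=None)
    trend = basis(hours) @ coef
    resid = wind - trend

    # 2) AR(1) fitted to the residue via the lag-1 autocorrelation
    phi = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    sigma = np.std(resid) * np.sqrt(1 - phi**2)

    # 3) synthesize a new year: trend + simulated AR(1) residue
    synth_resid = np.zeros(hours.size)
    for t in range(1, hours.size):
        synth_resid[t] = phi*synth_resid[t-1] + rng.normal(0, sigma)
    synthetic = np.clip(trend + synth_resid, 0, None)     # wind speed >= 0
    print("synthetic mean/std:", synthetic.mean().round(2), synthetic.std().round(2))
    ```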

  1. Energy life-cycle analysis modeling and decision support tool

    Energy Technology Data Exchange (ETDEWEB)

    Hoza, M.; White, M.E.

    1993-06-01

    As one of DOE's five multi-program national laboratories, Pacific Northwest Laboratory (PNL) develops and deploys technology for national missions in energy and the environment. The Energy Information Systems Group, within the Laboratory's Computer Sciences Department, focuses on the development of the computational and data communications infrastructure and automated tools for the Transmission and Distribution energy sector and for advanced process engineering applications. The energy industry is being forced to operate in new ways and under new constraints. It is in a reactive mode, reacting to policies and politics, and to economics and environmental pressures. The transmission and distribution sectors are being forced to find new ways to maximize the use of their existing infrastructure, increase energy efficiency, and minimize environmental impacts, while continuing to meet the demands of an ever increasing population. The creation of a sustainable energy future will be a challenge for both the soft and hard sciences. It will require that we as creators of our future be bold in the way we think about our energy future and aggressive in its development. The development of tools to help bring about a sustainable future will not be simple either. The development of ELCAM, for example, represents a stretch for the computational sciences as well as for each of the domain sciences such as economics, which will have to be team members.

  2. Regional energy facility siting analysis

    International Nuclear Information System (INIS)

    Eberhart, R.C.; Eagles, T.W.

    1976-01-01

    Results of the energy facility siting analysis portion of a regional pilot study performed for the anticipated National Energy Siting and Facility Report are presented. The question of cell analysis versus site-specific analysis is explored, including an evaluation of the difference in depth between the two approaches. A discussion of the possible accomplishments of regional analysis is presented. It is concluded that regional siting analysis could be of use in a national siting study, if its inherent limits are recognized

  3. Analysis of gold in jewellery articles by energy dispersive XRF

    International Nuclear Information System (INIS)

    Meor Yusoff Meor Sulaiman; Latifah Amin

    2001-01-01

    The value of a precious metal article is closely related to its fineness. For gold assay, the conventional fire assay technique has been used as the standard technique for more than 500 years. Alternative modern techniques like energy dispersive x-ray fluorescence can also be used in the determination of gold purity. Advantages of this technique compared to the conventional method include non-destructive analysis, no use of toxic or hazardous chemicals, automatic computer control with a user-friendly interface, a minimal number of personnel required, shorter analysis time, and the ability to determine associated elements in the metal. Analysis was performed on gold of different sizes and purities. Comparison results for the analysis using different reference standards show small differences between the measured and certified values. The technique also gives a small standard deviation value in its repeatability test. (Author)

  4. Computer-aided power systems analysis

    CERN Document Server

    Kusic, George

    2008-01-01

    Computer applications yield more insight into system behavior than is possible by using hand calculations on system elements. Computer-Aided Power Systems Analysis: Second Edition is a state-of-the-art presentation of basic principles and software for power systems in steady-state operation. Originally published in 1985, this revised edition explores power systems from the point of view of the central control facility. It covers the elements of transmission networks, bus reference frame, network fault and contingency calculations, power flow on transmission networks, generator base power setti

  5. PREFACE: International Conference on Computing in High Energy and Nuclear Physics (CHEP 2010)

    Science.gov (United States)

    Lin, Simon C.; Shen, Stella; Neufeld, Niko; Gutsche, Oliver; Cattaneo, Marco; Fisk, Ian; Panzer-Steindel, Bernd; Di Meglio, Alberto; Lokajicek, Milos

    2011-12-01

    The International Conference on Computing in High Energy and Nuclear Physics (CHEP) was held at Academia Sinica in Taipei from 18-22 October 2010. CHEP is a major series of international conferences for physicists and computing professionals from the worldwide High Energy and Nuclear Physics community, Computer Science, and Information Technology. The CHEP conference provides an international forum to exchange information on computing progress and needs for the community, and to review recent, ongoing and future activities. CHEP conferences are held at roughly 18 month intervals, alternating between Europe, Asia, America and other parts of the world. Recent CHEP conferences have been held in Prague, Czech Republic (2009); Victoria, Canada (2007); Mumbai, India (2006); Interlaken, Switzerland (2004); San Diego, California (2003); Beijing, China (2001); and Padova, Italy (2000). CHEP 2010 was organized by the Academia Sinica Grid Computing Centre. There was an International Advisory Committee (IAC) setting the overall themes of the conference, a Programme Committee (PC) responsible for the content, as well as a Conference Secretariat responsible for the conference infrastructure. There were over 500 attendees, with a program that included plenary sessions of invited speakers, a number of parallel sessions comprising around 260 oral and 200 poster presentations, and industrial exhibitions. We thank all the presenters for the excellent scientific content of their contributions to the conference. Conference tracks covered topics on Online Computing, Event Processing, Software Engineering, Data Stores and Databases, Distributed Processing and Analysis, Computing Fabrics and Networking Technologies, Grid and Cloud Middleware, and Collaborative Tools. The conference included excursions to various attractions in Northern Taiwan, including Sanhsia Tsu Shih Temple, Yingko, Chiufen Village, the Northeast Coast National Scenic Area, Keelung, Yehliu Geopark, and Wulai Aboriginal Village

  6. Computational analysis of supercritical carbon dioxide flow around a turbine and compressor BLADE

    International Nuclear Information System (INIS)

    Kim, Tae W.; Kim, Nam H.; Suh, Kune Y.; Kim, Seung O.

    2007-01-01

    The turbine and compressor isentropic efficiencies are among the major parameters affecting the overall Brayton cycle efficiency. Thus, optimal turbine and compressor design should contribute to the economics of future nuclear fission and fusion energy systems. A computational analysis was performed using CFX for the supercritical carbon dioxide (SCO2) flow around turbine and compressor blades to check the potential efficiency of the turbine and compressor, which determines such basic design values as the blade (or impeller) and nozzle (or diffuser) types, blade height, and minimum and maximum radii of the hub and tip. Basic design values of the turbine and compressor blades, based on the Argonne National Laboratory (ANL) design code, were generated with ANSYS BladeGenTM. The boundary conditions were based on the KALIMER-600 secondary loop. Optimal SCO2 turbine and compressor blades achieving a high efficiency of 90% were developed through the computational analysis. (author)

  7. On-Site Renewable Energy and Green Buildings: A System-Level Analysis.

    Science.gov (United States)

    Al-Ghamdi, Sami G; Bilec, Melissa M

    2016-05-03

    Adopting a green building rating system (GBRS) that strongly considers use of renewable energy can have important environmental consequences, particularly in developing countries. In this paper, we studied on-site renewable energy and GBRSs at the system level to explore potential benefits and challenges. While we have focused on GBRSs, the findings can offer additional insight for renewable incentives across sectors. An energy model was built for 25 sites to compute the potential solar and wind power production on-site, given the building footprint and regional climate. A life-cycle approach and cost analysis were then completed to analyze the environmental and economic impacts. Environmental impacts of renewable energy varied dramatically between sites; in some cases the environmental benefits were limited despite the significant economic burden of the on-site renewable systems, and vice versa. Our recommendation for GBRSs, and broader policies and regulations, is to require buildings with higher environmental impacts to achieve higher levels of energy performance and on-site renewable energy utilization, instead of fixed percentages.

  8. IUE Data Analysis Software for Personal Computers

    Science.gov (United States)

    Thompson, R.; Caplinger, J.; Taylor, L.; Lawton, P.

    1996-01-01

    This report summarizes the work performed for the program titled, "IUE Data Analysis Software for Personal Computers" awarded under Astrophysics Data Program NRA 92-OSSA-15. The work performed was completed over a 2-year period starting in April 1994. As a result of the project, 450 IDL routines and eight database tables are now available for distribution for Power Macintosh computers and Personal Computers running Windows 3.1.

  9. LHCb Distributed Data Analysis on the Computing Grid

    CERN Document Server

    Paterson, S; Parkes, C

    2006-01-01

    LHCb is one of the four Large Hadron Collider (LHC) experiments based at CERN, the European Organisation for Nuclear Research. The LHC experiments will start taking an unprecedented amount of data when they come online in 2007. Since no single institute has the compute resources to handle this data, resources must be pooled to form the Grid. Where the Internet has made it possible to share information stored on computers across the world, Grid computing aims to provide access to computing power and storage capacity on geographically distributed systems. LHCb software applications must work seamlessly on the Grid allowing users to efficiently access distributed compute resources. It is essential to the success of the LHCb experiment that physicists can access data from the detector, stored in many heterogeneous systems, to perform distributed data analysis. This thesis describes the work performed to enable distributed data analysis for the LHCb experiment on the LHC Computing Grid.

  10. Analysis of molten salt thermal-hydraulics using computational fluid dynamics

    International Nuclear Information System (INIS)

    Yamaji, B.; Csom, G.; Aszodi, A.

    2003-01-01

    Partitioning and transmutation is expected to be a promising option for solving the problem of high-level radioactive waste, and application of this technology could also extend the possibilities of nuclear energy. A large number of liquid-fuelled reactor concepts and accelerator-driven subcritical systems have been proposed as transmuters. Several of these consider fluoride-based molten salts as the liquid fuel and coolant medium. The thermal-hydraulic behaviour of these systems is expected to be fundamentally different from that of the widely used water-cooled reactors with solid fuel. For large flow domains, three-dimensional thermal-hydraulic analysis appears to be the applicable method. Since the fuel is also the coolant medium, a strong coupling between neutronics and thermal-hydraulics can be expected as well. In the present paper the application of Computational Fluid Dynamics to three-dimensional thermal-hydraulic simulations of molten salt reactor concepts is introduced. In our past and recent work, several calculations were carried out to investigate the capabilities of Computational Fluid Dynamics through the analysis of different molten salt reactor concepts. A homogeneous single-region molten salt reactor concept is studied and optimised. Another single-region reactor concept is also introduced; this concept has internal heat exchangers in the flow domain, and the molten salt is circulated by natural convection. The analysis of the MSRE experiment is also part of our work, since it forms a good background from the validation point of view. In the paper the results of the Computational Fluid Dynamics calculations for these concepts are presented. In further work our objective is to investigate the thermal-hydraulics of the multi-region molten salt reactor. (Authors)

  11. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  12. Computational intelligence for big data analysis frontier advances and applications

    CERN Document Server

    Dehuri, Satchidananda; Sanyal, Sugata

    2015-01-01

    The work presented in this book is a combination of theoretical advancements in big data analysis, cloud computing, and their potential applications in scientific computing. The theoretical advancements are supported with illustrative examples and their applications in handling real-life problems, mostly drawn from real-life situations. The book discusses major issues pertaining to big data analysis using computational intelligence techniques, together with some issues of cloud computing. An elaborate bibliography is provided at the end of each chapter. The material in this book includes concepts, figures, graphs, and tables to guide researchers in the areas of big data analysis and cloud computing.

  13. Computer use and carpal tunnel syndrome: A meta-analysis.

    Science.gov (United States)

    Shiri, Rahman; Falah-Hassani, Kobra

    2015-02-15

    Studies have reported contradictory results on the role of keyboard or mouse use in carpal tunnel syndrome (CTS). This meta-analysis aimed to assess whether computer use causes CTS. Literature searches were conducted in several databases until May 2014. Twelve studies qualified for a random-effects meta-analysis, and heterogeneity and publication bias were assessed. In a meta-analysis of six studies (N=4964) that compared computer workers with the general population or other occupational populations, computer/typewriter use overall (pooled odds ratio (OR)=0.72, 95% confidence interval (CI) 0.58-0.90), as well as the comparisons of ≥1 vs. <1 and ≥4 vs. <4 hours of use per day, showed no positive association with CTS. In studies that compared computer workers with each other, CTS was associated with computer/typewriter use (pooled OR=1.34, 95% CI 1.08-1.65), mouse use (OR=1.93, 95% CI 1.43-2.61), frequent computer use (OR=1.89, 95% CI 1.15-3.09), frequent mouse use (OR=1.84, 95% CI 1.18-2.87) and years of computer work (OR=1.92, 95% CI 1.17-3.17 for long vs. short). There was no evidence of publication bias for either type of study. Studies that compared computer workers with the general population or several occupational groups did not control their estimates for occupational risk factors; office workers with no or little computer use are thus a more appropriate comparison group than the general population or several occupational groups. This meta-analysis suggests that excessive computer use, particularly mouse usage, might be a minor occupational risk factor for CTS. Further prospective studies among office workers with objectively assessed keyboard and mouse use, and with CTS symptoms or signs confirmed by a nerve conduction study, are needed. Copyright © 2014 Elsevier B.V. All rights reserved.
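
    A random-effects pooling step of the kind used above can be sketched with the DerSimonian-Laird estimator. The study values below are placeholders, not data from the meta-analysis.

```python
import numpy as np

# DerSimonian-Laird random-effects pooling of odds ratios.
# The ORs and CIs below are made-up placeholders, not values from the paper.
or_ci = [(1.9, 1.2, 3.0), (1.4, 0.9, 2.2), (2.1, 1.1, 4.0)]  # (OR, low, high)

y = np.array([np.log(o) for o, _, _ in or_ci])          # log odds ratios
se = np.array([(np.log(hi) - np.log(lo)) / (2 * 1.96)   # SE from 95% CI width
               for _, lo, hi in or_ci])
w = 1.0 / se**2                                          # fixed-effect weights

# Cochran's Q and the DL estimate of between-study variance tau^2.
ybar = np.sum(w * y) / np.sum(w)
Q = np.sum(w * (y - ybar) ** 2)
k = len(y)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (Q - (k - 1)) / c)

w_re = 1.0 / (se**2 + tau2)                              # random-effects weights
pooled = np.sum(w_re * y) / np.sum(w_re)
se_pooled = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled OR = {np.exp(pooled):.2f} "
      f"(95% CI {np.exp(pooled - 1.96 * se_pooled):.2f}-"
      f"{np.exp(pooled + 1.96 * se_pooled):.2f})")
```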

  14. Gasohol and energy analysis

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, M A

    1983-03-01

    Energy analysis contributed to the public debate on the gasohol programme in the U.S., where this analysis became a legal requirement. The published energy analyses for gasohol are reviewed and their inherent assumptions and data sources assessed. The analyses are normalised to S.I. units to facilitate comparisons. The process of rationalising the various treatments uncovered areas of uncertainty, particularly in the methodologies that could be used to analyse some parts of the process. Although the definitive study has still to be written, the consensus is that maize to fuel ethanol via the traditional fermentation route is a net consumer of energy. (13 refs.)

  15. Time Series Analysis of Monte Carlo Fission Sources - I: Dominance Ratio Computation

    International Nuclear Information System (INIS)

    Ueki, Taro; Brown, Forrest B.; Parsons, D. Kent; Warsa, James S.

    2004-01-01

    In the nuclear engineering community, the error propagation of the Monte Carlo fission source distribution through cycles is known to be a linear Markov process when the number of histories per cycle is sufficiently large. In the statistics community, linear Markov processes with linear observation functions are known to have an autoregressive moving average (ARMA) representation of orders p and p - 1. Therefore, one can perform ARMA fitting of the binned Monte Carlo fission source in order to compute physical and statistical quantities relevant to nuclear criticality analysis. In this work, the ARMA fitting of a binary Monte Carlo fission source has been successfully developed as a method to compute the dominance ratio, i.e., the ratio of the second-largest to the largest eigenvalues. The method is free of binning mesh refinement and does not require the alteration of the basic source iteration cycle algorithm. Numerical results are presented for problems with one-group isotropic, two-group linearly anisotropic, and continuous-energy cross sections. Also, a strategy for the analysis of eigenmodes higher than the second-largest eigenvalue is demonstrated numerically
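
    As a rough illustration of the approach (not the authors' implementation), one can treat the binned source in a single bin as a time series over cycles, fit an ARMA(p, p-1) model, and read an estimate of the dominance ratio off the largest autoregressive root. Here the "source" is a synthetic AR(1) series with a known decay of 0.8, and the ARIMA class from statsmodels stands in for the fitting step.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

# Illustrative sketch (not the authors' code): fit ARMA(2,1), the p=2 case of
# the ARMA(p, p-1) representation, to a cycle-by-cycle source series and take
# the dominance ratio as the magnitude of the largest AR root.
rng = np.random.default_rng(1)
true_dr = 0.8                       # assumed dominance ratio of the toy problem
x = np.zeros(5000)
for t in range(1, x.size):          # toy AR(1) stand-in for the binned source
    x[t] = true_dr * x[t - 1] + rng.normal()

res = ARIMA(x, order=(2, 0, 1)).fit()
roots = np.roots(np.r_[1.0, -res.arparams])    # eigenvalues of the AR part
print("estimated dominance ratio:", max(abs(roots)))   # ~0.8
```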

  16. Automatic analysis of gamma spectra using a desk computer

    International Nuclear Information System (INIS)

    Rocca, H.C.

    1976-10-01

    A code for the analysis of gamma spectra obtained with a Ge(Li) detector was developed for use with a desk computer (Hewlett-Packard Model 9810 A). The process is performed in a totally automatic way: the data are conveniently smoothed and the background is generated by a convolution equation. A calibration of the equipment with well-known standard sources gives the data needed to fit, by least squares, a third-degree polynomial relating energy to peak position. Criteria are given for determining whether a given group of values constitutes a peak and whether it is a double line. All peaks are fitted to a Gaussian curve and, if necessary, decomposed into their components. Data entry is by punched tape, ASCII code. An alphanumeric printer provides (a) the position of the peak and its energy, (b) its resolution if it is larger than expected, and (c) the area of the peak with its statistical error determined by the method of Wasson. Optionally, the complete spectrum with the determined background can be plotted. (author) [es
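
    The calibration step described above, a third-degree polynomial relating energy to peak position fitted by least squares, can be sketched in a few lines; the channel/energy pairs below are made-up placeholders for standard-source peaks.

```python
import numpy as np

# Third-degree least-squares energy calibration, as described above.
# Channel/energy pairs are made-up placeholders, not real calibration data.
channels = np.array([242.0, 611.0, 1332.0, 2103.0, 2754.0])
energies = np.array([121.8, 344.3, 778.9, 1173.2, 1332.5])   # keV

coeffs = np.polyfit(channels, energies, deg=3)   # E(ch) = c3*ch^3 + ... + c0
calib = np.poly1d(coeffs)

peak_position = 964.1                            # centroid from a Gaussian fit
print(f"peak at channel {peak_position} -> {calib(peak_position):.1f} keV")
```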

  17. Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments

    Directory of Open Access Journals (Sweden)

    Sonia Yassa

    2013-01-01

    Full Text Available We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on optimization constrained by time or cost, without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and to present a hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate at different supply voltage levels by sacrificing clock frequency. Operating at multiple voltages involves a compromise between the quality of schedules and energy consumption. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach.
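
    The DVFS trade-off the paper exploits can be illustrated with the textbook CMOS model, in which dynamic power scales as C·V²·f while execution time scales as 1/f; the operating points below are hypothetical, not values from the paper.

```python
# Sketch of the DVFS energy/time trade-off (textbook CMOS model, assumptions
# mine, not the paper's exact formulation): dynamic power ~ C * V^2 * f, and a
# task of `cycles` clock cycles takes cycles / f seconds at frequency f.
levels = [            # hypothetical (voltage V, frequency GHz) operating points
    (1.2, 2.0),
    (1.0, 1.4),
    (0.8, 0.8),
]
C = 1.0e-9            # lumped switched capacitance (arbitrary units)
cycles = 4.0e9        # task length in clock cycles

for v, f_ghz in levels:
    f = f_ghz * 1e9
    t = cycles / f                    # execution time grows as f drops
    e = C * v**2 * f * t              # energy = power * time = C * V^2 * cycles
    print(f"V={v:.1f} V, f={f_ghz:.1f} GHz: time {t:.2f} s, energy {e:.2f} J")
```

    In this simple model the energy per task is C·V²·cycles, so lowering the voltage cuts energy quadratically at the cost of a longer run time, which is exactly the compromise the scheduler above must weigh against QoS deadlines.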

  18. Multi-Objective Approach for Energy-Aware Workflow Scheduling in Cloud Computing Environments

    Science.gov (United States)

    Kadima, Hubert; Granado, Bertrand

    2013-01-01

    We address the problem of scheduling workflow applications on heterogeneous computing systems like cloud computing infrastructures. In general, cloud workflow scheduling is a complex optimization problem which requires considering different criteria so as to meet a large number of QoS (Quality of Service) requirements. Traditional research in workflow scheduling mainly focuses on optimization constrained by time or cost, without paying attention to energy consumption. The main contribution of this study is to propose a new approach for multi-objective workflow scheduling in clouds, and to present a hybrid PSO algorithm to optimize the scheduling performance. Our method is based on the Dynamic Voltage and Frequency Scaling (DVFS) technique to minimize energy consumption. This technique allows processors to operate at different supply voltage levels by sacrificing clock frequency. Operating at multiple voltages involves a compromise between the quality of schedules and energy consumption. Simulation results on synthetic and real-world scientific applications highlight the robust performance of the proposed approach. PMID:24319361

  19. Characteristics of high-energy X-rays using computed radiography systems

    International Nuclear Information System (INIS)

    Matsumoto, Mitsuhiro; Mori, Yoshinobu

    1993-01-01

    Computed radiography (CR) with storage phosphor technology has advanced remarkably. Its application to the field of radiotherapy has also been discussed, and studies have been made on shifting from the film/screen system to portal filming using the CR system. The authors started full-scale research on CR portal imaging with high-energy X-rays (10 MV) in 1989. This paper deals with the characteristics of high-energy X-rays using the CR system. The digital characteristic curve corresponded with the calculated values for the dynamic range (L-value). The monitor unit (MU) counts at the pixel (digital) value saturation point were: L-value 0.5, 28 MU; L-value 1.0, 50 MU; L-value 2.0, 167 MU; L-value 3.0, 450 MU; L-value 4.0, 1614 MU. The image contrast with the Mix-Dp phantom was about 300 pixel values at L-value 0.5 and about 30 pixel values at L-value 4.0, for a phantom 10 to 18 cm in thickness. The optimum L-value was 0.5 with a straight-type tone-scale for CR portal imaging using the graphy count mode, while the optimum L-value was 4.0 with a rectangular-wave-type tone-scale for CR portal imaging using therapeutic doses; these results were also described by histogram analysis. (author)

  20. Contributing to global computing platform: gliding, tunneling standard services and high energy physics application

    International Nuclear Information System (INIS)

    Lodygensky, O.

    2006-09-01

    Centralized computers have been replaced by 'client/server' distributed architectures, which are in turn in competition with new distributed systems known as 'peer to peer'. These new technologies are widespread, and trade, industry and the research world have understood the new goals involved and invested massively in these new technologies, known as the 'grid'. One of the fields concerned is computation, which is the subject of the work presented here. At the Paris Orsay University, a synergy emerged between the Computing Science Laboratory (LRI) and the Linear Accelerator Laboratory (LAL) on grid infrastructure, opening new fields of investigation for the former and new high-performance computing perspectives for the latter. The work presented here is the result of this multidisciplinary collaboration. It is based on XtremWeb, the LRI global computing platform. We first present a state of the art of large-scale distributed systems, their principles, and their service-based architecture. We then introduce XtremWeb and detail the modifications and improvements we had to specify and implement to achieve our goals. We present two different studies: first, interconnecting grids in order to generalize resource sharing, and second, making it possible to use legacy services on such platforms. We finally explain how a research community such as the high-energy cosmic-ray detection community can gain access to these services, and detail the Monte Carlo and data analysis processes run over the grids. (author)

  1. Computer based workstation for development of software for high energy physics experiments

    International Nuclear Information System (INIS)

    Ivanchenko, I.M.; Sedykh, Yu.V.

    1987-01-01

    Methodological principles and results of a successful attempt to create, on the basis of the IBM PC/AT personal computer, effective means for the development of programs for high energy physics experiments are analysed. The results obtained make it possible to combine the best properties of, and the positive experience accumulated with, existing time-sharing systems with the high quality of data representation, reliability and convenience of personal computer applications.

  2. 1987 CERN school of computing

    International Nuclear Information System (INIS)

    Verkerk, C.

    1988-01-01

    These Proceedings contain written versions of most of the lectures delivered at the 1987 CERN School of Computing. Five lecture series treated various aspects of data communications: integrated services networks, standard LANs and optical LANs, open systems networking in practice, and distributed operating systems. Present and future computer architectures were covered and an introduction to vector processing was given, followed by lectures on vectorization of pattern recognition and Monte Carlo code. Aspects of computing in high-energy physics were treated in lectures on data acquisition and analysis at LEP, on data-base systems in high-energy physics experiments, and on Fastbus. The experience gained with personal work stations was also presented. Various other topics were covered: the use of computers in number theory and in astronomy, fractals, and computer security and access control. (orig.)

  3. Quantifying the geopolitical dimension of energy risks: A tool for energy modelling and planning

    International Nuclear Information System (INIS)

    Muñoz, Beatriz; García-Verdugo, Javier; San-Martín, Enrique

    2015-01-01

    Energy risk and security are topical issues in energy analysis and policy. However, the quantitative analysis of energy risk presents significant methodological difficulties, especially when dealing with certain of its more qualitative dimensions. The aim of this paper is to quantitatively estimate the geopolitical risk of energy supply with the help of a multivariate statistical technique, factor analysis. Four partial energy risk factors were computed for 122 countries, which were subsequently aggregated to form the composite GESRI (Geopolitical Energy Supply Risk Index). The results demonstrate that advanced economies present a lower level of geopolitical energy risk, especially countries with energy resources, while less-developed countries register higher levels of risk regardless of their energy production. Although this indicator is computed for countries, it can be aggregated for regions or corridors, and it could also be applied to model and scenario building. The different uses of the GESRI could eventually lead to practical implications in the energy policy field, as well as in the energy planning and energy management areas. - Highlights: • We quantitatively estimate the multidimensional geopolitical risk of energy supply. • Factor analysis was used to reveal energy risk, a variable not directly observable. • Advanced economies with energy resources present the lowest level of energy risk. • Less-developed countries obtain high risk values even when they are energy producers. • The proposed index can be used for energy planning and energy management purposes
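
    A construction in the spirit of the GESRI (my reading of the method, not the authors' code) extracts latent factors from a country-by-indicator matrix and aggregates the factor scores into a composite index; the data below are random placeholders.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

# Sketch of a GESRI-style construction: extract latent risk factors from
# country-level indicators, then aggregate factor scores into a composite
# index. The indicator matrix is a random placeholder, not real data.
rng = np.random.default_rng(0)
X = rng.normal(size=(122, 10))        # 122 countries x 10 energy-risk indicators

Z = StandardScaler().fit_transform(X)
fa = FactorAnalysis(n_components=4, random_state=0)
scores = fa.fit_transform(Z)          # per-country scores on 4 partial factors

# Weight each factor by its explained variance (sum of squared loadings).
weights = (fa.components_ ** 2).sum(axis=1)
weights /= weights.sum()
gesri = scores @ weights              # composite geopolitical risk index
print("first five composite scores:", np.round(gesri[:5], 3))
```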

  4. Energy Analysis Program 1990 annual report

    International Nuclear Information System (INIS)

    1992-01-01

    The Energy Analysis Program has played an active role in the analysis and discussion of energy and environmental issues at several levels: (1) at the international level, with programs such as developing scenarios for long-term energy demand in developing countries and organizing and leading an analytic effort, 'Energy Efficiency, Developing Countries, and Eastern Europe,' part of a major effort to increase support for energy efficiency programs worldwide; (2) at the national level, where the Program has been responsible for assessing energy forecasts and policies affecting energy use (e.g., appliance standards, National Energy Strategy scenarios); and (3) at the state and utility levels, where the Program has been a leader in promoting integrated utility resource planning; the collaborative process has led to agreement on a new generation of utility demand-side programs in California, providing an opportunity to use the knowledge and analytic techniques of the Program's researchers. We continue to place the highest priority on analyzing energy efficiency, with particular attention given to energy use in buildings. The Program continues its active analysis of international energy issues in Asia (including China), the Soviet Union, South America, and Western Europe. Analyzing the costs and benefits of different levels of standards for residential appliances continues to be the largest single area of research within the Program. The group has developed and applied techniques for forecasting energy demand (or constructing scenarios) for the United States. We have built a new model of industrial energy demand, are in the process of making major changes in our tools for forecasting residential energy demand, have built an extensive and documented energy conservation supply curve of residential energy use, and are beginning an analysis of energy-demand forecasting for commercial buildings.

  5. Energy Analysis Program 1990 annual report

    Energy Technology Data Exchange (ETDEWEB)

    1992-01-01

    The Energy Analysis Program has played an active role in the analysis and discussion of energy and environmental issues at several levels: (1) at the international level, with programs such as developing scenarios for long-term energy demand in developing countries and organizing and leading an analytic effort, 'Energy Efficiency, Developing Countries, and Eastern Europe,' part of a major effort to increase support for energy efficiency programs worldwide; (2) at the national level, where the Program has been responsible for assessing energy forecasts and policies affecting energy use (e.g., appliance standards, National Energy Strategy scenarios); and (3) at the state and utility levels, where the Program has been a leader in promoting integrated utility resource planning; the collaborative process has led to agreement on a new generation of utility demand-side programs in California, providing an opportunity to use the knowledge and analytic techniques of the Program's researchers. We continue to place the highest priority on analyzing energy efficiency, with particular attention given to energy use in buildings. The Program continues its active analysis of international energy issues in Asia (including China), the Soviet Union, South America, and Western Europe. Analyzing the costs and benefits of different levels of standards for residential appliances continues to be the largest single area of research within the Program. The group has developed and applied techniques for forecasting energy demand (or constructing scenarios) for the United States. We have built a new model of industrial energy demand, are in the process of making major changes in our tools for forecasting residential energy demand, have built an extensive and documented energy conservation supply curve of residential energy use, and are beginning an analysis of energy-demand forecasting for commercial buildings.

  6. Report of the Subpanel on Theoretical Computing of the High Energy Physics Advisory Panel

    International Nuclear Information System (INIS)

    1984-09-01

    The Subpanel on Theoretical Computing of the High Energy Physics Advisory Panel (HEPAP) was formed in July 1984 to make recommendations concerning the need for state-of-the-art computing for theoretical studies. The specific Charge to the Subpanel is attached as Appendix A, and the full membership is listed in Appendix B. For the purposes of this study, theoretical computing was interpreted as encompassing both investigations in the theory of elementary particles and computation-intensive aspects of accelerator theory and design. Many problems in both areas are well suited to realizing the advantages of vectorized processing. The body of the Subpanel Report is organized as follows. The Introduction, Section I, explains some of the goals of computational physics as it applies to elementary particle theory and accelerator design. Section II reviews the availability of mainframe supercomputers to researchers in the United States, in Western Europe, and in Japan. Other promising approaches to large-scale computing are summarized in Section III. Section IV details the current computing needs for problems in high energy theory and for beam dynamics studies. The Subpanel Recommendations appear in Section V. The Appendices attached to this Report give the Charge to the Subpanel, the Subpanel membership, and some background information on the financial implications of establishing a supercomputer center.

  7. Optimizing the Number of Cooperating Terminals for Energy Aware Task Computing in Wireless Networks

    DEFF Research Database (Denmark)

    Olsen, Anders Brødløs; Fitzek, Frank H. P.; Koch, Peter

    2005-01-01

    It is generally accepted that energy consumption is a significant design constraint for mobile handheld systems; methods that optimize energy consumption to make better use of the restricted battery resources are therefore clearly motivated. A novel concept of distributed task computing was previously proposed (D2VS), based on the overall idea of selective distribution of tasks among terminals. In this paper the optimal number of terminals for cooperative task computing in a wireless network is investigated. The paper presents an energy model for the proposed scheme, taking into account the energy consumption of the terminals with respect to their workload and the overhead of distributing tasks among terminals. The paper shows that the number of cooperating terminals is in general limited to a few, though it varies with the various system parameters.
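
    A toy version of such an energy model (functional form assumed here, not taken from the paper) already shows why the optimum is a small number of terminals: the per-terminal computation share falls as 1/N while the distribution overhead grows with N.

```python
# Toy energy model in the spirit of the paper (functional form assumed):
# sharing a task of energy cost W across N terminals cuts each terminal's
# computation energy to W/N, but distributing it costs overhead c per helper.
def total_energy(n, w=10.0, c=0.8):
    compute = w / n            # per-terminal computation energy share
    overhead = c * (n - 1)     # communication/coordination overhead
    return compute + overhead

candidates = range(1, 11)
for n in candidates:
    print(f"N={n}: E={total_energy(n):.2f}")
print("optimal number of cooperating terminals:",
      min(candidates, key=total_energy))   # a small N, here 4
```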

  8. Energy and supply concepts. Pt. 3. Energie- und Versorgungskonzepte. T. 3

    Energy Technology Data Exchange (ETDEWEB)

    Kolodziejczyk, K

    1989-01-01

    Part three deals with the classification of energy and supply concepts (primary and secondary energy sources, energy conversion processes). A discussion of classification criteria (four criteria, different process levels) is followed by a description of process and energy flows (a flowsheet showing the energy flow of an interconnected system combining electric power, steam and heat supplies and refrigeration), a presentation of concrete energy and supply concepts (flow sheet, selection and evaluation criteria, situation and process analysis, cost-benefit analysis, use of computers, system value analysis), and approaches and solutions (decisions). The complex task of finding appropriate supply solutions is found to depend on the knowledge, creativity, and methodical skill of those in charge. (HWJ).

  9. Plasma geometric optics analysis and computation

    International Nuclear Information System (INIS)

    Smith, T.M.

    1983-01-01

    Important practical applications in the generation, manipulation, and diagnosis of laboratory thermonuclear plasmas have created a need for elaborate computational capabilities in the study of high frequency wave propagation in plasmas. A reduced description of such waves suitable for digital computation is provided by the theory of plasma geometric optics. The existing theory is beset by a variety of special cases in which the straightforward analytical approach fails, and has been formulated with little attention to problems of numerical implementation of that analysis. The standard field equations are derived for the first time from kinetic theory. A discussion of certain terms previously, and erroneously, omitted from the expansion of the plasma constitutive relation is given. A powerful but little known computational prescription for determining the geometric optics field in the neighborhood of caustic singularities is rigorously developed, and a boundary layer analysis for the asymptotic matching of the plasma geometric optics field across caustic singularities is performed for the first time with considerable generality. A proper treatment of birefringence is detailed, wherein a breakdown of the fundamental perturbation theory is identified and circumvented. A general ray tracing computer code suitable for applications to radiation heating and diagnostic problems is presented and described

  10. Economic analysis of nuclear energy

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Han Myung; Lee, M.K.; Moon, K.H.; Kim, S.S.; Lim, C.Y.; Song, K.D.; Kim, H

    2001-12-01

    The objective of this study is to evaluate the contribution of nuclear energy to economical energy use, based on a survey of the internal and external environmental changes involving nuclear energy that have occurred in recent years. The study summarizes these recent changes, such as sustainable development issues, the climate change talks, the Doha round, and the newly created electricity fund. It also presents case studies on nuclear energy based on the environmental analysis performed above. The case studies cover the following topics: the role of nuclear power in energy, environment and economy; estimation of the environmental external cost in the electricity generation sector; an economic comparison of hydrogen production; and an inter-industry analysis of nuclear power generation.

  11. Economic analysis of nuclear energy

    International Nuclear Information System (INIS)

    Lee, Han Myung; Lee, M.K.; Moon, K.H.; Kim, S.S.; Lim, C.Y.; Song, K.D.; Kim, H.

    2001-12-01

    The objective of this study is to evaluate the contribution of nuclear energy to economical energy use, based on a survey of the internal and external environmental changes involving nuclear energy that have occurred in recent years. The study summarizes these recent changes, such as sustainable development issues, the climate change talks, the Doha round, and the newly created electricity fund. It also presents case studies on nuclear energy based on the environmental analysis performed above. The case studies cover the following topics: the role of nuclear power in energy, environment and economy; estimation of the environmental external cost in the electricity generation sector; an economic comparison of hydrogen production; and an inter-industry analysis of nuclear power generation.

  12. Computational analysis of difenoconazole interaction with soil chitinases

    International Nuclear Information System (INIS)

    Vlǎdoiu, D L; Filimon, M N; Ostafe, V; Isvoran, A

    2015-01-01

    This study focuses on the investigation of the potential binding of the fungicide difenoconazole to soil chitinases using a computational approach. Computational characterization of the substrate binding sites of Serratia marcescens and Bacillus cereus chitinases using the Fpocket tool reflects the role of hydrophobic residues in substrate binding and the high local hydrophobic density of both sites. A molecular docking study reveals that difenoconazole is able to bind to the active sites of Serratia marcescens and Bacillus cereus chitinases, with comparable binding energies.

  13. Process for computing geometric perturbations for probabilistic analysis

    Science.gov (United States)

    Fitch, Simeon H. K. [Charlottesville, VA; Riha, David S [San Antonio, TX; Thacker, Ben H [San Antonio, TX

    2012-04-10

    A method for computing geometric perturbations for probabilistic analysis. The probabilistic analysis is based on finite element modeling, in which uncertainties in the modeled system are represented by changes in the nominal geometry of the model, referred to as "perturbations". These changes are accomplished using displacement vectors, which are computed for each node of a region of interest and are based on mean-value coordinate calculations.

  14. Hybrid soft computing systems for electromyographic signals analysis: a review.

    Science.gov (United States)

    Xie, Hong-Bo; Guo, Tianruo; Bai, Siwei; Dokos, Socrates

    2014-02-03

    Electromyographic (EMG) signals are bio-signals collected from human skeletal muscle. Analysis of EMG signals has been widely used to detect human movement intent, control various human-machine interfaces, diagnose neuromuscular diseases, and model the neuromusculoskeletal system. With the advances of artificial intelligence and soft computing, many sophisticated techniques have been proposed for these purposes. Hybrid soft computing systems (HSCS), the integration of these different techniques, aim to further improve the effectiveness, efficiency, and accuracy of EMG analysis. This paper reviews and compares key combinations of neural networks, support vector machines, fuzzy logic, evolutionary computing, and swarm intelligence for EMG analysis. Our suggestions on the possible future development of HSCS in EMG analysis are also given in terms of basic soft computing techniques, further combinations of these techniques, and their other applications in EMG analysis.

  15. Transport Energy Impact Analysis; NREL (National Renewable Energy Laboratory)

    Energy Technology Data Exchange (ETDEWEB)

    Gonder, J.

    2015-05-13

    Presented at the Sustainable Transportation Energy Pathways Spring 2015 Symposium on May 13, 2015, this presentation by Jeff Gonder of the National Renewable Energy Laboratory (NREL) provides information about NREL's transportation energy impact analysis of connected and automated vehicles.

  16. Computer-Based Interaction Analysis with DEGREE Revisited

    Science.gov (United States)

    Barros, B.; Verdejo, M. F.

    2016-01-01

    We review our research with "DEGREE" and analyse how our work has impacted the collaborative learning community since 2000. Our research is framed within the context of computer-based interaction analysis and the development of computer-supported collaborative learning (CSCL) tools. We identify some aspects of our work which have been…

  17. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1986-01-01

    An automated procedure for performing sensitivity analysis has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with direct and adjoint sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Computational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies.
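
    GRESS generated derivative code at compile time; the same "computer calculus" idea can be sketched at run time with dual numbers, i.e. forward-mode automatic differentiation. This is an illustrative analogue, not the GRESS implementation.

```python
# Forward-mode automatic differentiation with dual numbers: a small runtime
# analogue of what GRESS did by source transformation (not the GRESS code).
class Dual:
    def __init__(self, value, deriv=0.0):
        self.value, self.deriv = value, deriv

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value + other.value, self.deriv + other.deriv)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.value * other.value,
                    self.deriv * other.value + self.value * other.deriv)

    __rmul__ = __mul__

def model(k):
    # Stand-in response function; imagine the output of an iterative solver.
    return k * k + 3.0 * k + 2.0

x = Dual(2.0, 1.0)          # seed dx/dx = 1 to propagate d(model)/dx
y = model(x)
print("value:", y.value, "sensitivity dy/dx:", y.deriv)   # 12.0 and 7.0
```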

  18. Estimation of numerical uncertainty in computational fluid dynamics simulations of a passively controlled wave energy converter

    DEFF Research Database (Denmark)

    Wang, Weizhi; Wu, Minghao; Palm, Johannes

    2018-01-01

    The wave loads and the resulting motions of floating wave energy converters are traditionally computed using linear radiation-diffraction methods, yet for certain cases, such as survival conditions, phase control, and wave energy converters operating in the resonance region, more complete models are required; computational fluid dynamics simulations, which can provide them, have largely been overlooked in the wave energy sector. In this article, we apply formal verification and validation techniques to computational fluid dynamics simulations of a passively controlled point absorber, whose phase control causes the motion response to be highly nonlinear even for almost linear incident waves. First, we show that the computational fluid dynamics simulations have acceptable agreement with experimental data. We then present a verification and validation study focusing on solution verification, covering spatial and temporal discretization as well as iterative and domain convergence.
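
    Solution verification of this kind commonly estimates discretization uncertainty by Richardson extrapolation over systematically refined grids, summarized by a grid convergence index (GCI). The sketch below is generic and uses hypothetical values, not results from the article.

```python
import math

# Generic grid-convergence sketch for solution verification (values are
# hypothetical, not results from the article). Three solutions on grids
# refined by a constant ratio r give the observed order of accuracy and a
# Richardson-extrapolated estimate of the grid-independent value.
f_coarse, f_medium, f_fine = 1.220, 1.262, 1.280   # e.g., heave amplitude
r = 2.0                                            # grid refinement ratio

p = math.log(abs(f_medium - f_coarse) / abs(f_fine - f_medium)) / math.log(r)
f_exact = f_fine + (f_fine - f_medium) / (r**p - 1)   # Richardson estimate
gci_fine = 1.25 * abs((f_medium - f_fine) / f_fine) / (r**p - 1)  # Fs = 1.25

print(f"observed order p = {p:.2f}")
print(f"extrapolated value = {f_exact:.4f}")
print(f"GCI (fine grid) = {100 * gci_fine:.2f}%")
```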

  19. Computer graphics in reactor safety analysis

    International Nuclear Information System (INIS)

    Fiala, C.; Kulak, R.F.

    1989-01-01

    This paper describes a family of three computer graphics codes designed to assist the analyst in three areas: the modelling of complex three-dimensional finite element models of reactor structures; the interpretation of computational results; and the reporting of the results of numerical simulations. The purpose and key features of each code are presented. Graphics output used in actual safety analyses is shown to illustrate the capabilities of each code. 5 refs., 10 figs.

  20. PIXAN: the Lucas Heights PIXE analysis computer package

    International Nuclear Information System (INIS)

    Clayton, E.

    1986-11-01

    To fully utilise the multielement capability and short measurement time of PIXE, it is desirable to have an automated computer evaluation of the measured spectra. Because of the complex nature of PIXE spectra, a critical step in the analysis is the data reduction, in which the areas of characteristic peaks in the spectrum are evaluated. In this package the computer program BATTY is presented for such an analysis. The second step is to determine element concentrations from the characteristic peak areas in the spectrum. This requires a knowledge of the expected X-ray yield for each element in the sample. The computer program THICK provides that information for both thick and thin PIXE samples. Together, these programs form the package PIXAN used at Lucas Heights for PIXE analysis.

  1. Current Work in Energy Analysis (Energy Analysis Program -1996 Annual Report)

    Energy Technology Data Exchange (ETDEWEB)

    Energy Analysis Program

    1998-03-01

    This report describes the work that the Environmental Energy Technologies Division of Lawrence Berkeley National Laboratory has been doing most recently. One of our proudest accomplishments is the publication of Scenarios of U.S. Carbon Reductions, an analysis of the potential of energy technologies to reduce carbon emissions in the U.S. This analysis played a key role in shaping the U.S. position on climate change in the Kyoto Protocol negotiations. Our participation in the fundamental characterization of the climate change issue by the IPCC is described. We are also especially proud of our study of 'leaking electricity,' which is stimulating an international campaign for a one-watt ceiling for standby electricity losses from appliances. This ceiling has the potential to save two-thirds of the 5% of U.S. residential electricity currently expended on standby losses. The 54 vignettes contained in the following pages summarize the results of research activities ranging in scale from calculating the efficacy of individual lamp ballasts to estimating the cost-effectiveness of the national ENERGY STAR® labeling program, and ranging in location from a scoping study of energy-efficiency market transformation in California to development of an energy-efficiency project in the auto parts industry in Shandong Province, China. These are the intellectual endeavors of a talented team of researchers dedicated to public service.

  2. Energy Analysis Program 1990 annual report

    Energy Technology Data Exchange (ETDEWEB)

    1992-01-01

    The Energy Analysis Program has played an active role in the analysis and discussion of energy and environmental issues at several levels: (1) at the international level, with programs such as developing scenarios for long-term energy demand in developing countries and organizing and leading an analytic effort, 'Energy Efficiency, Developing Countries, and Eastern Europe,' part of a major effort to increase support for energy efficiency programs worldwide; (2) at the national level, where the Program has been responsible for assessing energy forecasts and policies affecting energy use (e.g., appliance standards, National Energy Strategy scenarios); and (3) at the state and utility levels, where the Program has been a leader in promoting integrated utility resource planning; the collaborative process has led to agreement on a new generation of utility demand-side programs in California, providing an opportunity to use the knowledge and analytic techniques of the Program's researchers. We continue to place the highest priority on analyzing energy efficiency, with particular attention given to energy use in buildings. The Program continues its active analysis of international energy issues in Asia (including China), the Soviet Union, South America, and Western Europe. Analyzing the costs and benefits of different levels of standards for residential appliances continues to be the largest single area of research within the Program. The group has developed and applied techniques for forecasting energy demand (or constructing scenarios) for the United States. We have built a new model of industrial energy demand, are in the process of making major changes in our tools for forecasting residential energy demand, have built an extensive and documented energy conservation supply curve of residential energy use, and are beginning an analysis of energy-demand forecasting for commercial buildings.

  3. Computer-based quantitative computed tomography image analysis in idiopathic pulmonary fibrosis: A mini review.

    Science.gov (United States)

    Ohkubo, Hirotsugu; Nakagawa, Hiroaki; Niimi, Akio

    2018-01-01

    Idiopathic pulmonary fibrosis (IPF) is the most common type of progressive idiopathic interstitial pneumonia in adults. Computer-based image analysis methods for chest computed tomography (CT) in patients with IPF include the mean CT value of the whole lungs, density histogram analysis, the density mask technique, and texture classification methods. Most of these methods offer good assessment of pulmonary function, disease progression, and mortality. Each method has merits that can be used in clinical practice. One of the texture classification methods is reported to be superior to visual CT scoring by radiologists in its correlation with pulmonary function and prediction of mortality. In this mini review, we summarize the current literature on computer-based CT image analysis in IPF and discuss its limitations and several future directions. Copyright © 2017 The Japanese Respiratory Society. Published by Elsevier B.V. All rights reserved.
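
    The density-mask and histogram methods listed above reduce to thresholding and binning the Hounsfield units of lung voxels. A sketch on synthetic data follows; the threshold values are illustrative only, since clinical cutoffs vary.

```python
import numpy as np

# Density-mask sketch on a synthetic CT volume (illustration only; clinical
# thresholds vary, and the cutoffs below are just plausible placeholders).
rng = np.random.default_rng(0)
hu = rng.normal(loc=-800, scale=120, size=(64, 64, 32))   # fake lung voxels
lung_mask = hu < -500                 # crude segmentation of aerated lung

mean_ct = hu[lung_mask].mean()                            # mean CT value
hist, edges = np.histogram(hu[lung_mask], bins=50)        # density histogram
mode_bin = 0.5 * (edges[hist.argmax()] + edges[hist.argmax() + 1])
high_att = (hu[lung_mask] > -700).mean()                  # density-mask fraction

print(f"mean lung attenuation: {mean_ct:.0f} HU")
print(f"histogram mode: {mode_bin:.0f} HU")
print(f"fraction of voxels above -700 HU: {100 * high_att:.1f}%")
```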

  4. Energy law preserving C0 finite element schemes for phase field models in two-phase flow computations

    International Nuclear Information System (INIS)

    Hua Jinsong; Lin Ping; Liu Chun; Wang Qi

    2011-01-01

    Highlights: → We study phase-field models for multi-phase flow computation. → We develop an energy-law preserving C0 FEM. → We show that the energy-law preserving method works better. → We overcome unphysical oscillations associated with the Cahn-Hilliard model. - Abstract: We develop an energy law preserving method and compute the diffusive interface (phase-field) models of Allen-Cahn and Cahn-Hilliard type, respectively, governing the motion of two-phase incompressible flows. We discretize these two models using a C0 finite element in space and a modified midpoint scheme in time. To increase the stability in the pressure variable we treat the divergence-free condition by a penalty formulation, under which the discrete energy law can still be derived for these diffusive interface models. Through an example we demonstrate that the energy law preserving method is beneficial for computing these multi-phase flow models. We also demonstrate that when the energy law preserving method is applied to the model of Cahn-Hilliard type, unphysical interfacial oscillations may occur; we examine the source of such oscillations and present a remedy to eliminate them. A few two-phase incompressible flow examples are computed to show the good performance of our method.
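
    The "energy law" at issue requires the discrete free energy to decay in time, mirroring the continuous model. A much simpler setting shows the idea: an explicit finite-difference Allen-Cahn step whose discrete energy is monitored as the run proceeds. This illustrates the concept only; it is not the paper's C0 finite element/midpoint scheme.

```python
import numpy as np

# 1D Allen-Cahn sketch that monitors the discrete free energy (illustration
# of the energy-law idea only, not the paper's finite element method).
n, L = 200, 1.0
dx = L / n
eps, dt = 0.02, 1e-5          # interface width and a stable explicit step
x = np.linspace(0, L, n, endpoint=False)
phi = 0.1 * np.sin(2 * np.pi * x) + 0.05 * np.cos(6 * np.pi * x)

def energy(p):
    grad = (np.roll(p, -1) - p) / dx                  # periodic forward diff
    return np.sum(0.5 * eps**2 * grad**2 + 0.25 * (p**2 - 1) ** 2) * dx

for step in range(2001):
    lap = (np.roll(phi, -1) - 2 * phi + np.roll(phi, 1)) / dx**2
    phi = phi + dt * (eps**2 * lap - (phi**3 - phi))  # gradient-flow step
    if step % 500 == 0:
        print(f"step {step}: E = {energy(phi):.6f}")  # should decrease
```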

  5. Hybrid soft computing systems for electromyographic signals analysis: a review

    Science.gov (United States)

    2014-01-01

    Electromyographic (EMG) signals are bio-signals collected from human skeletal muscle. Analysis of EMG signals has been widely used to detect human movement intent, control various human-machine interfaces, diagnose neuromuscular diseases, and model the neuromusculoskeletal system. With the advances of artificial intelligence and soft computing, many sophisticated techniques have been proposed for these purposes. Hybrid soft computing systems (HSCS), the integration of these different techniques, aim to further improve the effectiveness, efficiency, and accuracy of EMG analysis. This paper reviews and compares key combinations of neural networks, support vector machines, fuzzy logic, evolutionary computing, and swarm intelligence for EMG analysis. Our suggestions on the possible future development of HSCS in EMG analysis are also given in terms of basic soft computing techniques, further combinations of these techniques, and their other applications in EMG analysis. PMID:24490979

  6. Multicriteria analysis of thermal and energy systems for tourist facilities

    International Nuclear Information System (INIS)

    Raguzin, I.

    1999-01-01

    The introductory part of the paper briefly presents the technological, economic and environmental optimisation procedure for thermal and energy systems of tourist facilities, using a multicriteria ranking method to choose an optimum solution. The procedure described includes a systematic analysis of the system's structure, its energy-mass balance, its balance of costs, an environmental impact analysis and the choice of an optimum solution. Special attention is paid to the quantification of criteria for the choice of solution and to the most appropriate ranking method. The procedure's application is illustrated on the example of a potential tourist facility on the island of Lošinj, a locality with potential for highest-category tourist development. This example includes (a) consumers (heating of rooms, preparation of hot water, heating of swimming pool water and cooling of rooms) and (b) producers (a boiler room, cooling engine-rooms, a cogeneration plant and heat pumps). The data were supplied from the project documentation for the reconstruction of the existing facilities, mainly preliminary designs. The multicriteria ranking was conducted with an appropriate computer programme. (author)

  7. Introduction Of Computational Materials Science

    International Nuclear Information System (INIS)

    Lee, Jun Geun

    2006-08-01

    This book gives descriptions of computer simulation and computational materials science, covering three typical approaches: empirical methods, such as molecular dynamics (potential energy, Newton's equation of motion, data production and analysis of results); quantum mechanical methods (the wave equation, approximations, the Hartree method, and density functional theory); and methods for treating solids, such as the pseudopotential method, tight-binding methods, the embedded atom method, the Car-Parrinello method, and combined simulation.
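
    The molecular dynamics ingredients listed above, a potential energy and Newton's equation of motion, come together in integrators such as velocity Verlet; a minimal sketch for a harmonic oscillator follows (illustrative, not an example from the book).

```python
# Minimal velocity-Verlet integration of Newton's equation of motion for a
# harmonic oscillator (illustrative only, not taken from the book).
k, m, dt = 1.0, 1.0, 0.01
x, v = 1.0, 0.0

def force(x):
    return -k * x            # F = -dU/dx for U = k x^2 / 2

f = force(x)
for step in range(1000):
    x += v * dt + 0.5 * (f / m) * dt**2     # position update
    f_new = force(x)
    v += 0.5 * (f + f_new) / m * dt         # velocity update (averaged force)
    f = f_new

energy = 0.5 * m * v**2 + 0.5 * k * x**2
print(f"x = {x:.4f}, v = {v:.4f}, total energy = {energy:.6f}")  # E stays ~0.5
```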

  8. Computational Fluid Dynamics Analysis of an Evaporative Cooling System

    Directory of Open Access Journals (Sweden)

    Kapilan N.

    2016-11-01

    Full Text Available The use of chlorofluorocarbon-based refrigerants in air-conditioning systems increases global warming and contributes to climate change. Climate change is expected to present a number of challenges for the built environment, and an evaporative cooling system is one of the simplest and most environmentally friendly cooling systems. Evaporative cooling is most widely used in summer, in both rural and urban areas of India, for human comfort. In an evaporative cooling system, the addition of water into air reduces the temperature of the air, as the energy needed to evaporate the water is taken from the air. Computational fluid dynamics, a numerical analysis technique, was used to analyse the evaporative cooling system. The CFD results match the experimental results.
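
    The temperature drop described above follows from an energy balance: the sensible heat given up by the air stream supplies the latent heat of the evaporated water. A sketch with assumed property values and flows:

```python
# Energy balance for direct evaporative cooling (property values assumed):
# sensible heat lost by the air stream supplies the latent heat of the
# evaporated water, so m_air * cp * (T_in - T_out) = m_water * h_fg.
cp = 1.006e3        # specific heat of air, J/(kg K)
h_fg = 2.45e6       # latent heat of vaporization of water, J/kg (~25 C)

m_air = 0.5         # dry-air mass flow, kg/s (hypothetical)
m_water = 0.002     # evaporated water mass flow, kg/s (hypothetical)
t_in = 38.0         # inlet dry-bulb temperature, C

t_out = t_in - (m_water * h_fg) / (m_air * cp)
print(f"outlet air temperature: {t_out:.1f} C")   # about 28.3 C
```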

  9. Energy Savings Analysis of the Proposed NYStretch-Energy Code 2018

    Energy Technology Data Exchange (ETDEWEB)

    Liu, Bing [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Zhang, Jian [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Chen, Yan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Edelson, Jim [New Buildings Inst. (NBI), Portland, OR (United States); Lyles, Mark [New Buildings Inst. (NBI), Portland, OR (United States)

    2018-01-20

    This study was conducted by the Pacific Northwest National Laboratory (PNNL) in support of the stretch energy code development led by the New York State Energy Research and Development Authority (NYSERDA). In 2017 NYSERDA developed its 2016 Stretch Code Supplement to the 2016 New York State Energy Conservation Construction Code (hereinafter referred to as "NYStretch-Energy"). NYStretch-Energy is intended as a model energy code for statewide voluntary adoption that anticipates other code advancements, culminating in the goal of a statewide Net Zero Energy Code by 2028. Since then, NYSERDA has continued to develop the NYStretch-Energy Code 2018 edition. To support this effort, PNNL conducted an energy simulation analysis to quantify the energy savings of the proposed commercial provisions of the NYStretch-Energy Code (2018) in New York. The focus of this project is a 20% improvement over existing commercial model energy codes. A key requirement of the proposed stretch code is that it be 'adoptable' as an energy code, meaning that it must align with current code scope and limitations and primarily impact building components that are currently regulated by local building departments. It is largely limited to prescriptive measures, which are what most building departments and design projects are most familiar with. This report describes a set of energy-efficiency measures (EEMs) that demonstrate 20% energy savings over ANSI/ASHRAE/IES Standard 90.1-2013 (ASHRAE 2013) across a broad range of commercial building types and all three climate zones in New York. In collaboration with the New Buildings Institute, the EEMs were developed from national model codes and standards, high-performance building codes and standards, regional energy codes, and measures being proposed as part of the ongoing code development process. PNNL analyzed these measures using whole-building energy models for selected prototype commercial buildings and multifamily buildings representing buildings in New York.

  10. Spatial analysis statistics, visualization, and computational methods

    CERN Document Server

    Oyana, Tonny J

    2015-01-01

    An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis, containing hands-on problem sets that can be worked out in MS Excel or ArcGIS, as well as detailed illustrations and numerous case studies. The book enables readers to: identify types of, and characterize, non-spatial and spatial data; demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results; construct testable hypotheses that require inferential statistical analysis; process spatial data, extract explanatory variables, conduct statisti...

  11. Computer analysis and comparison of chess players' game-playing styles

    OpenAIRE

    Krevs, Urša

    2015-01-01

    Today's computer chess programs are very good at evaluating chess positions. Research has shown that we can rank chess players by the quality of their game play, using a computer chess program. In the master's thesis Computer analysis and comparison of chess players' game-playing styles, we focus on the content analysis of chess games using a computer chess program's evaluation and attributes we determined for each individual position. We defined meaningful attributes that can be used for com...

  12. Computer assisted functional analysis. Computer gestuetzte funktionelle Analyse

    Energy Technology Data Exchange (ETDEWEB)

    Schmidt, H A.E.; Roesler, H

    1982-01-01

    The latest developments in computer-assisted functional analysis (CFA) in nuclear medicine are presented in about 250 papers of the 19th international annual meeting of the Society of Nuclear Medicine (Bern, September 1981). Apart from the mathematical and instrumental aspects of CFA, computerized emission tomography is given particular attention. Advances in nuclear medical diagnosis in the fields of radiopharmaceuticals, cardiology, angiology, neurology, ophthalmology, pulmonology, gastroenterology, nephrology, endocrinology, oncology and osteology are discussed.

  13. San Carlos Apache Tribe - Energy Organizational Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Rapp, James; Albert, Steve

    2012-04-01

    The San Carlos Apache Tribe (SCAT) was awarded $164,000 in late-2011 by the U.S. Department of Energy (U.S. DOE) Tribal Energy Program's "First Steps Toward Developing Renewable Energy and Energy Efficiency on Tribal Lands" Grant Program. This grant funded: (1) the analysis and selection of preferred form(s) of tribal energy organization (this Energy Organization Analysis, hereinafter referred to as "EOA"); (2) start-up staffing and other costs associated with the Phase 1 SCAT energy organization; (3) an intern program; (4) staff training; and (5) tribal outreach and workshops regarding the new organization and SCAT energy programs and projects, including two annual tribal energy summits (2011 and 2012). This report documents the analysis and selection of preferred form(s) of a tribal energy organization.

  14. Review on the applications of the very high speed computing technique to atomic energy field

    International Nuclear Information System (INIS)

    Hoshino, Tsutomu

    1981-01-01

    The demand for computation in the atomic energy field is enormous: the physical and technological knowledge obtained by experiment is summarized into mathematical models and accumulated as computer programs for design, safety analysis and operational management. These calculation code systems are classified into reactor physics, reactor technology, operational management and nuclear fusion. In this paper, the demand for calculation speed in the diffusion and transport of neutrons, shielding, technological safety, core control and particle simulation is explained through typical calculations. These calculations are divided into two models: the fluid model, which regards physical systems as continua, and the particle model, which regards physical systems as composed of a finite number of particles. The speed of present computers is too slow, and a capability 1000 to 10000 times that of present general-purpose machines is desirable. The calculation techniques of pipeline systems and parallel processor systems are described. As an example of a practical system, the computer network OCTOPUS at the Lawrence Livermore Laboratory is shown. Also, the CHI system at UCLA is introduced. (Kako, I.)

  15. Research Progress in Mathematical Analysis of Map Projection by Computer Algebra

    Directory of Open Access Journals (Sweden)

    BIAN Shaofeng

    2017-10-01

    Map projection is an important component of modern cartography, and it involves many tedious mathematical analysis processes, such as the power series expansions of elliptical functions, differentiation of complex and implicit functions, elliptical integrals and the arithmetic of complex numbers. Deriving these by hand not only consumes much time and energy but is also error-prone, and sometimes it cannot be accomplished at all because of the complexity involved. The research achievements in mathematical analysis of map projection by computer algebra are systematically reviewed in five aspects, i.e., the symbolic expressions of forward and inverse solutions of ellipsoidal latitudes, the direct transformations between map projections with different distortion properties, expressions of the Gauss projection by complex functions, mathematical analysis of the oblique Mercator projection, and the polar chart projection with its transformations. Main problems that need to be further solved in this research field are analyzed. This review will be helpful in promoting the development of map projection.
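
    As a small taste of what computer algebra buys here, the following SymPy sketch series-expands the ellipsoidal (isometric) latitude used in the Mercator projection in the eccentricity e. This is an illustrative derivation under assumed notation, not one of the paper's results.

```python
import sympy as sp

# Symbolic derivation by computer algebra instead of by hand:
# series-expand the isometric (Mercator) latitude of an ellipsoid
# in the eccentricity e about the spherical limit e = 0.
phi, e = sp.symbols('phi e', positive=True)

# isometric latitude q(phi) for an ellipsoid of eccentricity e
q = sp.log(sp.tan(sp.pi/4 + phi/2)) - e/2 * sp.log(
        (1 + e*sp.sin(phi)) / (1 - e*sp.sin(phi)))

# power-series expansion in e (the leading correction is -e**2*sin(phi))
print(sp.series(q, e, 0, 4))
```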

  16. Computation of Hemagglutinin Free Energy Difference by the Confinement Method

    Science.gov (United States)

    2017-01-01

    Hemagglutinin (HA) mediates membrane fusion, a crucial step during influenza virus cell entry. How many HAs are needed for this process is still subject to debate. To aid in this discussion, the confinement free energy method was used to calculate the conformational free energy difference between the extended intermediate and postfusion state of HA. Special care was taken to comply with the general guidelines for free energy calculations, thereby obtaining convergence and demonstrating reliability of the results. The energy that one HA trimer contributes to fusion was found to be 34.2 ± 3.4 kBT, similar to the known contributions from other fusion proteins. Although computationally expensive, the technique used is a promising tool for the further energetic characterization of fusion protein mechanisms. Knowledge of the energetic contributions per protein, and of conserved residues that are crucial for fusion, aids in the development of fusion inhibitors for antiviral drugs. PMID:29151344

  17. Finding Tropical Cyclones on a Cloud Computing Cluster: Using Parallel Virtualization for Large-Scale Climate Simulation Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hasenkamp, Daren; Sim, Alexander; Wehner, Michael; Wu, Kesheng

    2010-09-30

    Extensive computing power has been used to tackle issues such as climate change, fusion energy, and other pressing scientific challenges. These computations produce a tremendous amount of data; however, many of the data analysis programs currently run on only a single processor. In this work, we explore the possibility of using the emerging cloud computing platform to parallelize such sequential data analysis tasks. As a proof of concept, we wrap a program for analyzing trends of tropical cyclones in a set of virtual machines (VMs). This approach allows the user to keep their familiar data analysis environment in the VMs, while we provide the coordination and data transfer services to ensure the necessary input and output are directed to the desired locations. This work extensively exercises the networking capability of the cloud computing systems and has revealed a number of weaknesses in the current cloud system software. In our tests, we are able to scale the parallel data analysis job to a modest number of VMs and achieve a speedup that is comparable to running the same analysis task using MPI. However, compared to MPI-based parallelization, the cloud-based approach has a number of advantages. The cloud-based approach is more flexible because the VMs can capture arbitrary software dependencies without requiring the user to rewrite their programs. The cloud-based approach is also more resilient to failure: as long as a single VM is running it can make progress, whereas an MPI job fails as soon as one node fails. In short, this initial work demonstrates that a cloud computing system is a viable platform for distributed scientific data analyses traditionally conducted on dedicated supercomputing systems.

  18. Finding Tropical Cyclones on a Cloud Computing Cluster: Using Parallel Virtualization for Large-Scale Climate Simulation Analysis

    International Nuclear Information System (INIS)

    Hasenkamp, Daren; Sim, Alexander; Wehner, Michael; Wu, Kesheng

    2010-01-01

    Extensive computing power has been used to tackle issues such as climate change, fusion energy, and other pressing scientific challenges. These computations produce a tremendous amount of data; however, many of the data analysis programs currently run on only a single processor. In this work, we explore the possibility of using the emerging cloud computing platform to parallelize such sequential data analysis tasks. As a proof of concept, we wrap a program for analyzing trends of tropical cyclones in a set of virtual machines (VMs). This approach allows the user to keep their familiar data analysis environment in the VMs, while we provide the coordination and data transfer services to ensure the necessary input and output are directed to the desired locations. This work extensively exercises the networking capability of the cloud computing systems and has revealed a number of weaknesses in the current cloud system software. In our tests, we are able to scale the parallel data analysis job to a modest number of VMs and achieve a speedup that is comparable to running the same analysis task using MPI. However, compared to MPI-based parallelization, the cloud-based approach has a number of advantages. The cloud-based approach is more flexible because the VMs can capture arbitrary software dependencies without requiring the user to rewrite their programs. The cloud-based approach is also more resilient to failure: as long as a single VM is running it can make progress, whereas an MPI job fails as soon as one node fails. In short, this initial work demonstrates that a cloud computing system is a viable platform for distributed scientific data analyses traditionally conducted on dedicated supercomputing systems.

  19. Energy Scaling Advantages of Resistive Memory Crossbar Based Computation and its Application to Sparse Coding

    Directory of Open Access Journals (Sweden)

    Sapan Agarwal

    2016-01-01

    The exponential increase in data over the last decade presents a significant challenge to analytics efforts that seek to process and interpret such data for various applications. Neural-inspired computing approaches are being developed in order to leverage the computational advantages of the analog, low-power data processing observed in biological systems. Analog resistive memory crossbars can perform a parallel read or a vector-matrix multiplication as well as a parallel write or a rank-1 update with high computational efficiency. For an NxN crossbar, these two kernels are at a minimum O(N) more energy efficient than a digital memory-based architecture. If the read operation is noise limited, the energy to read a column can be independent of the crossbar size (O(1)). These two kernels form the basis of many neuromorphic algorithms such as image, text, and speech recognition. For instance, these kernels can be applied to a neural sparse coding algorithm to give an O(N) reduction in energy for the entire algorithm. Sparse coding is a rich problem with a host of applications including computer vision, object tracking, and more generally unsupervised learning.
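
    A back-of-the-envelope sketch of the kernel in question: the crossbar evaluates y = Wx in a single analog step, while a digital architecture must fetch all N^2 weights from memory. The counts below are illustrative only, not a circuit or noise model.

```python
import numpy as np

# An N x N resistive crossbar performs the vector-matrix product y = W @ x
# ("parallel read") in one analog step; a digital memory architecture must
# fetch every weight, which is the source of the O(N) energy advantage.
N = 64
W = np.random.rand(N, N)   # conductances (stored weights)
x = np.random.rand(N)      # input voltages

y = W @ x                  # the kernel the crossbar evaluates in parallel

digital_fetches = N * N    # one memory access per weight
crossbar_steps = N         # one driven line per input element
print(f"illustrative energy advantage ~O(N): {digital_fetches // crossbar_steps}x")
```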

  20. Crosscut report: Exascale Requirements Reviews, March 9–10, 2017 – Tysons Corner, Virginia. An Office of Science review sponsored by: Advanced Scientific Computing Research, Basic Energy Sciences, Biological and Environmental Research, Fusion Energy Sciences, High Energy Physics, Nuclear Physics

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Hack, James [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Riley, Katherine [Argonne National Lab., IL (United States). Argonne Leadership Computing Facility (ALCF); Antypas, Katie [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Coffey, Richard [Argonne National Lab. (ANL), Argonne, IL (United States). Argonne Leadership Computing Facility (ALCF); Dart, Eli [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). ESnet; Straatsma, Tjerk [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Wells, Jack [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF); Bard, Deborah [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Dosanjh, Sudip [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Monga, Inder [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). ESnet; Papka, Michael E. [Argonne National Lab. (ANL), Argonne, IL (United States). Argonne Leadership Computing Facility; Rotman, Lauren [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). ESnet

    2018-01-22

    The mission of the U.S. Department of Energy Office of Science (DOE SC) is the delivery of scientific discoveries and major scientific tools to transform our understanding of nature and to advance the energy, economic, and national security missions of the United States. To achieve these goals in today’s world requires investments in not only the traditional scientific endeavors of theory and experiment, but also in computational science and the facilities that support large-scale simulation and data analysis. The Advanced Scientific Computing Research (ASCR) program addresses these challenges in the Office of Science. ASCR’s mission is to discover, develop, and deploy computational and networking capabilities to analyze, model, simulate, and predict complex phenomena important to DOE. ASCR supports research in computational science, three high-performance computing (HPC) facilities — the National Energy Research Scientific Computing Center (NERSC) at Lawrence Berkeley National Laboratory and Leadership Computing Facilities at Argonne (ALCF) and Oak Ridge (OLCF) National Laboratories — and the Energy Sciences Network (ESnet) at Berkeley Lab. ASCR is guided by science needs as it develops research programs, computers, and networks at the leading edge of technologies. As we approach the era of exascale computing, technology changes are creating challenges for science programs in SC for those who need to use high performance computing and data systems effectively. Numerous significant modifications to today’s tools and techniques will be needed to realize the full potential of emerging computing systems and other novel computing architectures. To assess these needs and challenges, ASCR held a series of Exascale Requirements Reviews in 2015–2017, one with each of the six SC program offices, and a subsequent Crosscut Review that sought to integrate the findings from each. Participants at the reviews were drawn from the communities of leading domain

  1. Energy Systems Modelling Research and Analysis

    DEFF Research Database (Denmark)

    Møller Andersen, Frits; Alberg Østergaard, Poul

    2015-01-01

    This editorial introduces the seventh volume of the International Journal of Sustainable Energy Planning and Management. The volume presents part of the outcome of the project Energy Systems Modelling Research and Analysis (ENSYMORA) funded by the Danish Innovation Fund. The project, carried out by 11 university and industry partners, has improved the basis for decision-making within energy planning and energy scenario making by providing new and improved tools and methods for energy systems analyses.

  2. Efficient Scheduling of Scientific Workflows with Energy Reduction Using Novel Discrete Particle Swarm Optimization and Dynamic Voltage Scaling for Computational Grids

    Directory of Open Access Journals (Sweden)

    M. Christobel

    2015-01-01

    One of the most significant and topmost parameters in the real-world computing environment is energy. Minimizing energy imposes benefits like reduction in power consumption, decrease in cooling rates of the computing processors, provision of a green environment, and so forth. In fact, computation time and energy are directly proportional to each other, and the minimization of computation time may yield cost-effective energy consumption. Proficient scheduling of Bag-of-Tasks in the grid environment results in minimum computation time. In this paper, a novel discrete particle swarm optimization (DPSO) algorithm based on the particle's best position (pbDPSO) and global best position (gbDPSO) is adopted to find the global optimal solution for higher dimensions. This novel DPSO yields a better schedule with minimum computation time compared to the Earliest Deadline First (EDF) and First Come First Serve (FCFS) algorithms, which comparably reduces energy. Other scheduling parameters, such as job completion ratio and lateness, are also calculated and compared with EDF and FCFS. An energy improvement of up to 28% was obtained when Makespan Conservative Energy Reduction (MCER) and Dynamic Voltage Scaling (DVS) were used in the proposed DPSO algorithm.

  3. Efficient Scheduling of Scientific Workflows with Energy Reduction Using Novel Discrete Particle Swarm Optimization and Dynamic Voltage Scaling for Computational Grids

    Science.gov (United States)

    Christobel, M.; Tamil Selvi, S.; Benedict, Shajulin

    2015-01-01

    One of the most significant and topmost parameters in the real-world computing environment is energy. Minimizing energy imposes benefits like reduction in power consumption, decrease in cooling rates of the computing processors, provision of a green environment, and so forth. In fact, computation time and energy are directly proportional to each other, and the minimization of computation time may yield cost-effective energy consumption. Proficient scheduling of Bag-of-Tasks in the grid environment results in minimum computation time. In this paper, a novel discrete particle swarm optimization (DPSO) algorithm based on the particle's best position (pbDPSO) and global best position (gbDPSO) is adopted to find the global optimal solution for higher dimensions. This novel DPSO yields a better schedule with minimum computation time compared to Earliest Deadline First (EDF) and First Come First Serve (FCFS) algorithms, which comparably reduces energy. Other scheduling parameters, such as job completion ratio and lateness, are also calculated and compared with EDF and FCFS. An energy improvement of up to 28% was obtained when Makespan Conservative Energy Reduction (MCER) and Dynamic Voltage Scaling (DVS) were used in the proposed DPSO algorithm. PMID:26075296
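
    As a rough illustration of the discrete-PSO scheduling idea in the two records above (this is not the pbDPSO/gbDPSO algorithm itself), the sketch below assigns a hypothetical bag of tasks to processors and minimizes makespan; all task lengths and update probabilities are invented for the example.

```python
import random

# Minimal discrete PSO scheduler for a bag of independent tasks.
# A particle's position assigns each task to a processor; the fitness
# being minimized is the makespan (maximum processor load).
tasks = [4, 7, 2, 9, 5, 1, 8, 3]            # hypothetical task lengths
n_procs, n_particles, n_iter = 3, 20, 100

def makespan(assign):
    load = [0.0] * n_procs
    for t, p in zip(tasks, assign):
        load[p] += t
    return max(load)

swarm = [[random.randrange(n_procs) for _ in tasks] for _ in range(n_particles)]
pbest = [s[:] for s in swarm]
gbest = min(pbest, key=makespan)

for _ in range(n_iter):
    for i, pos in enumerate(swarm):
        for j in range(len(tasks)):
            r = random.random()
            if r < 0.5:                      # move toward personal best
                pos[j] = pbest[i][j]
            elif r < 0.8:                    # move toward global best
                pos[j] = gbest[j]
            elif r < 0.9:                    # random exploration
                pos[j] = random.randrange(n_procs)
        if makespan(pos) < makespan(pbest[i]):
            pbest[i] = pos[:]
    gbest = min(pbest + [gbest], key=makespan)

print("best makespan found:", makespan(gbest))
```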

  4. Development of small scale cluster computer for numerical analysis

    Science.gov (United States)

    Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.

    2017-09-01

    In this study, two units of personal computer were successfully networked together to form a small-scale cluster. Each of the processors involved is a multicore processor with four cores, giving the cluster eight processing cores in total. The cluster runs the Ubuntu 14.04 LINUX environment with an MPI implementation (MPICH2). Two main tests were conducted on the cluster: a communication test and a performance test. The communication test was done to make sure that the computers are able to pass the required information without any problem, using a simple MPI "Hello" program written in the C language. A performance test was also done to show that the cluster's calculation performance is much better than that of a single-CPU computer. In this performance test, the same code was run using a single node, 2 processors, 4 processors, and 8 processors. The results show that with additional processors, the time required to solve the problem decreases; the calculation time was roughly halved each time the number of processors was doubled. To conclude, we successfully developed a small-scale cluster computer using common hardware, capable of higher computing power than a single-CPU machine, which can be beneficial for research that requires high computing power, especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics analysis.
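
    A minimal sketch of the kind of communication test described above, assuming an MPI installation and the mpi4py bindings (the original test was written in C; Python is used here for consistency with the other examples):

```python
from mpi4py import MPI

# Minimal MPI communication test: every rank reports in, and rank 0
# gathers the replies, confirming that all nodes can exchange messages.
comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

host = MPI.Get_processor_name()
replies = comm.gather(f"hello from rank {rank} on {host}", root=0)

if rank == 0:
    print(f"cluster is up with {size} processes:")
    for line in replies:
        print(" ", line)
```

    Launched with something like `mpiexec -n 8 python hello_mpi.py` (the filename is hypothetical), the test passes when all eight ranks across both nodes report in.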

  5. Generation of L sub-shell photo-ionization cross-sections for elements 18 ≤ Z ≤ 92 at energies 0.320-115.606 keV (A computer program 'LSPICS')

    International Nuclear Information System (INIS)

    Sharma, Ajay; Mittal, Raj

    2005-01-01

    L sub-shell photo-ionization cross-sections, σ_Li, for elements 18 ≤ Z ≤ 92 at energies 0.320-115.606 keV have been generated from an empirical relation fitted to Scofield's L sub-shell photo-ionization cross-section values. The excitation energy E for an element is constrained by the condition that only L and higher shell vacancies are produced in the element. The closeness of the generated and existing values of Scofield's L sub-shell data recommends the use of the generated values in the fields of atomic and molecular physics and for trace elemental analysis. For this purpose the computer software 'LSPICS' has been developed. On a personal computer, LSPICS generates L sub-shell photo-ionization cross-section values in barns just by entering the atomic number of the element and the excitation photon energy in keV.
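
    The flavor of such an empirical generator can be sketched as a log-log polynomial fit through tabulated cross sections. The tabulated numbers below are invented placeholders, not Scofield's values, and the fitted form is not the published LSPICS relation.

```python
import numpy as np

# Fit log(sigma) as a polynomial in log(E) through tabulated values
# for one element/sub-shell, then evaluate at arbitrary energies.
E_tab = np.array([5.0, 10.0, 20.0, 40.0, 80.0])        # keV (placeholder)
sigma_tab = np.array([9.0e3, 1.6e3, 2.6e2, 4.0e1, 6.0])  # barns (placeholder)

coef = np.polyfit(np.log(E_tab), np.log(sigma_tab), deg=2)

def sigma(E_keV):
    """Cross section in barns from the fitted log-log polynomial."""
    return np.exp(np.polyval(coef, np.log(E_keV)))

print(f"sigma(15 keV) ~ {sigma(15.0):.1f} barns")
```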

  6. Framework for Computation Offloading in Mobile Cloud Computing

    Directory of Open Access Journals (Sweden)

    Dejan Kovachev

    2012-12-01

    The inherently limited processing power and battery lifetime of mobile phones hinder the possible execution of computationally intensive applications like content-based video analysis or 3D modeling. Offloading computationally intensive application parts from the mobile platform into a remote cloud infrastructure or nearby idle computers addresses this problem. This paper presents our Mobile Augmentation Cloud Services (MACS) middleware, which enables adaptive extension of Android application execution from a mobile client into the cloud. Applications are developed by using the standard Android development pattern. The middleware does the heavy lifting of adaptive application partitioning, resource monitoring and computation offloading. These elastic mobile applications can run as usual mobile applications, but they can also use remote computing resources transparently. Two prototype applications using the MACS middleware demonstrate the benefits of the approach. The evaluation shows that applications which involve costly computations can benefit from offloading, with around 95% energy savings and significant performance gains compared to local execution only.

  7. Automatic analysis of digitized TV-images by a computer-driven optical microscope

    International Nuclear Information System (INIS)

    Rosa, G.; Di Bartolomeo, A.; Grella, G.; Romano, G.

    1997-01-01

    New methods of image analysis and three-dimensional pattern recognition were developed in order to perform the automatic scan of nuclear emulsion pellicles. An optical microscope, with a motorized stage, was equipped with a CCD camera and an image digitizer, and interfaced to a personal computer. Selected software routines inspired the design of a dedicated hardware processor. Fast operation, high efficiency and accuracy were achieved. First applications to high-energy physics experiments are reported. Further improvements are in progress, based on a high-resolution fast CCD camera and on programmable digital signal processors. Applications to other research fields are envisaged. (orig.)

  8. An analysis methodology for hot leg break mass and energy release

    International Nuclear Information System (INIS)

    Song, Jin Ho; Kwon, Young Min; Kim, Taek Mo; Chung, Hae Yong; Lee, Sang Jong

    1996-07-01

    An analysis methodology for the hot leg break mass and energy release is developed. For the blowdown period a modified CEFLASH-4A analysis is suggested. For the post-blowdown period a new computer model named COMET is developed. Unlike the previous post-blowdown analysis model FLOOD3, COMET is capable of analyzing both cold leg and hot leg break cases. The cold leg break model is essentially the same as that of FLOOD3, with some improvements. The results of the newly proposed hot leg break model in COMET show the same trend as those observed in a scaled-down integral experiment. The analysis results for UCN 3 and 4 by COMET are qualitatively and quantitatively in good agreement with those predicted by a best-estimate analysis using RELAP5/MOD3. Therefore, the COMET code is validated and can be used for licensing analysis. 6 tabs., 82 figs., 9 refs. (Author)

  9. Simple prescription for computing the interparticle potential energy for D-dimensional gravity systems

    International Nuclear Information System (INIS)

    Accioly, Antonio; Helayël-Neto, José; Barone, F E; Herdy, Wallace

    2015-01-01

    A straightforward prescription for computing the D-dimensional potential energy of gravitational models, strongly based on the Feynman path integral, is built up. Using this method, the static potential energy for the interaction of two masses is found in the context of D-dimensional higher-derivative gravity models, and its behavior is analyzed afterwards in both the ultraviolet and infrared regimes. As a consequence, two new gravity systems in which the potential energy is finite at the origin, in D = 5 and D = 6, respectively, are found. Since the aforementioned prescription is equivalent to that based on the marriage between quantum mechanics (to leading order, i.e., in the first Born approximation) and the nonrelativistic limit of quantum field theory, and bearing in mind that the latter relies basically on the calculation of the nonrelativistic Feynman amplitude (M_NR), a trivial expression for computing M_NR is obtained from our prescription as an added bonus. (paper)

  10. Use of computer codes for system reliability analysis

    International Nuclear Information System (INIS)

    Sabek, M.; Gaafar, M.; Poucet, A.

    1989-01-01

    This paper gives a summary of studies performed at the JRC, ISPRA on the use of computer codes for complex systems analysis. The computer codes dealt with are: CAFTS-SALP software package, FRACTIC, FTAP, computer code package RALLY, and BOUNDS. Two reference case studies were executed by each code. The probabilistic results obtained, as well as the computation times are compared. The two cases studied are the auxiliary feedwater system of a 1300 MW PWR reactor and the emergency electrical power supply system. (author)

  11. Use of computer codes for system reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Sabek, M.; Gaafar, M. (Nuclear Regulatory and Safety Centre, Atomic Energy Authority, Cairo (Egypt)); Poucet, A. (Commission of the European Communities, Ispra (Italy). Joint Research Centre)

    1989-01-01

    This paper gives a summary of studies performed at the JRC, ISPRA on the use of computer codes for complex systems analysis. The computer codes dealt with are: CAFTS-SALP software package, FRACTIC, FTAP, computer code package RALLY, and BOUNDS. Two reference case studies were executed by each code. The probabilistic results obtained, as well as the computation times are compared. The two cases studied are the auxiliary feedwater system of a 1300 MW PWR reactor and the emergency electrical power supply system. (author).

  12. Automating sensitivity analysis of computer models using computer calculus

    International Nuclear Information System (INIS)

    Oblow, E.M.; Pin, F.G.

    1985-01-01

    An automated procedure for performing sensitivity analyses has been developed. The procedure uses a new FORTRAN compiler with computer calculus capabilities to generate the derivatives needed to set up sensitivity equations. The new compiler is called GRESS - Gradient Enhanced Software System. Application of the automated procedure with "direct" and "adjoint" sensitivity theory for the analysis of non-linear, iterative systems of equations is discussed. Calculational efficiency considerations and techniques for adjoint sensitivity analysis are emphasized. The new approach is found to preserve the traditional advantages of adjoint theory while removing the tedious human effort previously needed to apply this theoretical methodology. Conclusions are drawn about the applicability of the automated procedure in numerical analysis and large-scale modelling sensitivity studies. 24 refs., 2 figs
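
    The core idea of compiler-level "computer calculus" is mechanical propagation of derivatives through ordinary program statements. A minimal stand-in is forward-mode automatic differentiation with dual numbers (illustrative only; GRESS itself instruments FORTRAN source):

```python
# Forward-mode automatic differentiation with dual numbers: each value
# carries its derivative, so derivatives propagate through + and *
# as a side effect of ordinary evaluation.
class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def model(x):
    # any code path built from + and * is differentiated automatically
    return 3 * x * x + 2 * x + 1

x = Dual(2.0, 1.0)        # seed dx/dx = 1
y = model(x)
print(y.val, y.der)       # 17.0 and dy/dx = 6*2 + 2 = 14.0
```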

  13. Dynamic performance analysis of two regional Nuclear Hybrid Energy Systems

    International Nuclear Information System (INIS)

    Garcia, Humberto E.; Chen, Jun; Kim, Jong S.; Vilim, Richard B.; Binder, William R.; Bragg Sitton, Shannon M.; Boardman, Richard D.; McKellar, Michael G.; Paredis, Christiaan J.J.

    2016-01-01

    In support of more efficient utilization of clean energy generation sources, including renewable and nuclear options, HES (hybrid energy systems) can be designed and operated as FER (flexible energy resources) to meet both electrical and thermal energy needs in the electric grid and industrial sectors. These conceptual systems could effectively and economically be utilized, for example, to manage the increasing levels of dynamic variability and uncertainty introduced by VER (variable energy resources) such as renewable sources (e.g., wind, solar), distributed energy resources, demand response schemes, and modern energy demands (e.g., electric vehicles) with their ever changing usage patterns. HES typically integrate multiple energy inputs (e.g., nuclear and renewable generation) and multiple energy outputs (e.g., electricity, gasoline, fresh water) using complementary energy conversion processes. This paper reports a dynamic analysis of two realistic HES, each including a nuclear reactor as the main baseload heat generator, and assesses the local (e.g., HES owners) and system (e.g., the electric grid) benefits attainable by their application in scenarios with multiple commodity production and high renewable penetration. It is performed for regional cases – not generic examples – based on available resources, existing infrastructure, and markets within the selected regions. This study also briefly addresses the computational capabilities developed to conduct such analyses. - Highlights: • Hybrids including renewables can operate as dispatchable flexible energy resources. • Nuclear energy can address high variability and uncertainty in energy systems. • Nuclear hybrids can reliably provide grid services over various time horizons. • Nuclear energy can provide operating reserves and grid inertia under high renewables. • Nuclear hybrids can greatly reduce GHG emissions and support grid and industry needs.

  14. Adapting for uncertainty: a scenario analysis of U.S. technology energy futures

    International Nuclear Information System (INIS)

    Laitner, J.A.; Hanson, D.A.; Mintzner, I.; Leonard, J.A.

    2006-01-01

    The pattern of future evolution for United States (US) energy markets is highly uncertain at this time. This article provided details of a study using a scenario analysis technique to investigate key energy issues affecting decision-making processes in the United States. Four scenarios were used to examine the driving forces and critical uncertainties that may shape United States energy markets and the economy for the next 50 years: (1) a reference scenario benchmarked to the 2002 annual energy outlook forecast, (2) abundant and inexpensive supplies of oil and gas, (3) a chaotic future beset with international conflict, faltering new technologies, environmental policy difficulties and slowed economic growth, and (4) a technology-driven market in which a variety of forces converge to reshape the energy sector. Each of the scenarios was quantified using a computable general equilibrium model known as the All Modular Industry Growth Assessment (AMIGA) model. Results suggested that the range of different outcomes for the US is broad. However, energy use is expected to increase in all 4 scenarios. It was observed that the introduction of policies to encourage capital stock turnover and accelerate the commercialization of high efficiency, low-emissions technologies may reduce future primary energy demand. The analysis also showed that lower energy prices may lead to higher economic growth. Policies introduced to improve energy efficiency and accelerate the introduction of new technologies did not appreciably reduce the prospects for economic growth. Results also suggested that lower fossil fuel prices discourage investments in energy efficiency or new technologies and may mask the task of responding to future surprises. It was concluded that an investment path that emphasizes both energy efficiency improvements and advanced energy supply technologies will provide economic growth conditions similar to the implementation of lower energy prices. 11 refs., 1 tab., 2 figs

  15. Temporal fringe pattern analysis with parallel computing

    International Nuclear Information System (INIS)

    Tuck Wah Ng; Kar Tien Ang; Argentini, Gianluca

    2005-01-01

    Temporal fringe pattern analysis is invaluable in transient phenomena studies but necessitates long processing times. Here we describe a parallel computing strategy based on the single-program multiple-data model and hyperthreading processor technology to reduce the execution time. In a two-node cluster workstation configuration we found that execution periods were reduced by 1.6 times when four virtual processors were used. To allow even lower execution times with an increasing number of processors, the time allocated for data transfer, data read, and waiting should be minimized. Parallel computing is found here to present a feasible approach to reduce execution times in temporal fringe pattern analysis

  16. A thermal spike analysis of low energy ion activated surface processes

    International Nuclear Information System (INIS)

    Gilmore, G.M.; Haeri, A.; Sprague, J.A.

    1989-01-01

    This paper reports a thermal spike analysis utilized to predict the time evolution of energy propagation through a solid resulting from energetic particle impact. An analytical solution was developed that can predict the number of surface excitations such as desorption, diffusion or chemical reaction activated by an energetic particle. The analytical solution is limited to substrates at zero Kelvin and to materials with constant thermal diffusivities. These limitations were removed by developing a computer numerical integration of the propagation of the thermal spike through the solid and the subsequent activation of surface processes
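
    A minimal sketch of the numerical half of such an analysis: explicit finite-difference diffusion of a localized temperature spike through a solid, with an assumed constant diffusivity and invented constants (not the paper's model or parameters).

```python
import numpy as np

# 1D explicit finite-difference integration of a thermal spike:
# a point energy deposit at the impact site diffuses through the solid.
alpha = 1e-5          # thermal diffusivity, m^2/s (assumed constant)
dx, dt = 1e-9, 1e-14  # grid spacing (m) and time step (s)
assert alpha * dt / dx**2 <= 0.5   # explicit-scheme stability criterion

T = np.zeros(200)
T[100] = 5000.0       # initial spike temperature at the impact site, K

for _ in range(1000):
    # discrete heat equation: dT/dt = alpha * d2T/dx2
    T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])

print(f"peak temperature after diffusion: {T.max():.0f} K")
```

    Surface process rates (desorption, diffusion, reaction) would then be obtained by integrating an Arrhenius factor over the computed temperature history, which is the step the numerical treatment enables for temperature-dependent properties.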

  17. Visualization and Data Analysis for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture for a class at the University of Texas, El Paso on visualization and data analysis for high-performance computing. The topics covered are the following: trends in high-performance computing; scientific visualization, such as OpenGL, ray tracing and volume rendering, VTK, and ParaView; data science at scale, such as in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, "big data", and then an analysis example.

  18. Kokhanok Renewable Energy Retrofit Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Baring-Gould, Edward I. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Haase, Scott G. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Jimenez, Antonio [National Renewable Energy Lab. (NREL), Golden, CO (United States); Olis, Daniel R. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2017-12-21

    In 2010, the community of Kokhanok, Alaska, installed two 90-kW wind turbines, battery storage, a converter, and equipment for integration. Researchers at the National Renewable Energy Laboratory performed an analysis and modeling using the HOMER and REopt software modeling packages. The analysis was designed to answer the following questions: 1) What is required to achieve a 50 percent reduction in power plant diesel fuel consumption in a diesel microgrid? 2) What is required to achieve a 50 percent reduction in 'total' (diesel and heating oil) consumption in a remote community? 3) What is the impact and role of energy efficiency? This presentation provides an introduction to the community of Kokhanok, Alaska; a summary of energy data; and an overview of analysis results and conceptual design.

  19. Can cloud computing benefit health services? - a SWOT analysis.

    Science.gov (United States)

    Kuo, Mu-Hsing; Kushniruk, Andre; Borycki, Elizabeth

    2011-01-01

    In this paper, we discuss cloud computing, the current state of cloud computing in healthcare, and the challenges and opportunities of adopting cloud computing in healthcare. A Strengths, Weaknesses, Opportunities and Threats (SWOT) analysis was used to evaluate the feasibility of adopting this computing model in healthcare. The paper concludes that cloud computing could have huge benefits for healthcare but there are a number of issues that will need to be addressed before its widespread use in healthcare.

  20. On the Design of Energy-Efficient Location Tracking Mechanism in Location-Aware Computing

    Directory of Open Access Journals (Sweden)

    MoonBae Song

    2005-01-01

    The battery, in contrast to other hardware, is not governed by Moore's Law. In location-aware computing, power is a very limited resource. As a consequence, a number of promising techniques in various layers have recently been proposed to reduce the energy consumption. This paper considers the problem of minimizing the energy used to track the location of a mobile user over a wireless link in mobile computing. An energy-efficient location update protocol reduces the number of location update messages as far as possible and switches the device off for as long as possible. This can be achieved with the concept of mobility-awareness we propose. For this purpose, this paper proposes a novel mobility model, called the state-based mobility model (SMM), to provide a more generalized framework for both describing the mobility and updating the location information of complexly moving objects. We also introduce the state-based location update protocol (SLUP) based on this mobility model. An extensive experiment on various synthetic datasets shows that the proposed method improves energy efficiency by 2-3 times at an additional imprecision cost of about 10%.

  1. Conference “Computational Analysis and Optimization” (CAO 2011)

    CERN Document Server

    Tiihonen, Timo; Tuovinen, Tero; Numerical Methods for Differential Equations, Optimization, and Technological Problems : Dedicated to Professor P. Neittaanmäki on His 60th Birthday

    2013-01-01

    This book contains the results in numerical analysis and optimization presented at the ECCOMAS thematic conference “Computational Analysis and Optimization” (CAO 2011) held in Jyväskylä, Finland, June 9–11, 2011. Both the conference and this volume are dedicated to Professor Pekka Neittaanmäki on the occasion of his sixtieth birthday. It consists of five parts that are closely related to his scientific activities and interests: Numerical Methods for Nonlinear Problems; Reliable Methods for Computer Simulation; Analysis of Noised and Uncertain Data; Optimization Methods; Mathematical Models Generated by Modern Technological Problems. The book also includes a short biography of Professor Neittaanmäki.

  2. Green Cloud Computing: An Experimental Validation

    International Nuclear Information System (INIS)

    Monteiro, Rogerio Castellar; Dantas, M A R; Rodriguez y Rodriguez, Martius Vicente

    2014-01-01

    Cloud configurations can be computational environments with attractive cost efficiency for organizations of many sizes. However, indiscriminately buying servers and network devices may not yield a corresponding gain in performance. In the academic and commercial literature, some studies highlight that these environments sit idle for long periods. Energy management is therefore an essential concern for any organization, because energy bills can have a remarkable negative cost impact. In this paper, we present an analysis of energy consumption in a private cloud computing environment, considering both computational resources and network devices. This study was motivated by a real case in a large organization. The first part of the study consisted of empirical experiments; in a second stage, the GreenCloud simulator was used to explore different configurations. The research presents key issues in the energy consumption of computational resources and network devices for a real private cloud.

  3. Building Energy Monitoring and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Hong, Tianzhen; Feng, Wei; Lu, Alison; Xia, Jianjun; Yang, Le; Shen, Qi; Im, Piljae; Bhandari, Mahabir

    2013-06-01

    The U.S. and China are the world's top two economies. Together they consumed one-third of the world's primary energy. It is an unprecedented opportunity and challenge for governments, researchers and industries in both countries to join together to address energy issues and global climate change. Such joint collaboration has huge potential in creating new jobs in energy technologies and services. Buildings in the US and China consumed about 40% and 25% of the primary energy in the respective countries in 2010. Worldwide, the building sector is the largest contributor to greenhouse gas emissions. Better understanding and improving the energy performance of buildings is a critical step towards sustainable development and mitigation of global climate change. This project aimed to develop a standard methodology for building energy data definition, collection, presentation, and analysis; apply the developed methods to a standardized energy monitoring platform, including hardware and software, to collect and analyze building energy use data; and compile offline statistical data and online real-time data in both countries for fully understanding the current status of building energy use. This helps decode the driving forces behind the discrepancy of building energy use between the two countries; identify gaps and deficiencies in current building energy monitoring, data collection, and analysis; and create knowledge and tools to collect and analyze good building energy data to provide valuable and actionable information for key stakeholders.

  4. Computer codes for safety analysis

    International Nuclear Information System (INIS)

    Holland, D.F.

    1986-11-01

    Computer codes for fusion safety analysis have been under development in the United States for about a decade. This paper will discuss five codes that are currently under development by the Fusion Safety Program. The purpose and capability of each code will be presented, a sample given, followed by a discussion of the present status and future development plans

  5. Computing the universe: how large-scale simulations illuminate galaxies and dark energy

    Science.gov (United States)

    O'Shea, Brian

    2015-04-01

    High-performance and large-scale computing is absolutely essential to understanding astronomical objects such as stars, galaxies, and the cosmic web. This is because these are structures that operate on physical, temporal, and energy scales that cannot be reasonably approximated in the laboratory, and whose complexity and nonlinearity often defies analytic modeling. In this talk, I show how the growth of computing platforms over time has facilitated our understanding of astrophysical and cosmological phenomena, focusing primarily on galaxies and large-scale structure in the Universe.

  6. Digital image processing and analysis human and computer vision applications with CVIPtools

    CERN Document Server

    Umbaugh, Scott E

    2010-01-01

    Section I, Introduction to Digital Image Processing and Analysis: Digital Image Processing and Analysis; Overview; Image Analysis and Computer Vision; Image Processing and Human Vision; Key Points; Exercises; References; Further Reading. Computer Imaging Systems: Imaging Systems Overview; Image Formation and Sensing; CVIPtools Software; Image Representation; Key Points; Exercises; Supplementary Exercises; References; Further Reading. Section II, Digital Image Analysis and Computer Vision: Introduction to Digital Image Analysis; Introduction; Preprocessing; Binary Image Analysis; Key Points; Exercises; Supplementary Exercises; References; Further Read...

  7. Application of microarray analysis on computer cluster and cloud platforms.

    Science.gov (United States)

    Bernau, C; Boulesteix, A-L; Knaus, J

    2013-01-01

    Analysis of recent high-dimensional biological data tends to be computationally intensive as many common approaches such as resampling or permutation tests require the basic statistical analysis to be repeated many times. A crucial advantage of these methods is that they can be easily parallelized due to the computational independence of the resampling or permutation iterations, which has induced many statistics departments to establish their own computer clusters. An alternative is to rent computing resources in the cloud, e.g. at Amazon Web Services. In this article we analyze whether a selection of statistical projects, recently implemented at our department, can be efficiently realized on these cloud resources. Moreover, we illustrate an opportunity to combine computer cluster and cloud resources. In order to compare the efficiency of computer cluster and cloud implementations and their respective parallelizations we use microarray analysis procedures and compare their runtimes on the different platforms. Amazon Web Services provide various instance types which meet the particular needs of the different statistical projects we analyzed in this paper. Moreover, the network capacity is sufficient and the parallelization is comparable in efficiency to standard computer cluster implementations. Our results suggest that many statistical projects can be efficiently realized on cloud resources. It is important to mention, however, that workflows can change substantially as a result of a shift from computer cluster to cloud computing.

  8. Software systems for processing and analysis at the NOVA high-energy laser facility

    International Nuclear Information System (INIS)

    Auerbach, J.M.; Montgomery, D.S.; McCauley, E.W.; Stone, G.F.

    1986-01-01

    A typical laser interaction experiment at the NOVA high-energy laser facility produces in excess of 20 Mbytes of digitized data. Extensive processing and analysis of this raw data from a wide variety of instruments is necessary to produce results that can be readily used to interpret the experiment. Using VAX-based computer hardware, software systems have been set up to convert the digitized instrument output to physics quantities describing the experiment. A relational data-base management system is used to coordinate all levels of processing and analysis. Software development emphasizes structured design, flexibility, automation, and ease of use

  9. Computing the Free Energy Barriers for Less by Sampling with a Coarse Reference Potential while Retaining Accuracy of the Target Fine Model.

    Science.gov (United States)

    Plotnikov, Nikolay V

    2014-08-12

    Proposed in this contribution is a protocol for calculating fine-physics (e.g., ab initio QM/MM) free-energy surfaces at a high level of accuracy locally (e.g., only at reactants and at the transition state for computing the activation barrier) from targeted fine-physics sampling and extensive exploratory coarse-physics sampling. The full free-energy surface is still computed but at a lower level of accuracy from coarse-physics sampling. The method is analytically derived in terms of the umbrella sampling and the free-energy perturbation methods which are combined with the thermodynamic cycle and the targeted sampling strategy of the paradynamics approach. The algorithm starts by computing low-accuracy fine-physics free-energy surfaces from the coarse-physics sampling in order to identify the reaction path and to select regions for targeted sampling. Thus, the algorithm does not rely on the coarse-physics minimum free-energy reaction path. Next, segments of high-accuracy free-energy surface are computed locally at selected regions from the targeted fine-physics sampling and are positioned relative to the coarse-physics free-energy shifts. The positioning is done by averaging the free-energy perturbations computed with multistep linear response approximation method. This method is analytically shown to provide results of the thermodynamic integration and the free-energy interpolation methods, while being extremely simple in implementation. Incorporating the metadynamics sampling to the algorithm is also briefly outlined. The application is demonstrated by calculating the B3LYP//6-31G*/MM free-energy barrier for an enzymatic reaction using a semiempirical PM6/MM reference potential. These modifications allow computing the activation free energies at a significantly reduced computational cost but at the same level of accuracy compared to computing full potential of mean force.
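
    For reference, the two standard estimators this protocol builds on can be written as follows (textbook forms of free-energy perturbation and the linear response approximation; the paper's own working equations may differ in detail):

```latex
% Free-energy perturbation (FEP) from the coarse (reference) potential
% to the fine (target) potential:
\Delta A_{\mathrm{ref}\to\mathrm{fine}}
  = -k_B T \,\ln \Big\langle
      e^{-\left(U_{\mathrm{fine}} - U_{\mathrm{ref}}\right)/k_B T}
    \Big\rangle_{\mathrm{ref}}

% Linear response approximation (LRA), averaging the energy gap
% over both ensembles:
\Delta A \approx \tfrac{1}{2}\Big(
    \big\langle U_{\mathrm{fine}} - U_{\mathrm{ref}} \big\rangle_{\mathrm{ref}}
  + \big\langle U_{\mathrm{fine}} - U_{\mathrm{ref}} \big\rangle_{\mathrm{fine}} \Big)
```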

  10. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high-performance computing are analyzed by bibliometric approaches. This study aims at providing the computational physicists utilizing high-performance computing and policy planners with useful bibliometric results for an assessment of research activities. In order to achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles of the Scopus database from Elsevier covering the time period 2004-2013. We extracted the author rank in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.
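
    The construction step of such a co-authorship network is straightforward to sketch with networkx (hypothetical author records below; the study itself used Scopus data):

```python
import itertools
import networkx as nx

# Each paper contributes an edge between every pair of its authors;
# edge weights count co-authored papers.
papers = [
    ["Ahn S", "Jung Y"],
    ["Ahn S", "Kim H", "Lee J"],
    ["Jung Y", "Lee J"],
]

G = nx.Graph()
for authors in papers:
    for a, b in itertools.combinations(authors, 2):
        w = G.get_edge_data(a, b, {"weight": 0})["weight"]
        G.add_edge(a, b, weight=w + 1)

# rank authors by degree (number of distinct collaborators)
rank = sorted(G.degree, key=lambda kv: kv[1], reverse=True)
print(rank)
```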

  11. Computational screening of new inorganic materials for highly efficient solar energy conversion

    DEFF Research Database (Denmark)

    Kuhar, Korina

    2017-01-01

    Despite the vast amounts of energy at our disposal, we are not able to harvest this solar energy efficiently. Currently, there are a few ways of converting solar power into usable energy, such as photovoltaics (PV) or photoelectrochemical generation of fuels (PC). PV processes in solar cells convert solar energy into electricity, and PC uses harvested energy to conduct chemical reactions, such as splitting water into oxygen and, more importantly, hydrogen, also known as the fuel of the future. Further progress in both PV and PC fields is mostly limited by the flaws in materials that we have access to. In this work a high-throughput computational search for suitable absorbers for PV and PC applications is presented. A set of descriptors has been developed, such that each descriptor targets an important property or issue of a good solar energy conversion material. The screening study...

  12. HYDRA-II: A hydrothermal analysis computer code: Volume 1, Equations and numerics

    International Nuclear Information System (INIS)

    McCann, R.A.

    1987-04-01

    HYDRA-II is a hydrothermal computer code capable of three-dimensional analysis of coupled conduction, convection, and thermal radiation problems. This code is especially appropriate for simulating the steady-state performance of spent fuel storage systems. The code has been evaluated for this application for the US Department of Energy's Commercial Spent Fuel Management Program. HYDRA-II provides a finite difference solution in Cartesian coordinates to the equations governing the conservation of mass, momentum, and energy. A cylindrical coordinate system may also be used to enclose the Cartesian coordinate system. This exterior coordinate system is useful for modeling cylindrical cask bodies. The difference equations for conservation of momentum are enhanced by the incorporation of directional porosities and permeabilities that aid in modeling solid structures whose dimensions may be smaller than the computational mesh. The equation for conservation of energy permits modeling of orthotropic physical properties and film resistances. Several automated procedures are available to model radiation transfer within enclosures and from fuel rod to fuel rod. The documentation of HYDRA-II is presented in three separate volumes. This volume, Volume I - Equations and Numerics, describes the basic differential equations, illustrates how the difference equations are formulated, and gives the solution procedures employed. Volume II - User's Manual contains code flow charts, discusses the code structure, provides detailed instructions for preparing an input file, and illustrates the operation of the code by means of a model problem. The final volume, Volume III - Verification/Validation Assessments, presents results of numerical simulations of single- and multiassembly storage systems and comparisons with experimental data. 4 refs

  13. Wireless Cloud Computing on Guided Missile Destroyers: A Business Case Analysis

    Science.gov (United States)

    2013-06-01

    This business case analysis examines a Wireless Cloud Computing Network (WCCN) onboard Guided Missile Destroyers (DDGs) utilizing tablet computers. It compares the life cycle costs of WCCNs utilizing tablet computers with those of a mixed network of thin clients and desktop computers. Currently, the Consolidated Afloat Networks and Enterprise Services (CANES) program will install both thin clients and desktops on board new and old DDGs to implement the unclassified portion of its network. The main cost benefits of tablets will be realized through energy savings and an increase in productivity. The net present value of tablets is

  14. Experimental Testing and Computational Fluid Dynamics Simulation of Maple Seeds and Performance Analysis as a Wind Turbine

    Science.gov (United States)

    Holden, Jacob R.

    Descending maple seeds generate lift to slow their fall and remain aloft in a blowing wind; have the wings of these seeds evolved to descend as slowly as possible? A unique energy balance equation, experimental data, and computational fluid dynamics simulations have all been developed to explore this question from a turbomachinery perspective. The computational fluid dynamics in this work is the first to be performed in the relative reference frame. Maple seed performance has been analyzed for the first time based on principles of wind turbine analysis. Application of the Betz Limit and one-dimensional momentum theory allowed for empirical and computational power and thrust coefficients to be computed for maple seeds. It has been determined that the investigated species of maple seeds perform near the Betz limit for power conversion and thrust coefficient. The power coefficient for a maple seed is found to be in the range of 48-54% and the thrust coefficient in the range of 66-84%. From Betz theory, the stream tube area expansion of the maple seed is necessary for power extraction. Further investigation of computational solutions and mechanical analysis find three key reasons for high maple seed performance. First, the area expansion is driven by maple seed lift generation changing the fluid momentum and requiring area to increase. Second, radial flow along the seed surface is promoted by a sustained leading edge vortex that centrifuges low momentum fluid outward. Finally, the area expansion is also driven by the spanwise area variation of the maple seed imparting a radial force on the flow. These mechanisms result in a highly effective device for the purpose of seed dispersal. However, the maple seed also provides insight into fundamental questions about how turbines can most effectively change the momentum of moving fluids in order to extract useful power or dissipate kinetic energy.
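
    For orientation, the quantities involved in that comparison can be sketched numerically. Only the 48-54% power coefficient range is taken from the record above; the air density, descent speed, and disk area are illustrative seed-scale assumptions.

```python
# One-dimensional momentum (actuator disk) quantities for assessing
# the maple seed as a wind turbine.
rho = 1.225          # air density, kg/m^3
v = 1.0              # descent speed (relative wind), m/s (assumed)
area = 5e-4          # swept disk area, m^2 (hypothetical seed-scale value)

p_wind = 0.5 * rho * area * v**3        # power available in the stream
betz = 16.0 / 27.0                      # Betz limit on power coefficient
cp_seed = 0.50                          # mid-range of the reported 48-54%

print(f"available power : {p_wind * 1e3:.3f} mW")
print(f"Betz limit Cp   : {betz:.3f}")
print(f"seed Cp         : {cp_seed:.2f} ({cp_seed / betz:.0%} of Betz)")
```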

  15. A novel cost based model for energy consumption in cloud computing.

    Science.gov (United States)

    Horri, A; Dastghaibyfard, Gh

    2015-01-01

    Cloud data centers consume enormous amounts of electrical energy. To support green cloud computing, providers need to minimize cloud infrastructure energy consumption while maintaining QoS. In this study, an energy consumption model is proposed for the time-shared policy in the virtualization layer of cloud environments. The cost and energy usage of the time-shared policy were modeled in the CloudSim simulator based upon results obtained from a real system, and the proposed model was then evaluated in different scenarios. The proposed model accounts for cache interference costs, which depend on the size of the data. The model was implemented in the CloudSim simulator, and the simulation results indicate that the energy consumption may be considerable and can vary with parameters such as the quantum parameter, data size, and the number of VMs on a host. Measured results validate the model and demonstrate that there is a tradeoff between energy consumption and QoS in the cloud environment.

  16. Computed tomography with energy-resolved detection: a feasibility study

    Science.gov (United States)

    Shikhaliev, Polad M.

    2008-03-01

    The feasibility of computed tomography (CT) with energy-resolved x-ray detection has been investigated. A breast CT design with multi-slit multi-slice (MSMS) data acquisition was used for this study. The MSMS CT includes linear arrays of photon counting detectors separated by gaps. This CT configuration allows for efficient scatter rejection and 3D data acquisition. The energy-resolved CT images were simulated using a digital breast phantom and the design parameters of the proposed MSMS CT. The phantom had a 14 cm diameter and 50/50 adipose/glandular composition, and included carcinoma, adipose, blood, iodine and CaCO3 as contrast elements. The x-ray technique was 90 kVp tube voltage with 660 mR skin exposure. Photon counting, charge (energy) integrating and photon energy weighting CT images were generated. The contrast-to-noise ratio (CNR) improvement with photon energy weighting was quantified. The dual energy subtracted images of CaCO3 and iodine were generated using a single CT scan at a fixed x-ray tube voltage. The x-ray spectrum was electronically split into low- and high-energy parts by a photon counting detector. The CNR of the energy weighting CT images of carcinoma, blood, adipose, iodine, and CaCO3 was higher by a factor of 1.16, 1.20, 1.21, 1.36 and 1.35, respectively, as compared to CT with a conventional charge (energy) integrating detector. Photon energy weighting was applied to CT projections prior to dual energy subtraction and reconstruction. Photon energy weighting improved the CNR in dual energy subtracted CT images of CaCO3 and iodine by a factor of 1.35 and 1.33, respectively. The combination of CNR improvements due to scatter rejection and energy weighting was in the range of 1.71-2 depending on the type of the contrast element. The tilted-angle CZT detector was considered the detector of choice. Experiments were performed to test the effect of the tilting angle on the energy spectrum. Using the CZT detector with a 20° tilting angle decreased the
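
    As a hedged illustration of the photon energy weighting step, the sketch below applies the w(E) ~ E^-3 weight commonly cited as near-optimal for attenuation contrast; the study's exact weighting may differ, and the spectrum is invented:

```python
import numpy as np

# Sketch of photon energy weighting for a photon-counting detector.
# counts[b] is the photon count in energy bin b for one detector element.

def weighted_signal(counts, bin_energies, exponent=3.0):
    """Energy-weighted signal: sum_b w(E_b) * N_b with w(E) = (E/E0)^-exponent."""
    e0 = bin_energies.mean()
    w = (bin_energies / e0) ** (-exponent)
    return np.sum(w * counts)

bins = np.array([25.0, 35.0, 45.0, 55.0, 65.0, 75.0, 85.0])  # keV (90 kVp beam)
counts = np.array([120, 340, 410, 380, 290, 160, 40])         # illustrative
print("photon counting:   ", counts.sum())
print("energy integrating:", np.sum(bins * counts))
print("energy weighting:  ", weighted_signal(counts, bins))
```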

  17. Endoleak detection using single-acquisition split-bolus dual-energy computer tomography (DECT)

    Energy Technology Data Exchange (ETDEWEB)

    Javor, D.; Wressnegger, A.; Unterhumer, S.; Kollndorfer, K.; Nolz, R.; Beitzke, D.; Loewe, C. [Medical University of Vienna, Department of Biomedical Imaging and Image-guided Therapy, Vienna (Austria)

    2017-04-15

    To assess a single-phase, dual-energy computed tomography (DECT) protocol with a split-bolus technique and reconstruction of virtual non-enhanced images for the detection of endoleaks after endovascular aneurysm repair (EVAR). Fifty patients referred for routine follow-up post-EVAR CT, each with a history of at least one post-EVAR follow-up CT examination using our standard biphasic (arterial and venous phase) routine protocol (which served as the reference standard), were included in this prospective trial. An intra-patient comparison of the split-bolus protocol and the previously used biphasic protocol was performed with regard to differences in diagnostic accuracy, radiation dose, and image quality. The analysis showed a significant radiation dose reduction of up to 42 % with the single-acquisition split-bolus protocol, while maintaining comparable diagnostic accuracy (primary endoleak detection rate of 96 %). Image quality between the two protocols was comparable and only slightly inferior for the split-bolus scan (2.5 vs. 2.4). Using the single-acquisition, split-bolus approach allows for a significant dose reduction while maintaining high image quality, resulting in effective endoleak identification. (orig.)

  18. Life-cycle analysis of renewable energy systems

    DEFF Research Database (Denmark)

    Sørensen, Bent

    1994-01-01

    An implementation of life-cycle analysis (LCA) for energy systems is presented, applied to two renewable energy systems (wind turbines and building-integrated photovoltaic modules), and compared with coal plants.

  19. On gasohol and energy analysis

    Energy Technology Data Exchange (ETDEWEB)

    Johnson, M A

    1983-03-01

    Energy analysis contributed to the public debate on the gasohol program in the U.S., where this analysis became a legal requirement. The published energy analyses for gasohol are reviewed, and their inherent assumptions and data sources are assessed. The analyses are normalized to S.I. units to facilitate comparisons. The process of rationalizing the various treatments uncovered areas of uncertainty, particularly in the methodologies that could be used to analyze some parts of the process. Although the definitive study has yet to be written, the consensus is that maize-to-fuel ethanol via the traditional fermentation route is a net consumer of energy.

  20. Analysis on imaging features of mammography in computer radiography and investigation on gray scale transform and energy subtraction

    International Nuclear Information System (INIS)

    Feng Shuli

    2003-01-01

    In this dissertation, a novel transform method based on human visual response features for gray scale mammographic imaging in computer radiography (CR) is presented. The parameters governing imaging quality in CR mammography were investigated experimentally. In addition, methods for image energy subtraction and a novel method of image registration for CR mammography are presented. Because images are viewed and interpreted by humans, differences in gray scale are displayed more usefully if they are presented in a manner commensurate with human visual response principles. Transforming the image gray scale with this method enhances image contrast and increases the viewer's ability to extract useful information from the image. After the transform, tumors and microcalcifications are displayed in a form that is simpler for humans to interpret. The method is investigated theoretically and experimentally. Measurements of geometric image blur, MTF, DQE, and ROC for CR imaging, and comparison with the imaging quality of screen-film systems, indicate that CR imaging is better than screen-film in DQE and ROC, while the differences in geometric blur and MTF are very small. The results suggest that the CR system can replace the screen-film system for mammography imaging. In addition, the results show that the optimal imaging energy for CR mammography is about 24 kV. This indicates that the imaging energy of the CR system is lower than that of the screen-film system and, therefore, the x-ray dose to the patient for mammography with the CR system is lower than that with the screen-film system. Based on the difference in penetrability of x rays of different wavelengths, and the fact that part of the x-ray beam will pass

  1. Open-Source Integrated Design-Analysis Environment For Nuclear Energy Advanced Modeling & Simulation Final Scientific/Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    O' Leary, Patrick [Kitware, Inc., Clifton Park, NY (United States)

    2017-01-30

    The framework created through the Open-Source Integrated Design-Analysis Environment (IDAE) for Nuclear Energy Advanced Modeling & Simulation grant has simplified and democratized advanced modeling and simulation in the nuclear energy industry across a range of nuclear engineering applications. It leverages millions of investment dollars from the Department of Energy's Office of Nuclear Energy for modeling and simulation of light water reactors and the Office of Nuclear Energy's research and development. The IDAE framework enhanced Kitware's Computational Model Builder (CMB) while leveraging existing open-source toolkits, creating a graphical end-to-end umbrella that guides end-users and developers through the nuclear energy advanced modeling and simulation lifecycle. In addition, the work delivered strategic advancements in meshing and visualization for ensembles.

  2. Stakeholder-driven multi-attribute analysis for energy project selection under uncertainty

    International Nuclear Information System (INIS)

    Read, Laura; Madani, Kaveh; Mokhtari, Soroush; Hanks, Catherine

    2017-01-01

    In practice, selecting an energy project for development requires balancing criteria and competing stakeholder priorities to identify the best alternative. Energy source selection can be modeled as a multi-criteria decision-making problem to provide quantitative support to reconcile technical, economic, environmental, social, and political factors with respect to the stakeholders' interests. Decision making among these complex interactions should also account for the uncertainty present in the input data. In response, this work develops a stochastic decision analysis framework to evaluate alternatives by involving stakeholders to identify both quantitative and qualitative selection criteria and performance metrics which carry uncertainties. The developed framework is illustrated using a case study from Fairbanks, Alaska, where decision makers and residents must decide on a new source of energy for heating and electricity. We approach this problem in a five-step methodology: (1) engaging experts (role players) to develop criteria of project performance; (2) collecting a range of quantitative and qualitative input information to determine the performance of each proposed solution according to the selected criteria; (3) performing a Monte-Carlo analysis to capture uncertainties given in the inputs; (4) applying multi-criteria decision-making, social choice (voting), and fallback bargaining methods to account for three different levels of cooperation among the stakeholders; and (5) computing an aggregate performance index (API) score for each alternative based on its performance across criteria and cooperation levels. API scores communicate relative performance between alternatives. In this way, our methodology maps uncertainty from the input data to reflect risk in the decision and incorporates varying degrees of cooperation into the analysis to identify an optimal and practical alternative. - Highlights: • We develop an applicable stakeholder-driven framework for
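
    A minimal sketch of steps (3) and (5), Monte-Carlo sampling of uncertain criterion scores followed by an aggregate performance index per alternative; the alternatives, ranges, and weights are illustrative stand-ins, and the aggregation rule is simplified relative to the paper's:

```python
import numpy as np

# Monte-Carlo draws from per-criterion uncertainty ranges, then a weighted
# aggregate performance index (API) per alternative. All data are invented.

rng = np.random.default_rng(0)

# (low, high) uncertainty range per criterion, e.g. cost, environment, social
alternatives = {
    "natural_gas": [(6, 8), (5, 7), (4, 6)],
    "coal":        [(7, 9), (2, 4), (3, 5)],
    "wind":        [(4, 6), (7, 9), (6, 8)],
}
weights = np.array([0.4, 0.35, 0.25])  # stakeholder-elicited criterion weights

def api_score(ranges, n_samples=10_000):
    """Mean weighted score over Monte-Carlo draws from the uncertainty ranges."""
    lows = np.array([r[0] for r in ranges], dtype=float)
    highs = np.array([r[1] for r in ranges], dtype=float)
    samples = rng.uniform(lows, highs, size=(n_samples, len(ranges)))
    return (samples @ weights).mean()

for name, ranges in alternatives.items():
    print(f"{name}: API = {api_score(ranges):.2f}")
```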

  3. Measurement of breast tissue composition with dual energy cone-beam computed tomography: A postmortem study

    Energy Technology Data Exchange (ETDEWEB)

    Ding Huanjun; Ducote, Justin L.; Molloi, Sabee [Department of Radiological Sciences, University of California, Irvine, California 92697 (United States)

    2013-06-15

    Purpose: To investigate the feasibility of a three-material compositional measurement of water, lipid, and protein content of breast tissue with dual kVp cone-beam computed tomography (CT) for diagnostic purposes. Methods: Simulations were performed on a flat panel-based computed tomography system with a dual kVp technique in order to guide the selection of experimental acquisition parameters. The expected errors induced by using the proposed calibration materials were also estimated by simulation. Twenty pairs of postmortem breast samples were imaged with a flat-panel based dual kVp cone-beam CT system, followed by image-based material decomposition using calibration data obtained from a three-material phantom consisting of water, vegetable oil, and polyoxymethylene plastic. The tissue samples were then chemically decomposed into their respective water, lipid, and protein contents after imaging to allow direct comparison with data from dual energy decomposition. Results: Guided by results from simulation, the beam energies for the dual kVp cone-beam CT system were selected to be 50 and 120 kVp with the mean glandular dose divided equally between each exposure. The simulation also suggested that the use of polyoxymethylene as the calibration material for the measurement of pure protein may introduce an error of -11.0%. However, the tissue decomposition experiments, which employed a calibration phantom made out of water, oil, and polyoxymethylene, exhibited strong correlation with data from the chemical analysis. The average root-mean-square percentage error for water, lipid, and protein contents was 3.58% as compared with chemical analysis. Conclusions: The results of this study suggest that the water, lipid, and protein contents can be accurately measured using dual kVp cone-beam CT. The tissue compositional information may improve the sensitivity and specificity for breast cancer diagnosis.
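
    The image-based decomposition reduces, per voxel, to a small linear solve: two attenuation measurements plus a volume conservation constraint determine the three volume fractions. The basis attenuation values below are illustrative placeholders for what the calibration phantom provides:

```python
import numpy as np

# Sketch of three-material decomposition from dual-kVp data. Columns are
# water, lipid, protein (linear attenuation in 1/cm; illustrative values).

A = np.array([
    [0.227, 0.190, 0.270],  # mu at the low-kVp beam
    [0.206, 0.180, 0.230],  # mu at the high-kVp beam
    [1.0,   1.0,   1.0],    # f_water + f_lipid + f_protein = 1
])

def decompose(mu_low, mu_high):
    """Solve for (f_water, f_lipid, f_protein) in one voxel."""
    return np.linalg.solve(A, np.array([mu_low, mu_high, 1.0]))

fw, fl, fp = decompose(mu_low=0.2245, mu_high=0.2030)
print(f"water {fw:.2f}, lipid {fl:.2f}, protein {fp:.2f}")  # 0.50, 0.30, 0.20
```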

  4. 3D motion analysis via energy minimization

    Energy Technology Data Exchange (ETDEWEB)

    Wedel, Andreas

    2009-10-16

    This work deals with 3D motion analysis from stereo image sequences for driver assistance systems. It consists of two parts: the estimation of motion from the image data and the segmentation of moving objects in the input images. The content can be summarized with the technical term machine visual kinesthesia, the sensation or perception and cognition of motion. In the first three chapters, the importance of motion information is discussed for driver assistance systems, for machine vision in general, and for the estimation of ego motion. The next two chapters delineate motion perception, analyzing the apparent movement of pixels in image sequences for both a monocular and binocular camera setup. Then, the obtained motion information is used to segment moving objects in the input video. Thus, one can clearly identify the thread from analyzing the input images to describing the input images by means of stationary and moving objects. Finally, I present possibilities for future applications based on the contents of this thesis. Previous work in each case is presented in the respective chapters. Although the overarching issue of motion estimation from image sequences is related to practice, there is nothing as practical as a good theory (Kurt Lewin). Several problems in computer vision are formulated as intricate energy minimization problems. In this thesis, motion analysis in image sequences is thoroughly investigated, showing that splitting an original complex problem into simplified sub-problems yields improved accuracy, increased robustness, and a clear and accessible approach to state-of-the-art motion estimation techniques. In Chapter 4, optical flow is considered. Optical flow is commonly estimated by minimizing the combined energy, consisting of a data term and a smoothness term. These two parts are decoupled, yielding a novel and iterative approach to optical flow. The derived Refinement Optical Flow framework is a clear and straightforward approach to
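
    The classic Horn-Schunck iteration below illustrates the "data term plus smoothness term" energy this thesis builds on; the Refinement Optical Flow framework itself decouples the two terms, so this is only the textbook baseline:

```python
import numpy as np

# Textbook Horn-Schunck fixed-point update for optical flow, minimizing a
# data term plus an alpha-weighted smoothness term.

def horn_schunck(I1, I2, alpha=10.0, n_iter=200):
    Ix = np.gradient(I1, axis=1)   # spatial derivatives of the first frame
    Iy = np.gradient(I1, axis=0)
    It = I2 - I1                   # temporal derivative
    u = np.zeros_like(I1)
    v = np.zeros_like(I1)
    mean4 = lambda f: (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                       np.roll(f, 1, 1) + np.roll(f, -1, 1)) / 4.0
    for _ in range(n_iter):
        u_bar, v_bar = mean4(u), mean4(v)
        t = (Ix * u_bar + Iy * v_bar + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_bar - Ix * t
        v = v_bar - Iy * t
    return u, v

rng = np.random.default_rng(0)
I1 = rng.random((32, 32))
I2 = np.roll(I1, 1, axis=1)        # frame shifted one pixel to the right
u, v = horn_schunck(I1, I2)
print("mean flow:", u.mean(), v.mean())
```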

  5. Computational Chemical Synthesis Analysis and Pathway Design

    Directory of Open Access Journals (Sweden)

    Fan Feng

    2018-06-01

    Full Text Available With the idea of retrosynthetic analysis, which was introduced in the 1960s, chemical synthesis analysis and pathway design have been transformed from a complex problem into a regular process of structural simplification. This review aims to summarize the developments of computer-assisted synthetic analysis and design in recent years, and how machine-learning algorithms have contributed to them. The LHASA system pioneered the design of semi-empirical reaction modes in computers, with its subsequent rule-based and network-searching work not only expanding the databases, but also building new approaches to representing reaction rules. Programs like ARChem Route Designer replaced hand-coded reaction modes with automatically extracted rules, and programs like Chematica changed traditional design into network searching. Afterward, with the help of machine learning, two-step models which combine reaction rules and statistical methods became the mainstream. Recently, fully data-driven learning methods using deep neural networks, which do not even require any prior knowledge, have been applied in this field. Up to now, however, these methods still cannot replace experienced human organic chemists due to their relatively low accuracies. Future algorithms, with the aid of powerful computational hardware, will keep this topic promising, with good prospects.

  6. 76 FR 64931 - Building Energy Codes Cost Analysis

    Science.gov (United States)

    2011-10-19

    ...-0046] Building Energy Codes Cost Analysis AGENCY: Office of Energy Efficiency and Renewable Energy... reopening of the time period for submitting comments on the request for information on Building Energy Codes... the request for information on Building Energy Code Cost Analysis and provide docket number EERE-2011...

  7. Computer simulation of high-energy recoils in FCC metals: cascade shapes and sizes

    International Nuclear Information System (INIS)

    Heinisch, H.L.

    1981-01-01

    Displacement cascades in copper generated by primary knock-on atoms with energies from 1 keV to 500 keV were produced with the computer code MARLOWE. The sizes and other features of the point defect distributions were measured as a function of energy. In the energy range from 30 keV to 50 keV there is a transition from compact single damage regions to chains of generally closely-spaced, but distinct multiple damage regions. The average spacing between multiple damage regions remains constant with energy. Only a small fraction of the recoils from fusion neutrons is expected to produce widely separated subcascades

  8. USERDA computer program summaries. Numbers 177--239

    International Nuclear Information System (INIS)

    1975-10-01

    Since 1960 the Argonne Code Center has served as a U. S. Atomic Energy Commission information center for computer programs developed and used primarily for the solution of problems in nuclear physics, reactor design, reactor engineering and operation. The Center, through a network of registered installations, collects, validates, maintains, and distributes a library of these computer programs and publishes a compilation of abstracts describing them. In 1972 the scope of the Center's activities was officially expanded to include computer programs developed in all of the U. S. Atomic Energy Commission program areas and the compilation and publication of this report. The Computer Program Summary report contains summaries of computer programs at the specification stage, under development, being checked out, in use, or available at ERDA offices, laboratories, and contractor installations. Programs are divided into the following categories: cross section and resonance integral calculations; spectrum calculations, generation of group constants, lattice and cell problems; static design studies; depletion, fuel management, cost analysis, and reactor economics; space-independent kinetics; space--time kinetics, coupled neutronics--hydrodynamics--thermodynamics and excursion simulations; radiological safety, hazard and accident analysis; heat transfer and fluid flow; deformation and stress distribution computations, structural analysis and engineering design studies; gamma heating and shield design programs; reactor systems analysis; data preparation; data management; subsidiary calculations; experimental data processing; general mathematical and computing system routines; materials; environmental and earth sciences; space sciences; electronics and engineering equipment; chemistry; particle accelerators and high-voltage machines; physics; controlled thermonuclear research; biology and medicine; and data

  9. ORLIB: a computer code that produces one-energy group, time- and spatially-averaged neutron cross sections

    International Nuclear Information System (INIS)

    Blink, J.A.; Dye, R.E.; Kimlinger, J.R.

    1981-12-01

    Calculation of neutron activation of proposed fusion reactors requires a library of neutron-activation cross sections. One such library is ACTL, which is being updated and expanded by Howerton. If the energy-dependent neutron flux is also known as a function of location and time, the buildup and decay of activation products can be calculated. In practice, hand calculation is impractical without energy-averaged cross sections because of the large number of energy groups. A widely used activation computer code, ORIGEN2, also requires energy-averaged cross sections. Accordingly, we wrote the ORLIB code to collapse the ACTL library, using the flux as a weighting function. The ORLIB code runs on the LLNL Cray computer network. We have also modified ORIGEN2 to accept the expanded activation libraries produced by ORLIB
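
    The collapse ORLIB performs is, per reaction, a flux-weighted average over energy groups; a minimal sketch with illustrative (non-ACTL) numbers:

```python
import numpy as np

# Flux-weighted group collapse:
#   sigma_eff = sum_g(sigma_g * phi_g) / sum_g(phi_g)

def collapse(sigma, phi):
    """Collapse multigroup cross sections (barns) with flux weights."""
    sigma = np.asarray(sigma, dtype=float)
    phi = np.asarray(phi, dtype=float)
    return np.sum(sigma * phi) / np.sum(phi)

sigma_g = [1.8, 0.9, 0.4, 0.1]  # cross section per group, barns
phi_g = [0.1, 0.3, 0.4, 0.2]    # group fluxes, arbitrary units
print(f"one-group cross section: {collapse(sigma_g, phi_g):.3f} barns")
```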

  10. Summary of research in applied mathematics, numerical analysis, and computer sciences

    Science.gov (United States)

    1986-01-01

    The major categories of current ICASE research programs addressed include: numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; control and parameter identification problems, with emphasis on effective numerical methods; computational problems in engineering and physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and computer systems and software, especially vector and parallel computers.

  11. Energy Conservation Using Dynamic Voltage Frequency Scaling for Computational Cloud

    Directory of Open Access Journals (Sweden)

    A. Paulin Florence

    2016-01-01

    Full Text Available Cloud computing is a new technology which supports resource sharing on a “Pay as you go” basis around the world. It provides various services such as SaaS, IaaS, and PaaS. Computation is a part of IaaS, and all computational requests are to be served efficiently with optimal power utilization in the cloud. Recently, various algorithms have been developed to reduce power consumption; the Dynamic Voltage and Frequency Scaling (DVFS) scheme is also used in this perspective. In this paper we have devised a methodology which analyzes the behavior of a given cloud request and identifies the type of algorithm associated with it. Once the type of algorithm is identified, its time complexity is calculated using asymptotic notation. Using a best-fit strategy, the appropriate host is identified and the incoming job is allocated to the selected host. From the measured time complexity, the required clock frequency of the host is computed, and the CPU frequency is scaled up or down accordingly using the DVFS scheme, enabling energy savings of up to 55% of total power consumption.
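
    A hedged sketch of the frequency-selection step described above: estimate the cycles a request needs from its asymptotic class, then pick the lowest available DVFS operating point that still meets the deadline. All names and constants are illustrative assumptions, not the paper's implementation:

```python
import math

# Pick the lowest DVFS operating point that meets a request's deadline,
# given a rough cycle estimate derived from its complexity class.

AVAILABLE_FREQS_GHZ = [0.8, 1.2, 1.6, 2.0, 2.4]  # host operating points

def cycles_needed(n, complexity, cycles_per_op=100):
    """Rough cycle estimate from an asymptotic complexity class."""
    ops = {"O(n)": n,
           "O(n log n)": n * math.log2(max(n, 2)),
           "O(n^2)": n * n}[complexity]
    return ops * cycles_per_op

def pick_frequency(n, complexity, deadline_s):
    need_hz = cycles_needed(n, complexity) / deadline_s
    for f in AVAILABLE_FREQS_GHZ:        # lowest frequency meeting the deadline
        if f * 1e9 >= need_hz:
            return f
    return AVAILABLE_FREQS_GHZ[-1]       # saturate at the maximum frequency

print(pick_frequency(n=1_000_000, complexity="O(n log n)", deadline_s=2.0))
```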

  12. Institutional analysis for energy policy

    Energy Technology Data Exchange (ETDEWEB)

    Morris, F.A.; Cole, R.J.

    1980-07-01

    This report summarizes principles, techniques, and other information for doing institutional analyses in the area of energy policy. The report was prepared to support DOE's Regional Issues Identification and Assessment (RIIA) program. RIIA identifies environmental, health, safety, socioeconomic, and institutional issues that could accompany hypothetical future scenarios for energy consumption and production on a regional basis. Chapter 1 provides some theoretical grounding in institutional analysis. Chapter 2 provides information on constructing institutional maps of the processes for bringing on line energy technologies and facilities contemplated in RIIA scenarios. Chapter 3 assesses the institutional constraints, opportunities, and impacts that affect whether these technologies and facilities would in fact be developed. Chapters 4 and 5 show how institutional analysis can support use of exercises such as RIIA in planning institutional change and making energy policy choices.

  13. A single-chip computer analysis system for liquid fluorescence

    International Nuclear Information System (INIS)

    Zhang Yongming; Wu Ruisheng; Li Bin

    1998-01-01

    The single-chip computer analysis system for liquid fluorescence is an intelligent analytic instrument based on the principle that liquids containing hydrocarbons emit several characteristic fluorescences when irradiated by strong light. Besides a single-chip computer, the system makes use of the keyboard and the calculation and printing functions of a CASIO printing calculator. It combines optics, mechanics, and electronics in one unit, and is small, light and practical, so it can be used for surface-water sample analysis in oil fields and for impurity analysis of other materials.

  14. Deterministic sensitivity and uncertainty analysis for large-scale computer models

    International Nuclear Information System (INIS)

    Worley, B.A.; Pin, F.G.; Oblow, E.M.; Maerker, R.E.; Horwedel, J.E.; Wright, R.Q.

    1988-01-01

    This paper presents a comprehensive approach to sensitivity and uncertainty analysis of large-scale computer models that is analytic (deterministic) in principle and that is firmly based on the model equations. The theory and application of two systems based upon computer calculus, GRESS and ADGEN, are discussed relative to their role in calculating model derivatives and sensitivities without a prohibitive initial manpower investment. Storage and computational requirements for these two systems are compared for a gradient-enhanced version of the PRESTO-II computer model. A Deterministic Uncertainty Analysis (DUA) method that retains the characteristics of analytically computing result uncertainties based upon parameter probability distributions is then introduced and results from recent studies are shown. 29 refs., 4 figs., 1 tab
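
    The "computer calculus" idea behind GRESS and ADGEN can be illustrated with forward-mode differentiation: propagate (value, derivative) pairs through the model equations so sensitivities emerge alongside results. The real systems instrument Fortran source; this dual-number class is only a toy analogue:

```python
# Forward-mode differentiation with dual numbers: each quantity carries its
# value and its derivative with respect to a chosen parameter.

class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.der + o.der)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.der * o.val + self.val * o.der)
    __rmul__ = __mul__

def model(k):
    # toy model response: 3*k^2 + 2*k + 1
    return 3 * k * k + 2 * k + 1

k = Dual(2.0, 1.0)       # seed the derivative with respect to k
out = model(k)
print(out.val, out.der)  # 17.0 and d(out)/dk = 6*k + 2 = 14.0
```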

  15. Energy demand analysis in the industrial sector

    International Nuclear Information System (INIS)

    Lapillone, B.

    1991-01-01

    This chapter deals with energy demand analysis in the industrial sector. Different estimates of energy consumption in industry, taking Thailand as an example, are given. Major energy-consuming industrial sectors in selected Asian countries are listed. Suggestions are given for analyzing energy consumption trends in industry, whether at the overall level or at the sub-sector level (e.g. food), using the conventional approach through the energy/output ratio. 4 refs, 7 figs, 13 tabs

  16. System analysis of energy utilization from waste - evaluation of energy, environment and economy. Summary report

    International Nuclear Information System (INIS)

    Sundqvist, Jan-Olov; Granath, Jessica; Frostell, Bjoern; Bjoerklund, Anna; Eriksson, Ola; Carlsson, Marcus

    1999-12-01

    Energy, environmental, and economic consequences of different management systems for municipal solid waste have been studied in a systems analysis. In the systems analysis, different combinations of incineration, materials recycling of separated plastic and cardboard containers, and biological treatment (anaerobic digestion and composting) of easily degradable organic waste, were studied and also compared to landfilling. In the study a computer model (ORWARE) based on LCA methodology was used. Case studies were performed for three different municipalities: Uppsala, Stockholm, and Aelvdalen. The following parameters were used for evaluating the different waste management options: consumption of energy resources, global warming potential, acidification, eutrophication, photo oxidant formation, heavy metal flows, financial economy and welfare economy, where welfare economy is the sum of financial economy and environmental economy. The study shows that reduced landfilling to the benefit of an increased use of energy and material from waste is positive from environmental and energy as well as economic aspects. This is mainly due to the fact that the choice of waste management method affects processes outside the waste management system, such as production of district heating, electricity, vehicle fuel, plastic, cardboard, and fertiliser. This means that landfilling of energy-rich waste should be avoided as far as possible, both because of the environmental impact and because of the low recovery of resources. Incineration should constitute a basis in the waste management systems of the three municipalities studied, even if the waste has to be transported to a regional facility. Once the waste is collected, longer regional transports are of little significance, as long as the transports are carried out in an efficient manner. Comparing materials recycling and incineration, and biological treatment and incineration, no unambiguous conclusions can be drawn. There are

  17. On the feasibility of using emergy analysis as a source of benchmarking criteria through data envelopment analysis: A case study for wind energy

    International Nuclear Information System (INIS)

    Iribarren, Diego; Vázquez-Rowe, Ian; Rugani, Benedetto; Benetto, Enrico

    2014-01-01

    The definition of criteria for the benchmarking of similar entities is often a critical issue in analytical studies because of the multiplicity of criteria susceptible to be taken into account. This issue can be aggravated by the need to handle multiple data for multiple facilities. This article presents a methodological framework, named the Em + DEA method, which combines emergy analysis with Data Envelopment Analysis (DEA) for the ecocentric benchmarking of multiple resembling entities (i.e., multiple decision making units or DMUs). Provided that the life-cycle inventories of these DMUs are available, an emergy analysis is performed through the computation of seven different indicators, which refer to the use of fossil, metal, mineral, nuclear, renewable energy, water and land resources. These independent emergy values are then implemented as inputs for DEA computation, thus providing operational emergy-based efficiency scores and, for the inefficient DMUs, target emergy flows (i.e., feasible emergy benchmarks that would turn inefficient DMUs into efficient). The use of the Em + DEA method is exemplified through a case study of wind energy farms. The potential use of CED (cumulative energy demand) and CExD (cumulative exergy demand) indicators as alternative benchmarking criteria to emergy is discussed. The combined use of emergy analysis with DEA is proven to be a valid methodological approach to provide benchmarks oriented towards the optimisation of the life-cycle performance of a set of multiple similar facilities, not being limited to the operational traits of the assessed units. - Highlights: • Combined emergy and DEA method to benchmark multiple resembling entities. • Life-cycle inventory, emergy analysis and DEA as key steps of the Em + DEA method. • Valid ecocentric benchmarking approach proven through a case study of wind farms. • Comparison with life-cycle energy-based benchmarking criteria (CED/CExD + DEA). • Analysts and decision and policy
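
    A minimal sketch of the DEA step, an input-oriented CCR model in envelopment form solved as a linear program, with the emergy flows as inputs and delivered electricity as the output; the data are illustrative placeholders, not the case study's emergy accounts:

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR DEA: each wind farm (DMU) gets an efficiency theta <= 1.

X = np.array([[3.2, 1.1], [2.8, 0.9], [4.0, 2.0]])  # emergy inputs per DMU
Y = np.array([[10.0], [9.0], [11.0]])                # electricity output per DMU

def ccr_efficiency(k):
    """min theta s.t. sum_j lam_j x_j <= theta x_k, sum_j lam_j y_j >= y_k."""
    n, m = X.shape
    s = Y.shape[1]
    c = np.zeros(1 + n)
    c[0] = 1.0                                       # variables: [theta, lam_1..lam_n]
    A_ub, b_ub = [], []
    for i in range(m):                               # inputs: lam.X[:,i] - theta*x_ki <= 0
        A_ub.append(np.concatenate(([-X[k, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(s):                               # outputs: -lam.Y[:,r] <= -y_kr
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[k, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

for k in range(X.shape[0]):
    print(f"DMU {k}: theta = {ccr_efficiency(k):.3f}")
```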

  18. Analysis of Future Vehicle Energy Demand in China Based on a Gompertz Function Method and Computable General Equilibrium Model

    Directory of Open Access Journals (Sweden)

    Tian Wu

    2014-11-01

    Full Text Available This paper presents a model for the projection of Chinese vehicle stocks and road vehicle energy demand through 2050 based on low-, medium-, and high-growth scenarios. To derive a gross-domestic-product (GDP)-dependent Gompertz function, Chinese GDP is estimated using a recursive dynamic Computable General Equilibrium (CGE) model. The Gompertz function is estimated using historical data on vehicle development trends in North America, the Pacific Rim and Europe to overcome the problem of insufficient long-running data on Chinese vehicle ownership. Results indicate that the projected vehicle stock for 2050 is 300, 455 and 463 million for the low-, medium-, and high-growth scenarios respectively. Furthermore, the growth in China’s vehicle stock will increase beyond the inflection point of the Gompertz curve by 2020, but will not reach the saturation point during the period 2014–2050. Of major road vehicle categories, cars are the largest energy consumers, followed by trucks and buses. Growth in Chinese vehicle demand is primarily determined by per capita GDP. Vehicle saturation levels solely influence the shape of the Gompertz curve, and population growth weakly affects vehicle demand. Projected total energy consumption of road vehicles in 2050 is 380, 575 and 586 million tonnes of oil equivalent for the three scenarios, respectively.
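
    A sketch of the GDP-dependent Gompertz ownership curve, V(g) = Vsat * exp(alpha * exp(beta * g)) with alpha, beta < 0; the parameter values below are illustrative, not the paper's fit:

```python
import numpy as np

# Gompertz vehicle-ownership curve as a function of per-capita GDP.

V_SAT = 500.0                  # saturation level, vehicles per 1000 people
ALPHA, BETA = -5.9, -0.00014   # shape parameters (both negative)

def vehicles_per_1000(gdp_per_capita):
    return V_SAT * np.exp(ALPHA * np.exp(BETA * gdp_per_capita))

for g in [2_000, 10_000, 30_000, 60_000]:
    print(f"GDP/cap {g:>6}: {vehicles_per_1000(g):6.1f} vehicles per 1000")
```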

  19. Dynamic analysis of hybrid energy systems under flexible operation and variable renewable generation – Part II: Dynamic cost analysis

    International Nuclear Information System (INIS)

    Garcia, Humberto E.; Mohanty, Amit; Lin, Wen-Chiao; Cherry, Robert S.

    2013-01-01

    Dynamic analysis of HES (hybrid energy systems) under flexible operation and variable renewable generation is considered in this two-part communication to better understand various challenges and opportunities associated with the high system variability arising from the integration of renewable energy into the power grid. Advanced HES solutions are investigated in which multiple forms of energy commodities, such as electricity and chemical products, may be exchanged. In particular, a comparative dynamic cost analysis is conducted in this part two of the communication to determine best HES options. The cost function includes a set of metrics for computing fixed costs, such as fixed operations and maintenance and overnight capital costs, and also variable operational costs, such as cost of operational variability, variable operations and maintenance cost, and cost of environmental impact, together with revenues. Assuming natural gas, coal, and nuclear as primary heat sources, preliminary results identify the level of renewable penetration at which a given advanced HES option (e.g., a nuclear hybrid) becomes increasingly more economical than a traditional electricity-only generation solution. Conditions are also revealed under which carbon resources may be better utilized as carbon sources for chemical production rather than as combustion material for electricity generation. - Highlights: ► Dynamic analysis of HES to investigate challenges related to renewable penetration. ► Evaluation of dynamic synergies among HES constituents on system performance. ► Comparison of traditional versus advanced HES candidates. ► Dynamic cost analysis of HES candidates to investigate their economic viability. ► Identification of conditions under which an energy commodity may be best utilized

  20. Computational techniques for inelastic analysis and numerical experiments

    International Nuclear Information System (INIS)

    Yamada, Y.

    1977-01-01

    A number of formulations have been proposed for inelastic analysis, particularly for the thermal elastic-plastic creep analysis of nuclear reactor components. In the elastic-plastic regime, which principally concerns time-independent behavior, numerical techniques based on the finite element method have been well exploited and computations have become routine work. For problems in which time-dependent behavior is significant, it is desirable to incorporate a procedure that works with the mechanical model formulation as well as with the equation-of-state methods proposed so far. A computer program should also take into account the strain-dependent and/or time-dependent microstructural changes which often occur during the operation of structural components at increasingly high temperatures for long periods of time. Special considerations are crucial if the analysis is to be extended to the large strain regime where geometric nonlinearities predominate. The present paper introduces a rational updated formulation and a computer program under development that take into account the various requisites stated above. (Auth.)

  1. Model for Analysis of Energy Demand (MAED-2). User's manual

    International Nuclear Information System (INIS)

    2007-01-01

    The IAEA has been supporting its Member States in the area of energy planning for sustainable development. Development and dissemination of appropriate methodologies and their computer codes are important parts of this support. This manual has been produced to facilitate the use of the MAED model: Model for Analysis of Energy Demand. The methodology of the MAED model was originally developed by B. Chateau and B. Lapillonne of the Institute Economique et Juridique de l'Energie (IEJE) of the University of Grenoble, France, and was presented as the MEDEE model. Since then the MEDEE model has been developed and adapted for modelling of various energy demand systems. The IAEA adopted the MEDEE-2 model and incorporated important modifications to make it more suitable for application in developing countries, naming it the MAED model. The first version of the MAED model was designed for DOS-based systems and was later converted for the Windows system. This manual presents the latest version of the MAED model. The most prominent feature of this version is its flexibility for representing the structure of energy consumption. The model now allows country-specific representations of energy consumption patterns using the MAED methodology. The user can now disaggregate energy consumption according to the needs and/or data availability in her/his country. As such, MAED has become a powerful tool for modelling widely diverse energy consumption patterns. This manual presents the model in detail and provides guidelines for its application

  2. Model for Analysis of Energy Demand (MAED-2). User's manual

    International Nuclear Information System (INIS)

    2006-01-01

    The IAEA has been supporting its Member States in the area of energy planning for sustainable development. Development and dissemination of appropriate methodologies and their computer codes are important parts of this support. This manual has been produced to facilitate the use of the MAED model: Model for Analysis of Energy Demand. The methodology of the MAED model was originally developed by B. Chateau and B. Lapillonne of the Institute Economique et Juridique de l'Energie (IEJE) of the University of Grenoble, France, and was presented as the MEDEE model. Since then the MEDEE model has been developed and adapted for modelling of various energy demand systems. The IAEA adopted the MEDEE-2 model and incorporated important modifications to make it more suitable for application in developing countries, naming it the MAED model. The first version of the MAED model was designed for DOS-based systems and was later converted for the Windows system. This manual presents the latest version of the MAED model. The most prominent feature of this version is its flexibility for representing the structure of energy consumption. The model now allows country-specific representations of energy consumption patterns using the MAED methodology. The user can now disaggregate energy consumption according to the needs and/or data availability in her/his country. As such, MAED has become a powerful tool for modelling widely diverse energy consumption patterns. This manual presents the model in detail and provides guidelines for its application

  3. Computer simulations of low energy displacement cascades in a face centered cubic lattice

    International Nuclear Information System (INIS)

    Schiffgens, J.O.; Bourquin, R.D.

    1976-09-01

    Computer simulations of atomic motion in a copper lattice following the production of primary knock-on atoms (PKAs) with energies from 25 to 200 eV are discussed. In this study, a mixed Moliere-Englert pair potential is used to model the copper lattice. The computer code COMENT, which employs the dynamical method, is used to analyze the motion of up to 6000 atoms per time step during cascade evolution. The atoms are initially at rest on the sites of an ideal lattice. A matrix of 12 PKA directions and 6 PKA energies is investigated. Displacement thresholds in the [110] and [100] directions are calculated to be approximately 17 and 20 eV, respectively. A table showing the stability of isolated Frenkel pairs with different vacancy and interstitial orientations and separations is presented. The numbers of Frenkel pairs and atomic replacements are tabulated as a function of PKA direction for each energy. For PKA energies of 25, 50, 75, 100, 150, and 200 eV, the average numbers of Frenkel pairs per PKA are 0.4, 0.6, 1.0, 1.2, 1.4, and 2.2, and the average numbers of replacements per PKA are 2.4, 4.0, 3.3, 4.9, 9.3, and 15.8

  4. Energy metrics analysis of hybrid - photovoltaic (PV) modules

    Energy Technology Data Exchange (ETDEWEB)

    Tiwari, Arvind [Department of Electronics and Communication, Krishna Institute of Engineering and Technology, 13 k.m. stone, Ghaziabad - Meerut Road, Ghaziabad 201 206, UP (India); Barnwal, P.; Sandhu, G.S.; Sodha, M.S. [Centre for Energy Studies, Indian Institute of Technology Delhi, Hauz Khas, New Delhi 110 016 (India)

    2009-12-15

    In this paper, energy metrics (energy payback time, electricity production factor and life cycle conversion efficiency) of hybrid photovoltaic (PV) modules have been analyzed and presented for the composite climate of New Delhi, India. For this purpose, it is necessary to calculate (1) the energy consumption in making the different components of the PV modules and (2) the annual energy (electrical and thermal) available from the hybrid-PV modules. A set of mathematical relations has been reformulated for computation of the energy metrics. The manufacturing energy, material production energy, energy use and distribution energy of the system have been taken into account to determine the embodied energy for the hybrid-PV modules. The embodied energy and annual energy outputs have been used for evaluation of the energy metrics. For the hybrid PV module, it has been observed that the EPBT is significantly reduced by taking into account the annual thermal energy available in addition to the electrical energy. The values of EPF and LCCE of the hybrid PV module become higher, as expected. (author)
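
    The three metrics reduce to simple ratios of embodied energy, annual output, and lifetime solar input; the sketch below uses illustrative numbers and reproduces the qualitative finding that counting thermal output alongside electrical output shortens the EPBT:

```python
# Energy metrics for a (hybrid) PV module; all input values are invented.

def epbt(embodied_kwh, annual_output_kwh):
    """Energy payback time: years to recover the embodied energy."""
    return embodied_kwh / annual_output_kwh

def epf(embodied_kwh, annual_output_kwh):
    """Electricity production factor: annual output per unit embodied energy."""
    return annual_output_kwh / embodied_kwh

def lcce(embodied_kwh, annual_output_kwh, annual_solar_kwh, lifetime_yr):
    """Life cycle conversion efficiency: net lifetime output / solar input."""
    return (annual_output_kwh * lifetime_yr - embodied_kwh) / (annual_solar_kwh * lifetime_yr)

E_in, E_el, E_th = 1200.0, 150.0, 350.0  # embodied; annual electrical; annual thermal (kWh)
print("EPBT, electrical only:", epbt(E_in, E_el))         # 8.0 years
print("EPBT, incl. thermal  :", epbt(E_in, E_el + E_th))  # 2.4 years
print("EPF :", epf(E_in, E_el + E_th))
print("LCCE:", lcce(E_in, E_el + E_th, annual_solar_kwh=2000.0, lifetime_yr=30))
```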

  5. Computational Flux Balance Analysis Predicts that Stimulation of Energy Metabolism in Astrocytes and their Metabolic Interactions with Neurons Depend on Uptake of K+ Rather than Glutamate.

    Science.gov (United States)

    DiNuzzo, Mauro; Giove, Federico; Maraviglia, Bruno; Mangia, Silvia

    2017-01-01

    Brain activity involves essential functional and metabolic interactions between neurons and astrocytes. The importance of astrocytic functions to neuronal signaling is supported by many experiments reporting high rates of energy consumption and oxidative metabolism in these glial cells. In the brain, almost all energy is consumed by the Na+/K+ ATPase, which hydrolyzes 1 ATP to move 3 Na+ outside and 2 K+ inside the cells. Astrocytes are commonly thought to be primarily involved in transmitter glutamate cycling, a mechanism that however only accounts for a few % of brain energy utilization. In order to examine the participation of astrocytic energy metabolism in brain ion homeostasis, here we attempted to devise a simple stoichiometric relation linking glutamatergic neurotransmission to Na+ and K+ ionic currents. To this end, we took into account ion pumps and voltage/ligand-gated channels using the stoichiometry derived from the available energy budget for neocortical signaling and incorporated this stoichiometric relation into a computational metabolic model of neuron-astrocyte interactions. We aimed at reproducing the experimental observations about rates of metabolic pathways obtained by 13C-NMR spectroscopy in rodent brain. When simulated data matched experiments as well as biophysical calculations, the stoichiometry for voltage/ligand-gated Na+ and K+ fluxes generated by neuronal activity was close to a 1:1 relationship, and specifically 63/58 Na+/K+ ions per glutamate released. We found that astrocytes are stimulated by the extracellular K+ exiting neurons in excess of the 3/2 Na+/K+ ratio underlying the Na+/K+ ATPase-catalyzed reaction. Analysis of correlations between neuronal and astrocytic processes indicated that astrocytic K+ uptake, but not astrocytic Na+-coupled glutamate uptake, is instrumental for the establishment of the neuron-astrocyte metabolic partnership. Our results emphasize the importance of K+ in stimulating the activation of

  6. System analysis of energy utilization from waste - evaluation of energy, environment and economy. Case study - Stockholm

    International Nuclear Information System (INIS)

    Sundqvist, Jan-Olov; Granath, Jessica; Frostell, Bjoern; Bjoerklund, Anna; Eriksson, Ola; Carlsson, Marcus

    1999-12-01

    Energy, environmental, and economic consequences of different management systems for municipal solid waste have been studied in a systems analysis. In the systems analysis, different combinations of incineration, materials recycling of separated plastic and cardboard containers, and biological treatment (anaerobic digestion) of easily degradable organic waste, were studied and also compared to landfilling. In the study a computer model (ORWARE) based on LCA methodology was used. The following parameters were used for evaluating the different waste management options: consumption of energy resources, global warming potential, acidification, eutrophication, photo oxidant formation, heavy metal flows, financial economy and welfare economy, where welfare economy is the sum of financial economy and environmental economy. The study shows that reduced landfilling to the benefit of an increased use of energy and material from the waste is positive from environmental and energy as well as economic aspects. This is mainly due to the fact that the choice of waste management method affects processes outside the waste management system, such as production of district heating, electricity, vehicle fuel, plastic, cardboard, and fertiliser. This means that landfilling of energy-rich waste should be avoided as far as possible, both because of the environmental impact and because of the low recovery of resources. Incineration should constitute a basis in the waste management system of Stockholm. Once the waste is collected, longer regional transports are of little significance, as long as the transports are carried out in an efficient manner. Comparing materials recycling and incineration, and biological treatment and incineration, no unambiguous conclusions can be drawn. There are benefits and drawbacks associated with all these waste management options. Materials recycling of plastic containers is comparable to incineration from a welfare economic aspect, but gives less

  7. Energy-Efficient Management of Data Center Resources for Cloud Computing: A Vision, Architectural Elements, and Open Challenges

    OpenAIRE

    Buyya, Rajkumar; Beloglazov, Anton; Abawajy, Jemal

    2010-01-01

    Cloud computing is offering utility-oriented IT services to users worldwide. Based on a pay-as-you-go model, it enables hosting of pervasive applications from consumer, scientific, and business domains. However, data centers hosting Cloud applications consume huge amounts of energy, contributing to high operational costs and carbon footprints to the environment. Therefore, we need Green Cloud computing solutions that can not only save energy for the environment but also reduce operational costs...

  8. Energy from sugarcane bagasse under electricity rationing in Brazil: a computable general equilibrium model

    International Nuclear Information System (INIS)

    Scaramucci, Jose A.; Perin, Clovis; Pulino, Petronio; Bordoni, Orlando F.J.G.; Cunha, Marcelo P. da; Cortez, Luis A.B.

    2006-01-01

    In the midst of the institutional reforms of the Brazilian electric sector initiated in the 1990s, a serious electricity shortage crisis developed in 2001. As an alternative to blackouts, the government instituted an emergency plan aimed at reducing electricity consumption. From June 2001 to February 2002, Brazilians were compelled to curtail electricity use by 20%. Since the late 1990s, but especially after the electricity crisis, energy policy in Brazil has been directed towards increasing thermoelectricity supply and promoting further gains in energy conservation. Two main issues are addressed here. Firstly, we estimate the economic impacts of constraining the supply of electric energy in Brazil. Secondly, we investigate the possible penetration of electricity generated from sugarcane bagasse. A computable general equilibrium (CGE) model is used. The traditional electricity sector and the remainder of the economy are characterized by a stylized top-down representation as nested CES (constant elasticity of substitution) production functions. Electricity production from sugarcane bagasse is described through a bottom-up activity analysis, with a detailed representation of the required inputs based on engineering studies. The model constructed is used to study the effects of the electricity shortage in the preexisting sector through prices, production and income changes. It is shown that installing capacity to generate electricity surpluses in the sugarcane agroindustrial system could ease the economic impacts of an electric energy shortage crisis on the gross domestic product (GDP)

  9. A Two-Tier Energy-Aware Resource Management for Virtualized Cloud Computing System

    Directory of Open Access Journals (Sweden)

    Wei Huang

    2016-01-01

    Full Text Available The economic costs caused by electric power take the most significant part of the total cost of a data center; thus energy conservation is an important issue in cloud computing systems. One well-known technique to reduce energy consumption is the consolidation of Virtual Machines (VMs). However, it may lose some performance points on energy saving and Quality of Service (QoS) for dynamic workloads. Fortunately, Dynamic Voltage and Frequency Scaling (DVFS) is an efficient technique to save energy in dynamic environments. In this paper, combined with DVFS technology, we propose a cooperative two-tier energy-aware management method including local DVFS control and global VM deployment. The DVFS controller adjusts the frequencies of homogeneous processors in each server at run-time based on practical energy prediction. On the other hand, the Global Scheduler assigns VMs onto designated servers based on cooperation with the local DVFS controller. The final evaluation results demonstrate the effectiveness of our two-tier method in energy saving.

  10. Information-theoretic analysis of rotational distributions from quantal and quasiclassical computations of reactive and nonreactive scattering

    International Nuclear Information System (INIS)

    Bernstein, R.B.

    1976-01-01

    An information-theoretic approach to the analysis of rotational excitation cross sections was developed by Levine, Bernstein, Johnson, Procaccia, and coworkers and applied to state-to-state cross sections available from numerical computations of reactive and nonreactive scattering (for example, by Wyatt and Kuppermann and their coworkers and by Pack and Pattengill and others). The rotational surprisals are approximately linear in the energy transferred, thereby accounting for the so-called ''exponential gap law'' for rotational relaxation discovered experimentally by Polanyi, Woodall, and Ding. For the ''linear surprisal'' case the unique relation between the surprisal parameter theta/sub R/ and the first moment of the rotational energy distribution provides a link between the pattern of the rotational state distribution and those features of the potential surface which govern the average energy transfer

  11. GAUSS VII: a computer program for the analysis of γ-ray spectra from Ge semiconductor spectrometers

    International Nuclear Information System (INIS)

    McCullagh, C.M.; Helmer, R.G.

    1982-10-01

    A description is given of a computer program, GAUSS VII, which has been written to analyze γ-ray spectra from Ge semiconductor spectrometers. The preliminary portions of the program can determine the energy and width calibration equations, locate individual peaks, and define peak regions that are significantly above the local spectral background. The user may edit these lists of peaks and regions. Each peak region is fitted with one or more components in which the peaks are represented by a Gaussian function or a Gaussian with one or two additive exponential tails on the low-energy side and one on the high-energy side. A step-like background function can be used with each component. The program will automatically recycle to add one or more components to a region if needed to improve the fit. The γ-ray energies and intensities are computed from the resulting Gaussian positions and peak areas. To allow the user to determine the best results, the results from the analyses of each region with different numbers of components can be printed and line-printer plots of the fits to the data can be made. The quality of the results depends primarily on the ability of the program to define a good spectral region for each analysis and the ability to recycle to determine the proper number of components
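
    A hedged sketch of the simplest GAUSS VII component, a Gaussian peak on a step-like background, fitted by least squares to a synthetic spectrum; the full program adds exponential tails and automatic recycling:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.special import erfc

# Gaussian peak plus step-like background, fitted to an invented spectrum.

def peak(x, area, mu, sigma, step, base):
    g = area / (sigma * np.sqrt(2 * np.pi)) * np.exp(-0.5 * ((x - mu) / sigma) ** 2)
    # smoothed step under the peak via the complementary error function
    return g + base + 0.5 * step * erfc((x - mu) / (np.sqrt(2) * sigma))

channels = np.arange(100, 160, dtype=float)
rng = np.random.default_rng(1)
truth = peak(channels, area=5000, mu=130, sigma=2.5, step=20, base=50)
counts = rng.poisson(truth).astype(float)

popt, _ = curve_fit(peak, channels, counts, p0=[4000, 129, 2.0, 10, 40])
area, mu, sigma = popt[:3]
print(f"area={area:.0f}  centroid={mu:.2f}  FWHM={2.355 * sigma:.2f} channels")
```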

  12. Computing energy-optimal trajectories for an autonomous underwater vehicle using direct shooting

    Directory of Open Access Journals (Sweden)

    Inge Spangelo

    1992-07-01

    Full Text Available Energy-optimal trajectories for an autonomous underwater vehicle can be computed using a numerical solution of the optimal control problem. The vehicle is modeled with the six-dimensional nonlinear and coupled equations of motion, controlled with DC motors in all degrees of freedom. The actuators are modeled and controlled with velocity loops. The dissipated energy is expressed in terms of the control variables as a nonquadratic function. Direct shooting methods, including control vector parameterization (CVP), are used in this study. Numerical calculations are performed and good results are achieved.
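
    A minimal control vector parameterization sketch: a piecewise-constant control is integrated forward (direct shooting) and the dissipated energy plus a terminal penalty is minimized. A one-dimensional double integrator stands in for the vehicle's six-dimensional coupled model; all numbers are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

# CVP direct shooting on a toy 1D double integrator.

T, N = 10.0, 10          # horizon (s) and number of control intervals
dt = T / N

def shoot(u):
    """Integrate the state forward and accumulate dissipated energy."""
    x = v = energy = 0.0
    for uk in u:                   # forward Euler through each interval
        energy += uk**2 * dt       # a nonquadratic loss could replace this
        v += uk * dt
        x += v * dt
    return x, v, energy

def objective(u, target=5.0):
    x, v, energy = shoot(u)
    return energy + 100.0 * ((x - target) ** 2 + v**2)  # reach target, stop

res = minimize(objective, x0=np.zeros(N), method="BFGS")
print("cost:", res.fun)
print("controls:", np.round(res.x, 3))
```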

  13. Energy saving analysis and management modeling based on index decomposition analysis integrated energy saving potential method: Application to complex chemical processes

    International Nuclear Information System (INIS)

    Geng, Zhiqiang; Gao, Huachao; Wang, Yanqing; Han, Yongming; Zhu, Qunxiong

    2017-01-01

    Highlights: • The integrated framework that combines IDA with the energy-saving potential method is proposed. • An energy saving analysis and management framework for complex chemical processes is obtained. • The proposed method is effective for energy optimization and carbon emission reduction in complex chemical processes. - Abstract: Energy saving and management of complex chemical processes play a crucial role in sustainable development. In order to analyze the effect that technology, management level, and production structure have on energy efficiency and energy saving potential, this paper proposes a novel integrated framework that combines index decomposition analysis (IDA) with an energy saving potential method. The IDA method can effectively obtain the levels of energy activity, energy hierarchy and energy intensity in a data-driven way, reflecting the impact of energy usage. The energy saving potential method can verify the correctness of the improvement direction proposed by the IDA method. Meanwhile, energy efficiency improvement, energy consumption reduction and energy savings can be visually discovered through the proposed framework. A demonstration analysis of ethylene production has verified the practicality of the proposed method, and corresponding improvements for the ethylene production can be obtained from this analysis. The energy efficiency index and the energy saving potential of the worst months can be increased by 6.7% and 7.4%, respectively, and carbon emissions can be reduced by 7.4–8.2%.
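
    The abstract does not name the IDA variant; the sketch below uses LMDI, the most common choice, on made-up ethylene data to show how a change in energy use decomposes exactly into activity and intensity effects:

```python
import math

# Two-factor LMDI decomposition of E = Q * I (activity times intensity).

def logmean(a, b):
    return (a - b) / (math.log(a) - math.log(b)) if a != b else a

def lmdi_two_factor(Q0, I0, QT, IT):
    """Decompose the change in E = Q * I into activity and intensity effects."""
    E0, ET = Q0 * I0, QT * IT
    w = logmean(ET, E0)
    activity = w * math.log(QT / Q0)
    intensity = w * math.log(IT / I0)
    return ET - E0, activity, intensity

# output Q in kt ethylene, intensity I in GJ/t, so E = Q * I is in TJ
dE, act, inten = lmdi_two_factor(Q0=800, I0=30.0, QT=900, IT=28.5)
print(f"dE={dE:.0f} TJ = activity {act:.0f} TJ + intensity {inten:.0f} TJ")
```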

  14. Multiscale Computational Analysis of Nitrogen and Oxygen Gas-Phase Thermochemistry in Hypersonic Flows

    Science.gov (United States)

    Bender, Jason D.

    Understanding hypersonic aerodynamics is important for the design of next-generation aerospace vehicles for space exploration, national security, and other applications. Ground-level experimental studies of hypersonic flows are difficult and expensive; thus, computational science plays a crucial role in this field. Computational fluid dynamics (CFD) simulations of extremely high-speed flows require models of chemical and thermal nonequilibrium processes, such as dissociation of diatomic molecules and vibrational energy relaxation. Current models are outdated and inadequate for advanced applications. We describe a multiscale computational study of gas-phase thermochemical processes in hypersonic flows, starting at the atomic scale and building systematically up to the continuum scale. The project was part of a larger effort centered on collaborations between aerospace scientists and computational chemists. We discuss the construction of potential energy surfaces for the N4, N2O2, and O4 systems, focusing especially on the multi-dimensional fitting problem. A new local fitting method named L-IMLS-G2 is presented and compared with a global fitting method. Then, we describe the theory of the quasiclassical trajectory (QCT) approach for modeling molecular collisions. We explain how we implemented the approach in a new parallel code for high-performance computing platforms. Results from billions of QCT simulations of high-energy N2 + N2, N2 + N, and N2 + O2 collisions are reported and analyzed. Reaction rate constants are calculated and sets of reactive trajectories are characterized at both thermal equilibrium and nonequilibrium conditions. The data shed light on fundamental mechanisms of dissociation and exchange reactions -- and their coupling to internal energy transfer processes -- in thermal environments typical of hypersonic flows. We discuss how the outcomes of this investigation and other related studies lay a rigorous foundation for new macroscopic models for
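    For reference, the standard QCT estimator for a thermal rate constant from the reactive fraction of sampled trajectories is the textbook form below, not a formula quoted from the dissertation; details such as stratified impact-parameter sampling vary between implementations.

    ```latex
    k(T) = g_e(T)\,\sqrt{\frac{8 k_B T}{\pi \mu}}\;\pi b_{\max}^{2}\,\frac{N_r}{N_{\mathrm{tot}}}
    ```

    Here μ is the reduced mass of the colliding pair, b_max the largest impact parameter sampled, N_r/N_tot the fraction of reactive trajectories, and g_e(T) an electronic degeneracy factor.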

  15. Integrated energy efficient data centre management for green cloud computing : the FP7 GENiC project experience

    NARCIS (Netherlands)

    Torrens, J.I.; Mehta, D.; Zavrel, V.; Grimes, D.; Scherer, Th.; Birke, R.; Chen, L.; Rea, S.; Lopez, L.; Pages, E.; Pesch, D.

    2016-01-01

    Energy consumed by computation and cooling accounts for the largest share of the average energy consumed in a data centre. As these two aspects are not always coordinated, energy consumption is not optimised. Data centres lack an integrated system that jointly optimises and controls all the

  16. Grand Challenges of Advanced Computing for Energy Innovation Report from the Workshop Held July 31-August 2, 2012

    Energy Technology Data Exchange (ETDEWEB)

    Larzelere, Alex R.; Ashby, Steven F.; Christensen, Dana C.; Crawford, Dona L.; Khaleel, Mohammad A.; John, Grosh; Stults, B. Ray; Lee, Steven L.; Hammond, Steven W.; Grover, Benjamin T.; Neely, Rob; Dudney, Lee Ann; Goldstein, Noah C.; Wells, Jack; Peltz, Jim

    2013-03-06

    From July 31 to August 2, 2012, the U.S. Department of Energy (DOE) held a workshop entitled Grand Challenges of Advanced Computing for Energy Innovation. This workshop built on three earlier workshops that clearly identified the potential for the Department and its national laboratories to enable energy innovation. The specific goal of the workshop was to identify the key challenges the nation must overcome to apply the full benefit of taxpayer-funded advanced computing technologies to U.S. energy innovation, in the ways the country produces, moves, stores, and uses energy. Perhaps more importantly, the workshop also developed a set of recommendations to help the Department overcome those challenges. These recommendations provide an action plan for what the Department can do in the coming years to improve the nation's energy future.

  17. Quantum computing applied to calculations of molecular energies: CH2 benchmark.

    Science.gov (United States)

    Veis, Libor; Pittner, Jiří

    2010-11-21

    Quantum computers are appealing for their ability to solve some tasks much faster than their classical counterparts. It was shown in [Aspuru-Guzik et al., Science 309, 1704 (2005)] that quantum computers, if available, would be able to perform full configuration interaction (FCI) energy calculations with polynomial scaling, in contrast to conventional computers, where FCI scales exponentially. We have developed a code for the simulation of quantum computers and implemented our version of the quantum FCI algorithm. We provide a detailed description of this algorithm and the results of an assessment of its performance on the four lowest-lying electronic states of the CH2 molecule. This molecule was chosen as a benchmark since its two lowest-lying ¹A₁ states exhibit multireference character at the equilibrium geometry. It has been shown that, with a suitably chosen initial state of the quantum register, one is able to achieve the probability amplification regime of the iterative phase estimation algorithm even in this case.
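    A toy simulation of the measurement statistics of the iterative phase estimation algorithm mentioned above: the eigenphase φ stands in for the encoded FCI energy (via U = e^(-iĤτ)), and each controlled-U round is replaced by its ideal outcome probability. This is a sketch of the bit-by-bit readout logic only, not the authors' quantum-computer simulation code.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)

    def ipea(phi, m, shots=100):
        """Iterative phase estimation: recover m binary digits of an
        eigenphase phi in [0, 1), one ancilla measurement per round,
        starting from the least significant bit."""
        bits = []
        fb = 0.0                               # feedback phase, in turns
        for k in range(m, 0, -1):
            theta = (2 ** (k - 1)) * phi - fb  # net ancilla phase after controlled-U^(2^(k-1))
            p1 = np.sin(np.pi * theta) ** 2    # P(measure 1) after the final Hadamard
            ones = rng.binomial(shots, p1)     # repeat the round, take a majority vote
            bit = int(ones > shots // 2)
            bits.append(bit)
            fb = fb / 2 + bit / 4              # rotate away the bits already determined
        bits.reverse()                         # bits[0] is the most significant digit
        return bits

    phi_true = 0.34375                         # 0.01011 in binary (exactly 5 bits)
    bits = ipea(phi_true, m=5)
    phi_est = sum(b * 2.0 ** -(i + 1) for i, b in enumerate(bits))
    print("bits:", bits, "estimate:", phi_est)  # expect [0, 1, 0, 1, 1], 0.34375
    ```

    The majority vote over repeated shots is where a good initial state matters: the closer the register is to the true eigenstate, the closer p1 sits to 0 or 1 and the fewer repetitions each bit needs.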

  18. Economic Analysis of Nuclear Energy

    International Nuclear Information System (INIS)

    Kim, S. S.; Lee, M. K.; Moon, K. H.; Nam, J. H.; Noh, B. C.; Kim, H. R.

    2008-12-01

    Concerns about global warming in the international community are bringing about a paradigm shift in national economies, including energy technology development. In this connection, green growth, based mainly on low-carbon green technologies, is being pursued by many advanced countries, including Korea. The objective of the study is to evaluate the contribution that nuclear energy, as a green technology, makes to the national economy. The study covers the role of nuclear power in addressing climate change issues, the proper share of nuclear in the electricity sector, cost analyses of decommissioning and radioactive waste management, and an analysis of the economic performance of nuclear R and D, including a cost-benefit analysis.
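    For orientation, a cost-benefit analysis of this kind typically reports the net present value and the discounted benefit-cost ratio; the standard textbook definitions are shown below (these are not figures or formulas from the report itself).

    ```latex
    \mathrm{NPV} = \sum_{t=0}^{T} \frac{B_t - C_t}{(1+r)^t},
    \qquad
    \mathrm{BCR} = \frac{\sum_{t=0}^{T} B_t/(1+r)^t}{\sum_{t=0}^{T} C_t/(1+r)^t}
    ```

    Here B_t and C_t are the benefits and costs in year t and r is the discount rate; a programme is judged economic when NPV > 0, equivalently BCR > 1.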

  19. The role of the computer in automated spectral analysis

    International Nuclear Information System (INIS)

    Rasmussen, S.E.

    This report describes how a computer can be an extremely valuable tool for the routine analysis of spectra, which is a time-consuming process. A number of general-purpose algorithms are available for the various phases of the analysis and can be implemented, provided they are designed to cope with all the variations that may occur. Since this is essentially impossible, one must find a compromise between obscure errors and program complexity. This compromise is usually achievable with human interaction at critical points. In spectral analysis, this is possible if the user scans the data on an interactive graphics terminal, makes the necessary changes, and then returns control to the computer for completion of the analysis.
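    The workflow the report advocates, automation for routine cases with human interaction only at critical points, can be sketched as control flow. All function names below are hypothetical stand-ins, not any real package's API.

    ```python
    # Illustrative control flow only: fit_region, chi_squared_ok and
    # review_interactively are hypothetical callables supplied by the caller.
    def analyse_spectrum(regions, fit_region, chi_squared_ok, review_interactively):
        results, flagged = [], []
        for region in regions:
            fit = fit_region(region)
            if chi_squared_ok(fit):
                results.append(fit)       # routine case: fully automatic
            else:
                flagged.append(region)    # ambiguous case: defer to the user
        # Human interaction only at the critical points, then back to the computer
        for region in flagged:
            edited = review_interactively(region)  # e.g. adjust the peak list on a terminal
            results.append(fit_region(edited))
        return results
    ```

    The compromise the report describes lives in chi_squared_ok: a strict threshold floods the user with reviews, a loose one lets obscure errors through.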

  20. Analysis of Computer Network Information Based on "Big Data"

    Science.gov (United States)

    Li, Tianli

    2017-11-01

    With the development of the current era, computer networks and big data have gradually become part of people's lives. People use computers to make their lives more convenient, but at the same time many network information security problems demand attention. This paper analyzes computer network information security from the perspective of "big data" and puts forward some solutions.