WorldWideScience

Sample records for computational multi-physics analysis

  1. A Global Sensitivity Analysis Methodology for Multi-physics Applications

    Energy Technology Data Exchange (ETDEWEB)

    Tong, C H; Graziani, F R

    2007-02-02

    Experiments are conducted to draw inferences about an entire ensemble based on a selected number of observations. This applies to physical experiments as well as computer experiments, the latter of which are performed by running the simulation models at different input configurations and analyzing the output responses. Computer experiments are instrumental in enabling model analyses such as uncertainty quantification and sensitivity analysis. This report focuses on a global sensitivity analysis methodology that relies on a divide-and-conquer strategy and uses intelligent computer experiments. The objective is to assess qualitatively and/or quantitatively how the variabilities of simulation output responses can be accounted for by input variabilities. We address global sensitivity analysis in three aspects: methodology, sampling/analysis strategies, and an implementation framework. The methodology consists of three major steps: (1) construct credible input ranges; (2) perform a parameter screening study; and (3) perform a quantitative sensitivity analysis on a reduced set of parameters. Once identified, research effort should be directed to the most sensitive parameters to reduce their uncertainty bounds. This process is repeated with tightened uncertainty bounds for the sensitive parameters until the output uncertainties become acceptable. To accommodate the needs of multi-physics applications, this methodology should be recursively applied to individual physics modules. The methodology is also distinguished by an efficient technique for computing parameter interactions. Details for each step will be given using simple examples. Numerical results on large-scale multi-physics applications will be available in another report. Computational techniques targeted for this methodology have been implemented in a software package called PSUADE.
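
    The three-step workflow lends itself to a compact illustration. The sketch below uses an invented four-input model function, a crude one-at-a-time screening, and a binned first-order variance estimate; it does not use PSUADE or reproduce its algorithms.

        # Minimal sketch of the screening + quantitative-SA workflow.
        # The model f and its input ranges are hypothetical stand-ins.
        import numpy as np

        rng = np.random.default_rng(0)

        def f(x):                          # stand-in simulation with 4 inputs
            return x[0] + 0.1 * x[1] ** 2 + 0.01 * np.sin(x[2])   # x[3] inert

        ranges = np.array([[0, 1]] * 4, dtype=float)   # step 1: credible ranges

        # Step 2: one-at-a-time screening -- vary each input over its range
        # with the others held at mid-range, and rank the output swings.
        mid = ranges.mean(axis=1)
        swing = []
        for i in range(len(ranges)):
            lo, hi = mid.copy(), mid.copy()
            lo[i], hi[i] = ranges[i]
            swing.append(abs(f(hi) - f(lo)))
        keep = [i for i, s in enumerate(swing) if s > 0.05 * max(swing)]

        # Step 3: crude first-order sensitivity indices on the survivors,
        # estimated from conditional means over a Monte Carlo sample.
        X = rng.uniform(ranges[:, 0], ranges[:, 1], size=(20000, len(ranges)))
        Y = np.apply_along_axis(f, 1, X)
        for i in keep:
            bins = np.digitize(X[:, i], np.linspace(0, 1, 21))
            cond = [Y[bins == b].mean() for b in np.unique(bins)]
            print(f"input {i}: swing={swing[i]:.3f}, S1~{np.var(cond)/Y.var():.2f}")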

  2. Review of multi-physics temporal coupling methods for analysis of nuclear reactors

    International Nuclear Information System (INIS)

    Zerkak, Omar; Kozlowski, Tomasz; Gajev, Ivan

    2015-01-01

    Highlights: • Review of the numerical methods used for multi-physics temporal coupling. • Review of high-order improvements to the Operator Splitting coupling method. • Analysis of truncation error due to the temporal coupling. • Recommendations on best-practice approaches for multi-physics temporal coupling. - Abstract: The advanced numerical simulation of a realistic physical system typically involves a multi-physics problem. For example, analysis of a LWR core involves the intricate simulation of neutron production and transport, heat transfer throughout the structures of the system, and the flowing, possibly two-phase, coolant. Such analysis involves the dynamic coupling of multiple simulation codes, each devoted to solving one of the coupled physics. Multiple temporal coupling methods exist, yet the accuracy of such coupling is generally driven by the least accurate numerical scheme. The goal of this paper is to review in detail the approaches and numerical methods that can be used for multi-physics temporal coupling, including a comprehensive discussion of the issues associated with the temporal coupling, and to define approaches that can be used to perform multi-physics analysis. The paper is not limited to any particular multi-physics process or situation, but is intended to provide a generic description of multi-physics temporal coupling schemes for any development stage of the individual (single-physics) tools and methods. This includes a wide spectrum of situations, where the individual (single-physics) solvers are based on pre-existing computation codes embedded as individual components, or a new development where the temporal coupling can be developed and implemented as part of the code development. The discussed coupling methods are demonstrated in the framework of LWR core analysis.
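
    As an illustration of the baseline scheme such a review starts from, the toy sketch below advances two mutually coupled single-physics "codes" with first-order operator splitting; the power/temperature model and all constants are invented for demonstration.

        # First-order operator splitting between two single-physics solvers.
        # The two ODEs are toy stand-ins for neutronics (power P) and
        # thermal hydraulics (fuel temperature T) with Doppler-like feedback.
        import numpy as np

        alpha, beta, gamma = -0.05, 1.0, 0.2   # hypothetical coupling constants

        def advance_power(P, T, dt):           # "neutronics" step, T frozen
            return P * np.exp(alpha * (T - 300.0) * dt)

        def advance_temperature(T, P, dt):     # "thermal hydraulics" step, P frozen
            return T + dt * (beta * P - gamma * (T - 300.0))

        P, T, dt = 1.0, 300.0, 0.01
        for step in range(1000):
            # Conventional operator splitting: each code advances the full
            # time step using the other field held at its beginning-of-step
            # value, which introduces an O(dt) temporal coupling error.
            P_new = advance_power(P, T, dt)
            T_new = advance_temperature(T, P, dt)
            P, T = P_new, T_new

        print(f"t={1000*dt:.1f}s  P={P:.4f}  T={T:.2f}")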

  3. Advanced Mesh-Enabled Monte Carlo Capability for Multi-Physics Reactor Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, Paul; Evans, Thomas; Tautges, Tim

    2012-12-24

    This project will accumulate high-precision fluxes throughout the reactor geometry on a non-orthogonal grid of cells to support multi-physics coupling, in order to more accurately calculate parameters such as reactivity coefficients and to generate multi-group cross sections. This work will be based upon recent developments to incorporate advanced geometry and mesh capability in a modular Monte Carlo toolkit with computational science technology that is in use in related reactor simulation software development. Coupling this capability with production-scale Monte Carlo radiation transport codes can provide advanced and extensible test-beds for these developments. Continuous energy Monte Carlo methods are generally considered to be the most accurate computational tool for simulating radiation transport in complex geometries, particularly neutron transport in reactors. Nevertheless, there are several limitations for their use in reactor analysis. Most significantly, there is a trade-off between the fidelity of results in phase space, statistical accuracy, and the amount of computer time required for simulation. Consequently, to achieve an acceptable level of statistical convergence in the high-fidelity results required for modern coupled multi-physics analysis, the required computer time makes Monte Carlo methods prohibitive for design iterations and detailed whole-core analysis. More subtly, the statistical uncertainty is typically not uniform throughout the domain, and the simulation quality is limited by the regions with the largest statistical uncertainty. In addition, the formulation of neutron scattering laws in continuous energy Monte Carlo methods makes it difficult to calculate the adjoint neutron fluxes required to properly determine important reactivity parameters. Finally, most Monte Carlo codes available for reactor analysis have relied on orthogonal hexahedral grids for tallies that do not conform to the geometric boundaries and are thus generally not well

  4. Progress and challenges in the development and qualification of multi-level multi-physics coupled methodologies for reactor analysis

    International Nuclear Information System (INIS)

    Ivanov, K.; Avramova, M.

    2007-01-01

    Current trends in nuclear power generation and regulation, as well as the design of next generation reactor concepts and continuing progress in computer technology, stimulate the development, qualification and application of multi-physics multi-scale coupled code systems. The efforts have been focused on extending the analysis capabilities by coupling models which simulate different phenomena or system components, as well as on refining the scale and level of detail of the coupling. This paper reviews the progress made in this area and outlines the remaining challenges. The discussion is illustrated with examples based on neutronics/thermohydraulics coupling in reactor core modeling. In both fields, recent advances and developments are towards more physics-based high-fidelity simulations, which require the implementation of improved and flexible coupling methodologies. First, the progress in coupling different physics codes is discussed along with the advances in multi-level techniques for coupled code simulations. Second, the issues related to the consistent qualification of coupled multi-physics and multi-scale code systems for design and safety evaluation are presented. The increased importance of uncertainty and sensitivity analysis is discussed, along with approaches to propagate uncertainty quantification between the codes. The upcoming OECD LWR Uncertainty Analysis in Modeling (UAM) benchmark, the first international activity to address this issue, is described in the paper. Finally, the remaining challenges with multi-physics coupling are outlined. (authors)

  5. Progress and challenges in the development and qualification of multi-level multi-physics coupled methodologies for reactor analysis

    Energy Technology Data Exchange (ETDEWEB)

    Ivanov, K.; Avramova, M. [Pennsylvania State Univ., University Park, PA (United States)]

    2007-07-01

    Current trends in nuclear power generation and regulation, as well as the design of next generation reactor concepts and continuing progress in computer technology, stimulate the development, qualification and application of multi-physics multi-scale coupled code systems. The efforts have been focused on extending the analysis capabilities by coupling models which simulate different phenomena or system components, as well as on refining the scale and level of detail of the coupling. This paper reviews the progress made in this area and outlines the remaining challenges. The discussion is illustrated with examples based on neutronics/thermohydraulics coupling in reactor core modeling. In both fields, recent advances and developments are towards more physics-based high-fidelity simulations, which require the implementation of improved and flexible coupling methodologies. First, the progress in coupling different physics codes is discussed along with the advances in multi-level techniques for coupled code simulations. Second, the issues related to the consistent qualification of coupled multi-physics and multi-scale code systems for design and safety evaluation are presented. The increased importance of uncertainty and sensitivity analysis is discussed, along with approaches to propagate uncertainty quantification between the codes. The upcoming OECD LWR Uncertainty Analysis in Modeling (UAM) benchmark, the first international activity to address this issue, is described in the paper. Finally, the remaining challenges with multi-physics coupling are outlined. (authors)

  6. Applied computational physics

    CERN Document Server

    Boudreau, Joseph F; Bianchi, Riccardo Maria

    2018-01-01

    Applied Computational Physics is a graduate-level text stressing three essential elements: advanced programming techniques, numerical analysis, and physics. The goal of the text is to provide students with essential computational skills that they will need in their careers, and to increase the confidence with which they write computer programs designed for their problem domain. The physics problems give them an opportunity to reinforce their programming skills, while the acquired programming skills augment their ability to solve physics problems. The C++ language is used throughout the text. Physics problems include Hamiltonian systems, chaotic systems, percolation, critical phenomena, few-body and multi-body quantum systems, quantum field theory, simulation of radiation transport, and data modeling. The book, the fruit of a collaboration between a theoretical physicist and an experimental physicist, covers a broad range of topics from both viewpoints. Examples, program libraries, and additional documentatio...

  7. Physics Model-Based Scatter Correction in Multi-Source Interior Computed Tomography.

    Science.gov (United States)

    Gong, Hao; Li, Bin; Jia, Xun; Cao, Guohua

    2018-02-01

    Multi-source interior computed tomography (CT) has great potential to provide ultra-fast and organ-oriented imaging at low radiation dose. However, X-ray cross scattering from multiple simultaneously activated X-ray imaging chains compromises imaging quality. Previously, we published two hardware-based scatter correction methods for multi-source interior CT. Here, we propose a software-based scatter correction method, with the benefit of requiring no hardware modifications. The new method is based on a physics model and an iterative framework. The physics model was derived analytically, and was used to calculate X-ray scattering signals in both the forward direction and cross directions in multi-source interior CT. The physics model was integrated into an iterative scatter correction framework to reduce scatter artifacts. The method was applied to phantom data from both Monte Carlo simulations and physical experiments that were designed to emulate the image acquisition in a multi-source interior CT architecture recently proposed by our team. The proposed scatter correction method reduced scatter artifacts significantly, even with only one iteration. Within a few iterations, the reconstructed images converged quickly toward the "scatter-free" reference images. After applying the scatter correction method, the maximum CT number error at the regions of interest (ROIs) was reduced to 46 HU in the numerical phantom dataset and 48 HU in the physical phantom dataset, respectively, and the contrast-to-noise ratio at those ROIs increased by up to 44.3% and up to 19.7%, respectively. The proposed physics model-based iterative scatter correction method could be useful for scatter correction in dual-source or multi-source CT.
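
    The iterative framework reduces to a simple loop: estimate scatter from the current object estimate, subtract it from the raw projections, and reconstruct again. The sketch below is schematic only; estimate_scatter() stands in for the paper's analytic physics model and reconstruct() for a real FBP or iterative reconstructor.

        # Schematic iterative scatter-correction loop (placeholders only).
        import numpy as np

        def reconstruct(projections):
            # placeholder reconstruction (identity); a real implementation
            # would run FBP or an iterative solver here
            return projections

        def estimate_scatter(image, smoothing=5):
            # placeholder physics model: scatter modeled as a smooth,
            # low-frequency fraction of the current object estimate
            kernel = np.ones(smoothing) / smoothing
            return 0.1 * np.convolve(image, kernel, mode="same")

        measured = np.random.default_rng(1).random(256) + 0.1  # raw projections
        corrected = measured.copy()
        for it in range(5):
            image = reconstruct(corrected)
            scatter = estimate_scatter(image)
            corrected = np.clip(measured - scatter, 0.0, None)  # remove scatter
            print(f"iter {it}: mean scatter estimate {scatter.mean():.4f}")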

  8. Multi-scale multi-physics computational chemistry simulation based on ultra-accelerated quantum chemical molecular dynamics method for structural materials in boiling water reactor

    International Nuclear Information System (INIS)

    Miyamoto, Akira; Sato, Etsuko; Sato, Ryo; Inaba, Kenji; Hatakeyama, Nozomu

    2014-01-01

    In collaboration with experimental experts we have reported in the present conference (Hatakeyama, N. et al., “Experiment-integrated multi-scale, multi-physics computational chemistry simulation applied to corrosion behaviour of BWR structural materials”) the results of multi-scale multi-physics computational chemistry simulations applied to the corrosion behaviour of BWR structural materials. At the macro scale, a macroscopic simulator of the anode polarization curve was developed to solve the spatially one-dimensional electrochemical equations on the material surface at the continuum level, in order to understand the corrosion behaviour of a typical BWR structural material, SUS304. The experimental anode polarization behaviours of each pure metal were reproduced by fitting all the rates of the electrochemical reactions, and the anode polarization curve of SUS304 was then calculated using the same parameters and found to reproduce the experimental behaviour successfully. At the meso scale, a kinetic Monte Carlo (KMC) simulator was applied to an actual-time simulation of the morphological corrosion behaviour under the influence of an applied voltage. At the micro scale, an ultra-accelerated quantum chemical molecular dynamics (UA-QCMD) code was applied to various metallic oxide surfaces of Fe₂O₃, Fe₃O₄ and Cr₂O₃, modelled together with water molecules and dissolved metallic ions on the surfaces; the dissolution and segregation behaviours were then successfully simulated dynamically using UA-QCMD. In this paper we describe details of the multi-scale, multi-physics computational chemistry method, especially the UA-QCMD method. This method is approximately 10,000,000 times faster than conventional first-principles molecular dynamics methods based on density-functional theory (DFT), and its accuracy was also validated for various metals and metal oxides against DFT results. To assure multi-scale multi-physics computational chemistry simulation based on the UA-QCMD method for
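
    The macro-scale step can be pictured with a minimal sketch: each pure metal's anodic branch is represented by a Butler-Volmer-type rate, and the alloy curve is approximated as a composition-weighted superposition. All parameter values below are invented for illustration and are not the paper's fitted constants.

        # Toy anodic polarization superposition (invented parameters).
        import numpy as np

        F, R, T = 96485.0, 8.314, 298.15

        def anodic_current(E, i0, alpha, E_eq):
            # anodic branch of the Butler-Volmer equation
            return i0 * np.exp(alpha * F * (E - E_eq) / (R * T))

        metals = {                       # hypothetical fitted parameters
            "Fe": dict(i0=1e-6, alpha=0.5, E_eq=-0.44, w=0.70),
            "Cr": dict(i0=1e-7, alpha=0.4, E_eq=-0.74, w=0.19),
            "Ni": dict(i0=1e-8, alpha=0.5, E_eq=-0.26, w=0.11),
        }

        E = np.linspace(-0.8, 0.2, 6)    # applied potential sweep [V]
        i_alloy = sum(m["w"] * anodic_current(E, m["i0"], m["alpha"], m["E_eq"])
                      for m in metals.values())
        for e, i in zip(E, i_alloy):
            print(f"E={e:+.2f} V  i={i:.3e} A/cm^2")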

  9. Implementation of Grid-computing Framework for Simulation in Multi-scale Structural Analysis

    Directory of Open Access Journals (Sweden)

    Data Iranata

    2010-05-01

    A new grid-computing framework for simulation in multi-scale structural analysis is presented. Two levels of parallel processing are involved in this framework: multiple local distributed computing environments connected by a local network form a grid-based cluster-to-cluster distributed computing environment. To perform the simulation, a large-scale structural system task is decomposed into the simulations of a simplified global model and several detailed component models using various scales. These correlated multi-scale structural system tasks are distributed among clusters, connected together in a multi-level hierarchy, and then coordinated over the internet. The software framework for supporting the multi-scale structural simulation approach is also presented. The program architecture design allows the integration of several multi-scale models as clients and servers under a single platform. To check its feasibility, a prototype software system has been designed and implemented to demonstrate the proposed concept. The simulation results show that the software framework can increase the speedup performance of the structural analysis. Based on this result, the proposed grid-computing framework is suitable for performing multi-scale structural analysis simulations.
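
    The two-level decomposition can be sketched as follows: a coarse global model distributes member-level tasks to a worker pool that stands in for one of the clusters. The structural "models" here are trivial placeholders, not the paper's software.

        # Toy global-model / component-model decomposition with one worker
        # pool standing in for a cluster.
        from concurrent.futures import ProcessPoolExecutor

        def global_model(load):
            # simplified whole-structure estimate of member forces
            return [load / 3.0] * 3

        def component_model(member_force):
            # detailed (here: trivial) component-scale re-analysis
            return 1.1 * member_force

        if __name__ == "__main__":
            forces = global_model(900.0)
            with ProcessPoolExecutor() as cluster:  # one pool per "cluster"
                refined = list(cluster.map(component_model, forces))
            print("refined member demands:", refined)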

  10. A Comparative Study of Multi-material Data Structures for Computational Physics Applications

    Energy Technology Data Exchange (ETDEWEB)

    Garimella, Rao Veerabhadra [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]; Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]

    2017-01-31

    The data structures used to represent the multi-material state of a computational physics application can have a drastic impact on the performance of the application. We look at efficient data structures for sparse applications where there may be many materials, but only one or few in most computational cells. We develop simple performance models for use in selecting possible data structures and programming patterns. We verify the analytic models of performance through a small test program of the representative cases.
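
    One layout such a comparison typically includes is a cell-centric compressed structure: pure cells store their material id directly, while mixed cells index into side arrays of (material, volume-fraction) entries. The sketch below is a generic version of that pattern, not necessarily the exact LANL data structure.

        # Compressed cell-centric multi-material storage (generic sketch).
        import numpy as np

        ncells = 8
        # >=0: material id of a pure cell; <0: -(index+1) into the mixed arrays
        cell_tag = np.array([0, 0, 1, -1, 1, 1, -2, 2])

        mix_start = np.array([0, 2])               # offsets of mixed-cell entries
        mix_mats  = np.array([0, 1, 1, 2])         # material ids, concatenated
        mix_vf    = np.array([0.4, 0.6, 0.7, 0.3]) # volume fractions

        def cell_density(c, rho_by_mat):
            """Volume-fraction-weighted density of cell c."""
            tag = cell_tag[c]
            if tag >= 0:                           # pure cell: direct lookup
                return float(rho_by_mat[tag])
            m = -tag - 1                           # mixed cell: walk its entries
            end = mix_start[m + 1] if m + 1 < len(mix_start) else len(mix_mats)
            s = slice(mix_start[m], end)
            return float(np.dot(mix_vf[s], rho_by_mat[mix_mats[s]]))

        rho = np.array([1.0, 8.9, 2.7])            # density per material
        print([round(cell_density(c, rho), 3) for c in range(ncells)])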

  11. Computation of Thermodynamic Equilibria Pertinent to Nuclear Materials in Multi-Physics Codes

    Science.gov (United States)

    Piro, Markus Hans Alexander

    Nuclear energy plays a vital role in supporting electrical needs and fulfilling commitments to reduce greenhouse gas emissions. Research is a continuing necessity to improve the predictive capabilities of fuel behaviour in order to reduce costs and to meet increasingly stringent safety requirements set by the regulator. Moreover, a renewed interest in nuclear energy has given rise to a "nuclear renaissance" and the necessity to design the next generation of reactors. In support of this goal, significant research efforts have been dedicated to the advancement of numerical modelling and computational tools for simulating various physical and chemical phenomena associated with nuclear fuel behaviour. This undertaking, in effect, collects the experience and observations of a past generation of nuclear engineers and scientists in a meaningful way for future design purposes. There is an increasing desire to integrate thermodynamic computations directly into multi-physics nuclear fuel performance and safety codes. A new equilibrium thermodynamic solver is being developed with this as a primary objective. This solver is intended to provide thermodynamic material properties and boundary conditions for continuum transport calculations. There are several concerns with the use of existing commercial thermodynamic codes: computational performance; limited capabilities in handling large multi-component systems of interest to the nuclear industry; convenient incorporation into other codes with quality assurance considerations; and licensing entanglements associated with code distribution. The development of the software in this research is aimed at addressing all of these concerns. The approach taken in this work exploits fundamental principles of equilibrium thermodynamics to simplify the numerical optimization equations. In brief, the chemical potentials of all species and phases in the system are constrained by estimates of the chemical potentials of the system
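
    The underlying computation is Gibbs energy minimization under element-balance constraints. A toy sketch for an ideal two-species mixture follows (standard-state potentials and stoichiometry are invented for illustration; this is not the solver described above).

        # Toy Gibbs-energy minimization with an element-balance constraint.
        import numpy as np
        from scipy.optimize import minimize

        RT = 8.314 * 1000.0                  # J/mol at 1000 K
        mu0 = np.array([-50e3, -80e3])       # hypothetical standard potentials
        a = np.array([1.0, 2.0])             # atoms of the element per species
        b_total = 2.0                        # total moles of the element

        def gibbs(n):
            n = np.maximum(n, 1e-12)         # keep logarithms finite
            x = n / n.sum()                  # mole fractions (ideal mixing)
            return float(np.sum(n * (mu0 + RT * np.log(x))))

        res = minimize(
            gibbs, x0=np.array([1.0, 0.5]),  # feasible start: 1*1 + 2*0.5 = 2
            constraints=[{"type": "eq", "fun": lambda n: a @ n - b_total}],
            bounds=[(1e-12, None)] * 2, method="SLSQP")
        print("equilibrium moles:", res.x)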

  12. MultiSIMNRA: A computational tool for self-consistent ion beam analysis using SIMNRA

    International Nuclear Information System (INIS)

    Silva, T.F.; Rodrigues, C.L.; Mayer, M.; Moro, M.V.; Trindade, G.F.; Aguirre, F.R.; Added, N.; Rizzutto, M.A.; Tabacniks, M.H.

    2016-01-01

    Highlights: • MultiSIMNRA enables the self-consistent analysis of multiple ion beam techniques. • Self-consistent analysis enables unequivocal and reliable modeling of the sample. • Four different computational algorithms are available for model optimization. • The definition of constraints enables prior knowledge to be included in the analysis. - Abstract: SIMNRA is widely adopted by the scientific community of ion beam analysis for the simulation and interpretation of nuclear scattering techniques for material characterization. Taking advantage of its recognized reliability and quality of simulation, we developed a computer program that uses multiple parallel sessions of SIMNRA to perform self-consistent analysis of data obtained by different ion beam techniques, or under different experimental conditions, for a given sample. In this paper, we present a result using MultiSIMNRA for a self-consistent multi-elemental analysis of a thin film produced by magnetron sputtering. The results demonstrate the potential of self-consistent analysis and its feasibility using MultiSIMNRA.
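
    The core idea, a single sample model constrained to reproduce several spectra simultaneously, reduces to minimizing a combined chi-square over all sessions. A toy sketch follows; the "simulator" below is an algebraic stand-in for a SIMNRA session and all numbers are invented.

        # Self-consistent fit of one sample model to several measurements.
        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(2)
        true = np.array([200.0, 0.3])       # thickness [1e15 at/cm2], fraction

        def simulate(params, geometry):     # stand-in for one SIMNRA session
            t, f = params
            return geometry * t * np.array([f, 1.0 - f])

        experiments = []                    # two techniques / geometries
        for geom in (1.0, 1.7):
            data = simulate(true, geom) + rng.normal(0.0, 2.0, 2)
            experiments.append((geom, data, 2.0))

        def chi2(params):
            # one sample model, misfit summed over all experiments
            return sum(np.sum(((simulate(params, g) - d) / s) ** 2)
                       for g, d, s in experiments)

        res = minimize(chi2, x0=[150.0, 0.5], method="Nelder-Mead")
        print("fitted sample model:", res.x)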

  13. KIT multi-physics tools for the analysis of design and beyond design basis accidents of light water reactors

    International Nuclear Information System (INIS)

    Sanchez, Victor Hugo; Miassoedov, Alexei; Steinbrueck, M.; Tromm, W.

    2016-01-01

    This paper describes the KIT numerical simulation tools under extension and validation for the analysis of design basis and beyond design basis accidents (DBA) of Light Water Reactors (LWR). The description of the complex thermal-hydraulic, neutron kinetic and chemo-physical phenomena occurring during off-normal conditions requires the development of multi-physics and multi-scale simulation tools, a development fostered by the rapid increase in available computer power. The KIT numerical tools for DBA and beyond DBA are validated using experimental data from KIT and elsewhere. The developments, extensions, coupling approaches and validation work performed at KIT are briefly outlined and discussed in this paper.

  14. KIT multi-physics tools for the analysis of design and beyond design basis accidents of light water reactors

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez, Victor Hugo; Miassoedov, Alexei; Steinbrueck, M.; Tromm, W. [Karlsruhe Institute of Technology (KIT), Eggenstein-Leopoldshafen (Germany)]

    2016-05-15

    This paper describes the KIT numerical simulation tools under extension and validation for the analysis of design basis and beyond design basis accidents (DBA) of Light Water Reactors (LWR). The description of the complex thermal-hydraulic, neutron kinetic and chemo-physical phenomena occurring during off-normal conditions requires the development of multi-physics and multi-scale simulation tools, a development fostered by the rapid increase in available computer power. The KIT numerical tools for DBA and beyond DBA are validated using experimental data from KIT and elsewhere. The developments, extensions, coupling approaches and validation work performed at KIT are briefly outlined and discussed in this paper.

  15. Front-end vision and multi-scale image analysis: multi-scale computer vision theory and applications, written in Mathematica

    CERN Document Server

    Romeny, Bart M Haar

    2008-01-01

    Front-End Vision and Multi-Scale Image Analysis is a tutorial in multi-scale methods for computer vision and image processing. It builds on the cross-fertilization between human visual perception and multi-scale computer vision ('scale-space') theory and applications. The multi-scale strategies recognized in the first stages of the human visual system are carefully examined, and taken as inspiration for the many geometric methods discussed. All chapters are written in Mathematica, a spectacular high-level language for symbolic and numerical manipulations. The book presents a new and effective

  16. Sensitivity and Uncertainty Analysis of Coupled Reactor Physics Problems: Method Development for Multi-Physics in Reactors

    NARCIS (Netherlands)

    Perkó, Z.

    2015-01-01

    This thesis presents novel adjoint and spectral methods for the sensitivity and uncertainty (S&U) analysis of multi-physics problems encountered in the field of reactor physics. The first part focuses on the steady state of reactors and extends the adjoint sensitivity analysis methods well

  17. Fermilab advanced computer program multi-microprocessor project

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Biel, J.

    1985-06-01

    Fermilab's Advanced Computer Program is constructing a powerful 128 node multi-microprocessor system for data analysis in high-energy physics. The system will use commercial 32-bit microprocessors programmed in Fortran-77. Extensive software supports easy migration of user applications from a uniprocessor environment to the multiprocessor and provides sophisticated program development, debugging, and error handling and recovery tools. This system is designed to be readily copied, providing computing cost effectiveness of below $2200 per VAX 11/780 equivalent. The low cost, commercial availability, compatibility with off-line analysis programs, and high data bandwidths (up to 160 MByte/sec) make the system an ideal choice for applications to on-line triggers as well as an offline data processor

  18. A Multi-step and Multi-level approach for Computer Aided Molecular Design

    DEFF Research Database (Denmark)

    …The problem formulation step incorporates a knowledge base for the identification and setup of the design criteria. Candidate compounds are identified using a multi-level generate and test CAMD solution algorithm capable of designing molecules having a high level of molecular detail. A post-solution step… using an Integrated Computer Aided System (ICAS) for result analysis and verification is included in the methodology. Keywords: CAMD, separation processes, knowledge base, molecular design, solvent selection, substitution, group contribution, property prediction, ICAS. Introduction: The use of Computer Aided Molecular Design (CAMD) for the identification of compounds having specific physic…
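
    The generate-and-test core of a CAMD algorithm can be sketched in a few lines: enumerate combinations of functional groups and keep candidates whose group-contribution property estimates satisfy the design criteria. Group values and bounds below are invented for illustration and are far simpler than the multi-level algorithm described above.

        # Toy generate-and-test CAMD pass with invented group contributions.
        from itertools import combinations_with_replacement

        groups = {"CH3": 1.0, "CH2": 0.5, "OH": 2.0}   # fake contribution values

        def property_estimate(molecule):
            return sum(groups[g] for g in molecule)    # simple GC sum

        candidates = []
        for n in range(2, 4):                          # molecules of 2-3 groups
            for mol in combinations_with_replacement(groups, n):
                if 2.0 <= property_estimate(mol) <= 3.0:   # design criterion
                    candidates.append(mol)
        print(candidates)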

  19. Advanced multi-physics simulation capability for very high temperature reactors

    International Nuclear Information System (INIS)

    Lee, Hyun Chul; Tak, Nam Il; Jo, Chang Keun; Noh, Jae Man; Cho, Bong Hyun; Cho, Jin Woung; Hong, Ser Gi

    2012-01-01

    The purpose of this research is to develop methodologies and computer codes for high-fidelity multi-physics analysis of very high temperature gas-cooled reactors (VHTRs). The research project was performed through the Korea-US I-NERI program. The main research topics were the development of methodologies for high-fidelity 3-D whole-core transport calculation, the development of the DeCART code for VHTR reactor physics analysis, the generation of a VHTR-specific 190-group cross-section library for the DeCART code, the development of the DeCART/CORONA coupled code system for neutronics/thermo-fluid multi-physics analysis, and benchmark analyses against various benchmark problems derived from the PMR200 reactor. The methodologies and the code systems will be utilized as key technologies in the Nuclear Hydrogen Development and Demonstration program. Export of the code system is expected in the near future, and the code systems developed in this project are expected to contribute to the development and export of nuclear hydrogen production systems.

  20. Two-Step Multi-Physics Analysis of an Annular Linear Induction Pump for Fission Power Systems

    Science.gov (United States)

    Geng, Steven M.; Reid, Terry V.

    2016-01-01

    One of the key technologies associated with fission power systems (FPS) is the annular linear induction pump (ALIP). ALIPs are used to circulate liquid-metal fluid for transporting thermal energy from the nuclear reactor to the power conversion device. ALIPs designed and built to date for FPS project applications have not performed up to expectations. A unique, two-step approach was taken toward the multi-physics examination of an ALIP using ANSYS Maxwell 3D and Fluent. This multi-physics approach was developed so that engineers could investigate design variations that might improve pump performance. Of particular interest was whether simple geometric modifications could be made to the ALIP components with the goal of increasing the Lorentz forces acting on the liquid-metal fluid, which in turn would increase pumping capacity. The multi-physics model first calculates the Lorentz forces acting on the liquid-metal fluid in the ALIP annulus. These forces are then used in a computational fluid dynamics simulation as (a) internal boundary conditions and (b) source terms in the momentum (Navier-Stokes) equations. The end result of the two-step analysis is a predicted pump pressure rise that can be compared with experimental data.
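
    The hand-off between the two steps amounts to applying a precomputed body-force field as a momentum source. A minimal 1D sketch follows; the force profile, fluid properties and friction factor are all invented, whereas in the paper the forces come from the Maxwell 3D solution.

        # Precomputed Lorentz force applied as a 1D momentum source.
        import numpy as np

        nz = 50
        z = np.linspace(0.0, 0.5, nz)               # axial coordinate [m]
        f_lorentz = 4.0e5 * np.exp(-((z - 0.25) / 0.1) ** 2)  # N/m^3, invented

        # steady 1D momentum balance: dp/dz = f_z - friction loss
        rho, v, D, f_D = 800.0, 1.5, 0.02, 0.02     # NaK-like guesses
        friction = f_D * rho * v**2 / (2.0 * D)     # Darcy loss per meter

        dz = z[1] - z[0]
        dp = np.sum((f_lorentz - friction) * dz)    # integrate along the pump
        print(f"predicted pump pressure rise: {dp/1e3:.1f} kPa")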

  21. Multi-scale analysis of lung computed tomography images

    CERN Document Server

    Gori, I; Fantacci, M E; Preite Martinez, A; Retico, A; De Mitri, I; Donadio, S; Fulcheri, C

    2007-01-01

    A computer-aided detection (CAD) system for the identification of lung internal nodules in low-dose multi-detector helical Computed Tomography (CT) images was developed in the framework of the MAGIC-5 project. The three modules of our lung CAD system, a segmentation algorithm for lung internal region identification, a multi-scale dot-enhancement filter for nodule candidate selection and a multi-scale neural technique for false positive finding reduction, are described. The results obtained on a dataset of low-dose and thin-slice CT scans are shown in terms of free response receiver operating characteristic (FROC) curves and discussed.
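
    The dot-enhancement stage of such a pipeline is commonly built from the eigenvalues of the image Hessian evaluated at several Gaussian scales. The 2D sketch below illustrates that generic idea; the real system operates on 3D CT volumes, and this is not the MAGIC-5 code.

        # Generic multi-scale Hessian "dot enhancement" filter (2D sketch).
        import numpy as np
        from scipy.ndimage import gaussian_filter

        def dot_enhancement(img, scales=(1.0, 2.0, 4.0)):
            response = np.zeros_like(img, dtype=float)
            for s in scales:
                # scale-normalized second derivatives (Hessian entries)
                Hxx = s**2 * gaussian_filter(img, s, order=(0, 2))
                Hyy = s**2 * gaussian_filter(img, s, order=(2, 0))
                Hxy = s**2 * gaussian_filter(img, s, order=(1, 1))
                tr, det = Hxx + Hyy, Hxx * Hyy - Hxy**2
                disc = np.sqrt(np.maximum(tr**2 / 4.0 - det, 0.0))
                l1, l2 = tr / 2.0 - disc, tr / 2.0 + disc   # l1 <= l2
                # bright blobs: both eigenvalues negative, |l1| >= |l2|
                dot = np.where((l1 < 0) & (l2 < 0),
                               l2**2 / (np.abs(l1) + 1e-12), 0.0)
                response = np.maximum(response, dot)
            return response

        rng = np.random.default_rng(3)
        sl = rng.normal(0.0, 0.05, (64, 64))
        yy, xx = np.mgrid[:64, :64]
        sl += np.exp(-((yy - 32) ** 2 + (xx - 32) ** 2) / (2 * 3.0**2))  # blob
        print(np.unravel_index(np.argmax(dot_enhancement(sl)), sl.shape))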

  22. Integration of Advanced Probabilistic Analysis Techniques with Multi-Physics Models

    Energy Technology Data Exchange (ETDEWEB)

    Cetiner, Mustafa Sacit [ORNL]; Flanagan, George F. [ORNL]; Poore III, Willis P. [ORNL]; Muhlheim, Michael David [ORNL]

    2014-07-30

    An integrated simulation platform that couples probabilistic analysis-based tools with model-based simulation tools can provide valuable insights for reactive and proactive responses to plant operating conditions. The objective of this work is to demonstrate the benefits of a partial implementation of the Small Modular Reactor (SMR) Probabilistic Risk Assessment (PRA) Detailed Framework Specification through the coupling of advanced PRA capabilities and accurate multi-physics plant models. Coupling a probabilistic model with a multi-physics model will aid in design, operations, and safety by providing a more accurate understanding of plant behavior. This represents the first attempt at actually integrating these two types of analyses for a control system used for operations, on a faster-than-real-time basis. This report documents the development of the basic communication capability to exchange data with the probabilistic model using Reliability Workbench (RWB) and with the multi-physics model using Dymola. The communication pathways from injecting a fault (i.e., failing a component) to the probabilistic and multi-physics models were successfully completed. This first version was tested with prototypic models represented in both RWB and Modelica. First, a simple event tree/fault tree (ET/FT) model was created to develop the software code to implement the communication capabilities between the dynamic-link library (dll) and RWB. A program, written in C#, successfully communicates faults to the probabilistic model through the dll. A systems model of the Advanced Liquid-Metal Reactor–Power Reactor Inherently Safe Module (ALMR-PRISM) design developed under another DOE project was upgraded using Dymola to include proper interfaces to allow data exchange with the control application (ConApp). A program, written in C++, successfully communicates faults to the multi-physics model. The results of the example simulation were successfully plotted.

  23. New computing techniques in physics research

    International Nuclear Information System (INIS)

    Perret-Gallix, D.; Wojcik, W.

    1990-01-01

    These proceedings relate, in a pragmatic way, the use of methods and techniques of software engineering and artificial intelligence in high energy and nuclear physics. Such fundamental research can only be done through the design, building and running of equipment and systems among the most complex ever undertaken by mankind. The use of these new methods is mandatory in such an environment. However, their proper integration into these real applications raises some unsolved problems. Their solution, beyond the research field, will lead to a better understanding of some fundamental aspects of software engineering and artificial intelligence. Here is a sample of the subjects covered in the proceedings: software engineering in a multi-user, multi-version, multi-system environment; project management; software validation and quality control; data structures and management; object-oriented languages; multi-language applications; interactive data analysis; expert systems for diagnosis; expert systems for real-time applications; neural networks for pattern recognition; and symbolic manipulation for automatic computation of complex processes.

  24. A theory manual for multi-physics code coupling in LIME.

    Energy Technology Data Exchange (ETDEWEB)

    Belcourt, Noel; Bartlett, Roscoe Ainsworth; Pawlowski, Roger Patrick; Schmidt, Rodney Cannon; Hooper, Russell Warren

    2011-03-01

    The Lightweight Integrating Multi-physics Environment (LIME) is a software package for creating multi-physics simulation codes. Its primary application space is where computer codes are already available to solve different parts of a multi-physics problem and now need to be coupled with other such codes. In this report we define a common domain language for discussing multi-physics coupling and describe the basic theory associated with the multi-physics coupling algorithms that are to be supported in LIME. We provide an assessment of coupling techniques for both steady-state and time-dependent coupled systems. Example couplings are also demonstrated.
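
    Beyond plain operator splitting, a standard option in such a framework is a Picard (fixed-point) iteration that converges the data exchanged between the codes within each solve. A toy sketch with two algebraic stand-in "codes" follows.

        # Picard (fixed-point) coupling iteration between two toy "codes".
        import numpy as np

        def code_A(y):            # returns its field x given the other's y
            return 1.0 + 0.3 * np.cos(y)

        def code_B(x):            # returns its field y given x
            return 0.5 * x

        x, y = 1.0, 0.0
        for k in range(50):
            x_new = code_A(y)
            y_new = code_B(x_new)
            err = max(abs(x_new - x), abs(y_new - y))
            x, y = x_new, y_new
            if err < 1e-10:       # coupled residual converged
                break
        print(f"converged in {k+1} iterations: x={x:.8f}, y={y:.8f}")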

  25. MULTI - multifunctional interface of the IBM XT and AT type personal computers

    International Nuclear Information System (INIS)

    Gross, T.; Kalavski, D.; Rubin, D.; Tulaev, A.B.; Tumanov, A.V.

    1988-01-01

    The MULTI multifunctional interface, which enables personal computers to be connected to physical equipment without intermediate buses, is described. A parallel 32-bit bidirectional I/O register and the buffered bus of the personal computer form the basis of MULTI. Ways of applying MULTI are described.

  26. Predictive modeling of coupled multi-physics systems: II. Illustrative application to reactor physics

    International Nuclear Information System (INIS)

    Cacuci, Dan Gabriel; Badea, Madalina Corina

    2014-01-01

    Highlights: • We applied the PMCMPS methodology to a paradigm neutron diffusion model. • We underscore the main steps in applying PMCMPS to treat very large coupled systems. • PMCMPS reduces the uncertainties in the optimally predicted responses and model parameters. • PMCMPS is for sequentially treating coupled systems that cannot be treated simultaneously. - Abstract: This work presents paradigm applications to reactor physics of the innovative mathematical methodology for “predictive modeling of coupled multi-physics systems (PMCMPS)” developed by Cacuci (2014). This methodology enables the assimilation of experimental and computational information and computes optimally predicted responses and model parameters with reduced predicted uncertainties, taking fully into account the coupling terms between the multi-physics systems, but using only the computational resources that would be needed to perform predictive modeling on each system separately. The paradigm examples presented in this work are based on a simple neutron diffusion model, chosen so as to enable closed-form solutions with clear physical interpretations. These paradigm examples also illustrate the computational efficiency of the PMCMPS, which enables the assimilation of additional experimental information, with a minimal increase in computational resources, to reduce the uncertainties in predicted responses and best-estimate values for uncertain model parameters, thus illustrating how very large systems can be treated without loss of information in a sequential rather than simultaneous manner.

  27. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high-performance computing are analyzed by bibliometric approaches. This study aims at providing computational physicists utilizing high-performance computing, as well as policy planners, with useful bibliometric results for an assessment of research activities. In order to achieve this purpose, we carried out a co-authorship network analysis of journal articles to assess the research activities of researchers in high-performance computational physics as a case study. For this study, we used journal articles from the Scopus database from Elsevier covering the time period 2004-2013. We ranked authors in the physics field utilizing high-performance computing by the number of papers published during the ten years from 2004. Finally, we drew the co-authorship network for the 45 top authors and their co-authors, and described some features of the co-authorship network in relation to the author rank. Suggestions for further studies are discussed.
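
    Constructing such a network is mechanically simple: each paper contributes a clique over its author list, and authors can then be ranked by paper count or by degree. A minimal sketch with invented records follows (the study itself used Scopus data).

        # Build a weighted co-authorship network and rank authors.
        from collections import Counter
        from itertools import combinations

        papers = [                         # invented author lists
            ["Kim", "Lee", "Park"],
            ["Kim", "Lee"],
            ["Ahn", "Jung"],
            ["Kim", "Ahn"],
        ]

        paper_count = Counter(a for authors in papers for a in authors)
        edges = Counter()
        for authors in papers:
            for a, b in combinations(sorted(set(authors)), 2):
                edges[(a, b)] += 1         # weighted co-authorship edge

        degree = Counter()                 # distinct co-authors per author
        for (a, b), w in edges.items():
            degree[a] += 1
            degree[b] += 1

        print("rank by papers:", paper_count.most_common())
        print("degree per author:", dict(degree))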

  28. A survey of computational physics: introductory computational science

    CERN Document Server

    Landau, Rubin H; Bordeianu, Cristian C

    2008-01-01

    Computational physics is a rapidly growing subfield of computational science, in large part because computers can solve previously intractable problems or simulate natural processes that do not have analytic solutions. The next step beyond Landau's First Course in Scientific Computing and a follow-up to Landau and Páez's Computational Physics, this text presents a broad survey of key topics in computational physics for advanced undergraduates and beginning graduate students, including new discussions of visualization tools, wavelet analysis, molecular dynamics, and computational fluid dynamics

  29. Status of computer codes available in AEOI for reactor physics analysis

    International Nuclear Information System (INIS)

    Karbassiafshar, M.

    1986-01-01

    Many of the nuclear computer codes available in the Atomic Energy Organization of Iran (AEOI) can be used for physics analysis of an operating reactor or for design purposes. A grasp of the various methods involved and practical experience with these codes would be the starting point for interesting design studies or analysis of the operating conditions of presently existing and future reactors. A review of the objectives and flowcharts of commonly practiced procedures in reactor physics analysis of LWRs and the related computer codes was made, with reference to the nationally and internationally available resources. Finally, effective utilization of the existing facilities is discussed and encouraged.

  30. 16th International Workshop on Advanced Computing and Analysis Techniques in Physics (ACAT)

    CERN Document Server

    Lokajicek, M; Tumova, N

    2015-01-01

    16th International Workshop on Advanced Computing and Analysis Techniques in Physics (ACAT). The ACAT workshop series, formerly AIHENP (Artificial Intelligence in High Energy and Nuclear Physics), was created back in 1990. Its main purpose is to gather researchers involved with computing in physics research, from both the physics and computer science sides, and give them a chance to communicate with each other. It has established bridges between physics and computer science research, facilitating advances in our understanding of the Universe at its smallest and largest scales. With the Large Hadron Collider and many astronomy and astrophysics experiments collecting larger and larger amounts of data, such bridges are needed now more than ever. The 16th edition of ACAT aims to bring related researchers together, once more, to explore and confront the boundaries of computing, automatic data analysis and theoretical calculation technologies. It will create a forum for exchanging ideas among the fields an...

  31. Design, Fabrication and Computational Characterization of a 3D Micro-Valve Built by Multi-Photon Polymerization

    Directory of Open Access Journals (Sweden)

    Stratos Galanopoulos

    2014-08-01

    We report on the design, modeling and fabrication by multi-photon polymerization of a complex medical fluidic device. The physical dimensions of the built micro-valve prototype are compared to those of its computer-designed model. Important fabrication issues such as achieving high dimensional resolution and ability to control distortion due to shrinkage are presented and discussed. The operational performance of both multi-photon and CAD-created models under steady blood flow conditions was evaluated and compared through computational fluid dynamics analysis.

  32. Predictive modeling of coupled multi-physics systems: I. Theory

    International Nuclear Information System (INIS)

    Cacuci, Dan Gabriel

    2014-01-01

    Highlights: • We developed “predictive modeling of coupled multi-physics systems (PMCMPS)”. • PMCMPS reduces predicted uncertainties in predicted model responses and parameters. • PMCMPS treats efficiently very large coupled systems. - Abstract: This work presents an innovative mathematical methodology for “predictive modeling of coupled multi-physics systems (PMCMPS).” This methodology takes into account fully the coupling terms between the systems but requires only the computational resources that would be needed to perform predictive modeling on each system separately. The PMCMPS methodology uses the maximum entropy principle to construct an optimal approximation of the unknown a priori distribution based on a priori known mean values and uncertainties characterizing the parameters and responses for both multi-physics models. This “maximum entropy”-approximate a priori distribution is combined, using Bayes’ theorem, with the “likelihood” provided by the multi-physics simulation models. Subsequently, the posterior distribution thus obtained is evaluated using the saddle-point method to obtain analytical expressions for the optimally predicted values for the multi-physics models parameters and responses along with corresponding reduced uncertainties. Notably, the predictive modeling methodology for the coupled systems is constructed such that the systems can be considered sequentially rather than simultaneously, while preserving exactly the same results as if the systems were treated simultaneously. Consequently, very large coupled systems, which could perhaps exceed available computational resources if treated simultaneously, can be treated with the PMCMPS methodology presented in this work sequentially and without any loss of generality or information, requiring just the resources that would be needed if the systems were treated sequentially.
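
    In generic data-assimilation notation (a sketch of the standard best-estimate form under Gaussian-like assumptions, not the paper's exact symbols), the optimally predicted parameters and reduced covariance produced by such a saddle-point evaluation take the form

        \alpha^{be} = \alpha^{0} + C_{\alpha r} C_{rr}^{-1} \left( m - r(\alpha^{0}) \right),
        \qquad
        C_{\alpha}^{be} = C_{\alpha\alpha} - C_{\alpha r} C_{rr}^{-1} C_{r\alpha},
        \qquad
        C_{rr} = C_{m} + S C_{\alpha\alpha} S^{T},

    where \alpha^{0} and C_{\alpha\alpha} are the a priori parameter means and covariance, m and C_{m} the measured responses and their covariance, r(\alpha^{0}) the computed responses, S the response sensitivity matrix, and C_{\alpha r} = C_{\alpha\alpha} S^{T} = C_{r\alpha}^{T}. Since C_{\alpha r} C_{rr}^{-1} C_{r\alpha} is positive semi-definite, the posterior covariance can never exceed the prior one, which is the "reduced predicted uncertainties" property stated above.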

  33. Design and implementation of space physics multi-model application integration based on web

    Science.gov (United States)

    Jiang, Wenping; Zou, Ziming

    With the development of research on the space environment and space science, providing a networked online computing environment of space weather, space environment and space physics models for the Chinese scientific community has become more and more important in recent years. Currently, there are two software modes for a space physics multi-model application integrated system (SPMAIS): C/S and B/S. The C/S mode, which is traditional and stand-alone, demands that a team or workshop from many disciplines and specialties build its own multi-model application integrated system, and it requires the client to be deployed in different physical regions when users visit the integrated system. This requirement brings two shortcomings: it reduces the efficiency of the researchers who use the models to compute, and it makes access to the data inconvenient. Therefore, it is necessary to create a shared network resource access environment which helps users visit the computing resources of space physics models quickly through a terminal, for conducting space science research and forecasting the space environment. The SPMAIS develops high-performance, first-principles computational models of the space environment in B/S mode and uses these models to predict "space weather", to understand space mission data and to further our understanding of the solar system. The main goal of the SPMAIS is to provide an easy and convenient user-driven online model operating environment. Up to now, the SPMAIS has contained dozens of space environment models, including the international AP8/AE8, IGRF and T96 models, a solar proton prediction model, a geomagnetic transmission model, etc., which are developed by Chinese scientists. Another function of the SPMAIS is to integrate space observation data sets, which offer input data for high-speed online model computing. In this paper, the service-oriented architecture (SOA) concept that divides the system into

  34. Multi-level programming paradigm for extreme computing

    International Nuclear Information System (INIS)

    Petiton, S.; Sato, M.; Emad, N.; Calvin, C.; Tsuji, M.; Dandouna, M.

    2013-01-01

    In order to propose a framework and programming paradigms for post-petascale computing, on the road to exascale computing and beyond, we introduced new languages, associated with a hierarchical multi-level programming paradigm, allowing scientific end-users and developers to program highly hierarchical architectures designed for extreme computing. In this paper, we explain the interest of such a hierarchical multi-level programming paradigm for extreme computing and its good adaptation to several large computational science applications, such as linear algebra solvers used for reactor core physics. We describe the YML language and framework for describing graphs of parallel components, which may be developed using PGAS-like languages such as XMP, and scheduled and computed on supercomputers. Then, we present experiments on supercomputers (such as the 'K' and 'Hopper' machines) with the hybrid method MERAM (Multiple Explicitly Restarted Arnoldi Method) as a case study for iterative methods manipulating sparse matrices, and the block Gauss-Jordan method as a case study for direct methods manipulating dense matrices. We conclude by proposing evolutions of this programming paradigm. (authors)
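
    The "graph of parallel components" idea can be miniaturized as a level-synchronous task-graph executor: every component whose dependencies are satisfied runs concurrently in the current wave. This is a generic illustration, not YML itself.

        # Level-synchronous execution of a toy component graph.
        from concurrent.futures import ThreadPoolExecutor

        graph = {"A": [], "B": [], "C": ["A", "B"], "D": ["C"]}  # dependencies

        def run(name):
            return f"component {name} finished"

        done = set()
        with ThreadPoolExecutor() as pool:
            while len(done) < len(graph):
                ready = [t for t, deps in graph.items()
                         if t not in done and all(d in done for d in deps)]
                for msg in pool.map(run, ready):   # current parallel wave
                    print(msg)
                done.update(ready)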

  35. Towards an Industrial Application of Statistical Uncertainty Analysis Methods to Multi-physical Modelling and Safety Analyses

    International Nuclear Information System (INIS)

    Zhang, Jinzhao; Segurado, Jacobo; Schneidesch, Christophe

    2013-01-01

    Since the 1980s, Tractebel Engineering (TE) has been developing and applying a multi-physical modelling and safety analysis capability, based on a code package consisting of the best-estimate 3D neutronics (PANTHER), system thermal-hydraulics (RELAP5), core sub-channel thermal-hydraulics (COBRA-3C), and fuel thermal-mechanics (FRAPCON/FRAPTRAN) codes. A series of methodologies has been developed to perform and to license reactor safety analysis and core reload design, based on the deterministic bounding approach. Following the recent trends in research and development as well as in industrial applications, TE has been working since 2010 towards the application of statistical sensitivity and uncertainty analysis methods to multi-physical modelling and licensing safety analyses. In this paper, the TE multi-physical modelling and safety analysis capability is first described, followed by the proposed TE best estimate plus statistical uncertainty analysis method (BESUAM). The chosen statistical sensitivity and uncertainty analysis methods (the non-parametric order statistic method or bootstrap) and tool (DAKOTA) are then presented, followed by some preliminary results of their application to FRAPCON/FRAPTRAN simulations of the OECD RIA fuel rod codes benchmark and RELAP5/MOD3.3 simulations of THTF tests. (authors)
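
    The non-parametric order-statistic approach referred to above fixes the number of code runs from Wilks' formula alone, independently of the number of uncertain inputs. A small sketch computing the required sample sizes:

        # Wilks' order-statistic sample sizes: the smallest number of code
        # runs N such that the order-th largest output bounds the gamma
        # quantile with confidence beta.
        import math

        def wilks_n(gamma=0.95, beta=0.95, order=1):
            n = order
            while True:
                conf = 1.0 - sum(math.comb(n, k) * (1 - gamma) ** k
                                 * gamma ** (n - k) for k in range(order))
                if conf >= beta:
                    return n
                n += 1

        print(wilks_n())          # 59 runs for a 95%/95% first-order bound
        print(wilks_n(order=2))   # 93 runs if the second-largest run is used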

  36. Assessment of physical server reliability in multi cloud computing system

    Science.gov (United States)

    Kalyani, B. J. D.; Rao, Kolasani Ramchand H.

    2018-04-01

    Business organizations nowadays function with more than one cloud provider. Spreading cloud deployment across multiple service providers creates space for competitive prices that minimize the burden on an enterprise's spending budget. To assess the software reliability of a multi-cloud application, a layered software reliability assessment paradigm is considered with three levels of abstraction: the application layer, the virtualization layer, and the server layer. The reliability of each layer is assessed separately and then combined to obtain the reliability of the multi-cloud computing application. In this paper, we focus on how to assess the reliability of the server layer, present the required algorithms, and explore the steps in the assessment of server reliability.
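
    With per-layer reliabilities in hand, the layered paradigm combines them in series, and the server layer itself can be modeled as a k-out-of-n redundant pool. A toy sketch with invented numbers (not the paper's algorithm):

        # Series combination of layer reliabilities with a redundant
        # physical-server pool at the bottom layer.
        import math

        def k_out_of_n(k, n, r):
            """Probability that at least k of n identical servers
            (each with reliability r) are up."""
            return sum(math.comb(n, i) * r**i * (1 - r) ** (n - i)
                       for i in range(k, n + 1))

        r_app, r_virt = 0.999, 0.995            # application, virtualization
        r_server_pool = k_out_of_n(2, 3, 0.99)  # need 2 of 3 physical servers

        r_system = r_app * r_virt * r_server_pool   # layers in series
        print(f"multi-cloud application reliability ~ {r_system:.6f}")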

  37. International MultiConference of Engineers and Computer Scientists 2016

    CERN Document Server

    Kim, Haeng; Huang, Xu; Castillo, Oscar

    2017-01-01

    This volume contains selected revised and extended research articles written by prominent researchers who participated in the International MultiConference of Engineers and Computer Scientists 2016, held in Hong Kong, 16-18 March 2016. Topics covered include engineering physics, communications systems, control theory, automation, engineering mathematics, scientific computing, electrical engineering, and industrial applications. The book showcases the tremendous advances in engineering technologies and applications, and also serves as an excellent reference work for researchers and graduate students working on engineering technologies, physical sciences and their applications.

  38. Multi-Physics Analysis of the Fermilab Booster RF Cavity

    International Nuclear Information System (INIS)

    Awida, M.; Reid, J.; Yakovlev, V.; Lebedev, V.; Khabiboulline, T.; Champion, M.

    2012-01-01

    After about 40 years of operation, the RF accelerating cavities in the Fermilab Booster need an upgrade to improve their reliability and to increase the repetition rate in order to support a future experimental program. An increase in the repetition rate from 7 to 15 Hz entails increased power dissipation in the RF cavities, their ferrite-loaded tuners, and the HOM dampers. The increased duty factor requires careful modelling of the RF heating effects in the cavity. A multi-physics analysis investigating both the RF and thermal properties of the Booster cavity under various operating conditions is presented in this paper.

  39. PUMA Development through a Multi-physics Approach

    International Nuclear Information System (INIS)

    Cheon, Jinsik; Kim, Junehyung; Lee, Byoungoon; Lee, Chanbock

    2013-01-01

    Meanwhile, advances in numerical methods have made it possible for multi-physics problems to be solved in a fully coupled way. In addition to a multidimensional, multi-physical approach, a nuclear fuel performance analysis code, which is a 1D code, should be improved by accommodating the state of the art in numerical analysis to support current fuel design and performance analysis. In particular, the coupling between the mechanical equilibrium equation and a set of numerically stiff kinetics equations for fission gas release is of great importance for a multi-physics simulation of nuclear fuel. Coupling between temperature and fuel constituents, in contrast, was found to be achievable with relative ease by employing an ordinary differential equation solver. As part of the effort toward a new SFR metal fuel performance analysis code, called PUMA (Performance of Uranium Metal fuel rod Analysis code), the deformation of U-Zr fuel for SFR is analyzed in connection with a fission gas release model. Finite element analyses of purely mechanical problems are performed using a backward differentiation formula and are subjected to scrupulous verification against Abaqus. The mechanical equilibrium equation and the equations for fission gas release are then coupled through the same differential-algebraic equation (DAE) solver
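
    The numerical crux, slow mechanics coupled to numerically stiff fission-gas kinetics, is what backward-differentiation (BDF) type solvers are built for. A toy two-variable sketch follows; the rate constants are invented and these are not the PUMA models.

        # Stiff "fission gas" variable g coupled to a slow "deformation"
        # variable u, integrated together with a BDF solver.
        import numpy as np
        from scipy.integrate import solve_ivp

        def rhs(t, y):
            u, g = y
            du = 1e-3 * (1.0 + g)             # slow swelling driven by gas
            dg = 1e3 * (0.5 - g) - 1e-2 * u   # fast gas kinetics + feedback
            return [du, dg]

        sol = solve_ivp(rhs, (0.0, 10.0), [0.0, 0.0], method="BDF", rtol=1e-8)
        print(f"u(10)={sol.y[0, -1]:.4f}  g(10)={sol.y[1, -1]:.4f}  "
              f"steps={sol.t.size}")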

  40. 17th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2016)

    International Nuclear Information System (INIS)

    2016-01-01

    Preface: The 2016 edition of the International Workshop on Advanced Computing and Analysis Techniques in Physics Research took place on January 18-22, 2016, at the Universidad Técnica Federico Santa María (UTFSM) in Valparaíso, Chile. The present volume of IOP Conference Series is devoted to the selected scientific contributions presented at the workshop. In order to guarantee the scientific quality of the Proceedings, all papers were thoroughly peer-reviewed by an ad-hoc Editorial Committee with the help of many careful reviewers. The ACAT Workshop series has a long tradition starting in 1990 (Lyon, France), and takes place at intervals of a year and a half. Formerly these workshops were known under the name AIHENP (Artificial Intelligence for High Energy and Nuclear Physics). Each edition brings together experimental and theoretical physicists and computer scientists/experts from particle and nuclear physics, astronomy and astrophysics, in order to exchange knowledge and experience in computing and data analysis in physics. Three tracks cover the main topics: computing technology (languages and system architectures); data analysis (algorithms and tools); and theoretical physics (techniques and methods). Although most contributions and discussions are related to particle physics and computing, other fields like condensed matter physics, earth physics and biophysics are often addressed, in the hope of sharing approaches and visions. The workshop created a forum for exchanging ideas among fields, exploring and promoting cutting-edge computing technologies and debating hot topics. (paper)

  41. PREFACE: 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT2013)

    Science.gov (United States)

    Wang, Jianxiong

    2014-06-01

    This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented at the 15th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2013), which took place on 16-21 May 2013 at the Institute of High Energy Physics, Chinese Academy of Sciences, Beijing, China. The workshop series brings together computer science researchers and practitioners, and researchers from particle physics and related fields, to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. This year's edition of the workshop brought together over 120 participants from all over the world. 18 invited speakers presented key topics on the universe in the computer, computing in Earth sciences, multivariate data analysis, automated computation in quantum field theory, as well as computing and data analysis challenges in many fields. Over 70 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. The round table discussions on open source, knowledge sharing and scientific collaboration stimulate us to think over these issues in the respective areas. ACAT 2013 was generously sponsored by the Chinese Academy of Sciences (CAS), the National Natural Science Foundation of China (NSFC), Brookhaven National Laboratory in the USA (BNL), Peking University (PKU), the Theoretical Physics Center for Science Facilities of CAS (TPCSF-CAS) and Sugon. We would like to thank all the participants for their scientific contributions and for their enthusiastic participation in all the activities of the workshop. Further information on ACAT 2013 can be found at http://acat2013.ihep.ac.cn. Professor Jianxiong Wang, Institute of High Energy Physics, Chinese Academy of Sciences. Details of committees and sponsors are available in the PDF

  2. Advanced computations in plasma physics

    International Nuclear Information System (INIS)

    Tang, W.M.

    2002-01-01

    Scientific simulation, in tandem with theory and experiment, is an essential tool for understanding complex plasma behavior. In this paper we review recent progress and future directions for advanced simulations in magnetically confined plasmas, with illustrative examples chosen from magnetic confinement research areas such as microturbulence, magnetohydrodynamics, magnetic reconnection, and others. Significant recent progress has been made in both particle and fluid simulations of fine-scale turbulence and large-scale dynamics, giving increasingly good agreement between experimental observations and computational modeling. This was made possible by innovative advances in analytic and computational methods for developing reduced descriptions of physics phenomena spanning widely disparate temporal and spatial scales, together with access to powerful new computational resources. In particular, the fusion energy science community has made excellent progress in developing advanced codes for which computer run-time and problem size scale well with the number of processors on massively parallel machines (MPPs). A good example is the effective use of the full power of multi-teraflop (multi-trillion floating-point computations per second) MPPs to produce three-dimensional, general-geometry, nonlinear particle simulations which have accelerated progress in understanding the nature of turbulence self-regulation by zonal flows. It should be emphasized that these calculations, which typically utilized billions of particles for thousands of time-steps, would not have been possible without access to powerful present-generation MPP computers and the associated diagnostic and visualization capabilities. In general, results from advanced simulations provide great encouragement for being able to include increasingly realistic dynamics to enable deeper physics insights into plasmas in both natural and laboratory environments. The associated scientific excitement should serve to

  3. Design and multi-physics optimization of rotary MRF brakes

    Science.gov (United States)

    Topcu, Okan; Taşcıoğlu, Yiğit; Konukseven, Erhan İlhan

    2018-03-01

    Particle swarm optimization (PSO) is a popular method for solving optimization problems. However, the calculations for each particle become excessive as the number of particles and the complexity of the problem increase; as a result, execution becomes too slow to reach an optimized solution in reasonable time. This paper therefore proposes an automated design and optimization method for rotary MRF brakes and similar multi-physics problems. A modified PSO algorithm is developed for solving multi-physics engineering optimization problems. The difference from conventional PSO is that the original single population is split into several subpopulations according to a division of labor; the distribution of tasks and the transfer of information to the next party are inspired by the behavior of a hunting party. Simulation results show that the proposed modified PSO algorithm overcomes the heavy computational burden of multi-physics problems while improving accuracy. The wire type, MR fluid type, magnetic core material, and ideal current inputs were determined by the optimization process. To the best of the authors' knowledge, this multi-physics approach is novel for optimizing rotary MRF brakes, and the developed PSO algorithm is capable of solving other multi-physics engineering optimization problems. The proposed method showed better performance than conventional PSO and produced small, lightweight, high-impedance rotary MRF brake designs.
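
    The division-of-labor scheme described above can be illustrated with a minimal subpopulation PSO sketch in Python. This is not the authors' algorithm: the objective function, the ring-style hand-off of each subpopulation's best particle, and all parameter values are illustrative assumptions; a real application would replace the objective with a coupled electromagnetic-thermal evaluation of the brake design.

```python
import numpy as np

def objective(x):
    """Stand-in objective; a real run would wrap a coupled
    electromagnetic-thermal evaluation of the brake design."""
    return float(np.sum(x ** 2))

def subpop_pso(f, dim=4, n_subpops=3, pop=10, iters=200,
               w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_subpops, pop, dim))     # particle positions
    v = np.zeros_like(x)                               # particle velocities
    pbest = x.copy()                                   # personal bests
    pbest_f = np.array([[f(p) for p in sub] for sub in x])
    for _ in range(iters):
        for s in range(n_subpops):
            gbest = pbest[s][np.argmin(pbest_f[s])]    # subpopulation best
            r1, r2 = rng.random((2, pop, dim))
            v[s] = w * v[s] + c1 * r1 * (pbest[s] - x[s]) + c2 * r2 * (gbest - x[s])
            x[s] = np.clip(x[s] + v[s], lo, hi)
            fx = np.array([f(p) for p in x[s]])
            better = fx < pbest_f[s]
            pbest[s][better], pbest_f[s][better] = x[s][better], fx[better]
        # "hunting party" hand-off: each subpopulation passes its best
        # particle to the next one, replacing that group's worst member
        for s in range(n_subpops):
            nxt = (s + 1) % n_subpops
            worst = int(np.argmax(pbest_f[nxt]))
            best = int(np.argmin(pbest_f[s]))
            pbest[nxt][worst] = pbest[s][best]
            pbest_f[nxt][worst] = pbest_f[s][best]
    s, i = np.unravel_index(np.argmin(pbest_f), pbest_f.shape)
    return pbest[s, i], pbest_f[s, i]

best_x, best_f = subpop_pso(objective)
print(best_x, best_f)
```

    Splitting the swarm keeps each expensive multi-physics evaluation inside a smaller, more focused population, while the periodic hand-off plays the role of the information transfer between members of the hunting party.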

  4. Advanced graphical user interface for multi-physics simulations using AMST

    Science.gov (United States)

    Hoffmann, Florian; Vogel, Frank

    2017-07-01

    Numerical modelling of particulate matter has gained much popularity in recent decades. Advanced Multi-physics Simulation Technology (AMST) is a state-of-the-art three-dimensional numerical modelling technique combining the eXtended Discrete Element Method (XDEM) with Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) [1]. One major limitation of this code has been the lack of a graphical user interface (GUI), meaning that all pre-processing had to be done directly in an HDF5 file. This contribution presents the first graphical pre-processor developed for AMST.

  5. PREFACE: 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011)

    Science.gov (United States)

    Teodorescu, Liliana; Britton, David; Glover, Nigel; Heinrich, Gudrun; Lauret, Jérôme; Naumann, Axel; Speer, Thomas; Teixeira-Dias, Pedro

    2012-06-01

    ACAT2011 This volume of Journal of Physics: Conference Series is dedicated to scientific contributions presented at the 14th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2011), which took place on 5-7 September 2011 at Brunel University, UK. The workshop series, which began in 1990 in Lyon, France, brings together computer science researchers and practitioners, and researchers from particle physics and related fields, in order to explore and confront the boundaries of computing and of automatic data analysis and theoretical calculation techniques. It is a forum for the exchange of ideas among the fields, exploring and promoting cutting-edge computing, data analysis and theoretical calculation techniques in fundamental physics research. This year's edition of the workshop brought together over 100 participants from all over the world. 14 invited speakers presented key topics on computing ecosystems, cloud computing, multivariate data analysis, symbolic and automatic theoretical calculations, as well as computing and data analysis challenges in astrophysics, bioinformatics and musicology. Over 80 other talks and posters presented state-of-the-art developments in the areas of the workshop's three tracks: Computing Technologies, Data Analysis Algorithms and Tools, and Computational Techniques in Theoretical Physics. Panel and round-table discussions on data management and multivariate data analysis uncovered new ideas and collaboration opportunities in the respective areas. This edition of ACAT was generously sponsored by the Science and Technology Facilities Council (STFC), the Institute for Particle Physics Phenomenology (IPPP) at Durham University, Brookhaven National Laboratory in the USA and Dell. We would like to thank all the participants of the workshop for the high level of their scientific contributions and for the enthusiastic participation in all its activities which were, ultimately, the key factors in the

  6. Multi-physics modeling in electrical engineering. Application to a magneto-thermo-mechanical model

    International Nuclear Information System (INIS)

    Journeaux, Antoine

    2013-01-01

    The modeling of multi-physics problems in electrical engineering is presented, with an application to the numerical computation of vibrations within the end windings of large turbo-generators. The study is divided into four parts: the imposition of current density, the computation of local forces, the transfer of data between disconnected meshes, and the computation of multi-physics problems using weak coupling. First, the representation of current density within numerical models is presented. The process is decomposed into two stages: the construction of the initial current density and the determination of a divergence-free field. The complexity of the geometries makes analytical methods impossible, so a method based on an electrokinetic problem and a fully geometrical method are tested; the geometrical method produces results closer to the real current density. Methods to compute forces are numerous, and this study focuses on the virtual work principle and the Laplace force, following the recommendations of the literature. The Laplace force is highly accurate but applicable only if the permeability is uniform; the virtual work principle is finally preferred, as it appears to be the most general way to compute local forces. Mesh-to-mesh data transfer methods are developed to compute multi-physics models using multiple meshes adapted to the subproblems and multiple computational codes. An interpolation method, a locally conservative projection, and an orthogonal projection are compared: interpolation is fast but highly diffusive, orthogonal projections are highly accurate, and the locally conservative method produces results similar to the orthogonal projection while avoiding the assembly of linear systems. The numerical computation of multi-physics problems using multiple meshes and projections is then presented; however, for a given class of problems, there is not a unique coupling
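
    The trade-off among the transfer methods compared above can be made concrete with a small 1D sketch. The example below is illustrative only (the thesis works with 3D finite-element fields): it contrasts plain nodal interpolation with a locally conservative cell-average transfer that preserves the integral of the field on each target cell.

```python
import numpy as np

# Source field on a fine 1D mesh; target is a coarser, non-matching mesh.
xs = np.linspace(0.0, 1.0, 41)              # source nodes
fs = np.sin(2 * np.pi * xs)                 # source nodal field
xt = np.linspace(0.0, 1.0, 12)              # target nodes

# (a) Pointwise interpolation: fast, but diffusive and not conservative.
ft_interp = np.interp(xt, xs, fs)

# (b) Locally conservative transfer: make each target nodal value the
# average of the source field over that node's dual cell, so the integral
# of the field is (approximately) preserved cell by cell.
edges = np.concatenate(([xt[0]], 0.5 * (xt[1:] + xt[:-1]), [xt[-1]]))
ft_cons = np.empty_like(xt)
for i in range(len(xt)):
    a, b = edges[i], edges[i + 1]
    xx = np.linspace(a, b, 20)              # quadrature points in the cell
    ft_cons[i] = np.trapz(np.interp(xx, xs, fs), xx) / (b - a)

# Integrals of the source and conservatively transferred fields roughly agree.
print(np.trapz(fs, xs), np.trapz(ft_cons, xt))
```

    Pointwise interpolation is cheap but smooths the field and does not control conserved quantities; the cell-average transfer enforces conservation at the price of quadrature over each target cell.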

  7. Non-Linear Multi-Physics Analysis and Multi-Objective Optimization in Electroheating Applications

    Czech Academy of Sciences Publication Activity Database

    di Barba, P.; Doležel, Ivo; Mognaschi, M. E.; Savini, A.; Karban, P.

    2014-01-01

    Vol. 50, No. 2 (2014), p. 7016604. ISSN 0018-9464. Institutional support: RVO:61388998. Keywords: coupled multi-physics problems * finite element method * non-linear equations. Subject RIV: JA - Electronics; Optoelectronics, Electrical Engineering. Impact factor: 1.386, year: 2014

  8. A general purpose program system for high energy physics experiment data acquisition and analysis

    International Nuclear Information System (INIS)

    Li Shuren; Xing Yuguo; Jin Bingnian

    1985-01-01

    This paper introduces the functions, structure and system generation of a general-purpose program system (Fermilab MULTI) for high energy physics experiment data acquisition and analysis. Work on the reconstruction of the MULTI system at level 0.5, which can run on a PDP-11/23 computer, is also briefly introduced

  9. 2016 Final Reports from the Los Alamos National Laboratory Computational Physics Student Summer Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Runnels, Scott Robert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Bachrach, Harrison Ian [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Carlson, Nils [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Collier, Angela [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Dumas, William [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Fankell, Douglas [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ferris, Natalie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Gonzalez, Francisco [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Griffith, Alec [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Guston, Brandon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Kenyon, Connor [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Li, Benson [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Mookerjee, Adaleena [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Parkinson, Christian [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Peck, Hailee [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Peters, Evan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Poondla, Yasvanth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rogers, Brandon [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Shaffer, Nathaniel [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Trettel, Andrew [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Valaitis, Sonata Mae [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Venzke, Joel Aaron [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Black, Mason [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Demircan, Samet [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Holladay, Robert Tyler [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-22

    The two primary purposes of LANL’s Computational Physics Student Summer Workshop are (1) to educate graduate and exceptional undergraduate students in the challenges and applications of computational physics of interest to LANL, and (2) to entice their interest toward those challenges. Computational physics is emerging as a discipline in its own right, combining expertise in mathematics, physics, and computer science. The mathematical aspects focus on numerical methods for solving equations on the computer as well as developing test problems with analytical solutions. The physics aspects are very broad, ranging from low-temperature material modeling to extremely high-temperature plasma physics, radiation transport and neutron transport. The computer science issues are concerned with matching numerical algorithms to emerging architectures and maintaining the quality of extremely large codes built to perform multi-physics calculations. Although graduate programs associated with computational physics are emerging, it is apparent that the pool of U.S. citizens in this multi-disciplinary field is relatively small and is typically not focused on the aspects that are of primary interest to LANL. Furthermore, a more structured foundation for LANL interaction with universities in computational physics is needed; historically, interactions have relied heavily on individuals’ personalities and personal contacts. Thus a tertiary purpose of the Summer Workshop is to build an educational network of LANL researchers, university professors, and emerging students to advance the field and LANL’s involvement in it.

  10. A self-taught artificial agent for multi-physics computational model personalization.

    Science.gov (United States)

    Neumann, Dominik; Mansi, Tommaso; Itu, Lucian; Georgescu, Bogdan; Kayvanpour, Elham; Sedaghat-Hamedani, Farbod; Amr, Ali; Haas, Jan; Katus, Hugo; Meder, Benjamin; Steidl, Stefan; Hornegger, Joachim; Comaniciu, Dorin

    2016-12-01

    Personalization is the process of fitting a model to patient data, a critical step towards the application of multi-physics computational models in clinical practice. Designing robust personalization algorithms is often a tedious, time-consuming, model- and data-specific process. We propose to use artificial intelligence concepts to learn this task, inspired by how human experts perform it manually. The problem is reformulated in terms of reinforcement learning. In an off-line phase, Vito, our self-taught artificial agent, learns a representative decision process model through exploration of the computational model: it learns how the model behaves under changes of parameters. The agent then automatically learns an optimal strategy for on-line personalization. The algorithm is model-independent; applying it to a new model requires only adjusting a few hyper-parameters of the agent and defining the observations to match. Full knowledge of the model itself is not required. Vito was tested in a synthetic scenario, showing that it could learn how to optimize cost functions generically. Vito was then applied to the inverse problem of cardiac electrophysiology and to the personalization of a whole-body circulation model. The obtained results suggested that Vito could achieve equivalent, if not better, goodness of fit than standard methods, while being more robust (up to 11% higher success rates) and converging faster (up to seven times). Our artificial intelligence approach could thus make personalization algorithms generalizable and self-adaptable to any patient and any model. Copyright © 2016. Published by Elsevier B.V.
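
    The two-phase idea described above (off-line exploration of the forward model, on-line greedy personalization) can be sketched with a toy tabular Q-learning loop. Nothing below comes from the paper: the two-parameter forward model, the residual-sign state encoding, the four nudge actions and all learning constants are invented for illustration.

```python
import numpy as np

def model(theta):
    """Toy forward model standing in for an expensive multi-physics code."""
    return np.array([theta[0] + 0.5 * theta[1], theta[1] ** 2])

target = model(np.array([1.2, 0.8]))        # synthetic "patient data"
actions = [(0.1, 0.0), (-0.1, 0.0), (0.0, 0.1), (0.0, -0.1)]

def state(theta):
    """Encode the sign pattern of the residual as a small state id."""
    r = model(theta) - target
    return int(r[0] > 0) * 2 + int(r[1] > 0)

rng = np.random.default_rng(1)
Q = np.zeros((4, len(actions)))

# Off-line phase: explore how the model behaves under parameter changes.
for _ in range(2000):
    theta = rng.uniform(0.0, 2.0, 2)
    for _ in range(25):
        s = state(theta)
        a = int(rng.integers(len(actions))) if rng.random() < 0.2 \
            else int(np.argmax(Q[s]))
        new = theta + np.array(actions[a])
        reward = -float(np.sum((model(new) - target) ** 2))
        Q[s, a] += 0.1 * (reward + 0.9 * np.max(Q[state(new)]) - Q[s, a])
        theta = new

# On-line phase: personalize a fresh case greedily with the learned policy.
theta = np.array([0.2, 1.6])
for _ in range(60):
    theta = theta + np.array(actions[int(np.argmax(Q[state(theta)]))])
print(theta, model(theta) - target)
```

    In the real system the forward model is an expensive multi-physics simulation and the state and action sets are richer; the sketch only shows how exploration of the model can be recycled into an on-line fitting policy.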

  11. An investigation into the organisation and structural design of multi-computer process-control systems

    International Nuclear Information System (INIS)

    Gertenbach, W.P.

    1981-12-01

    A multi-computer system for the collection of data and the control of distributed processes has been developed. The structure and organisation of this system, together with a study of the general theory of systems and of modularity, was used as a basis for an investigation into the organisation and structured design of multi-computer process-control systems. A multi-dimensional model of multi-computer process-control systems was developed. In this model a strict separation was made between the organisational properties of multi-computer process-control systems and implementation-dependent properties. The model was based on the principles of hierarchical analysis and modularity. Several notions of hierarchy were found necessary to describe fully the organisation of multi-computer systems. A new concept, that of interconnection abstraction, was identified. This concept is an extrapolation of implementation techniques from the hardware implementation area to the software implementation area. A synthesis procedure which relies heavily on the above-described analysis of multi-computer process-control systems is proposed. The above-mentioned model, and a set of performance factors which depend on a set of identified design criteria, were used to constrain the set of possible solutions to the multi-computer process-control system synthesis procedure

  12. A full-spectrum analysis of high-speed train interior noise under multi-physical-field coupling excitations

    Science.gov (United States)

    Zheng, Xu; Hao, Zhiyong; Wang, Xu; Mao, Jie

    2016-06-01

    High-speed-train interior noise at low, medium, and high frequencies can be simulated by finite element analysis (FEA) or boundary element analysis (BEA), hybrid finite element analysis-statistical energy analysis (FEA-SEA), and statistical energy analysis (SEA), respectively. First, a new method named statistical acoustic energy flow (SAEF) is proposed, which can be applied to full-spectrum HST interior noise simulation (covering low, medium, and high frequencies) with only one model. In an SAEF model, the corresponding multi-physical-field coupling excitations are first fully considered and coupled to excite the interior noise. The interior noise attenuated by the sound insulation panels of the carriage is simulated by modeling the inflow of acoustic energy from the exterior excitations into the interior acoustic cavities. Rigid multi-body dynamics, fast multi-pole BEA, and large-eddy simulation with indirect boundary element analysis are employed to extract the multi-physical-field excitations, which comprise the wheel-rail interaction forces/secondary suspension forces, the wheel-rail rolling noise, and the aerodynamic noise, respectively. All the peak values and frequency bands of the simulated acoustic excitations are validated against those from a noise source identification test. In addition, the measured equipment noise inside the equipment compartment is used as one of the excitation sources contributing to the interior noise. Second, a fully trimmed FE carriage model is first constructed, and the simulated modal shapes and frequencies agree well with the measured ones, which validates the global FE carriage model as well as the local FE models of the aluminum alloy-trim composite panels; the sound transmission loss model of any composite panel is thus indirectly validated. Finally, the SAEF model of the carriage is constructed based on the accurate FE model and driven by the multi-physical-field excitations. The results show

  13. Effects of physical exercise therapy on mobility, physical functioning, physical activity and quality of life in a population of community-dwelling elderly patients with impaired mobility, physical disability and/or multi-morbidity: a meta-analysis

    NARCIS (Netherlands)

    de Vries, Nienke; Staal, Bart; van Ravensburg, Dorine; Hobbelen, Hans; Olde Rikkert, Marcel; Nijhuis-van der Sande, Maria

    2012-01-01

    This is the first meta-analysis focusing on elderly patients with mobility problems, physical disability and/or multi-morbidity. The aim of this study is to assess the effect of physical exercise therapy on mobility, physical functioning, physical activity and quality of life. A broad systematic

  14. Computing in high energy physics

    International Nuclear Information System (INIS)

    Hertzberger, L.O.; Hoogland, W.

    1986-01-01

    This book deals with advanced computing applications in physics, and in particular in high energy physics environments. The main subjects covered are networking, vector and parallel processing, and embedded systems. Also examined are topics such as operating systems, future computer architectures and commercial computer products. The book presents solutions that are foreseen as coping with future computing problems in experimental and theoretical high energy physics. In the experimental environment, the large amounts of data to be processed pose special problems both on-line and off-line. For on-line data reduction, embedded special-purpose computers, which are often used for trigger applications, are applied. For off-line processing, parallel computers such as emulator farms and the cosmic cube may be employed. The analysis of these topics is therefore a main feature of this volume

  15. 2015 Final Reports from the Los Alamos National Laboratory Computational Physics Student Summer Workshop

    Energy Technology Data Exchange (ETDEWEB)

    Runnels, Scott Robert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Caldwell, Wendy [Arizona State Univ., Mesa, AZ (United States); Brown, Barton Jed [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Pederson, Clark [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Brown, Justin [Univ. of California, Santa Cruz, CA (United States); Burrill, Daniel [Univ. of Vermont, Burlington, VT (United States); Feinblum, David [Univ. of California, Irvine, CA (United States); Hyde, David [SLAC National Accelerator Lab., Menlo Park, CA (United States). Stanford Institute for Materials and Energy Science (SIMES); Levick, Nathan [Univ. of New Mexico, Albuquerque, NM (United States); Lyngaas, Isaac [Florida State Univ., Tallahassee, FL (United States); Maeng, Brad [Univ. of Michigan, Ann Arbor, MI (United States); Reed, Richard LeRoy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Sarno-Smith, Lois [Univ. of Michigan, Ann Arbor, MI (United States); Shohet, Gil [Univ. of Illinois, Urbana-Champaign, IL (United States); Skarda, Jinhie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Stevens, Josey [Missouri Univ. of Science and Technology, Rolla, MO (United States); Zeppetello, Lucas [Columbia Univ., New York, NY (United States); Grossman-Ponemon, Benjamin [Stanford Univ., CA (United States); Bottini, Joseph Larkin [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Loudon, Tyson Shane [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); VanGessel, Francis Gilbert [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Nagaraj, Sriram [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Price, Jacob [Univ. of Washington, Seattle, WA (United States)

    2015-10-15

    The two primary purposes of LANL’s Computational Physics Student Summer Workshop are (1) to educate graduate and exceptional undergraduate students in the challenges and applications of computational physics of interest to LANL, and (2) to entice their interest toward those challenges. Computational physics is emerging as a discipline in its own right, combining expertise in mathematics, physics, and computer science. The mathematical aspects focus on numerical methods for solving equations on the computer as well as developing test problems with analytical solutions. The physics aspects are very broad, ranging from low-temperature material modeling to extremely high-temperature plasma physics, radiation transport and neutron transport. The computer science issues are concerned with matching numerical algorithms to emerging architectures and maintaining the quality of extremely large codes built to perform multi-physics calculations. Although graduate programs associated with computational physics are emerging, it is apparent that the pool of U.S. citizens in this multi-disciplinary field is relatively small and is typically not focused on the aspects that are of primary interest to LANL. Furthermore, a more structured foundation for LANL interaction with universities in computational physics is needed; historically, interactions have relied heavily on individuals’ personalities and personal contacts. Thus a tertiary purpose of the Summer Workshop is to build an educational network of LANL researchers, university professors, and emerging students to advance the field and LANL’s involvement in it. This report includes both the background for the program and the reports from the students.

  16. Physical computation and cognitive science

    CERN Document Server

    Fresco, Nir

    2014-01-01

    This book presents a study of digital computation in contemporary cognitive science. Digital computation is a highly ambiguous concept, as there is no common core definition for it in cognitive science. Since this concept plays a central role in cognitive theory, an adequate cognitive explanation requires an explicit account of digital computation. More specifically, it requires an account of how digital computation is implemented in physical systems. The main challenge is to deliver an account encompassing the multiple types of existing models of computation without ending up in pancomputationalism, that is, the view that every physical system is a digital computing system. This book shows that only two accounts, among the ones examined by the author, are adequate for explaining physical computation. One of them is the instructional information processing account, which is developed here for the first time.   “This book provides a thorough and timely analysis of differing accounts of computation while adv...

  17. Transient multi-physics analysis of a magnetorheological shock absorber with the inverse Jiles-Atherton hysteresis model

    Science.gov (United States)

    Zheng, Jiajia; Li, Yancheng; Li, Zhaochun; Wang, Jiong

    2015-10-01

    This paper presents multi-physics modeling of an MR absorber that accounts for magnetic hysteresis, in order to capture the nonlinear relationship between the applied current and the generated force under impact loading. The magnetic field, temperature field, and fluid dynamics are represented by the Maxwell equations, conjugate heat transfer equations, and Navier-Stokes equations. These fields are coupled through the apparent viscosity and the magnetic force, both of which in turn depend on the magnetic flux density and the temperature. Based on a parametric study, an inverse Jiles-Atherton hysteresis model is implemented for the magnetic field simulation. The temperature rise of the MR fluid in the annular gap caused by core losses (i.e. eddy current loss and hysteresis loss) and fluid motion is computed to investigate the current-force behavior. A series of impulsive tests was performed on the manufactured MR absorber with step exciting currents. The numerical and experimental results showed good agreement, which validates the effectiveness of the proposed multi-physics FEA model.
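
    For readers unfamiliar with the hysteresis model named above, the sketch below integrates a simplified single-state form of the (forward) Jiles-Atherton equations over a field sweep. It is a textbook-style illustration, not the inverse formulation or the parameter set used in the paper; all material constants are invented for demonstration.

```python
import numpy as np

def jiles_atherton(H_path, Ms=1.6e6, a=1100.0, alpha=1e-4, k=400.0, c=0.2):
    """Simplified single-state forward Jiles-Atherton model, integrated
    with explicit Euler over a prescribed H path. All parameters are
    illustrative, not values identified for any real MR fluid circuit."""
    def m_an(he):
        # anhysteretic magnetization (Langevin function), guarded near 0
        x = he / a
        return Ms * (x / 3.0) if abs(x) < 1e-6 else Ms * (1.0 / np.tanh(x) - 1.0 / x)

    M, out = 0.0, []
    for i in range(1, len(H_path)):
        H, dH = H_path[i], H_path[i] - H_path[i - 1]
        delta = 1.0 if dH >= 0 else -1.0
        He = H + alpha * M                      # effective field
        Man = m_an(He)
        dMirr = (Man - M) / (k * delta - alpha * (Man - M))
        if (Man - M) * dH < 0:                  # standard non-physical-slope guard
            dMirr = 0.0
        dMan = (m_an(He + 1.0) - m_an(He - 1.0)) / 2.0   # numerical dMan/dHe
        M += ((1.0 - c) * dMirr + c * dMan) * dH         # neglects alpha*dM/dH term
        out.append(M)
    return np.array(out)

# symmetric major-loop sweep of the applied field [A/m]
H = np.concatenate([np.linspace(0, 5e3, 400),
                    np.linspace(5e3, -5e3, 800),
                    np.linspace(-5e3, 5e3, 800)])
M = jiles_atherton(H)
print(M.min(), M.max())   # the up and down branches differ: hysteresis
```

    The inverse formulation used in the paper takes the flux density B rather than the field H as the independent variable, which suits finite-element solvers where B is the primary unknown.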

  18. The Cea multi-scale and multi-physics simulation project for nuclear applications

    International Nuclear Information System (INIS)

    Ledermann, P.; Chauliac, C.; Thomas, J.B.

    2005-01-01

    Full text of publication follows. Today numerical modelling is recognized everywhere as an essential tool for the capitalization, integration and sharing of knowledge; for this reason, it is becoming the central tool of research. Until now, the Cea has developed a set of scientific software allowing the modelling, in each situation, of the operation of all or part of a nuclear installation, and these codes are widely used in the nuclear industry. For the future, however, it is essential to aim for better accuracy, better control of uncertainties and better performance in computing time. The objective is to obtain validated models allowing accurate predictive calculations for actual complex nuclear problems such as fuel behaviour in accidental situations. This demands mastery of a large and interactive set of phenomena ranging from nuclear reactions to heat transfer. To this end, Cea, with industrial partners (EDF, Framatome-ANP, ANDRA), has designed an integrated calculation platform devoted to the study of nuclear systems and intended both for industry and for scientists. The development of this platform is under way with the start in 2005 of the integrated project NURESIM, with 18 European partners. Improvement comes not only through a multi-scale description of all phenomena but also through an innovative design approach requiring deep functional analysis upstream of the development of the simulation platform itself. In addition, the studies of future nuclear systems are increasingly multidisciplinary (simultaneous modelling of core physics, thermal-hydraulics and fuel behaviour). These multi-physics and multi-scale aspects make it mandatory to pay very careful attention to software architecture issues. A global platform is thus being developed, integrating dedicated specialized platforms: DESCARTES for core physics, NEPTUNE for thermal-hydraulics, PLEIADES for fuel behaviour, SINERGY for materials behaviour under irradiation, ALLIANCES for the performance

  19. Computational complexity and memory usage for multi-frontal direct solvers used in p finite element analysis

    KAUST Repository

    Calo, Victor M.; Collier, Nathan; Pardo, David; Paszyński, Maciej R.

    2011-01-01

    The multi-frontal direct solver is the state of the art for the direct solution of linear systems. This paper provides computational complexity and memory usage estimates for the application of the multi-frontal direct solver algorithm on linear systems resulting from p finite elements. Specifically we provide the estimates for systems resulting from C0 polynomial spaces spanned by B-splines. The structured grid and uniform polynomial order used in isogeometric meshes simplifies the analysis.

  20. Computational complexity and memory usage for multi-frontal direct solvers used in p finite element analysis

    KAUST Repository

    Calo, Victor M.

    2011-05-14

    The multi-frontal direct solver is the state of the art for the direct solution of linear systems. This paper provides computational complexity and memory usage estimates for the application of the multi-frontal direct solver algorithm on linear systems resulting from p finite elements. Specifically we provide the estimates for systems resulting from C0 polynomial spaces spanned by B-splines. The structured grid and uniform polynomial order used in isogeometric meshes simplifies the analysis.
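
    As a rough orientation for the kind of estimates such papers derive, the sketch below evaluates the classic nested-dissection asymptotics for multi-frontal factorization on regular grids. These are textbook bounds, not the p-FEM- and B-spline-specific estimates derived by the authors.

```python
import math

def multifrontal_estimates(N, dim=2):
    """Classic nested-dissection asymptotics for multi-frontal factorization
    on a regular grid with N unknowns (textbook bounds, not the p-FEM- and
    B-spline-specific estimates of the paper)."""
    if dim == 2:
        return N ** 1.5, N * math.log2(N)          # flops, memory
    if dim == 3:
        return N ** 2, N ** (4.0 / 3.0)
    raise ValueError("dim must be 2 or 3")

for N in (10 ** 4, 10 ** 5, 10 ** 6):
    f2, m2 = multifrontal_estimates(N, dim=2)
    f3, m3 = multifrontal_estimates(N, dim=3)
    print(f"N={N:>7}: 2D flops~{f2:.1e} mem~{m2:.1e} | "
          f"3D flops~{f3:.1e} mem~{m3:.1e}")
```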

  1. International MultiConference of Engineers and Computer Scientists 2015

    CERN Document Server

    Ao, Sio-Iong; Huang, Xu; Castillo, Oscar

    2016-01-01

    This volume comprises selected extended papers written by prominent researchers participating in the International MultiConference of Engineers and Computer Scientists 2015, Hong Kong, 18-20 March 2015. The conference served as a platform for discussion of frontier topics in theoretical and applied engineering and computer science, and subjects covered include communications systems, control theory and automation, bioinformatics, artificial intelligence, data mining, engineering mathematics, scientific computing, engineering physics, electrical engineering, and industrial applications. The book describes the state-of-the-art in engineering technologies and computer science and its applications, and will serve as an excellent reference for industrial and academic researchers and graduate students working in these fields.

  2. Mapping university students’ epistemic framing of computational physics using network analysis

    Directory of Open Access Journals (Sweden)

    Madelen Bodin

    2012-04-01

    Solving physics problems in university physics education using a computational approach requires knowledge and skills in several domains, for example physics, mathematics, programming, and modeling. These competences are in turn related to students’ beliefs about the domains as well as about learning. These knowledge and belief components are referred to here as epistemic elements, which together represent the student’s epistemic framing of the situation. The purpose of this study was to investigate university physics students’ epistemic framing when solving and visualizing a physics problem using a particle-spring model system. Students’ epistemic framings were analyzed before and after the task using a network analysis approach on interview transcripts, producing visual representations as epistemic networks. The results show that students change their epistemic framing from a modeling task, with expectations about learning programming, to a physics task, in which they are challenged to use physics principles and conservation laws in order to troubleshoot and understand their simulations. This implies that the task, even though it does not introduce any new physics, helps the students to develop a more coherent view of the importance of using physics principles in problem solving. The network analysis method used in this study is shown to give intelligible representations of the students’ epistemic framing and is proposed as a useful method for the analysis of textual data.
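
    The core of such an analysis, turning coded transcript segments into a weighted co-occurrence network whose structure can be inspected with standard graph metrics, can be sketched in a few lines. The segments and codes below are invented for illustration; the study derives its elements from actual interview data.

```python
import itertools
import networkx as nx

# Each interview segment is coded with the epistemic elements it touches;
# the codes below are invented for illustration.
segments = [
    {"programming", "modeling"},
    {"physics principles", "conservation laws"},
    {"modeling", "physics principles"},
    {"programming", "troubleshooting", "physics principles"},
]

G = nx.Graph()
for seg in segments:
    for u, v in itertools.combinations(sorted(seg), 2):
        # edge weight counts how often two elements co-occur in a segment
        w = G[u][v]["weight"] + 1 if G.has_edge(u, v) else 1
        G.add_edge(u, v, weight=w)

# Centrality then indicates which elements anchor the student's framing.
print(nx.degree_centrality(G))
```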

  3. A multi-physics analysis for the actuation of the SSS in OPAL reactor

    Directory of Open Access Journals (Sweden)

    Ferraro Diego

    2018-01-01

    OPAL is a 20 MWth multi-purpose open-pool type Research Reactor located at Lucas Heights, Australia. It was designed, built and commissioned by INVAP between 2000 and 2006, and it has been operated by the Australian Nuclear Science and Technology Organisation (ANSTO) with a very good overall performance. In November 2016, OPAL reached 10 years of continuous operation, becoming one of the most reliable and available reactors of its kind worldwide, with an unbeaten record of being fully operational 307 days a year. One of the enhanced safety features of this state-of-the-art reactor is an independent, diverse and redundant Second Shutdown System (SSS), which consists of draining the heavy water reflector contained in the Reflector Vessel. Since high-quality experimental data are available from the reactor commissioning and operation stages, and even from early component design validation stages, several models, both neutronic and thermal-hydraulic, have been developed in recent years using advanced calculation tools and the novel capabilities for coupling them. These advanced models were developed to assess the capability of such codes to simulate and predict complex behaviour and to support highly detailed analyses. In this framework, INVAP developed a three-dimensional CFD model that represents the detailed hydraulic behaviour of the Second Shutdown System in an actuation scenario, from which 3D temporal profiles of the heavy water drainage inside the Reflector Vessel can be obtained. This model was validated by comparing the computational results with experimental measurements performed on a real-size physical model built by INVAP during the early OPAL design engineering stages. Furthermore, detailed 3D Serpent Monte Carlo models are also available, which have already been validated with experimental data from reactor commissioning and operating cycles. In the present work the neutronic and thermohydraulic

  4. A multi-physics analysis for the actuation of the SSS in OPAL reactor

    Science.gov (United States)

    Ferraro, Diego; Alberto, Patricio; Villarino, Eduardo; Doval, Alicia

    2018-05-01

    OPAL is a 20 MWth multi-purpose open-pool type Research Reactor located at Lucas Heights, Australia. It was designed, built and commissioned by INVAP between 2000 and 2006, and it has been operated by the Australian Nuclear Science and Technology Organisation (ANSTO) with a very good overall performance. In November 2016, OPAL reached 10 years of continuous operation, becoming one of the most reliable and available reactors of its kind worldwide, with an unbeaten record of being fully operational 307 days a year. One of the enhanced safety features of this state-of-the-art reactor is an independent, diverse and redundant Second Shutdown System (SSS), which consists of draining the heavy water reflector contained in the Reflector Vessel. Since high-quality experimental data are available from the reactor commissioning and operation stages, and even from early component design validation stages, several models, both neutronic and thermal-hydraulic, have been developed in recent years using advanced calculation tools and the novel capabilities for coupling them. These advanced models were developed to assess the capability of such codes to simulate and predict complex behaviour and to support highly detailed analyses. In this framework, INVAP developed a three-dimensional CFD model that represents the detailed hydraulic behaviour of the Second Shutdown System in an actuation scenario, from which 3D temporal profiles of the heavy water drainage inside the Reflector Vessel can be obtained. This model was validated by comparing the computational results with experimental measurements performed on a real-size physical model built by INVAP during the early OPAL design engineering stages. Furthermore, detailed 3D Serpent Monte Carlo models are also available, which have already been validated with experimental data from reactor commissioning and operating cycles. In the present work the neutronic and thermohydraulic models, available for

  5. GPU in Physics Computation: Case Geant4 Navigation

    CERN Document Server

    Seiskari, Otto; Niemi, Tapio

    2012-01-01

    General-purpose computing on graphics processing units (GPU) is a potential method of speeding up scientific computation at low cost and high energy efficiency. We experimented with the particle physics simulation toolkit Geant4, used at CERN, to benchmark its geometry navigation functionality on a GPU. The goal was to find out whether Geant4 physics simulations could benefit from GPU acceleration and how difficult it is to modify Geant4 code to run on a GPU. We ported selected parts of Geant4 code to C99 and CUDA and implemented a simple gamma physics simulation utilizing this code to measure efficiency. The performance of the program was tested by running it on two different platforms: an NVIDIA GeForce 470 GTX GPU and a 12-core AMD CPU system. Our conclusion was that GPUs can be a competitive alternative to multi-core computers, but porting existing software in an efficient way is challenging.

  6. A Multi-Physics simulation of the Reactor Core using CUPID/MASTER

    International Nuclear Information System (INIS)

    Lee, Jae Ryong; Cho, Hyoung Kyu; Yoon, Han Young; Cho, Jin Young; Jeong, Jae Jun

    2011-01-01

    KAERI has been developing a component-scale thermal-hydraulics code, CUPID, aimed at multi-dimensional, multi-physics and multi-scale thermal-hydraulics analysis. In our previous papers, the CUPID code proved able to reproduce multi-dimensional thermal-hydraulic behaviour, as validated against various conceptual problems and experimental data. For numerical closure, it adopts a three-dimensional, transient, two-phase, three-field model and includes physical models and correlations for the interfacial mass, momentum, and energy transfer. For multi-scale analysis, work is in progress to merge CUPID with the system-scale thermal-hydraulic code MARS. In the present paper, a multi-physics simulation was performed by coupling CUPID with the three-dimensional neutron kinetics code MASTER, which is linked into CUPID as a dynamic link library (DLL). As an example, the APR1400 reactor core during a control rod drop/ejection accident was simulated, adopting a porous-media approach for the fuel assemblies. The following sections present the numerical modeling of the reactor core, the coupling of the kinetics code, and the simulation results
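
    The coupling pattern described above, each physics advanced in turn within a time step, with the kinetics feeding power to the thermal solution and the temperature feeding reactivity back, can be illustrated with a zero-dimensional toy. The point-kinetics and lumped heat-balance stand-ins below, and all constants, are illustrative assumptions, not the CUPID/MASTER models, which resolve three-dimensional fields.

```python
# Toy operator-split transient: point kinetics with one precursor group
# for "neutronics", a lumped heat balance for "thermal-hydraulics",
# and Doppler feedback closing the loop. All constants are illustrative.
beta, lam, Lam = 0.0065, 0.08, 1.0e-4    # delayed fraction, decay, generation time
alpha_T = -2.0e-5                        # Doppler coefficient [1/K]
C_th, h_cool = 30.0, 2.5                 # heat capacity, cooling coefficient
T_cool, T0 = 560.0, 900.0                # coolant and reference temperatures [K]

P, T = 1.0, T0                           # normalized power, fuel temperature
C = beta * P / (lam * Lam)               # precursor concentration (equilibrium)
rho_ext, dt = 0.001, 1.0e-3              # small reactivity step, time step [s]

for _ in range(20000):                   # 20 s of transient
    # 1) advance neutronics with the temperature frozen (operator splitting)
    rho = rho_ext + alpha_T * (T - T0)
    dP = ((rho - beta) / Lam) * P + lam * C
    dC = (beta / Lam) * P - lam * C
    P, C = P + dt * dP, C + dt * dC
    # 2) advance the thermal field using the freshly computed power
    T += dt * (1000.0 * P - h_cool * (T - T_cool)) / C_th

print(f"P = {P:.3f}, T = {T:.1f} K")     # Doppler feedback arrests the rise
```

    With the small positive reactivity step, the power first rises and the Doppler feedback from the warming fuel then pulls the transient back toward a new steady state, the qualitative behavior such coupled simulations are built to capture.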

  7. Multi-party Quantum Computation

    OpenAIRE

    Smith, Adam

    2001-01-01

    We investigate definitions of and protocols for multi-party quantum computing in the scenario where the secret data are quantum systems. We work in the quantum information-theoretic model, where no assumptions are made on the computational power of the adversary. For the slightly weaker task of verifiable quantum secret sharing, we give a protocol which tolerates any t < n/4 cheating parties (out of n). This is shown to be optimal. We use this new tool to establish that any multi-party quantu...

  8. Computational analysis of a multistage axial compressor

    Science.gov (United States)

    Mamidoju, Chaithanya

    Turbomachines are used extensively in the aerospace, power generation, and oil & gas industries. The efficiency of these machines is often an important factor and has led to continuous efforts to improve designs for better efficiency. The axial-flow compressor is a major component of a gas turbine, and the turbine's overall performance depends strongly on compressor performance. Traditional analysis of axial compressors involves throughflow calculations, isolated blade-passage analysis, quasi-3D blade-to-blade analysis, single-stage (rotor-stator) analysis, and multi-stage analysis involving larger design cycles. In the current study, the detailed flow through a 15-stage axial compressor is analyzed using a 3D Navier-Stokes CFD solver in a parallel computing environment. A methodology is described for steady-state (frozen rotor-stator) analysis of one blade passage per component. Various effects such as mesh type and density, boundary conditions, and tip clearance, as well as numerical issues such as the choice of turbulence model, the choice of advection model, and parallel-processing performance, are analyzed; a high sensitivity of the predictions to these factors was found. Physical explanations of the flow features observed in the computational study are given. The total pressure rise versus mass flow rate was computed.

  9. Computational physics

    CERN Document Server

    Newman, Mark

    2013-01-01

    A complete introduction to the field of computational physics, with examples and exercises in the Python programming language. Computers play a central role in virtually every major physics discovery today, from astrophysics and particle physics to biophysics and condensed matter. This book explains the fundamentals of computational physics and describes in simple terms the techniques that every physicist should know, such as finite difference methods, numerical quadrature, and the fast Fourier transform. The book offers a complete introduction to the topic at the undergraduate level, and is also suitable for the advanced student or researcher who wants to learn the foundational elements of this important field.

  10. Performance modeling and analysis of parallel Gaussian elimination on multi-core computers

    Directory of Open Access Journals (Sweden)

    Fadi N. Sibai

    2014-01-01

    Gaussian elimination is used in many applications, and in particular in the solution of systems of linear equations. This paper presents mathematical performance models and analysis of four parallel Gaussian elimination methods (precisely, the Original method and the new Meet in the Middle (MiM) algorithms and their variants with SIMD vectorization) on multi-core systems. Analytical performance models of the four methods are formulated and presented, followed by evaluations of these models with the operation latencies of modern multi-core systems. Our results reveal that the four methods generally exhibit good performance scaling with increasing matrix size and number of cores. SIMD vectorization only makes a large difference in performance for low numbers of cores. For a large matrix size (n ⩾ 16 K), the performance difference between the MiM and Original methods falls from 16× with four cores to 4× with 16 K cores. The efficiencies of all four methods are low with 1 K cores or more, stressing a major problem of multi-core systems where the network-on-chip and memory latencies are too high in relation to basic arithmetic operations. Thus Gaussian elimination can greatly benefit from the resources of multi-core systems, but higher performance gains can be achieved if multi-core systems can be designed with lower memory-operation, synchronization, and interconnect-communication latencies, requirements of utmost importance and challenge in the exascale computing age.
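
    The flavor of such analytical models can be conveyed with a deliberately crude sketch: total time as parallel flop work plus a per-pivot-step synchronization cost. The formula and constants below are illustrative assumptions, not the models or measured latencies from the paper.

```python
def ge_time(n, cores, t_flop=1e-9, t_sync=1e-5):
    """Crude analytic model: (2/3) n^3 flops shared by the cores, plus one
    synchronization per elimination step. Constants are illustrative."""
    work = (2.0 / 3.0) * n ** 3 * t_flop / cores
    sync = n * t_sync                     # n pivot steps, each a barrier
    return work + sync

n = 4096
t1 = ge_time(n, 1)
for cores in (4, 64, 1024, 16384):
    t = ge_time(n, cores)
    print(f"cores={cores:>6}: time={t:9.4f} s  "
          f"speedup={t1 / t:8.1f}  efficiency={t1 / t / cores:6.1%}")
```

    Even this toy reproduces the paper's qualitative finding: once the per-step latency term dominates, adding cores stops paying off and efficiency collapses.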

  11. GeN-Foam: a novel OpenFOAM"® based multi-physics solver for 2D/3D transient analysis of nuclear reactors

    International Nuclear Information System (INIS)

    Fiorina, Carlo; Clifford, Ivor; Aufiero, Manuele; Mikityuk, Konstantin

    2015-01-01

    Highlights: • Development of a new multi-physics solver based on OpenFOAM"®. • Tight coupling of thermal-hydraulics, thermal-mechanics and neutronics. • Combined use of traditional RANS and porous-medium models. • Mesh for neutronics deformed according to the predicted displacement field. • Use of three unstructured meshes, adaptive time step, parallel computing. - Abstract: The FAST group at the Paul Scherrer Institut has been developing a code system for reactor analysis for many years. For transient analysis, this code system is currently based on a state-of-the-art coupled TRACE-PARCS routine. This work presents an attempt to supplement the FAST code system with a novel solver characterized by tight coupling between the different equations, parallel computing capabilities, adaptive time-stepping and more accurate treatment of some of the phenomena involved in a reactor transient. The new solver is based on OpenFOAM"®, an open-source C++ library for the solution of partial differential equations using finite-volume discretization. It couples together a multi-scale fine/coarse mesh sub-solver for thermal-hydraulics, a multi-group diffusion sub-solver for neutronics, a displacement-based sub-solver for thermal-mechanics and a finite-difference model for the temperature field in the fuel. It is targeted toward the analysis of pin-based reactors (e.g., liquid metal fast reactors or light water reactors) or homogeneous reactors (e.g., fast-spectrum molten salt reactors). This paper presents each “single-physics” sub-solver and the overall coupling strategy, using the sodium-cooled fast reactor as a test case, and essential code verification tests are described.
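
    The "tight coupling" named above can be read as: iterate the sub-solvers to mutual consistency within each step, rather than passing data once. A scalar Picard-iteration toy of that idea is sketched below; the two stand-in "solvers" and their feedback laws are invented for illustration and are not GeN-Foam's actual sub-solvers.

```python
# Tight coupling as Picard iteration within a step: scalar stand-ins for
# the neutronics and thermal-hydraulics sub-solvers of a coupled code.
def solve_power(T):
    """'Neutronics': power level consistent with fuel temperature T."""
    return 1.0 + 0.5 * (900.0 - T) / 900.0       # toy feedback law

def solve_temperature(P):
    """'Thermal-hydraulics': temperature consistent with power P."""
    return 560.0 + 340.0 * P                     # toy heat balance

T, P = 700.0, 1.0                                # deliberately inconsistent start
for k in range(100):
    P_new = solve_power(T)                       # sub-solver 1
    T_new = solve_temperature(P_new)             # sub-solver 2, sees new power
    if abs(T_new - T) < 1e-9:                    # converged to a consistent state
        print(f"consistent after {k + 1} sweeps: P={P_new:.6f}, T={T_new:.3f}")
        break
    T, P = T_new, P_new
```

    An operator-split code would stop after one such sweep per time step; iterating to the fixed point is what removes the splitting error at the cost of repeated sub-solver calls.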

  12. Computational simulation of biomolecules transport with multi-physics near microchannel surface for development of biomolecules-detection devices.

    Science.gov (United States)

    Suzuki, Yuma; Shimizu, Tetsuhide; Yang, Ming

    2017-01-01

    The quantitative evaluation of biomolecule transport with multi-physics at the nano/micro scale is needed in order to optimize the design of microfluidic devices for biomolecule detection with high sensitivity and rapid diagnosis. This paper investigates the effectiveness of computational simulation, using a numerical model of biomolecule transport with multi-physics near a microchannel surface, for the development of biomolecule-detection devices. The biomolecule transport under fluid drag force, electric double layer (EDL) force, and van der Waals force was modeled by the Newtonian equation of motion. The model's validity was verified by comparing the influence of ionic strength and flow velocity on the biomolecule distribution near the surface with experimental results from previous studies. The influence of the acting forces on the near-surface distribution was then investigated by simulation. With all acting forces combined, the simulated trend of the distribution with ionic strength and flow velocity agreed with the experimental results. Furthermore, the EDL force dominated the near-surface distribution compared with the fluid drag force, except in the case of high velocity and low ionic strength. The knowledge gained from the simulation may be useful for the design of biomolecule-detection devices, and the simulation can be expected to serve as a design tool for high detection sensitivity and rapid diagnosis in the future.
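
    A minimal 1D realization of the force balance named above (Stokes drag, screened EDL repulsion, van der Waals attraction) is sketched below as an overdamped Langevin walk of a particle above the channel wall. All constants, including the force prefactors and the Debye length, are illustrative; the paper's model is more complete and includes the imposed flow.

```python
import numpy as np

# Overdamped 1D walk of a particle at height h above the channel wall,
# under the three forces named above. All constants are illustrative.
kB_T = 4.1e-21                    # thermal energy at room temperature [J]
radius = 50e-9                    # particle radius [m]
mu = 1e-3                         # water viscosity [Pa s]
gamma = 6 * np.pi * mu * radius   # Stokes drag coefficient

A_H = 1e-20                       # Hamaker constant [J] (vdW attraction)
B_edl = 5e-12                     # EDL force prefactor [N]
debye = 10e-9                     # Debye length [m]; shrinks at high ionic strength

def wall_force(h):
    f_edl = B_edl * np.exp(-h / debye)           # screened repulsion
    f_vdw = -A_H * radius / (6.0 * h ** 2)       # sphere-plane attraction
    return f_edl + f_vdw

rng = np.random.default_rng(0)
h, dt, traj = 100e-9, 1e-6, []
for _ in range(20000):
    drift = wall_force(h) / gamma * dt
    kick = np.sqrt(2.0 * kB_T / gamma * dt) * rng.standard_normal()
    h = min(max(h + drift + kick, 1e-9), 1e-6)   # wall below, channel above
    traj.append(h)
print(f"mean height ~ {np.mean(traj) * 1e9:.0f} nm")
```

    Shrinking the Debye length, as happens at high ionic strength, weakens the repulsive barrier and lets particles approach the wall, which is the qualitative trend the simulation was validated against.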

  13. Computational physics

    Energy Technology Data Exchange (ETDEWEB)

    Anon.

    1987-01-15

    Computers have for many years played a vital role in the acquisition and treatment of experimental data, but they have more recently taken up a much more extended role in physics research. The numerical and algebraic calculations now performed on modern computers make it possible to explore consequences of basic theories in a way which goes beyond the limits of both analytic insight and experimental investigation. This was brought out clearly at the Conference on Perspectives in Computational Physics, held at the International Centre for Theoretical Physics, Trieste, Italy, from 29-31 October.

  14. Computational physics

    International Nuclear Information System (INIS)

    Anon.

    1987-01-01

    Computers have for many years played a vital role in the acquisition and treatment of experimental data, but they have more recently taken up a much more extended role in physics research. The numerical and algebraic calculations now performed on modern computers make it possible to explore consequences of basic theories in a way which goes beyond the limits of both analytic insight and experimental investigation. This was brought out clearly at the Conference on Perspectives in Computational Physics, held at the International Centre for Theoretical Physics, Trieste, Italy, from 29-31 October

  15. Multi-dimensional two-fluid flow computation. An overview

    International Nuclear Information System (INIS)

    Carver, M.B.

    1992-01-01

    This paper discusses a repertoire of three-dimensional computer programs developed to perform critical analysis of single-phase, two-phase and multi-fluid flow in reactor components. The basic numerical approach to solving the governing equations common to all the codes is presented and the additional constitutive relationships required for closure are discussed. Particular applications are presented for a number of computer codes. (author). 12 refs

  16. High performance multi-scale and multi-physics computation of nuclear power plant subjected to strong earthquake. An Overview

    International Nuclear Information System (INIS)

    Yoshimura, Shinobu; Kawai, Hiroshi; Sugimoto, Shin'ichiro; Hori, Muneo; Nakajima, Norihiro; Kobayashi, Kei

    2010-01-01

    Recently the importance of nuclear energy has been recognized again, due to serious concerns about global warming and energy security. In parallel, it is a critical issue to verify the safety capability of ageing nuclear power plants (NPPs) subjected to strong earthquakes. Since 2007, we have been developing a multi-scale and multi-physics numerical simulator for quantitatively predicting the actual quake-proof capability of ageing NPPs, under operation or just after plant trip, subjected to a strong earthquake. In this paper, we describe an overview of the simulator with some preliminary results. (author)

  17. GeN-Foam: a novel OpenFOAM® based multi-physics solver for 2D/3D transient analysis of nuclear reactors

    Energy Technology Data Exchange (ETDEWEB)

    Fiorina, Carlo, E-mail: carlo.fiorina@psi.ch [Paul Scherrer Institut, Nuclear Energy and Safety Department, Laboratory for Reactor Physics and Systems Behaviour – PSI, Villigen 5232 (Switzerland); Clifford, Ivor [Paul Scherrer Institut, Nuclear Energy and Safety Department, Laboratory for Reactor Physics and Systems Behaviour – PSI, Villigen 5232 (Switzerland); Aufiero, Manuele [LPSC-IN2P3-CNRS/UJF/Grenoble INP, 53 avenue des Martyrs, 38026 Grenoble Cedex (France); Mikityuk, Konstantin [Paul Scherrer Institut, Nuclear Energy and Safety Department, Laboratory for Reactor Physics and Systems Behaviour – PSI, Villigen 5232 (Switzerland)

    2015-12-01

    Highlights: • Development of a new multi-physics solver based on OpenFOAM®. • Tight coupling of thermal-hydraulics, thermal-mechanics and neutronics. • Combined use of traditional RANS and porous-medium models. • Mesh for neutronics deformed according to the predicted displacement field. • Use of three unstructured meshes, adaptive time step, parallel computing. - Abstract: The FAST group at the Paul Scherrer Institut has been developing a code system for reactor analysis for many years. For transient analysis, this code system is currently based on a state-of-the-art coupled TRACE-PARCS routine. This work presents an attempt to supplement the FAST code system with a novel solver characterized by tight coupling between the different equations, parallel computing capabilities, adaptive time-stepping and more accurate treatment of some of the phenomena involved in a reactor transient. The new solver is based on OpenFOAM®, an open-source C++ library for the solution of partial differential equations using finite-volume discretization. It couples together a multi-scale fine/coarse mesh sub-solver for thermal-hydraulics, a multi-group diffusion sub-solver for neutronics, a displacement-based sub-solver for thermal-mechanics and a finite-difference model for the temperature field in the fuel. It is targeted toward the analysis of pin-based reactors (e.g., liquid metal fast reactors or light water reactors) or homogeneous reactors (e.g., fast-spectrum molten salt reactors). This paper presents each “single-physics” sub-solver and the overall coupling strategy, using the sodium-cooled fast reactor as a test case, and essential code verification tests are described.

  18. PREFACE: 16th International workshop on Advanced Computing and Analysis Techniques in physics research (ACAT2014)

    Science.gov (United States)

    Fiala, L.; Lokajicek, M.; Tumova, N.

    2015-05-01

    This volume of the IOP Conference Series is dedicated to scientific contributions presented at the 16th International Workshop on Advanced Computing and Analysis Techniques in Physics Research (ACAT 2014); this year the motto was "bridging disciplines". The conference took place on September 1-5, 2014, at the Faculty of Civil Engineering, Czech Technical University in Prague, Czech Republic. The 16th edition of ACAT explored the boundaries of computing system architectures, data analysis algorithmics, automatic calculations, and theoretical calculation technologies. It provided a forum for confronting and exchanging ideas among these fields, where new approaches in computing technologies for scientific research were explored and promoted. This year's edition of the workshop brought together over 140 participants from all over the world. The workshop's 16 invited speakers presented key topics on advanced computing and analysis techniques in physics. During the workshop, 60 talks and 40 posters were presented in three tracks: Computing Technology for Physics Research; Data Analysis: Algorithms and Tools; and Computations in Theoretical Physics: Techniques and Methods. The round table enabled discussions on expanding software, knowledge sharing and scientific collaboration in the respective areas. ACAT 2014 was generously sponsored by Western Digital, Brookhaven National Laboratory, Hewlett Packard, DataDirect Networks, M Computers, Bright Computing, Huawei and PDV-Systemhaus. Special appreciation goes to the track liaisons Lorenzo Moneta, Axel Naumann and Grigory Rubtsov for their work on the scientific program and the publication preparation. ACAT's IACC would also like to express its gratitude to all referees for their work on making sure the contributions are published in the proceedings. Our thanks extend to the conference liaisons Andrei Kataev and Jerome Lauret, who worked with the local contacts and made this conference possible, as well as to the program

  19. Physics properties of non-helical scan using 320-row multi detector computed tomography

    International Nuclear Information System (INIS)

    Urikura, Atsushi; Nakaya, Yoshihiro; Kawatani, Keisuke; Kawashima, Ippei; Goto, Hironori; Ichikawa, Katsuhiro

    2012-01-01

    Recently, clinical applications utilizing 320-row multi-detector computed tomography (320MDCT) have increased, and the physical image properties of 320MDCT have attracted increasing attention. We evaluated the in-plane and z-direction spatial resolution, image noise and low-contrast sensitivity of the non-helical mode (320NH), the 640-slice mode using double-slice reconstruction technology (640DS), and the 64-row helical mode (64HE) of a 320MDCT scanner. The spatial resolution in the z-direction was evaluated by section sensitivity profile (SSP) measurement with a micro coin phantom and by the contrast transfer ratio (CTR) with a 0.5-mm comb phantom. The in-plane spatial resolution of 320NH was uniform over all slice positions. The spatial resolution in the z-direction decreased from the cathode side toward the anode side. The image noise of the anode side was higher than that of the cathode side. The contrast-to-noise ratio, used as an index of low-contrast sensitivity, was uniform over all slice positions. The CTR of 320NH fluctuated along the z-position, and the fluctuation was improved by 640DS except at the center of rotation. (author)
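
    For readers less familiar with these metrics: the z-direction resolution is usually summarized by the full width at half maximum (FWHM) of the SSP, and low-contrast sensitivity by a contrast-to-noise ratio (CNR). A minimal sketch of both computations in Python, assuming the SSP samples and region-of-interest statistics are already measured (the numbers below are synthetic, not from this study):

```python
import numpy as np

def fwhm(z, ssp):
    """Full width at half maximum of a sampled section sensitivity profile."""
    ssp = np.asarray(ssp, float) / np.max(ssp)   # normalize peak to 1
    above = np.where(ssp >= 0.5)[0]              # samples above half maximum
    return z[above[-1]] - z[above[0]]            # coarse width estimate

def cnr(mean_roi, mean_bg, sd_bg):
    """Contrast-to-noise ratio from ROI/background means and background SD."""
    return abs(mean_roi - mean_bg) / sd_bg

# Example with a synthetic Gaussian SSP (sigma = 0.25 mm -> FWHM ~ 0.59 mm)
z = np.linspace(-2, 2, 401)                          # slice position [mm]
print(fwhm(z, np.exp(-z**2 / (2 * 0.25**2))))        # ~0.59
print(cnr(mean_roi=45.0, mean_bg=40.0, sd_bg=5.0))   # 1.0
```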

  20. Multi-Scale Multi-physics Methods Development for the Calculation of Hot-Spots in the NGNP

    International Nuclear Information System (INIS)

    Downar, Thomas; Seker, Volkan

    2013-01-01

    Radioactive gaseous fission products are released out of the fuel element at a significantly higher rate when the fuel temperature exceeds 1600°C in high-temperature gas-cooled reactors (HTGRs). Therefore, it is of paramount importance to accurately predict the peak fuel temperature during all operational and design-basis accident conditions. The current methods used to predict the peak fuel temperature in HTGRs, such as the Next-Generation Nuclear Plant (NGNP), estimate the average fuel temperature in a computational mesh modeling hundreds of fuel pebbles or a fuel assembly in a pebble-bed reactor (PBR) or prismatic block type reactor (PMR), respectively. Experiments conducted in operating HTGRs indicate considerable uncertainty in the current methods and correlations used to predict actual temperatures. The objective of this project is to improve the accuracy in the prediction of local 'hot' spots by developing multi-scale, multi-physics methods and implementing them within the framework of established codes used for NGNP analysis. The multi-scale approach which this project will implement begins with defining suitable scales for a physical and mathematical model and then deriving and applying the appropriate boundary conditions between scales. The macro scale is the greatest length that describes the entire reactor, whereas the meso scale models only a fuel block in a prismatic reactor and ten to hundreds of pebbles in a pebble bed reactor. The smallest scale is the micro scale--the level of a fuel kernel of the pebble in a PBR and fuel compact in a PMR--which needs to be resolved in order to calculate the peak temperature in a fuel kernel.

  1. Multi-Scale Multi-physics Methods Development for the Calculation of Hot-Spots in the NGNP

    Energy Technology Data Exchange (ETDEWEB)

    Downar, Thomas [Univ. of Michigan, Ann Arbor, MI (United States); Seker, Volkan [Univ. of Michigan, Ann Arbor, MI (United States)

    2013-04-30

    Radioactive gaseous fission products are released out of the fuel element at a significantly higher rate when the fuel temperature exceeds 1600°C in high-temperature gas-cooled reactors (HTGRs). Therefore, it is of paramount importance to accurately predict the peak fuel temperature during all operational and design-basis accident conditions. The current methods used to predict the peak fuel temperature in HTGRs, such as the Next-Generation Nuclear Plant (NGNP), estimate the average fuel temperature in a computational mesh modeling hundreds of fuel pebbles or a fuel assembly in a pebble-bed reactor (PBR) or prismatic block type reactor (PMR), respectively. Experiments conducted in operating HTGRs indicate considerable uncertainty in the current methods and correlations used to predict actual temperatures. The objective of this project is to improve the accuracy in the prediction of local "hot" spots by developing multi-scale, multi-physics methods and implementing them within the framework of established codes used for NGNP analysis. The multi-scale approach which this project will implement begins with defining suitable scales for a physical and mathematical model and then deriving and applying the appropriate boundary conditions between scales. The macro scale is the greatest length that describes the entire reactor, whereas the meso scale models only a fuel block in a prismatic reactor and ten to hundreds of pebbles in a pebble bed reactor. The smallest scale is the micro scale--the level of a fuel kernel of the pebble in a PBR and fuel compact in a PMR--which needs to be resolved in order to calculate the peak temperature in a fuel kernel.
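
    To make the micro scale concrete: once the meso-scale solution supplies a kernel surface temperature, a first estimate of the kernel centerline temperature follows from steady conduction in a sphere with uniform heat generation, T_center = T_surface + q'''R²/(6k). A short sketch with purely illustrative numbers (not NGNP design data):

```python
# Micro-scale estimate: centerline temperature of a spherical fuel kernel
# with uniform heat generation, given the surface temperature from the
# meso-scale solution. All numbers below are purely illustrative.

def kernel_peak_temperature(T_surface, q_vol, radius, k):
    """Steady conduction in a sphere: T_center = T_s + q''' R^2 / (6 k)."""
    return T_surface + q_vol * radius**2 / (6.0 * k)

T_s = 1200.0      # kernel surface temperature from the meso scale [degC]
q   = 5.0e8       # volumetric heat generation [W/m^3] (illustrative)
R   = 250e-6      # kernel radius [m]
k   = 3.5         # fuel thermal conductivity [W/m/K] (illustrative)

print(f"peak kernel temperature ~ {kernel_peak_temperature(T_s, q, R, k):.1f} degC")
```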

  2. A EU simulation platform for nuclear reactor safety: multi-scale and multi-physics calculations, sensitivity and uncertainty analysis (NURESIM project)

    International Nuclear Information System (INIS)

    Chauliac, Christian; Bestion, Dominique; Crouzet, Nicolas; Aragones, Jose-Maria; Cacuci, Dan Gabriel; Weiss, Frank-Peter; Zimmermann, Martin A.

    2010-01-01

    The NURESIM numerical simulation platform is developed in the framework of the NURISP European Collaborative Project (FP7), which includes 22 organizations from 14 European countries. NURESIM intends to be a reference platform providing high-quality software tools, physical models, generic functions and assessment results. The NURESIM platform provides an accurate representation of the physical phenomena by promoting and incorporating the latest advances in core physics, two-phase thermal-hydraulics and fuel modelling. It includes multi-scale and multi-physics features, especially for coupling core physics and thermal-hydraulics models for reactor safety. Easy coupling of the different codes and solvers is provided through the use of a common data structure and generic functions (e.g., for interpolation between non-conforming meshes). More generally, the platform includes generic pre-processing, post-processing and supervision functions through the open-source SALOME software, in order to make the codes more user-friendly. The platform also provides the informatics environment for testing and comparing different codes. The contribution summarizes the achievements and ongoing developments of the simulation platform in core physics, thermal-hydraulics, multi-physics, uncertainties and code integration
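
    The generic interpolation functions mentioned above can be pictured in one dimension: a nodal field on one mesh is transferred to the nodes of a non-conforming second mesh. The sketch below uses simple linear interpolation as a stand-in; the platform's actual functions handle general 2D/3D meshes and conservation requirements.

```python
import numpy as np

# Transfer a nodal field from a coarse source mesh to a non-conforming
# finer target mesh by linear interpolation (1D illustration only).

x_src = np.linspace(0.0, 1.0, 6)        # source mesh nodes
f_src = np.sin(np.pi * x_src)           # field known on the source mesh

x_tgt = np.linspace(0.0, 1.0, 17)       # non-conforming target mesh
f_tgt = np.interp(x_tgt, x_src, f_src)  # interpolated field on target nodes

print(f_tgt.round(3))
```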

  3. Design of a Modular Monolithic Implicit Solver for Multi-Physics Applications

    Science.gov (United States)

    Carton De Wiart, Corentin; Diosady, Laslo T.; Garai, Anirban; Burgess, Nicholas; Blonigan, Patrick; Ekelschot, Dirk; Murman, Scott M.

    2018-01-01

    The design of a modular multi-physics high-order space-time finite-element framework is presented, together with its extension to allow monolithic coupling of different physics. One of the main objectives of the framework is to perform efficient high-fidelity simulations of capsule/parachute systems. This problem requires simulating multiple physics including, but not limited to, the compressible Navier-Stokes equations, the dynamics of a moving body with mesh deformations and adaptation, the linear shell equations, non-reflective boundary conditions and wall modeling. The solver is based on high-order space-time finite-element methods. Continuous, discontinuous and C1-discontinuous Galerkin methods are implemented, allowing one to discretize various physical models. Tangent and adjoint sensitivity analyses are also targeted in order to conduct gradient-based optimization, error estimation, mesh adaptation, and flow control, adding another layer of complexity to the framework. The decisions made to tackle these challenges are presented. The discussion focuses first on the "single-physics" solver and later on its extension to the monolithic coupling of different physics. The implementation of different physics modules, relevant to the capsule/parachute system, is also presented. Finally, examples of coupled computations are presented, paving the way to the simulation of the full capsule/parachute system.
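
    The distinction between monolithic and partitioned coupling is worth spelling out: in a monolithic scheme the discrete residuals of all physics are assembled into one nonlinear system and solved simultaneously, rather than iterating between separate solvers. A toy sketch of that idea with two coupled scalar "physics" residuals, solved with a Newton-type method (the residual functions are invented for illustration):

```python
from scipy.optimize import fsolve

# Monolithic coupling in miniature: residuals of two "physics" (u: flow-like,
# v: structure-like) assembled into one system and solved simultaneously.

def coupled_residual(q):
    u, v = q
    r_fluid = u - 1.0 + 0.3 * v        # fluid residual depends on structure
    r_struct = v - 0.5 * u             # structural residual depends on fluid
    return [r_fluid, r_struct]

u, v = fsolve(coupled_residual, x0=[0.0, 0.0])
print(f"u = {u:.6f}, v = {v:.6f}")     # both physics satisfied at once
```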

  4. Extreme Scale Computing for First-Principles Plasma Physics Research

    Energy Technology Data Exchange (ETDEWEB)

    Chang, Choong-Seock [Princeton University

    2011-10-12

    World superpowers are in the middle of the “Computnik” race. The US Department of Energy (and National Nuclear Security Administration) wishes to launch exascale computer systems into the scientific (and national security) world by 2018. The objective is to solve important scientific problems and to predict the outcomes using the most fundamental scientific laws, which would not be possible otherwise. Being chosen into the next “frontier” group can be of great benefit to a scientific discipline. An extreme scale computer system requires different types of algorithms and programming philosophy from those we have been accustomed to. Only a handful of scientific codes are blessed to be capable of scalable usage of today’s largest computers in operation at petascale (using more than 100,000 cores concurrently). Fortunately, a few magnetic fusion codes are competing well in this race using the “first principles” gyrokinetic equations. These codes are beginning to study the fusion plasma dynamics in full-scale realistic diverted device geometry in a natural nonlinear multiscale setting, including the large-scale neoclassical and small-scale turbulence physics, but excluding some ultra-fast dynamics. In this talk, most of the above-mentioned topics will be introduced at an executive level. Representative properties of the extreme scale computers, modern programming exercises to take advantage of them, and different philosophies in the data flows and analyses will be presented. Examples of the multi-scale multi-physics scientific discoveries made possible by solving the gyrokinetic equations on extreme scale computers will be described. Future directions into “virtual tokamak experiments” will also be discussed.

  5. Three-dimensional multi-physics model of the European sodium fast reactor design applied to DBA analysis - 15293

    International Nuclear Information System (INIS)

    Lazaro, A.; Ordonez, J.; Martorell, S.; Przemyslaw, S.; Ammirabile, L.; Tsige-Tamirat, H.

    2015-01-01

    The sodium cooled fast reactor (SFR) is one of the reactor types selected by the Generation IV International Forum. The SFR stands out due to its remarkable past operational experience in related projects and its potential to achieve the ambitious goals laid out for the new generation of nuclear reactors. Notwithstanding this operational experience, there is a need to apply, from the early stages of the design process, computational tools able to simulate the system behaviour under conditions that may exceed the reactor safety limits, including the three-dimensional phenomena that may arise in these transients. This paper presents the different steps followed towards the development of a multi-physics platform with capabilities to simulate complex phenomena using a coupled neutronic-thermal-hydraulic scheme. The development started with a one-dimensional thermal-hydraulic model of the European Sodium Fast Reactor (ESFR) design with point-kinetics neutronic feedback, built with the state-of-the-art thermal-hydraulic system code TRACE and benchmarked against its peers in the framework of the FP7-CP-ESFR project. The model was subsequently extended into a three-dimensional model coupled with the spatial-kinetics neutronic code PARCS, able to simulate three-dimensional multi-physics phenomena, along with a comparison of the results for symmetric cases. The last part of the paper shows the application of the developed tool to the analysis of transients involving asymmetrical effects, such as the coast-down of a primary and secondary pump or the withdrawal of a peripheral control rod bank, demonstrating the unique capability of the code to simulate such transients and the capability of the design to withstand them under design basis conditions.
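
    The starting point described, a thermal-hydraulic model with point-kinetics feedback, can be sketched compactly as one delayed-neutron group plus a lumped fuel temperature providing a Doppler-type reactivity feedback. All constants below are illustrative placeholders, not ESFR data:

```python
from scipy.integrate import solve_ivp

# Point kinetics (one delayed group) with a lumped fuel temperature and a
# Doppler-type feedback. Illustrative constants only -- not ESFR values.
beta, Lam, lam = 3.5e-3, 4.0e-7, 0.08   # delayed fraction, gen. time, decay
alpha_D = -1.0e-5                        # Doppler coefficient [1/K]
H, tau = 100.0, 5.0                      # heating rate [K/s at P=1], time const
rho_ext = 1.0e-3                         # external reactivity step [dk/k]
T0 = H * tau                             # steady fuel temperature at P = 1

def rhs(t, y):
    P, C, T = y                          # power, precursors, fuel temperature
    rho = rho_ext + alpha_D * (T - T0)   # net reactivity with feedback
    dP = (rho - beta) / Lam * P + lam * C
    dC = beta / Lam * P - lam * C
    dT = H * P - T / tau                 # lumped fuel temperature balance
    return [dP, dC, dT]

y0 = [1.0, beta / (Lam * lam), T0]       # steady state at nominal power
sol = solve_ivp(rhs, (0.0, 60.0), y0, method="LSODA", rtol=1e-8)
print(f"P(60 s) = {sol.y[0, -1]:.3f}")   # feedback limits the power rise
```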

  6. Electromagnetic Physics Models for Parallel Computing Architectures

    Science.gov (United States)

    Amadio, G.; Ananya, A.; Apostolakis, J.; Aurora, A.; Bandieramonte, M.; Bhattacharyya, A.; Bianchini, C.; Brun, R.; Canal, P.; Carminati, F.; Duhem, L.; Elvira, D.; Gheata, A.; Gheata, M.; Goulas, I.; Iope, R.; Jun, S. Y.; Lima, G.; Mohanty, A.; Nikitina, T.; Novak, M.; Pokorski, W.; Ribon, A.; Seghal, R.; Shadura, O.; Vallecorsa, S.; Wenzel, S.; Zhang, Y.

    2016-10-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe the implementation of electromagnetic physics models developed for parallel computing architectures as part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well.
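
    The flavor of the SIMD-oriented restructuring can be conveyed by a toy example: instead of updating one track at a time, a whole basket of tracks is updated in a single vectorized pass. This numpy sketch is only an analogy for the approach (GeantV itself is C++ and far more sophisticated; the energy-loss model here is a placeholder):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000                       # a "basket" of particle tracks

energy = rng.uniform(1.0, 10.0, n)  # MeV, placeholder spectrum
step = rng.uniform(0.1, 1.0, n)     # cm, placeholder step lengths
dedx = 2.0                          # MeV/cm, constant stopping power (toy)

# Vectorized batch update: one SIMD-style pass over the whole basket.
energy_batch = np.maximum(energy - dedx * step, 0.0)

# The same update written per track (scalar style), for comparison:
energy_loop = energy.copy()
for i in range(10):                 # only a few tracks, to keep it quick
    energy_loop[i] = max(energy_loop[i] - dedx * step[i], 0.0)

assert np.allclose(energy_batch[:10], energy_loop[:10])
```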

  7. Capturing the complex behavior of hydraulic fracture stimulation through multi-physics modeling, field-based constraints, and model reduction

    Science.gov (United States)

    Johnson, S.; Chiaramonte, L.; Cruz, L.; Izadi, G.

    2016-12-01

    Advances in the accuracy and fidelity of numerical methods have significantly improved our understanding of coupled processes in unconventional reservoirs. However, such multi-physics models are typically characterized by many parameters and require exceptional computational resources to evaluate systems of practical importance, making these models difficult to use for field analyses or uncertainty quantification. One approach to remove these limitations is through targeted complexity reduction and field data constrained parameterization. For the latter, a variety of field data streams may be available to engineers and asset teams, including micro-seismicity from proximate sites, well logs, and 3D surveys, which can constrain possible states of the reservoir as well as the distributions of parameters. We describe one such workflow, using the Argos multi-physics code and requisite geomechanical analysis to parameterize the underlying models. We illustrate with a field study involving a constraint analysis of various field data and details of the numerical optimizations and model reduction to demonstrate how complex models can be applied to operation design in hydraulic fracturing operations, including selection of controllable completion and fluid injection design properties. The implication of this work is that numerical methods are mature and computationally tractable enough to enable complex engineering analysis and deterministic field estimates and to advance research into stochastic analyses for uncertainty quantification and value of information applications.

  8. Module-based Hybrid Uncertainty Quantification for Multi-physics Applications: Theory and Software

    Energy Technology Data Exchange (ETDEWEB)

    Tong, Charles [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Chen, Xiao [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Iaccarino, Gianluca [Stanford Univ., CA (United States); Mittal, Akshay [Stanford Univ., CA (United States)

    2013-10-08

    In this project we proposed to develop an innovative uncertainty quantification methodology that captures the best of the two competing approaches in UQ, namely, intrusive and non-intrusive approaches. The idea is to develop the mathematics and the associated computational framework and algorithms to facilitate the use of intrusive or non-intrusive UQ methods in different modules of a multi-physics multi-module simulation model in a way that physics code developers for different modules are shielded (as much as possible) from the chores of accounting for the uncertainties introduced by the other modules. As a result of our research and development, we have produced a number of publications, conference presentations, and a software product.
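
    The non-intrusive half of such a hybrid can be sketched very simply: each module is treated as a black box, uncertain inputs are sampled, and the samples are pushed through the module chain to obtain output statistics. A toy two-module Monte Carlo chain (module functions and input distributions are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(42)

def module_a(x):
    # First physics module (black box): nonlinear response to its input.
    return x**2 + 1.0

def module_b(y, k):
    # Second module: consumes module A's output plus its own uncertain k.
    return k * np.sqrt(y)

# Non-intrusive propagation: sample inputs, push samples through the chain.
x = rng.normal(1.0, 0.1, 100_000)      # uncertain input of module A
k = rng.uniform(0.9, 1.1, 100_000)     # uncertain parameter of module B
z = module_b(module_a(x), k)           # coupled system output samples

print(f"mean = {z.mean():.4f}, std = {z.std():.4f}")
```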

  9. Basic concepts in computational physics

    CERN Document Server

    Stickler, Benjamin A

    2016-01-01

    This new edition is a concise introduction to the basic methods of computational physics. Readers will discover the benefits of numerical methods for solving complex mathematical problems and for the direct simulation of physical processes. The book is divided into two main parts: deterministic methods and stochastic methods in computational physics. Based on concrete problems, the first part discusses numerical differentiation and integration, as well as the treatment of ordinary differential equations. This is extended by a brief introduction to the numerics of partial differential equations. The second part deals with the generation of random numbers, summarizes the basics of stochastics, and subsequently introduces Monte-Carlo (MC) methods. Specific emphasis is on Markov chain MC algorithms. The final two chapters discuss data analysis and stochastic optimization. All this is again motivated and augmented by applications from physics. In addition, the book offers a number of appendices to provide the read...
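
    Given the book's emphasis on Markov chain MC, a compact example may be useful: the Metropolis algorithm sampling a one-dimensional standard normal density (a generic textbook illustration, not code from the book):

```python
import numpy as np

# Metropolis algorithm: sample from an (unnormalized) target density.
rng = np.random.default_rng(1)
log_target = lambda x: -0.5 * x**2     # standard normal, up to a constant

x, chain = 0.0, []
for _ in range(50_000):
    prop = x + rng.normal(0.0, 1.0)    # symmetric random-walk proposal
    # Accept with probability min(1, target(prop) / target(x)):
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        x = prop
    chain.append(x)

chain = np.array(chain[5_000:])        # discard burn-in
print(f"mean ~ {chain.mean():.3f}, var ~ {chain.var():.3f}")  # ~0 and ~1
```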

  10. Developing affordable multi-touch technologies for use in physics

    Science.gov (United States)

    Potter, Mark; Ilie, Carolina; Schofield, Damian; Vampola, David

    2012-02-01

    Physics is one of many areas that can benefit from a number of different teaching styles and sophisticated instructional tools, since it has both theoretical and practical applications which can be explored. The purpose of this research is to develop affordable large-scale multi-touch interfaces which can be used within and outside of the classroom as both an instructional technology and a computer-supported collaborative learning tool. Not only can this technology be implemented at the university level, but also at the K-12 level of education. Pedagogical research indicates that kinesthetic learning is a fundamental, powerful, and ubiquitous learning style [1]. Through the use of these types of multi-touch tools and teaching methods which incorporate them, the classroom can be enriched to allow for better comprehension and retention of information. This is due in part to a wider range of learning styles, such as kinesthetic learning, being catered to within the classroom. [1] Wieman, C. E., Perkins, K. K., Adams, W. K., ``Oersted Medal Lecture 2007: Interactive simulations for teaching physics: What works, what doesn't and why,'' American Journal of Physics 76, 393-99.

  11. GRID computing for experimental high energy physics

    International Nuclear Information System (INIS)

    Moloney, G.R.; Martin, L.; Seviour, E.; Taylor, G.N.; Moorhead, G.F.

    2002-01-01

    Full text: The Large Hadron Collider (LHC), to be completed at the CERN laboratory in 2006, will generate 11 petabytes of data per year. The processing of this large data stream requires a large, distributed computing infrastructure. A recent innovation in high-performance distributed computing, the GRID, has been identified as an important tool in data analysis for the LHC. GRID computing has actual and potential application in many fields which require computationally intensive analysis of large, shared data sets. The Australian experimental High Energy Physics community has formed partnerships with the High Performance Computing community to establish a GRID node at the University of Melbourne. Through Australian membership of the ATLAS experiment at the LHC, Australian researchers have an opportunity to be involved in the European DataGRID project. This presentation will include an introduction to the GRID and its application to experimental High Energy Physics. We will present the results of our studies, including participation in the first LHC data challenge

  12. A Multi-scale Modeling System with Unified Physics to Study Precipitation Processes

    Science.gov (United States)

    Tao, W. K.

    2017-12-01

    In recent years, exponentially increasing computer power has extended Cloud Resolving Model (CRM) integrations from hours to months, and the number of computational grid points from less than a thousand to close to ten million. Three-dimensional models are now more prevalent. Much attention is devoted to precipitating cloud systems where the crucial 1-km scales are resolved in horizontal domains as large as 10,000 km in two dimensions, and 1,000 x 1,000 km² in three dimensions. Cloud-resolving models now provide statistical information useful for developing more realistic, physically based parameterizations for climate models and numerical weather prediction models. It is also expected that NWP and mesoscale models can be run at grid sizes similar to cloud-resolving models through nesting techniques. Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA unified Weather Research and Forecasting model, WRF), and (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF). The same microphysical processes, long- and short-wave radiative transfer, land processes, and explicit cloud-radiation and cloud-land surface interaction processes are applied across this multi-scale modeling system. The modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, results from using the multi-scale modeling system to study precipitation processes and their sensitivity to model resolution and microphysics schemes will be presented, as well as how the multi-satellite simulator can be used to improve the representation of precipitation processes.

  13. Computational and Physical Analysis of Catalytic Compounds

    Science.gov (United States)

    Wu, Richard; Sohn, Jung Jae; Kyung, Richard

    2015-03-01

    Nanoparticles exhibit unique physical and chemical properties depending on their geometrical properties. For this reason, synthesis of nanoparticles with controlled shape and size is important for exploiting their unique properties. Catalyst supports are usually made of high-surface-area porous oxides or carbon nanomaterials. These support materials stabilize metal catalysts against sintering at high reaction temperatures. Many studies have demonstrated large enhancements of catalytic behavior due to the role of the oxide-metal interface. In this paper, the catalyzing ability of supported nano metal oxides, such as silicon oxide and titanium oxide compounds, has been analyzed using computational chemistry methods. Computational programs such as Gamess and Chemcraft have been used to compute the efficiencies of the catalytic compounds and the bonding energy changes during the optimization convergence. The results illustrate how the metal oxides stabilize and the steps involved. The plot of computation step (N) versus energy (kcal/mol) shows that the energy of titania converges by the 7th iteration of the calculation, whereas silica converges by the 9th iteration.

  14. Electromagnetic Physics Models for Parallel Computing Architectures

    International Nuclear Information System (INIS)

    Amadio, G; Bianchini, C; Iope, R; Ananya, A; Apostolakis, J; Aurora, A; Bandieramonte, M; Brun, R; Carminati, F; Gheata, A; Gheata, M; Goulas, I; Nikitina, T; Bhattacharyya, A; Mohanty, A; Canal, P; Elvira, D; Jun, S Y; Lima, G; Duhem, L

    2016-01-01

    The recent emergence of hardware architectures characterized by many-core or accelerated processors has opened new opportunities for concurrent programming models taking advantage of both SIMD and SIMT architectures. GeantV, a next generation detector simulation, has been designed to exploit both the vector capability of mainstream CPUs and multi-threading capabilities of coprocessors including NVidia GPUs and Intel Xeon Phi. The characteristics of these architectures are very different in terms of the vectorization depth and type of parallelization needed to achieve optimal performance. In this paper we describe the implementation of electromagnetic physics models developed for parallel computing architectures as part of the GeantV project. Results of preliminary performance evaluation and physics validation are presented as well. (paper)

  15. Model Reduction of Computational Aerothermodynamics for Multi-Discipline Analysis in High Speed Flows

    Science.gov (United States)

    Crowell, Andrew Rippetoe

    This dissertation describes model reduction techniques for the computation of aerodynamic heat flux and pressure loads for multi-disciplinary analysis of hypersonic vehicles. NASA and the Department of Defense have expressed renewed interest in the development of responsive, reusable hypersonic cruise vehicles capable of sustained high-speed flight and access to space. However, an extensive set of technical challenges have obstructed the development of such vehicles. These technical challenges are partially due to both the inability to accurately test scaled vehicles in wind tunnels and to the time intensive nature of high-fidelity computational modeling, particularly for the fluid using Computational Fluid Dynamics (CFD). The aim of this dissertation is to develop efficient and accurate models for the aerodynamic heat flux and pressure loads to replace the need for computationally expensive, high-fidelity CFD during coupled analysis. Furthermore, aerodynamic heating and pressure loads are systematically evaluated for a number of different operating conditions, including: simple two-dimensional flow over flat surfaces up to three-dimensional flows over deformed surfaces with shock-shock interaction and shock-boundary layer interaction. An additional focus of this dissertation is on the implementation and computation of results using the developed aerodynamic heating and pressure models in complex fluid-thermal-structural simulations. Model reduction is achieved using a two-pronged approach. One prong focuses on developing analytical corrections to isothermal, steady-state CFD flow solutions in order to capture flow effects associated with transient spatially-varying surface temperatures and surface pressures (e.g., surface deformation, surface vibration, shock impingements, etc.). The second prong is focused on minimizing the computational expense of computing the steady-state CFD solutions by developing an efficient surrogate CFD model. The developed two
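
    The surrogate-CFD prong can be illustrated generically: fit an inexpensive response surface to a few expensive steady-state CFD samples, then evaluate it inside the coupled fluid-thermal-structural loop. The sketch below uses a quadratic fit of heat flux versus wall temperature with made-up sample values; the dissertation's actual surrogates and unsteady corrections are considerably richer.

```python
import numpy as np

# Toy surrogate: fit aerodynamic heat flux vs. wall temperature from a few
# "expensive" steady-state samples, then evaluate cheaply in a coupled loop.

T_samples = np.array([300.0, 600.0, 900.0, 1200.0])   # wall temps [K]
q_samples = np.array([95.0, 80.0, 62.0, 41.0])        # heat flux [kW/m^2]
                                                      # (illustrative values)
coeff = np.polyfit(T_samples, q_samples, deg=2)       # quadratic fit

def q_surrogate(T_wall):
    """Cheap replacement for a CFD evaluation of the heat flux."""
    return np.polyval(coeff, T_wall)

print(f"q(750 K) ~ {q_surrogate(750.0):.1f} kW/m^2")
```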

  16. Physics Computing '92: Proceedings of the 4th International Conference

    Science.gov (United States)

    de Groot, Robert A.; Nadrchal, Jaroslav

    1993-04-01

    The Table of Contents for the book is as follows: * Preface * INVITED PAPERS * Ab Initio Theoretical Approaches to the Structural, Electronic and Vibrational Properties of Small Clusters and Fullerenes: The State of the Art * Neural Multigrid Methods for Gauge Theories and Other Disordered Systems * Multicanonical Monte Carlo Simulations * On the Use of the Symbolic Language Maple in Physics and Chemistry: Several Examples * Nonequilibrium Phase Transitions in Catalysis and Population Models * Computer Algebra, Symmetry Analysis and Integrability of Nonlinear Evolution Equations * The Path-Integral Quantum Simulation of Hydrogen in Metals * Digital Optical Computing: A New Approach of Systolic Arrays Based on Coherence Modulation of Light and Integrated Optics Technology * Molecular Dynamics Simulations of Granular Materials * Numerical Implementation of a K.A.M. Algorithm * Quasi-Monte Carlo, Quasi-Random Numbers and Quasi-Error Estimates * What Can We Learn from QMC Simulations * Physics of Fluctuating Membranes * Plato, Apollonius, and Klein: Playing with Spheres * Steady States in Nonequilibrium Lattice Systems * CONVODE: A REDUCE Package for Differential Equations * Chaos in Coupled Rotators * Symplectic Numerical Methods for Hamiltonian Problems * Computer Simulations of Surfactant Self Assembly * High-dimensional and Very Large Cellular Automata for Immunological Shape Space * A Review of the Lattice Boltzmann Method * Electronic Structure of Solids in the Self-interaction Corrected Local-spin-density Approximation * Dedicated Computers for Lattice Gauge Theory Simulations * Physics Education: A Survey of Problems and Possible Solutions * Parallel Computing and Electronic-Structure Theory * High Precision Simulation Techniques for Lattice Field Theory * CONTRIBUTED PAPERS * Case Study of Microscale Hydrodynamics Using Molecular Dynamics and Lattice Gas Methods * Computer Modelling of the Structural and Electronic Properties of the Supported Metal Catalysis

  17. Computer-based multi-channel analyzer based on internet

    International Nuclear Information System (INIS)

    Zhou Xinzhi; Ning Jiaoxian

    2001-01-01

    Combining Internet technology with the computer-based multi-channel analyzer, a new kind of browser-based multi-channel analyzer system is presented. Its framework and principle, as well as its implementation, are discussed

  18. Open-Source Java for Teaching Computational Physics

    Science.gov (United States)

    Wolfgang, Christian; Gould, Harvey; Gould, Joshua; Tobochnik, Jan

    2001-11-01

    The switch from procedural to object-oriented (OO) programming has produced dramatic changes in professional software design. OO techniques have not, however, been widely adopted in computational physics. Although most physicists are familiar with procedural languages such as Fortran, few physicists have formal training in computer science and few therefore have made the switch to OO programming. The continued use of procedural languages in education is due, in part, to the lack of up-to-date curricular materials that combine current computational physics research topics with an OO framework. This talk describes an Open-Source curriculum development project to produce such material. Examples will be presented that show how OO techniques can be used to encapsulate the relevant physics, the analysis, and the associated numerical methods.

  19. Subcooled decompression analysis of the ROSA and the LOFT semiscale blowdown test data with the digital computer code DEPCO-MULTI

    International Nuclear Information System (INIS)

    Namatame, Ken; Kobayashi, Kensuke

    1975-12-01

    In the ROSA (Rig of Safety Assessment) program, the digital computer codes DEPCO-SINGLE and DEPCO-MULTI (Subcooled Decompression Process in Loss-of-Coolant Accident - Single Pipe and - Multiple Pipe Network) were prepared to study the thermo-hydraulic behavior of the primary coolant during the subcooled decompression phase of a PWR LOCA. The analytical results with DEPCO-MULTI on the subcooled decompression phenomena are presented for the ROSA-I, ROSA-II and LOFT 500, 600, 700 and 800 series experiments. The effects of spatial mesh length, the elasticity of pressure-boundary materials, and the simplification of the computational piping system on the computed results are described. This is the final work on subcooled decompression analysis in the ROSA program, and the authors expect that the present code will be examined further against data from more advanced experiments. (auth.)

  20. Computational advances in transition phase analysis

    International Nuclear Information System (INIS)

    Morita, K.; Kondo, S.; Tobita, Y.; Shirakawa, N.; Brear, D.J.; Fischer, E.A.

    1994-01-01

    In this paper, a historical perspective on, and recent advances in, computational technologies to evaluate the transition phase of core disruptive accidents in liquid-metal fast reactors are reviewed. An analysis of the transition phase requires treatment of multi-phase multi-component thermohydraulics coupled with space- and energy-dependent neutron kinetics. Such a comprehensive modeling effort was initiated when the program of SIMMER-series computer code development began in the late 1970s in the USA. Successful application of the latest SIMMER-II in the USA, western Europe and Japan has proved its effectiveness but, at the same time, identified several areas that require further research. Based on the experience and lessons learned during the SIMMER-II application through the 1980s, a new project of SIMMER-III development is underway at the Power Reactor and Nuclear Fuel Development Corporation (PNC), Japan. The models and methods of SIMMER-III are briefly described with emphasis on recent advances in multi-phase multi-component fluid dynamics technologies and their expected implication for a future reliable transition phase analysis. (author)

  1. Computing in high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Watase, Yoshiyuki

    1991-09-15

    The increasingly important role played by computing and computers in high energy physics is displayed in the 'Computing in High Energy Physics' series of conferences, bringing together experts in different aspects of computing - physicists, computer scientists, and vendors.

  2. Information technology and computational physics

    CERN Document Server

    Kóczy, László; Mesiar, Radko; Kacprzyk, Janusz

    2017-01-01

    A broad spectrum of modern Information Technology (IT) tools, techniques, main developments and still open challenges is presented. Emphasis is on new research directions in various fields of science and technology that are related to data analysis, data mining, knowledge discovery, information retrieval, clustering and classification, decision making and decision support, control, computational mathematics and physics, to name a few. Applications in many relevant fields are presented, notably in telecommunication, social networks, recommender systems, fault detection, robotics, image analysis and recognition, electronics, etc. The methods used by the authors range from high level formal mathematical tools and techniques, through algorithmic and computational tools, to modern metaheuristics.

  3. Computing in high energy physics

    International Nuclear Information System (INIS)

    Watase, Yoshiyuki

    1991-01-01

    The increasingly important role played by computing and computers in high energy physics is displayed in the 'Computing in High Energy Physics' series of conferences, bringing together experts in different aspects of computing - physicists, computer scientists, and vendors

  4. Computation of thermodynamic equilibria of nuclear materials in multi-physics codes

    International Nuclear Information System (INIS)

    Piro, M.H.; Lewis, B.J.; Thompson, W.T.; Simunovic, S.; Besmann, T.M.

    2011-01-01

    A new equilibrium thermodynamic solver is being developed with the primary impetus of direct integration into nuclear fuel performance and safety codes to provide improved predictions of fuel behavior. This solver is intended to provide boundary conditions and material properties for continuum transport calculations. There are several legitimate concerns with the use of existing commercial thermodynamic codes: 1) licensing entanglements associated with code distribution, 2) computational performance, and 3) limited capabilities of handling large multi-component systems of interest to the nuclear industry. The development of this solver is specifically aimed at addressing these concerns. In support of this goal, a new numerical algorithm for computing chemical equilibria is presented which is not based on the traditional steepest descent method or 'Gibbs energy minimization' technique. This new approach exploits fundamental principles of equilibrium thermodynamics, which simplifies the optimization equations. The chemical potentials of all species and phases in the system are constrained by the system chemical potentials, and the objective is to minimize the residuals of the mass balance equations. Several numerical advantages are achieved through this simplification, as described in this paper. (author)
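
    The approach as described, constraining species chemical potentials by the system (element) potentials and minimizing the mass-balance residuals, can be sketched for an ideal-gas mixture: mole fractions follow directly from the element potentials, and a small least-squares solve closes the element balances. This is a generic illustration with invented standard-state potentials, not the solver presented in the paper.

```python
import numpy as np
from scipy.optimize import least_squares

# Element-potential sketch for an ideal mixture: species mole fractions are
# dictated by element potentials (gam), and we drive the element balances
# (plus sum-of-fractions) to zero. mu0 values (in units of RT) are made up.

A = np.array([[2, 0],    # H2  : 2 H
              [0, 2],    # O2  : 2 O
              [2, 1]])   # H2O : 2 H + 1 O
mu0 = np.array([0.0, 0.0, -20.0])   # H2O strongly favored (illustrative)
b = np.array([2.0, 1.0])            # element abundances: 2 H, 1 O

def residuals(v):
    gam, n_tot = v[:2], v[2]
    x = np.exp(A @ gam - mu0)       # mole fractions from element potentials
    return np.concatenate(([x.sum() - 1.0],          # fractions sum to one
                           A.T @ (n_tot * x) - b))   # element mass balances

sol = least_squares(residuals, x0=[-8.0, -4.0, 1.0])
gam, n_tot = sol.x[:2], sol.x[2]
x = np.exp(A @ gam - mu0)
print("mole fractions (H2, O2, H2O):", x.round(6), " n_tot =", round(n_tot, 4))
```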

  5. Computations in plasma physics

    International Nuclear Information System (INIS)

    Cohen, B.I.; Killeen, J.

    1984-01-01

    A review of computer applications in plasma physics is presented. The contribution of computers to the investigation of magnetic and inertial confinement of a plasma and to charged particle beam propagation is described. Typical uses of computers for simulation and control of laboratory and cosmic experiments with a plasma, and for data accumulation in these experiments, are considered. Basic computational methods applied in plasma physics are discussed. Future trends of computer utilization in plasma research are considered in terms of the increasing role of microprocessors and high-speed data plotters and the necessity of more powerful computer applications

  6. Simulated, Emulated, and Physical Investigative Analysis (SEPIA) of networked systems.

    Energy Technology Data Exchange (ETDEWEB)

    Burton, David P.; Van Leeuwen, Brian P.; McDonald, Michael James; Onunkwo, Uzoma A.; Tarman, Thomas David; Urias, Vincent E.

    2009-09-01

    This report describes recent progress made in developing and utilizing hybrid Simulated, Emulated, and Physical Investigative Analysis (SEPIA) environments. Many organizations require advanced tools to analyze their information system's security, reliability, and resilience against cyber attack. Today's security analyses utilize real systems such as computers, network routers and other network equipment, computer emulations (e.g., virtual machines) and simulation models separately to analyze the interplay between threats and safeguards. In contrast, this work developed new methods to combine these three approaches to provide integrated hybrid SEPIA environments. Our SEPIA environments enable an analyst to rapidly configure hybrid environments that pass network traffic and perform, from the outside, like real networks. This provides higher-fidelity representations of key network nodes while still leveraging the scalability and cost advantages of simulation tools. The result is the ability to rapidly produce large yet relatively low-cost multi-fidelity SEPIA networks of computers and routers that let analysts quickly investigate threats and test protection approaches.

  7. Efficient Multi-Party Computation over Rings

    DEFF Research Database (Denmark)

    Cramer, Ronald; Fehr, Serge; Ishai, Yuval

    2003-01-01

    Secure multi-party computation (MPC) is an active research area, and a wide range of literature can be found nowadays suggesting improvements and generalizations of existing protocols in various directions. However, all current techniques for secure MPC apply to functions that are represented by (boolean or arithmetic) circuits over finite fields. We are motivated by two limitations of these techniques: – Generality. Existing protocols do not apply to computation over more general algebraic structures (except via a brute-force simulation of computation in these structures). – Efficiency. The best ... We demonstrate the usefulness of the above results by presenting a novel application of MPC over (non-field) rings to the round-efficient secure computation of the maximum function. Basic Research in Computer Science (www.brics.dk), funded by the Danish National Research Foundation.

  8. Information-preserving models of physics and computation: Final report

    International Nuclear Information System (INIS)

    1986-01-01

    This research pertains to discrete dynamical systems, as embodied by cellular automata, reversible finite-difference equations, and reversible computation. The research has strengthened the cross-fertilization between physics, computer science and discrete mathematics. It has shown that methods and concepts of physics can be exported to computation. Conversely, fully discrete dynamical systems have been shown to be fruitful for representing physical phenomena usually described with differential equations - cellular automata for fluid dynamics has been the most noted example of such a representation. At the practical level, the fully discrete representation approach suggests innovative uses of computers for scientific computing. The originality of these uses lies in their non-numerical nature: they avoid the inaccuracies of floating-point arithmetic and bypass the need for numerical analysis. 38 refs

  9. Computational physics an introduction

    CERN Document Server

    Vesely, Franz J

    1994-01-01

    Author Franz J. Vesely offers students an introductory text on computational physics, providing them with the important basic numerical/computational techniques. His unique text sets itself apart from others by focusing on specific problems of computational physics. The author also provides a selection of modern fields of research. Students will benefit from the appendixes which offer a short description of some properties of computing and machines and outline the technique of 'Fast Fourier Transformation.'

  10. Computer-aided engineering in High Energy Physics

    International Nuclear Information System (INIS)

    Bachy, G.; Hauviller, C.; Messerli, R.; Mottier, M.

    1988-01-01

    Computing, a standard tool for a long time in the High Energy Physics community, is slowly being introduced at CERN in the mechanical engineering field. The first major application was structural analysis, followed by Computer-Aided Design (CAD). Development work is now progressing towards Computer-Aided Engineering around a powerful database. This paper gives examples of the power of this approach applied to engineering for accelerators and detectors

  11. Computational Analysis of Multi-Rotor Flows

    Science.gov (United States)

    Yoon, Seokkwan; Lee, Henry C.; Pulliam, Thomas H.

    2016-01-01

    Interactional aerodynamics of multi-rotor flows has been studied for a quadcopter representing a generic quad tilt-rotor aircraft in hover. The objective of the present study is to investigate the effects of the separation distances between rotors, and also of the fuselage and wings, on the performance and efficiency of multi-rotor systems. Three-dimensional unsteady Navier-Stokes equations are solved using a spatially 5th-order accurate scheme, dual-time stepping, and the Detached Eddy Simulation turbulence model. The results show that the separation distances as well as the wings have significant effects on the vertical forces of quadrotor systems in hover. Understanding interactions in multi-rotor flows would help improve the design of next-generation multi-rotor drones.

  12. Computational Physics' Greatest Hits

    Science.gov (United States)

    Bug, Amy

    2011-03-01

    The digital computer has worked its way so effectively into our profession that now, roughly 65 years after its invention, it is virtually impossible to find a field of experimental or theoretical physics unaided by computational innovation. It is tough to think of another device about which one can make that claim. In the session ``What is computational physics?'' speakers will distinguish computation within the field of computational physics from this ubiquitous importance across all subfields of physics. This talk will recap the invited session ``Great Advances...Past, Present and Future'' in which five dramatic areas of discovery (five of our ``greatest hits'') are chronicled: the physics of many-boson systems via Path Integral Monte Carlo, the thermodynamic behavior of a huge number of diverse systems via Monte Carlo methods, the discovery of new pharmaceutical agents via molecular dynamics, predictive simulations of global climate change via detailed, cross-disciplinary earth system models, and an understanding of the formation of the first structures in our universe via galaxy formation simulations. The talk will also identify ``greatest hits'' in our field from the teaching and research perspectives of other members of DCOMP, including its Executive Committee.

  13. Specification of the Advanced Burner Test Reactor Multi-Physics Coupling Demonstration Problem

    Energy Technology Data Exchange (ETDEWEB)

    Shemon, E. R. [Argonne National Lab. (ANL), Argonne, IL (United States); Grudzinski, J. J. [Argonne National Lab. (ANL), Argonne, IL (United States); Lee, C. H. [Argonne National Lab. (ANL), Argonne, IL (United States); Thomas, J. W. [Argonne National Lab. (ANL), Argonne, IL (United States); Yu, Y. Q. [Argonne National Lab. (ANL), Argonne, IL (United States)

    2015-12-21

    This document specifies the multi-physics nuclear reactor demonstration problem using the SHARP software package developed by NEAMS. The SHARP toolset simulates the key coupled physics phenomena inside a nuclear reactor. The PROTEUS neutronics code models the neutron transport within the system, the Nek5000 computational fluid dynamics code models the fluid flow and heat transfer, and the DIABLO structural mechanics code models structural and mechanical deformation. The three codes are coupled to the MOAB mesh framework which allows feedback from neutronics, fluid mechanics, and mechanical deformation in a compatible format.
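
    At a very high level, the coupled calculation can be pictured as a driver that alternates the three field solves and exchanges fields through the shared mesh representation. The Python pseudodriver below is purely illustrative of that data flow; every function in it is an invented stand-in, not the SHARP, PROTEUS, Nek5000, DIABLO or MOAB API.

```python
import numpy as np

# Hypothetical driver loop illustrating the neutronics/fluids/structures
# data flow through a shared mesh; every function is a placeholder.

def neutronics_solve(xs_feedback):
    # Stand-in for the transport solve: power density on the shared mesh.
    return 1.0 + 0.1 * np.tanh(xs_feedback)

def thermal_fluids_solve(power):
    # Stand-in for the CFD solve: temperature field from the power field.
    return 600.0 + 50.0 * power

def structures_solve(temperature):
    # Stand-in for the mechanics solve: thermal-expansion displacement.
    return 1.2e-5 * (temperature - 600.0)

feedback = np.zeros(1000)                   # initial cross-section feedback
for outer in range(20):                     # Picard iterations over physics
    power = neutronics_solve(feedback)
    temp = thermal_fluids_solve(power)
    disp = structures_solve(temp)
    new_feedback = (temp - 600.0) / 100.0 + 10.0 * disp
    if np.max(np.abs(new_feedback - feedback)) < 1e-8:
        break
    feedback = new_feedback
print(f"converged in {outer + 1} iterations")
```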

  14. Methods of physical experiment and installation automation on the base of computers

    International Nuclear Information System (INIS)

    Stupin, Yu.V.

    1983-01-01

    Peculiarities of using computers for the automation of physical experiments and installations are considered. Systems for data acquisition and processing based on microprocessors, micro- and mini-computers, CAMAC equipment and real-time operating systems, as well as systems intended for the automation of physical experiments on accelerators and on installations for laser thermonuclear fusion and plasma investigation, are described. The problems of multi-machine complexes and multi-user systems, the development of automated systems for collective use, the arrangement of inter-machine data exchange, and the control of experimental databases are discussed. Data on software systems used for complex experimental data processing are presented. It is concluded that the application of new computers, combined with the new possibilities offered to users by universal operating systems, substantially increases the efficiency of a scientist's work

  15. Practice in multi-disciplinary computing. Transonic aero-structural dynamics of semi-monocoque wing

    International Nuclear Information System (INIS)

    Onishi, Ryoichi; Guo, Zhihong; Kimura, Toshiya; Iwamiya, Toshiyuki

    2000-01-01

    The Japan Atomic Energy Research Institute is currently involved in expanding the application areas of its distributed parallel computing facility. One of the most anticipated areas of application is the multi-disciplinary interaction problem. This paper introduces the status of the system for fluid-structural interaction analysis on the institute's parallel computers, exploring multi-disciplinary engineering methodology. The current application is focused on a transonic aero-elastic analysis of a three-dimensional wing. The distinctive features of the system are: (1) simultaneous execution of fluid and structural codes by exploiting distributed-and-parallel processing technologies; (2) construction of a computational fluid (aero)-structural dynamics model which combines a flow-field grid with a wing structure composed of the external surface and the internal reinforcements. The purpose of this paper is to summarize the basic concepts, analytical methods, and their implementations, along with the computed aero-structural properties of a swept-back wing at a Mach 0.7 flow condition. (author)

  16. Multi-VO support in IHEP's distributed computing environment

    International Nuclear Information System (INIS)

    Yan, T; Suo, B; Zhao, X H; Zhang, X M; Ma, Z T; Yan, X F; Lin, T; Deng, Z Y; Li, W D; Belov, S; Pelevanyuk, I; Zhemchugov, A; Cai, H

    2015-01-01

    Inspired by the success of BESDIRAC, the distributed computing environment based on DIRAC for the BESIII experiment, several other experiments operated by the Institute of High Energy Physics (IHEP), such as the Circular Electron Positron Collider (CEPC), the Jiangmen Underground Neutrino Observatory (JUNO), the Large High Altitude Air Shower Observatory (LHAASO) and the Hard X-ray Modulation Telescope (HXMT), are willing to use DIRAC to integrate the geographically distributed computing resources made available by their collaborations. In order to minimize manpower and hardware cost, we extended the BESDIRAC platform to support a multi-VO scenario, instead of setting up a self-contained distributed computing environment for each VO. This makes DIRAC a service for the community of those experiments. To support multi-VO, the system architecture of BESDIRAC is adjusted for scalability. The VOMS and DIRAC servers are reconfigured to manage users and groups belonging to several VOs. A lightweight storage resource manager, StoRM, is employed as the central SE to integrate local and grid data. A frontend system is designed for users' massive job splitting, submission and management, with plugins to support new VOs. A monitoring and accounting system is also considered to ease system administration and VO-related resource usage accounting. (paper)

  17. Multi-detector computed tomography of acute abdomen

    International Nuclear Information System (INIS)

    Leschka, Sebastian; Alkadhi, Hatem; Wildermuth, Simon; Marincek, Borut; University Hospital of Zurich

    2005-01-01

    Acute abdominal pain is one of the most common causes for referrals to the emergency department. The sudden onset of severe abdominal pain characterising the ''acute abdomen'' requires rapid and accurate identification of a potentially life-threatening abdominal pathology to provide a timely referral to the appropriate physician. While the physical examination and laboratory investigations are often non-specific, computed tomography (CT) has evolved as the first-line imaging modality in patients with an acute abdomen. Because the new multi-detector CT (MDCT) scanner generations provide increased speed, greater volume coverage and thinner slices, the acceptance of CT for abdominal imaging has increased rapidly. The goal of this article is to discuss the role of MDCT in the diagnostic work-up of acute abdominal pain. (orig.)

  18. Mapping University Students' Epistemic Framing of Computational Physics Using Network Analysis

    Science.gov (United States)

    Bodin, Madelen

    2012-01-01

    Solving physics problems in university physics education using a computational approach requires knowledge and skills in several domains, for example, physics, mathematics, programming, and modeling. These competences are in turn related to students' beliefs about the domains as well as about learning. These knowledge and beliefs components are…

  19. A Multi-area Model of a Physical Protection System for a Vulnerability Assessment

    International Nuclear Information System (INIS)

    Jang, Sung Soon; Yoo, Ho Sik

    2008-01-01

    A physical protection system (PPS) integrates people, procedures and equipment for the protection of assets or facilities against theft, sabotage or other malevolent human attacks. Among critical facilities, nuclear facilities and nuclear weapon sites require the highest level of PPS. After the September 11, 2001 terrorist attacks, international communities, including the IAEA, have made substantial efforts to protect nuclear material and nuclear facilities. These efforts include the Nuclear Security Fund established by the IAEA in 2002 and the Global Initiative to Combat Nuclear Terrorism launched by the USA and Russia in 2006. Without a regular assessment, a PPS might waste valuable resources on unnecessary protection or, worse yet, fail to provide adequate protection at critical points of a facility. Due to the complexity of protection systems, the assessment usually requires computer modeling techniques. Several codes have been developed to model and analyze a PPS. We also devised and implemented a new analysis method, named Systematic Analysis of physical Protection Effectiveness (SAPE). The SAPE code consumes much time when analyzing a PPS over a large area in detail, because it uses meshes of equal size for the analysis of a 2D map. The analysis is more accurate when smaller meshes are used; however, the analysis time is roughly proportional to the exponential of the number of meshes. Thus, speed and accuracy are in a trade-off relation. In this paper, we suggest a multi-area model of a PPS for a vulnerability assessment to solve this problem. Using multiple areas with different scales, we can analyze a PPS accurately near a target and more roughly over a large area
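
    The multi-area idea can be sketched numerically: evaluate the protection-effectiveness field on a coarse mesh over the whole site, and switch to a fine mesh only near the target. The toy example below (placeholder sensor model and dimensions) shows the mesh-count saving that motivates the model:

```python
import numpy as np

# Two-resolution sketch: evaluate a detection-probability field coarsely
# over the whole site, then re-evaluate a fine mesh only near the target.

def detection(x, y):
    # Placeholder sensor model: coverage decays with distance from a sensor.
    return np.exp(-((x - 50)**2 + (y - 50)**2) / 800.0)

# Coarse pass over the full 100 m x 100 m site (10 m cells).
xs, ys = np.meshgrid(np.arange(0, 100, 10.0), np.arange(0, 100, 10.0))
coarse = detection(xs, ys)

# Fine pass (1 m cells) restricted to a 20 m x 20 m block around the target.
fx, fy = np.meshgrid(np.arange(40, 60, 1.0), np.arange(40, 60, 1.0))
fine = detection(fx, fy)

print(coarse.size, "coarse cells +", fine.size, "fine cells",
      "vs", 100 * 100, "uniformly fine cells")
```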

  20. High energy physics and grid computing

    International Nuclear Information System (INIS)

    Yu Chuansong

    2004-01-01

    The status of the new-generation computing environment for high energy physics experiments is introduced briefly in this paper. The development of high energy physics experiments and the new computing requirements of the experiments are presented. The blueprint of the new-generation computing environment of the LHC experiments, the history of Grid computing, the R and D status of high energy physics grid computing technology, and the network bandwidth needed by the high energy physics grid and its development are described. Grid computing research in the Chinese high energy physics community is introduced at the end. (authors)

  1. A Compute Environment of ABC95 Array Computer Based on Multi-FPGA Chip

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The ABC95 array computer is a multi-function-network computer based on FPGA technology. The multi-function network supports conflict-free processor access to data in memory and processor-to-processor data exchange based on an enhanced MESH network. The ABC95 instruction system includes control instructions, scalar instructions and vector instructions; the network instructions are mainly introduced here. A programming environment for the ABC95 array computer assembly language is designed, and a programming environment for the ABC95 array computer under VC++ is advanced. It includes functions to load ABC95 array computer programs and data, store functions, run functions and so on. In particular, the data type for ABC95 array computer conflict-free access is defined. The results show that these technologies allow programmers to develop software for the ABC95 array computer effectively.

  2. Image matrix processor for fast multi-dimensional computations

    Science.gov (United States)

    Roberson, George P.; Skeate, Michael F.

    1996-01-01

    An apparatus for multi-dimensional computation which comprises a computation engine, including a plurality of processing modules. The processing modules are configured in parallel and compute respective contributions to a computed multi-dimensional image of respective two dimensional data sets. A high-speed, parallel access storage system is provided which stores the multi-dimensional data sets, and a switching circuit routes the data among the processing modules in the computation engine and the storage system. A data acquisition port receives the two dimensional data sets representing projections through an image, for reconstruction algorithms such as encountered in computerized tomography. The processing modules include a programmable local host, by which they may be configured to execute a plurality of different types of multi-dimensional algorithms. The processing modules thus include an image manipulation processor, which includes a source cache, a target cache, a coefficient table, and control software for executing image transformation routines using data in the source cache and the coefficient table and loading resulting data in the target cache. The local host processor operates to load the source cache with a two dimensional data set, loads the coefficient table, and transfers resulting data out of the target cache to the storage system, or to another destination.
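
    The source-cache/coefficient-table/target-cache pattern described above is essentially a gather-accumulate image transform. A small numpy sketch of the idea, with a coefficient table that averages 2x2 source neighborhoods (a toy stand-in for an interpolation or backprojection kernel, not the device's actual routines):

```python
import numpy as np

# Gather-accumulate transform in the style described: for each target pixel,
# combine source pixels using precomputed (offset, weight) coefficient entries.

rng = np.random.default_rng(7)
source = rng.random((64, 64))                 # "source cache": a 2D data set

target = np.zeros((32, 32))                   # "target cache"
offsets = [(0, 0), (0, 1), (1, 0), (1, 1)]    # neighborhood taps
weights = [0.25, 0.25, 0.25, 0.25]            # coefficient-table weights

for (di, dj), w in zip(offsets, weights):
    target += w * source[di::2, dj::2]        # accumulate weighted gathers

print(target.shape, target.mean().round(3), source.mean().round(3))
```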

  3. Towards quantum computation with multi-particle interference

    Energy Technology Data Exchange (ETDEWEB)

    Tamma, Vincenzo; Schleich, Wolfgang P. [Institut fuer Quantenphysik, Universitaet Ulm (Germany); Shih, Yanhua [Univ. of Maryland, Baltimore County, Baltimore, MD (United States). Dept. of Physics

    2012-07-01

    One of the main challenges in quantum computation is the realization of entangled states with a large number of particles. We have experimentally demonstrated a novel factoring algorithm which relies only on optical multi-path interference and on the periodicity properties of Gauss sums with continuous arguments. An interesting implementation of such a method can, in principle, take advantage of matter-wave interferometers characterized by long-time evolution of a BEC in microgravity. A more recent approach to factorization aims to achieve an exponential speed-up without entanglement by exploiting multi-particle m-order interference. In this case, the basic requirement for quantum computation is interference of an exponentially large number of multi-particle amplitudes.
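
    The Gauss-sum periodicity that such interferometric schemes exploit is easy to sketch classically. The snippet below is a numerical illustration under assumed parameters, not the authors' optical implementation: the normalised magnitude of a truncated Gauss sum equals 1 exactly when the trial divisor ℓ divides N, and averages out for most non-factors.

```python
import cmath

def gauss_signal(N, ell, M=20):
    """Normalised |sum_m exp(-2*pi*i*m^2*N/ell)| for m = 0..M."""
    s = sum(cmath.exp(-2j * cmath.pi * m * m * N / ell) for m in range(M + 1))
    return abs(s) / (M + 1)

N = 1073  # = 29 * 37
for ell in range(2, 40):
    if gauss_signal(N, ell) > 0.9:   # true factors sit at exactly 1
        print(ell, round(gauss_signal(N, ell), 3))  # prints 29 and 37
```

    (With a short truncation M, some non-factors can produce "ghost" signals near the threshold; in practice M is chosen large enough to suppress them.)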

  4. Fuzzy preference based interactive fuzzy physical programming and its application in multi-objective optimization

    International Nuclear Information System (INIS)

    Zhang, Xu; Huang, Hong Zhong; Yu, Lanfeng

    2006-01-01

    Interactive Fuzzy Physical Programming (IFPP), developed in this paper, is a new efficient multi-objective optimization method which retains the advantages of physical programming while considering the fuzziness of the designer's preferences. A fuzzy preference function is introduced based on the model of linear physical programming and is used to guide the search for improved solutions by interactive decision analysis. The example of the multi-objective optimization design of the spindle of an internal grinder demonstrates that the improved preference conforms to the subjective desires of the designer.
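
    To make the preference-function idea concrete, here is a minimal sketch of selecting a design by fuzzy preference. The triangular desirability functions and the min-aggregation rule are generic fuzzy-logic choices assumed for illustration; they are not the specific IFPP formulation.

```python
def desirability(value, ideal, tolerable):
    """Fuzzy preference for an objective to be minimised: 1 at or below the
    ideal value, falling linearly to 0 at the tolerable limit."""
    if value <= ideal:
        return 1.0
    if value >= tolerable:
        return 0.0
    return (tolerable - value) / (tolerable - ideal)

# Hypothetical spindle designs scored on (deflection in um, mass in kg).
designs = {"A": (4.0, 12.0), "B": (6.5, 9.0), "C": (9.0, 8.0)}
limits = {"deflection": (3.0, 10.0), "mass": (8.0, 15.0)}  # (ideal, tolerable)

def overall(objectives):
    defl, mass = objectives
    return min(desirability(defl, *limits["deflection"]),
               desirability(mass, *limits["mass"]))   # worst objective governs

best = max(designs, key=lambda name: overall(designs[name]))
print(best, {name: round(overall(obj), 2) for name, obj in designs.items()})
# B {'A': 0.43, 'B': 0.5, 'C': 0.14}
```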

  5. Design and Analysis of a New Hair Sensor for Multi-Physical Signal Measurement

    Directory of Open Access Journals (Sweden)

    Bo Yang

    2016-07-01

    A new hair sensor for multi-physical signal measurements, including acceleration, angular velocity and air flow, is presented in this paper. The entire structure consists of a hair post, a torsional frame and a resonant signal transducer. The hair post is utilized to sense and deliver the physical signals of the acceleration and the air flow rate. The physical signals are converted into frequency signals by the resonant transducer. The structure is optimized through finite element analysis. The simulation results demonstrate that the hair sensor has a frequency of 240 Hz in the first mode for the acceleration or the air flow sense, 3115 Hz in the third and fourth modes for the resonant conversion, and 3467 Hz in the fifth and sixth modes for the angular velocity transformation, respectively. All of these frequencies exhibit a reasonable modal distribution and are separated from interference modes. The input-output analysis of the new hair sensor demonstrates that the scale factor of the acceleration is 12.35 Hz/g, the scale factor of the angular velocity is 0.404 nm/deg/s and the sensitivity of the air flow is 1.075 Hz/(m/s²), which verifies the multifunction sensitive characteristics of the hair sensor. Besides, the structural optimization of the hair post is used to improve the sensitivity of the air flow rate and the acceleration. The analysis results illustrate that a hollow circular hair post can increase the sensitivity of the air flow and a II-shaped hair post can increase the sensitivity of the acceleration. Moreover, the thermal analysis confirms that the frequency-difference scheme for the resonant transducer can largely eliminate the influence of temperature on the measurement accuracy. The air flow analysis indicates that increasing the surface area of the hair post is significantly beneficial for improving the efficiency of signal transmission. In summary, the structure of the new hair sensor is proved to be feasible by…

  6. Computational atomic and nuclear physics

    International Nuclear Information System (INIS)

    Bottcher, C.; Strayer, M.R.; McGrory, J.B.

    1990-01-01

    The evolution of parallel processor supercomputers in recent years provides opportunities to investigate in detail many complex problems, in many branches of physics, which were considered to be intractable only a few years ago. But to take advantage of these new machines, one must have a better understanding of how the computers organize their work than was necessary with previous single processor machines. Equally important, the scientist must have this understanding as well as a good understanding of the structure of the physics problem under study. In brief, a new field of computational physics is evolving, which will be led by investigators who are highly literate both computationally and physically. A Center for Computationally Intensive Problems has been established with the collaboration of the University of Tennessee Science Alliance, Vanderbilt University, and the Oak Ridge National Laboratory. The objective of this Center is to carry out forefront research in computationally intensive areas of atomic, nuclear, particle, and condensed matter physics. An important part of this effort is the appropriate training of students. An early effort of this Center was to conduct a Summer School of Computational Atomic and Nuclear Physics. A distinguished faculty of scientists in atomic, nuclear, and particle physics gave lectures on the status of present understanding of a number of topics at the leading edge in these fields, and emphasized those areas where computational physics was in a position to make a major contribution. In addition, there were lectures on numerical techniques which are particularly appropriate for implementation on parallel processor computers and which are of wide applicability in many branches of science

  7. The application of AFS in the high energy physics computing system

    International Nuclear Information System (INIS)

    Xu Dong; Yan Xiaofei; Chen Yaodong; Chen Gang; Yu Chuansong

    2010-01-01

    With the development of high energy physics, physics experiments are producing large amounts of data. The workload of data analysis is very large, and the analysis work needs to be carried out by many scientists together. The computing system must therefore provide more secure user management functions and a higher level of data-sharing ability. This article introduces a solution based on AFS for the high energy physics computing system, which not only makes user management safer but also makes data sharing easier. (authors)

  8. M&C '99: Mathematics and computation, reactor physics and environmental analysis in nuclear applications, Madrid, September 27-30, 1999

    International Nuclear Information System (INIS)

    Aragones, J. M.; Ahnert, C.; Cabellos, O.

    1999-01-01

    The international conference on mathematics and computation, reactor physics and environmental analysis in nuclear applications is the biennial topical meeting of the Mathematics and Computation Division of the American Nuclear Society. (Author)

  9. Multi-GPU Jacobian accelerated computing for soft-field tomography

    International Nuclear Information System (INIS)

    Borsic, A; Attardo, E A; Halter, R J

    2012-01-01

    Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use finite element models (FEMs) to represent the volume of interest and solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are 3D. Although the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in electrical impedance tomography (EIT) applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15–20 min with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Furthermore, providing high-speed reconstructions is essential for some promising clinical application of EIT. For 3D problems, 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In this work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with the use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much faster memory bandwidths compared to CPUs and better parallelism. We are able to obtain acceleration factors of 20 times
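
    For context on why the Jacobian dominates the run time: in EIT-style soft-field problems each Jacobian entry pairs a drive field with a measurement (adjoint) field over one mesh element, so assembly is a huge but embarrassingly parallel gather-and-multiply. The sketch below is a generic vectorised NumPy version of that loop under assumed array shapes; it is not the cited implementation, which offloads an equivalent computation to one or more GPUs.

```python
import numpy as np

# Assumed (hypothetical) problem sizes: n_e mesh elements, n_d drive patterns,
# n_m measurement patterns, 3 spatial gradient components.
n_e, n_d, n_m = 50_000, 16, 16
rng = np.random.default_rng(1)
grad_u = rng.random((n_d, n_e, 3))   # element gradients of the drive fields
grad_v = rng.random((n_m, n_e, 3))   # element gradients of the adjoint fields
vol = rng.random(n_e)                # element volumes

# Sensitivity of measurement (d, m) to the conductivity of element e:
# J[d, m, e] = -vol[e] * grad_u[d, e, :] . grad_v[m, e, :]
J = -np.einsum('dek,mek,e->dme', grad_u, grad_v, vol).reshape(n_d * n_m, n_e)
```

    Because this reduces to streaming large arrays through a multiply-accumulate kernel, CPU throughput is bounded by memory bandwidth, which is why GPUs, with several times the bandwidth, pay off.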

  10. Multi-GPU Jacobian accelerated computing for soft-field tomography.

    Science.gov (United States)

    Borsic, A; Attardo, E A; Halter, R J

    2012-10-01

    Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use finite element models (FEMs) to represent the volume of interest and solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are 3D. Although the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in electrical impedance tomography (EIT) applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15-20 min with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Furthermore, providing high-speed reconstructions is essential for some promising clinical application of EIT. For 3D problems, 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In this work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with the use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much faster memory bandwidths compared to CPUs and better parallelism. We are able to obtain acceleration factors of 20

  11. Physics, Computer Science and Mathematics Division annual report, January 1--December 31, 1976

    International Nuclear Information System (INIS)

    Lepore, J.V.

    1977-01-01

    This annual report of the Physics, Computer Science and Mathematics Division describes the scientific research and other work carried out within the Division during the calendar year 1976. The Division is concerned with work in experimental and theoretical physics, with computer science and applied mathematics, and with the operation of a computer center. The major physics research activity is in high-energy physics; a vigorous program is maintained in this pioneering field. The high-energy physics research program in the Division now focuses on experiments with e⁺e⁻ colliding beams using advanced techniques and developments initiated and perfected at the Laboratory. The Division continues its work in medium energy physics, with experimental work carried out at the Bevatron and at the Los Alamos Pi-Meson Facility. Work in computer science and applied mathematics includes construction of data bases, computer graphics, computational physics and data analysis, mathematical modeling, and mathematical analysis of differential and integral equations resulting from physical problems. The computer center serves the Laboratory by constantly upgrading its facility and by providing day-to-day service. This report is descriptive in nature; references to detailed publications are given

  12. Physics, Computer Science and Mathematics Division annual report, January 1--December 31, 1976

    Energy Technology Data Exchange (ETDEWEB)

    Lepore, J.V. (ed.)

    1977-01-01

    This annual report of the Physics, Computer Science and Mathematics Division describes the scientific research and other work carried out within the Division during the calendar year 1976. The Division is concerned with work in experimental and theoretical physics, with computer science and applied mathematics, and with the operation of a computer center. The major physics research activity is in high-energy physics; a vigorous program is maintained in this pioneering field. The high-energy physics research program in the Division now focuses on experiments with e⁺e⁻ colliding beams using advanced techniques and developments initiated and perfected at the Laboratory. The Division continues its work in medium energy physics, with experimental work carried out at the Bevatron and at the Los Alamos Pi-Meson Facility. Work in computer science and applied mathematics includes construction of data bases, computer graphics, computational physics and data analysis, mathematical modeling, and mathematical analysis of differential and integral equations resulting from physical problems. The computer center serves the Laboratory by constantly upgrading its facility and by providing day-to-day service. This report is descriptive in nature; references to detailed publications are given. (RWR)

  13. Physics, Computer Science and Mathematics Division. Annual report, 1 January--31 December 1977

    International Nuclear Information System (INIS)

    Lepore, J.V.

    1977-01-01

    This annual report of the Physics, Computer Science and Mathematics Division describes the scientific research and other work carried out within the Division during 1977. The Division is concerned with work in experimental and theoretical physics, with computer science and applied mathematics, and with the operation of a computer center. The major physics research activity is in high-energy physics, although there is a relatively small program of medium-energy research. The High Energy Physics research program in the Physics Division is concerned with fundamental research which will enable man to comprehend the nature of the physical world. The major effort is now directed toward experiments with positron-electron colliding beam at PEP. The Medium Energy Physics program is concerned with research using mesons and nucleons to probe the properties of matter. This research is concerned with the study of nuclear structure, nuclear reactions, and the interactions between nuclei and electromagnetic radiation and mesons. The Computer Science and Applied Mathematics Department engages in research in a variety of computer science and mathematics disciplines. Work in computer science and applied mathematics includes construction of data bases, computer graphics, computational physics and data analysis, mathematical modeling, and mathematical analysis of differential and integral equations resulting from physical problems. The Computer Center provides large-scale computational support to LBL's scientific programs. Descriptions of the various activities are quite short; references to published results are given. 24 figures

  14. Biomorphic Multi-Agent Architecture for Persistent Computing

    Science.gov (United States)

    Lodding, Kenneth N.; Brewster, Paul

    2009-01-01

    A multi-agent software/hardware architecture, inspired by the multicellular nature of living organisms, has been proposed as the basis of design of a robust, reliable, persistent computing system. Just as a multicellular organism can adapt to changing environmental conditions and can survive despite the failure of individual cells, a multi-agent computing system, as envisioned, could adapt to changing hardware, software, and environmental conditions. In particular, the computing system could continue to function (perhaps at a reduced but still reasonable level of performance) if one or more component(s) of the system were to fail. One of the defining characteristics of a multicellular organism is unity of purpose. In biology, the purpose is survival of the organism. The purpose of the proposed multi-agent architecture is to provide a persistent computing environment in harsh conditions in which repair is difficult or impossible. A multi-agent, organism-like computing system would be a single entity built from agents or cells. Each agent or cell would be a discrete hardware processing unit that would include a data processor with local memory, an internal clock, and a suite of communication equipment capable of both local line-of-sight communications and global broadcast communications. Some cells, denoted specialist cells, could contain such additional hardware as sensors and emitters. Each cell would be independent in the sense that there would be no global clock, no global (shared) memory, no pre-assigned cell identifiers, no pre-defined network topology, and no centralized brain or control structure. Like each cell in a living organism, each agent or cell of the computing system would contain a full description of the system encoded as genes, but in this case, the genes would be components of a software genome.

  15. High energy physics and cloud computing

    International Nuclear Information System (INIS)

    Cheng Yaodong; Liu Baoxu; Sun Gongxing; Chen Gang

    2011-01-01

    High Energy Physics (HEP) has been a strong promoter of computing technology, for example the WWW (World Wide Web) and grid computing. In the new era of cloud computing, HEP still has strong computing demands, and major international high energy physics laboratories have launched a number of projects to research cloud computing technologies and applications. This paper describes the current developments in cloud computing and its applications in high energy physics. Some ongoing projects in the institutes of high energy physics, Chinese Academy of Sciences, including cloud storage, virtual computing clusters, and the BESⅢ elastic cloud, are also described briefly in the paper. (authors)

  16. Visual physics analysis-from desktop to physics analysis at your fingertips

    International Nuclear Information System (INIS)

    Bretz, H-P; Erdmann, M; Fischer, R; Hinzmann, A; Klingebiel, D; Komm, M; Lingemann, J; Rieger, M; Müller, G; Steggemann, J; Winchen, T

    2012-01-01

    Visual Physics Analysis (VISPA) is an analysis environment with applications in high energy and astroparticle physics. Based on a data-flow-driven paradigm, it allows users to combine graphical steering with self-written C++ and Python modules. This contribution presents new concepts integrated in VISPA: layers, convenient analysis execution, and web-based physics analysis. While convenient execution offers full flexibility to vary settings for the execution phase of an analysis, layers allow different views of the analysis to be created already during its design phase. One application of layers is thus to define different stages of an analysis (e.g. event selection and statistical analysis). There are other use cases, however, such as independently optimizing settings for different types of input data in order to guide all data through the same analysis flow. The new execution feature makes job submission to local clusters as well as the LHC Computing Grid possible directly from VISPA. Web-based physics analysis is realized in the VISPA-Web project, which represents a whole new way to design and execute analyses via a standard web browser.

  17. Computing in high energy physics

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Sarah; Devenish, Robin [Nuclear Physics Laboratory, Oxford University (United Kingdom)

    1989-07-15

    Computing in high energy physics has changed over the years from being something one did on a slide-rule, through early computers, then a necessary evil to the position today where computers permeate all aspects of the subject from control of the apparatus to theoretical lattice gauge calculations. The state of the art, as well as new trends and hopes, was reflected in this year's 'Computing In High Energy Physics' conference held in the dreamy setting of Oxford's spires. The conference aimed to give a comprehensive overview, entailing a heavy schedule of 35 plenary talks plus 48 contributed papers in two afternoons of parallel sessions. In addition to high energy physics computing, a number of papers were given by experts in computing science, in line with the conference's aim – 'to bring together high energy physicists and computer scientists'.

  18. Physics, Computer Science and Mathematics Division. Annual report, January 1-December 31, 1980

    International Nuclear Information System (INIS)

    Birge, R.W.

    1981-12-01

    Research in the physics, computer science, and mathematics division is described for the year 1980. While the division's major effort remains in high energy particle physics, there is a continually growing program in computer science and applied mathematics. Experimental programs are reported in e⁺e⁻ annihilation, muon and neutrino reactions at FNAL, search for effects of a right-handed gauge boson, limits on neutrino oscillations from muon-decay neutrinos, strong interaction experiments at FNAL, strong interaction experiments at BNL, particle data center, Barrelet moment analysis of πN scattering data, astrophysics and astronomy, earth sciences, and instrument development and engineering for high energy physics. In theoretical physics research, studies included particle physics and accelerator physics. Computer science and mathematics research included analytical and numerical methods, information analysis techniques, advanced computer concepts, and environmental and epidemiological studies

  19. Physics, Computer Science and Mathematics Division. Annual report, January 1-December 31, 1980

    Energy Technology Data Exchange (ETDEWEB)

    Birge, R.W.

    1981-12-01

    Research in the physics, computer science, and mathematics division is described for the year 1980. While the division's major effort remains in high energy particle physics, there is a continually growing program in computer science and applied mathematics. Experimental programs are reported in e⁺e⁻ annihilation, muon and neutrino reactions at FNAL, search for effects of a right-handed gauge boson, limits on neutrino oscillations from muon-decay neutrinos, strong interaction experiments at FNAL, strong interaction experiments at BNL, particle data center, Barrelet moment analysis of πN scattering data, astrophysics and astronomy, earth sciences, and instrument development and engineering for high energy physics. In theoretical physics research, studies included particle physics and accelerator physics. Computer science and mathematics research included analytical and numerical methods, information analysis techniques, advanced computer concepts, and environmental and epidemiological studies. (GHT)

  20. Nuclear Physics computer networking: Report of the Nuclear Physics Panel on Computer Networking

    International Nuclear Information System (INIS)

    Bemis, C.; Erskine, J.; Franey, M.; Greiner, D.; Hoehn, M.; Kaletka, M.; LeVine, M.; Roberson, R.; Welch, L.

    1990-05-01

    This paper discusses: the state of computer networking within the nuclear physics program; network requirements for nuclear physics; management structure; and issues of special interest to the nuclear physics program office

  1. Computing in high energy physics

    International Nuclear Information System (INIS)

    Smith, Sarah; Devenish, Robin

    1989-01-01

    Computing in high energy physics has changed over the years from being something one did on a slide-rule, through early computers, then a necessary evil to the position today where computers permeate all aspects of the subject from control of the apparatus to theoretical lattice gauge calculations. The state of the art, as well as new trends and hopes, was reflected in this year's 'Computing In High Energy Physics' conference held in the dreamy setting of Oxford's spires. The conference aimed to give a comprehensive overview, entailing a heavy schedule of 35 plenary talks plus 48 contributed papers in two afternoons of parallel sessions. In addition to high energy physics computing, a number of papers were given by experts in computing science, in line with the conference's aim – 'to bring together high energy physicists and computer scientists'

  2. Computational Physics as a Path for Physics Education

    Science.gov (United States)

    Landau, Rubin H.

    2008-04-01

    Evidence and arguments will be presented that modifications in the undergraduate physics curriculum are necessary to maintain the long-term relevance of physics. Suggested will be a balance of analytic, experimental, computational, and communication skills, which in many cases will require an increased inclusion of computation and its associated skill set into the undergraduate physics curriculum. The general arguments will be followed by a detailed enumeration of suggested subjects and student learning outcomes, many of which have already been adopted or advocated by the computational science community, and which permit high performance computing and communication. Several alternative models for how these computational topics can be incorporated into the undergraduate curriculum will be discussed. This includes enhanced topics in the standard existing courses, as well as stand-alone courses. Applications and demonstrations will be presented throughout the talk, as well as prototype video-based materials and electronic books.

  3. Nanostructure symmetry: Relevance for physics and computing

    International Nuclear Information System (INIS)

    Dupertuis, Marc-André; Oberli, D. Y.; Karlsson, K. F.; Dalessi, S.; Gallinet, B.; Svendsen, G.

    2014-01-01

    We review the research done in recent years in our group on the effects of nanostructure symmetry, and outline its relevance both for nanostructure physics and for computations of their electronic and optical properties. The examples of C3v and C2v quantum dots are used. A number of surprises and non-trivial aspects are outlined, and a few symmetry-based tools for computing and analysis are briefly presented

  4. Nanostructure symmetry: Relevance for physics and computing

    Energy Technology Data Exchange (ETDEWEB)

    Dupertuis, Marc-André; Oberli, D. Y. [Laboratory for Physics of Nanostructure, EPF Lausanne (Switzerland); Karlsson, K. F. [Department of Physics, Chemistry, and Biology (IFM), Linköping University (Sweden); Dalessi, S. [Computational Biology Group, Department of Medical Genetics, University of Lausanne (Switzerland); Gallinet, B. [Nanophotonics and Metrology Laboratory, EPF Lausanne (Switzerland); Svendsen, G. [Dept. of Electronics and Telecom., Norwegian University of Science and Technology, Trondheim (Norway)

    2014-03-31

    We review the research done in recent years in our group on the effects of nanostructure symmetry, and outline its relevance both for nanostructure physics and for computations of their electronic and optical properties. The examples of C3v and C2v quantum dots are used. A number of surprises and non-trivial aspects are outlined, and a few symmetry-based tools for computing and analysis are briefly presented.

  5. Development and verification of the neutron diffusion solver for the GeN-Foam multi-physics platform

    International Nuclear Information System (INIS)

    Fiorina, Carlo; Kerkar, Nordine; Mikityuk, Konstantin; Rubiolo, Pablo; Pautz, Andreas

    2016-01-01

    Highlights: • Development and verification of a neutron diffusion solver based on OpenFOAM. • Integration in the GeN-Foam multi-physics platform. • Implementation and verification of acceleration techniques. • Implementation of isotropic discontinuity factors. • Automatic adjustment of discontinuity factors. - Abstract: The Laboratory for Reactor Physics and Systems Behaviour at the PSI and the EPFL has been developing in recent years a new code system for reactor analysis based on OpenFOAM®. The objective is to supplement available legacy codes with a modern tool featuring state-of-the-art characteristics in terms of scalability, programming approach and flexibility. As part of this project, a new solver has been developed for the eigenvalue and transient solution of multi-group diffusion equations. Several features distinguish the developed solver from other available codes, in particular: object oriented programming to ease code modification and maintenance; modern parallel computing capabilities; use of general unstructured meshes; possibility of mesh deformation; cell-wise parametrization of cross-sections; and arbitrary energy group structure. In addition, the solver is integrated into the GeN-Foam multi-physics solver. The general features of the solver and its integration with GeN-Foam have already been presented in previous publications. The present paper describes the diffusion solver in more details and provides an overview of new features recently implemented, including the use of acceleration techniques and discontinuity factors. In addition, a code verification is performed through a comparison with Monte Carlo results for both a thermal and a fast reactor system.
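
    As a reminder of what the eigenvalue solution of the multi-group diffusion equations involves, here is a minimal two-group, one-dimensional finite-difference power iteration for k-effective. The cross-sections are made up and the iteration is unaccelerated; it only illustrates the class of problem that the described solver treats on unstructured meshes with acceleration techniques and discontinuity factors.

```python
import numpy as np

# Hypothetical homogeneous two-group constants (cm and 1/cm).
D1, D2 = 1.4, 0.4            # diffusion coefficients
Sr1, Sa2 = 0.030, 0.085      # fast removal (absorption + downscatter), thermal absorption
Ss12 = 0.020                 # fast-to-thermal scattering (included in Sr1)
nSf1, nSf2 = 0.006, 0.150    # nu * Sigma_f per group
L, n = 200.0, 200            # slab width (cm), number of cells
h = L / n

def diffusion_operator(D, removal):
    """Tridiagonal finite-difference -D d2/dx2 + removal, zero-flux boundaries."""
    A = np.diag(np.full(n, 2 * D / h**2 + removal))
    off = np.full(n - 1, -D / h**2)
    return A + np.diag(off, 1) + np.diag(off, -1)

A1 = diffusion_operator(D1, Sr1)     # fast group
A2 = diffusion_operator(D2, Sa2)     # thermal group

phi1, phi2, k = np.ones(n), np.ones(n), 1.0
for _ in range(200):                 # plain power iteration on the fission source
    S = nSf1 * phi1 + nSf2 * phi2
    phi1 = np.linalg.solve(A1, S / k)
    phi2 = np.linalg.solve(A2, Ss12 * phi1)
    k *= (nSf1 * phi1 + nSf2 * phi2).sum() / S.sum()
print("k-effective ~", round(k, 5))
```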

  6. Accident sequence analysis of human-computer interface design

    International Nuclear Information System (INIS)

    Fan, C.-F.; Chen, W.-H.

    2000-01-01

    It is important to predict potential accident sequences of human-computer interaction in a safety-critical computing system so that vulnerable points can be disclosed and removed. We address this issue by proposing a Multi-Context human-computer interaction Model along with its analysis techniques, an Augmented Fault Tree Analysis, and a Concurrent Event Tree Analysis. The proposed augmented fault tree can identify the potential weak points in software design that may induce unintended software functions or erroneous human procedures. The concurrent event tree can enumerate possible accident sequences due to these weak points
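
    The fault-tree half of such an analysis reduces to propagating basic-event probabilities through AND/OR gates. The sketch below is a generic fault-tree evaluator for independent basic events, included only to fix the mechanics; the paper's augmented fault tree additionally attaches software and human-interaction context to the tree.

```python
from math import prod

# A node is either a basic-event probability (float) or a gate:
# ("AND" | "OR", [child nodes]). Basic events are assumed independent.
def failure_prob(node):
    if isinstance(node, float):
        return node
    kind, children = node
    p = [failure_prob(c) for c in children]
    if kind == "AND":                            # all children must fail
        return prod(p)
    return 1.0 - prod(1.0 - q for q in p)        # OR: at least one child fails

# Hypothetical top event: the operator misreads the display AND either the
# alarm fails or the confirmation dialog is ambiguous.
tree = ("AND", [0.01, ("OR", [0.001, 0.05])])
print(failure_prob(tree))   # 0.01 * (1 - 0.999 * 0.95) = 5.095e-04
```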

  7. Organization of the secure distributed computing based on multi-agent system

    Science.gov (United States)

    Khovanskov, Sergey; Rumyantsev, Konstantin; Khovanskova, Vera

    2018-04-01

    Nowadays, developing methods for distributed computing receives much attention, and one such method is the use of multi-agent systems. Distributed computing organized over conventional networked computers can be exposed to security threats arising from the computational processes themselves. The authors have developed a unified agent algorithm for a control system governing the operation of computing network nodes, with network PCs used as computing nodes. The proposed multi-agent control system makes it possible, in a short time, to harness the processing power of the computers of any existing network to solve a large task by creating a distributed computation. The agents can configure a distributed computing system, distribute the computational load among the computers they operate, and optimize the distributed computing system according to the computing power of the computers on the network. The number of computers in the system can be increased by connecting new computers to the network, which raises the overall processing power. Adding a central agent to the multi-agent system increases the security of the distributed computation. This organization of the distributed computing system reduces problem-solving time and increases the fault tolerance (vitality) of the computing processes in a changing computing environment (dynamic change of the number of computers on the network). The developed multi-agent system also detects falsification of the results of the distributed computation, which could otherwise lead to wrong decisions, and checks and corrects wrong results.
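
    One standard way to detect falsified results in such a system is redundant task assignment with majority voting: each task goes to several agents and disagreeing replies are flagged. The sketch below illustrates that generic technique; the abstract does not spell out the paper's actual verification scheme, so this is an assumption for illustration.

```python
from collections import Counter

def vote(task_id, replies):
    """Majority vote over redundant agent replies for one task.
    Returns (accepted_result, agents_that_disagreed)."""
    result, n = Counter(replies.values()).most_common(1)[0]
    if n <= len(replies) // 2:
        raise RuntimeError(f"no majority for task {task_id}")
    suspects = sorted(agent for agent, r in replies.items() if r != result)
    return result, suspects

# Hypothetical replies for task 7: agent C returns a falsified value.
print(vote(7, {"A": 42, "B": 42, "C": 99}))   # (42, ['C'])
```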

  8. An ODMG-compatible testbed architecture for scalable management and analysis of physics data

    International Nuclear Information System (INIS)

    Malon, D.M.; May, E.N.

    1997-01-01

    This paper describes a testbed architecture for the investigation and development of scalable approaches to the management and analysis of massive amounts of high energy physics data. The architecture has two components: an interface layer that is compliant with a substantial subset of the ODMG-93 Version 1.2 specification, and a lightweight object persistence manager that provides flexible storage and retrieval services on a variety of single- and multi-level storage architectures, and on a range of parallel and distributed computing platforms

  9. Mathematics, Physics and Computer Sciences The computation of ...

    African Journals Online (AJOL)

    The computation of system matrices for biquadratic square finite elements. Global Journal of Pure and Applied Sciences.

  10. Optimization and parallelization of the thermal–hydraulic subchannel code CTF for high-fidelity multi-physics applications

    International Nuclear Information System (INIS)

    Salko, Robert K.; Schmidt, Rodney C.; Avramova, Maria N.

    2015-01-01

    Highlights: • COBRA-TF was adopted by the Consortium for Advanced Simulation of LWRs. • We have improved code performance to support running large-scale LWR simulations. • Code optimization has led to reductions in execution time and memory usage. • An MPI parallelization has reduced full-core simulation time from days to minutes. - Abstract: This paper describes major improvements to the computational infrastructure of the CTF subchannel code so that full-core, pincell-resolved (i.e., one computational subchannel per real bundle flow channel) simulations can now be performed in much shorter run-times, either in stand-alone mode or as part of coupled-code multi-physics calculations. These improvements support the goals of the Department Of Energy Consortium for Advanced Simulation of Light Water Reactors (CASL) Energy Innovation Hub to develop high fidelity multi-physics simulation tools for nuclear energy design and analysis. A set of serial code optimizations—including fixing computational inefficiencies, optimizing the numerical approach, and making smarter data storage choices—are first described and shown to reduce both execution time and memory usage by about a factor of ten. Next, a “single program multiple data” parallelization strategy targeting distributed memory “multiple instruction multiple data” platforms utilizing domain decomposition is presented. In this approach, data communication between processors is accomplished by inserting standard Message-Passing Interface (MPI) calls at strategic points in the code. The domain decomposition approach implemented assigns one MPI process to each fuel assembly, with each domain being represented by its own CTF input file. The creation of CTF input files, both for serial and parallel runs, is also fully automated through use of a pressurized water reactor (PWR) pre-processor utility that uses a greatly simplified set of user input compared with the traditional CTF input. To run CTF in
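
    A minimal sketch of the decomposition strategy described above, one MPI process per fuel assembly with data exchanged at strategic points, might look as follows with mpi4py. All names and the exchanged quantity are hypothetical, and the assemblies are laid out in 1D for brevity; CTF itself is Fortran with hand-placed MPI calls and a full 2D assembly adjacency.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()   # one rank per fuel assembly

local_field = np.full(1000, float(rank))        # this assembly's state (hypothetical)

# Exchange boundary values with neighbouring assemblies; MPI_Sendrecv pairs
# the send and the receive, so neighbour exchanges cannot deadlock.
if rank + 1 < size:
    nbr = comm.sendrecv(local_field[-1], dest=rank + 1, source=rank + 1)
    local_field[-1] = 0.5 * (local_field[-1] + nbr)   # placeholder coupling
if rank - 1 >= 0:
    nbr = comm.sendrecv(local_field[0], dest=rank - 1, source=rank - 1)
    local_field[0] = 0.5 * (local_field[0] + nbr)

comm.Barrier()
if rank == 0:
    print("boundary exchange complete across", size, "assembly domains")
```

    A script like this would be launched with one process per assembly, e.g. mpiexec -n 4 python script.py.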

  11. Physical activity and asthma: A longitudinal and multi-country study.

    Science.gov (United States)

    Russell, Melissa A; Janson, Christer; Real, Francisco Gómez; Johannessen, Ane; Waatevik, Marie; Benediktsdóttir, Bryndis; Holm, Mathias; Lindberg, Eva; Schlünssen, Vivi; Raza, Wasif; Dharmage, Shyamali C; Svanes, Cecilie

    2017-11-01

    To investigate the impact of physical activity on asthma in middle-aged adults, in one longitudinal analysis and one multi-centre cross-sectional analysis. The Respiratory Health in Northern Europe (RHINE) study is a population-based postal questionnaire cohort study. Physical activity, height and weight were self-reported in Bergen, Norway, at RHINE II (1999-2001) and in all centres at RHINE III (2010-2012). A longitudinal analysis of the Bergen data investigated the association of baseline physical activity with follow-up asthma, incident asthma and symptoms, using logistic and zero-inflated Poisson regression (n = 1782). A cross-sectional analysis of all RHINE III centres investigated the association of physical activity with concurrent asthma and symptoms (n = 13,542) using mixed-effects models. Body mass index (BMI) was categorised. Lighter physical activity was associated with reduced odds of follow-up asthma (odds ratio [OR] 0.44, 95% confidence interval [CI] 0.22, 0.89), whilst an effect from undertaking vigorous activity 3+ times/week was not detected (OR 1.22, 95% CI 0.44, 2.76). The associations were attenuated with BMI adjustment. In the all-centre cross-sectional analysis an interaction was found, with the association between physical activity and asthma varying across BMI categories. These findings suggest potential longer-term benefit from lighter physical activity, whilst improvement in asthma outcomes from increasing activity intensity was not evident. Additionally, it appears the benefit from physical activity may differ according to BMI.

  12. M&C '99: Mathematics and computation, reactor physics and environmental analysis in nuclear applications, Madrid, September 27-30, 1999

    Energy Technology Data Exchange (ETDEWEB)

    Aragones, J. M.; Ahnert, C.; Cabellos, O.

    1999-07-01

    The international conference on mathematics and computation, reactor physics and environmental analysis in nuclear applications is the biennial topical meeting of the Mathematics and Computation Division of the American Nuclear Society. (Author)

  13. Physical Computing and Its Scope--Towards a Constructionist Computer Science Curriculum with Physical Computing

    Science.gov (United States)

    Przybylla, Mareen; Romeike, Ralf

    2014-01-01

    Physical computing covers the design and realization of interactive objects and installations and allows students to develop concrete, tangible products of the real world, which arise from the learners' imagination. This can be used in computer science education to provide students with interesting and motivating access to the different topic…

  14. Numerical Simulation of Early Age Cracking of Reinforced Concrete Bridge Decks with a Full-3D Multiscale and Multi-Chemo-Physical Integrated Analysis

    Directory of Open Access Journals (Sweden)

    Tetsuya Ishida

    2018-03-01

    In November 2011, the Japanese government resolved to build “Revival Roads” in the Tohoku region to accelerate the recovery from the Great East Japan Earthquake of March 2011. Because the Tohoku region experiences such cold and snowy weather in winter, complex degradation from a combination of frost damage, chloride attack from de-icing agents, alkali–silica reaction, cracking and fatigue is anticipated. Thus, to enhance the durability performance of road structures, particularly reinforced concrete (RC) bridge decks, multiple countermeasures are proposed: a low water-to-cement ratio in the mix, mineral admixtures such as ground granulated blast furnace slag and/or fly ash to mitigate the risks of chloride attack and alkali–silica reaction, anticorrosion rebar, and 6% entrained air against frost damage. It should be noted that such high durability specifications may conversely increase the risk of early age cracking caused by temperature and shrinkage, due to the large amounts of cement and the use of mineral admixtures. Against this background, this paper presents a numerical simulation of early age deformation and cracking of RC bridge decks with a full-3D multiscale and multi-chemo-physical integrated analysis. First, a multiscale constitutive model of solidifying cementitious materials is briefly introduced, based on systematic knowledge coupling microscopic thermodynamic phenomena and macroscopic structural mechanics. With the aim of assessing early age thermal and shrinkage-induced cracks in a real bridge deck, the study began with extensive model validations, applying the multiscale and multi-physical integrated analysis system to small specimens and mock-up RC bridge deck specimens. Then, through the application of the current computational system, factors that affect the generation and propagation of early age thermal and shrinkage-induced cracks are identified via experimental validation and full-scale numerical simulation on real

  15. Multi-Core Processor Memory Contention Benchmark Analysis Case Study

    Science.gov (United States)

    Simon, Tyler; McGalliard, James

    2009-01-01

    Multi-core processors dominate current mainframe, server, and high performance computing (HPC) systems. This paper provides synthetic kernel and natural benchmark results from an HPC system at the NASA Goddard Space Flight Center that illustrate the performance impacts of multi-core (dual- and quad-core) vs. single core processor systems. Analysis of processor design, application source code, and synthetic and natural test results all indicate that multi-core processors can suffer from significant memory subsystem contention compared to similar single-core processors.
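
    The contention effect is easy to reproduce with a toy benchmark: run the same memory-streaming kernel in one worker and then in several, and compare per-worker throughput. The sketch below uses a STREAM-like add kernel over NumPy arrays; all sizes are arbitrary and the numbers will depend entirely on the memory subsystem under test.

```python
import time
import numpy as np
from multiprocessing import Pool

N = 20_000_000   # three float64 arrays of ~160 MB each per worker

def stream_add(_):
    a, b, c = np.empty(N), np.ones(N), np.ones(N)
    t0 = time.perf_counter()
    for _ in range(5):
        np.add(b, c, out=a)                            # a = b + c, bandwidth bound
    return 5 * 3 * N * 8 / (time.perf_counter() - t0)  # bytes moved per second

if __name__ == "__main__":
    for workers in (1, 4):
        with Pool(workers) as pool:
            rates = pool.map(stream_add, range(workers))
        print(f"{workers} worker(s): {min(rates) / 1e9:.1f} GB/s per worker")
```

    On a bandwidth-limited machine the per-worker rate drops noticeably as workers are added, which is the same contention the paper measures on real applications.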

  16. Use of new computer technologies in elementary particle physics

    International Nuclear Information System (INIS)

    Gaines, I.; Nash, T.

    1987-01-01

    Elementary particle physics and computers have progressed together for as long as anyone can remember. The symbiosis is surprising considering the dissimilar objectives of these fields, but physics understanding cannot be had simply by detecting the passage of particles. It requires a selection of interesting events and their analysis in comparison with quantitative theoretical predictions. The extraordinary reach made by experimentalists into realms always further removed from everyday observation frequently encountered technology constraints. Pushing away such barriers has been an essential activity of the physicist since long before Rossi developed the first practical electronic AND gates as coincidence circuits in 1930. This article describes the latest episode of this history, the development of new computer technologies to meet the various and increasing appetite for computing of experimental (and theoretical) high energy physics

  17. Cost and performance analysis of physical security systems

    International Nuclear Information System (INIS)

    Hicks, M.J.; Yates, D.; Jago, W.H.; Phillips, A.W.

    1998-04-01

    Analysis of cost and performance of physical security systems can be a complex, multi-dimensional problem. There are a number of point tools that address various aspects of cost and performance analysis. Increased interest in cost tradeoffs of physical security alternatives has motivated development of an architecture called Cost and Performance Analysis (CPA), which takes a top-down approach to aligning cost and performance metrics. CPA incorporates results generated by existing physical security system performance analysis tools, and utilizes an existing cost analysis tool. The objective of this architecture is to offer comprehensive visualization of complex data to security analysts and decision-makers

  18. Large-Scale, Parallel, Multi-Sensor Atmospheric Data Fusion Using Cloud Computing

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2013-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the 'A-Train' platforms (AIRS, AMSR-E, MODIS, MISR, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses of important climate variables presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (MERRA), stratify the comparisons using a classification of the 'cloud scenes' from CloudSat, and repeat the entire analysis over 10 years of data. To efficiently assemble such datasets, we are utilizing Elastic Computing in the Cloud and parallel map/reduce-based algorithms. However, these problems are data-intensive, so data transfer times and storage costs (for caching) are key issues. SciReduce is a Hadoop-like parallel analysis system, programmed in parallel Python, that is designed from the ground up for Earth science. SciReduce executes inside VMWare images and scales to any number of nodes in the Cloud. Unlike Hadoop, SciReduce operates on bundles of named numeric arrays, which can be passed in memory or serialized to disk in netCDF4 or HDF5. Figure 1 of the original contribution shows the architecture of the full computational system, with SciReduce at the core. Multi-year datasets are automatically 'sharded' by time and space across a cluster of nodes so that years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on-demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the cached input and intermediate datasets. We are using SciReduce to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a NASA MEASURES grant. We will

  19. Pulsed fusion space propulsion: Computational Magneto-Hydro Dynamics of a multi-coil parabolic reaction chamber

    Science.gov (United States)

    Romanelli, Gherardo; Mignone, Andrea; Cervone, Angelo

    2017-10-01

    Pulsed fusion propulsion might finally revolutionise manned space exploration by providing affordable and relatively fast access to interplanetary destinations. However, such systems are still in an early development phase, and one of the key areas requiring further investigation is the operation of the magnetic nozzle, the device meant to exploit the fusion energy and generate thrust. One of the latest pulsed fusion magnetic nozzle designs is the so-called multi-coil parabolic reaction chamber: the reaction is ignited at the focus of an open parabolic chamber, enclosed by a series of coaxial superconducting coils that apply a magnetic field. The field, besides confining the reaction and preventing any contact between hot fusion plasma and the chamber structure, is also meant to reflect the explosion and push plasma out of the rocket. Reflection is attained thanks to electric currents induced in conductive skin layers that cover each of the coils; the change of plasma axial momentum generates thrust in reaction. This working principle has yet to be extensively verified, and computational Magneto-Hydro-Dynamics (MHD) is a viable option to achieve that. This work is one of the first detailed ideal-MHD analyses of a multi-coil parabolic reaction chamber of this kind and has been completed employing PLUTO, a freely distributed computational code developed at the Physics Department of the University of Turin. The results are thus a preliminary verification of the chamber's performance. Nonetheless, plasma leakage through the chamber structure has been highlighted, so further investigations are required to validate the chamber design. Implementing a more accurate physical model (e.g. Hall-MHD or relativistic-MHD) is thus mandatory, and PLUTO shows the capabilities to achieve that.

  20. Scalable Parallelization of Skyline Computation for Multi-core Processors

    DEFF Research Database (Denmark)

    Chester, Sean; Sidlauskas, Darius; Assent, Ira

    2015-01-01

    The skyline is an important query operator for multi-criteria decision making. It reduces a dataset to only those points that offer optimal trade-offs of dimensions. In general, it is very expensive to compute. Recently, multi-core CPU algorithms have been proposed to accelerate the computation of the skyline. However, they do not sufficiently minimize dominance tests and so are not competitive with state-of-the-art sequential algorithms. In this paper, we introduce a novel multi-core skyline algorithm, Hybrid, which processes points in blocks. It maintains a shared, global skyline among all threads…
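
    To fix the semantics: a point belongs to the skyline if no other point dominates it, i.e. is at least as good in every dimension and strictly better in at least one. The sketch below is a plain sequential, block-at-a-time skyline for minimisation; it has none of the dominance-test minimisation or multi-core machinery that Hybrid contributes.

```python
def dominates(p, q):
    """p dominates q (minimisation): <= in every dimension, < in at least one."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points, block=256):
    sky = []
    for i in range(0, len(points), block):       # consume the input in blocks
        for p in points[i:i + block]:
            if any(dominates(s, p) for s in sky):
                continue                          # p is dominated: discard it
            sky = [s for s in sky if not dominates(p, s)] + [p]
    return sky

# (price, distance_km): lower is better on both axes.
hotels = [(50, 8.0), (60, 2.0), (80, 1.0), (90, 0.5), (70, 3.0), (55, 7.5)]
print(skyline(hotels))   # (70, 3.0) is dominated by (60, 2.0) and drops out
```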

  1. Towards a cyber-physical era: soft computing framework based multi-sensor array for water quality monitoring

    Science.gov (United States)

    Bhardwaj, Jyotirmoy; Gupta, Karunesh K.; Gupta, Rajiv

    2018-02-01

    New concepts and techniques are replacing traditional methods of water quality parameter measurement. This paper introduces a cyber-physical system (CPS) approach for water quality assessment in a distribution network. Cyber-physical systems with embedded sensors, processors and actuators can be designed to sense and interact with the water environment. The proposed CPS comprises a sensing framework integrated with five different water quality parameter sensor nodes and a soft computing framework for computational modelling. The soft computing framework uses Python for the user interface and fuzzy logic for decision making. Introducing multiple sensors in a water distribution network generates a huge number of data matrices, which are sometimes highly complex, difficult to understand and convoluted for effective decision making. Therefore, the proposed system framework also intends to simplify the complexity of the obtained sensor data matrices and to support decision making for water engineers through the soft computing framework. The target of this research is to provide a simple and efficient method to identify and detect the presence of contamination in a water distribution network using applications of CPS.

  2. Realization of multi-parameter and multi-state in fault tree computer-aided building software

    International Nuclear Information System (INIS)

    Guo Xiaoli; Tong Jiejuan; Xue Dazhi

    2004-01-01

    More than one parameter, and more than one failed state per parameter, are often involved in building a fault tree, so it is necessary for fault-tree computer-aided building software to handle multiple parameters and multiple states. The Fault Tree Expert System (FTES) targets aiding the fault-tree building work for hydraulic systems. This paper expatiates on how multi-parameter and multi-state support is realized in FTES, with focus on the Knowledge Base and the Illation Engine. (author)

  3. The role of micro size computing clusters for small physics groups

    International Nuclear Information System (INIS)

    Shevel, A Y

    2014-01-01

    A small physics group (3-15 persons) might use a number of computing facilities for analysis/simulation, development/testing and teaching. Different types of computing facilities are discussed: collaboration computing facilities, a group-local computing cluster (including colocation), and cloud computing. The author discusses the growing variety of computing options for small groups and emphasizes the role of a group-owned computing cluster of micro size.

  4. Physics, Computer Science and Mathematics Division. Annual report, 1 January--31 December 1977. [LBL, 1977]

    Energy Technology Data Exchange (ETDEWEB)

    Lepore, J.V. (ed.)

    1977-01-01

    This annual report of the Physics, Computer Science and Mathematics Division describes the scientific research and other work carried out within the Division during 1977. The Division is concerned with work in experimental and theoretical physics, with computer science and applied mathematics, and with the operation of a computer center. The major physics research activity is in high-energy physics, although there is a relatively small program of medium-energy research. The High Energy Physics research program in the Physics Division is concerned with fundamental research which will enable man to comprehend the nature of the physical world. The major effort is now directed toward experiments with positron-electron colliding beam at PEP. The Medium Energy Physics program is concerned with research using mesons and nucleons to probe the properties of matter. This research is concerned with the study of nuclear structure, nuclear reactions, and the interactions between nuclei and electromagnetic radiation and mesons. The Computer Science and Applied Mathematics Department engages in research in a variety of computer science and mathematics disciplines. Work in computer science and applied mathematics includes construction of data bases, computer graphics, computational physics and data analysis, mathematical modeling, and mathematical analysis of differential and integral equations resulting from physical problems. The Computer Center provides large-scale computational support to LBL's scientific programs. Descriptions of the various activities are quite short; references to published results are given. 24 figures. (RWR)

  5. Multi-level approach for parametric roll analysis

    Science.gov (United States)

    Kim, Taeyoung; Kim, Yonghwan

    2011-03-01

    The present study considers a multi-level approach for the analysis of parametric roll phenomena. Three computation methods, GM variation, impulse response function (IRF), and a Rankine panel method, are applied in the multi-level approach. The IRF and Rankine panel methods are based on a weakly nonlinear formulation which includes nonlinear Froude-Krylov and restoring forces. In the computed results of the parametric roll occurrence test in regular waves, the IRF and Rankine panel methods show similar tendencies. Although the GM variation approach predicts the occurrence of parametric roll at twice the roll natural frequency, its frequency criterion shows a little difference. Nonlinear roll motion in bichromatic waves is also considered in this study. To prove the unstable roll motion in bichromatic waves, theoretical and numerical approaches are applied. The occurrence of parametric roll is theoretically examined by introducing the quasi-periodic Mathieu equation. Instability criteria are well predicted from the stability analysis in the theoretical approach. From the Fourier analysis, it has been verified that difference-frequency effects create the unstable roll motion. The occurrence of unstable roll motion in bichromatic waves is also observed in the experiment.
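
    The core mechanism, roll instability when the restoring term is modulated near twice the roll natural frequency, can be reproduced with a damped Mathieu-type equation. The sketch below, with made-up coefficients, integrates phi'' + 2*zeta*w0*phi' + w0**2*(1 + h*cos(we*t))*phi = 0: the roll amplitude grows at we = 2*w0 and decays at a detuned encounter frequency.

```python
import numpy as np
from scipy.integrate import solve_ivp

w0, zeta, h = 0.5, 0.02, 0.3   # natural frequency (rad/s), damping ratio, GM modulation

def roll(t, y, we):
    phi, dphi = y
    return [dphi, -2 * zeta * w0 * dphi - w0**2 * (1 + h * np.cos(we * t)) * phi]

for we in (2.0 * w0, 1.3 * w0):          # principal parametric resonance vs detuned
    sol = solve_ivp(roll, (0, 600), [0.01, 0.0], args=(we,), max_step=0.1)
    print(f"we/w0 = {we / w0:.1f}: final |phi| = {abs(sol.y[0, -1]):.2e}")
```

    For this linearised model the principal resonance grows once the modulation exceeds the damping threshold (roughly h > 4*zeta); in the ship problem the growth is eventually bounded by nonlinear restoring and damping terms.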

  6. Advances in Reactor Physics, Mathematics and Computation. Volume 2

    Energy Technology Data Exchange (ETDEWEB)

    1987-01-01

    These proceedings of the international topical meeting on advances in reactor physics, mathematics and computation, Volume 2, are divided into 7 sessions bearing on: session 7, deterministic transport methods 1 (7 conferences); session 8, interpretation and analysis of reactor instrumentation (6 conferences); session 9, high speed computing applied to reactor operations (5 conferences); session 10, diffusion theory and kinetics (7 conferences); session 11, fast reactor design, validation and operating experience (8 conferences); session 12, deterministic transport methods 2 (7 conferences); session 13, application of expert systems to physical aspects of reactor design and operation.

  7. Grid computing in high energy physics

    CERN Document Server

    Avery, P

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software r...

  8. Scholarly literature and the press: scientific impact and social perception of physics computing

    International Nuclear Information System (INIS)

    Pia, M G; Basaglia, T; Bell, Z W; Dressendorfer, P V

    2014-01-01

    The broad coverage of the search for the Higgs boson in the mainstream media is a relative novelty for high energy physics (HEP) research, whose achievements have traditionally been limited to scholarly literature. This paper illustrates the results of a scientometric analysis of HEP computing in scientific literature, institutional media and the press, and a comparative overview of similar metrics concerning representative particle physics measurements. The picture emerging from these scientometric data documents the relationship between the scientific impact and the social perception of HEP research versus that of HEP computing. The results of this analysis suggest that improved communication of the scientific and social role of HEP computing via press releases from the major HEP laboratories would be beneficial to the high energy physics community.

  9. Tuneable resolution as a systems biology approach for multi-scale, multi-compartment computational models.

    Science.gov (United States)

    Kirschner, Denise E; Hunt, C Anthony; Marino, Simeone; Fallahi-Sichani, Mohammad; Linderman, Jennifer J

    2014-01-01

    The use of multi-scale mathematical and computational models to study complex biological processes is becoming increasingly productive. Multi-scale models span a range of spatial and/or temporal scales and can encompass multi-compartment (e.g., multi-organ) models. Modeling advances are enabling virtual experiments to explore and answer questions that are problematic to address in the wet-lab. Wet-lab experimental technologies now allow scientists to observe, measure, record, and analyze experiments focusing on different system aspects at a variety of biological scales. We need the technical ability to mirror that same flexibility in virtual experiments using multi-scale models. Here we present a new approach, tuneable resolution, which can begin providing that flexibility. Tuneable resolution involves fine- or coarse-graining existing multi-scale models at the user's discretion, allowing adjustment of the level of resolution specific to a question, an experiment, or a scale of interest. Tuneable resolution expands options for revising and validating mechanistic multi-scale models, can extend the longevity of multi-scale models, and may increase computational efficiency. The tuneable resolution approach can be applied to many model types, including differential equation, agent-based, and hybrid models. We demonstrate our tuneable resolution ideas with examples relevant to infectious disease modeling, illustrating key principles at work. © 2014 The Authors. WIREs Systems Biology and Medicine published by Wiley Periodicals, Inc.

  10. Scholarly literature and the press: scientific impact and social perception of physics computing

    CERN Document Server

    Pia, Maria Grazia; Bell, Zane W; Dressendorfer, Paul V

    2014-01-01

    The broad coverage of the search for the Higgs boson in the mainstream media is a relative novelty for high energy physics (HEP) research, whose achievements have traditionally been limited to scholarly literature. This paper illustrates the results of a scientometric analysis of HEP computing in scientific literature, institutional media and the press, and a comparative overview of similar metrics concerning representative particle physics measurements. The picture emerging from these scientometric data documents the scientific impact and social perception of HEP computing. The results of this analysis suggest that improved communication of the scientific and social role of HEP computing would be beneficial to the high energy physics community.

  11. The application of a multi-physics tool kit to spatial reactor dynamics

    International Nuclear Information System (INIS)

    Clifford, I.; Jasak, H.

    2009-01-01

    Traditionally, coupled field nuclear reactor analysis has been carried out using several loosely coupled solvers, each having been developed independently from the others. In the field of multi-physics, the current generation of object-oriented tool kits provides robust close coupling of multiple fields on a single framework. This paper describes the initial results obtained as part of continuing research in the use of the OpenFOAM multi-physics tool kit for reactor dynamics application development. An unstructured, three-dimensional, time-dependent multi-group diffusion code, Diffusion FOAM, has been developed using the OpenFOAM multi-physics tool kit as a basis. The code is based on the finite-volume methodology and uses a newly developed block-coupled sparse matrix solver for the coupled solution of the multi-group diffusion equations. A description of this code is given with particular emphasis on the newly developed block-coupled solver, along with a selection of results obtained thus far. The code has performed well, indicating that the OpenFOAM tool kit is suited to reactor dynamics applications. This work has shown that the neutronics and simplified thermal-hydraulics of a reactor may be represented and solved for using a common calculation platform, and opens up the possibility for research into robust close coupling of neutron diffusion and thermal-fluid calculations. It has further opened up the possibility for research in a number of other areas, including three-dimensional unstructured meshes for reactor dynamics applications. (authors)
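
    To illustrate what a block-coupled solution of the multi-group diffusion equations means in practice, here is a minimal finite-volume sketch in plain Python/NumPy (not OpenFOAM); the geometry, boundary treatment, and cross-section values are illustrative placeholders:

```python
import numpy as np

# Steady two-group neutron diffusion on a 1-D slab, assembled as ONE
# block-coupled linear system (2 unknowns per cell) rather than solving
# the groups in sequence. All data below are illustrative placeholders.
N, h = 50, 1.0                      # cells, cell width [cm]
D = [1.4, 0.4]                      # diffusion coefficients, groups 1/2
sig_r = [0.03, 0.10]                # removal (absorption + out-scatter)
sig_s12 = 0.02                      # scattering from group 1 into group 2
S = 1.0                             # fixed source in group 1

n = 2 * N
A = np.zeros((n, n))
b = np.zeros(n)
for i in range(N):
    for g in range(2):
        row = 2 * i + g
        diag = sig_r[g] * h
        for j in (i - 1, i + 1):    # FV diffusion coupling to neighbours
            if 0 <= j < N:
                diag += D[g] / h
                A[row, 2 * j + g] -= D[g] / h
            else:                   # zero-flux wall at half-cell distance
                diag += 2 * D[g] / h
        A[row, row] = diag
        if g == 1:                  # down-scatter feeds group 2: the block coupling
            A[row, 2 * i] -= sig_s12 * h
    b[2 * i] = S * h                # source only in the fast group

phi = np.linalg.solve(A, b)         # both groups solved simultaneously
fast, thermal = phi[0::2], phi[1::2]
```

    The point of the block-coupled layout is that the inter-group terms sit inside the same matrix, so no outer iteration between groups is needed.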

  12. Uncertainties propagation in the framework of a Rod Ejection Accident modeling based on a multi-physics approach

    Energy Technology Data Exchange (ETDEWEB)

    Le Pallec, J. C.; Crouzet, N.; Bergeaud, V.; Delavaud, C. [CEA/DEN/DM2S, CEA/Saclay, 91191 Gif sur Yvette Cedex (France)

    2012-07-01

    The control of uncertainties in the field of reactor physics and their propagation in best-estimate modeling are a major issue in safety analysis. In this framework, the CEA is developing a methodology to perform multi-physics simulations that include uncertainty analysis. The present paper aims to present and apply this methodology to the analysis of an accidental situation such as a REA (Rod Ejection Accident). This accident is characterized by a strong interaction between the different areas of reactor physics (neutronics, fuel thermal behavior, and thermal hydraulics). The modeling is performed with the CRONOS2 code. The uncertainty analysis has been conducted with the URANIE platform developed by the CEA: for each identified response of the modeling (output), and considering a set of key parameters with their uncertainties (input), a surrogate model in the form of a neural network has been produced. The set of neural networks is then used to carry out a sensitivity analysis, which consists of a global variance analysis with the determination of the Sobol indices for all responses. The sensitivity indices are obtained for the input parameters by an approach based on the use of polynomial chaos. The present exercise helped to develop a methodological flow scheme and to consolidate the use of the URANIE tool in the framework of parallel calculations. Finally, the use of polynomial chaos allowed the computation of high-order sensitivity indices, thus highlighting and classifying the influence of the identified uncertainties on each response of the analysis (single and interaction effects). (authors)
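
    As an illustration of the last step, first-order Sobol indices can be read directly off the coefficients of a polynomial-chaos expansion in an orthonormal basis. The sketch below uses hypothetical coefficient data and is not tied to URANIE's API:

```python
import numpy as np

def sobol_first_order(coeffs, multi_index):
    """First-order Sobol indices from an orthonormal PC expansion:
    S_i = (variance carried by terms depending on x_i alone) / total variance."""
    coeffs = np.asarray(coeffs, dtype=float)
    mi = np.asarray(multi_index)
    total_deg = mi.sum(axis=1)
    var_total = np.sum(coeffs[total_deg > 0] ** 2)   # variance excludes the mean term
    S = np.zeros(mi.shape[1])
    for i in range(mi.shape[1]):
        only_i = (mi[:, i] > 0) & (total_deg == mi[:, i])  # depends on x_i alone
        S[i] = np.sum(coeffs[only_i] ** 2) / var_total
    return S

# Toy 2-variable expansion: y = a0 + a1*P1(x1) + a2*P1(x2) + a12*P1(x1)*P1(x2)
print(sobol_first_order([1.0, 0.8, 0.3, 0.1],
                        [[0, 0], [1, 0], [0, 1], [1, 1]]))
# -> roughly [0.86, 0.12]; the missing 0.01/0.74 is the x1-x2 interaction effect
```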

  13. Physical properties of the WASP-67 planetary system from multi-colour photometry

    Science.gov (United States)

    Mancini, L.; Southworth, J.; Ciceri, S.; Calchi Novati, S.; Dominik, M.; Henning, Th.; Jørgensen, U. G.; Korhonen, H.; Nikolov, N.; Alsubai, K. A.; Bozza, V.; Bramich, D. M.; D'Ago, G.; Figuera Jaimes, R.; Galianni, P.; Gu, S.-H.; Harpsøe, K.; Hinse, T. C.; Hundertmark, M.; Juncher, D.; Kains, N.; Popovas, A.; Rabus, M.; Rahvar, S.; Skottfelt, J.; Snodgrass, C.; Street, R.; Surdej, J.; Tsapras, Y.; Vilela, C.; Wang, X.-B.; Wertz, O.

    2014-08-01

    Context. The extrasolar planet WASP-67 b is the first hot Jupiter definitively known to undergo only partial eclipses. The lack of the second and third contact points in this planetary system makes it difficult to obtain accurate measurements of its physical parameters. Aims: By using new high-precision photometric data, we confirm that WASP-67 b shows grazing eclipses and compute accurate estimates of the physical properties of the planet and its parent star. Methods: We present high-quality, multi-colour, broad-band photometric observations comprising five light curves covering two transit events, obtained using two medium-class telescopes and the telescope-defocusing technique. One transit was observed through a Bessel-R filter and the other simultaneously through filters similar to Sloan g'r'i'z'. We modelled these data using jktebop. The physical parameters of the system were obtained from the analysis of these light curves and from published spectroscopic measurements. Results: All five of our light curves satisfy the criterion for being grazing eclipses. We revise the physical parameters of the whole WASP-67 system and, in particular, significantly improve the measurements of the planet's radius (Rb = 1.091 ± 0.046 RJup) and density (ρb = 0.292 ± 0.036 ρJup), as compared to the values in the discovery paper (Rb = 1.4 +0.3/-0.2 RJup and ρb = 0.16 ± 0.08 ρJup). The transit ephemeris was also substantially refined. We investigated the variation of the planet's radius as a function of wavelength, using the simultaneous multi-band data, finding that our measurements are consistent with a flat spectrum to within the experimental uncertainties. Based on data collected with GROND at the MPG 2.2 m telescope and DFOSC at the Danish 1.54 m telescope. Full Table 2 is only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (ftp://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/568/A127

  14. Computing trends using graphic processor in high energy physics

    CERN Document Server

    Niculescu, Mihai

    2011-01-01

    One of the main challenges in High Energy Physics is the fast analysis of large amounts of experimental and simulated data. At LHC-CERN, one p-p event is approximately 1 MB in size. The time taken to analyze the data and obtain results quickly depends on the available computational power. The main advantage of GPU (Graphics Processing Unit) programming over traditional CPU programming is that graphics cards bring a lot of computing power at a very low price. Today a huge number of applications (scientific, financial, etc.) are being ported to or developed for the GPU, including Monte Carlo tools and data analysis tools for High Energy Physics. In this paper, we present the current status and trends of GPU use in HEP.

  15. Sierra toolkit computational mesh conceptual model

    International Nuclear Information System (INIS)

    Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.

    2010-01-01

    The Sierra Toolkit computational mesh is a software library intended to support massively parallel multi-physics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.

  16. Physics, Computer Science and Mathematics Division annual report, 1 January--31 December 1975

    International Nuclear Information System (INIS)

    Lepore, J.L.

    1975-01-01

    This annual report describes the scientific research and other work carried out during the calendar year 1975. The report is nontechnical in nature, with almost no data. A 17-page bibliography lists the technical papers which detail the work. The contents of the report include the following: experimental physics (high-energy physics--SPEAR, PEP, SLAC, FNAL, BNL, Bevatron; particle data group; medium-energy physics; astrophysics, astronomy, and cosmic rays; instrumentation development), theoretical physics (particle theory and accelerator theory and design), computer science and applied mathematics (data management systems, socio-economic environment demographic information system, computer graphics, computer networks, management information systems, computational physics and data analysis, mathematical modeling, programing languages, applied mathematics research), real-time systems (ModComp and PDP networks), and computer center activities (systems programing, user services, hardware development, computer operations). A glossary of computer science and mathematics terms is also included. 32 figures

  17. Using spatial principles to optimize distributed computing for enabling the physical science discoveries.

    Science.gov (United States)

    Yang, Chaowei; Wu, Huayi; Huang, Qunying; Li, Zhenlong; Li, Jing

    2011-04-05

    Contemporary physical science studies rely on the effective analyses of geographically dispersed spatial data and simulations of physical phenomena. Single computers and generic high-end computing are not sufficient to process the data for complex physical science analysis and simulations, which can be successfully supported only through distributed computing, best optimized through the application of spatial principles. Spatial computing, the computing aspect of a spatial cyberinfrastructure, refers to a computing paradigm that utilizes spatial principles to optimize distributed computers to catalyze advancements in the physical sciences. Spatial principles govern the interactions between scientific parameters across space and time by providing the spatial connections and constraints to drive the progression of the phenomena. Therefore, spatial computing studies could better position us to leverage spatial principles in simulating physical phenomena and, by extension, advance the physical sciences. Using geospatial science as an example, this paper illustrates through three research examples how spatial computing could (i) enable data intensive science with efficient data/services search, access, and utilization, (ii) facilitate physical science studies with enabling high-performance computing capabilities, and (iii) empower scientists with multidimensional visualization tools to understand observations and simulations. The research examples demonstrate that spatial computing is of critical importance to design computing methods to catalyze physical science studies with better data access, phenomena simulation, and analytical visualization. We envision that spatial computing will become a core technology that drives fundamental physical science advancements in the 21st century.

  18. Computers in Nuclear Physics Division

    International Nuclear Information System (INIS)

    Kowalczyk, M.; Tarasiuk, J.; Srebrny, J.

    1997-01-01

    Improvements to the computer equipment in the Nuclear Physics Division are described. They include new computer equipment and hardware upgrades, software development, new programs for computer booting, and modernization of the data acquisition systems

  19. Large Scale Document Inversion using a Multi-threaded Computing System.

    Science.gov (United States)

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2017-06-01

    Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general-purpose computing. We can utilize the GPU in computation as a massive parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays a vast amount of information is flooding into the digital domain around the world. Huge volumes of data, such as digital libraries, social networking services, e-commerce product data, and reviews, are produced or collected every moment, with dramatic growth in size. Although the inverted index is a useful data structure that can be used for full-text searches or document retrieval, a large number of documents will require a tremendous amount of time to index. The performance of document inversion can be improved by multi-threaded or multi-core GPUs. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU to develop high-performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets, from PubMed abstracts and e-commerce product reviews.
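
    For reference, the data structure at the heart of the algorithm can be sketched sequentially in a few lines; the GPU/CUDA version parallelizes this loop over documents (a sketch with toy data only, not the paper's implementation):

```python
from collections import defaultdict

def invert(docs):
    """Build an inverted index (term -> list of doc ids) with a hash map,
    in linear time over the token stream."""
    index = defaultdict(list)
    for doc_id, text in enumerate(docs):
        seen = set()
        for term in text.lower().split():
            if term not in seen:            # one posting per (term, doc) pair
                index[term].append(doc_id)
                seen.add(term)
    return index

docs = ["protein binding assay", "e-commerce product reviews",
        "protein expression reviews"]
idx = invert(docs)
print(idx["reviews"])                       # -> [1, 2]
print(idx["protein"])                       # -> [0, 2]
```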

  20. Physics vs. computer science

    International Nuclear Information System (INIS)

    Pike, R.

    1982-01-01

    With computers becoming more frequently used in theoretical and experimental physics, physicists can no longer afford to be ignorant of the basic techniques and results of computer science. Computing principles belong in a physicist's tool box, along with experimental methods and applied mathematics, and the easiest way to educate physicists in computing is to provide, as part of the undergraduate curriculum, a computing course designed specifically for physicists. As well, the working physicist should interact with computer scientists, giving them challenging problems in return for their expertise. (orig.)

  1. Towards a cyber-physical era: soft computing framework based multi-sensor array for water quality monitoring

    Directory of Open Access Journals (Sweden)

    J. Bhardwaj

    2018-02-01

    New concepts and techniques are replacing traditional methods of water quality parameter measurement. This paper introduces a cyber-physical system (CPS) approach for water quality assessment in a distribution network. Cyber-physical systems with embedded sensors, processors and actuators can be designed to sense and interact with the water environment. The proposed CPS comprises a sensing framework integrated with five different water quality parameter sensor nodes and a soft computing framework for computational modelling. The soft computing framework uses Python for the user interface and fuzzy logic for decision making. Introducing multiple sensors in a water distribution network generates a huge number of data matrices, which are sometimes highly complex, difficult to understand, and too convoluted for effective decision making. Therefore, the proposed system framework also intends to simplify the complexity of the obtained sensor data matrices and to support decision making by water engineers through the soft computing framework. The goal of this research is to provide a simple and efficient method to identify and detect the presence of contamination in a water distribution network using applications of CPS.
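
    As a flavor of such a fuzzy decision-making layer, here is a minimal sketch with triangular membership functions; the parameter names and thresholds are illustrative assumptions, not values from the paper:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def contamination_risk(ph, turbidity_ntu):
    # Degrees to which each reading is "normal" (thresholds are made up).
    ok_ph = tri(ph, 6.0, 7.2, 8.5)
    ok_turb = tri(turbidity_ntu, 0.0, 0.5, 5.0)
    normal = min(ok_ph, ok_turb)      # fuzzy AND via the min operator
    return 1.0 - normal               # risk = complement of "normal"

print(contamination_risk(7.1, 0.6))   # low risk
print(contamination_risk(9.2, 8.0))   # high risk -> flag for the water engineer
```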

  2. Multi-processor developments in the United States for future high energy physics experiments and accelerators

    International Nuclear Information System (INIS)

    Gaines, I.

    1988-03-01

    The use of multi-processors for analysis and high-level triggering in High Energy Physics experiments, pioneered by the early emulator systems, has reached maturity, in particular with the multiple microprocessor systems in use at Fermilab. It is widely acknowledged that such systems will fulfill the major portion of the computing needs of future large experiments. Recent developments at Fermilab's Advanced Computer Program will make such systems even more powerful, cost-effective, and easier to use than they are at present. The next generation of microprocessors, already available, will provide CPU power of about one VAX 780 equivalent/$300, while supporting most VMS FORTRAN extensions and large (>8MB) amounts of memory. Low-cost, high-density mass storage devices (based on video tape cartridge technology) will allow parallel I/O to remove potential I/O bottlenecks in systems of over 1000 VAX-equivalent processors. New interconnection schemes and system software will allow more flexible topologies and extremely high data bandwidth, especially for on-line systems. This talk summarizes the work at the Advanced Computer Program and the rest of the US in this field. 3 refs., 4 figs

  3. From the direct numerical simulation to system codes-perspective for the multi-scale analysis of LWR thermal hydraulics

    International Nuclear Information System (INIS)

    Bestion, D.

    2010-01-01

    A multi-scale analysis of water-cooled reactor thermal hydraulics can be used to take advantage of increased computer power and improved simulation tools, including Direct Numerical Simulation (DNS), Computational Fluid Dynamics (CFD) (in both open and porous mediums), and system thermalhydraulic codes. This paper presents a general strategy for this procedure for various thermalhydraulic scales. A short state of the art is given for each scale, and the role of the scale in the overall multi-scale analysis process is defined. System thermalhydraulic codes will remain a privileged tool for many investigations related to safety. CFD in porous medium is already being frequently used for core thermal hydraulics, either in 3D modules of system codes or in component codes. CFD in open medium allows zooming on some reactor components in specific situations, and may be coupled to the system and component scales. Various modeling approaches exist in the domain from DNS to CFD which may be used to improve the understanding of flow processes, and as a basis for developing more physically based models for macroscopic tools. A few examples are given to illustrate the multi-scale approach. Perspectives for the future are drawn from the present state of the art and directions for future research and development are given

  4. Multi-level tree analysis of pulmonary artery/vein trees in non-contrast CT images

    Science.gov (United States)

    Gao, Zhiyun; Grout, Randall W.; Hoffman, Eric A.; Saha, Punam K.

    2012-02-01

    Diseases like pulmonary embolism and pulmonary hypertension are associated with vascular dystrophy. Identifying such pulmonary artery/vein (A/V) tree dystrophy in terms of quantitative measures via CT imaging significantly facilitates early detection of disease or a treatment monitoring process. A tree structure, consisting of nodes and connected arcs, linked to the volumetric representation allows multi-level geometric and volumetric analysis of A/V trees. Here, a new theory and method is presented to generate a multi-level A/V tree representation of volumetric data and to compute quantitative measures of A/V tree geometry and topology at various tree hierarchies. The new method is primarily designed on arc skeleton computation followed by a tree-construction-based topologic and geometric analysis of the skeleton. The method starts with a volumetric A/V representation as input and generates its topologic and multi-level volumetric tree representations along with different multi-level morphometric measures. New recursive merging and pruning algorithms are introduced to detect bad junctions and noisy branches often associated with digital geometric and topologic analysis. Also, a new notion of the shortest axial path is introduced to improve the skeletal arc joining two junctions. The accuracy of the multi-level tree analysis algorithm has been evaluated using computer-generated phantoms and pulmonary CT images of a pig vessel cast phantom, while the reproducibility of the method is evaluated using multi-user A/V separation of in vivo contrast-enhanced CT images of a pig lung at different respiratory volumes.
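
    The pruning step can be pictured with a small sketch: given a tree of skeletal arcs, terminal branches shorter than a threshold are discarded as noisy spurs. The dict representation and the threshold below are our illustration, not the authors' algorithm:

```python
def prune_short_leaves(tree, root, min_len):
    """tree maps node -> list of (child, arc_length); drop terminal
    branches whose arc is shorter than min_len, bottom-up."""
    def walk(node):
        kept = []
        for child, length in tree.get(node, []):
            sub = walk(child)
            if not sub and length < min_len:   # a short, childless spur: prune
                continue
            tree[child] = sub
            kept.append((child, length))
        return kept
    tree[root] = walk(root)
    return tree

tree = {"r": [("a", 12.0), ("b", 1.5)], "a": [("c", 0.8), ("d", 9.0)]}
print(prune_short_leaves(tree, "r", 2.0))
# "b" and "c" are removed as noisy spurs; the "a" -> "d" branch survives
```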

  5. Multi-level Discourse Analysis in a Physics Teaching Methods Course from the Psychological Perspective of Activity Theory

    Science.gov (United States)

    Vieira, Rodrigo Drumond; Kelly, Gregory J.

    2014-11-01

    In this paper, we present and apply a multi-level method for discourse analysis in science classrooms. This method is based on the structure of human activity (activity, actions, and operations) and it was applied to study a pre-service physics teacher methods course. We argue that such an approach, based on a cultural psychological perspective, affords opportunities for analysts to perform a theoretically based detailed analysis of discourse events. Along with the presentation of analysis, we show and discuss how the articulation of different levels offers interpretative criteria for analyzing instructional conversations. We synthesize the results into a model for a teacher's practice and discuss the implications and possibilities of this approach for the field of discourse analysis in science classrooms. Finally, we reflect on how the development of teachers' understanding of their activity structures can contribute to forms of progressive discourse of science education.

  6. High energy physics computing in Japan

    International Nuclear Information System (INIS)

    Watase, Yoshiyuki

    1989-01-01

    A brief overview of the computing provision for high energy physics in Japan is presented. Most of the computing power for high energy physics is concentrated in KEK. Here there are two large scale systems: one providing a general computing service including vector processing and the other dedicated to TRISTAN experiments. Each university group has a smaller sized mainframe or VAX system to facilitate both their local computing needs and the remote use of the KEK computers through a network. The large computer system for the TRISTAN experiments is described. An overview of a prospective future large facility is also given. (orig.)

  7. Conceptual design of an ALICE Tier-2 centre. Integrated into a multi-purpose computing facility

    Energy Technology Data Exchange (ETDEWEB)

    Zynovyev, Mykhaylo

    2012-06-29

    This thesis discusses the issues and challenges associated with the design and operation of a data analysis facility for a high-energy physics experiment at a multi-purpose computing centre. At the spotlight is a Tier-2 centre of the distributed computing model of the ALICE experiment at the Large Hadron Collider at CERN in Geneva, Switzerland. The design steps, examined in the thesis, include analysis and optimization of the I/O access patterns of the user workload, integration of the storage resources, and development of the techniques for effective system administration and operation of the facility in a shared computing environment. A number of I/O access performance issues on multiple levels of the I/O subsystem, introduced by utilization of hard disks for data storage, have been addressed by the means of exhaustive benchmarking and thorough analysis of the I/O of the user applications in the ALICE software framework. Defining the set of requirements to the storage system, describing the potential performance bottlenecks and single points of failure and examining possible ways to avoid them allows one to develop guidelines for selecting the way how to integrate the storage resources. The solution, how to preserve a specific software stack for the experiment in a shared environment, is presented along with its effects on the user workload performance. The proposal for a flexible model to deploy and operate the ALICE Tier-2 infrastructure and applications in a virtual environment through adoption of the cloud computing technology and the 'Infrastructure as Code' concept completes the thesis. Scientific software applications can be efficiently computed in a virtual environment, and there is an urgent need to adapt the infrastructure for effective usage of cloud resources.

  8. Conceptual design of an ALICE Tier-2 centre. Integrated into a multi-purpose computing facility

    International Nuclear Information System (INIS)

    Zynovyev, Mykhaylo

    2012-01-01

    This thesis discusses the issues and challenges associated with the design and operation of a data analysis facility for a high-energy physics experiment at a multi-purpose computing centre. At the spotlight is a Tier-2 centre of the distributed computing model of the ALICE experiment at the Large Hadron Collider at CERN in Geneva, Switzerland. The design steps, examined in the thesis, include analysis and optimization of the I/O access patterns of the user workload, integration of the storage resources, and development of the techniques for effective system administration and operation of the facility in a shared computing environment. A number of I/O access performance issues on multiple levels of the I/O subsystem, introduced by utilization of hard disks for data storage, have been addressed by the means of exhaustive benchmarking and thorough analysis of the I/O of the user applications in the ALICE software framework. Defining the set of requirements to the storage system, describing the potential performance bottlenecks and single points of failure and examining possible ways to avoid them allows one to develop guidelines for selecting the way how to integrate the storage resources. The solution, how to preserve a specific software stack for the experiment in a shared environment, is presented along with its effects on the user workload performance. The proposal for a flexible model to deploy and operate the ALICE Tier-2 infrastructure and applications in a virtual environment through adoption of the cloud computing technology and the 'Infrastructure as Code' concept completes the thesis. Scientific software applications can be efficiently computed in a virtual environment, and there is an urgent need to adapt the infrastructure for effective usage of cloud resources.

  9. Ground-glass opacity: High-resolution computed tomography and 64-multi-slice computed tomography findings comparison

    International Nuclear Information System (INIS)

    Sergiacomi, Gianluigi; Ciccio, Carmelo; Boi, Luca; Velari, Luca; Crusco, Sonia; Orlacchio, Antonio; Simonetti, Giovanni

    2010-01-01

    Objective: Comparative evaluation of ground-glass opacity using the conventional high-resolution computed tomography technique and volumetric computed tomography with a 64-row multi-slice scanner, verifying the advantages of the volumetric acquisition and post-processing techniques allowed by the 64-row CT scanner. Methods: Thirty-four patients, in whom a ground-glass opacity pattern had been assessed by previous high-resolution computed tomography during a clinical-radiological follow-up for their lung disease, were studied by means of 64-row multi-slice computed tomography. A comparative evaluation of image quality was done for both CT modalities. Results: Good inter-observer agreement (κ value 0.78-0.90) was reported in the detection of ground-glass opacity with the high-resolution computed tomography technique and the volumetric computed tomography acquisition, with a moderate increase in intra-observer agreement (κ value 0.46) using volumetric computed tomography compared with high-resolution computed tomography. Conclusions: In our experience, volumetric computed tomography with a 64-row scanner shows good accuracy in the detection of ground-glass opacity, providing better spatial and temporal resolution and more advanced post-processing techniques than high-resolution computed tomography.

  10. Large Scale Document Inversion using a Multi-threaded Computing System

    Science.gov (United States)

    Jung, Sungbo; Chang, Dar-Jen; Park, Juw Won

    2018-01-01

    Current microprocessor architecture is moving towards multi-core/multi-threaded systems. This trend has led to a surge of interest in using multi-threaded computing devices, such as the Graphics Processing Unit (GPU), for general-purpose computing. We can utilize the GPU in computation as a massive parallel coprocessor because the GPU consists of multiple cores. The GPU is also an affordable, attractive, and user-programmable commodity. Nowadays a vast amount of information is flooding into the digital domain around the world. Huge volumes of data, such as digital libraries, social networking services, e-commerce product data, and reviews, are produced or collected every moment, with dramatic growth in size. Although the inverted index is a useful data structure that can be used for full-text searches or document retrieval, a large number of documents will require a tremendous amount of time to index. The performance of document inversion can be improved by multi-threaded or multi-core GPUs. Our approach is to implement a linear-time, hash-based, single program multiple data (SPMD) document inversion algorithm on the NVIDIA GPU/CUDA programming platform, utilizing the huge computational power of the GPU to develop high-performance solutions for document indexing. Our proposed parallel document inversion system shows 2-3 times faster performance than a sequential system on two different test datasets, from PubMed abstracts and e-commerce product reviews. CCS Concepts: Information systems → Information retrieval; Computing methodologies → Massively parallel and high-performance simulations.

  11. Polytopol computing for multi-core and distributed systems

    Science.gov (United States)

    Spaanenburg, Henk; Spaanenburg, Lambert; Ranefors, Johan

    2009-05-01

    Multi-core computing provides new challenges to software engineering. The paper addresses such issues in the general setting of polytopol computing, which takes into account multi-core problems in such widely differing areas as ambient intelligence sensor networks and cloud computing. It argues that the essence lies in a suitable allocation of freely moving tasks. Where hardware is ubiquitous and pervasive, the network is virtualized into a connection of software snippets judiciously injected onto hardware so that a system function again appears as a single whole. The concept of polytopol computing provides a further formalization in terms of the partitioning of labor between collector and sensor nodes. Collectors provide functions such as knowledge integration, awareness collection, situation display and reporting, communication of clues, and an inquiry interface. Sensors provide functions such as anomaly detection (communicating only singularities, not continuous observations); they are generally powered or self-powered, amorphous (not on a grid) with generation-and-attrition, field re-programmable, and plug-and-play-able. Together the collector and the sensor are part of the skeleton injector mechanism, added to every node, which gives the network the ability to organize itself into one of many topologies. Finally we discuss a number of applications and indicate how a multi-core architecture supports the security aspects of the skeleton injector.

  12. Multi-objective optimization in computer networks using metaheuristics

    CERN Document Server

    Donoso, Yezid

    2007-01-01

    Metaheuristics are widely used to solve important practical combinatorial optimization problems. Many new multicast applications emerging from the Internet, such as TV over the Internet, radio over the Internet, and multipoint video streaming, require reduced bandwidth consumption, end-to-end delay, and packet loss ratio. It is necessary to design and provision these kinds of applications with the resources necessary for their functionality. Multi-Objective Optimization in Computer Networks Using Metaheuristics provides a solution to the multi-objective problem of routing in computer networks. It analyzes layer 3 (IP), layer 2 (MPLS), and layer 1 (GMPLS and wireless functions). In particular, it assesses basic optimization concepts, as well as several techniques and algorithms for the search of minimals; examines the basic multi-objective optimization concepts and the way to solve them through traditional techniques and through several metaheuristics; and demonstrates how to analytically model the compu...
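
    Central to all such metaheuristics is the Pareto dominance test over the competing objectives (here bandwidth consumption, end-to-end delay, and packet-loss ratio, all to be minimized). A minimal sketch with made-up route scores:

```python
def dominates(a, b):
    """a dominates b if it is no worse in every objective and better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep only the non-dominated solutions."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o is not s)]

# (bandwidth consumption, end-to-end delay [ms], packet-loss ratio), toy values
routes = [(10.0, 30.0, 0.02), (12.0, 25.0, 0.02), (11.0, 35.0, 0.05)]
print(pareto_front(routes))   # the third route is dominated by the first
```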

  13. The Computational Physics Program of the national MFE Computer Center

    International Nuclear Information System (INIS)

    Mirin, A.A.

    1989-01-01

    Since June 1974, the MFE Computer Center has been engaged in a significant computational physics effort. The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The Computational Physics Group has been involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to tokamaks and compact toroids. A third area is the investigation of kinetic instabilities using a 3-D particle code; this work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence have been under examination, with the hope of being able to explain anomalous transport. Also, we are collaborating in an international effort to evaluate the fully three-dimensional linear stability of toroidal devices. In addition to these computational physics studies, the group has developed a number of linear-systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers. A summary of these programs is included in this paper. 6 tabs

  14. A physics computing bureau

    CERN Document Server

    Laurikainen, P

    1975-01-01

    The author first reviews the services offered by the Bureau to a user community scattered over three separate physics departments and a theory research institute. Limited services are also offered to non-physics research in the University, in collaboration with the University Computing Center. The personnel are divided into operations sections, responsible for terminal and data archive management, punching and document services, etc., and into analyst sections with half a dozen full-time scientific programmers recruited from among promising graduate-level physics students rather than computer scientists or mathematicians. Analysts are thus able not only to communicate with physicists but also to participate in research to some extent. Only the more demanding program development tasks are handled by the Bureau; most of the routine data processing is the users' responsibility.

  15. Multi-detector row computed tomography and blunt chest trauma

    International Nuclear Information System (INIS)

    Scaglione, Mariano; Pinto, Antonio; Pedrosa, Ivan; Sparano, Amelia; Romano, Luigia

    2008-01-01

    Blunt chest trauma is a significant source of morbidity and mortality in industrialized countries. The clinical presentation of trauma patients varies widely from one individual to another and ranges from minor reports of pain to shock. Knowledge of the mechanism of injury, the time of injury, estimates of motor vehicle accident velocity and deceleration, and evidence of associated injury to other systems are all salient features for an adequate assessment of chest trauma. Multi-detector row computed tomography (MDCT) scanning and MDCT-angiography are being used more frequently in the diagnosis of patients with chest trauma. The high sensitivity of MDCT has increased the recognized spectrum of injuries. This new technology can be regarded as an extremely valuable adjunct to physical examination in recognizing suspected and unsuspected blunt chest trauma.

  16. Physics, Computer Science and Mathematics Division annual report, 1 January--31 December 1975. [LBL

    Energy Technology Data Exchange (ETDEWEB)

    Lepore, J.L. (ed.)

    1975-01-01

    This annual report describes the scientific research and other work carried out during the calendar year 1975. The report is nontechnical in nature, with almost no data. A 17-page bibliography lists the technical papers which detail the work. The contents of the report include the following: experimental physics (high-energy physics--SPEAR, PEP, SLAC, FNAL, BNL, Bevatron; particle data group; medium-energy physics; astrophysics, astronomy, and cosmic rays; instrumentation development), theoretical physics (particle theory and accelerator theory and design), computer science and applied mathematics (data management systems, socio-economic environment demographic information system, computer graphics, computer networks, management information systems, computational physics and data analysis, mathematical modeling, programing languages, applied mathematics research), real-time systems (ModComp and PDP networks), and computer center activities (systems programing, user services, hardware development, computer operations). A glossary of computer science and mathematics terms is also included. 32 figures. (RWR)

  17. Multi keno-VAX a modified version of the reactor computer code Multi keno-2

    Energy Technology Data Exchange (ETDEWEB)

    Imam, M [National center for nuclear safety and radiation control, atomic energy authority, Cairo, (Egypt)

    1995-10-01

    The reactor computer code Multi keno-2 was developed in Japan from the original Monte Carlo code Keno-IV. Through applications of this code to some real problems, fatal errors were detected. These errors are related to the restart option in the code, which is essential for solving time-consuming problems on a mini-computer like the VAX-6320. The errors were corrected and other modifications were carried out in the code. Because of these modifications, a new input data description was written for the code. Thus a new VAX/VMS version of the program was developed which is also adaptable to mini-mainframes. This newly developed program, called Multi keno-VAX, has been accepted into the NEA-IAEA data bank and added to its international computer code library. 1 fig.

  18. Multi keno-VAX a modified version of the reactor computer code Multi keno-2

    International Nuclear Information System (INIS)

    Imam, M.

    1995-01-01

    The reactor computer code Multi keno-2 was developed in Japan from the original Monte Carlo code Keno-IV. Through applications of this code to some real problems, fatal errors were detected. These errors are related to the restart option in the code, which is essential for solving time-consuming problems on a mini-computer like the VAX-6320. The errors were corrected and other modifications were carried out in the code. Because of these modifications, a new input data description was written for the code. Thus a new VAX/VMS version of the program was developed which is also adaptable to mini-mainframes. This newly developed program, called Multi keno-VAX, has been accepted into the NEA-IAEA data bank and added to its international computer code library. 1 fig

  19. Aerodynamic multi-objective integrated optimization based on principal component analysis

    Directory of Open Access Journals (Sweden)

    Jiangtao HUANG

    2017-08-01

    Based on an improved multi-objective particle swarm optimization (MOPSO) algorithm with a principal component analysis (PCA) methodology, an efficient high-dimension multi-objective optimization method is proposed which aims to improve the convergence of the Pareto front in multi-objective optimization design. The mathematical efficiency, physical reasonableness, and reliability of PCA in dealing with redundant objectives are verified by the typical DTLZ5 test function and by a multi-objective correlation analysis of a supercritical airfoil, and the proposed method is integrated into the aircraft multi-disciplinary design (AMDEsign) platform, which contains aerodynamics, stealth, and structure weight analysis and optimization modules. The proposed method is then used for the multi-point integrated aerodynamic optimization of a wide-body passenger aircraft, in which the redundant objectives identified by PCA are transformed into optimization constraints, and several design methods are compared. The design results illustrate that the strategy used in this paper is sufficient and that the multi-point design requirements of the passenger aircraft are met. The visualization of the non-dominated Pareto set is improved by effectively reducing the dimension without losing the primary features of the problem.
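
    The PCA step can be pictured as follows: applying PCA to the matrix of objective values (rows are candidate designs, columns are objectives) reveals when two objectives are nearly collinear, so one of them can be demoted to a constraint. The sketch uses random placeholder data, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
f1 = rng.normal(size=200)
f2 = 0.95 * f1 + 0.05 * rng.normal(size=200)   # nearly redundant with f1
f3 = rng.normal(size=200)
F = np.column_stack([f1, f2, f3])              # designs x objectives

Z = (F - F.mean(0)) / F.std(0)                 # standardize each objective
eigval, eigvec = np.linalg.eigh(np.corrcoef(Z, rowvar=False))
explained = np.sort(eigval)[::-1] / eigval.sum()
print(explained)   # ~two components carry almost all the variance,
                   # suggesting one of f1/f2 can be treated as a constraint
```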

  20. Asynchronous Multi-Party Computation with Quadratic Communication

    DEFF Research Database (Denmark)

    Hirt, Martin; Nielsen, Jesper Buus; Przydatek, Bartosz

    2008-01-01

    We present an efficient protocol for secure multi-party computation in the asynchronous model with optimal resilience. For n parties, with up to t < n/3 of them corrupted, and security parameter κ, a circuit with c gates can be securely computed with communication complexity O(cn²κ) bits, which...... circuit randomization due to Beaver (Crypto’91), and an abstraction of certificates, which can be of independent interest....
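
    For context, Beaver's circuit randomization reduces each multiplication gate to the opening of two masked values, using a precomputed random triple. Below is a toy two-party, semi-honest, synchronous sketch of that one idea; the paper's actual protocol is asynchronous, tolerates t < n/3 corruptions, and uses certificates:

```python
import random

P = 2**61 - 1                                    # prime field modulus

def share(x):                                    # 2-out-of-2 additive sharing
    r = random.randrange(P)
    return [r, (x - r) % P]

def reveal(shares):
    return sum(shares) % P

x, y = 42, 99                                    # private inputs (toy values)
a, b = 7, 13                                     # precomputed random triple
c = a * b % P
xs, ys, as_, bs, cs = share(x), share(y), share(a), share(b), share(c)

# The parties jointly open the masked differences d = x - a and e = y - b;
# d and e leak nothing because a and b are uniformly random masks.
d = (reveal(xs) - reveal(as_)) % P
e = (reveal(ys) - reveal(bs)) % P

# Local share of the product: z_i = c_i + d*b_i + e*a_i, plus d*e added once.
zs = [(cs[i] + d * bs[i] + e * as_[i]) % P for i in range(2)]
zs[0] = (zs[0] + d * e) % P

assert reveal(zs) == (x * y) % P                 # the shares reconstruct x*y
```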

  1. Quantum computing for physics research

    International Nuclear Information System (INIS)

    Georgeot, B.

    2006-01-01

    Quantum computers hold great promises for the future of computation. In this paper, this new kind of computing device is presented, together with a short survey of the status of research in this field. The principal algorithms are introduced, with an emphasis on the applications of quantum computing to physics. Experimental implementations are also briefly discussed

  2. Computing in high-energy physics

    International Nuclear Information System (INIS)

    Mount, Richard P.

    2016-01-01

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Lastly, I describe recent developments aimed at improving the overall coherence of high-energy physics software

  3. Computing in high-energy physics

    Science.gov (United States)

    Mount, Richard P.

    2016-04-01

    I present a very personalized journey through more than three decades of computing for experimental high-energy physics, pointing out the enduring lessons that I learned. This is followed by a vision of how the computing environment will evolve in the coming ten years and the technical challenges that this will bring. I then address the scale and cost of high-energy physics software and examine the many current and future challenges, particularly those of management, funding and software-lifecycle management. Finally, I describe recent developments aimed at improving the overall coherence of high-energy physics software.

  4. Multivariate statistical analysis of a multi-step industrial processes

    DEFF Research Database (Denmark)

    Reinikainen, S.P.; Høskuldsson, Agnar

    2007-01-01

    Monitoring and quality control of industrial processes often produce information on how the data have been obtained. In batch processes, for instance, the process is carried out in stages; some process or control parameters are set at each stage. However, the obtained data might not be utilized...... efficiently, even if this information may reveal significant knowledge about process dynamics or ongoing phenomena. When studying the process data, it may be important to analyse the data in the light of the physical or time-wise development of each process step. In this paper, a unified approach to analyse...... multivariate multi-step processes, where results from each step are used to evaluate future results, is presented. The methods presented are based on Priority PLS Regression. The basic idea is to compute the weights in the regression analysis for given steps, but adjust all data by the resulting score vectors...
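
    A loose sketch of the step-wise idea follows: one PLS component is extracted per process step, and the response and all later blocks are adjusted by each step's score vector, so results from each step inform the evaluation of the following ones. This is our simplification for illustration, not the published Priority PLS Regression algorithm:

```python
import numpy as np

def stepwise_pls_sketch(blocks, y):
    """blocks: list of (samples x variables) arrays, one per process step;
    y: response vector. Returns one score vector per step."""
    y = np.asarray(y, dtype=float).copy()
    blocks = [np.asarray(X, dtype=float).copy() for X in blocks]
    scores = []
    for k, X in enumerate(blocks):
        w = X.T @ y                      # weights from the X'y covariance
        w /= np.linalg.norm(w)
        t = X @ w                        # score vector for step k
        scores.append(t)
        d = t / (t @ t)
        y -= t * (d @ y)                 # deflate the response
        for j in range(k + 1, len(blocks)):
            blocks[j] -= np.outer(t, d @ blocks[j])   # adjust later steps
    return scores

rng = np.random.default_rng(1)
step1, step2 = rng.normal(size=(30, 4)), rng.normal(size=(30, 3))
y = step1[:, 0] + 0.5 * step2[:, 1] + 0.1 * rng.normal(size=30)
t1, t2 = stepwise_pls_sketch([step1, step2], y)
```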

  5. Grid computing in high-energy physics

    International Nuclear Information System (INIS)

    Bischof, R.; Kuhn, D.; Kneringer, E.

    2003-01-01

    Full text: The future high energy physics experiments are characterized by an enormous amount of data delivered by the large detectors presently under construction, e.g. at the Large Hadron Collider, and by a large number of scientists (several thousand) requiring simultaneous access to the resulting experimental data. Since it seems unrealistic to provide the necessary computing and storage resources at one single place (e.g. CERN), the concept of grid computing, i.e. the use of distributed resources, will be chosen. The DataGrid project (under the leadership of CERN) develops, based on the Globus toolkit, the software necessary for computation and analysis of shared large-scale databases in a grid structure. The high energy physics group Innsbruck participates with several resources in the DataGrid test bed. In this presentation our experience as grid users and resource providers is summarized. In cooperation with the local IT-center (ZID) we installed a flexible grid system which uses PCs (at the moment 162) in students' labs during nights, weekends and holidays, which is especially used to compare different systems (local resource managers, other grid software e.g. from the Nordugrid project) and to supply a test bed for the future Austrian Grid (AGrid). (author)

  6. OpenCMISS: a multi-physics & multi-scale computational infrastructure for the VPH/Physiome project.

    Science.gov (United States)

    Bradley, Chris; Bowery, Andy; Britten, Randall; Budelmann, Vincent; Camara, Oscar; Christie, Richard; Cookson, Andrew; Frangi, Alejandro F; Gamage, Thiranja Babarenda; Heidlauf, Thomas; Krittian, Sebastian; Ladd, David; Little, Caton; Mithraratne, Kumar; Nash, Martyn; Nickerson, David; Nielsen, Poul; Nordbø, Oyvind; Omholt, Stig; Pashaei, Ali; Paterson, David; Rajagopal, Vijayaraghavan; Reeve, Adam; Röhrle, Oliver; Safaei, Soroush; Sebastián, Rafael; Steghöfer, Martin; Wu, Tim; Yu, Ting; Zhang, Heye; Hunter, Peter

    2011-10-01

    The VPH/Physiome Project is developing the model encoding standards CellML (cellml.org) and FieldML (fieldml.org) as well as web-accessible model repositories based on these standards (models.physiome.org). Freely available open source computational modelling software is also being developed to solve the partial differential equations described by the models and to visualise results. The OpenCMISS code (opencmiss.org), described here, has been developed by the authors over the last six years to replace the CMISS code that has supported a number of organ system Physiome projects. OpenCMISS is designed to encompass multiple sets of physical equations and to link subcellular and tissue-level biophysical processes into organ-level processes. In the Heart Physiome project, for example, the large deformation mechanics of the myocardial wall need to be coupled to both ventricular flow and embedded coronary flow, and the reaction-diffusion equations that govern the propagation of electrical waves through myocardial tissue need to be coupled with equations that describe the ion channel currents that flow through the cardiac cell membranes. In this paper we discuss the design principles and distributed memory architecture behind the OpenCMISS code. We also discuss the design of the interfaces that link the sets of physical equations across common boundaries (such as fluid-structure coupling), or between spatial fields over the same domain (such as coupled electromechanics), and the concepts behind CellML and FieldML that are embodied in the OpenCMISS data structures. We show how all of these provide a flexible infrastructure for combining models developed across the VPH/Physiome community. Copyright © 2011 Elsevier Ltd. All rights reserved.

  7. Security analysis of cyber-physical system

    Science.gov (United States)

    Li, Bo; Zhang, Lichen

    2017-05-01

    In recent years, the Cyber-Physical System (CPS) has become an important research direction in academic and scientific-technological circles at home and abroad, and is considered the third wave of world information technology, following the computer and the Internet. A CPS is a multi-dimensional, heterogeneous, deeply integrated open system involving knowledge from computer science, communication, control, and other disciplines. As these disciplines differ significantly in their research theories and methods, the application of CPS has brought great challenges. This paper introduces the definition and characteristics of CPS, analyzes the current state of CPS, analyzes the security threats faced by CPS, and gives security solutions for those threats. It also discusses CPS-specific security technology, to promote the healthy development of CPS in information security.

  8. What Computational Approaches Should be Taught for Physics?

    Science.gov (United States)

    Landau, Rubin

    2005-03-01

    The standard Computational Physics courses are designed for upper-level physics majors who already have some computational skills. We believe that it is important for first-year physics students to learn modern computing techniques that will be useful throughout their college careers, even before they have learned the math and science required for Computational Physics. Teaching such Introductory Scientific Computing courses requires that some choices be made as to what subjects and computer languages will be taught. Our survey of colleagues active in Computational Physics and Physics Education shows no predominant choice, with strong positions taken for the compiled languages Java, C, C++ and Fortran90, as well as for problem-solving environments like Maple and Mathematica. Over the last seven years we have developed an Introductory course and have written up those courses as textbooks for others to use. We will describe our model of using both a problem-solving environment and a compiled language. The developed materials are available in both Maple and Mathematica, and in Java and Fortran90 (Princeton University Press, to be published; www.physics.orst.edu/~rubin/IntroBook/).

  9. The computational physics program of the National MFE Computer Center

    International Nuclear Information System (INIS)

    Mirin, A.A.

    1988-01-01

    The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The computational physics group is involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to compact toroids. A third major area is the investigation of kinetic instabilities using a 3-D particle code. This work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence are being examined. In addition to these computational physics studies, the group has developed a number of linear-systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers

  10. Disk access controller for Multi 8 computer

    International Nuclear Information System (INIS)

    Segalard, Jean

    1970-01-01

    After having presented the initial characteristics and weaknesses of the software provided for the control of a memory disk coupled with a Multi 8 computer, the author reports the development and improvement of this controller software. He presents the different constitutive parts of the computer and the operation of the disk coupling and of the direct access to memory. He reports the development of the disk access controller: software organisation, loader, subprograms and statements

  11. Multi-Physics Demonstration Problem with the SHARP Reactor Simulation Toolkit

    Energy Technology Data Exchange (ETDEWEB)

    Merzari, E. [Argonne National Lab. (ANL), Argonne, IL (United States); Shemon, E. R. [Argonne National Lab. (ANL), Argonne, IL (United States); Yu, Y. Q. [Argonne National Lab. (ANL), Argonne, IL (United States); Thomas, J. W. [Argonne National Lab. (ANL), Argonne, IL (United States); Obabko, A. [Argonne National Lab. (ANL), Argonne, IL (United States); Jain, Rajeev [Argonne National Lab. (ANL), Argonne, IL (United States); Mahadevan, Vijay [Argonne National Lab. (ANL), Argonne, IL (United States); Tautges, Timothy [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Solberg, Jerome [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Ferencz, Robert Mark [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Whitesides, R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2015-12-21

    This report describes the use of SHARP to perform a first-of-a-kind analysis of the core radial expansion phenomenon in an SFR. This effort required significant advances in the framework used to drive the coupled simulations, manipulate the mesh in response to the deformation of the geometry, and generate the necessary modified mesh files. Furthermore, the model geometry is fairly complex, and consistent mesh generation for the three physics modules required significant effort. Fully integrated simulations of a 7-assembly mini-core test problem have been performed, and the results are presented here. Physics models of a full-core model of the Advanced Burner Test Reactor (ABTR) have also been developed for each of the three physics modules. Standalone results of each of the three physics modules for the ABTR are presented here, which provides a demonstration of the feasibility of the fully integrated simulation.

  12. Modular ORIGEN-S for multi-physics code systems

    Energy Technology Data Exchange (ETDEWEB)

    Yesilyurt, Gokhan; Clarno, Kevin T.; Gauld, Ian C., E-mail: yesilyurtg@ornl.gov, E-mail: clarnokt@ornl.gov, E-mail: gauldi@ornl.gov [Oak Ridge National Laboratory, TN (United States); Galloway, Jack, E-mail: jack@galloways.net [Los Alamos National Laboratory, Los Alamos, NM (United States)

    2011-07-01

    The ORIGEN-S code in the SCALE 6.0 nuclear analysis code suite is a well-validated tool to calculate the time-dependent concentrations of nuclides due to isotopic depletion, decay, and transmutation for many systems in a wide range of time scales. Application areas include nuclear reactor and spent fuel storage analyses, burnup credit evaluations, decay heat calculations, and environmental assessments. Although simple to use within the SCALE 6.0 code system, especially with the ORIGEN-ARP graphical user interface, it is generally complex to use as a component within an externally developed code suite because of its tight coupling within the infrastructure of the larger SCALE 6.0 system. The ORIGEN2 code, which has been widely integrated within other simulation suites, is no longer maintained by Oak Ridge National Laboratory (ORNL), has obsolete data, and has a relatively small validation database. Therefore, a modular version of the SCALE/ORIGEN-S code was developed to simplify its integration with other software packages to allow multi-physics nuclear code systems to easily incorporate the well-validated isotopic depletion, decay, and transmutation capability to perform realistic nuclear reactor and fuel simulations. SCALE/ORIGEN-S was extensively restructured to develop a modular version that allows direct access to the matrix solvers embedded in the code. Problem initialization and the solver were segregated to provide a simple application program interface and fewer input/output operations for the multi-physics nuclear code systems. Furthermore, new interfaces were implemented to access and modify the ORIGEN-S input variables and nuclear cross-section data through external drivers. Three example drivers were implemented, in the C, C++, and Fortran 90 programming languages, to demonstrate the modular use of the new capability. This modular version of SCALE/ORIGEN-S has been embedded within several multi-physics software development projects at ORNL, including

  13. Modular ORIGEN-S for multi-physics code systems

    International Nuclear Information System (INIS)

    Yesilyurt, Gokhan; Clarno, Kevin T.; Gauld, Ian C.; Galloway, Jack

    2011-01-01

    The ORIGEN-S code in the SCALE 6.0 nuclear analysis code suite is a well-validated tool to calculate the time-dependent concentrations of nuclides due to isotopic depletion, decay, and transmutation for many systems in a wide range of time scales. Application areas include nuclear reactor and spent fuel storage analyses, burnup credit evaluations, decay heat calculations, and environmental assessments. Although simple to use within the SCALE 6.0 code system, especially with the ORIGEN-ARP graphical user interface, it is generally complex to use as a component within an externally developed code suite because of its tight coupling within the infrastructure of the larger SCALE 6.0 system. The ORIGEN2 code, which has been widely integrated within other simulation suites, is no longer maintained by Oak Ridge National Laboratory (ORNL), has obsolete data, and has a relatively small validation database. Therefore, a modular version of the SCALE/ORIGEN-S code was developed to simplify its integration with other software packages to allow multi-physics nuclear code systems to easily incorporate the well-validated isotopic depletion, decay, and transmutation capability to perform realistic nuclear reactor and fuel simulations. SCALE/ORIGEN-S was extensively restructured to develop a modular version that allows direct access to the matrix solvers embedded in the code. Problem initialization and the solver were segregated to provide a simple application program interface and fewer input/output operations for the multi-physics nuclear code systems. Furthermore, new interfaces were implemented to access and modify the ORIGEN-S input variables and nuclear cross-section data through external drivers. Three example drivers were implemented, in the C, C++, and Fortran 90 programming languages, to demonstrate the modular use of the new capability. This modular version of SCALE/ORIGEN-S has been embedded within several multi-physics software development projects at ORNL, including

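    The depletion/decay problem that ORIGEN-S solves reduces, for reaction rates held constant over a time step, to the linear system dN/dt = A N, whose formal solution is a matrix exponential. As a concrete illustration, here is a minimal sketch of that kernel in Python, using a hypothetical three-nuclide chain with made-up rates; this is not the ORIGEN-S API or its data:

        import numpy as np
        from scipy.linalg import expm

        # Hypothetical 3-nuclide decay chain A -> B -> C (illustrative rates, 1/s).
        lam_a, lam_b = 1.0e-3, 5.0e-4
        # Depletion matrix for dN/dt = A @ N; columns sum to zero (conservation).
        A = np.array([[-lam_a,  0.0,   0.0],
                      [ lam_a, -lam_b, 0.0],
                      [ 0.0,    lam_b, 0.0]])
        N0 = np.array([1.0e20, 0.0, 0.0])   # initial concentrations
        t = 3600.0                          # one-hour time step
        N = expm(A * t) @ N0                # matrix-exponential solution
        print(N, N.sum())                   # total nuclide number is conserved
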
  14. Extreme Physics and Informational/Computational Limits

    Energy Technology Data Exchange (ETDEWEB)

    Di Sia, Paolo, E-mail: paolo.disia@univr.it, E-mail: 10alla33@virgilio.it [Department of Computer Science, Faculty of Science, Verona University, Strada Le Grazie 15, I-37134 Verona (Italy) and Faculty of Computer Science, Free University of Bozen, Piazza Domenicani 3, I-39100 Bozen-Bolzano (Italy)

    2011-07-08

    A sector of current theoretical physics, often called 'extreme physics', deals with topics such as superstring theories, the multiverse, quantum teleportation, and negative energy, which only a few years ago were considered scientific imagination or purely speculative physics. Present experimental lines of evidence and the implications of cosmological observations seem, on the contrary, to support such theories. These new physical developments lead to informational limits, such as the quantity of information that a physical system can record, and computational limits, resulting from considerations regarding black holes and space-time fluctuations. In this paper I consider important limits for information and computation resulting in particular from string theories and their foundations.

  15. Extreme Physics and Informational/Computational Limits

    International Nuclear Information System (INIS)

    Di Sia, Paolo

    2011-01-01

    A sector of current theoretical physics, often called 'extreme physics', deals with topics such as superstring theories, the multiverse, quantum teleportation, and negative energy, which only a few years ago were considered scientific imagination or purely speculative physics. Present experimental lines of evidence and the implications of cosmological observations seem, on the contrary, to support such theories. These new physical developments lead to informational limits, such as the quantity of information that a physical system can record, and computational limits, resulting from considerations regarding black holes and space-time fluctuations. In this paper I consider important limits for information and computation resulting in particular from string theories and their foundations.

  16. The Review of Visual Analysis Methods of Multi-modal Spatio-temporal Big Data

    Directory of Open Access Journals (Sweden)

    ZHU Qing

    2017-10-01

    Full Text Available The visual analysis of spatio-temporal big data is not only a state-of-the-art research direction of both big data analysis and data visualization, but also a core module of the pan-spatial information system. This paper reviews existing visual analysis methods at three levels: descriptive visual analysis, explanatory visual analysis and exploratory visual analysis, focusing on spatio-temporal big data's characteristics of multi-source, multi-granularity, multi-modality and complex association. The technical difficulties and development tendencies of multi-modal feature selection, innovative human-computer interaction analysis and exploratory visual reasoning in the visual analysis of spatio-temporal big data are discussed. Research shows that the study of descriptive visual analysis for data visualization is relatively mature. Explanatory visual analysis has become the focus of big data analysis; it is mainly based on interactive data mining in a visual environment to diagnose the implicit causes of problems. The exploratory visual analysis method still needs a major breakthrough.

  17. Parallelized computation for computer simulation of electrocardiograms using personal computers with multi-core CPU and general-purpose GPU.

    Science.gov (United States)

    Shen, Wenfeng; Wei, Daming; Xu, Weimin; Zhu, Xin; Yuan, Shizhong

    2010-10-01

    Biological computations like electrocardiological modelling and simulation usually require high-performance computing environments. This paper introduces an implementation of parallel computation for computer simulation of electrocardiograms (ECGs) in a personal computer environment with an Intel Core (TM) 2 Quad Q6600 CPU and a GeForce 8800GT GPU, with software support from OpenMP and CUDA. It was tested in three parallelization device settings: (a) a four-core CPU without a general-purpose GPU, (b) a general-purpose GPU plus 1 core of CPU, and (c) a four-core CPU plus a general-purpose GPU. To effectively take advantage of a multi-core CPU and a general-purpose GPU, an algorithm based on load-prediction dynamic scheduling was developed and applied to setting (c). In the simulation with 1600 time steps, the speedup of the parallel computation as compared to the serial computation was 3.9 in setting (a), 16.8 in setting (b), and 20.0 in setting (c). This study demonstrates that a current PC with a multi-core CPU and a general-purpose GPU provides a good environment for parallel computations in biological modelling and simulation studies. Copyright 2010 Elsevier Ireland Ltd. All rights reserved.

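    The load-prediction dynamic scheduling idea above (splitting each step's work between the CPU and the GPU in proportion to their recently measured throughput) can be sketched independently of OpenMP and CUDA. A minimal Python sketch, with all throughput numbers invented for illustration:

        # Load-prediction dynamic scheduling sketch: each simulation step's work
        # units are split between two devices in proportion to the throughput
        # (units/s) measured on the previous step. All numbers are illustrative.
        work_units = 10000
        rate = {"cpu": 1.0, "gpu": 1.0}          # initial guesses, updated each step

        def split(work, rate):
            total = rate["cpu"] + rate["gpu"]
            n_cpu = round(work * rate["cpu"] / total)
            return n_cpu, work - n_cpu

        for step in range(5):
            n_cpu, n_gpu = split(work_units, rate)
            # Stand-ins for measured kernel times (a real code would time them).
            t_cpu = n_cpu / 2.0e5                # pretend the CPU does 2e5 units/s
            t_gpu = n_gpu / 1.2e6                # pretend the GPU does 1.2e6 units/s
            rate = {"cpu": n_cpu / t_cpu, "gpu": n_gpu / t_gpu}
            print(step, n_cpu, n_gpu, max(t_cpu, t_gpu))  # step time = slower device
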
  18. XXVII IUPAP Conference on Computational Physics (CCP2015)

    International Nuclear Information System (INIS)

    Santra, Sitangshu Bikas; Ray, Purusattam

    2016-01-01

    The 27th IUPAP Conference on Computational Physics, CCP2015, was held in the heritage city Guwahati, in the eastern part of India, next to the mighty river Brahmaputra, during December 2-5, 2015. The Conference on Computational Physics is organized annually under the auspices of Commission 20 (C20) of the IUPAP (International Union of Pure and Applied Physics). This is the first time it has been held in India. Almost 300 participants from 25 countries convened at the auditorium and lecture halls at the Indian Institute of Technology Guwahati for four days. Thirteen plenary speakers, fifty six invited speakers, three presnters from the computer industries and two hundred and eight contributory participants coverd a broad range of topics in computational physics and related areas. Thirty eight women participated in CCP2015 and seven of them presented invited talks. This volume of Journal of Physics: Conference Series contains the proceedings of the scientific contributions presented at the Conference. The main purpose of the meeting was to discuss the progress, opportunities and challenges of common interest to physicists engaged in computational research. Computational physics has taken giant leaps during the lat few years, not only because of the enormous increases in computer power but especially because of the development of new methods and algorithms. Computational physics now represents a third leg of research alongside analytical theory and experiments. A meeting such as CCP, must have sufficient depth in different areas and at the same time should be broad and accessible. The topics covered in this conference were: Materials/Condensed Matter Theory and Nanoscience, Strongly Correlated Systems and Quantum Phase Transitions, Quantum Chemistry and Atomic Physics, Quantum Chromodynamics, Astrophysics, Plasma Physics, Nuclear and High Energy Physics, Complex Systems: Chaos and Statistical Physics, Macroscopic Transport and Mesoscopic Methods, Biological Physics

  19. Computer Aided Design and Analysis of Separation Processes with Electrolyte Systems

    DEFF Research Database (Denmark)

    A methodology for computer aided design and analysis of separation processes involving electrolyte systems is presented. The methodology consists of three main parts. The thermodynamic part "creates" the problem specific property model package, which is a collection of pure component and mixture property models. The design and analysis part generates process (flowsheet) alternatives, evaluates/analyses feasibility of separation and provides a visual operation path for the desired separation. The simulation part consists of a simulation/calculation engine that allows the screening and validation of process alternatives. For the simulation part, a general multi-purpose, multi-phase separation model has been developed and integrated to an existing computer aided system. Application of the design and analysis methodology is highlighted through two illustrative case studies.

  20. Computer Aided Design and Analysis of Separation Processes with Electrolyte Systems

    DEFF Research Database (Denmark)

    Takano, Kiyoteru; Gani, Rafiqul; Kolar, P.

    2000-01-01

    A methodology for computer aided design and analysis of separation processes involving electrolyte systems is presented. The methodology consists of three main parts. The thermodynamic part 'creates' the problem specific property model package, which is a collection of pure component and mixture property models. The design and analysis part generates process (flowsheet) alternatives, evaluates/analyses feasibility of separation and provides a visual operation path for the desired separation. The simulation part consists of a simulation/calculation engine that allows the screening and validation of process alternatives. For the simulation part, a general multi-purpose, multi-phase separation model has been developed and integrated to an existing computer aided system. Application of the design and analysis methodology is highlighted through two illustrative case studies. (C) 2000 Elsevier Science...

  1. Grid Computing in High Energy Physics

    International Nuclear Information System (INIS)

    Avery, Paul

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software resources, regardless of location); (4) collaboration (providing tools that allow members full and fair access to all collaboration resources and enable distributed teams to work effectively, irrespective of location); and (5) education, training and outreach (providing resources and mechanisms for training students and for communicating important information to the public). It is believed that computing infrastructures based on Data Grids and optical networks can meet these challenges and can offer data intensive enterprises in high energy physics and elsewhere a comprehensive, scalable framework for collaboration and resource sharing. A number of Data Grid projects have been underway since 1999. Interestingly, the most exciting and far ranging of these projects are led by collaborations of high energy physicists, computer scientists and scientists from other disciplines in support of experiments with massive, near-term data needs. I review progress in this

  2. Electronics, trigger, data acquisition, and computing working group on future B physics experiments

    International Nuclear Information System (INIS)

    Geer, S.

    1993-01-01

    Electronics, trigger, data acquisition, and computing: this is a very broad list of topics. Nevertheless in a modern particle physics experiment one thinks in terms of a data pipeline in which the front end electronics, the trigger and data acquisition, and the offline reconstruction are linked together. In designing any piece of this pipeline it is necessary to understand the bigger picture of the data flow, data rates and volume, and the input rate, output rate, and latencies for each part of the pipeline. All of this needs to be developed with a clear understanding of the requirements imposed by the physics goals of the experiment; the signal efficiencies, background rates, and the amount of recorded information that needs to be propagated through the pipeline to select and analyse the events of interest. The technology needed to meet the demanding high data volume needs of the next round of B physics experiments appears to be available, now or within a couple of years. This seems to be the case for both fixed target and collider B physics experiments. Although there are many differences between the various data pipelines that are being proposed, there are also striking similarities. All experiments have a multi-level trigger scheme (most have levels 1, 2, and 3) where the final level consists of a computing farm that can run offline-type code and reduce the data volume by a factor of a few. Finally, the ability to reconstruct large data volumes offline in a reasonably short time, and making large data volumes available to many physicists for analysis, imposes severe constraints on the foreseen data pipelines, and a significant uncertainty in evaluating the various approaches proposed

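    The rate-and-volume bookkeeping described above can be made concrete with back-of-envelope arithmetic for a hypothetical three-level trigger; all rates, rejection factors and event sizes below are invented, not the parameters of any proposed experiment:

        # Back-of-envelope trigger pipeline: each level rejects most events; the
        # surviving rate times the event size gives the bandwidth the next stage
        # must absorb. All numbers are illustrative.
        rate_hz = 1.0e7                 # input rate into level 1
        event_kb = 100.0                # recorded event size
        for level, rejection in [("L1", 1000.0), ("L2", 40.0), ("L3 farm", 5.0)]:
            rate_hz /= rejection
            print(f"{level}: {rate_hz:.1f} Hz, {rate_hz * event_kb / 1024:.1f} MB/s")
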
  3. Computational Methods in Plasma Physics

    CERN Document Server

    Jardin, Stephen

    2010-01-01

    Assuming no prior knowledge of plasma physics or numerical methods, Computational Methods in Plasma Physics covers the computational mathematics and techniques needed to simulate magnetically confined plasmas in modern magnetic fusion experiments and future magnetic fusion reactors. Largely self-contained, the text presents the basic concepts necessary for the numerical solution of partial differential equations. Along with discussing numerical stability and accuracy, the author explores many of the algorithms used today in enough depth so that readers can analyze their stability, efficiency,

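    The kind of stability analysis referred to above can be illustrated with the simplest possible case: the explicit scheme for 1-D diffusion is stable only when dt <= dx^2/(2D). A toy example (not taken from the book):

        import numpy as np

        # Explicit 1-D diffusion u_t = D u_xx; von Neumann analysis gives the
        # stability restriction dt <= dx**2 / (2*D). Toy illustration.
        D, nx = 1.0, 51
        dx = 1.0 / (nx - 1)
        dt = 0.4 * dx**2 / D                  # safely below the stability limit
        u = np.zeros(nx); u[nx // 2] = 1.0    # initial spike
        for _ in range(200):
            u[1:-1] += D * dt / dx**2 * (u[2:] - 2*u[1:-1] + u[:-2])
        # Peak decays; discrete "mass" is approximately conserved until the
        # pulse reaches the fixed boundaries.
        print(u.max(), u.sum())
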
  4. Implementation multi representation and oral communication skills in Department of Physics Education on Elementary Physics II

    International Nuclear Information System (INIS)

    Kusumawati, Intan; Marwoto, Putut; Linuwih, Suharto

    2015-01-01

    The ability to use multiple representations has been widely studied, but it had not previously been implemented through a learning model. This study aimed to determine students' multi-representation ability, the relationship between multi-representation ability and oral communication skills, and the application of the relationship between the two abilities through the Presentatif Based on Multi representation (PBM) learning model in solving geometric optics problems (Elementary Physics II). A concurrent mixed-methods design with qualitative-quantitative weighting was used. Data were collected by means of pre-tests and post-tests in essay form, observation sheets for oral communication skills, and an observation sheet for assessing learning with the PBM model; all instruments have high validity, with scores of 3.91, 4.22, 4.13, and 3.88, respectively. Reliability was tested with the Cronbach alpha technique, giving a reliability coefficient of 0.494. Students of the Department of Physics Education, Unnes, were the research subjects. The students' tendency toward each representation, from high to low, follows the order M, D, G, V, whereas in order of accuracy the order is V, D, G, M. The relationship between multi-representation ability and oral communication skills is proportional, and implementing this relationship generates grounded theory. This study should be applied to other physics material, or at other universities, for comparison.

  5. Implementation multi representation and oral communication skills in Department of Physics Education on Elementary Physics II

    Science.gov (United States)

    Kusumawati, Intan; Marwoto, Putut; Linuwih, Suharto

    2015-09-01

    The ability to use multiple representations has been widely studied, but it had not previously been implemented through a learning model. This study aimed to determine students' multi-representation ability, the relationship between multi-representation ability and oral communication skills, and the application of the relationship between the two abilities through the Presentatif Based on Multi representation (PBM) learning model in solving geometric optics problems (Elementary Physics II). A concurrent mixed-methods design with qualitative-quantitative weighting was used. Data were collected by means of pre-tests and post-tests in essay form, observation sheets for oral communication skills, and an observation sheet for assessing learning with the PBM model; all instruments have high validity, with scores of 3.91, 4.22, 4.13, and 3.88, respectively. Reliability was tested with the Cronbach alpha technique, giving a reliability coefficient of 0.494. Students of the Department of Physics Education, Unnes, were the research subjects. The students' tendency toward each representation, from high to low, follows the order M, D, G, V, whereas in order of accuracy the order is V, D, G, M. The relationship between multi-representation ability and oral communication skills is proportional, and implementing this relationship generates grounded theory. This study should be applied to other physics material, or at other universities, for comparison.

  6. Implementation multi representation and oral communication skills in Department of Physics Education on Elementary Physics II

    Energy Technology Data Exchange (ETDEWEB)

    Kusumawati, Intan, E-mail: intankusumawati10@gmail.com [High School in Teaching and Education (STKIP) Singkawang Jl. STKIP–Ex. Naram, district. North Singkawang, Singkawang-79251 West Borneo (Indonesia); Marwoto, Putut, E-mail: pmarwoto@yahoo.com; Linuwih, Suharto, E-mail: suhartolinuwih@gmail.com [Department of Physics Education, State University of Semarang (Unnes) Campus Unnes Bendan Ngisor, Semarang 50233 Central Java (Indonesia)

    2015-09-30

    The ability to use multiple representations has been widely studied, but it had not previously been implemented through a learning model. This study aimed to determine students' multi-representation ability, the relationship between multi-representation ability and oral communication skills, and the application of the relationship between the two abilities through the Presentatif Based on Multi representation (PBM) learning model in solving geometric optics problems (Elementary Physics II). A concurrent mixed-methods design with qualitative-quantitative weighting was used. Data were collected by means of pre-tests and post-tests in essay form, observation sheets for oral communication skills, and an observation sheet for assessing learning with the PBM model; all instruments have high validity, with scores of 3.91, 4.22, 4.13, and 3.88, respectively. Reliability was tested with the Cronbach alpha technique, giving a reliability coefficient of 0.494. Students of the Department of Physics Education, Unnes, were the research subjects. The students' tendency toward each representation, from high to low, follows the order M, D, G, V, whereas in order of accuracy the order is V, D, G, M. The relationship between multi-representation ability and oral communication skills is proportional, and implementing this relationship generates grounded theory. This study should be applied to other physics material, or at other universities, for comparison.

  7. Feasibility study of applying a multi-channel analysis model to on-line core monitoring system

    International Nuclear Information System (INIS)

    In, W. K.; Yoo, Y. J.; Hwang, D. H.; Jun, T. H.

    1998-01-01

    A feasibility study was performed to evaluate the effect of implementing a multi-channel analysis model in an on-line core monitoring system. A simplified thermal-hydraulic model has been used in the on-line core monitoring system of a digital PWR. The design procedure, core thermal margin, and computation time were investigated for the case of replacing the simplified model with the multi-channel analysis model. For the given ranges of limiting conditions for operation in Yonggwang Unit 3 Cycle 1, the minimum DNBR of the simplified thermal-hydraulic code CETOP-D was compared to that of the multi-channel analysis code MATRA. CETOP-D additionally requires tuning to ensure an accurate and conservative DNBR calculation, whereas MATRA does not. MATRA appeared to increase the DNBR overpower margin from 2.5% to 6% over the CETOP-D margin. MATRA took approximately 1 second to compute DNBR on the HP9000 workstation system, which is longer than the DNBR computation time of CETOP-D. It is, however, fast enough to perform on-line monitoring of DNBR. It can therefore be concluded that the application of the multi-channel analysis model MATRA in the on-line core monitoring system is feasible

  8. On the Computational Capabilities of Physical Systems. Part 1; The Impossibility of Infallible Computation

    Science.gov (United States)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In this first of two papers, strong limits on the accuracy of physical computation are established. First it is proven that there cannot be a physical computer C to which one can pose any and all computational tasks concerning the physical universe. Next it is proven that no physical computer C can correctly carry out any computational task in the subset of such tasks that can be posed to C. This result holds whether the computational tasks concern a system that is physically isolated from C, or instead concern a system that is coupled to C. As a particular example, this result means that there cannot be a physical computer that can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly 'processing information faster than the universe does'. The results also mean that there cannot exist an infallible, general-purpose observation apparatus, and that there cannot be an infallible, general-purpose control apparatus. These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - a definition of 'physical computation' - is needed to address the issues considered in these papers. While this definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. The second in this pair of papers presents a preliminary exploration of some of this mathematical structure, including in particular that of prediction complexity, which is a 'physical computation

  9. Research in applied mathematics, numerical analysis, and computer science

    Science.gov (United States)

    1984-01-01

    Research conducted at the Institute for Computer Applications in Science and Engineering (ICASE) in applied mathematics, numerical analysis, and computer science is summarized and abstracts of published reports are presented. The major categories of the ICASE research program are: (1) numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; (2) control and parameter identification; (3) computational problems in engineering and the physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and (4) computer systems and software, especially vector and parallel computers.

  10. A multi-modal training programme to improve physical activity, physical fitness and perceived physical ability in obese children.

    Science.gov (United States)

    Morano, Milena; Colella, Dario; Rutigliano, Irene; Fiore, Pietro; Pettoello-Mantovani, Massimo; Campanozzi, Angelo

    2014-01-01

    Actual and perceived physical abilities are important correlates of physical activity (PA) and fitness, but little research has explored these relationships over time in obese children. This study was designed: (a) to assess the feasibility of a multi-modal training programme promoting changes in PA, fundamental motor skills and real and perceived physical abilities of obese children; and (b) to explore cross-sectional and longitudinal relationships between real and perceived physical competence in boys and girls. Forty-one participants (9.2 ± 1.2 years) were assessed before and after an 8-month intervention with respect to body composition, physical fitness, self-reported PA and perceived physical ability. After treatment, obese children reported improvements in the body mass index, PA levels, gross motor performance and actual and perceived physical abilities. Real and perceived physical competence was correlated in boys, but not in girls. Results indicate that a multi-modal programme focused on actual and perceived physical competence as associated with the gradual increase in the volume of activity might be an effective strategy to improve adherence of the participants and to increase the lifelong exercise skills of obese children.

  11. EDDYMULT: a computing system for solving eddy current problems in a multi-torus system

    International Nuclear Information System (INIS)

    Nakamura, Yukiharu; Ozeki, Takahisa

    1989-03-01

    A new computing system, EDDYMULT, based on the finite element circuit method has been developed to solve actual eddy current problems in a multi-torus system, which consists of many torus conductors and various kinds of axisymmetric poloidal field coils. The EDDYMULT computing system can deal three-dimensionally with the modal decomposition of eddy currents in a multi-torus system, the transient phenomena of eddy current distributions, and the resultant magnetic field. Therefore, users can apply the computing system to the solution of eddy current problems in a tokamak fusion device, such as the design of poloidal field coil power supplies, the mechanical stress design against the intensive electromagnetic loading on device components, and the control analysis of plasma position. The present report gives a detailed description of the EDDYMULT system as a user's manual: 1) theory, 2) structure of the code system, 3) input description, 4) problem restrictions, 5) description of the subroutines, etc. (author)

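    In miniature, the finite element circuit method reduces an eddy-current problem to coupled R-L circuit equations, L dI/dt + R I = -M dI_ext/dt. A minimal backward-Euler sketch for two coupled loops driven by a ramped external coil, with invented circuit constants (not EDDYMULT's model of any real device):

        import numpy as np

        # Two coupled conducting loops driven by a ramped external coil current:
        # L @ dI/dt + R @ I = -M_ext * dI_ext/dt, integrated with backward Euler.
        L = np.array([[1.0e-3, 2.0e-4],
                      [2.0e-4, 1.0e-3]])      # self/mutual inductances (H)
        R = np.diag([1.0e-2, 2.0e-2])         # loop resistances (ohm)
        M_ext = np.array([5.0e-4, 3.0e-4])    # coupling to the external coil (H)
        dI_ext_dt = 1.0e3                     # external coil ramp rate (A/s)
        dt, I = 1.0e-3, np.zeros(2)
        A = L + dt * R                        # (L + dt R) I_new = L I_old + dt*emf
        for _ in range(100):
            I = np.linalg.solve(A, L @ I - dt * M_ext * dI_ext_dt)
        # Eddy currents approach the steady value -inv(R) @ (M_ext * dI_ext_dt),
        # here about [-50, -15] A.
        print(I)
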
  12. Multi-Mounted X-Ray Computed Tomography.

    Science.gov (United States)

    Fu, Jian; Liu, Zhenzhong; Wang, Jingzheng

    2016-01-01

    Most existing X-ray computed tomography (CT) techniques work in single-mounted mode and need to scan inspected objects one by one. This is time-consuming and not acceptable for large-scale inspection. In this paper, we report a multi-mounted CT method and its first engineering implementation. It consists of a multi-mounted scanning geometry and a corresponding algebraic iterative reconstruction algorithm. This approach permits CT rotation scanning of multiple objects simultaneously without increased penetration thickness or signal crosstalk. Compared with conventional single-mounted methods, it has the potential to improve imaging efficiency and suppress artifacts from beam hardening and scatter. This work comprises a numerical study of the method and its experimental verification using a dataset measured with a developed multi-mounted X-ray CT prototype system. We believe that this technique is of particular interest for pushing the engineering applications of X-ray CT.

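    The algebraic iterative reconstruction mentioned above can be illustrated with the classic Kaczmarz/ART update, sweeping over the ray equations A x = b; the projection matrix below is a random stand-in for a real scanner geometry:

        import numpy as np

        # Kaczmarz/ART: cycle through the ray equations a_i . x = b_i and project
        # the current image estimate onto each hyperplane in turn.
        rng = np.random.default_rng(0)
        A = rng.random((60, 16))            # 60 "rays", 16 "pixels" (toy sizes)
        x_true = rng.random(16)
        b = A @ x_true                      # noiseless synthetic sinogram
        x = np.zeros(16)
        for sweep in range(50):
            for a_i, b_i in zip(A, b):
                x += (b_i - a_i @ x) / (a_i @ a_i) * a_i
        print(np.linalg.norm(x - x_true))   # small residual after a few sweeps
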
  13. Optimal Weighting of Multi-Spacecraft Data to Estimate Gradients of Physical Fields

    Science.gov (United States)

    Chanteur, G. M.; Le Contel, O.; Sahraoui, F.; Retino, A.; Mirioni, L.

    2016-12-01

    Multi-spacecraft missions like the ESA mission CLUSTER and the NASA mission MMS are essential to improve our understanding of physical processes in space plasmas. Several methods were designed in the 1990s, during the preparation phase of the CLUSTER mission, to estimate gradients of physical fields from simultaneous multi-point measurements [1, 2]. Both CLUSTER and MMS involve four spacecraft with identical full scientific payloads including various sensors of electromagnetic fields and different types of particle detectors. In the standard methods described in [1, 2], which are presently in use, data from the four spacecraft have identical weights and the estimated gradients are most reliable when the tetrahedron formed by the four spacecraft is regular. There are three types of errors affecting the estimated gradients (see chapter 14 in [1]): i) truncation errors are due to local non-linearity of spatial variations, ii) physical errors are due to instruments, and iii) geometrical errors are due to uncertainties in the positions of the spacecraft. An assessment of truncation errors for a given observation requires a theoretical model of the measured field. Instrumental errors can easily be taken into account for a given geometry of the cluster but are usually less than the geometrical errors, which diverge quite fast when the tetrahedron flattens, a circumstance occurring twice per orbit of the cluster. Hence reliable gradients can be estimated only on part of the orbit. Reciprocal vectors of the tetrahedron were presented in chapter 4 of [1]; they have the advantage over other methods of treating the four spacecraft symmetrically and of allowing a theoretical analysis of the errors (see chapters 4 of [1] and 4 of [2]). We will present Generalized Reciprocal Vectors for weighted data and an optimization procedure to improve the reliability of the estimated gradients when the tetrahedron is not regular. A brief example using CLUSTER or MMS data will be given. This approach

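    The reciprocal-vector construction referred to above admits a compact numerical illustration: for spacecraft positions r_a, the reciprocal vectors k_a satisfy sum_a k_a = 0, and the estimator grad f ~ sum_a f_a k_a is exact for a linear field. A sketch with synthetic positions (not flight data):

        import numpy as np

        # Reciprocal vectors of a spacecraft tetrahedron:
        # k_a = (r_bc x r_bd) / (r_ba . (r_bc x r_bd)), with r_bc = r_c - r_b.
        rng = np.random.default_rng(1)
        r = rng.random((4, 3))                      # synthetic spacecraft positions

        def reciprocal_vectors(r):
            k = np.empty((4, 3))
            for a in range(4):
                b, c, d = [i for i in range(4) if i != a]
                n = np.cross(r[c] - r[b], r[d] - r[b])   # normal to opposite face
                k[a] = n / np.dot(r[a] - r[b], n)
            return k

        g_true = np.array([1.0, -2.0, 0.5])          # gradient of a linear test field
        f = r @ g_true + 3.0                         # the four point measurements
        k = reciprocal_vectors(r)
        print(k.sum(axis=0))                         # ~0: property sum_a k_a = 0
        print(f @ k, g_true)                         # estimator recovers g_true
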
  14. State-Transition Structures in Physics and in Computation

    Science.gov (United States)

    Petri, C. A.

    1982-12-01

    In order to establish close connections between physical and computational processes, it is assumed that the concepts of “state” and of “transition” are acceptable both to physicists and to computer scientists, at least in an informal way. The aim of this paper is to propose formal definitions of state and transition elements on the basis of very low level physical concepts in such a way that (1) all physically possible computations can be described as embedded in physical processes; (2) the computational aspects of physical processes can be described on a well-defined level of abstraction; (3) the gulf between the continuous models of physics and the discrete models of computer science can be bridged by simple mathematical constructs which may be given a physical interpretation; (4) a combinatorial, nonstatistical definition of “information” can be given on low levels of abstraction which may serve as a basis to derive higher-level concepts of information, e.g., by a statistical or probabilistic approach. Conceivable practical consequences are discussed.

  15. Abstraction/Representation Theory for heterotic physical computing.

    Science.gov (United States)

    Horsman, D C

    2015-07-28

    We give a rigorous framework for the interaction of physical computing devices with abstract computation. Device and program are mediated by the non-logical representation relation; we give the conditions under which representation and device theory give rise to commuting diagrams between logical and physical domains, and the conditions for computation to occur. We give the interface of this new framework with currently existing formal methods, showing in particular its close relationship to refinement theory, and the implications for questions of meaning and reference in theoretical computer science. The case of hybrid computing is considered in detail, addressing in particular the example of an Internet-mediated social machine, and the abstraction/representation framework used to provide a formal distinction between heterotic and hybrid computing. This forms the basis for future use of the framework in formal treatments of non-standard physical computers. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  16. Physics Mining of Multi-Source Data Sets

    Science.gov (United States)

    Helly, John; Karimabadi, Homa; Sipes, Tamara

    2012-01-01

    Powerful new parallel data mining algorithms can produce diagnostic and prognostic numerical models and analyses from observational data. These techniques yield higher-resolution measures than ever before of environmental parameters by fusing synoptic imagery and time-series measurements. These techniques are general and relevant to observational data, including raster, vector, and scalar, and can be applied in all Earth- and environmental science domains. Because they can be highly automated and are parallel, they scale to large spatial domains and are well suited to change and gap detection. This makes it possible to analyze spatial and temporal gaps in information, and facilitates within-mission replanning to optimize the allocation of observational resources. The basis of the innovation is the extension of a recently developed set of algorithms packaged into MineTool to multi-variate time-series data. MineTool is unique in that it automates the various steps of the data mining process, thus making it amenable to autonomous analysis of large data sets. Unlike techniques such as Artificial Neural Nets, which yield a blackbox solution, MineTool's outcome is always an analytical model in parametric form that expresses the output in terms of the input variables. This has the advantage that the derived equation can then be used to gain insight into the physical relevance and relative importance of the parameters and coefficients in the model. This is referred to as physics-mining of data. The capabilities of MineTool are extended to include both supervised and unsupervised algorithms, handle multi-type data sets, and parallelize it.

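    The contrast drawn above with black-box models can be illustrated with the simplest "analytical model in parametric form": a least-squares fit of an explicit expression whose coefficients can then be inspected for physical relevance. A sketch on synthetic data (the basis functions are a made-up choice, not MineTool's):

        import numpy as np

        # Fit an explicit parametric model y = c0 + c1*x1 + c2*x2 + c3*x1*x2 by
        # least squares; the coefficients remain available for inspection.
        rng = np.random.default_rng(2)
        x1, x2 = rng.random(200), rng.random(200)
        y = 0.5 + 2.0*x1 - 1.0*x2 + 3.0*x1*x2 + 0.01*rng.standard_normal(200)
        basis = np.column_stack([np.ones_like(x1), x1, x2, x1*x2])
        coef, *_ = np.linalg.lstsq(basis, y, rcond=None)
        print(coef)   # ~ [0.5, 2.0, -1.0, 3.0]
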
  17. Multi-Disciplinary System Reliability Analysis

    Science.gov (United States)

    Mahadevan, Sankaran; Han, Song

    1997-01-01

    The objective of this study is to develop a new methodology for estimating the reliability of engineering systems that encompass multiple disciplines. The methodology is formulated in the context of the NESSUS probabilistic structural analysis code developed under the leadership of NASA Lewis Research Center. The NESSUS code has been successfully applied to the reliability estimation of a variety of structural engineering systems. This study examines whether the features of NESSUS could be used to investigate the reliability of systems in other disciplines such as heat transfer, fluid mechanics, electrical circuits etc., without considerable programming effort specific to each discipline. In this study, the mechanical equivalence between system behavior models in different disciplines are investigated to achieve this objective. A new methodology is presented for the analysis of heat transfer, fluid flow, and electrical circuit problems using the structural analysis routines within NESSUS, by utilizing the equivalence between the computational quantities in different disciplines. This technique is integrated with the fast probability integration and system reliability techniques within the NESSUS code, to successfully compute the system reliability of multi-disciplinary systems. Traditional as well as progressive failure analysis methods for system reliability estimation are demonstrated, through a numerical example of a heat exchanger system involving failure modes in structural, heat transfer and fluid flow disciplines.

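    The system-reliability idea above (several disciplinary limit states sharing the same random inputs) can be sketched with crude Monte Carlo standing in for the fast probability integration used in NESSUS; every distribution and limit below is invented for illustration:

        import numpy as np

        # Crude Monte Carlo system reliability: a "structural" and a "thermal"
        # limit state share random inputs; the series system fails if either
        # limit state is violated. All distributions/limits are illustrative.
        rng = np.random.default_rng(3)
        n = 1_000_000
        load = rng.normal(100.0, 15.0, n)            # shared load variable
        strength = rng.normal(160.0, 10.0, n)        # structural capacity
        h = rng.normal(50.0, 5.0, n)                 # heat-transfer coefficient
        fail_struct = load > strength                # structural limit state
        fail_thermal = load / h > 2.6                # temperature-rise limit state
        p_sys = np.mean(fail_struct | fail_thermal)  # series-system failure prob.
        print(p_sys)
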
  18. Multi-fidelity Gaussian process regression for computer experiments

    International Nuclear Information System (INIS)

    Le-Gratiet, Loic

    2013-01-01

    This work is on Gaussian-process based approximation of a code which can be run at different levels of accuracy. The goal is to improve the predictions of a surrogate model of a complex computer code using fast approximations of it. A new formulation of a co-kriging based method has been proposed. In particular this formulation allows for fast implementation and for closed-form expressions for the predictive mean and variance for universal co-kriging in the multi-fidelity framework, which is a breakthrough as it really allows for the practical application of such a method in real cases. Furthermore, fast cross validation, sequential experimental design and sensitivity analysis methods have been extended to the multi-fidelity co-kriging framework. This thesis also deals with a conjecture about the dependence of the learning curve (i.e. the decay rate of the mean square error) with respect to the smoothness of the underlying function. A proof in a fairly general situation (which includes the classical models of Gaussian-process based meta-models with stationary covariance functions) has been obtained while the previous proofs hold only for degenerate kernels (i.e. when the process is in fact finite-dimensional). This result allows for addressing rigorously practical questions such as the optimal allocation of the budget between different levels of codes in the multi-fidelity framework. (author) [fr]

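    The co-kriging formulation described above is commonly built on the AR(1) relation y_hi(x) ~ rho * y_lo(x) + delta(x). A deliberately simplified sketch of that idea, using radial-basis interpolation of the discrepancy in place of a full Gaussian-process treatment, and toy 1-D functions:

        import numpy as np

        # Two-fidelity surrogate in the AR(1) spirit: fit rho at the few
        # expensive points, then interpolate the discrepancy delta.
        def y_lo(x):  return np.sin(8*x)                       # cheap model
        def y_hi(x):  return 1.8*np.sin(8*x) + (x - 0.5)**2    # expensive model

        x_hi = np.linspace(0.0, 1.0, 6)                  # few expensive runs
        rho = (y_lo(x_hi) @ y_hi(x_hi)) / (y_lo(x_hi) @ y_lo(x_hi))
        d = y_hi(x_hi) - rho * y_lo(x_hi)                # discrepancy samples

        def k(a, b, ell=0.25):                           # Gaussian kernel
            return np.exp(-(a[:, None] - b[None, :])**2 / (2*ell**2))

        w = np.linalg.solve(k(x_hi, x_hi) + 1e-10*np.eye(6), d)
        x = np.linspace(0.0, 1.0, 5)
        pred = rho * y_lo(x) + k(x, x_hi) @ w            # multi-fidelity prediction
        print(np.abs(pred - y_hi(x)).max())              # modest error from 6 runs
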
  19. Pipelining Computational Stages of the Tomographic Reconstructor for Multi-Object Adaptive Optics on a Multi-GPU System

    KAUST Repository

    Charara, Ali; Ltaief, Hatem; Gratadour, Damien; Keyes, David E.; Sevin, Arnaud; Abdelfattah, Ahmad; Gendron, Eric; Morel, Carine; Vidal, Fabrice

    2014-01-01

    called MOSAIC has been proposed to perform multi-object spectroscopy using the Multi-Object Adaptive Optics (MOAO) technique. The core implementation of the simulation lies in the intensive computation of a tomographic reconstructor (TR), which is used

  20. The Simulation Computer Based Learning (SCBL) for Short Circuit Multi Machine Power System Analysis

    Science.gov (United States)

    Rahmaniar; Putri, Maharani

    2018-03-01

    Strengthening the competitiveness of human resources is a task of the college as a provider of formal higher education. The Electrical Engineering Program of UNPAB (Prodi TE UNPAB), as a department that manages the field of electrical engineering expertise, has a very important part in preparing the human resources (HR) required of its graduates, who are expected to be able to compete globally, especially in connection with the implementation of the ASEAN Economic Community (AEC), which demands graduates with competence and competitive quality. The preparation of competitive HR is carried out with various strategies contained in the Seven Higher Education Standards, one part of which is the implementation of the teaching and learning process in electrical system analysis with short circuit analysis (SCA). This core course forms the basis for the competencies of other subjects in later semesters. A Computer Based Learning (CBL) model was developed for learning the analysis of multi-machine short-circuit faults, which includes: (a) one-phase short-circuit faults, (b) two-phase short-circuit faults, (c) ground short-circuit faults, and (d) one-phase-to-ground short-circuit faults. The CBL learning model for the Electrical System Analysis course gives students room to be more active in solving complex problems, providing flexibility in how students actively work through short-circuit analysis and encouraging their active participation in learning (student-centered learning).

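    The fault types enumerated above correspond to standard symmetrical-component formulas; for example, a bolted single line-to-ground fault draws I_f = 3E/(Z1+Z2+Z0+3Zf). A numeric check with illustrative per-unit impedances (not values from the course):

        # Textbook symmetrical-component fault currents at a bus, in per-unit.
        E = 1.0                             # prefault voltage, pu
        Z1, Z2, Z0 = 0.12j, 0.12j, 0.30j    # positive/negative/zero sequence
        Zf = 0.0                            # bolted fault
        I_3ph = E / Z1                                # three-phase (balanced) fault
        I_slg = 3 * E / (Z1 + Z2 + Z0 + 3 * Zf)       # single line-to-ground fault
        print(abs(I_3ph), abs(I_slg))                 # ~8.33 pu vs ~5.56 pu here
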
  1. A language for data-parallel and task parallel programming dedicated to multi-SIMD computers. Contributions to hydrodynamic simulation with lattice gases

    International Nuclear Information System (INIS)

    Pic, Marc Michel

    1995-01-01

    Parallel programming covers task-parallelism and data-parallelism. Many problems need both parallelisms. Multi-SIMD computers allow hierarchical approach of these parallelisms. The T++ language, based on C++, is dedicated to exploit Multi-SIMD computers using a programming paradigm which is an extension of array-programming to tasks managing. Our language introduced array of independent tasks to achieve separately (MIMD), on subsets of processors of identical behaviour (SIMD), in order to translate the hierarchical inclusion of data-parallelism in task-parallelism. To manipulate in a symmetrical way tasks and data we propose meta-operations which have the same behaviour on tasks arrays and on data arrays. We explain how to implement this language on our parallel computer SYMPHONIE in order to profit by the locally-shared memory, by the hardware virtualization, and by the multiplicity of communications networks. We analyse simultaneously a typical application of such architecture. Finite elements scheme for Fluid mechanic needs powerful parallel computers and requires large floating points abilities. Lattice gases is an alternative to such simulations. Boolean lattice bases are simple, stable, modular, need to floating point computation, but include numerical noise. Boltzmann lattice gases present large precision of computation, but needs floating points and are only locally stable. We propose a new scheme, called multi-bit, who keeps the advantages of each boolean model to which it is applied, with large numerical precision and reduced noise. Experiments on viscosity, physical behaviour, noise reduction and spurious invariants are shown and implementation techniques for parallel Multi-SIMD computers detailed. (author) [fr

  2. Coupled multi-physics simulation frameworks for reactor simulation: A bottom-up approach

    International Nuclear Information System (INIS)

    Tautges, Timothy J.; Caceres, Alvaro; Jain, Rajeev; Kim, Hong-Jun; Kraftcheck, Jason A.; Smith, Brandon M.

    2011-01-01

    A 'bottom-up' approach to multi-physics frameworks is described, where first common interfaces to simulation data are developed, then existing physics modules are adapted to communicate through those interfaces. Physics modules read and write data through those common interfaces, which also provide access to common simulation services like parallel IO, mesh partitioning, etc. Multi-physics codes are assembled as a combination of physics modules, services, interface implementations, and driver code which coordinates calling these various pieces. Examples of various physics modules and services connected to this framework are given. (author)

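    The bottom-up pattern described above can be sketched in a few lines: physics modules are programmed only against a common data interface, and a small driver coordinates the coupling. All class names and the toy feedback below are hypothetical, not the paper's actual interfaces:

        # Minimal sketch of the "bottom-up" pattern: modules talk only to a
        # common field interface; a small driver coordinates the coupling.
        class FieldStore:
            """Common interface to simulation data (stand-in for a mesh database)."""
            def __init__(self):
                self._fields = {}
            def write(self, name, value):
                self._fields[name] = value
            def read(self, name):
                return self._fields[name]

        class HeatModule:
            def solve(self, store):    # reads power, writes temperature
                store.write("T", 300.0 + 0.1 * store.read("power"))

        class NeutronicsModule:
            def solve(self, store):    # reads temperature, writes power (toy feedback)
                store.write("power", 1000.0 - 0.5 * (store.read("T") - 300.0))

        store = FieldStore()
        store.write("power", 1000.0)
        for _ in range(20):            # driver: fixed-point (Picard) iteration
            HeatModule().solve(store)
            NeutronicsModule().solve(store)
        print(store.read("T"), store.read("power"))   # converged coupled state
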
  3. Computational physics problem solving with Python

    CERN Document Server

    Landau, Rubin H; Bordeianu, Cristian C

    2015-01-01

    The use of computation and simulation has become an essential part of the scientific process. Being able to transform a theory into an algorithm requires significant theoretical insight, detailed physical and mathematical understanding, and a working level of competency in programming. This upper-division text provides an unusually broad survey of the topics of modern computational physics from a multidisciplinary, computational science point of view. Its philosophy is rooted in learning by doing (assisted by many model programs), with new scientific materials as well as with the Python progr

  4. Small Area and Individual Level Predictors of Physical Activity in Urban Communities: A Multi-Level Study in Stoke on Trent, England

    Directory of Open Access Journals (Sweden)

    Hilde Stephansen

    2009-02-01

    Full Text Available Reducing population physical inactivity has been declared a global public health priority. We report a detailed multi-level analysis of small area indices and individual factors as correlates of physical activity in deprived urban areas. Multi-level regression analysis was used to investigate environmental and individual correlates of physical activity. Nine individual factors were retained in the overall model, two related to individual intentions or beliefs, three to access to shops, work or fast food outlets and two to weather; age and gender being the other two. Four area level indices related to: traffic, road casualties, criminal damage and access to green space were important in explaining variation in physical activity.

  5. Multi-dimensional Code Development for Safety Analysis of LMR

    International Nuclear Information System (INIS)

    Ha, K. S.; Jeong, H. Y.; Kwon, Y. M.; Lee, Y. B.

    2006-08-01

    A liquid metal reactor loaded with metallic fuel has inherent safety mechanisms due to several negative reactivity feedbacks. Although this feature was demonstrated through experiments in EBR-II, no computer program had analyzed it exactly because of the complexity of the reactivity feedback mechanism. A multi-dimensional detailed program was developed through the International Nuclear Energy Research Initiative (INERI) from 2003 to 2005. This report covers the numerical coupling of the multi-dimensional program with the SSC-K code, which is used for the safety analysis of liquid metal reactors at KAERI. The coupled code has been verified by comparing its results with those of ANL's SAS-SASSYS code for the UTOP, ULOF, and ULOHS events applied to the safety analysis of KALIMER-150

  6. High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science

    International Nuclear Information System (INIS)

    Pop, Florin

    2014-01-01

    Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios like subatomic dimensions, high energies, and low absolute temperatures are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high-accuracy analysis, experimental validation, and visualization. High performance computing support offers the possibility of making large-scale simulations in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP) analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.

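    The Monte Carlo workhorse named above rests on the 1/sqrt(N) decay of the statistical error of a sample-mean estimator. A sketch with a toy integrand whose exact value (2 - 5/e ~ 0.16060) is known:

        import numpy as np

        # Monte Carlo estimate of an integral with its 1/sqrt(N) statistical
        # error, the basic mechanism behind event-level HEP simulation.
        rng = np.random.default_rng(4)
        for n in (10**3, 10**5, 10**7):
            x = rng.random(n)
            f = np.exp(-x) * x**2            # toy "differential cross section"
            est, err = f.mean(), f.std(ddof=1) / np.sqrt(n)
            print(n, est, "+/-", err)        # exact value: 2 - 5/e ~ 0.16060
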
  7. Cross-platform validation and analysis environment for particle physics

    Science.gov (United States)

    Chekanov, S. V.; Pogrebnyak, I.; Wilbern, D.

    2017-11-01

    A multi-platform validation and analysis framework for public Monte Carlo simulation for high-energy particle collisions is discussed. The front-end of this framework uses the Python programming language, while the back-end is written in Java, which provides a multi-platform environment that can be run from a web browser and can easily be deployed at the grid sites. The analysis package includes all major software tools used in high-energy physics, such as Lorentz vectors, jet algorithms, histogram packages, graphic canvases, and tools for providing data access. This multi-platform software suite, designed to minimize OS-specific maintenance and deployment time, is used for online validation of Monte Carlo event samples through a web interface.

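    Of the tools listed above, the Lorentz vector is the most elementary; a minimal version computing an invariant mass might look as follows (a sketch, not the package's actual API):

        import math

        # Minimal Lorentz four-vector with the invariant mass, the kind of
        # basic tool such analysis packages provide.
        class LorentzVector:
            def __init__(self, px, py, pz, e):
                self.px, self.py, self.pz, self.e = px, py, pz, e
            def __add__(self, o):
                return LorentzVector(self.px + o.px, self.py + o.py,
                                     self.pz + o.pz, self.e + o.e)
            def mass(self):
                m2 = self.e**2 - self.px**2 - self.py**2 - self.pz**2
                return math.sqrt(max(m2, 0.0))

        # Two back-to-back 45.6 GeV (massless-approximation) muons give a
        # Z-like invariant mass of ~91.2 GeV.
        mu1 = LorentzVector(45.6, 0.0, 0.0, 45.6)
        mu2 = LorentzVector(-45.6, 0.0, 0.0, 45.6)
        print((mu1 + mu2).mass())
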
  8. Multi-tissue computational modeling analyzes pathophysiology of type 2 diabetes in MKR mice.

    Directory of Open Access Journals (Sweden)

    Amit Kumar

    Full Text Available Computational models using metabolic reconstructions for in silico simulation of metabolic disorders such as type 2 diabetes mellitus (T2DM) can provide a better understanding of disease pathophysiology and avoid high experimentation costs. There is a limited amount of computational work, using metabolic reconstructions, performed in this field for the better understanding of T2DM. In this study, a new algorithm for generating tissue-specific metabolic models is presented, along with the resulting multi-confidence level (MCL) multi-tissue model. The effect of T2DM on liver, muscle, and fat in MKR mice was first studied by microarray analysis, and subsequently the changes in gene expression of frank T2DM MKR mice versus healthy mice were applied to the multi-tissue model to test the effect. Using the first multi-tissue genome-scale model of all metabolic pathways in T2DM, we found that the branched-chain amino acid degradation and fatty acid oxidation pathways are downregulated in T2DM MKR mice. Microarray data showed low expression of genes in MKR mice versus healthy mice in the branched-chain amino acid degradation and fatty-acid oxidation pathways. In addition, flux balance analysis using the MCL multi-tissue model showed that the degradation pathways of branched-chain amino acids and fatty acid oxidation were significantly downregulated in MKR mice versus healthy mice. Validation of the model was performed using data derived from the literature regarding T2DM. Microarray data were used in conjunction with the model to predict fluxes of various other metabolic pathways in the T2DM mouse model, and alterations in a number of pathways were detected. The Type 2 Diabetes MCL multi-tissue model may explain the high level of branched-chain amino acids and free fatty acids in the plasma of Type 2 Diabetic subjects from a metabolic fluxes perspective.

  9. Review on study of multi-physics in environment engineering

    International Nuclear Information System (INIS)

    Liu Shanli; Zhao Jian; Sheng Jinchang

    2006-01-01

    This paper analyzes some multi-field coupling problems between seepage mechanics and other physical and chemical processes (such as the temperature field, stress field, solute transport, and chemical action) in environment engineering. It explains the research theory of multi-field coupling, summarizes domestic and international research on models of multi-field problems, and finally looks ahead to future research directions in environment engineering. (authors)

  10. Computational-physics program of the National MFE Computer Center

    International Nuclear Information System (INIS)

    Mirin, A.A.

    1982-02-01

    The computational physics group is involved in several areas of fusion research. One main area is the application of multidimensional Fokker-Planck, transport, and combined Fokker-Planck/transport codes to both toroidal and mirror devices. Another major area is the investigation of linear and nonlinear resistive magnetohydrodynamics in two and three dimensions, with applications to all types of fusion devices. The MHD work is often coupled with the task of numerically generating equilibria which model experimental devices. In addition to these computational physics studies, investigations of more efficient numerical algorithms are being carried out.

  11. Seventeenth Workshop on Computer Simulation Studies in Condensed-Matter Physics

    CERN Document Server

    Landau, David P; Schütler, Heinz-Bernd; Computer Simulation Studies in Condensed-Matter Physics XVI

    2006-01-01

    This status report features the most recent developments in the field, spanning a wide range of topical areas in the computer simulation of condensed matter/materials physics. Both established and new topics are included, ranging from the statistical mechanics of classical magnetic spin models to electronic structure calculations, quantum simulations, and simulations of soft condensed matter. The book presents new physical results as well as novel methods of simulation and data analysis. Highlights of this volume include various aspects of non-equilibrium statistical mechanics, studies of properties of real materials using both classical model simulations and electronic structure calculations, and the use of computer simulations in teaching.

  12. Development of multi-representation learning tools for the course of fundamental physics

    Science.gov (United States)

    Huda, C.; Siswanto, J.; Kurniawan, A. F.; Nuroso, H.

    2016-08-01

    This research is aimed at designing a learning tool based on multi-representation that can improve problem-solving skills. It used the research and development approach and was applied in the Fundamental Physics course at Universitas PGRI Semarang in the 2014/2015 academic year. Results show a normalized gain of 0.68, which indicates a medium improvement. The t-test gives a calculated t of 27.35 against a table t of 2.020 for df = 25 and α = 0.05. Mean scores increase from 23.45 on the pre-test to 76.15 on the post-test. Application of the multi-representation learning tools significantly improves students' grades.
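
    The reported gain is consistent with Hake's normalized gain formula, g = (post − pre) / (max − pre); the quick check below assumes the reported means are on a 0-100 scale.

```python
# Hake's normalized gain: g = (post - pre) / (max_score - pre).
# Assumes the reported means are on a 0-100 scale.
pre, post, max_score = 23.45, 76.15, 100.0
g = (post - pre) / (max_score - pre)
print(f"normalized gain = {g:.2f}")  # -> 0.69, close to the reported 0.68
```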

  13. Verification of Electromagnetic Physics Models for Parallel Computing Architectures in the GeantV Project

    Energy Technology Data Exchange (ETDEWEB)

    Amadio, G.; et al.

    2017-11-22

    An intensive R&D and programming effort is required to meet the new challenges posed by future experimental high-energy particle physics (HEP) programs. The GeantV project aims to narrow the gap between the performance of the existing HEP detector simulation software and the ideal performance achievable, exploiting the latest advances in computing technology. The project has developed a particle detector simulation prototype capable of transporting particles in parallel through complex geometries, exploiting instruction-level microparallelism (SIMD and SIMT), task-level parallelism (multithreading), and high-level parallelism (MPI), leveraging both multi-core and many-core opportunities. We present preliminary verification results concerning the electromagnetic (EM) physics models developed for parallel computing architectures within the GeantV project. In order to exploit the potential of vectorization and accelerators and to make the physics models effectively parallelizable, advanced sampling techniques have been implemented and tested. In this paper we introduce a set of automated statistical tests used to verify the vectorized models by checking their consistency with the corresponding Geant4 models and to validate them against experimental data.
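
    A common building block for such automated consistency checks is a two-sample chi-square test on binned distributions. The sketch below uses the standard formula for comparing two histograms with different total event counts; the synthetic inputs stand in for vectorized-model and reference-model outputs and are not GeantV data.

```python
import numpy as np
from scipy.stats import chi2

def hist_chi2(counts_a, counts_b):
    """Chi-square compatibility test between two unweighted histograms
    with possibly different total event counts (standard formula)."""
    a = np.asarray(counts_a, dtype=float)
    b = np.asarray(counts_b, dtype=float)
    na, nb = a.sum(), b.sum()
    mask = (a + b) > 0                       # skip bins empty in both
    a, b = a[mask], b[mask]
    stat = np.sum((np.sqrt(nb / na) * a - np.sqrt(na / nb) * b) ** 2 / (a + b))
    ndf = int(mask.sum()) - 1
    return stat, chi2.sf(stat, ndf)

# Synthetic stand-ins for the two samples being compared.
rng = np.random.default_rng(1)
ha, _ = np.histogram(rng.exponential(1.0, 50_000), bins=40, range=(0, 8))
hb, _ = np.histogram(rng.exponential(1.0, 60_000), bins=40, range=(0, 8))
stat, p = hist_chi2(ha, hb)
print(f"chi2 = {stat:.1f}, p-value = {p:.3f}")
```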

  14. Simulation of multi-pulse coaxial helicity injection in the Sustained Spheromak Physics Experiment

    Science.gov (United States)

    O'Bryan, J. B.; Romero-Talamás, C. A.; Woodruff, S.

    2018-03-01

    Nonlinear, numerical computation with the NIMROD code is used to explore magnetic self-organization during multi-pulse coaxial helicity injection in the Sustained Spheromak Physics eXperiment. We describe multiple distinct phases of spheromak evolution, starting from vacuum magnetic fields and the formation of the initial magnetic flux bubble through multiple refluxing pulses and the eventual onset of the column mode instability. Experimental and computational magnetic diagnostics agree on the onset of the column mode instability, which first occurs during the second refluxing pulse of the simulated discharge. Our computations also reproduce the injector voltage traces, despite only specifying the injector current and not explicitly modeling the external capacitor bank circuit. The computations demonstrate that global magnetic evolution is fairly robust to different transport models and, therefore, that a single fluid-temperature model is sufficient for a broader, qualitative assessment of spheromak performance. Although discharges with similar traces of normalized injector current produce similar global spheromak evolution, details of the current distribution during the column mode instability impact the relative degree of poloidal flux amplification and magnetic helicity content.

  15. Reactor physics computations

    International Nuclear Information System (INIS)

    Shapiro, A.

    1977-01-01

    Those reactor-core calculations which provide the effective multiplication factor (or eigenvalue) and the stationary (or fundamental mode) neutron-flux distribution at selected times during the lifetime of the core are considered. The multiplication factor is required to establish the nuclear composition and configuration which satisfy criticality and control requirements. The steady-state flux distribution must be known to calculate reaction rates and power distributions which are needed for the thermal, mechanical and shielding design of the reactor, as well as for evaluating refueling requirements. The calculational methods and techniques used for evaluating the nuclear design information vary with the type of reactor and with the preferences and prejudices of the reactor-physics group responsible for the calculation. Additionally, new methods and techniques are continually being developed and made operational. This results in a rather large conglomeration of methods and computer codes which are available for reactor analysis. The author provides the basic calculational framework and discusses the more prominent techniques which have evolved. (Auth.)
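
    The effective multiplication factor described above is the dominant eigenvalue of the neutron balance equation and is classically extracted by power (source) iteration. Below is a minimal sketch for a one-group, one-dimensional slab in diffusion theory; the mesh and cross sections are illustrative numbers, not any particular design.

```python
import numpy as np

# One-group slab-reactor eigenvalue problem in diffusion theory:
#   (-D d2/dx2 + Sigma_a) phi = (1/k) nuSigma_f phi,  phi = 0 at the edges.
n, width = 50, 100.0                      # mesh cells, slab width (cm)
h = width / n
D, sig_a, nu_sig_f = 1.0, 0.012, 0.014    # illustrative cross sections

A = np.zeros((n, n))                      # loss operator: diffusion + absorption
for i in range(n):
    A[i, i] = 2.0 * D / h**2 + sig_a
    if i > 0:
        A[i, i - 1] = -D / h**2
    if i < n - 1:
        A[i, i + 1] = -D / h**2

phi, k = np.ones(n), 1.0
for _ in range(200):                      # power iteration on the fission source
    src = nu_sig_f * phi
    phi_new = np.linalg.solve(A, src / k)
    k *= (nu_sig_f * phi_new).sum() / src.sum()
    phi = phi_new / np.linalg.norm(phi_new)

print(f"k-effective ~ {k:.4f}")           # analytic value ~1.08 for these numbers
```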

  16. Multi Scale Finite Element Analyses By Using SEM-EBSD Crystallographic Modeling and Parallel Computing

    International Nuclear Information System (INIS)

    Nakamachi, Eiji

    2005-01-01

    A crystallographic homogenization procedure is introduced into the conventional static-explicit and dynamic-explicit finite element formulations to develop a multi-scale (double-scale) analysis code that predicts plastic-strain-induced texture evolution, yield loci, and the formability of sheet metal. The double-scale structure consists of a crystal aggregation (the micro-structure) and a macroscopic elastic-plastic continuum. First, crystal morphologies are measured using SEM-EBSD apparatus, and a unit cell of the micro-structure is defined that satisfies the periodicity condition at the real scale of the polycrystal. Next, this crystallographic homogenization FE code is applied to 3N pure-iron and 'Benchmark' aluminum A6022 polycrystal sheets. It reveals that the initial crystal orientation distribution (the texture) strongly affects the plastic-strain-induced texture, the anisotropic hardening evolution, and the sheet deformation. Since the multi-scale finite element analysis requires a large computation time, a parallel computing technique using a PC cluster is developed for quick calculation. In this parallelization scheme, a dynamic workload-balancing technique is introduced for quick and efficient calculations.

  17. Physical Layer Multi-Core Prototyping A Dataflow-Based Approach for LTE eNodeB

    CERN Document Server

    Pelcat, Maxime; Piat, Jonathan; Nezan, Jean-François

    2013-01-01

    Base stations developed according to the 3GPP Long Term Evolution (LTE) standard require unprecedented processing power. 3GPP LTE enables data rates beyond hundreds of Mbits/s by using advanced technologies, necessitating a highly complex LTE physical layer. The operating power of base stations is a significant cost for operators, and is currently optimized using state-of-the-art hardware solutions, such as heterogeneous distributed systems. The traditional system design method of porting algorithms to heterogeneous distributed systems based on test-and-refine methods is a manual, thus time-expensive, task.   Physical Layer Multi-Core Prototyping: A Dataflow-Based Approach for LTE eNodeB provides a clear introduction to the 3GPP LTE physical layer and to dataflow-based prototyping and programming. The difficulties in the process of 3GPP LTE physical layer porting are outlined, with particular focus on automatic partitioning and scheduling, load balancing and computation latency reduction, specifically in sys...

  18. CASE SERIES Multi-detector computer tomography venography ...

    African Journals Online (AJOL)

    Aim: To evaluate the role of multi-detector computer tomography venography (MDCTV), compared with conventional venography, as a diagnostic tool in the management of patients with ... Axial venous images were reconstructed in the curved coronal plane, with particular reference to the course of the common and external iliac veins through the pelvis.

  19. On The Computational Capabilities of Physical Systems. Part 2; Relationship With Conventional Computer Science

    Science.gov (United States)

    Wolpert, David H.; Koga, Dennis (Technical Monitor)

    2000-01-01

    In the first of this pair of papers, it was proven that there cannot be a physical computer to which one can properly pose any and all computational tasks concerning the physical universe. It was then further proven that no physical computer C can correctly carry out all computational tasks that can be posed to C. As a particular example, this result means that there cannot be a physical computer that can, for any physical system external to that computer, take the specification of that external system's state as input and then correctly predict its future state before that future state actually occurs; one cannot build a physical computer that can be assured of correctly "processing information faster than the universe does". These results do not rely on systems that are infinite, and/or non-classical, and/or obey chaotic dynamics. They also hold even if one uses an infinitely fast, infinitely dense computer, with computational powers greater than that of a Turing Machine. This generality is a direct consequence of the fact that a novel definition of computation - "physical computation" - is needed to address the issues considered in these papers, which concern real physical computers. While this novel definition does not fit into the traditional Chomsky hierarchy, the mathematical structure and impossibility results associated with it have parallels in the mathematics of the Chomsky hierarchy. This second paper of the pair presents a preliminary exploration of some of this mathematical structure. Analogues of Chomskian results concerning universal Turing Machines and the Halting theorem are derived, as are results concerning the (im)possibility of certain kinds of error-correcting codes. In addition, an analogue of algorithmic information complexity, "prediction complexity", is elaborated. A task-independent bound is derived on how much the prediction complexity of a computational task can differ for two different reference universal physical computers used to solve that task.

  20. Research on reactor physics analysis method based on Monte Carlo homogenization

    International Nuclear Information System (INIS)

    Ye Zhimin; Zhang Peng

    2014-01-01

    In order to meet the demands of the future nuclear energy market, many new concepts for nuclear energy systems have been put forward. The traditional deterministic neutronics analysis method has been challenged in two aspects: one is the ability to process generic geometry; the other is the multi-spectrum applicability of multigroup cross section libraries. Due to its strong geometry modeling capability and its use of continuous-energy cross section libraries, the Monte Carlo method has been widely used in reactor physics calculations, and more and more research on the Monte Carlo method has been carried out. Neutronics-thermal hydraulics coupling analysis based on the Monte Carlo method has been realized. However, it still faces the problems of long computation time and slow convergence, which make it inapplicable to reactor core fuel management simulations. Drawing on the deterministic core analysis method, a new two-step core analysis scheme is proposed in this work. First, Monte Carlo simulations are performed for each assembly, and the assembly-homogenized multi-group cross sections are tallied at the same time. Second, core diffusion calculations are done with these multigroup cross sections. The new scheme achieves high efficiency while maintaining acceptable precision, so it can be used as an effective tool for the design and analysis of innovative nuclear energy systems. Numerical tests have been done in this work to verify the new scheme. (authors)
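
    The homogenization step of such a two-step scheme amounts to a flux-weighted collapse of fine-group cross sections into few-group constants, sketched below with invented fine-group data standing in for Monte Carlo tallies.

```python
import numpy as np

# Flux-weighted collapse of fine-group cross sections into few groups:
#   Sigma_G = sum_g(phi_g * Sigma_g) / sum_g(phi_g) over fine groups g in G.
# The fine-group flux and cross sections here are illustrative numbers.
fine_flux = np.array([0.8, 1.2, 2.5, 3.1, 1.9, 0.6])   # tallied spectrum
fine_sigma = np.array([1.1, 1.3, 1.8, 2.4, 3.0, 4.2])  # fine-group Sigma
groups = [slice(0, 3), slice(3, 6)]                     # fast / thermal

collapsed = [
    (fine_flux[g] * fine_sigma[g]).sum() / fine_flux[g].sum()
    for g in groups
]
print("two-group cross sections:", collapsed)
```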

  1. Performance analysis of the FDTD method applied to holographic volume gratings: Multi-core CPU versus GPU computing

    Science.gov (United States)

    Francés, J.; Bleda, S.; Neipp, C.; Márquez, A.; Pascual, I.; Beléndez, A.

    2013-03-01

    The finite-difference time-domain (FDTD) method allows electromagnetic field distribution analysis as a function of time and space. The method is applied to analyze holographic volume gratings (HVGs) for the near-field distribution at optical wavelengths. Usually, this application requires the simulation of wide areas, which implies more memory and longer processing times. In this work, we propose a specific implementation of the FDTD method including several add-ons for a precise simulation of optical diffractive elements. Values in the near-field region are computed considering the illumination of the grating by means of a plane wave for different angles of incidence, including absorbing boundaries as well. We compare the results obtained by FDTD with those obtained using a matrix method (MM) applied to diffraction gratings. In addition, we have developed two optimized versions of the algorithm, for CPU and GPU, in order to analyze the improvement of the new NVIDIA Fermi GPU architecture over a highly tuned multi-core CPU as a function of simulation size. In particular, the optimized CPU implementation takes advantage of the arithmetic and data-transfer streaming SIMD (single instruction multiple data) extensions (SSE) included explicitly in the code, and of multi-threading by means of OpenMP directives. A good agreement between the results obtained using both the FDTD and MM methods is found, thus validating our methodology. Moreover, the performance of the GPU is compared to the SSE+OpenMP CPU implementation, and it is quantitatively determined that a highly optimized CPU program can be competitive for a wide range of simulation sizes, whereas GPU computing becomes more powerful for large-scale simulations.
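
    The kernel that both the CPU and GPU versions optimize is the leapfrog update of interleaved electric and magnetic fields. A minimal one-dimensional, free-space sketch in normalized units is shown below (vectorized rather than SSE/CUDA); grid size, source position, and pulse shape are arbitrary.

```python
import numpy as np

# 1D free-space FDTD leapfrog in normalized units (Courant number 0.5).
nx, nt, c = 400, 600, 0.5
ez = np.zeros(nx)          # electric field on integer grid points
hy = np.zeros(nx - 1)      # magnetic field staggered half a cell

for t in range(nt):
    hy += c * np.diff(ez)                       # H update from curl of E
    ez[1:-1] += c * np.diff(hy)                 # E update from curl of H
    ez[nx // 4] += np.exp(-((t - 40.0) / 12.0) ** 2)  # soft Gaussian source

print("peak |Ez| after propagation:", np.abs(ez).max())
```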

  2. Does regional disadvantage affect health-related sport and physical activity level? A multi-level analysis of individual behaviour.

    Science.gov (United States)

    Wicker, Pamela; Downward, Paul; Lera-López, Fernando

    2017-11-01

    This study examines the role of regional government quality in health-related participation in sport and physical activity among adults (18-64 years) in 28 European countries. The importance of the analysis rests in the relative autonomy that regional and local governments have over policy decisions connected with sport and physical activity. While existing studies have focussed on economic and infrastructural investment and expenditure, this research investigates the quality of regional governments across 208 regions within 28 European countries. The individual-level data stem from the 2013 Eurobarometer 80.2 (n = 18,675) and were combined with regional-level data from Eurostat. An individual's level of participation in sport and physical activity was measured by three variables reflecting whether an individual's activity level is below, meets, or exceeds the recommendations of the World Health Organization. The results of multi-level analyses reveal that regional government quality has a significant and positive association with individual participation in sport and physical activity at a level meeting or exceeding the guidelines. The impact is much larger than that of regional gross domestic product per capita, indicating that regional disadvantage in terms of political quality is more relevant than being disadvantaged in terms of economic wealth.

  3. DESIGN of MICRO CANTILEVER BEAM for VAPOUR DETECTION USING COMSOL MULTI PHYSICS SOFTWARE

    OpenAIRE

    Sivacoumar R; Parvathy JM; Pratishtha Deep

    2015-01-01

    This paper gives an overview of micro cantilever beams of various shapes and materials for vapour detection. The design, analysis, and simulation of the micro cantilever beam are done for each shape. The simulation is carried out in COMSOL Multiphysics using the structural mechanics and chemical modules. The simulation results for the applied force and the resulting eigenfrequencies are analyzed for the different beam structures. The vapour analysis is done using a flow cell that consists of chemical pill...
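
    The eigenfrequencies reported by such simulations can be sanity-checked against the Euler-Bernoulli estimate for the first bending mode of a rectangular cantilever, f1 = ((β1·L)² / (2π·L²))·√(EI/(ρA)) with β1·L ≈ 1.8751. The silicon properties and beam dimensions below are hypothetical examples, not the paper's designs.

```python
import math

# Euler-Bernoulli first bending mode of a rectangular cantilever:
#   f1 = ((beta1*L)^2 / (2*pi*L^2)) * sqrt(E*I / (rho*A)),  beta1*L = 1.8751
# Silicon properties and beam dimensions are hypothetical examples.
E, rho = 169e9, 2330.0                 # Young's modulus (Pa), density (kg/m3)
L, w, t = 200e-6, 40e-6, 2e-6          # length, width, thickness (m)

I = w * t**3 / 12                      # second moment of area
A = w * t                              # cross-sectional area
f1 = (1.8751**2 / (2 * math.pi * L**2)) * math.sqrt(E * I / (rho * A))
print(f"first eigenfrequency ~ {f1 / 1e3:.1f} kHz")   # ~69 kHz
```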

  4. International Conference on Theoretical and Computational Physics

    CERN Document Server

    2016-01-01

    The International Conference on Theoretical and Computational Physics (TCP 2016) will be held from August 24 to 26, 2016 in Xi'an, China. The conference will cover issues in theoretical and computational physics and is dedicated to creating a venue for exchanging the latest research results and sharing advanced research methods. TCP 2016 will be an important platform for inspiring international and interdisciplinary exchange at the forefront of theoretical and computational physics. The conference will bring together researchers, engineers, technicians, and academicians from all over the world, and we cordially invite you to take this opportunity to join us for academic exchange and to visit the ancient city of Xi'an.

  5. Computational multi-fluid dynamics predictions of critical heat flux in boiling flow

    Energy Technology Data Exchange (ETDEWEB)

    Mimouni, S., E-mail: stephane.mimouni@edf.fr; Baudry, C.; Guingo, M.; Lavieville, J.; Merigoux, N.; Mechitoua, N.

    2016-04-01

    Highlights: • A new mechanistic model dedicated to DNB has been implemented in the Neptune-CFD code. • The model has been validated against 150 tests. • Neptune-CFD is a CFD tool dedicated to boiling flows. - Abstract: Extensive efforts have been made in the last five decades to evaluate the boiling heat transfer coefficient and the critical heat flux in particular. Boiling crisis remains a major limiting phenomenon in the analysis of the operation and safety of both nuclear reactors and conventional thermal power systems. As a consequence, models dedicated to boiling flows have been improved; for example, the Reynolds Stress Transport Model, polydispersion, and a two-phase flow wall law have recently been implemented. In a previous work, we evaluated computational fluid dynamics results against single-phase liquid water tests equipped with a mixing vane and against two-phase boiling cases. The objective of this paper is to propose a new mechanistic model in a computational multi-fluid dynamics tool that captures the wall temperature excursion and the onset of boiling crisis. The critical heat flux is calculated for 150 tests, and the mean relative error between calculations and experimental values is 8.3%. The model covers a large physics scope in terms of mass flux, pressure, quality, and channel diameter; water and R12 refrigerant are both considered. Furthermore, the sensitivity to grid refinement was found to be acceptable.

  7. Statistical Projections for Multi-resolution, Multi-dimensional Visual Data Exploration and Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, Hoa T. [Univ. of Utah, Salt Lake City, UT (United States); Stone, Daithi [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bethel, E. Wes [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2016-01-01

    An ongoing challenge in visual exploration and analysis of large, multi-dimensional datasets is how to present useful, concise information to a user for specific visualization tasks. Typical approaches to this problem propose reduced-resolution versions of the data, projections of the data, or both. These approaches still have limitations, such as high computational cost or loss of accuracy. In this work, we explore the use of a statistical metric as the basis for both projections and reduced-resolution versions of data, with a particular focus on preserving one key trait in the data, namely variation. We use two case studies to explore this idea: one based on a synthetic dataset, and another based on a large ensemble collection produced by an atmospheric modeling code to study long-term changes in global precipitation. The primary finding of our work is that, in terms of preserving the variation signal inherent in the data, a statistical measure more faithfully preserves this key characteristic across both multi-dimensional projections and multi-resolution representations than a methodology based upon averaging.
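
    A toy analogue of the idea, not the authors' metric: when a signal is block-reduced for a lower-resolution view, keeping the block mean smooths variation away, while keeping a statistical measure such as the block standard deviation preserves it.

```python
import numpy as np

def block_reduce(x, k, stat):
    """Reduce a 1D array by factor k, summarizing each block with `stat`."""
    trimmed = x[: len(x) // k * k].reshape(-1, k)
    return stat(trimmed, axis=1)

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0.0, 20.0, 10_000)) + rng.normal(0.0, 0.3, 10_000)

means = block_reduce(signal, 100, np.mean)   # averaging smooths variation away
stds = block_reduce(signal, 100, np.std)     # a statistical measure keeps it

print("std of mean-reduced signal:", means.std())
print("mean of per-block std (variation signal):", stds.mean())
```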

  8. Multi-scale analysis of the effect of nano-filler particle diameter on the physical properties of CAD/CAM composite resin blocks.

    Science.gov (United States)

    Yamaguchi, Satoshi; Inoue, Sayuri; Sakai, Takahiko; Abe, Tomohiro; Kitagawa, Haruaki; Imazato, Satoshi

    2017-05-01

    The objective of this study was to assess the effect of silica nano-filler particle diameter in a computer-aided design/manufacturing (CAD/CAM) composite resin (CR) block on physical properties at the multi-scale in silico. CAD/CAM CR blocks were modeled, consisting of silica nano-filler particles (20, 40, 60, 80, and 100 nm) and matrix (Bis-GMA/TEGDMA), with a filler volume content of 55.161%. Young's moduli and Poisson's ratios of the blocks at the macro-scale were calculated by homogenization analysis. Macro-scale CAD/CAM CR blocks (3 × 3 × 3 mm) were modeled, and compressive strengths were defined when the fracture loads exceeded 6075 N. MPS values of the nano-scale models were compared by localization analysis. As the filler size decreased, Young's moduli and compressive strength increased, while Poisson's ratios and MPS decreased. All parameters were significantly correlated with the diameter of the filler particles (Pearson's correlation test, r = -0.949, 0.943, -0.951, and 0.976, respectively; p < 0.05). The physical properties of CAD/CAM CR blocks can thus be enhanced by loading silica nano-filler particles of smaller diameter, and blocks using smaller silica nano-filler particles have the potential to increase fracture resistance.

  9. Learning Natural Selection in 4th Grade with Multi-Agent-Based Computational Models

    Science.gov (United States)

    Dickes, Amanda Catherine; Sengupta, Pratim

    2013-01-01

    In this paper, we investigate how elementary school students develop multi-level explanations of population dynamics in a simple predator-prey ecosystem, through scaffolded interactions with a multi-agent-based computational model (MABM). The term "agent" in an MABM indicates individual computational objects or actors (e.g., cars), and these…

  10. Analysis and synthesis of multi-qubit, multi-mode quantum devices

    Energy Technology Data Exchange (ETDEWEB)

    Solgun, Firat

    2015-03-27

    In this thesis we propose new methods in multi-qubit multi-mode circuit quantum electrodynamics (circuit-QED) architectures. First we describe a direct parity measurement method for three qubits, which can be realized in 2D circuit-QED with a possible extension to four qubits in a 3D circuit-QED setup for the implementation of the surface code. In Chapter 3 we show how to derive Hamiltonians and compute relaxation rates of the multi-mode superconducting microwave circuits consisting of single Josephson junctions using an exact impedance synthesis technique (the Brune synthesis) and applying previous formalisms for lumped element circuit quantization. In the rest of the thesis we extend our method to multi-junction (multi-qubit) multi-mode circuits through the use of state-space descriptions which allows us to quantize any multiport microwave superconducting circuit with a reciprocal lossy impedance response.

  11. MMA, A Computer Code for Multi-Model Analysis

    Science.gov (United States)

    Poeter, Eileen P.; Hill, Mary C.

    2007-01-01

    This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. Many applications of MMA will
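
    As a hedged sketch of the information-criterion machinery MMA builds on, the snippet below converts AIC values into model weights that can be used for ranking and model-averaged predictions; the log-likelihoods and parameter counts are made up for illustration.

```python
import numpy as np

# Model weights from an information criterion (here AIC):
#   AIC_i = 2*k_i - 2*ln(L_i);  delta_i = AIC_i - min(AIC);
#   w_i = exp(-delta_i / 2) / sum_j exp(-delta_j / 2)
log_likelihoods = np.array([-120.3, -118.9, -119.5])   # made-up values
n_params = np.array([3, 5, 4])

aic = 2 * n_params - 2 * log_likelihoods
delta = aic - aic.min()
weights = np.exp(-delta / 2)
weights /= weights.sum()

for i, (a, w) in enumerate(zip(aic, weights), start=1):
    print(f"model {i}: AIC = {a:.1f}, weight = {w:.3f}")

# A model-averaged prediction is then sum_i w_i * prediction_i.
```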

  12. Computational implementation of the multi-mechanism deformation coupled fracture model for salt

    International Nuclear Information System (INIS)

    Koteras, J.R.; Munson, D.E.

    1996-01-01

    The Multi-Mechanism Deformation (M-D) model for creep in rock salt has been used in three-dimensional computations for the Waste Isolation Pilot Plant (WIPP), a potential waste repository. These computational studies are relied upon to make key predictions about the long-term behavior of the repository. Recently, the M-D model was extended to include creep-induced damage. The extended model, the Multi-Mechanism Deformation Coupled Fracture (MDCF) model, is considerably more complicated than the M-D model and required a different technology from that of the M-D model for its computational implementation.

  13. Integral Full Core Multi-Physics PWR Benchmark with Measured Data

    Energy Technology Data Exchange (ETDEWEB)

    Forget, Benoit; Smith, Kord; Kumar, Shikhar; Rathbun, Miriam; Liang, Jingang

    2018-04-11

    In recent years, the importance of modeling and simulation has been highlighted extensively in the DOE research portfolio, with concrete examples in nuclear engineering such as the CASL and NEAMS programs. These research efforts and similar efforts worldwide aim at the development of high-fidelity multi-physics analysis tools for the simulation of current and next-generation nuclear power reactors. Like all analysis tools, verification and validation are essential to guarantee proper functioning of the software and methods employed. The current approach relies mainly on the validation of single-physics phenomena (e.g. critical experiments, flow loops, etc.), and there is a lack of relevant multi-physics benchmark measurements necessary to validate the high-fidelity methods being developed today. This work introduces a new multi-cycle full-core Pressurized Water Reactor (PWR) depletion benchmark based on two operational cycles of a commercial nuclear power plant that provides a detailed description of fuel assemblies, burnable absorbers, in-core fission detectors, core loading, and re-loading patterns. This benchmark enables analysts to develop extremely detailed reactor core models that can be used for testing and validation of coupled neutron transport, thermal-hydraulics, and fuel isotopic depletion. The benchmark also provides measured reactor data for Hot Zero Power (HZP) physics tests, boron letdown curves, and three-dimensional in-core flux maps from 58 instrumented assemblies. The benchmark description is now available online and has been used by many groups. However, much work remains to be done on the quantification of uncertainties and modeling sensitivities. This work aims to address these deficiencies and make this benchmark a true non-proprietary international benchmark for the validation of high-fidelity tools. This report details the BEAVRS uncertainty quantification for the first two cycles of operation and serves as the final report of the project.

  14. From Digital Imaging to Computer Image Analysis of Fine Art

    Science.gov (United States)

    Stork, David G.

    An expanding range of techniques from computer vision, pattern recognition, image analysis, and computer graphics are being applied to problems in the history of art. The success of these efforts is enabled by the growing corpus of high-resolution multi-spectral digital images of art (primarily paintings and drawings), sophisticated computer vision methods, and most importantly the engagement of some art scholars who bring questions that may be addressed through computer methods. This paper outlines some general problem areas and opportunities in this new inter-disciplinary research program.

  15. The reactor physics computer programs in PC's era

    International Nuclear Information System (INIS)

    Nainer, O.; Serghiuta, D.

    1995-01-01

    The main objective of reactor physics analysis is the evaluation of the flux and power distribution over the reactor core. For CANDU reactors, sophisticated computer programs such as FMDP and RFSP were developed 20 years ago for mainframe computers. These programs were adapted to work on workstations with UNIX or DOS, but they lack one feature that would improve their use: user-friendliness. To use these programs, users must deal with a great amount of information contained in sophisticated files, and modifying a model is a great challenge: one must bear in mind all the geometrical dimensions and modify the core model accordingly to match the new requirements, all within a line-oriented input file. On a DOS platform with an average-performance PC, would it be possible to represent and modify all the geometrical and physical parameters in a meaningful way, on screen, through an intuitive graphical user interface; to reduce the real time elapsed in performing complex fuel-management analysis 'at home'; and to avoid rewriting the mainframe version of the program? The author's answer is a fuel-management computer package operating on a PC, running 3 times faster than on a CDC-Cyber 830 mainframe (on a 486DX/33 MHz/8 MB RAM system) or 20 times faster (on a Pentium PC), respectively. (author). 5 refs., 1 tab., 5 figs

  16. Distributed and multi-core computation of 2-loop integrals

    International Nuclear Information System (INIS)

    De Doncker, E; Yuasa, F

    2014-01-01

    For an automatic computation of Feynman loop integrals in the physical region we rely on an extrapolation technique where the integrals of the sequence are obtained with iterated/repeated adaptive methods from the QUADPACK 1D quadrature package. The integration rule evaluations in the outer level, corresponding to independent inner integral approximations, are assigned to threads dynamically via the OpenMP runtime in the parallel implementation. Furthermore, multi-level (nested) parallelism enables an efficient utilization of hyperthreading or larger numbers of cores. For a class of loop integrals in the unphysical region, which do not suffer from singularities in the interior of the integration domain, we find that the distributed adaptive integration methods in the multivariate PARINT package are highly efficient and accurate. We apply these techniques without resorting to integral transformations and report on the capabilities of the algorithms and the parallel performance for a test set including various types of two-loop integrals
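
    SciPy's quad wraps the same QUADPACK adaptive routines, so the iterated/repeated scheme can be sketched by letting the outer adaptive integral evaluate an inner adaptive integral at each abscissa. The toy integrand below is smooth and needs no extrapolation, unlike actual loop integrals; in a parallel implementation, the independent inner evaluations are exactly what gets distributed across threads.

```python
from scipy.integrate import quad   # QUADPACK adaptive quadrature

def inner(x):
    # Inner adaptive integral, evaluated at each outer abscissa x.
    val, _ = quad(lambda y: 1.0 / (1.0 + x * y), 0.0, 1.0)
    return val

# Outer adaptive integral over the inner one; exact value is pi**2 / 12.
outer_val, outer_err = quad(inner, 0.0, 1.0)
print(f"integral ~ {outer_val:.6f} (outer error estimate {outer_err:.1e})")
```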

  17. Use of computer technologies in physical education of women of the first mature age

    Directory of Open Access Journals (Sweden)

    Julia Tomilina

    2016-08-01

    Full Text Available Purpose: to improve the process of physical education of women of the first mature age by means of Pilates with the use of information technologies. Material & Methods: the experience of developing and deploying computer technologies in the physical education of women of mature age was systematized through analysis of the scientific, methodical, and specialist literature and of the best pedagogical and coaching practices presented in the mass media. The ways of applying computer programs were identified, and the computer program 'Pilates' was developed in the Visual Basic environment. Results: the computer program 'Pilates', which consists of reference, calculation, and recreational blocks, is offered with the aim of improving the physical education of women of the first mature age by means of Pilates and increasing their motivation for physical exercise. Conclusions: applying computer programs is shown to have a positive effect in the physical education of women of the first mature age. The computer program 'Pilates' is presented.

  18. Recommending the heterogeneous cluster type multi-processor system computing

    International Nuclear Information System (INIS)

    Iijima, Nobukazu

    2010-01-01

    A real-time reactor simulator had been developed by reusing equipment from the Musashi reactor, and improving its performance became indispensable for research tools, namely increasing the sampling rate by introducing arithmetic units based on a multi-Digital-Signal-Processor (DSP) system (cluster). In order to realize heterogeneous cluster-type multi-processor computing, a combination of two kinds of Control Processors (CPs), the Cluster Control Processor (CCP) and the System Control Processor (SCP), was proposed, with a Large System Control Processor (LSCP) for hierarchical clusters where needed. The faster computing performance of this system was favorably evaluated through simulation results for the simultaneous execution of plural jobs and for pipeline processing between clusters, which showed that the system leads to effective use of the existing equipment and enhanced cost performance. (T. Tanaka)

  19. Integration of topological modification within the modeling of multi-physics systems: Application to a Pogo-stick

    Science.gov (United States)

    Abdeljabbar Kharrat, Nourhene; Plateaux, Régis; Miladi Chaabane, Mariem; Choley, Jean-Yves; Karra, Chafik; Haddar, Mohamed

    2018-05-01

    The present work tackles the modeling of multi-physics systems using a topological approach, proceeding with a new methodology that applies a topological modification to the structure of the system; a comparison with Magos' methodology is then made. Their common ground is the use of connectivity within systems. The comparison and analysis of the different types of modeling show the importance of the topological methodology through the integration of the topological modification into the topological structure of a multi-physics system. In order to validate this methodology, the case of a Pogo-stick is studied. The first step consists in generating a topological graph of the system. The connectivity step then takes the contact with the ground into account. In the last step of this research, the MGS language (Modeling of General System) is used to model the system through equations. Finally, the results are compared with those obtained by MODELICA. The proposed methodology may therefore be generalized to model multi-physics systems that can be considered as a set of local elements.

  20. An SQL-based approach to physics analysis

    International Nuclear Information System (INIS)

    Limper, Dr Maaike

    2014-01-01

    As part of the CERN openlab collaboration a study was made into the possibility of performing analysis of the data collected by the experiments at the Large Hadron Collider (LHC) through SQL-queries on data stored in a relational database. Currently LHC physics analysis is done using data stored in centrally produced 'ROOT-ntuple' files that are distributed through the LHC computing grid. The SQL-based approach to LHC physics analysis presented in this paper allows calculations in the analysis to be done at the database and can make use of the database's in-built parallelism features. Using this approach it was possible to reproduce results for several physics analysis benchmarks. The study shows the capability of the database to handle complex analysis tasks but also illustrates the limits of using row-based storage for storing physics analysis data, as performance was limited by the I/O read speed of the system.
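
    To make the approach concrete, the sketch below runs a cut-and-aggregate step as a SQL query over an ntuple-like table via Python's sqlite3; the 'events' schema and the cuts are hypothetical, not the schema or database engine used in the openlab study.

```python
import sqlite3

# Hypothetical ntuple-like table of per-event kinematics.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE events (run INT, evt INT, pt REAL, eta REAL)")
con.executemany(
    "INSERT INTO events VALUES (?, ?, ?, ?)",
    [(1, 1, 42.5, 0.3), (1, 2, 17.1, -1.2), (1, 3, 63.8, 2.6)],
)

# A selection-and-aggregation step expressed as a query instead of a
# loop over ROOT ntuples; the database can parallelize this internally.
row = con.execute(
    "SELECT COUNT(*), AVG(pt) FROM events WHERE pt > 20 AND ABS(eta) < 2.5"
).fetchone()
print(f"selected events: {row[0]}, mean pT: {row[1]:.1f} GeV")
```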

  1. Computers in plasma physics: remote data access and magnetic configuration design

    International Nuclear Information System (INIS)

    Blackwell, B.D.; McMillan, B.F.; Searle, A.C.; Gardner, H.J.; Price, D.M.; Fredian, T.W.

    2000-01-01

    Full text: Two graphically intensive examples of the application of computers in plasma physics are described: remote data access for plasma confinement experiments, and a code for real-time magnetic field tracing and optimisation. The application for both of these is the H-1NF National Plasma Fusion Research Facility, a Commonwealth Major National Research Facility within the Research School of Physical Sciences, Institute of Advanced Studies, ANU. It is based on the 'flexible' heliac stellarator H-1, a plasma confinement device in which the confining fields are generated solely by external conductors. These complex, fully three-dimensional magnetic fields are used as examples for the magnetic design application, and data from plasma physics experiments are used to illustrate the remote access techniques. As plasma fusion experiments grow in size, increased remote access allows physicists to participate in experiments and data analysis from their home base. Three types of access are described and demonstrated: a simple Java-based web interface, an example TCP client-server built around the widely used MDSPlus data system and the visualisation package IDL (RSI Inc), and a virtual desktop environment (VNC; AT&T Research) that simulates terminals local to the plasma facility. A client-server TCP/IP web interface to the programmable logic controller that controls the high-power magnet power supplies is also described. A very general configuration file allows great flexibility, and allows new displays and interfaces to be created (usually) without changes to the underlying C++ and Java code. The magnetic field code BLINE provides accurate calculation of complex magnetic fields, and 3D visualisation in real time, using a low-cost multiprocessor computer and an OpenGL-compatible graphics accelerator. A fast, flexible multi-mesh interpolation method is used for tracing vacuum magnetic field lines created by arbitrary filamentary

  2. Multi-Level iterative methods in computational plasma physics

    International Nuclear Information System (INIS)

    Knoll, D.A.; Barnes, D.C.; Brackbill, J.U.; Chacon, L.; Lapenta, G.

    1999-01-01

    Plasma physics phenomena occur on a wide range of spatial scales and on a wide range of time scales. When attempting to model plasma physics problems numerically, the authors are inevitably faced with the need for both fine spatial resolution (fine grids) and implicit time integration methods. Fine grids can tax the efficiency of iterative methods, and large time steps can challenge their robustness. To meet these challenges they are developing a hybrid approach in which multigrid methods are used as preconditioners for Krylov-subspace-based iterative methods such as conjugate gradients or GMRES. For nonlinear problems they apply multigrid preconditioning to a matrix-free Newton-GMRES method. Results are presented for the application of these multilevel iterative methods to the field solves in implicit-moment-method PIC, to multidimensional nonlinear Fokker-Planck problems, and to their initial efforts in particle MHD.
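
    A minimal sketch of the matrix-free Newton-GMRES idea, using a finite-difference Jacobian-vector product so the Jacobian is never assembled; the two-equation nonlinear system is a toy stand-in for the field solves described above.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def F(u):
    # Toy nonlinear system with solution (1, 2).
    return np.array([u[0] ** 2 + u[1] - 3.0, u[0] + u[1] ** 2 - 5.0])

u = np.array([1.0, 1.0])
for _ in range(25):                        # Newton loop
    r = F(u)
    if np.linalg.norm(r) < 1e-10:
        break
    eps = 1e-7 * (1.0 + np.linalg.norm(u))

    def jv(v):                             # J(u) @ v ~ (F(u + eps*v) - F(u)) / eps
        return (F(u + eps * v) - r) / eps

    J = LinearOperator((2, 2), matvec=jv)  # matrix-free Jacobian action
    du, info = gmres(J, -r)                # Krylov solve for the Newton step
    u = u + du

print("solution:", u, " residual norm:", np.linalg.norm(F(u)))
```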

  3. IMPETUS - Interactive MultiPhysics Environment for Unified Simulations.

    Science.gov (United States)

    Ha, Vi Q; Lykotrafitis, George

    2016-12-08

    We introduce IMPETUS - Interactive MultiPhysics Environment for Unified Simulations, an object oriented, easy-to-use, high performance, C++ program for three-dimensional simulations of complex physical systems that can benefit a large variety of research areas, especially in cell mechanics. The program implements cross-communication between locally interacting particles and continuum models residing in the same physical space while a network facilitates long-range particle interactions. Message Passing Interface is used for inter-processor communication for all simulations.

  4. Computational physics of the mind

    Science.gov (United States)

    Duch, Włodzisław

    1996-08-01

    In the XIX century and earlier, physicists such as Newton, Mayer, Hooke, Helmholtz and Mach were actively engaged in research on psychophysics, trying to relate psychological sensations to the intensities of physical stimuli. Computational physics makes it possible to simulate complex neural processes, offering a chance to answer not only the original psychophysical questions but also to create models of the mind. In this paper several approaches relevant to modeling of the mind are outlined. Since direct modeling of brain functions is rather limited due to the complexity of such models, a number of approximations are introduced. The path from the brain, or computational neuroscience, to the mind, or cognitive science, is sketched, with emphasis on higher cognitive functions such as memory and consciousness. No fundamental problems in understanding the mind seem to arise. From a computational point of view, realistic models require massively parallel architectures.

  5. Statistical and thermal physics with computer applications

    CERN Document Server

    Gould, Harvey

    2010-01-01

    This textbook carefully develops the main ideas and techniques of statistical and thermal physics and is intended for upper-level undergraduate courses. The authors each have more than thirty years' experience in teaching, curriculum development, and research in statistical and computational physics. Statistical and Thermal Physics begins with a qualitative discussion of the relation between the macroscopic and microscopic worlds and incorporates computer simulations throughout the book to provide concrete examples of important conceptual ideas. Unlike many contemporary texts on the

  6. Large-Scale, Multi-Sensor Atmospheric Data Fusion Using Hybrid Cloud Computing

    Science.gov (United States)

    Wilson, B. D.; Manipon, G.; Hua, H.; Fetzer, E. J.

    2015-12-01

    NASA's Earth Observing System (EOS) is an ambitious facility for studying global climate change. The mandate now is to combine measurements from the instruments on the "A-Train" platforms (AIRS, MODIS, MLS, and CloudSat) and other Earth probes to enable large-scale studies of climate change over decades. Moving to multi-sensor, long-duration analyses presents serious challenges for large-scale data mining and fusion. For example, one might want to compare temperature and water vapor retrievals from one instrument (AIRS) to another (MODIS), and to a model (ECMWF), stratify the comparisons using a classification of the "cloud scenes" from CloudSat, and repeat the entire analysis over 10 years of data. HySDS is a Hybrid-Cloud Science Data System that has been developed and applied under NASA AIST, MEaSUREs, and ACCESS grants. HySDS uses the SciFlow workflow engine to partition analysis workflows into parallel tasks (e.g. segmenting by time or space) that are pushed into a durable job queue. The tasks are "pulled" from the queue by worker Virtual Machines (VMs) and executed in an on-premise Cloud (Eucalyptus or OpenStack) or at Amazon in the public Cloud or govCloud. In this way, years of data (millions of files) can be processed in a massively parallel way. Input variables (arrays) are pulled on demand into the Cloud using OPeNDAP URLs or other subsetting services, thereby minimizing the size of the transferred data. We are using HySDS to automate the production of multiple versions of a ten-year A-Train water vapor climatology under a MEaSUREs grant. We will present the architecture of HySDS, describe the achieved "clock time" speedups in fusing datasets on our own nodes and in the Amazon Cloud, and discuss the Cloud cost tradeoffs for storage, compute, and data transfer. Our system demonstrates how one can pull A-Train variables (Levels 2 & 3) on demand into the Amazon Cloud, and cache only those variables that are heavily used, so that any number of compute jobs can be
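
    A toy version of the push/pull pattern described above: tasks partitioned by a workflow engine are pushed into a queue, and workers (processes here, VMs in HySDS) pull and execute them in parallel. Names and task payloads are invented for illustration.

```python
import multiprocessing as mp

def worker(queue, results):
    """Pull tasks until the sentinel arrives (one worker per process/VM)."""
    while True:
        task = queue.get()
        if task is None:
            break
        results.put((task, f"processed time slice {task}"))

if __name__ == "__main__":
    queue, results = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=worker, args=(queue, results)) for _ in range(4)]
    for p in procs:
        p.start()
    for slice_id in range(12):       # tasks partitioned by the workflow engine
        queue.put(slice_id)
    for _ in procs:
        queue.put(None)              # one sentinel per worker
    for p in procs:
        p.join()
    done = [results.get() for _ in range(12)]
    print(f"{len(done)} tasks completed")
```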

  7. Run 2 analysis computing for CDF and D0

    International Nuclear Information System (INIS)

    Fuess, S.

    1995-11-01

    Two large experiments at the Fermilab Tevatron collider will use upgraded detectors for the Run 2 period of running. The associated analysis software is also expected to change, both to account for higher data rates and to embrace new computing paradigms. A discussion is given of the problems facing current and future High Energy Physics (HEP) analysis computing, and several issues are explored in detail.

  8. The value of unenhanced multi-detector computed tomography ...

    African Journals Online (AJOL)

    Introduction: Unenhanced computed tomography (CT) is used to detect urinary tract calculi with high accuracy. The development of multi-detector CT (MDCT) allows reconstructions in coronal, sagittal and oblique directions. Objective: To compare MDCT with three-dimensional (3D) ultrasound (US) imaging in evaluating ...

  9. Computational Physics Program of the National MFE Computer Center

    International Nuclear Information System (INIS)

    Mirin, A.A.

    1984-12-01

    The principal objective of the computational physics group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. A summary of the group's activities is presented, including computational studies in MHD equilibria and stability, plasma transport, Fokker-Planck calculations, and efficient numerical and programming algorithms. References are included.

  10. a Physics of Computation.

    Science.gov (United States)

    Stornetta, Wakefield Scott, Jr.

    The large number of elements in distributed computational systems such as systolic arrays, connectionist models, and groups of agents makes it difficult to do a "bottom-up" analysis of system performance. This raises the issue of what aspects of such systems can be analyzed without detailed knowledge of the lowest-level interactions. Such an approach is analogous in spirit to seeking a thermodynamic description of a gas, rather than tracking the motions of individual molecules. Four concrete illustrations of this notion are presented as follows: (1) an alternative to the NetTalk approach to temporal pattern processing in connectionist networks, which exhibits simple scaling laws that reduce the system's dependence on the sampling rate, (2) an experimental study of the effect that symmetrizing the operating range of the back-propagation connectionist model has on relaxation rates and capacity, (3) a phenomenological model of a recently introduced fault-stealing mechanism for multi-pipeline systolic arrays, which predicts global failure rates on arrays of arbitrary size based on only a small number of measurements, and (4) the effects that the range of interaction has on specialization and fault tolerance for a group of agents engaged in problem solving.

  11. A non-oscillatory energy-splitting method for the computation of compressible multi-fluid flows

    Science.gov (United States)

    Lei, Xin; Li, Jiequan

    2018-04-01

    This paper proposes a new non-oscillatory energy-splitting conservative algorithm for computing multi-fluid flows in the Eulerian framework. In comparison with existing multi-fluid algorithms in the literature, it is shown that the mass fraction model with isobaric hypothesis is a plausible choice for designing numerical methods for multi-fluid flows. Then we construct a conservative Godunov-based scheme with the high order accurate extension by using the generalized Riemann problem solver, through the detailed analysis of kinetic energy exchange when fluids are mixed under the hypothesis of isobaric equilibrium. Numerical experiments are carried out for the shock-interface interaction and shock-bubble interaction problems, which display the excellent performance of this type of schemes and demonstrate that nonphysical oscillations are suppressed around material interfaces substantially.

  12. Multi-criteria multi-stakeholder decision analysis using a fuzzy-stochastic approach for hydrosystem management

    Science.gov (United States)

    Subagadis, Y. H.; Schütze, N.; Grundmann, J.

    2014-09-01

    The conventional methods used to solve multi-criteria multi-stakeholder problems are weakly formulated, as they normally incorporate only homogeneous information at a time and suggest aggregating the objectives of different decision-makers, thereby neglecting water-society interactions. In this contribution, Multi-Criteria Group Decision Analysis (MCGDA) using a fuzzy-stochastic approach is proposed to rank a set of alternatives in water management decisions while incorporating heterogeneous information under uncertainty. The decision-making framework takes hydrologically, environmentally, and socio-economically motivated conflicting objectives into consideration. The criteria related to the performance of the physical system are optimized using multi-criteria simulation-based optimization, and fuzzy linguistic quantifiers are used to evaluate subjective criteria and to assess the stakeholders' degree of optimism. The proposed methodology is applied to find effective and robust intervention strategies for the management of a coastal hydrosystem affected by saltwater intrusion due to excessive groundwater extraction for irrigated agriculture and municipal use. Preliminary results show that MCGDA based on a fuzzy-stochastic approach gives useful support for robust decision-making and is sensitive to the decision-makers' degree of optimism.

  13. The effectiveness of multi-component goal setting interventions for changing physical activity behaviour: a systematic review and meta-analysis.

    Science.gov (United States)

    McEwan, Desmond; Harden, Samantha M; Zumbo, Bruno D; Sylvester, Benjamin D; Kaulius, Megan; Ruissen, Geralyn R; Dowd, A Justine; Beauchamp, Mark R

    2016-01-01

    Drawing from goal setting theory (Latham & Locke, 1991; Locke & Latham, 2002; Locke et al., 1981), the purpose of this study was to conduct a systematic review and meta-analysis of multi-component goal setting interventions for changing physical activity (PA) behaviour. A literature search returned 41,038 potential articles. Included studies consisted of controlled experimental trials wherein participants in the intervention conditions set PA goals and their PA behaviour was compared to that of participants in a control group who did not set goals. A meta-analysis was ultimately carried out across 45 articles (comprising 52 interventions, 126 effect sizes, n = 5912) that met the eligibility criteria, using a random-effects model. Overall, a medium, positive effect (Cohen's d(SE) = .552(.06), 95% CI = .43-.67, Z = 9.03, p < .001) of multi-component goal setting interventions on PA behaviour was found. Moderator analyses across 20 variables revealed several noteworthy results with regard to features of the study, sample characteristics, PA goal content, and additional goal-related behaviour change techniques. In conclusion, multi-component goal setting interventions represent an effective method of fostering PA across a diverse range of populations and settings. Implications for effective goal setting interventions are discussed.

  14. Myocardial perfusion with multi-detector computed tomography: quantitative evaluation

    International Nuclear Information System (INIS)

    Carrascosa, Patricia M.; Vallejos, J.; Capunay, Carlos M.; Deviggiano, A.; Carrascosa, Jorge M.

    2007-01-01

    The objective of this work is to evaluate the capability of multi-detector computed tomography to quantify the different patterns of enhancement seen during evaluation of myocardial perfusion. 45 patients with suspected cardiovascular disease were studied. Multi-detector computed tomography was performed at rest and under pharmacological stress, after the administration of dipyridamole. The patients were also evaluated using nuclear medicine

  15. A Framework for Understanding Physics Students' Computational Modeling Practices

    Science.gov (United States)

    Lunk, Brandon Robert

    With the growing push to include computational modeling in the physics classroom, we are faced with the need to better understand students' computational modeling practices. While existing research on programming comprehension explores how novices and experts generate programming algorithms, little of this discusses how domain content knowledge, and physics knowledge in particular, can influence students' programming practices. In an effort to better understand this issue, I have developed a framework for modeling these practices based on a resource stance towards student knowledge. A resource framework models knowledge as the activation of vast networks of elements called "resources." Much like neurons in the brain, resources that become active can trigger cascading events of activation throughout the broader network. This model emphasizes the connectivity between knowledge elements and provides a description of students' knowledge base. Together with resources, the concepts of "epistemic games" and "frames" provide a means for addressing the interaction between content knowledge and practices. Although this framework has generally been limited to describing conceptual and mathematical understanding, it also provides a means for addressing students' programming practices. In this dissertation, I will demonstrate this facet of a resource framework as well as fill in an important missing piece: a set of epistemic games that can describe students' computational modeling strategies. The development of this theoretical framework emerged from the analysis of video data of students generating computational models during the laboratory component of a Matter & Interactions: Modern Mechanics course. Student participants across two semesters were recorded as they worked in groups to fix pre-written computational models that were initially missing key lines of code. Analysis of this video data showed that the students' programming practices were highly influenced by

  16. Development of computer-based analytical tool for assessing physical protection system

    Science.gov (United States)

    Mardhi, Alim; Pengvanich, Phongphaeth

    2016-01-01

    Assessment of physical protection system effectiveness is a priority for ensuring optimum protection against unlawful acts directed at a nuclear facility, such as unauthorized removal of nuclear materials and sabotage of the facility itself. Since an assessment based on real exercise scenarios is costly and time-consuming, a computer-based analytical tool offers a practical way to model likely threat scenarios. There are several tools that can be used instantly, such as EASI and SAPE; however, for our research purpose it is more suitable to have a tool that can be customized and enhanced further. In this work, we have developed a computer-based analytical tool that uses a network methodological approach for modelling the adversary paths. The inputs are the multiple security elements used to evaluate the effectiveness of the system's detection, delay, and response functions. The tool has the capability to analyze the most critical path and to quantify the probability of effectiveness of the system as a performance measure.
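    A toy version of the network approach (not the authors' tool): if every element along an adversary path carries a detection probability, a crude per-path effectiveness measure is the probability of at least one detection, and the most critical path is the one that minimizes it. Real analyses such as EASI also weigh the remaining delay against the response-force time:

        def path_effectiveness(p_detect):
            """Probability of at least one detection along a path."""
            miss = 1.0
            for p in p_detect:
                miss *= 1.0 - p
            return 1.0 - miss

        def most_critical(paths):
            """paths: {name: [detection probabilities of elements on the path]}"""
            return min(paths, key=lambda name: path_effectiveness(paths[name]))

        paths = {
            "fence-door-vault": [0.5, 0.8, 0.9],   # hypothetical facility layout
            "roof-hatch-vault": [0.3, 0.4, 0.9],
        }
        worst = most_critical(paths)
        print(worst, path_effectiveness(paths[worst]))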

  17. Nuclear physics at multi-GeV hadron facilities

    International Nuclear Information System (INIS)

    Geesaman, D.F.

    1993-01-01

    The important contributions Multi-GeV hadron beam facilities can make to the field of Nuclear Physics have been recognized by the community for a decade. Such a facility has featured prominently in each NSAC planning exercise in this period. As Nuclear Physicists realize they must become more concerned with the quark structure of nuclei and the applications of Quantum Chromodynamics to many body systems, the need for experiments at such facilities has become more urgent. In this talk, I will present a personal view of some of the significant recent Nuclear Physics results with multi-GeV hadron facilities, the most important opportunities which can open up to us in the future, and demonstrate how our field must take advantage of these opportunities to progress. I will also report on the recent discussions in the community to make this possible

  18. Physics Mining of Multi-source Data Sets, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — We propose to implement novel physics mining algorithms with analytical capabilities to derive diagnostic and prognostic numerical models from multi-source...

  19. Earth Science Computational Architecture for Multi-disciplinary Investigations

    Science.gov (United States)

    Parker, J. W.; Blom, R.; Gurrola, E.; Katz, D.; Lyzenga, G.; Norton, C.

    2005-12-01

    Understanding the processes underlying Earth's deformation and mass transport requires a non-traditional, integrated, interdisciplinary, approach dependent on multiple space and ground based data sets, modeling, and computational tools. Currently, details of geophysical data acquisition, analysis, and modeling largely limit research to discipline domain experts. Interdisciplinary research requires a new computational architecture that is optimized to perform complex data processing of multiple solid Earth science data types in a user-friendly environment. A web-based computational framework is being developed and integrated with applications for automatic interferometric radar processing, and models for high-resolution deformation & gravity, forward models of viscoelastic mass loading over short wavelengths & complex time histories, forward-inverse codes for characterizing surface loading-response over time scales of days to tens of thousands of years, and inversion of combined space magnetic & gravity fields to constrain deep crustal and mantle properties. This framework combines an adaptation of the QuakeSim distributed services methodology with the Pyre framework for multiphysics development. The system uses a three-tier architecture, with a middle tier server that manages user projects, available resources, and security. This ensures scalability to very large networks of collaborators. Users log into a web page and have a personal project area, persistently maintained between connections, for each application. Upon selection of an application and host from a list of available entities, inputs may be uploaded or constructed from web forms and available data archives, including gravity, GPS and imaging radar data. The user is notified of job completion and directed to results posted via URLs. Interdisciplinary work is supported through easy availability of all applications via common browsers, application tutorials and reference guides, and worked examples with

  20. Usefulness of multi-detector row Computed Tomography for ...

    African Journals Online (AJOL)

    A 74-year-old female underwent surgical treatment for adenocarcinoma of the pancreatic head. Preoperative multi-detector row computed tomography (MD-CT) demonstrated tumor invasion into the accessory right colic vein and the branch of the middle colic artery (MCA), which was not detected by digital subtraction ...

  1. CHEP95: Computing in high energy physics. Abstracts

    International Nuclear Information System (INIS)

    1995-01-01

    These proceedings cover the technical papers on computation in High Energy Physics, including computer codes, computer devices, control systems, simulations, and data acquisition systems. New approaches to computer architectures are also discussed

  2. Multi-group transport methods for high-resolution neutron activation analysis

    International Nuclear Information System (INIS)

    Burns, K. A.; Smith, L. E.; Gesh, C. J.; Shaver, M. W.

    2009-01-01

    The accurate and efficient simulation of coupled neutron-photon problems is necessary for several important radiation detection applications. Examples include the detection of nuclear threats concealed in cargo containers and prompt gamma neutron activation analysis for nondestructive determination of elemental composition of unknown samples. In these applications, high-resolution gamma-ray spectrometers are used to preserve as much information as possible about the emitted photon flux, which consists of both continuum and characteristic gamma rays with discrete energies. Monte Carlo transport is the most commonly used modeling tool for this type of problem, but computational times for many problems can be prohibitive. This work explores the use of multi-group deterministic methods for the simulation of neutron activation problems. Central to this work is the development of a method for generating multi-group neutron-photon cross-sections in a way that separates the discrete and continuum photon emissions so that the key signatures in neutron activation analysis (i.e., the characteristic line energies) are preserved. The mechanics of the cross-section preparation method are described and contrasted with standard neutron-gamma cross-section sets. These custom cross-sections are then applied to several benchmark problems. Multi-group results for neutron and photon flux are compared to MCNP results. Finally, calculated responses of high-resolution spectrometers are compared. Preliminary findings show promising results when compared to MCNP. A detailed discussion of the potential benefits and shortcomings of the multi-group-based approach, in terms of accuracy, and computational efficiency, is provided. (authors)
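    One plausible data layout for the separation described above, keeping characteristic lines at their exact energies next to group-averaged continuum data (the numbers are placeholders, not evaluated nuclear data):

        from dataclasses import dataclass

        @dataclass
        class PhotonProduction:
            continuum: list   # per neutron group: photon emission per photon group
            lines: list       # (energy in MeV, yield) pairs kept at exact energies

        fe = PhotonProduction(
            continuum=[[0.02, 0.05, 0.01],         # 2 neutron groups x 3 photon groups
                       [0.01, 0.04, 0.02]],
            lines=[(0.847, 0.95), (1.238, 0.23)],  # characteristic gammas, invented yields
        )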

  3. Multi-detector row computed tomography angiography of peripheral arterial disease

    International Nuclear Information System (INIS)

    Kock, Marc C.J.M.; Dijkshoorn, Marcel L.; Pattynama, Peter M.T.; Myriam Hunink, M.G.

    2007-01-01

    With the introduction of multi-detector row computed tomography (MDCT), scan speed and image quality have improved considerably. Since longitudinal coverage is no longer a limitation, multi-detector row computed tomography angiography (MDCTA) is increasingly used to depict the peripheral arterial runoff. Hence, it is important to know the advantages and limitations of this new non-invasive alternative to the reference test, digital subtraction angiography. Optimization of the acquisition parameters and the contrast delivery is important to achieve reliable enhancement of the entire arterial runoff in patients with peripheral arterial disease (PAD) using fast CT scanners. The purpose of this review is to discuss the different scanning and injection protocols using 4-, 16-, and 64-detector row CT scanners, to propose effective methods to evaluate and present large data sets, to discuss its clinical value and major limitations, and to review the literature on the validity, reliability, and cost-effectiveness of multi-detector row CT in the evaluation of PAD. (orig.)

  4. Computer methods in physics 250 problems with guided solutions

    CERN Document Server

    Landau, Rubin H

    2018-01-01

    Our future scientists and professionals must be conversant in computational techniques. In order to facilitate integration of computer methods into existing physics courses, this textbook offers a large number of worked examples and problems with fully guided solutions in Python as well as other languages (Mathematica, Java, C, Fortran, and Maple). It’s also intended as a self-study guide for learning how to use computer methods in physics. The authors include an introductory chapter on numerical tools and indication of computational and physics difficulty level for each problem.

  5. On multi-rate Erlang-B computations

    DEFF Research Database (Denmark)

    Iversen, Villy Bæk; Nilsson, A.A.; Perry, M.

    1999-01-01

    The single-rate Erlang-B model is and has been a cornerstone of numerous traffic engineering applications which involve the calculation and optimization of blocking probabilities. However, with the emerging integrated multimedia networks, one single traffic type is often inadequate, yet the single-rate Erlang-B model is often used even for today's new networks. A primary reason for this is the existence of methods to compute blocking probabilities and their derivatives that are simple, fast, and stable. With this paper, we provide an improved way of computing multi-rate Erlang-B blocking probabilities and their derivatives. Since the improvements are also simple, fast, direct, and stable, optimization and engineering methods based on a single-rate Erlang-B model can now be easily extended to emerging multimedia networks.
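    The classical algorithm in this setting is the Kaufman-Roberts recursion, which computes multi-rate Erlang-B blocking on a link of C slots in O(C x K) time. The sketch below shows the standard recursion, not necessarily the improved variant the paper proposes:

        def kaufman_roberts(capacity, traffic):
            """traffic: list of (offered load a_k in Erlang, slots b_k per call)."""
            q = [0.0] * (capacity + 1)
            q[0] = 1.0
            for n in range(1, capacity + 1):       # n * q[n] = sum_k a_k b_k q[n - b_k]
                q[n] = sum(a * b * q[n - b] for a, b in traffic if n >= b) / n
            g = sum(q)                             # normalization constant
            # class k is blocked in states with fewer than b_k free slots
            return [sum(q[n] for n in range(capacity - b + 1, capacity + 1)) / g
                    for _, b in traffic]

        # Two classes on a 10-slot link: narrowband (b=1) and wideband (b=4).
        print(kaufman_roberts(10, [(3.0, 1), (1.0, 4)]))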

  6. Summary of research in applied mathematics, numerical analysis, and computer sciences

    Science.gov (United States)

    1986-01-01

    The major categories of current ICASE research programs addressed include: numerical methods, with particular emphasis on the development and analysis of basic numerical algorithms; control and parameter identification problems, with emphasis on effective numerical methods; computational problems in engineering and physical sciences, particularly fluid dynamics, acoustics, and structural analysis; and computer systems and software, especially vector and parallel computers.

  7. Multi-Fidelity Multi-Strategy and Multi-Disciplinary Design Optimization Environment, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — Multidisciplinary design and optimization (MDO) tools developed to perform multi-disciplinary analysis based on low fidelity computation methods have been used in...

  8. Microphysics in Multi-scale Modeling System with Unified Physics

    Science.gov (United States)

    Tao, Wei-Kuo

    2012-01-01

    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE model), (2) a regional-scale model (the NASA unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, and land processes, as well as the explicit cloud-radiation and cloud-land surface interactive processes, are applied throughout this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance within the multi-scale modeling system will be discussed.

  9. Specialized, multi-user computer facility for the high-speed, interactive processing of experimental data

    International Nuclear Information System (INIS)

    Maples, C.C.

    1979-01-01

    A proposal has been made to develop a specialized computer facility specifically designed to deal with the problems associated with the reduction and analysis of experimental data. Such a facility would provide a highly interactive, graphics-oriented, multi-user environment capable of handling relatively large data bases for each user. By conceptually separating the general problem of data analysis into two parts, cyclic batch calculations and real-time interaction, a multi-level, parallel processing framework may be used to achieve high-speed data processing. In principle such a system should be able to process a mag tape equivalent of data, through typical transformations and correlations, in under 30 sec. The throughput for such a facility, assuming five users simultaneously reducing data, is estimated to be 2 to 3 times greater than is possible, for example, on a CDC7600

  10. Computer Aided Multi-Data Fusion Dismount Modeling

    Science.gov (United States)

    2012-03-22

    dependent on a particular environmental condition. They are costly, cumbersome, and involve dedicated software practices and particular knowledge to operate... allow manipulation of 2D matrices, like Microsoft Excel or LibreOffice. The second alternative is to modify an already created model (MEM). The model... software. Therefore, with the described computer-aided multi-data dismount model the researcher will be able to attach signatures to any desired

  11. On efficiently computing multigroup multi-layer neutron reflection and transmission conditions

    International Nuclear Information System (INIS)

    Abreu, Marcos P. de

    2007-01-01

    In this article, we present an algorithm for efficient computation of multigroup discrete ordinates neutron reflection and transmission conditions, which replace a multi-layered boundary region in neutron multiplication eigenvalue computations with no spatial truncation error. In contrast to the independent layer-by-layer algorithm considered thus far in our computations, the algorithm here is based on an inductive approach developed by the present author for deriving neutron reflection and transmission conditions for a nonactive boundary region with an arbitrary number of arbitrarily thick layers. With this new algorithm, we were able to increase significantly the computational efficiency of our spectral diamond-spectral Green's function method for solving multigroup neutron multiplication eigenvalue problems with multi-layered boundary regions. We provide comparative results for a two-group reactor core model to illustrate the increased efficiency of our spectral method, and we conclude this article with a number of general remarks. (author)
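    In spirit, the inductive construction parallels the standard composition rule for the reflection and transmission operators of stacked layers, in which multiple inter-layer reflections are summed by a matrix geometric series. A hedged numpy sketch for symmetric layers follows; the paper's exact operator algebra may differ:

        import numpy as np

        def combine(R1, T1, R2, T2):
            """Compose two symmetric layers; operators act on angle-group flux vectors."""
            n = R1.shape[0]
            M = np.linalg.inv(np.eye(n) - R1 @ R2)   # sums the multiple reflections
            R12 = R1 + T1 @ R2 @ M @ T1              # reflection of the combined stack
            T12 = T2 @ M @ T1                        # transmission through both layers
            return R12, T12

        # Fold an arbitrary number of (hypothetical) layers inductively:
        rng = np.random.default_rng(0)
        layers = [(0.2 * rng.random((4, 4)), 0.6 * rng.random((4, 4))) for _ in range(3)]
        R, T = layers[0]
        for R2, T2 in layers[1:]:
            R, T = combine(R, T, R2, T2)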

  12. The use of personal computers in reactor physics

    International Nuclear Information System (INIS)

    Cullen, D.E.

    1988-01-01

    This paper points out that personal computers are now powerful enough (in terms of core size and speed) to allow them to be used for serious reactor physics applications. In addition the low cost of personal computers means that even small institutes can now have access to a significant amount of computer power. At the present time distribution centers, such as RSIC, are beginning to distribute reactor physics codes for use on personal computers; hopefully in the near future more and more of these codes will become available through distribution centers, such as RSIC

  13. Development of Multi-physics (Multiphase CFD + MCNP) simulation for generic solution vessel power calculation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung Jun [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Buechler, Cynthia Eileen [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-07-17

    The current study aims to predict the steady state power of a generic solution vessel and to develop a corresponding heat transfer coefficient correlation for a Moly99 production facility by conducting a fully coupled multi-physics simulation. A prediction of steady state power for the current application inherently couples the thermal hydraulic characteristics (i.e. multiphase computational fluid dynamics solved by ANSYS-Fluent 17.2) and the corresponding neutronic behavior (i.e. particle transport solved by MCNP6.2) in the solution vessel. Thus, the development of a coupling methodology is vital to understand the system behavior for a variety of system designs and postulated operating scenarios. In this study, we report on the k-effective (keff) calculation for the baseline solution vessel configuration with a selected solution concentration using MCNP K-code modeling. The associated correlations of thermal properties (e.g. density, viscosity, thermal conductivity, specific heat) at the selected solution concentration are developed based on existing experimental measurements in the open literature. The numerical coupling methodology between multiphase CFD and MCNP is successfully demonstrated, and the detailed coupling procedure is documented. In addition, improved coupling methods capturing realistic physics in the solution vessel thermal-neutronic dynamics are proposed and tested further (i.e. dynamic height adjustment, multi-cell approach). As a key outcome of the current study, a multi-physics coupling methodology between MCFD and MCNP is demonstrated and tested for four different operating conditions. Those different operating conditions are determined based on the neutron source strength at a fixed geometry condition. The steady state powers for the generic solution vessel at various operating conditions are reported, and a generalized correlation of the heat transfer coefficient for the current application is discussed. The assessment of multi-physics
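    Schematically, the coupling reduces to a fixed-point (Picard) iteration between the two codes. The sketch below is generic: run_cfd and run_mcnp are hypothetical stand-ins for the ANSYS-Fluent and MCNP6.2 drivers, and the convergence test on a scalar power is a simplification:

        def couple(run_mcnp, run_cfd, p0, relax=0.5, tol=1e-4, max_iter=50):
            """Under-relaxed Picard iteration on the fission power."""
            power = p0
            for _ in range(max_iter):
                temps, densities = run_cfd(power)       # thermal-hydraulics step
                new_power = run_mcnp(temps, densities)  # neutronics step
                if abs(new_power - power) / power < tol:
                    return new_power
                power += relax * (new_power - power)    # damp the update
            raise RuntimeError("coupling did not converge")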

  14. An Efficient Upscaling Process Based on a Unified Fine-scale Multi-Physics Model for Flow Simulation in Naturally Fracture Carbonate Karst Reservoirs

    KAUST Repository

    Bi, Linfeng

    2009-01-01

    The main challenges in modeling fluid flow through naturally-fractured carbonate karst reservoirs are how to address the various flow physics in complex geological architectures due to the presence of vugs and caves which are connected via fracture networks at multiple scales. In this paper, we present a unified multi-physics model that adapts to the complex flow regime through naturally-fractured carbonate karst reservoirs. This approach generalizes the Stokes-Brinkman model (Popov et al. 2007). The fracture networks provide the essential connection between the caves in carbonate karst reservoirs. It is thus very important to resolve the flow in the fracture network and the interaction between fractures and caves to better understand the complex flow behavior. The idea is to use the Stokes-Brinkman model to represent flow through the rock matrix, void caves, and intermediate flows in very high permeability regions, and to use an idea similar to the discrete fracture network model to represent flow in the fracture network. Consequently, various numerical solution strategies can be efficiently applied to greatly improve the computational efficiency in flow simulations. We have applied this unified multi-physics model as a fine-scale flow solver in scale-up computations. Both local and global scale-up are considered. It is found that global scale-up is much more accurate than local scale-up. Global scale-up requires the solution of global flow problems on the fine grid, which generally is computationally expensive. The proposed model has the ability to deal with a large number of fractures and caves, which facilitates the application of the Stokes-Brinkman model in global scale-up computation. The proposed model flexibly adapts to the different flow physics in naturally-fractured carbonate karst reservoirs in a simple and effective way. It certainly extends modeling and predicting capability in the efficient development of this important type of reservoir.
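    The Stokes-Brinkman momentum balance that this approach builds on is commonly written as

        \[
        \mu K^{-1}\mathbf{u} \;+\; \nabla p \;-\; \mu_{\mathrm{eff}}\,\nabla^{2}\mathbf{u} \;=\; \mathbf{f},
        \qquad \nabla\cdot\mathbf{u} \;=\; 0,
        \]

    so that in the tight matrix (small permeability K) the Darcy term dominates, while in vugs and caves (K large) the equation degenerates to Stokes flow; a single set of equations thus spans both regimes without explicit interface conditions.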

  15. CLINICAL SURFACES - Activity-Based Computing for Distributed Multi-Display Environments in Hospitals

    Science.gov (United States)

    Bardram, Jakob E.; Bunde-Pedersen, Jonathan; Doryab, Afsaneh; Sørensen, Steffen

    A multi-display environment (MDE) is made up of co-located and networked personal and public devices that form an integrated workspace enabling co-located group work. Traditionally, MDEs have, however, mainly been designed to support a single “smart room”, and have had little sense of the tasks and activities that the MDE is being used for. This paper presents a novel approach to support activity-based computing in distributed MDEs, where displays are physically distributed across a large building. CLINICAL SURFACES was designed for clinical work in hospitals, and enables context-sensitive retrieval and browsing of patient data on public displays. We present the design and implementation of CLINICAL SURFACES, and report from an evaluation of the system at a large hospital. The evaluation shows that using distributed public displays to support activity-based computing inside a hospital is very useful for clinical work, and that the apparent tension between maintaining the privacy of medical data and displaying it in a public environment can be mitigated by the use of CLINICAL SURFACES.

  16. Computer Tutorial Programs in Physics.

    Science.gov (United States)

    Faughn, Jerry; Kuhn, Karl

    1979-01-01

    Describes a series of computer tutorial programs which are intended to help college students in introductory physics courses. Information about these programs, which are either calculus or algebra-trig based, is presented. (HM)

  17. A mathematical model for predicting photo-induced voltage and photostriction of PLZT with coupled multi-physics fields and its application

    International Nuclear Information System (INIS)

    Huang, J H; Wang, X J; Wang, J

    2016-01-01

    The primary purpose of this paper is to propose a mathematical model of PLZT ceramic with coupled multi-physics fields, e.g. thermal, electric, mechanical and light field. To this end, the coupling relationships of multi-physics fields and the mechanism of some effects resulting in the photostrictive effect are analyzed theoretically, based on which a mathematical model considering coupled multi-physics fields is established. According to the analysis and experimental results, the mathematical model can explain the hysteresis phenomenon and the variation trend of the photo-induced voltage very well and is in agreement with the experimental curves. In addition, the PLZT bimorph is applied as an energy transducer for a photovoltaic–electrostatic hybrid actuated micromirror, and the relation of the rotation angle and the photo-induced voltage is discussed based on the novel photostrictive mathematical model. (paper)

  18. Advanced Computing Tools and Models for Accelerator Physics

    International Nuclear Information System (INIS)

    Ryne, Robert; Ryne, Robert D.

    2008-01-01

    This paper is based on a transcript of my EPAC'08 presentation on advanced computing tools for accelerator physics. Following an introduction I present several examples, provide a history of the development of beam dynamics capabilities, and conclude with thoughts on the future of large scale computing in accelerator physics

  19. Multi-criteria multi-stakeholder decision analysis using a fuzzy-stochastic approach for hydrosystem management

    Directory of Open Access Journals (Sweden)

    Y. H. Subagadis

    2014-09-01

    The conventional methods used to solve multi-criteria multi-stakeholder problems are less strongly formulated, as they normally incorporate only homogeneous information at a time and suggest aggregating objectives of different decision-makers avoiding water–society interactions. In this contribution, Multi-Criteria Group Decision Analysis (MCGDA) using a fuzzy-stochastic approach has been proposed to rank a set of alternatives in water management decisions incorporating heterogeneous information under uncertainty. The decision making framework takes hydrologically, environmentally, and socio-economically motivated conflicting objectives into consideration. The criteria related to the performance of the physical system are optimized using multi-criteria simulation-based optimization, and fuzzy linguistic quantifiers have been used to evaluate subjective criteria and to assess stakeholders' degree of optimism. The proposed methodology is applied to find effective and robust intervention strategies for the management of a coastal hydrosystem affected by saltwater intrusion due to excessive groundwater extraction for irrigated agriculture and municipal use. Preliminary results show that the MCGDA based on a fuzzy-stochastic approach gives useful support for robust decision-making and is sensitive to the decision makers' degree of optimism.

  20. Multi-scale and multi-physics model of the uterine smooth muscle with mechanotransduction.

    Science.gov (United States)

    Yochum, Maxime; Laforêt, Jérémy; Marque, Catherine

    2018-02-01

    Preterm labor is an important public health problem. However, the efficiency of the uterine muscle during labor is complex and still poorly understood. This work is a first step towards a model of the uterine muscle, including its electrical and mechanical components, to reach a better understanding of uterine synchronization. The model is proposed to investigate, by simulation, the possible role of mechanotransduction in the global synchronization of the uterus. The electrical diffusion indeed explains the local propagation of contractile activity, while tissue stretching may play a role in the synchronization of distant parts of the uterine muscle. This work proposes a multi-physics (electrical, mechanical) and multi-scale (cell, tissue, whole uterus) model, which is applied to a realistic 3D uterus mesh. This model includes electrical components at different scales: generation of action potentials at the cell level and electrical diffusion at the tissue level. It then links these electrical events to the mechanical behavior at the cellular level (via the intracellular calcium concentration) by simulating the force generated by each active cell. It thus computes an estimate of the intrauterine pressure (IUP) by integrating the forces generated by each active cell at the whole-uterus level, as well as the stretching of the tissue (using a viscoelastic law for the tissue behavior). It finally includes stretch-activated channels (SACs) at the cellular level, which create a feedback loop between the mechanical and the electrical behavior (mechanotransduction). The simulation of different activated regions of the uterus, which in this first "proof of concept" case are electrically isolated, permits the activation of inactive regions through the stretching (induced by the electrically active regions) computed at the whole-organ scale. This allows us to demonstrate the role of mechanotransduction in the global synchronization of the uterus. The
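    A toy illustration of the mechanotransduction loop (not the authors' model): a stretch-activated channel current that opens only when local stretch exceeds the rest length will slowly depolarize an otherwise quiescent cell, which is how mechanically stretched but electrically silent regions can be recruited. All parameters below are arbitrary:

        def i_sac(v, stretch, g_sac=0.05, e_sac=-10.0):
            """Linear SAC current, active only when the tissue is stretched."""
            gate = max(0.0, stretch - 1.0)
            return g_sac * gate * (v - e_sac)

        def step(v, stretch, i_ion, dt=0.1, c_m=1.0):
            """One explicit Euler step of C_m dV/dt = -(I_ion + I_SAC)."""
            return v - dt * (i_ion(v) + i_sac(v, stretch)) / c_m

        v = -50.0                                   # quiescent resting potential
        for _ in range(100):                        # constant 20% stretch applied
            v = step(v, 1.2, lambda vv: 0.02 * (vv + 50.0))
        print(v)                                    # drifts toward depolarization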

  1. Benefits of Multi-Sports Physical Education in the Elementary School Context

    Science.gov (United States)

    Pesce, Caterina; Faigenbaum, Avery; Crova, Claudia; Marchetti, Rosalba; Bellucci, Mario

    2013-01-01

    Objective: In many countries, physical education (PE) is taught by classroom teachers (generalists) during the formative years of elementary school. The purpose of this study was to evaluate the physical and psychological outcomes of multi-sports PE taught by qualified PE teachers (specialists) and how they contribute to children's physical and…

  2. Large Scale Computing and Storage Requirements for Nuclear Physics Research

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey J.

    2012-03-02

    The National Energy Research Scientific Computing Center (NERSC) is the primary computing center for the DOE Office of Science, serving approximately 4,000 users and hosting some 550 projects that involve nearly 700 codes for a wide variety of scientific disciplines. In addition to large-scale computing resources NERSC provides critical staff support and expertise to help scientists make the most efficient use of these resources to advance the scientific mission of the Office of Science. In May 2011, NERSC, DOE’s Office of Advanced Scientific Computing Research (ASCR) and DOE’s Office of Nuclear Physics (NP) held a workshop to characterize HPC requirements for NP research over the next three to five years. The effort is part of NERSC’s continuing involvement in anticipating future user needs and deploying necessary resources to meet these demands. The workshop revealed several key requirements, in addition to achieving its goal of characterizing NP computing. The key requirements include: 1. Larger allocations of computational resources at NERSC; 2. Visualization and analytics support; and 3. Support at NERSC for the unique needs of experimental nuclear physicists. This report expands upon these key points and adds others. The results are based upon representative samples, called “case studies,” of the needs of science teams within NP. The case studies were prepared by NP workshop participants and contain a summary of science goals, methods of solution, current and future computing requirements, and special software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, “multi-core” environment that is expected to dominate HPC architectures over the next few years. The report also includes a section with NERSC responses to the workshop findings. NERSC has many initiatives already underway that address key workshop findings and all of the action items are aligned with NERSC strategic plans.

  3. Computing for particle physics. Report of the HEPAP subpanel on computer needs for the next decade

    International Nuclear Information System (INIS)

    1985-08-01

    The increasing importance of computation to the future progress in high energy physics is documented. Experimental computing demands are analyzed for the near future (four to ten years). The computer industry's plans for the near term and long term are surveyed as they relate to the solution of high energy physics computing problems. This survey includes large processors and the future role of alternatives to commercial mainframes. The needs for low speed and high speed networking are assessed, and the need for an integrated network for high energy physics is evaluated. Software requirements are analyzed. The role to be played by multiple processor systems is examined. The computing needs associated with elementary particle theory are briefly summarized. Computing needs associated with the Superconducting Super Collider are analyzed. Recommendations are offered for expanding computing capabilities in high energy physics and for networking between the laboratories

  4. Reactor physics and reactor computations

    International Nuclear Information System (INIS)

    Ronen, Y.; Elias, E.

    1994-01-01

    Mathematical methods and computer calculations for nuclear and thermonuclear reactor kinetics, reactor physics, neutron transport theory, core lattice parameters, waste treatment by transmutation, breeding, nuclear and thermonuclear fuels are the main interests of the conference

  5. Multi-scenario electromagnetic load analysis for CFETR and EAST magnet systems

    Energy Technology Data Exchange (ETDEWEB)

    Xu, Weiwei; Liu, Xufeng, E-mail: lxf@ipp.ac.cn; Du, Shuangsong; Song, Yuntao

    2017-01-15

    Highlights: • A multi-scenario force-calculating simulator for the Tokamak magnet system is developed using the interaction matrix method. • The simulator is applied to EM analysis of the CFETR and EAST magnet systems. • The EM loads on CFETR magnet coils for different typical scenarios and the EM loads acting on the EAST magnet system as a function of time for different shots are analyzed with the simulator. • Results indicate that the approach can be conveniently used for multi-scenario and real-time EM analysis of Tokamak magnet systems. - Abstract: A technology for electromagnetic (EM) analysis of the current-carrying components in tokamaks has been proposed recently (Rozov, 2013; Rozov and Alekseev, 2015). According to this method, the EM loads can be obtained by a linear transform of given currents using the pre-computed interaction matrix. Based on this technology, a multi-scenario force-calculating simulator for the Tokamak magnet system is developed in Fortran in this paper. The simulator is applied to EM analysis of the China Fusion Engineering Test Reactor (CFETR) and Experimental Advanced Superconducting Tokamak (EAST) magnet systems. The pre-computed EM interaction matrices of the CFETR and EAST magnet systems are embedded in the simulator; the EM loads on CFETR magnet coils for different typical scenarios are then evaluated with the simulator, and the comparison of the results with ANSYS results validates the efficiency and accuracy of the method. Using the simulator, the EM loads acting on the EAST magnet system as a function of time for different shots are further analyzed, and the results indicate that the approach can be conveniently used for real-time EM analysis of Tokamak magnet systems.
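    The computational kernel of the interaction-matrix method is a bilinear form in the coil currents: once the matrix has been pre-computed from the geometry, every scenario and time sample reduces to cheap matrix-vector products. A hedged numpy sketch with a placeholder matrix and invented current waveforms:

        import numpy as np

        rng = np.random.default_rng(1)
        n_coils = 6
        G = rng.normal(size=(n_coils, n_coils))     # placeholder interaction matrix
        G = 0.5 * (G + G.T)                         # mutual terms are symmetric

        def coil_forces(currents):
            """F_i = I_i * sum_j G_ij I_j for one set of coil currents."""
            return currents * (G @ currents)

        # Evaluate a whole discharge at once: one force vector per time sample.
        scenario = rng.normal(size=(100, n_coils))  # currents vs. time, per coil
        forces = scenario * (scenario @ G)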

  6. HEPLIB '91: International users meeting on the support and environments of high energy physics computing

    International Nuclear Information System (INIS)

    Johnstad, H.

    1991-01-01

    The purpose of this meeting is to discuss the current and future HEP computing support and environments from the perspective of new horizons in accelerator, physics, and computing technologies. Topics of interest to the Meeting include (but are not limited to): the forming of the HEPLIB world user group for High Energy Physics computing; mandate, desirables, coordination, organization, funding; user experience, international collaboration; the roles of national labs, universities, and industry; range of software, Monte Carlo, mathematics, physics, interactive analysis, text processors, editors, graphics, data base systems, code management tools; program libraries, frequency of updates, distribution; distributed and interactive computing, data base systems, user interface, UNIX operating systems, networking, compilers, Xlib, X-Graphics; documentation, updates, availability, distribution; code management in large collaborations, keeping track of program versions; and quality assurance, testing, conventions, standards

  7. Developing a multi-physics solver in APOLLO3 and applications to cross section homogenization

    International Nuclear Information System (INIS)

    Dugan, Kevin-James

    2016-01-01

    Multi-physics coupling is becoming of large interest in the nuclear engineering and computational science fields. The ability to obtain accurate solutions to realistic models is important to the design and licensing of novel reactor designs, especially in design basis accident situations. The physical models involved in calculating accident behavior in nuclear reactors include: neutron transport, thermal conduction/convection, thermo-mechanics in fuel and support structure, and fuel stoichiometry, among others. However, this thesis focuses on the coupling between two models, neutron transport and thermal conduction/convection. The goal of this thesis is to develop a multi-physics solver for simulating accidents in nuclear reactors. The focus is both on the simulation environment and the data treatment used in such simulations. This work discusses the development of a multi-physics framework based around the Jacobian-Free Newton-Krylov (JFNK) method. The framework includes linear and nonlinear solvers, along with interfaces to existing numerical codes that solve neutron transport and thermal hydraulics models (APOLLO3 and MCTH respectively) through the computation of residuals. A new formulation for the neutron transport residual is explored, which reduces the solution size and search space by a large factor; instead of the residual being based on the angular flux, it is based on the fission source. The question of whether using a fundamental mode distribution of the neutron flux for cross section homogenization is sufficiently accurate during fast transients is also explored. It is shown that in an infinite homogeneous medium, homogenized cross sections produced with a fundamental mode flux differ significantly from a reference solution. The error is remedied by using an alternative weighting flux taken from a time dependent calculation; either a time-integrated flux or an asymptotic solution. The time-integrated flux comes from the multi-physics solution of the
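    A generic JFNK skeleton for such a coupled residual is sketched below. The residual callable stands in for the wrapped APOLLO3/MCTH evaluations, and stacking the two physics into one vector is an assumption of the sketch, not a reproduction of the thesis code:

        import numpy as np
        from scipy.sparse.linalg import LinearOperator, gmres

        def jfnk(residual, u0, tol=1e-8, max_newton=20, eps=1e-7):
            u = u0.copy()
            for _ in range(max_newton):
                f = residual(u)
                if np.linalg.norm(f) < tol:
                    return u
                # Matrix-free Jacobian action: J v ~ (F(u + eps*v) - F(u)) / eps
                J = LinearOperator((u.size, u.size), dtype=float,
                                   matvec=lambda v: (residual(u + eps * v) - f) / eps)
                du, _ = gmres(J, -f)                # inexact Newton step via Krylov
                u = u + du
            raise RuntimeError("Newton did not converge")

        # The coupled residual simply stacks both physics, e.g.
        # residual(u) = np.concatenate([resid_neutronics(u), resid_thermal(u)])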

  8. Automation of multi-agent control for complex dynamic systems in heterogeneous computational network

    Science.gov (United States)

    Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan

    2017-01-01

    The rapid progress of high-performance computing entails new challenges related to solving large scientific problems in various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in the field of parallel and distributed computing pay special attention to the scalability of applications for problem solving. Effective management of a scalable application in a heterogeneous distributed computing environment is still a non-trivial issue. Control systems that operate in networks are especially affected by this issue. We propose a new approach to multi-agent management for scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automation of the problem solving. Advantages of the proposed approach are demonstrated on the example of parametric synthesis of a static linear regulator for complex dynamic systems. Benefits of the scalable application for solving this problem include automation of the multi-agent control of the systems in a parallel mode with various degrees of detailed elaboration.

  9. Highway traffic simulation on multi-processor computers

    Energy Technology Data Exchange (ETDEWEB)

    Hanebutte, U.R.; Doss, E.; Tentner, A.M.

    1997-04-01

    A computer model has been developed to simulate highway traffic for various degrees of automation with a high level of fidelity in regard to driver control and vehicle characteristics. The model simulates vehicle maneuvering in a multi-lane highway traffic system and allows for the use of Intelligent Transportation System (ITS) technologies such as an Automated Intelligent Cruise Control (AICC). The structure of the computer model facilitates the use of parallel computers for the highway traffic simulation, since domain decomposition techniques can be applied in a straightforward fashion. In this model, the highway system (i.e. a network of road links) is divided into multiple regions; each region is controlled by a separate link manager residing on an individual processor. A graphical user interface augments the computer model by allowing for real-time interactive simulation control and interaction with each individual vehicle and road side infrastructure element on each link. Average speed and traffic volume data is collected at user-specified loop detector locations. Further, as a measure of safety the so-called Time To Collision (TTC) parameter is being recorded.

  10. MCPLOTS: a particle physics resource based on volunteer computing

    CERN Document Server

    Karneyeu, A; Prestel, S; Skands, P Z

    2014-01-01

    The mcplots.cern.ch web site (MCPLOTS) provides a simple online repository of plots made with high-energy-physics event generators, comparing them to a wide variety of experimental data. The repository is based on the HEPDATA online database of experimental results and on the RIVET Monte Carlo analysis tool. The repository is continually updated and relies on computing power donated by volunteers, via the LHC@HOME platform.

  11. Multi-source Geospatial Data Analysis with Google Earth Engine

    Science.gov (United States)

    Erickson, T.

    2014-12-01

    The Google Earth Engine platform is a cloud computing environment for data analysis that combines a public data catalog with a large-scale computational facility optimized for parallel processing of geospatial data. The data catalog is a multi-petabyte archive of georeferenced datasets that include images from Earth observing satellite and airborne sensors (examples: USGS Landsat, NASA MODIS, USDA NAIP), weather and climate datasets, and digital elevation models. Earth Engine supports both a just-in-time computation model that enables real-time preview and debugging during algorithm development for open-ended data exploration, and a batch computation mode for applying algorithms over large spatial and temporal extents. The platform automatically handles many traditionally-onerous data management tasks, such as data format conversion, reprojection, and resampling, which facilitates writing algorithms that combine data from multiple sensors and/or models. Although the primary use of Earth Engine, to date, has been the analysis of large Earth observing satellite datasets, the computational platform is generally applicable to a wide variety of use cases that require large-scale geospatial data analyses. This presentation will focus on how Earth Engine facilitates the analysis of geospatial data streams that originate from multiple separate sources (and often communities) and how it enables collaboration during algorithm development and data exploration. The talk will highlight current projects/analyses that are enabled by this functionality. https://earthengine.google.org
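    A minimal example of the multi-source pattern described above, using the Earth Engine Python client (requires an authenticated account; the region, collection, and band choices are illustrative):

        import ee

        ee.Initialize()
        region = ee.Geometry.Point(-122.29, 37.90).buffer(10000)

        # Median Landsat 8 surface-reflectance composite for one year...
        landsat = (ee.ImageCollection('LANDSAT/LC08/C02/T1_L2')
                   .filterDate('2020-01-01', '2020-12-31')
                   .filterBounds(region)
                   .median())
        # ...combined with a digital elevation model from a different source.
        dem = ee.Image('USGS/SRTMGL1_003')

        stats = landsat.select('SR_B4').addBands(dem).reduceRegion(
            reducer=ee.Reducer.mean(), geometry=region, scale=30)
        print(stats.getInfo())   # computed server-side, fetched on demand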

  12. The Goddard multi-scale modeling system with unified physics

    Directory of Open Access Journals (Sweden)

    W.-K. Tao

    2009-08-01

    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (CRM), (2) a regional-scale model, the NASA unified Weather Research and Forecasting Model (WRF), and (3) a coupled CRM-GCM (general circulation model), known as the Goddard Multi-scale Modeling Framework or MMF. The same cloud-microphysical processes, long- and short-wave radiative transfer and land-surface processes are applied in all of the models to study explicit cloud-radiation and cloud-surface interactive processes in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator for comparison and validation with NASA high-resolution satellite data.

    This paper reviews the development and presents some applications of the multi-scale modeling system, including results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols. In addition, use of the multi-satellite simulator to identify the strengths and weaknesses of the model-simulated precipitation processes will be discussed as well as future model developments and applications.

  13. Physics Analysis Tools Workshop 2007

    CERN Multimedia

    Elizabeth Gallas,

    The ATLAS PAT (Physics Analysis Tools) group evaluates, develops and tests software tools for the analysis of physics data, consistent with the ATLAS analysis and event data models. Following on from earlier PAT workshops in London (2004), Tucson (2005) and Tokyo (2006), this year's workshop was hosted by the University of Bergen in Norway on April 23-28 with more than 60 participants. The workshop brought together PAT developers and users to discuss the available tools with an emphasis on preparing for data taking. At the start of the week, workshop participants, laptops and power converters in-hand, jumped headfirst into tutorials, learning how to become trigger-aware and how to use grid computing resources via the distributed analysis tools Panda and Ganga. The well organised tutorials were well attended and soon the network was humming, providing rapid results to the users and ample feedback to the developers. A mid-week break was provided by a relaxing and enjoyable cruise through the majestic Norwegia...

  14. Analysis of Solar Energy Use for Multi-Flat Buildings Renovation

    Directory of Open Access Journals (Sweden)

    Kęstutis Valančius

    2016-10-01

    The paper analyses the energy and financial possibilities of installing renewable energy sources (solar energy generating systems) when renovating multi-flat buildings. The aim is to analyse solar energy system possibilities for the modernization of multi-flat buildings (5-storey, 9-storey and 16-storey), providing detailed conclusions about the appropriateness of the energy systems and financial aspects. It is also intended to determine the optimal technological combinations and solutions to reach the maximum energy benefits. For the research, the computer simulation tools “EnergyPRO” and “PV*SOL Premium” were chosen, and actual collected heat and electricity consumption data are used for the analysis.

  15. Uncertainty Flow Facilitates Zero-Shot Multi-Label Learning in Affective Facial Analysis

    Directory of Open Access Journals (Sweden)

    Wenjun Bai

    2018-02-01

    Featured Application: The proposed Uncertainty Flow framework may benefit facial analysis through its promised gain in discriminability in multi-label affective classification tasks. Moreover, the framework also allows efficient model training and between-task knowledge transfer. Applications that rely heavily on continuous prediction of emotional valence, e.g., monitoring prisoners' emotional stability in jail, can benefit directly from our framework. Abstract: Lowering the single-label dependency of affective facial analysis calls for practical multi-label affective learning. The impediment to practical implementation of existing multi-label algorithms is the scarcity of scalable multi-label training datasets. To resolve this, an inductive transfer learning based framework, Uncertainty Flow, is put forward in this research to allow knowledge transfer from a single-labelled emotion recognition task to a multi-label affective recognition task. That is, the model uncertainty, which can be quantified in Uncertainty Flow, is distilled from a single-label learning task. The distilled model uncertainty enables subsequent efficient zero-shot multi-label affective learning. On the theoretical side, within the proposed Uncertainty Flow framework, the feasibility of applying weakly informative priors, e.g., uniform and Cauchy priors, is fully explored in this research. More importantly, based on the derived weight uncertainty, three sets of prediction-related uncertainty indexes, i.e., soft-max uncertainty, pure uncertainty and uncertainty plus, are proposed to produce reliable and accurate multi-label predictions. Validated on our manually annotated evaluation dataset, the multi-label annotated FER2013, our proposed Uncertainty Flow in multi-label facial expression analysis exhibited superiority to conventional multi-label learning algorithms and multi-label compatible neural networks. The success of our

  16. The Fermilab Advanced Computer Program multi-array processor system (ACPMAPS): A site oriented supercomputer for theoretical physics

    International Nuclear Information System (INIS)

    Nash, T.; Areti, H.; Atac, R.

    1988-08-01

    The ACP Multi-Array Processor System (ACPMAPS) is a highly cost effective, local memory parallel computer designed for floating point intensive grid based problems. The processing nodes of the system are single board array processors based on the FORTRAN and C programmable Weitek XL chip set. The nodes are connected by a network of very high bandwidth 16 port crossbar switches. The architecture is designed to achieve the highest possible cost effectiveness while maintaining a high level of programmability. The primary application of the machine at Fermilab will be lattice gauge theory. The hardware is supported by a transparent site oriented software system called CANOPY which shields theorist users from the underlying node structure. 4 refs., 2 figs

  17. DiFX: A software correlator for very long baseline interferometry using multi-processor computing environments

    OpenAIRE

    Deller, A. T.; Tingay, S. J.; Bailes, M.; West, C.

    2007-01-01

    We describe the development of an FX style correlator for Very Long Baseline Interferometry (VLBI), implemented in software and intended to run in multi-processor computing environments, such as large clusters of commodity machines (Beowulf clusters) or computers specifically designed for high performance computing, such as multi-processor shared-memory machines. We outline the scientific and practical benefits for VLBI correlation, these chiefly being due to the inherent flexibility of softw...

  18. Sophistication of computational science and fundamental physics simulations

    International Nuclear Information System (INIS)

    Ishiguro, Seiji; Ito, Atsushi; Usami, Shunsuke; Ohtani, Hiroaki; Sakagami, Hitoshi; Toida, Mieko; Hasegawa, Hiroki; Horiuchi, Ritoku; Miura, Hideaki

    2016-01-01

    The numerical experimental reactor research project is composed of the following studies: (1) nuclear fusion simulation research with a focus on specific physical phenomena of specific equipment, (2) research on advanced simulation methods to increase predictability or expand the application range of simulation, (3) visualization as the foundation of simulation research, (4) research on advanced computational science such as parallel computing technology, and (5) research aiming at the elucidation of fundamental physical phenomena not limited to specific devices. Specifically, a wide range of research topics with medium- to long-term perspectives is being developed: (1) virtual reality visualization, (2) upgrading of computational science such as multilayer simulation methods, (3) kinetic behavior of plasma blobs, (4) extended MHD theory and simulation, (5) basic plasma processes such as particle acceleration due to wave-particle interaction, and (6) research related to laser plasma fusion. This paper reviews the following items: (1) simultaneous visualization in virtual reality space, (2) multilayer simulation of collisionless magnetic reconnection, (3) simulation of microscopic dynamics of plasma coherent structures, (4) Hall MHD simulation of LHD, (5) numerical analysis for the extension of MHD equilibrium and stability theory, (6) extended MHD simulation of 2D RT instability, (7) simulation of laser plasma, (8) simulation of shock waves and particle acceleration, and (9) a study on the simulation of homogeneous isotropic MHD turbulent flow. (A.O.)

  19. Computing in plasma physics

    International Nuclear Information System (INIS)

    Nuehrenberg, J.

    1986-01-01

    These proceedings contain the articles presented at the named conference. These concern numerical methods for astrophysical plasmas, the numerical simulation of reversed-field pinch dynamics, methods for numerical simulation of ideal MHD stability of axisymmetric plasmas, calculations of the resistive internal m=1 mode in tokamaks, parallel computing and multitasking, particle simulation methods in plasma physics, 2-D Lagrangian studies of symmetry and stability of laser fusion targets, computing of rf heating and current drive in tokamaks, three-dimensional free boundary calculations using a spectral Green's function method, as well as the calculation of three-dimensional MHD equilibria with islands and stochastic regions. See hints under the relevant topics. (HSI)

  20. UWALK: the development of a multi-strategy, community-wide physical activity program

    OpenAIRE

    Jennings, Cally A.; Berry, Tanya R.; Carson, Valerie; Culos-Reed, S. Nicole; Duncan, Mitch J.; Loitz, Christina C.; McCormack, Gavin R.; McHugh, Tara-Leigh F.; Spence, John C.; Vallance, Jeff K.; Mummery, W. Kerry

    2016-01-01

    UWALK is a multi-strategy, multi-sector, theory-informed, community-wide approach using e and mHealth to promote physical activity in Alberta, Canada. The aim of UWALK is to promote physical activity, primarily via the accumulation of steps and flights of stairs, through a single over-arching brand. This paper describes the development of the UWALK program. A social ecological model and the social cognitive theory guided the development of key strategies, including the marketing and communica...

  1. The physics analysis environment of the ZEUS experiment

    International Nuclear Information System (INIS)

    Bauerdick, L.A.T.; Derugin, O.; Gilkinson, D.; Kasemann, M.; Manczak, O.

    1995-12-01

    The ZEUS Experiment has over the last three years developed its own model of the central computing environment for physics analysis. This model has been designed to provide ZEUS physicists with powerful and user friendly tools for data analysis as well as to be truly scalable and open. (orig.)

  2. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  3. BaBar computing - From collisions to physics results

    CERN Multimedia

    CERN. Geneva

    2004-01-01

    The BaBar experiment at SLAC studies B-physics at the Upsilon(4S) resonance using the high-luminosity e+e- collider PEP-II at the Stanford Linear Accelerator Center (SLAC). Taking, processing and analyzing the very large data samples is a significant computing challenge. This presentation will describe the entire BaBar computing chain and illustrate the solutions chosen as well as their evolution with the ever higher luminosity being delivered by PEP-II. This will include data acquisition and software triggering in a high availability, low-deadtime online environment, a prompt, automated calibration pass through the data at SLAC and then the full reconstruction of the data that takes place at INFN-Padova within 24 hours. Monte Carlo production takes place in a highly automated fashion in 25+ sites. The resulting real and simulated data is distributed and made available at SLAC and other computing centers. For analysis a much more sophisticated skimming pass has been introduced in the past year, ...

  4. Analysis of bone mineral density in adults of a domestic local area using multi-detector computed tomography: Focus on correlation with eating habits, lifestyle, physical features and social characteristics

    International Nuclear Information System (INIS)

    Lee, Tae Hui; Kim, Tae Hyung; So, Woon Young; Lim, Hei Gyeom; Lim, Cheong Hwan; Park, Myeong Hwan; Cheon, Myung Ki

    2016-01-01

    This study analyzed the correlation between BMD (bone mineral density) values calculated from MDCT (multi-detector computed tomography) and eating habits, lifestyle, physical features and social characteristics. From July 15, 2015 to June 6, 2016, we converted HU (Hounsfield unit) values measured by MDCT into T-scores for the BMD of 141 patients (male: 63, female: 78) at W medical center. We measured the 2nd, 3rd and 4th lumbar vertebrae and analyzed gender differences in BMD and their correlation with lifestyle, physical features and social characteristics. Statistical significance was tested using the independent-samples t-test and one-way ANOVA. BMD differed significantly by gender (p<0.05). BMD values decreased with increasing age; for men there was no statistically significant difference from the 20s to the 50s, with a significant difference only between the 20s and 60s (p<0.001). For women there was no statistically significant difference from the 20s to the 40s, but from the 50s BMD decreased rapidly, showing a significant difference (p<0.001). Women showed significant differences in bone mineral density related to menstruation and menopause, childbirth, alcohol, cereals and greasy food (p<0.05), but there were no significant differences in men. The correlation analysis between BMD values calculated from MDCT and lifestyle, physical features and social characteristics is expected to serve as a basis for estimating BMD status and managing osteoporosis

  5. Analysis of bone mineral density in adults of a domestic local area using multi-detector computed tomography: Focus on correlation with eating habits, lifestyle, physical features and social characteristics

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Tae Hui [Wonju Medical Center, Wonju (Korea, Republic of); Kim, Tae Hyung; So, Woon Young; Lim, Hei Gyeom [Kangwon National University Graduate School, Wonju (Korea, Republic of); Lim, Cheong Hwan [Hanseo University, Seosan (Korea, Republic of); Park, Myeong Hwan [Daegu Health College, Daegu (Korea, Republic of); Cheon, Myung Ki [Soongsil University, Seoul (Korea, Republic of)

    2016-12-15

    This study analyzed the correlation between BMD (bone mineral density) values calculated from MDCT (multi-detector computed tomography) and eating habits, lifestyle, physical features and social characteristics. From July 15, 2015 to June 6, 2016, we converted HU (Hounsfield unit) values measured by MDCT into T-scores for the BMD of 141 patients (male: 63, female: 78) at W medical center. We measured the 2nd, 3rd and 4th lumbar vertebrae and analyzed gender differences in BMD and their correlation with lifestyle, physical features and social characteristics. Statistical significance was tested using the independent-samples t-test and one-way ANOVA. BMD differed significantly by gender (p<0.05). BMD values decreased with increasing age; for men there was no statistically significant difference from the 20s to the 50s, with a significant difference only between the 20s and 60s (p<0.001). For women there was no statistically significant difference from the 20s to the 40s, but from the 50s BMD decreased rapidly, showing a significant difference (p<0.001). Women showed significant differences in bone mineral density related to menstruation and menopause, childbirth, alcohol, cereals and greasy food (p<0.05), but there were no significant differences in men. The correlation analysis between BMD values calculated from MDCT and lifestyle, physical features and social characteristics is expected to serve as a basis for estimating BMD status and managing osteoporosis.
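
    For readers unfamiliar with the HU-to-T-score step, the sketch below illustrates the general shape of such a conversion in Python. The linear phantom calibration coefficients and the young-adult reference mean/SD are invented illustration values, not those used in the study; only the T-score definition itself, (BMD minus young-adult mean) divided by the young-adult SD, is standard.

```python
import numpy as np

# Hypothetical phantom calibration: BMD (mg/cm^3) assumed linear in HU.
# The slope/intercept and reference values below are illustration numbers,
# not the values used in the study.
CAL_SLOPE, CAL_INTERCEPT = 0.8, 0.0          # BMD = slope*HU + intercept
YOUNG_MEAN, YOUNG_SD = 175.0, 25.0           # young-adult reference (mg/cm^3)

def hu_to_tscore(hu_values):
    """Convert mean lumbar HU values to T-scores via an assumed linear
    calibration and the standard T-score definition."""
    bmd = CAL_SLOPE * np.asarray(hu_values, dtype=float) + CAL_INTERCEPT
    return (bmd - YOUNG_MEAN) / YOUNG_SD

# Mean HU over L2-L4 for two hypothetical patients:
print(hu_to_tscore([220.0, 140.0]))          # -> [ 0.04 -2.52]
```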

  6. Multivariate Multi-Scale Permutation Entropy for Complexity Analysis of Alzheimer’s Disease EEG

    Directory of Open Access Journals (Sweden)

    Isabella Palamara

    2012-07-01

    Full Text Available An original multivariate multi-scale methodology for assessing the complexity of physiological signals is proposed. The technique is able to incorporate the simultaneous analysis of multi-channel data as a unique block within a multi-scale framework. The basic complexity measure is computed using Permutation Entropy, a methodology for time series processing based on ordinal analysis. Permutation Entropy is conceptually simple, structurally robust to noise and artifacts, and computationally very fast, which is relevant for designing portable diagnostics. Since time series derived from biological systems show structures on multiple spatial-temporal scales, the proposed technique can be useful for other types of biomedical signal analysis. In this work, the possibility of distinguishing the brain states of Alzheimer’s disease patients and Mild Cognitive Impairment subjects from those of normal healthy elderly subjects is tested on a real, although quite limited, experimental database.
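
    The core ordinal-analysis step the abstract relies on is easy to state in code. Below is a minimal single-channel sketch of multi-scale permutation entropy in Python; the multivariate pooling across channels used in the paper is omitted, and the parameter choices are illustrative.

```python
import math
import numpy as np

def permutation_entropy(x, order=3, delay=1):
    """Normalized Bandt-Pompe permutation entropy of a 1-D series."""
    x = np.asarray(x, dtype=float)
    counts = {}
    for i in range(len(x) - (order - 1) * delay):
        # The ordinal pattern is the rank order of the embedded window.
        pattern = tuple(np.argsort(x[i:i + order * delay:delay]))
        counts[pattern] = counts.get(pattern, 0) + 1
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return float(-(p * np.log(p)).sum() / math.log(math.factorial(order)))

def coarse_grain(x, scale):
    """Non-overlapping window averages, the usual multi-scale step."""
    n = len(x) // scale
    return np.asarray(x[:n * scale]).reshape(n, scale).mean(axis=1)

rng = np.random.default_rng(0)
sig = rng.normal(size=2000)   # white noise stays near maximal entropy
print([round(permutation_entropy(coarse_grain(sig, s)), 3) for s in (1, 2, 4)])
```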

  7. PREFACE: International Conference on Computing in High Energy and Nuclear Physics (CHEP'09)

    Science.gov (United States)

    Gruntorad, Jan; Lokajicek, Milos

    2010-11-01

    The 17th International Conference on Computing in High Energy and Nuclear Physics (CHEP) was held on 21-27 March 2009 in Prague, Czech Republic. CHEP is a major series of international conferences for physicists and computing professionals from the worldwide High Energy and Nuclear Physics community, Computer Science, and Information Technology. The CHEP conference provides an international forum to exchange information on computing experience and needs for the community, and to review recent, ongoing and future activities. Recent conferences were held in Victoria, Canada in 2007, Mumbai, India in 2006, Interlaken, Switzerland in 2004, San Diego, USA in 2003, Beijing, China in 2001, and Padua, Italy in 2000. The CHEP'09 conference had 600 attendees with a program that included plenary sessions of invited oral presentations, a number of parallel sessions comprising 200 oral and 300 poster presentations, and an industrial exhibition. We thank all the presenters for the excellent scientific content of their contributions to the conference. Conference tracks covered topics on Online Computing, Event Processing, Software Components, Tools and Databases, Hardware and Computing Fabrics, Grid Middleware and Networking Technologies, Distributed Processing and Analysis, and Collaborative Tools. The conference included excursions to Prague and other Czech cities and castles, and a banquet held at the Zofin palace in Prague. The next CHEP conference will be held in Taipei, Taiwan on 18-22 October 2010. We would like to thank the Ministry of Education, Youth and Sports of the Czech Republic and the EU ACEOLE project for the conference support, as well as the commercial sponsors, the International Advisory Committee, and the Local Organizing Committee members representing the five collaborating Czech institutions: Jan Gruntorad (co-chair), CESNET, z.s.p.o., Prague; Andrej Kugler, Nuclear Physics Institute AS CR v.v.i., Rez; Rupert Leitner, Charles University in Prague, Faculty of Mathematics and

  8. Approximate Analysis of Multi-State Weighted k-Out-of-n Systems Applied to Transmission Lines

    Directory of Open Access Journals (Sweden)

    Xiaogang Song

    2017-10-01

    Full Text Available Multi-state weighted k-out-of-n systems are widely applied in various scenarios, such as multi-line (power/oil) transmission systems where the capability of fault tolerance is desirable. However, the complex operating environment and the dynamic features of load demands influence the evaluation of system reliability. In this paper, a stochastic multiple-valued (SMV) approach is proposed to efficiently predict the reliability of two models of systems: those with non-repairable components and those with dynamically repairable components. The weights/performances and reliabilities of multi-state components (MSCs) are represented by stochastic sequences consisting of a fixed number of multi-state values with randomly permutated positions. Using stochastic sequences with L multiple values, the SMV approach requires a computational complexity linear in the parameters n and L to compute the reliability of different multi-state k-out-of-n systems at reasonable accuracy, compared to the complexities of universal generating functions (UGF) and fuzzy universal generating functions (FUGF), which increase exponentially with the value of n. The analysis of two benchmarks shows that the proposed SMV approach is more efficient than analysis using UGF or FUGF.
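
    A minimal sketch of the stochastic-sequence idea, under simplifying assumptions: each component's state is encoded as a sequence of L random multi-state performance samples, and the system reliability is estimated as the fraction of sequence positions where the summed performance meets the demand k. The component state distribution and demand below are invented for illustration, and independent sampling stands in for the paper's permutated sequences.

```python
import numpy as np

rng = np.random.default_rng(42)

def smv_reliability(components, k, L=10_000):
    """Estimate P(total performance >= k) for a multi-state weighted
    k-out-of-n system. Each component is (weights, probabilities); its
    state is encoded as a stochastic sequence of L multi-state samples,
    so the cost grows linearly in n and L."""
    total = np.zeros(L)
    for weights, probs in components:
        total += rng.choice(weights, size=L, p=probs)
    return float(np.mean(total >= k))

# Three-state transmission lines carrying 0, 5 or 10 units each:
line = ([0.0, 5.0, 10.0], [0.1, 0.3, 0.6])
print(smv_reliability([line] * 4, k=20))   # demand of 20 units from 4 lines
```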

  9. Large Scale Computing and Storage Requirements for High Energy Physics

    International Nuclear Information System (INIS)

    Gerber, Richard A.; Wasserman, Harvey

    2010-01-01

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years. The report includes

  10. MCPLOTS: a particle physics resource based on volunteer computing

    International Nuclear Information System (INIS)

    Karneyeu, A.; Mijovic, L.; Prestel, S.; Skands, P.Z.

    2014-01-01

    The mcplots.cern.ch web site (MCPLOTS) provides a simple online repository of plots made with high-energy-physics event generators, comparing them to a wide variety of experimental data. The repository is based on the HEPDATA online database of experimental results and on the RIVET Monte Carlo analysis tool. The repository is continually updated and relies on computing power donated by volunteers, via the LHC@home 2.0 platform. (orig.)

  11. MCPLOTS. A particle physics resource based on volunteer computing

    Energy Technology Data Exchange (ETDEWEB)

    Karneyeu, A. [Joint Inst. for Nuclear Research, Moscow (Russian Federation); Mijovic, L. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Irfu/SPP, CEA-Saclay, Gif-sur-Yvette (France); Prestel, S. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Lund Univ. (Sweden). Dept. of Astronomy and Theoretical Physics; Skands, P.Z. [European Organization for Nuclear Research (CERN), Geneva (Switzerland)

    2013-07-15

    The mcplots.cern.ch web site (MCPLOTS) provides a simple online repository of plots made with high-energy-physics event generators, comparing them to a wide variety of experimental data. The repository is based on the HEPDATA online database of experimental results and on the RIVET Monte Carlo analysis tool. The repository is continually updated and relies on computing power donated by volunteers, via the LHC@home 2.0 platform.

  12. MCPLOTS. A particle physics resource based on volunteer computing

    International Nuclear Information System (INIS)

    Karneyeu, A.; Mijovic, L.; Prestel, S.

    2013-07-01

    The mcplots.cern.ch web site (MCPLOTS) provides a simple online repository of plots made with high-energy-physics event generators, comparing them to a wide variety of experimental data. The repository is based on the HEPDATA online database of experimental results and on the RIVET Monte Carlo analysis tool. The repository is continually updated and relies on computing power donated by volunteers, via the LHC@home 2.0 platform.

  13. Integrating numerical computation into the undergraduate education physics curriculum using spreadsheet excel

    Science.gov (United States)

    Fauzi, Ahmad

    2017-11-01

    Numerical computation has many pedagogical advantages: it develops analytical and problem-solving skills, helps students learn through visualization, and enhances physics education. Unfortunately, numerical computation is not taught to undergraduate physics education students in Indonesia. Incorporating numerical computation into the undergraduate physics education curriculum presents many challenges; the main ones are a dense curriculum that makes it difficult to add a new numerical computation course, and the fact that most students have no programming experience. In this research, we used a case study to review how to integrate numerical computation into the undergraduate physics education curriculum. The participants were 54 fourth-semester students of the physics education department. We concluded that numerical computation can be integrated into the undergraduate physics education curriculum using Excel spreadsheets combined with an existing course. The results of this research complement studies on how to integrate numerical computation into physics learning using Excel spreadsheets.
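
    As an illustration of the kind of spreadsheet-friendly numerical computation the study refers to, here is a sketch of Euler integration of free fall with linear drag in Python; each loop pass corresponds to one spreadsheet row, so the same scheme can be built from three columns (t, v, y) in Excel. The physical parameters are arbitrary.

```python
# Spreadsheet-style Euler integration of free fall with linear drag.
g, b, m = 9.81, 0.5, 1.0     # gravity, drag coefficient, mass (arbitrary)
dt, t, v, y = 0.01, 0.0, 0.0, 100.0

while y > 0.0:
    a = -g - (b / m) * v     # acceleration with linear drag
    v += a * dt              # v_{i+1} = v_i + a*dt (one spreadsheet row)
    y += v * dt              # y_{i+1} = y_i + v*dt
    t += dt

print(f"ground reached at t = {t:.2f} s, v = {v:.2f} m/s")
```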

  14. MAINS: MULTI-AGENT INTELLIGENT SERVICE ARCHITECTURE FOR CLOUD COMPUTING

    Directory of Open Access Journals (Sweden)

    T. Joshva Devadas

    2014-04-01

    Full Text Available Computing has been transformed into a model of commoditized services, modeled on utility services such as water and electricity. The Internet has been stunningly successful over the course of the past three decades in supporting a multitude of distributed applications and a wide variety of network technologies. However, its popularity has become the biggest impediment to its further growth with handheld devices such as mobiles and laptops. Agents are intelligent software systems that work on behalf of others. Agents are incorporated in many innovative applications in order to improve the performance of the system, using their possessed knowledge to react with the system. Agents are introduced into cloud computing to minimize the response time when similar requests are raised from end users around the globe. In this paper, we introduce a Multi-Agent Intelligent System (MAINS) prior to the cloud service models and test it using a sample dataset. Performance of the MAINS layer was analyzed in three aspects, and the outcome of the analysis proves that the MAINS layer provides a flexible model to create cloud applications and deploy them in a variety of applications.

  15. Computational physics simulation of classical and quantum systems

    CERN Document Server

    Scherer, Philipp O J

    2013-01-01

    This textbook presents basic and advanced computational physics in a very didactic style. It contains clear, well-presented mathematical descriptions of many of the most important algorithms and techniques used in computational physics. The first part of the book discusses the basic numerical methods; a large number of exercises and computer experiments allows the reader to study the properties of these methods. The second part concentrates on simulation of classical and quantum systems. It uses a rather general concept for the equation of motion which can be applied to ordinary and partial differential equations. Several classes of integration methods are discussed, including not only the standard Euler and Runge-Kutta methods but also multistep methods and the class of Verlet methods, which is introduced by studying the motion in Liouville space. Besides the classical methods, inverse interpolation is discussed, together with the p...

  16. Quantum algorithms for computational nuclear physics

    Directory of Open Access Journals (Sweden)

    Višňák Jakub

    2015-01-01

    Full Text Available While quantum algorithms have been studied as an efficient tool for stationary-state energy determination in the case of molecular quantum systems, no similar study for analogous problems in computational nuclear physics (computation of the energy levels of nuclei from empirical nucleon-nucleon or quark-quark potentials) has been realized yet. Although the difference between the above-mentioned studies might seem negligible, it will be examined. First steps towards a particular simulation (on a classical computer) of the Iterative Phase Estimation Algorithm for deuterium and tritium nuclei energy level computation will be carried out, with the aim of proving the feasibility of the algorithm (and its extensibility to heavier nuclei) for possible practical realization on a real quantum computer.
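
    To make the Iterative Phase Estimation Algorithm concrete, the following is a minimal noiseless classical simulation in Python, assuming the unitary's eigenphase has an exact m-bit binary expansion; it extracts the phase one bit at a time, least significant first, with the usual feedback rotation. This is a generic IPEA sketch, not the nuclear-potential application of the paper.

```python
import math

def ipea(phi, m):
    """Classically simulate IPEA for an eigenvalue exp(2*pi*i*phi),
    assuming phi = 0.x1 x2 ... xm exactly in binary."""
    f = 0.0                          # already-extracted fraction 0.x_{k+1}..x_m
    bits = []
    for k in range(m, 0, -1):
        omega = -math.pi * f         # feedback rotation on the control qubit
        # P(measure 1) after H, controlled-U^(2^(k-1)), Rz(omega), H:
        p1 = 0.5 * (1 - math.cos(2 * math.pi * 2 ** (k - 1) * phi + omega))
        bit = int(p1 > 0.5)          # noiseless measurement outcome
        bits.insert(0, bit)
        f = (bit + f) / 2.0          # fold the new bit into the fraction
    return bits, f                   # f reconstructs phi

print(ipea(0.625, 3))                # eigenphase 0.101_2 -> ([1, 0, 1], 0.625)
```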

  17. Data processing of X-ray fluorescence analysis using an electronic computer

    International Nuclear Information System (INIS)

    Yakubovich, A.L.; Przhiyalovskij, S.M.; Tsameryan, G.N.; Golubnichij, G.V.; Nikitin, S.A.

    1979-01-01

    Considered are problems of data processing of multi-element (17-element) X-ray fluorescence analysis of tungsten and molybdenum ores. The analysis was carried out using a silicon-lithium spectrometer with an energy resolution of about 300 eV and a 1024-channel analyzer. The characteristic radiation of the elements was excited with two 109Cd radioisotope sources with a total activity of 10 mCi. The measurement period was 400 s. The data obtained were processed with a computer using the "Proba-1" and "Proba-2" programs. Data processing algorithms and computer calculation results are presented

  18. Using multi-criteria analysis of simulation models to understand complex biological systems

    Science.gov (United States)

    Maureen C. Kennedy; E. David. Ford

    2011-01-01

    Scientists frequently use computer-simulation models to help solve complex biological problems. Typically, such models are highly integrated, they produce multiple outputs, and standard methods of model analysis are ill suited for evaluating them. We show how multi-criteria optimization with Pareto optimality allows for model outputs to be compared to multiple system...
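
    A minimal sketch of the Pareto-optimality comparison the authors describe: keep every parameterization whose vector of assessment criteria is not dominated by another. The two-criterion data below are invented for illustration.

```python
import numpy as np

def pareto_front(points):
    """Return the non-dominated rows of `points`, where every column is an
    objective to minimize (e.g. error against one observed system output)."""
    pts = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(pts):
        # p is dominated if some other point is <= everywhere and < somewhere.
        dominated = np.any(np.all(pts <= p, axis=1) & np.any(pts < p, axis=1))
        if not dominated:
            keep.append(i)
    return pts[keep]

# Each row: one model parameterization scored against two output criteria.
print(pareto_front([[0.2, 0.9], [0.5, 0.5], [0.3, 0.8], [0.6, 0.6]]))
```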

  19. Fermilab ACP multi-microprocessor project

    International Nuclear Information System (INIS)

    Gaines, I.; Areti, H.; Biel, J.; Bracker, S.; Case, G.; Fischler, M.; Husby, D.; Nash, T.

    1984-08-01

    We report on the status of the Fermilab Advanced Computer Program's project to provide more cost-effective computing engines for the high energy physics community. The project will exploit the cheap, but powerful, commercial microprocessors now available by constructing modular multi-microprocessor systems. A working test bed system as well as plans for the next stages of the project are described

  20. Impact of multi-detector row computed tomography on the tactics of cardiovascular surgery. From qualitative evaluation to quantitative assessment

    International Nuclear Information System (INIS)

    Imagawa, Hiroshi; Kawachi, Kanji; Takano, Shinji

    2005-01-01

    We assessed the role of multi-detector row computed tomography in cardiovascular surgery. The efficacy of multi-detector row computed tomography was assessed concerning the graft patency of coronary artery bypass, arterial atheromatous degeneration, small vessel imaging, and left ventricular volume measurement. Images were reconstructed using both the volume-rendering and the maximum-intensity-profile methods. Arterial atherosclerotic degeneration was assessed by aortic wall volume and aortic calcification volume. In the assessment of bypass graft patency, multidetector row computed tomography showed a 98% correct positive ratio with sensitivity and specificity of 98% and 100%, respectively. Atheromatous degeneration showed matching results in more than 70% of cases compared with intraoperative findings. More than 92% of arterial branches with diameters of 3 mm or greater were detected by preoperative multi-detector row computed tomography images, though only 6% of branches with diameters of 2 mm or less could be visualized. There was a positive linear correlation between left ventricular volumes determined by multi-detector row computed tomography and those calculated from cine angiography. Multi-detector row computed tomography clearly visualized coronary bypass grafts and aortic arterial branches, providing detailed vascular images. Atheromatous degeneration assessed by multi-detector row computed tomography was equivalent with intraoperative findings in more than 70% of cases. Left ventricular volumes measured by multi-detector row computed tomography correlated closely with those determined by cine-angiography. Multidetector row computed tomography is an efficient and promising modality in cardiovascular surgery. (author)

  1. MRPack: Multi-Algorithm Execution Using Compute-Intensive Approach in MapReduce

    Science.gov (United States)

    2015-01-01

    Large quantities of data have been generated from multiple sources at exponential rates in the last few years. These data are generated at high velocity as real-time and streaming data in a variety of formats. These characteristics give rise to challenges in their modeling, computation, and processing. Hadoop MapReduce (MR) is a well-known data-intensive distributed processing framework that uses the distributed file system (DFS) for Big Data. Current implementations of MR only support execution of a single algorithm in the entire Hadoop cluster. In this paper, we propose MapReducePack (MRPack), a variation of MR that supports execution of a set of related algorithms in a single MR job. We exploit the computational capability of a cluster by increasing the compute-intensiveness of MapReduce while maintaining its data-intensive approach. It uses the available computing resources by dynamically managing the task assignment and intermediate data. Intermediate data from multiple algorithms are managed using multi-key and skew mitigation strategies. The performance study of the proposed system shows that it is time, I/O, and memory efficient compared to the default MapReduce. The proposed approach reduces the execution time by 200% with an approximate 50% decrease in I/O cost. Complexity and qualitative results analysis shows significant performance improvement. PMID:26305223
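
    The multi-key idea can be sketched without Hadoop: tag each intermediate pair with an algorithm identifier so several related algorithms share a single pass over the input. The pure-Python simulation below is illustrative only; `multi_mapper` and the toy algorithms are hypothetical stand-ins, not MRPack's actual API.

```python
from collections import defaultdict

def multi_mapper(record, algorithms):
    """Run every algorithm's mapper on one record, tagging each emitted
    pair with the algorithm id to form a composite 'multi-key'."""
    for algo_id, mapper in algorithms.items():
        for key, value in mapper(record):
            yield (algo_id, key), value

def reduce_all(pairs):
    """Group by (algo_id, key) and sum, as a default MR reducer would."""
    groups = defaultdict(list)
    for multi_key, value in pairs:
        groups[multi_key].append(value)
    return {mk: sum(vs) for mk, vs in groups.items()}

algos = {
    "wordcount": lambda rec: [(w, 1) for w in rec.split()],
    "charcount": lambda rec: [("chars", len(rec))],
}
pairs = [p for rec in ["a b a", "b c"] for p in multi_mapper(rec, algos)]
print(reduce_all(pairs))   # both algorithms resolved in one pass
```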

  2. Computer programs in accelerator physics

    International Nuclear Information System (INIS)

    Keil, E.

    1984-01-01

    Three areas of accelerator physics are discussed in which computer programs have been applied with much success: i) single-particle beam dynamics in circular machines, i.e. the design and matching of machine lattices; ii) computations of electromagnetic fields in RF cavities and similar objects, useful for the design of RF cavities and for the calculation of wake fields; iii) simulation of betatron and synchrotron oscillations in a machine with non-linear elements, e.g. sextupoles, and of bunch lengthening due to longitudinal wake fields. (orig.)

  3. ALE3D: An Arbitrary Lagrangian-Eulerian Multi-Physics Code

    Energy Technology Data Exchange (ETDEWEB)

    Noble, Charles R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Anderson, Andrew T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Barton, Nathan R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Bramwell, Jamie A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Capps, Arlie [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Chang, Michael H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Chou, Jin J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dawson, David M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Diana, Emily R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Dunn, Timothy A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Faux, Douglas R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Fisher, Aaron C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Greene, Patrick T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Heinz, Ines [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Kanarska, Yuliya [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Khairallah, Saad A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Liu, Benjamin T. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Margraf, Jon D. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nichols, Albert L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Nourgaliev, Robert N. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Puso, Michael A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Reus, James F. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Robinson, Peter B. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Shestakov, Alek I. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Solberg, Jerome M. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Taller, Daniel [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Tsuji, Paul H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); White, Christopher A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); White, Jeremy L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-05-23

    ALE3D is a multi-physics numerical simulation software tool utilizing arbitrary-Lagrangian- Eulerian (ALE) techniques. The code is written to address both two-dimensional (2D plane and axisymmetric) and three-dimensional (3D) physics and engineering problems using a hybrid finite element and finite volume formulation to model fluid and elastic-plastic response of materials on an unstructured grid. As shown in Figure 1, ALE3D is a single code that integrates many physical phenomena.

  4. A Distributed Multi-dimensional SOLAP Model of Remote Sensing Data and Its Application in Drought Analysis

    Directory of Open Access Journals (Sweden)

    LI Jiyuan

    2014-06-01

    Full Text Available SOLAP (Spatial On-Line Analytical Processing) has recently been applied to multi-dimensional analysis of remote sensing data. However, its computational performance faces a considerable challenge from large-scale datasets. A geo-raster cube model extended by Map-Reduce is proposed, which applies Map-Reduce (a data-intensive computing paradigm) to the OLAP field. In this model, the existing methods are modified to suit a distributed environment based on multi-level raster tiles. Multi-dimensional map algebra is then introduced to decompose the SOLAP computation into multiple distributed parallel map algebra functions on tiles with the support of Map-Reduce. Drought monitoring by remote sensing data is employed as a case study to illustrate the model construction and application. A prototype is also implemented, and performance testing shows the efficiency and scalability of this model.
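
    A toy Python sketch of the tile decomposition: a local map-algebra function runs independently on each raster tile (the map phase), and per-region partial sums are combined afterwards (the reduce phase). The drought index, band names and tile layout are invented placeholders, not the paper's model.

```python
import numpy as np
from collections import defaultdict

def map_tile(tile_id, ndvi, lst):
    """Local map-algebra on one tile: a toy vegetation/temperature index,
    emitting a partial (sum, count) keyed by the roll-up region."""
    index = ndvi / (lst + 1e-6)
    region = tile_id[0]                  # roll tiles up along one dimension
    return region, (index.sum(), index.size)

def reduce_region(partials):
    """Combine partial sums into a regional mean index."""
    sums = defaultdict(lambda: [0.0, 0])
    for region, (s, n) in partials:
        sums[region][0] += s
        sums[region][1] += n
    return {r: s / n for r, (s, n) in sums.items()}

tiles = {(r, c): (np.random.rand(64, 64), np.random.rand(64, 64) + 1)
         for r in range(2) for c in range(3)}
partials = [map_tile(tid, ndvi, lst) for tid, (ndvi, lst) in tiles.items()]
print(reduce_region(partials))
```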

  5. Computer Algebra Recipes for Mathematical Physics

    CERN Document Server

    Enns, Richard H

    2005-01-01

    Over two hundred novel and innovative computer algebra worksheets or "recipes" will enable readers in engineering, physics, and mathematics to easily and rapidly solve and explore most problems they encounter in their mathematical physics studies. While the aim of this text is to illustrate applications, a brief synopsis of the fundamentals for each topic is presented, the topics being organized to correlate with those found in traditional mathematical physics texts. The recipes are presented in the form of stories and anecdotes, a pedagogical approach that makes a mathematically challenging subject easier and more fun to learn. Key features: * Uses the MAPLE computer algebra system to allow the reader to easily and quickly change the mathematical models and the parameters and then generate new answers * No prior knowledge of MAPLE is assumed; the relevant MAPLE commands are introduced on a need-to-know basis * All MAPLE commands are indexed for easy reference * A classroom-tested story/anecdote format is use...

  6. A high speed, selective multi-ADC to computer data transfer interface, for nuclear physics experiments

    International Nuclear Information System (INIS)

    Arctaedius, T.; Ekstroem, R.E.

    1986-08-01

    A link connecting up to fifteen analog-to-digital converters (ADCs) with a computer, through a Direct Memory Access interface, is described. The interface decides which of the connected ADCs participate in an event, and transfers the output data from these to the computer, accompanied by a 2-byte word identifying the participating ADCs. This data format can be recorded on tape without further transformations, and is easy to unfold in the off-line analysis. Data transfer is accomplished in less than a few microseconds, which is made possible by the use of high-speed TTL circuits. (authors)
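
    The 2-byte identification word suggests a simple bitmask decoding on the analysis side. The Python sketch below assumes one flag bit per ADC and payload values following in ascending ADC order, which is a plausible but undocumented layout.

```python
def decode_event(word: int, payload: list[int]) -> dict[int, int]:
    """Map ADC numbers (bits 0-14 of `word`) to their readings, which are
    assumed to follow in ascending ADC order."""
    active = [adc for adc in range(15) if word & (1 << adc)]
    assert len(active) == len(payload), "payload length must match flag bits"
    return dict(zip(active, payload))

# ADCs 0, 3 and 7 fired in this hypothetical event:
print(decode_event(0b0000000010001001, [512, 80, 1023]))
# -> {0: 512, 3: 80, 7: 1023}
```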

  7. Climateprediction.com: Public Involvement, Multi-Million Member Ensembles and Systematic Uncertainty Analysis

    Science.gov (United States)

    Stainforth, D. A.; Allen, M.; Kettleborough, J.; Collins, M.; Heaps, A.; Stott, P.; Wehner, M.

    2001-12-01

    The climateprediction.com project is preparing to carry out the first systematic uncertainty analysis of climate forecasts using large ensembles of GCM climate simulations. This will be done by involving schools, businesses and members of the public, and utilizing the novel technology of distributed computing. Each participant will be asked to run one member of the ensemble on their PC. The model used will initially be the UK Met Office's Unified Model (UM). It will be run under Windows and software will be provided to enable those involved to view their model output as it develops. The project will use this method to carry out large perturbed physics GCM ensembles and thereby analyse the uncertainty in the forecasts from such models. Each participant/ensemble member will therefore have a version of the UM in which certain aspects of the model physics have been perturbed from their default values. Of course the non-linear nature of the system means that it will be necessary to look not just at perturbations to individual parameters in specific schemes, such as the cloud parameterization, but also to the many combinations of perturbations. This rapidly leads to the need for very large, perhaps multi-million member ensembles, which could only be undertaken using the distributed computing methodology. The status of the project will be presented and the Windows client will be demonstrated. In addition, initial results will be presented from beta test runs using a demo release for Linux PCs and Alpha workstations. Although small by comparison to the whole project, these pilot results constitute a 20-50 member perturbed physics climate ensemble with results indicating how climate sensitivity can be substantially affected by individual parameter values in the cloud scheme.
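
    Generating the members of such a perturbed-physics ensemble is essentially a Cartesian product over parameter perturbations, as the Python sketch below shows; the parameter names and values are invented stand-ins for the UM cloud-scheme parameters.

```python
import itertools

# Hypothetical perturbation ranges for a few cloud-scheme parameters; each
# combination defines one ensemble member to be farmed out to a volunteer PC.
perturbations = {
    "entrainment_coef": [0.5, 1.0, 3.0],
    "ice_fall_speed":   [0.5, 1.0, 2.0],
    "cloud_rhcrit":     [0.6, 0.7, 0.8, 0.9],
}

members = [dict(zip(perturbations, combo))
           for combo in itertools.product(*perturbations.values())]
print(len(members), "ensemble members, e.g.", members[0])
```

    Even this toy grid yields 36 members; sampling combinations rather than single-parameter perturbations is what drives the ensemble toward the very large sizes the project describes.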

  8. Instructional Materials Physics High School with Multi Representation Approach

    Directory of Open Access Journals (Sweden)

    Yuvita Widi Astuti

    2014-06-01

    Full Text Available High School Physics Teaching Materials with a Multi-Representation Approach. One effort to improve understanding of concepts and problem-solving skills in learning physics is to provide instructional materials that match the characteristics of the students and help students learn. The purposes of this study are: (1) to develop high school physics teaching materials, particularly on the topics of Rotational Dynamics and Equilibrium of Rigid Bodies, using a multiple-representations approach to improve the understanding of physics concepts, and (2) to test the effectiveness of the developed instructional materials. The research method is development research using the Dick & Carey model, tailored to the needs of the research. The research instrument was a feasibility questionnaire, yielding both quantitative and qualitative data. The results show that the developed teaching materials can be categorized as very feasible. Field trials showed that: (1) most of the students in the experimental class obtained test results above the minimum mastery criterion, and (2) the post-test results of the experimental class were greater than those of the control class, so the teaching materials are said to be effective, though not significantly so, in improving the understanding of physics concepts. Key Words: teaching materials, multi-representation, rotational dynamics

  9. An advanced computational algorithm for systems analysis of tokamak power plants

    International Nuclear Information System (INIS)

    Dragojlovic, Zoran; Rene Raffray, A.; Najmabadi, Farrokh; Kessel, Charles; Waganer, Lester; El-Guebaly, Laila; Bromberg, Leslie

    2010-01-01

    A new computational algorithm for tokamak power plant system analysis is being developed for the ARIES project. The objective of this algorithm is to explore the most influential parameters in the physical, technological and economic trade space related to the developmental transition from experimental facilities to viable commercial power plants. This endeavor is being pursued as a new approach to tokamak systems studies, which examines an expansive, multi-dimensional trade space as opposed to traditional sensitivity analyses about a baseline design point. The new ARIES systems code consists of adaptable modules which are built from a custom-made software toolbox using object-oriented programming. The physics module captures the current tokamak physics knowledge database including modeling of the most-current proposed burning plasma experiment design (FIRE). The engineering model accurately reflects the intent and design detail of the power core elements including accurate and adjustable 3D tokamak geometry and complete modeling of all the power core and ancillary systems. Existing physics and engineering models reflect both near-term as well as advanced technology solutions that have higher performance potential. To fully assess the impact of the range of physics and engineering implementations, the plant cost accounts have been revised to reflect a more functional cost structure, supported by an updated set of costing algorithms for the direct, indirect, and financial cost accounts. All of these features have been validated against the existing ARIES-AT baseline case. The present results demonstrate visualization techniques that provide an insight into trade space assessment of attractive steady-state tokamaks for commercial use.

  10. Multi-server blind quantum computation over collective-noise channels

    Science.gov (United States)

    Xiao, Min; Liu, Lin; Song, Xiuli

    2018-03-01

    Blind quantum computation (BQC) enables ordinary clients to securely outsource their computation task to costly quantum servers. Besides two essential properties, namely correctness and blindness, practical BQC protocols also should make clients as classical as possible and tolerate faults from nonideal quantum channel. In this paper, using logical Bell states as quantum resource, we propose multi-server BQC protocols over collective-dephasing noise channel and collective-rotation noise channel, respectively. The proposed protocols permit completely or almost classical client, meet the correctness and blindness requirements of BQC protocol, and are typically practical BQC protocols.

  11. Physical Realizations of Quantum Computing: Are the DiVincenzo Criteria Fulfilled in 2004?

    CERN Document Server

    Kanemitsu, Shigeru; Salomaa, Martti; Takagi, Shin

    2006-01-01

    The contributors of this volume are working at the forefront of various realizations of quantum computers. They survey the recent developments in each realization, in the context of the DiVincenzo criteria, including nuclear magnetic resonance, Josephson junctions, quantum dots, and trapped ions. There are also some theoretical contributions which have relevance in the physical realizations of a quantum computer. This book fills the gap between elementary introductions to the subject and highly specialized research papers to allow beginning graduate students to understand the cutting-edge of r

  12. A computational intelligent approach to multi-factor analysis of violent crime information system

    Science.gov (United States)

    Liu, Hongbo; Yang, Chao; Zhang, Meng; McLoone, Seán; Sun, Yeqing

    2017-02-01

    Various scientific studies have explored the causes of violent behaviour from different perspectives, with psychological tests, in particular, applied to the analysis of crime factors. The relationship between bi-factors has also been extensively studied including the link between age and crime. In reality, many factors interact to contribute to criminal behaviour and as such there is a need to have a greater level of insight into its complex nature. In this article we analyse violent crime information systems containing data on psychological, environmental and genetic factors. Our approach combines elements of rough set theory with fuzzy logic and particle swarm optimisation to yield an algorithm and methodology that can effectively extract multi-knowledge from information systems. The experimental results show that our approach outperforms alternative genetic algorithm and dynamic reduct-based techniques for reduct identification and has the added advantage of identifying multiple reducts and hence multi-knowledge (rules). Identified rules are consistent with classical statistical analysis of violent crime data and also reveal new insights into the interaction between several factors. As such, the results are helpful in improving our understanding of the factors contributing to violent crime and in highlighting the existence of hidden and intangible relationships between crime factors.

  13. A network-based multi-target computational estimation scheme for anticoagulant activities of compounds.

    Directory of Open Access Journals (Sweden)

    Qian Li

    Full Text Available BACKGROUND: Traditional virtual screening methods pay more attention to the predicted binding affinity between a drug molecule and a target related to a certain disease than to phenotypic data of the drug molecule against the disease system, and are therefore often less effective for discovering drugs to treat complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and systems biology. More effective methods of computationally estimating the whole efficacy of a compound in a complex disease system are needed, given the distinct weight of different targets in a biological process and the standpoint that partial inhibition of several targets can be more efficient than the complete inhibition of a single target. METHODOLOGY: We developed a novel approach that integrates the affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From network efficiency calculations for the human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological steps in the human clotting cascade system. Furthermore, the method combining network efficiency with molecular docking scores was applied to estimate the anticoagulant activities of a series of argatroban intermediates and eight natural products, respectively. The good correlation (r = 0.671) between the experimental data and the decrease of the network efficiency suggests that the approach could be a promising computational systems biology tool to aid identification of anticoagulant activities of compounds in drug discovery. CONCLUSIONS: This article proposes a network-based multi-target computational estimation
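
    A small sketch of the network-fragility step, using the standard global-efficiency measure (mean inverse shortest-path length): rank each node by the efficiency drop its removal causes. The graph below is a toy stand-in for the clotting cascade; weighting the drop with docking scores, as the paper does, would be an additional step on top of this.

```python
import networkx as nx

def fragility(G):
    """Rank nodes by the drop in global efficiency when each is removed,
    mirroring the idea of finding the most fragile enzymes in a cascade."""
    base = nx.global_efficiency(G)
    drops = {}
    for node in G.nodes:
        H = G.copy()
        H.remove_node(node)
        drops[node] = base - nx.global_efficiency(H)
    return sorted(drops.items(), key=lambda kv: -kv[1])

# Toy stand-in for a signaling cascade; not the human clotting network.
G = nx.path_graph(5)
G.add_edge(1, 3)
print(fragility(G))   # most fragile nodes first
```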

  14. A network-based multi-target computational estimation scheme for anticoagulant activities of compounds.

    Science.gov (United States)

    Li, Qian; Li, Xudong; Li, Canghai; Chen, Lirong; Song, Jun; Tang, Yalin; Xu, Xiaojie

    2011-03-22

    Traditional virtual screening methods pay more attention to the predicted binding affinity between a drug molecule and a target related to a certain disease than to phenotypic data of the drug molecule against the disease system, and are therefore often less effective for discovering drugs to treat complex diseases. Virtual screening against a complex disease by general network estimation has become feasible with the development of network biology and systems biology. More effective methods of computationally estimating the whole efficacy of a compound in a complex disease system are needed, given the distinct weight of different targets in a biological process and the standpoint that partial inhibition of several targets can be more efficient than the complete inhibition of a single target. We developed a novel approach that integrates the affinity predictions from multi-target docking studies with biological network efficiency analysis to estimate the anticoagulant activities of compounds. From network efficiency calculations for the human clotting cascade, factor Xa and thrombin were identified as the two most fragile enzymes, while the catalytic reaction mediated by complex IXa:VIIIa and the formation of the complex VIIIa:IXa were recognized as the two most fragile biological steps in the human clotting cascade system. Furthermore, the method combining network efficiency with molecular docking scores was applied to estimate the anticoagulant activities of a series of argatroban intermediates and eight natural products, respectively. The good correlation (r = 0.671) between the experimental data and the decrease of the network efficiency suggests that the approach could be a promising computational systems biology tool to aid identification of anticoagulant activities of compounds in drug discovery. This article proposes a network-based multi-target computational estimation method for anticoagulant activities of compounds by

  15. Computing Convex Coverage Sets for Faster Multi-Objective Coordination

    NARCIS (Netherlands)

    Roijers, D.M.; Whiteson, S.; Oliehoek, F.A.

    2015-01-01

    In this article, we propose new algorithms for multi-objective coordination graphs (MO-CoGs). Key to the efficiency of these algorithms is that they compute a convex coverage set (CCS) instead of a Pareto coverage set (PCS). Not only is a CCS a sufficient solution set for a large class of problems,
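
    The distinction between a CCS and a PCS can be made concrete with a weight-sampling sketch: a CCS needs only the value vectors that maximize some linear scalarization w·v. In the Python sketch below (two objectives, invented values), a Pareto-optimal but non-convex point is correctly excluded from the CCS.

```python
import numpy as np

def convex_coverage_set(values, n_weights=101):
    """Approximate a CCS for 2-objective value vectors: keep every vector
    that maximizes the scalarized value w.v for some sampled weight w."""
    V = np.asarray(values, dtype=float)
    keep = set()
    for w1 in np.linspace(0.0, 1.0, n_weights):
        w = np.array([w1, 1.0 - w1])
        keep.add(int(np.argmax(V @ w)))
    return V[sorted(keep)]

# [2.4, 2.4] is Pareto-optimal but never maximizes any linear scalarization,
# so it belongs to the PCS yet drops out of the (smaller) CCS.
print(convex_coverage_set([[5.0, 0.0], [0.0, 5.0], [2.4, 2.4], [1.0, 1.0]]))
```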

  16. PHYSICAL OBJECT-ORIENTED MODELING IN DEVELOPMENT OF INDIVIDUALIZED TEACHING AND ORGANIZATION OF MINI-RESEARCH IN MECHANICS COURSES

    Directory of Open Access Journals (Sweden)

    Alexander S. Chirtsov

    2017-03-01

    research by students on the physical properties of complex mechanical systems, the analysis of which is beyond the scope of the standard physics and mathematics curriculum. The heuristic capabilities of models created by the physical reality constructor were also demonstrated, as was the capability to investigate the dynamics of complex systems whose evolution is not evident a priori or is difficult or impossible to calculate. Practical Relevance. The developed computer program for automated development of interactive educational simulations addresses standing problems in supporting massive open individualized multi-level physics courses and provides an opportunity to develop creative forms of physics training with elements of research.

  17. Reactor physics simulations with coupled Monte Carlo calculation and computational fluid dynamics

    International Nuclear Information System (INIS)

    Seker, V.; Thomas, J. W.; Downar, T. J.

    2007-01-01

    The interest in high fidelity modeling of nuclear reactor cores has increased over the last few years and has become computationally more feasible because of the dramatic improvements in processor speed and the availability of low cost parallel platforms. In the research here, high fidelity, multi-physics analyses were performed by solving the neutron transport equation using Monte Carlo methods and by solving the thermal-hydraulics equations using computational fluid dynamics. A computational tool based on coupling the Monte Carlo code MCNP5 and the computational fluid dynamics (CFD) code STAR-CD was developed as an audit tool for lower order nuclear reactor calculations. This paper presents the methodology of the developed computer program 'McSTAR' along with the verification and validation efforts. McSTAR is written in the Perl programming language and couples MCNP5 and the commercial CFD code STAR-CD. MCNP uses a continuous energy cross section library produced by the NJOY code system from the raw ENDF/B data. A major part of the work was to develop and implement methods to update the cross section library with the temperature distribution calculated by STAR-CD for every region. Three different methods were investigated and two of them are implemented in McSTAR. The user subroutines in STAR-CD are modified to read the power density data and assign them to the appropriate variables in the program, and to write an output data file containing the temperature, density and indexing information to perform the mapping between MCNP and STAR-CD cells. The necessary input file manipulation, data file generation, normalization and multi-processor calculation settings are all done through the program flow in McSTAR. Initial testing of the code was performed using a single pin cell and a 3X3 PWR pin-cell problem. The preliminary results of the single pin-cell problem are compared with those obtained from a STAR-CD coupled calculation with the deterministic transport code De
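
    The coupling workflow described above amounts to a fixed-point (Picard) iteration between the two codes. The Python sketch below shows the loop's general shape; `run_mcnp5` and `run_starcd` are hypothetical stand-ins for McSTAR's wrappers around the real codes, and the convergence test is simplified to a maximum per-region temperature change.

```python
def couple(run_mcnp5, run_starcd, t0, tol=1.0, max_iter=20):
    """Iterate power <-> temperature until the per-region temperature field
    stops changing; cross sections are assumed to be regenerated from the
    temperatures inside run_mcnp5 on each pass."""
    temps = t0                              # initial per-region temperatures (K)
    for it in range(max_iter):
        power = run_mcnp5(temps)            # transport with T-updated cross sections
        new_temps = run_starcd(power)       # CFD thermal-hydraulics pass
        delta = max(abs(a - b) for a, b in zip(new_temps, temps))
        temps = new_temps
        if delta < tol:                     # converged within tol kelvin
            return temps, it + 1
    raise RuntimeError("coupling did not converge")

# Toy linear closures standing in for the real codes:
mc = lambda T: [100 + 0.05 * t for t in T]
cfd = lambda P: [300 + 2.0 * p for p in P]
print(couple(mc, cfd, [600.0, 620.0]))
```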

  18. Some Aspects of Mathematical and Physical Approaches for Topological Quantum Computation

    Directory of Open Access Journals (Sweden)

    V. Kantser

    2011-10-01

    Full Text Available A paradigm to build a quantum computer based on topological invariants is highlighted. The identities in the ensemble of knots, links and braids originally discovered in relation to topological quantum field theory are shown: how they define the Artin braid group -- the mathematical basis of topological quantum computation (TQC). Vector spaces of TQC correspond to associated strings of particle interactions, and TQC operates its calculations on braided strings of special physical quasiparticles -- anyons -- with non-Abelian statistics. The physical platform of TQC is to use the topological quantum numbers of such small groups of anyons as qubits and to perform operations on these qubits by exchanging the anyons, both within the groups that form the qubits and, for multi-qubit gates, between groups. By braiding two or more anyons, they acquire a topological phase or Berry phase similar to that found in the Aharonov-Bohm effect. Topological matter such as fractional quantum Hall systems and the newly discovered topological insulators open the way to form systems of anyons -- Majorana fermions -- with the unique property of encoding and processing quantum information in a naturally fault-tolerant way. In topological insulators, owing to their fundamental attribute of bound topological surface states, Majorana fermions are generated at heterocontacts with superconductors. One of the key operations of TQC -- the braiding of non-Abelian anyons -- is illustrated, showing how it can be implemented in one-dimensional topological insulator wire networks.
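
    For reference, the Artin braid group B_n mentioned above is generated by elementary crossings σ_1, …, σ_{n-1} subject to the standard relations:

```latex
\sigma_i\,\sigma_{i+1}\,\sigma_i = \sigma_{i+1}\,\sigma_i\,\sigma_{i+1}
    \quad (1 \le i \le n-2),
\qquad
\sigma_i\,\sigma_j = \sigma_j\,\sigma_i \quad (|i - j| \ge 2).
```

    Braiding non-Abelian anyons represents these generators as unitary matrices that do not commute in general, which is what makes the resulting qubit operations nontrivial.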

  19. HCIDL: Human-computer interface description language for multi-target, multimodal, plastic user interfaces

    Directory of Open Access Journals (Sweden)

    Lamia Gaouar

    2018-06-01

    Full Text Available From the human-computer interface perspective, the challenges to be faced are related to the consideration of new, multiple interactions and the diversity of devices. The large panel of interactions (touching, shaking, voice dictation, positioning …) and the diversification of interaction devices can be seen as a factor of flexibility, albeit one introducing incidental complexity. Our work is part of the field of user interface description languages. After an analysis of the scientific context of our work, this paper introduces HCIDL, a modelling language staged in a model-driven engineering approach. Among the properties related to human-computer interfaces, our proposition is intended for modelling multi-target, multimodal, plastic interaction interfaces using user interface description languages. By combining plasticity and multimodality, HCIDL improves the usability of user interfaces through adaptive behaviour, providing end-users with an interaction set adapted to the input/output of terminals and an optimum layout. Keywords: Model driven engineering, Human-computer interface, User interface description languages, Multimodal applications, Plastic user interfaces

  20. Settings for Physical Activity – Developing a Site-specific Physical Activity Behavior Model based on Multi-level Intervention Studies

    DEFF Research Database (Denmark)

    Troelsen, Jens; Klinker, Charlotte Demant; Breum, Lars

    Settings for Physical Activity – Developing a Site-specific Physical Activity Behavior Model based on Multi-level Intervention Studies Introduction: Ecological models of health behavior have potential as theoretical framework to comprehend the multiple levels of factors influencing physical...... to be taken into consideration. A theoretical implication of this finding is to develop a site-specific physical activity behavior model adding a layered structure to the ecological model representing the determinants related to the specific site. Support: This study was supported by TrygFonden, Realdania...... activity (PA). The potential is shown by the fact that there has been a dramatic increase in application of ecological models in research and practice. One proposed core principle is that an ecological model is most powerful if the model is behavior-specific. However, based on multi-level interventions...

  1. Novel instrument for characterizing comprehensive physical properties under multi-mechanical loads and multi-physical field coupling conditions

    Science.gov (United States)

    Liu, Changyi; Zhao, Hongwei; Ma, Zhichao; Qiao, Yuansen; Hong, Kun; Ren, Zhuang; Zhang, Jianhai; Pei, Yongmao; Ren, Luquan

    2018-02-01

    Functional materials, represented by ferromagnetics and ferroelectrics, are widely used in advanced sensing and precision actuation due to their special behavior under the coupled interactions of complex loads and external physical fields. However, conventional devices for material characterization can only provide limited types of loads and physical fields and cannot simulate the actual service conditions of materials. A multi-field coupling instrument for characterization has been designed and implemented to overcome this barrier and measure comprehensive physical properties under complex service conditions. The testing forms include tension, compression, bending, torsion, and fatigue in mechanical loads, as well as different external physical fields, including electric, magnetic, and thermal fields. In order to offer a variety of information to reveal mechanical damage or deformation forms, a series of microscale measurement methods are integrated with the instrument, including an indentation unit and an in situ microimaging module. Finally, several coupling experiments covering all the loading and measurement functions of the instrument have been implemented. The results illustrate the functions and characteristics of the instrument and reveal the variation in mechanical and electromagnetic properties of a piezoelectric transducer ceramic, a TbDyFe alloy, and a carbon fiber reinforced polymer under coupling conditions.

  2. [Geometry, analysis, and computation in mathematics and applied science]. Progress report

    Energy Technology Data Exchange (ETDEWEB)

    Hoffman, D.

    1994-02-01

    The principal investigators' work on a variety of pure and applied problems in Differential Geometry, Calculus of Variations and Mathematical Physics has been done in a computational laboratory and has been based on interactive scientific computer graphics and high-speed computation created by the principal investigators to study geometric interface problems in the physical sciences. We have developed software to simulate various physical phenomena from constrained plasma flow to the electron microscope imaging of the microstructure of compound materials, techniques for the visualization of geometric structures that have been used to make significant breakthroughs in the global theory of minimal surfaces, and graphics tools to study evolution processes, such as flow by mean curvature, while simultaneously developing the mathematical foundation of the subject. An increasingly important activity of the laboratory is to extend this environment in order to support and enhance scientific collaboration with researchers at other locations. Toward this end, the Center developed the GANGVideo distributed video software system and software methods for running lab-developed programs simultaneously on remote and local machines. Further, the Center operates a broadcast video network, running in parallel with the Center's data networks, over which researchers can access stored video materials or view ongoing computations. The graphical front-end to GANGVideo can be used to make "multi-media mail" from both "live" computing sessions and stored materials without video editing. Currently, videotape is used as the delivery medium, but GANGVideo is compatible with future "all-digital" distribution systems. Thus, as a byproduct of mathematical research, we are developing methods for scientific communication. But, most important, our research focuses on important scientific problems; the parallel development of computational and graphical tools is driven by scientific needs.

  3. Challenges in computational materials science: Multiple scales, multi-physics and evolving discontinuities

    NARCIS (Netherlands)

    Borst, de R.

    2008-01-01

    Novel experimental possibilities together with improvements in computer hardware, as well as new concepts in computational mathematics and mechanics, in particular multiscale methods, are now, in principle, making it possible to derive and compute phenomena and material parameters at a macroscopic scale.

  4. Multi-component controllers in reactor physics optimality analysis

    International Nuclear Information System (INIS)

    Aldemir, T.

    1978-01-01

    An algorithm is developed for the optimality analysis of thermal reactor assemblies with multi-component control vectors. The neutronics of the system under consideration is assumed to be described by the two-group diffusion equations and constraints are imposed upon the state and control variables. It is shown that if the problem is such that the differential and algebraic equations describing the system can be cast into a linear form via a change of variables, the optimal control components are piecewise constant functions and the global optimal controller can be determined by investigating the properties of the influence functions. Two specific problems are solved utilizing this approach. A thermal reactor consisting of fuel, burnable poison and moderator is found to yield maximal power when the assembly consists of two poison zones and the power density is constant throughout the assembly. It is shown that certain variational relations have to be considered to maintain the activeness of the system equations as differential constraints. The problem of determining the maximum initial breeding ratio for a thermal reactor is solved by treating the fertile and fissile material absorption densities as controllers. The optimal core configurations are found to consist of three fuel zones for a bare assembly and two fuel zones for a reflected assembly. The optimum fissile material density is determined to be inversely proportional to the thermal flux

  5. PREFACE: IUPAP C20 Conference on Computational Physics (CCP 2011)

    Science.gov (United States)

    Troparevsky, Claudia; Stocks, George Malcolm

    2012-12-01

    Increasingly, computational physics stands alongside experiment and theory as an integral part of the modern approach to solving the great scientific challenges of the day on all scales - from cosmology and astrophysics, through climate science, to materials physics, and the fundamental structure of matter. Computational physics touches aspects of science and technology with direct relevance to our everyday lives, such as communication technologies and securing a clean and efficient energy future. This volume of Journal of Physics: Conference Series contains the proceedings of the scientific contributions presented at the 23rd Conference on Computational Physics held in Gatlinburg, Tennessee, USA, in November 2011. The annual Conferences on Computational Physics (CCP) are dedicated to presenting an overview of the most recent developments and opportunities in computational physics across a broad range of topical areas and from around the world. The CCP series has been in existence for more than 20 years, serving as a lively forum for computational physicists. The topics covered by this conference were: Materials/Condensed Matter Theory and Nanoscience, Strongly Correlated Systems and Quantum Phase Transitions, Quantum Chemistry and Atomic Physics, Quantum Chromodynamics, Astrophysics, Plasma Physics, Nuclear and High Energy Physics, Complex Systems: Chaos and Statistical Physics, Macroscopic Transport and Mesoscopic Methods, Biological Physics and Soft Materials, Supercomputing and Computational Physics Teaching, Computational Physics and Sustainable Energy. We would like to take this opportunity to thank our sponsors: International Union of Pure and Applied Physics (IUPAP), IUPAP Commission on Computational Physics (C20), American Physical Society Division of Computational Physics (APS-DCOMP), Oak Ridge National Laboratory (ORNL), Center for Defect Physics (CDP), the University of Tennessee (UT)/ORNL Joint Institute for Computational Sciences (JICS) and Cray, Inc

  6. Mental Computation or Standard Algorithm? Children's Strategy Choices on Multi-Digit Subtractions

    Science.gov (United States)

    Torbeyns, Joke; Verschaffel, Lieven

    2016-01-01

    This study analyzed children's use of mental computation strategies and the standard algorithm on multi-digit subtractions. Fifty-eight Flemish 4th graders of varying mathematical achievement level were individually offered subtractions that either stimulated the use of mental computation strategies or the standard algorithm in one choice and two…

  7. Computational medical imaging and hemodynamics framework for functional analysis and assessment of cardiovascular structures.

    Science.gov (United States)

    Wong, Kelvin K L; Wang, Defeng; Ko, Jacky K L; Mazumdar, Jagannath; Le, Thu-Thao; Ghista, Dhanjoo

    2017-03-21

    Cardiac dysfunction constitutes a common cardiovascular health issue in society, and has been an investigation topic of strong focus by researchers in the medical imaging community. Diagnostic modalities based on echocardiography, magnetic resonance imaging, chest radiography and computed tomography are common techniques that provide cardiovascular structural information to diagnose heart defects. However, functional information of cardiovascular flow, which can in fact be used to support the diagnosis of many cardiovascular diseases with a myriad of hemodynamics performance indicators, remains unexplored to its full potential. Some of these indicators constitute important cardiac functional parameters affecting the cardiovascular abnormalities. With the advancement of computer technology that facilitates high-speed computational fluid dynamics, the realization of a support diagnostic platform of hemodynamics quantification and analysis can be achieved. This article reviews the state-of-the-art medical imaging and high-fidelity multi-physics computational analyses that together enable reconstruction of cardiovascular structures and hemodynamic flow patterns within them, such as the left ventricle (LV) and carotid bifurcations. The combined medical imaging and hemodynamic analysis enables us to study the mechanisms of cardiovascular disease-causing dysfunctions, such as how (1) cardiomyopathy causes left ventricular remodeling and loss of contractility leading to heart failure, and (2) modeling of LV construction and simulation of intra-LV hemodynamics can enable us to determine the optimum procedure of surgical ventriculation to restore its contractility and health. This combined medical imaging and hemodynamics framework can potentially extend medical knowledge of cardiovascular defects and associated hemodynamic behavior and their surgical restoration, by means of an integrated medical image diagnostics and hemodynamic performance analysis framework.

  8. Computer vision uncovers predictors of physical urban change.

    Science.gov (United States)

    Naik, Nikhil; Kominers, Scott Duke; Raskar, Ramesh; Glaeser, Edward L; Hidalgo, César A

    2017-07-18

    Which neighborhoods experience physical improvements? In this paper, we introduce a computer vision method to measure changes in the physical appearances of neighborhoods from time-series street-level imagery. We connect changes in the physical appearance of five US cities with economic and demographic data and find three factors that predict neighborhood improvement. First, neighborhoods that are densely populated by college-educated adults are more likely to experience physical improvements, an observation that is compatible with the economic literature linking human capital and local success. Second, neighborhoods with better initial appearances experience, on average, larger positive improvements, an observation that is consistent with "tipping" theories of urban change. Third, neighborhood improvement correlates positively with physical proximity to the central business district and to other physically attractive neighborhoods, an observation that is consistent with the "invasion" theories of urban sociology. Together, our results provide support for three classical theories of urban change and illustrate the value of using computer vision methods and street-level imagery to understand the physical dynamics of cities.

  9. Undergraduate students’ challenges with computational modelling in physics

    Directory of Open Access Journals (Sweden)

    Simen A. Sørby

    2012-12-01

    Full Text Available In later years, computational perspectives have become essential parts of several of the University of Oslo's natural science studies. In this paper we discuss some main findings from a qualitative study of the computational perspectives' impact on the students' work with their first course in physics – mechanics – and their learning and meaning making of its contents. Discussions of the students' learning of physics are based on sociocultural theory, which originates in Vygotsky and Bakhtin, and subsequent physics education research. Results imply that the greatest challenge for students when working with computational assignments is to combine knowledge from previously known, but separate contexts. Integrating knowledge of informatics, numerical and analytical mathematics and conceptual understanding of physics appears as a clear challenge for the students. We also observe a lack of awareness concerning the limitations of physical modelling. The students need help with identifying the appropriate knowledge system or "tool set" for the different tasks at hand; they need help to create a plan for their modelling and to become aware of its limits. In light of this, we propose that an instructive and dialogic text as a basis for the exercises, in which the emphasis is on specification, clarification and elaboration, would be of potentially great aid for students who are new to computational modelling.

  10. Computing for ongoing experiments on high energy physics in LPP, JINR

    International Nuclear Information System (INIS)

    Belosludtsev, D.A.; Zhil'tsov, V.E.; Zinchenko, A.I.; Kekelidze, V.D.; Madigozhin, D.T.; Potrebenikov, Yu.K.; Khabarov, S.V.; Shkarovskij, S.N.; Shchinov, B.G.

    2004-01-01

    The computer infrastructure built at the Laboratory of Particle Physics, JINR, to support active participation of JINR experts in ongoing experiments on particle and nuclear physics is presented. The principles of design and construction of the personal computer farm are given, and the computer and informational services used for effective application of distributed computer resources are described.

  11. Computational intelligence for decision support in cyber-physical systems

    CERN Document Server

    Ali, A; Riaz, Zahid

    2014-01-01

    This book is dedicated to applied computational intelligence and soft computing techniques with special reference to decision support in Cyber Physical Systems (CPS), where the physical as well as the communication segments of the networked entities interact with each other. The joint dynamics of such systems result in a complex combination of computers, software, networks and physical processes, all combined to establish a process flow at the system level. This volume provides the audience with an in-depth vision of how to ensure dependability, safety, security and efficiency in real time by making use of computational intelligence in various CPS applications ranging from the nano-world to large-scale wide-area systems of systems. Key application areas include healthcare, transportation, energy, process control and robotics, where intelligent decision support has key significance in establishing dynamic, ever-changing and high-confidence future technologies. A recommended text for graduate students and researchers...

  12. Advances in Reactor Physics, Mathematics and Computation. Volume 1

    Energy Technology Data Exchange (ETDEWEB)

    1987-01-01

    These proceedings of the international topical meeting on advances in reactor physics, mathematics and computation, volume one, are divided into 6 sessions bearing on: - session 1: Advances in computational methods including utilization of parallel processing and vectorization (7 conferences) - session 2: Fast, epithermal, reactor physics, calculation, versus measurements (9 conferences) - session 3: New fast and thermal reactor designs (9 conferences) - session 4: Thermal radiation and charged particles transport (7 conferences) - session 5: Super computers (7 conferences) - session 6: Thermal reactor design, validation and operating experience (8 conferences).

  13. Parallel computing for event reconstruction in high-energy physics

    International Nuclear Information System (INIS)

    Wolbers, S.

    1993-01-01

    Parallel computing has been recognized as a solution to large computing problems. In High Energy Physics, offline event reconstruction of detector data is a very large computing problem that has been solved with parallel computing techniques. A review is given of the parallel programming package CPS (Cooperative Processes Software), developed and used at Fermilab for offline reconstruction of Terabytes of data requiring the delivery of hundreds of Vax-Years per experiment. The Fermilab UNIX farms, consisting of 180 Silicon Graphics workstations and 144 IBM RS6000 workstations, are used to provide the computing power for the experiments. Fermilab has had a long history of providing production parallel computing, starting with the ACP (Advanced Computer Project) Farms in 1986. The Fermilab UNIX Farms have been in production for over 2 years with 24 hour/day service to experimental user groups. Additional tools for managing, controlling and monitoring these large systems are also described. Possible future directions for parallel computing in High Energy Physics are given.

  14. Computational Physics Simulation of Classical and Quantum Systems

    CERN Document Server

    Scherer, Philipp O. J

    2010-01-01

    This book encapsulates the coverage for a two-semester course in computational physics. The first part introduces the basic numerical methods while omitting mathematical proofs but demonstrating the algorithms by way of numerous computer experiments. The second part specializes in simulation of classical and quantum systems with instructive examples spanning many fields in physics, from a classical rotor to a quantum bit. All program examples are realized as Java applets ready to run in your browser and do not require any programming skills.

  15. Physical-resource requirements and the power of quantum computation

    International Nuclear Information System (INIS)

    Caves, Carlton M; Deutsch, Ivan H; Blume-Kohout, Robin

    2004-01-01

    The primary resource for quantum computation is the Hilbert-space dimension. Whereas Hilbert space itself is an abstract construction, the number of dimensions available to a system is a physical quantity that requires physical resources. Avoiding a demand for an exponential amount of these resources places a fundamental constraint on the systems that are suitable for scalable quantum computation. To be scalable, the number of degrees of freedom in the computer must grow nearly linearly with the number of qubits in an equivalent qubit-based quantum computer. These considerations rule out quantum computers based on a single particle, a single atom, or a single molecule consisting of a fixed number of atoms or on classical waves manipulated using the transformations of linear optics
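
    The scaling argument above lends itself to a short numerical illustration. The sketch below (Python, with purely illustrative numbers) contrasts the exponential growth of the Hilbert-space dimension with the roughly linear growth in degrees of freedom that a scalable qubit-based machine requires; a single-particle computer would need one distinguishable mode per dimension.

```python
# Illustrative only: compare resource growth for n-qubit registers.
# A single-particle machine needs as many distinguishable modes as the
# Hilbert-space dimension 2**n; a qubit register needs only ~n subsystems.
for n in (10, 20, 40):
    dim = 2 ** n
    print(f"{n:>3} qubits: Hilbert dimension {dim:.2e} "
          f"(single-particle modes ~{dim:.2e}, qubit subsystems ~{n})")
```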

  16. CFD Vision 2030 Study: A Path to Revolutionary Computational Aerosciences

    Science.gov (United States)

    Slotnick, Jeffrey; Khodadoust, Abdollah; Alonso, Juan; Darmofal, David; Gropp, William; Lurie, Elizabeth; Mavriplis, Dimitri

    2014-01-01

    This report documents the results of a study to address the long range, strategic planning required by NASA's Revolutionary Computational Aerosciences (RCA) program in the area of computational fluid dynamics (CFD), including future software and hardware requirements for High Performance Computing (HPC). Specifically, the "Vision 2030" CFD study is to provide a knowledge-based forecast of the future computational capabilities required for turbulent, transitional, and reacting flow simulations across a broad Mach number regime, and to lay the foundation for the development of a future framework and/or environment where physics-based, accurate predictions of complex turbulent flows, including flow separation, can be accomplished routinely and efficiently in cooperation with other physics-based simulations to enable multi-physics analysis and design. Specific technical requirements from the aerospace industrial and scientific communities were obtained to determine critical capability gaps, anticipated technical challenges, and impediments to achieving the target CFD capability in 2030. A preliminary development plan and roadmap were created to help focus investments in technology development to help achieve the CFD vision in 2030.

  17. Large Scale Computing and Storage Requirements for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years.

  18. Crystal assisted experiments for multi-disciplinary physics with heavy ion beams at GANIL

    International Nuclear Information System (INIS)

    Dauvergne, Denis

    2015-01-01

    We present a review of the channeling and blocking experiments that have been performed at GANIL during the 30 years of stable beam operation, with the strong support of the multi-disciplinary CIRIL-CIMAP laboratory. These experiments combine atomic physics, solid state physics and nuclear physics. (paper)

  19. From pattern formation to material computation multi-agent modelling of physarum polycephalum

    CERN Document Server

    Jones, Jeff

    2015-01-01

    This book addresses topics of mobile multi-agent systems, pattern formation, biological modelling, artificial life, unconventional computation, and robotics. The behaviour of a simple organism which is capable of remarkable biological and computational feats that seem to transcend its simple component parts is examined and modelled. In this book the following question is asked: how can something as simple as Physarum polycephalum - a giant amoeboid single-celled organism which does not possess any neural tissue, fixed skeleton or organised musculature - approximate complex computational behaviour during its foraging, growth and adaptation of its amorphous body plan, and with such limited resources? To answer this question the same apparent limitations as faced by the organism are applied: using only simple components with local interactions. A synthesis approach is adopted and a mobile multi-agent system with very simple individual behaviours is employed. It is shown that their interactions yield emergent behaviour...

  20. Multi-objective experimental design for (13)C-based metabolic flux analysis.

    Science.gov (United States)

    Bouvin, Jeroen; Cajot, Simon; D'Huys, Pieter-Jan; Ampofo-Asiama, Jerry; Anné, Jozef; Van Impe, Jan; Geeraerd, Annemie; Bernaerts, Kristel

    2015-10-01

    (13)C-based metabolic flux analysis is an excellent technique to resolve fluxes in the central carbon metabolism, but costs can be significant when using specialized tracers. This work presents a framework for cost-effective design of (13)C-tracer experiments, illustrated on two different networks. Linear and non-linear optimal input mixtures are computed for networks of Streptomyces lividans and a carcinoma cell line. If only glucose tracers are considered as labeled substrate for a carcinoma cell line or S. lividans, the best parameter estimation accuracy is obtained by mixtures containing high amounts of 1,2-(13)C2 glucose combined with uniformly labeled glucose. Experimental designs are evaluated based on a linear (D-criterion) and a non-linear approach (S-criterion). Both approaches generate almost the same input mixture; however, the linear approach is favored due to its low computational effort. The high amount of 1,2-(13)C2 glucose in the optimal designs coincides with a high experimental cost, which is further increased when labeling is introduced in glutamine and aspartate tracers. Multi-objective optimization gives the possibility to assess experimental quality and cost at the same time and can reveal excellent compromise experiments. For example, the combination of 100% 1,2-(13)C2 glucose with 100% position-one-labeled glutamine and the combination of 100% 1,2-(13)C2 glucose with 100% uniformly labeled glutamine perform equally well for the carcinoma cell line, but the first mixture offers a decrease in cost of $120 per ml-scale cell culture experiment. We demonstrated the validity of a multi-objective linear approach to perform optimal experimental designs for the non-linear problem of (13)C-metabolic flux analysis. Tools and a workflow are provided to perform multi-objective design. The effortless calculation of the D-criterion can be exploited to perform high-throughput screening of possible (13)C-tracers, while the illustrated benefit of multi-objective design...
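
    As a rough illustration of the D-criterion screening described above, the sketch below scores candidate tracer mixtures by the determinant of a Fisher information matrix built from a measurement Jacobian. The Jacobian here is a random placeholder; in a real (13)C-MFA study it would be recomputed from the labeling model for each input mixture, and the function names are ours, not the paper's.

```python
import numpy as np

def d_criterion(jacobian, meas_var=1e-4):
    """D-criterion: determinant of the Fisher information matrix.

    jacobian: sensitivities of the simulated label measurements with
    respect to the free fluxes for one candidate tracer mixture.
    """
    fim = jacobian.T @ jacobian / meas_var
    return np.linalg.det(fim)

# Toy high-throughput screen over candidate mixtures (fraction of
# 1,2-13C2 glucose vs uniformly labeled glucose -- illustrative only).
rng = np.random.default_rng(0)
for frac in np.linspace(0.0, 1.0, 11):
    # Placeholder sensitivity matrix; a real study would recompute the
    # measurement Jacobian from the isotopomer model for each mixture.
    jac = rng.normal(size=(20, 4)) * (0.5 + frac)
    print(f"{frac:.1f} 1,2-13C2 glucose -> D = {d_criterion(jac):.3e}")
```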

  1. Analytical calculations by computer in physics and mathematics

    International Nuclear Information System (INIS)

    Gerdt, V.P.; Tarasov, O.V.; Shirokov, D.V.

    1978-01-01

    A review of the present status of analytical calculations by computer is given. Some programming systems for analytical computations are considered. Systems such as SCHOONSCHIP, CLAM, REDUCE-2, SYMBAL, CAMAL and AVTO-ANALITIK, which are implemented or will be implemented at JINR, and MACSYMA, one of the most developed systems, are discussed. It is shown, on the basis of the mathematical operations realized in these systems, that they are appropriate for different problems of theoretical physics and mathematics, for example, for problems of quantum field theory, celestial mechanics, general relativity and so on. Some problems solved at JINR by programming systems for analytical computations are described. The review is intended for specialists in different fields of theoretical physics and mathematics.
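
    The kinds of symbolic manipulation these systems performed can be reproduced today with open-source tools; the snippet below uses SymPy (a modern analogue, not one of the systems reviewed) for two operations typical of theoretical-physics work: a series expansion and a Gaussian integral.

```python
import sympy as sp

# SymPy stands in here for systems like REDUCE-2 or MACSYMA.
x, a = sp.symbols("x a", positive=True)
print(sp.series(sp.exp(-a * x**2), x, 0, 6))                # Taylor series
print(sp.integrate(sp.exp(-a * x**2), (x, -sp.oo, sp.oo)))  # Gaussian integral
```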

  2. Computations, Complexity, Experiments, and the World Outside Physics

    International Nuclear Information System (INIS)

    Kadanoff, L.P

    2009-01-01

    Computer Models in the Sciences and Social Sciences. 1. Simulation and Prediction in Complex Systems: the Good, the Bad and the Awful. This lecture deals with the history of large-scale computer modeling, mostly in the context of the U.S. Department of Energy's sponsorship of modeling for weapons development and innovation in energy sources. 2. Complexity: Making a Splash - Breaking a Neck - The Making of Complexity in Physical Systems. For ages thinkers have been asking how complexity arises. The laws of physics are very simple. How come we are so complex? This lecture tries to approach the question by asking how complexity arises in physical fluids. 3. Forrester et al.: Social and Biological Model-Making. The partial collapse of the world's economy has raised the question of whether we could improve the performance of economic and social systems by a major effort on creating understanding via large-scale computer models. (author)

  3. Enabling Grid Computing resources within the KM3NeT computing model

    Directory of Open Access Journals (Sweden)

    Filippidis Christos

    2016-01-01

    Full Text Available KM3NeT is a future European deep-sea research infrastructure hosting a new generation of neutrino detectors that – located at the bottom of the Mediterranean Sea – will open a new window on the universe and answer fundamental questions in both particle physics and astrophysics. International collaborative scientific experiments, like KM3NeT, are generating datasets which are increasing exponentially in both complexity and volume, making their analysis, archival, and sharing one of the grand challenges of the 21st century. These experiments, for the most part, adopt computing models consisting of different Tiers, with several computing centres providing a specific set of services for the different steps of data processing such as detector calibration, simulation and data filtering, reconstruction and analysis. The computing requirements are extremely demanding and usually span from serial to multi-parallel or GPU-optimized jobs. The collaborative nature of these experiments demands very frequent WAN data transfers and data sharing among individuals and groups. In order to support these demanding computing requirements, we enabled Grid Computing resources, operated by EGI, within the KM3NeT computing model. In this study we describe our first advances in this field and the method for KM3NeT users to utilize the EGI computing resources in a simulation-driven use-case.

  4. Tree Based Decision Strategies and Auctions in Computational Multi-Agent Systems

    Czech Academy of Sciences Publication Activity Database

    Šlapák, M.; Neruda, Roman

    2017-01-01

    Roč. 38, č. 4 (2017), s. 335-342 ISSN 0257-4306 Institutional support: RVO:67985807 Keywords: auction systems * decision making * genetic programming * multi-agent system * task distribution Subject RIV: IN - Informatics, Computer Science OBOR OECD: Computer sciences, information science, bioinformatics (hardware development to be 2.2, social aspect to be 5.8) http://rev-inv-ope.univ-paris1.fr/fileadmin/rev-inv-ope/files/38417/38417-04.pdf

  5. Computational physics. Simulation of classical and quantum systems

    Energy Technology Data Exchange (ETDEWEB)

    Scherer, Philipp O.J. [TU Muenchen (Germany). Physikdepartment T38

    2010-07-01

    This book encapsulates the coverage for a two-semester course in computational physics. The first part introduces the basic numerical methods while omitting mathematical proofs but demonstrating the algorithms by way of numerous computer experiments. The second part specializes in simulation of classical and quantum systems with instructive examples spanning many fields in physics, from a classical rotor to a quantum bit. All program examples are realized as Java applets ready to run in your browser and do not require any programming skills. (orig.)

  6. High-performance secure multi-party computation for data mining applications

    DEFF Research Database (Denmark)

    Bogdanov, Dan; Niitsoo, Margus; Toft, Tomas

    2012-01-01

    Secure multi-party computation (MPC) is a technique well suited for privacy-preserving data mining. Even with the recent progress in two-party computation techniques such as fully homomorphic encryption, general MPC remains relevant as it has shown promising performance metrics in real...... operations such as multiplication and comparison. Secondly, the confidential processing of financial data requires the use of more complex primitives, including a secure division operation. This paper describes new protocols in the Sharemind model for secure multiplication, share conversion, equality, bit...
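
    To make the secret-sharing setting concrete, here is a minimal sketch of additive secret sharing with Beaver-triple multiplication, a standard textbook construction. The actual Sharemind protocols differ in detail (three-party additive sharing with their own multiplication and share-conversion subprotocols), so this is illustrative only.

```python
import secrets

P = 2**61 - 1  # prime modulus for additive secret sharing (illustrative)

def share(x, n=3):
    """Split x into n additive shares that sum to x mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def reveal(shares):
    return sum(shares) % P

def beaver_multiply(x_sh, y_sh):
    """Multiply two secret-shared values using a preprocessed triple."""
    # Preprocessing: random triple (a, b, c) with c = a*b, secret-shared.
    a, b = secrets.randbelow(P), secrets.randbelow(P)
    a_sh, b_sh, c_sh = share(a), share(b), share((a * b) % P)
    # Parties open d = x - a and e = y - b; these leak nothing about x, y.
    d = reveal([(xi - ai) % P for xi, ai in zip(x_sh, a_sh)])
    e = reveal([(yi - bi) % P for yi, bi in zip(y_sh, b_sh)])
    # Local recombination: z = c + d*b + e*a + d*e equals x*y.
    z_sh = [(ci + d * bi + e * ai) % P
            for ci, ai, bi in zip(c_sh, a_sh, b_sh)]
    z_sh[0] = (z_sh[0] + d * e) % P  # public term added by one party
    return z_sh

x_sh, y_sh = share(42), share(99)
assert reveal(beaver_multiply(x_sh, y_sh)) == 42 * 99
```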

  7. Automated and Assistive Tools for Accelerated Code migration of Scientific Computing on to Heterogeneous MultiCore Systems

    Science.gov (United States)

    2017-04-13

    AFRL-AFOSR-UK-TR-2017-0029: final report for the grant "Automated and Assistive Tools for Accelerated Code migration of Scientific Computing on to Heterogeneous MultiCore Systems" (period of performance 2012 - 01/25/2015).

  8. Building a Prototype of LHC Analysis Oriented Computing Centers

    Science.gov (United States)

    Bagliesi, G.; Boccali, T.; Della Ricca, G.; Donvito, G.; Paganoni, M.

    2012-12-01

    A Consortium between four LHC Computing Centers (Bari, Milano, Pisa and Trieste) was formed in 2010 to prototype Analysis-oriented facilities for CMS data analysis, profiting from a grant from the Italian Ministry of Research. The Consortium aims to realize an ad-hoc infrastructure to ease the analysis activities on the huge data set collected at the LHC Collider. While “Tier2” Computing Centres, specialized in organized processing tasks like Monte Carlo simulation, are nowadays a well-established concept with years of running experience, sites specialized for end-user chaotic analysis activities do not yet have a de facto standard implementation. In our effort, we focus on all the aspects that can make the analysis tasks easier for a physics user who is not a computing expert. On the storage side, we are experimenting with storage techniques allowing remote data access and with storage optimization for the typical analysis access patterns. On the networking side, we are studying the differences between flat and tiered LAN architectures, also using virtual partitioning of the same physical network for the different use patterns. Finally, on the user side, we are developing tools and instruments that allow exhaustive monitoring of user processes at the site, and an efficient support system in case of problems. We report on the results of the tests executed on the different subsystems and describe the layout of the infrastructure in place at the sites participating in the consortium.

  9. Building a Prototype of LHC Analysis Oriented Computing Centers

    International Nuclear Information System (INIS)

    Bagliesi, G; Boccali, T; Della Ricca, G; Donvito, G; Paganoni, M

    2012-01-01

    A Consortium between four LHC Computing Centers (Bari, Milano, Pisa and Trieste) was formed in 2010 to prototype Analysis-oriented facilities for CMS data analysis, profiting from a grant from the Italian Ministry of Research. The Consortium aims to realize an ad-hoc infrastructure to ease the analysis activities on the huge data set collected at the LHC Collider. While “Tier2” Computing Centres, specialized in organized processing tasks like Monte Carlo simulation, are nowadays a well-established concept with years of running experience, sites specialized for end-user chaotic analysis activities do not yet have a de facto standard implementation. In our effort, we focus on all the aspects that can make the analysis tasks easier for a physics user who is not a computing expert. On the storage side, we are experimenting with storage techniques allowing remote data access and with storage optimization for the typical analysis access patterns. On the networking side, we are studying the differences between flat and tiered LAN architectures, also using virtual partitioning of the same physical network for the different use patterns. Finally, on the user side, we are developing tools and instruments that allow exhaustive monitoring of user processes at the site, and an efficient support system in case of problems. We report on the results of the tests executed on the different subsystems and describe the layout of the infrastructure in place at the sites participating in the consortium.

  10. Supercomputers and the future of computational atomic scattering physics

    International Nuclear Information System (INIS)

    Younger, S.M.

    1989-01-01

    The advent of the supercomputer has opened new vistas for the computational atomic physicist. Problems of hitherto unparalleled complexity are now being examined using these new machines, and important connections with other fields of physics are being established. This talk briefly reviews some of the most important trends in computational scattering physics and suggests some exciting possibilities for the future. 7 refs., 2 figs

  11. Multi-instrument assessment of physical activity in female office workers

    Directory of Open Access Journals (Sweden)

    Sema Can

    2016-12-01

    Full Text Available Objectives: The aim of this study was to examine the multi-instrument assessment of physical activity in female office workers. Material and Methods: Fifty healthy women (age (mean ± standard deviation): 34.8±5.9 years, body height: 158±0.4 cm, body weight: 61.8±7.5 kg, body mass index: 24.6±2.7 kg/m2) working at the same workplace volunteered to participate in the study. Physical activity was measured with the 7-day Physical Activity Assessment Questionnaire (7-d PAAQ), an objective multi-sensor armband tool, and a waist-mounted pedometer; both devices were worn for 7 days. Results: A significant correlation between step numbers measured by armband and pedometer was observed (r = 0.735), but the step numbers measured by these 2 methods were significantly different (10 941±2236 steps/day and 9170±2377 steps/day, respectively; p < 0.001). There was a weak correlation between the value of 7-d PAAQ total energy expenditure and the value of armband total energy expenditure (r = 0.394, p = 0.005). However, total energy expenditure values measured by armband and 7-d PAAQ were not significantly different (2081±370 kcal/day and 2084±197 kcal/day, respectively; p = 0.96). In addition, physical activity levels (average daily metabolic equivalents, MET) measured by armband and 7-d PAAQ were not significantly different (1.45±0.12 MET/day and 1.47±0.24 MET/day, respectively; p = 0.44). Conclusions: The results of this study showed that the correlation between pedometer and armband measurements was higher than that between armband measurements and 7-d PAAQ self-reports. Our results suggest that none of the assessment methods examined here, 7-d PAAQ, pedometer, or armband, is sufficient when used as a single tool for physical activity level determination. Therefore, multi-instrument assessment methods are preferable. Int J Occup Med Environ Health 2016;29(6):937–945
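
    The pattern reported above, a strong between-device correlation coexisting with a significant difference in means, is easy to reproduce. The sketch below uses synthetic step counts (invented numbers shaped like the reported means and SDs, not the study's data) with SciPy's Pearson correlation and paired t-test.

```python
import numpy as np
from scipy import stats

# Hypothetical daily step counts for illustration only.
rng = np.random.default_rng(1)
armband = rng.normal(10941, 2236, size=50)
pedometer = armband - rng.normal(1770, 800, size=50)  # systematic offset

r, _ = stats.pearsonr(armband, pedometer)   # devices can correlate strongly
t, p = stats.ttest_rel(armband, pedometer)  # yet still differ systematically
print(f"r = {r:.2f}, paired t-test p = {p:.2g}")
```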

  12. Computer Self-Efficacy, Computer Anxiety, Performance and Personal Outcomes of Turkish Physical Education Teachers

    Science.gov (United States)

    Aktag, Isil

    2015-01-01

    The purpose of this study is to determine the computer self-efficacy, performance outcome, personal outcome, and affect and anxiety level of physical education teachers. Influence of teaching experience, computer usage and participation of seminars or in-service programs on computer self-efficacy level were determined. The subjects of this study…

  13. Application of local computer networks in nuclear-physical experiments and technology

    International Nuclear Information System (INIS)

    Foteev, V.A.

    1986-01-01

    The bases of construction, comparative performance and potentialities of local computer networks with respect to their application in physical experiments are considered. The principle of operation of local networks is shown on the basis of the Ethernet network, and the results of an analysis of their operating performance are given. Examples of operating local networks in the area of nuclear-physics research and nuclear technology are presented, as follows: the networks of the Japan Atomic Energy Research Institute, California University and Los Alamos National Laboratory; network realizations according to the DECnet and Fastbus programs; in-house network configurations of the USSR Academy of Sciences and the JINR Neutron Physical Laboratory, etc. It is shown that local networks allow a significant rise in productivity in the sphere of data processing.

  14. Modern computer hardware and the role of central computing facilities in particle physics

    International Nuclear Information System (INIS)

    Zacharov, V.

    1981-01-01

    Important recent changes in the hardware technology of computer system components are reviewed, and the impact of these changes on the present and future pattern of computing in particle physics is assessed. The place of central computing facilities is examined in particular, to answer the important question of what, if anything, their future role should be. Parallelism in computing system components is considered to be an important property that can be exploited with advantage. The paper includes a short discussion of the position of communications and network technology in modern computer systems. (orig.)

  15. Preparing Students for Careers in Science and Industry with Computational Physics

    Science.gov (United States)

    Florinski, V. A.

    2011-12-01

    Funded by an NSF CAREER grant, the University of Alabama in Huntsville (UAH) has launched a new graduate program in Computational Physics. It is universally accepted that today's physics is done on a computer. The program blurs the boundary between physics and computer science by teaching students modern, practical techniques for solving difficult physics problems on diverse computational platforms. Currently consisting of two courses first offered in the Fall of 2011, the program will eventually include 5 courses covering methods for fluid dynamics, particle transport via stochastic methods, and hybrid and PIC plasma simulations. UAH's unique location allows courses to be shaped through discussions with faculty, NASA/MSFC researchers and local R&D business representatives, i.e., potential employers of the program's graduates. Students currently participating in the program have all begun their research careers in space and plasma physics; many are presenting their research at this meeting.

  16. Computer work and self-reported variables on anthropometrics, computer usage, work ability, productivity, pain, and physical activity.

    Science.gov (United States)

    Madeleine, Pascal; Vangsgaard, Steffen; Hviid Andersen, Johan; Ge, Hong-You; Arendt-Nielsen, Lars

    2013-08-01

    Computer users often report musculoskeletal complaints and pain in the upper extremities and the neck-shoulder region. However, recent epidemiological studies do not report a relationship between the extent of computer use and work-related musculoskeletal disorders (WMSD). The aim of this study was to conduct an explorative analysis of short- and long-term pain complaints and work-related variables in a cohort of Danish computer users. A structured web-based questionnaire was designed, including questions related to musculoskeletal pain, anthropometrics, work-related variables, work ability, productivity, health-related parameters, lifestyle variables as well as physical activity during leisure time. Six hundred and ninety office workers completed the questionnaire in response to an announcement posted in a union magazine. The questionnaire outcomes, i.e., pain intensity, duration and locations as well as anthropometrics, work-related variables, work ability, productivity, and level of physical activity, were stratified by gender and correlations were obtained. Women reported significantly higher pain intensity, longer pain duration and more locations with pain than men, and also scored significantly poorer on work ability and on the ability to fulfil the requirements on productivity; pain was likewise significantly associated with reduced work ability and productivity. The poorer work ability reported by women workers may relate to their higher risk of contracting WMSD. Overall, this investigation confirmed the complex interplay between anthropometrics, work ability, productivity, and pain perception among computer users.

  17. Comprehensive preclinical evaluation of a multi-physics model of liver tumor radiofrequency ablation.

    Science.gov (United States)

    Audigier, Chloé; Mansi, Tommaso; Delingette, Hervé; Rapaka, Saikiran; Passerini, Tiziano; Mihalef, Viorel; Jolly, Marie-Pierre; Pop, Raoul; Diana, Michele; Soler, Luc; Kamen, Ali; Comaniciu, Dorin; Ayache, Nicholas

    2017-09-01

    We aim to develop a framework for the validation of a subject-specific multi-physics model of liver tumor radiofrequency ablation (RFA). The RFA computation becomes subject-specific after several levels of personalization: geometrical and biophysical (hemodynamics, heat transfer and an extended cellular necrosis model). We present a comprehensive experimental setup combining multimodal, pre- and postoperative anatomical and functional images, as well as interventional monitoring of intra-operative signals: the temperature and delivered power. To exploit this dataset, an efficient processing pipeline is introduced, which copes with image noise, variable resolution and anisotropy. The validation study includes twelve ablations from five healthy pig livers: a mean point-to-mesh error between predicted and actual ablation extent of 5.3 ± 3.6 mm is achieved. This enables an end-to-end preclinical validation framework that considers the available dataset.
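
    For readers unfamiliar with the point-to-mesh metric, a simplified sketch: sample points on the predicted ablation surface, find each point's distance to the reference mesh, and average. Using nearest vertices instead of true point-to-triangle projection, and random toy surfaces, keeps the example short; the real pipeline is more involved.

```python
import numpy as np

def mean_point_to_mesh(points, mesh_vertices):
    """Approximate point-to-mesh error via nearest-vertex distances.

    A faithful implementation would project each point onto the mesh
    triangles; using vertices only is a simplification for illustration.
    """
    d = np.linalg.norm(points[:, None, :] - mesh_vertices[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Toy data standing in for predicted vs segmented ablation surfaces.
rng = np.random.default_rng(2)
predicted = rng.normal(size=(200, 3))
actual = predicted + rng.normal(scale=0.3, size=(200, 3))
print(f"mean point-to-mesh error ~ {mean_point_to_mesh(predicted, actual):.2f}")
```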

  18. Computational physics: an introduction (second edition)

    International Nuclear Information System (INIS)

    Borcherds, Peter

    2002-01-01

    This book has much in common with many other Computational Physics texts, some of which are helpfully listed by the author in 'A subjective review of related texts'. The first five chapters are introductory, covering finite differences, linear algebra, stochastics and ordinary and partial differential equations. The final section of chapter 3 is entitled 'Stochastic Optimisation' and covers Simulated Annealing and Genetic Algorithms. Neither topic is adequately covered; an explicit example, with algorithms, in each case would have been helpful. However, few other computational physics texts mention these topics at all. The chapters in the final part of the book are more advanced, and comprehensively cover Simulation and Statistical Mechanics, Quantum Mechanical Simulation and Hydrodynamics. These chapters include specialist material not found in other texts, e.g. Alder vortices and the Nosé-Hoover method. There is an extensive coverage of Ewald summation. The author is in the course of augmenting his book with web-resident sample programs, which should enhance the value of the book. This book should appeal to anyone working in the fields covered in the final section. It ought also to be in any physics library. (author)

  19. GPU-computing in econophysics and statistical physics

    Science.gov (United States)

    Preis, T.

    2011-03-01

    A recent trend in computer science and related fields is general-purpose computing on graphics processing units (GPUs), which can yield impressive performance. With many cores and high memory bandwidth, today's GPUs offer resources for non-graphics parallel processing. This article provides a brief introduction to the field of GPU computing and includes examples. In particular, computationally expensive analyses employed in the financial-market context are coded on a graphics-card architecture, which leads to a significant reduction of computing time. In order to demonstrate the wide range of possible applications, a standard model in statistical physics, the Ising model, is ported to a graphics-card architecture as well, resulting in large speedup values.
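
    GPU ports of the Ising model typically rely on a checkerboard decomposition, so that the two non-interacting sublattices can be updated in parallel, one spin per thread. The sketch below expresses that update pattern with NumPy vectorization as a CPU stand-in for a CUDA kernel; lattice size and temperature are illustrative.

```python
import numpy as np

def checkerboard_sweep(spins, beta, rng):
    """One Metropolis sweep of the 2D Ising model (J = 1, periodic
    boundaries) in the checkerboard scheme that maps onto GPU threads."""
    ii, jj = np.indices(spins.shape)
    for parity in (0, 1):
        mask = (ii + jj) % 2 == parity
        # Sum of the four nearest neighbours, recomputed per sublattice.
        nn = (np.roll(spins, 1, 0) + np.roll(spins, -1, 0) +
              np.roll(spins, 1, 1) + np.roll(spins, -1, 1))
        dE = 2.0 * spins * nn  # energy change if the spin is flipped
        accept = (rng.random(spins.shape) <
                  np.exp(-beta * np.clip(dE, 0, None))) & mask
        spins[accept] *= -1
    return spins

rng = np.random.default_rng(3)
spins = rng.choice([-1, 1], size=(64, 64))
for _ in range(100):
    checkerboard_sweep(spins, beta=0.6, rng=rng)
print("magnetization per site:", spins.mean())
```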

  20. A computer programme for use in the development of multi-element x-ray-fluorescence methods of analysis

    International Nuclear Information System (INIS)

    Wall, G.J.

    1985-01-01

    A computer programme (written in BASIC) is described for the evaluation of spectral-line intensities in X-ray-fluorescence spectrometry. The programme is designed to assist the analyst in developing new analytical methods, because it facilitates the selection of the following evaluation parameters: calculation models, spectral-line correction factors, calibration curves, calibration ranges, and point deletions. In addition, the programme enables the analyst to undertake routine calculations of data from multi-element analyses in which variable data-reduction parameters are used for each element.
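
    A minimal sketch of the kind of evaluation the programme supports, assuming a linear calibration with a single spectral-line overlap correction. The intensities, concentrations and overlap factor k below are invented for illustration, and the original code was BASIC rather than the Python used here.

```python
import numpy as np

# Hypothetical calibration data: analyte concentration c is modeled from
# the analyte line intensity I_a after subtracting an overlap correction
# k * I_b contributed by an interfering element's line.
I_a = np.array([12.1, 25.3, 38.2, 51.0, 63.8])   # analyte line counts
I_b = np.array([5.0, 4.8, 5.2, 5.1, 4.9])        # interfering line counts
c = np.array([1.0, 2.0, 3.0, 4.0, 5.0])          # reference concentrations

k = 0.15                                         # assumed overlap factor
I_corr = I_a - k * I_b                           # spectral-line correction
A = np.vstack([I_corr, np.ones_like(I_corr)]).T  # linear calibration model
slope, intercept = np.linalg.lstsq(A, c, rcond=None)[0]
print(f"calibration curve: c = {slope:.4f} * I_corr + {intercept:.4f}")
```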

  1. Assessing the economic impact of North China’s water scarcity mitigation strategy: a multi-region, water-extended computable general equilibrium analysis

    NARCIS (Netherlands)

    Qin, Changbo; Qin, C.; Su, Zhongbo; Bressers, Johannes T.A.; Jia, Y.; Wang, H.

    2013-01-01

    This paper describes a multi-region computable general equilibrium model for analyzing the effectiveness of measures and policies for mitigating North China’s water scarcity with respect to three different groups of scenarios. The findings suggest that a reduction in groundwater use would negatively

  2. Modeling and simulation of multi-physics multi-scale transport phenomena in bio-medical applications

    Science.gov (United States)

    Kenjereš, Saša

    2014-08-01

    We present a short overview of some of our most recent work that combines mathematical modeling, advanced computer simulations and state-of-the-art experimental techniques to study physical transport phenomena in various bio-medical applications. In the first example, we tackle predictions of complex blood flow patterns in the patient-specific vascular system (carotid artery bifurcation) and transfer of the so-called "bad" cholesterol (low-density lipoprotein, LDL) within the multi-layered artery wall. This two-way coupling between the blood flow and the corresponding mass transfer of LDL within the artery wall is essential for predictions of regions where atherosclerosis can develop. It is demonstrated that a recently developed mathematical model, which takes into account the complex multi-layer arterial-wall structure, produced LDL profiles within the artery wall in good agreement with in-vivo experiments in rabbits, and it can be used for predictions of locations where the initial stage of development of atherosclerosis may take place. The second example includes a combination of pulsating blood flow and medical drug delivery and deposition controlled by external magnetic field gradients in the patient-specific carotid artery bifurcation. The results of numerical simulations are compared with our own PIV (Particle Image Velocimetry) and MRI (Magnetic Resonance Imaging) measurements in the PDMS (silicon-based organic polymer) phantom. A very good agreement between simulations and experiments is obtained for different stages of the pulsating cycle. Application of the magnetic drug targeting resulted in an increase of up to tenfold in the efficiency of local deposition of the medical drug at desired locations. Finally, the LES (Large Eddy Simulation) of the aerosol distribution within the human respiratory system that includes up to eight bronchial generations is performed. A very good agreement between simulations and MRV (Magnetic Resonance Velocimetry) measurements is obtained.
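
    The multi-layer wall model can be illustrated in its simplest steady-state limit, where each layer contributes a mass-transfer resistance L/D in series. The layer names, thicknesses and diffusivities below are invented placeholders, not the values of the model discussed above.

```python
# Steady 1D diffusive transport across a multi-layered wall (layers in
# series), the simplest analogue of multi-layer LDL transfer. All numbers
# are hypothetical.
layers = [  # (name, thickness [m], effective diffusivity [m^2/s])
    ("endothelium", 2e-6, 5e-12),
    ("intima", 10e-6, 5e-11),
    ("media", 200e-6, 5e-12),
]
c_lumen, c_adventitia = 1.0, 0.0  # normalized LDL concentrations

# Series mass-transfer resistance, by analogy with resistors in series.
resistance = sum(L / D for _, L, D in layers)
flux = (c_lumen - c_adventitia) / resistance
print(f"steady LDL flux ~ {flux:.3e} (normalized units)")
```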

  3. Modeling and simulation of multi-physics multi-scale transport phenomena in bio-medical applications

    International Nuclear Information System (INIS)

    Kenjereš, Saša

    2014-01-01

    We present a short overview of some of our most recent work that combines mathematical modeling, advanced computer simulations and state-of-the-art experimental techniques to study physical transport phenomena in various bio-medical applications. In the first example, we tackle predictions of complex blood flow patterns in the patient-specific vascular system (carotid artery bifurcation) and transfer of the so-called 'bad' cholesterol (low-density lipoprotein, LDL) within the multi-layered artery wall. This two-way coupling between the blood flow and the corresponding mass transfer of LDL within the artery wall is essential for predictions of regions where atherosclerosis can develop. It is demonstrated that a recently developed mathematical model, which takes into account the complex multi-layer arterial-wall structure, produced LDL profiles within the artery wall in good agreement with in-vivo experiments in rabbits, and it can be used for predictions of locations where the initial stage of development of atherosclerosis may take place. The second example includes a combination of pulsating blood flow and medical drug delivery and deposition controlled by external magnetic field gradients in the patient-specific carotid artery bifurcation. The results of numerical simulations are compared with our own PIV (Particle Image Velocimetry) and MRI (Magnetic Resonance Imaging) measurements in the PDMS (silicon-based organic polymer) phantom. A very good agreement between simulations and experiments is obtained for different stages of the pulsating cycle. Application of the magnetic drug targeting resulted in an increase of up to tenfold in the efficiency of local deposition of the medical drug at desired locations. Finally, the LES (Large Eddy Simulation) of the aerosol distribution within the human respiratory system that includes up to eight bronchial generations is performed. A very good agreement between simulations and MRV (Magnetic Resonance Velocimetry) measurements is obtained.

  4. Modelling transport phenomena in a multi-physics context

    Energy Technology Data Exchange (ETDEWEB)

    Marra, Francesco [Dipartimento di Ingegneria Chimica e Alimentare - Università degli studi di Salerno Via Ponte Don Melillo - 84084 Fisciano SA (Italy)

    2015-01-22

    Innovative heating research on cooking, pasteurization/sterilization, defrosting, thawing and drying often focuses on the assessment of processing time, the evaluation of heating uniformity, the impact on quality attributes of the final product, and the energy efficiency of the heating process. During the last twenty years, so-called electro-heating processes (radio-frequency - RF, microwaves - MW and ohmic - OH) have gained wide interest in industrial food processing, and many applications using the above-mentioned technologies have been developed with the aim of reducing processing time, improving process efficiency and, in many cases, the heating uniformity. In the area of innovative heating, electro-heating accounts for a considerable portion of both the scientific literature and commercial applications, and can be subdivided into either direct electro-heating (as in the case of OH heating), where electrical current is applied directly to the food, or indirect electro-heating (e.g. MW and RF heating), where the electrical energy is first converted to electromagnetic radiation which subsequently generates heat within the product. New software packages, which make it easier to solve PDE-based mathematical models, and new computers, with larger RAM and more efficient CPU performance, have fostered increasing interest in modelling transport phenomena in systems and processes - such as those encountered in food processing - that can be complex in terms of geometry, composition and boundary conditions, but also - as in the case of electro-heating-assisted applications - in terms of interaction with other physical phenomena such as the displacement of electric or magnetic fields. This paper describes the approaches used in modelling transport phenomena in a multi-physics context such as RF, MW and OH assisted heating.

  5. Modelling transport phenomena in a multi-physics context

    Science.gov (United States)

    Marra, Francesco

    2015-01-01

    Innovative heating research on cooking, pasteurization/sterilization, defrosting, thawing and drying often focuses on the assessment of processing time, the evaluation of heating uniformity, the impact on quality attributes of the final product, and the energy efficiency of the heating process. During the last twenty years, so-called electro-heating processes (radio-frequency - RF, microwaves - MW and ohmic - OH) have gained wide interest in industrial food processing, and many applications using the above-mentioned technologies have been developed with the aim of reducing processing time, improving process efficiency and, in many cases, the heating uniformity. In the area of innovative heating, electro-heating accounts for a considerable portion of both the scientific literature and commercial applications, and can be subdivided into either direct electro-heating (as in the case of OH heating), where electrical current is applied directly to the food, or indirect electro-heating (e.g. MW and RF heating), where the electrical energy is first converted to electromagnetic radiation which subsequently generates heat within the product. New software packages, which make it easier to solve PDE-based mathematical models, and new computers, with larger RAM and more efficient CPU performance, have fostered increasing interest in modelling transport phenomena in systems and processes - such as those encountered in food processing - that can be complex in terms of geometry, composition and boundary conditions, but also - as in the case of electro-heating-assisted applications - in terms of interaction with other physical phenomena such as the displacement of electric or magnetic fields. This paper describes the approaches used in modelling transport phenomena in a multi-physics context such as RF, MW and OH assisted heating.

  6. Modelling transport phenomena in a multi-physics context

    International Nuclear Information System (INIS)

    Marra, Francesco

    2015-01-01

    Innovative heating research on cooking, pasteurization/sterilization, defrosting, thawing and drying often focuses on the assessment of processing time, the evaluation of heating uniformity, the impact on quality attributes of the final product, and the energy efficiency of these heating processes. During the last twenty years, so-called electro-heating processes (radio-frequency - RF, microwave - MW and ohmic - OH) have gained wide interest in industrial food processing, and many applications using these technologies have been developed with the aim of reducing processing time and improving process efficiency and, in many cases, heating uniformity. In the area of innovative heating, electro-heating accounts for a considerable portion of both the scientific literature and commercial applications. It can be subdivided into direct electro-heating (as in OH heating), where electrical current is applied directly to the food, and indirect electro-heating (e.g. MW and RF heating), where the electrical energy is first converted to electromagnetic radiation which subsequently generates heat within the product. New software packages, which ease the solution of PDE-based mathematical models, and new computers, with larger RAM and more efficient CPUs, have fostered a growing interest in modelling transport phenomena in systems and processes - such as those encountered in food processing - that can be complex in terms of geometry, composition and boundary conditions, but also - as in the case of electro-heating assisted applications - in terms of interaction with other physical phenomena such as the distribution of electric or magnetic fields. This paper describes the approaches used in modelling transport phenomena in a multi-physics context such as RF, MW and OH assisted heating.

  7. Performance evaluation of throughput computing workloads using multi-core processors and graphics processors

    Science.gov (United States)

    Dave, Gaurav P.; Sureshkumar, N.; Blessy Trencia Lincy, S. S.

    2017-11-01

    The current trend in processor manufacturing focuses on multi-core architectures rather than increasing the clock speed for performance improvement. Graphics processors have become commodity hardware for providing fast co-processing in computer systems. Developments in IoT, social networking web applications, and big data have created huge demand for data processing activities, and such throughput-intensive applications inherently contain data-level parallelism, which is well suited to SIMD-architecture-based GPUs. This paper reviews the architectural aspects of multi/many-core processors and graphics processors. Several case studies are presented to compare the performance of throughput computing applications using shared-memory programming in OpenMP and CUDA API based programming.
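
    As a minimal sketch of the data-level parallelism referred to above, the following fragment maps an independent per-element kernel over a data set with a process pool, conceptually similar to an OpenMP parallel-for; a CUDA version would launch the same kernel across GPU threads. The kernel and data are invented for illustration.

```python
import math
from multiprocessing import Pool

def kernel(x):
    # A trivially parallel, throughput-style computation (illustrative)
    return math.sqrt(x) * math.sin(x)

if __name__ == "__main__":
    data = [float(i) for i in range(100_000)]
    serial = [kernel(x) for x in data]          # serial baseline
    with Pool() as pool:                        # data-parallel map over worker processes
        parallel = pool.map(kernel, data, chunksize=1_000)
    assert serial == parallel                   # same results: iterations are independent
```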

  8. Infinite elements for soil-structure interaction analysis in multi-layered halfspaces

    International Nuclear Information System (INIS)

    Yun, Chung Bang; Kim, Jae Min; Yang, Shin Chu

    1994-01-01

    This paper presents the theoretical aspects of a computer code (KIESSI) for soil-structure interaction analysis in a multi-layered halfspace using infinite elements. The shape functions of the infinite elements are derived from approximate expressions of the analytical solutions. Three different infinite elements are developed: the horizontal, the vertical and the corner infinite elements (HIE, VIE and CIE). Numerical example analyses are presented to demonstrate the effectiveness of the proposed infinite elements.

  9. The challenges of developing computational physics: the case of South Africa

    International Nuclear Information System (INIS)

    Salagaram, T; Chetty, N

    2013-01-01

    Most modern scientific research problems are complex and interdisciplinary in nature. It is impossible to study such problems in detail without the use of computation in addition to theory and experiment. Although it is widely agreed that students should be introduced to computational methods at the undergraduate level, it remains a challenge to do this in a full traditional undergraduate curriculum. In this paper, we report on a survey that we conducted of undergraduate physics curricula in South Africa to determine the content and the approach taken in the teaching of computational physics. We also considered the pedagogy of computational physics at the postgraduate and research levels at various South African universities, research facilities and institutions. We conclude that the state of computational physics training in South Africa, especially at the undergraduate teaching level, is generally weak and needs to be given more attention at all universities. Failure to do so will impact negatively on the country's capacity to grow its endeavours in the field of computational sciences, with negative impacts on research, commerce and industry.

  10. Interferences and events on epistemic shifts in physics through computer simulations

    CERN Document Server

    Warnke, Martin

    2017-01-01

    Computer simulations are omnipresent media in today's knowledge production. For scientific endeavors such as the detection of gravitational waves and the exploration of subatomic worlds, simulations are essential; however, the epistemic status of computer simulations is rather controversial as they are neither just theory nor just experiment. Therefore, computer simulations have challenged well-established insights and common scientific practices as well as our very understanding of knowledge. This volume contributes to the ongoing discussion on the epistemic position of computer simulations in a variety of physical disciplines, such as quantum optics, quantum mechanics, and computational physics. Originating from an interdisciplinary event, it shows that accounts of contemporary physics can constructively interfere with media theory, philosophy, and the history of science.

  11. IV International Conference on Computer Algebra in Physical Research. Collection of abstracts

    International Nuclear Information System (INIS)

    Rostovtsev, V.A.

    1990-01-01

    The abstracts of the reports presented at the IV International Conference on Computer Algebra in Physical Research are given. The capabilities of applying computers to algebraic computations in high energy physics and quantum field theory are discussed. Particular attention is paid to software for the REDUCE computer algebra system.

  12. An intelligent multi-media human-computer dialogue system

    Science.gov (United States)

    Neal, J. G.; Bettinger, K. E.; Byoun, J. S.; Dobes, Z.; Thielman, C. Y.

    1988-01-01

    Sophisticated computer systems are being developed to assist in the human decision-making process for very complex tasks performed under stressful conditions. The human-computer interface is a critical factor in these systems. The human-computer interface should be simple and natural to use, require a minimal learning period, assist the user in accomplishing his task(s) with a minimum of distraction, present output in a form that best conveys information to the user, and reduce cognitive load for the user. In pursuit of this ideal, the Intelligent Multi-Media Interfaces project is devoted to the development of interface technology that integrates speech, natural language text, graphics, and pointing gestures for human-computer dialogues. The objective of the project is to develop interface technology that uses the media/modalities intelligently in a flexible, context-sensitive, and highly integrated manner modelled after the manner in which humans converse in simultaneous coordinated multiple modalities. As part of the project, a knowledge-based interface system, called CUBRICON (CUBRC Intelligent CONversationalist) is being developed as a research prototype. The application domain being used to drive the research is that of military tactical air control.

  13. Human Pose Estimation and Activity Recognition from Multi-View Videos

    DEFF Research Database (Denmark)

    Holte, Michael Boelstoft; Tran, Cuong; Trivedi, Mohan

    2012-01-01

    …human–computer interaction (HCI), assisted living, gesture-based interactive games, intelligent driver assistance systems, movies, 3D TV and animation, physical therapy, autonomous mental development, smart environments, sport motion analysis, video surveillance, and video annotation. Next, we review and categorize recent approaches which have been proposed to comply with these requirements. We report a comparison of the most promising methods for multi-view human action recognition using two publicly available datasets: the INRIA Xmas Motion Acquisition Sequences (IXMAS) Multi-View Human Action Dataset, and the i3DPost Multi-View Human Action and Interaction Dataset. To compare the proposed methods, we give a qualitative assessment of methods which cannot be compared quantitatively, and analyze some prominent 3D pose estimation techniques for applications where not only the performed action needs to be identified but a more…

  14. Multi-index Stochastic Collocation Convergence Rates for Random PDEs with Parametric Regularity

    KAUST Repository

    Haji Ali, Abdul Lateef; Nobile, Fabio; Tamellini, Lorenzo; Tempone, Raul

    2016-01-01

    We analyze the recent Multi-index Stochastic Collocation (MISC) method for computing statistics of the solution of a partial differential equation (PDE) with random data, where the random coefficient is parametrized by means of a countable sequence of terms in a suitable expansion. MISC is a combination technique based on mixed differences of spatial approximations and quadratures over the space of random data, and naturally, the error analysis uses the joint regularity of the solution with respect to both the variables in the physical domain and parametric variables. In MISC, the number of problem solutions performed at each discretization level is not determined by balancing the spatial and stochastic components of the error, but rather by suitably extending the knapsack-problem approach employed in the construction of the quasi-optimal sparse-grids and Multi-index Monte Carlo methods, i.e., we use a greedy optimization procedure to select the most effective mixed differences to include in the MISC estimator. We apply our theoretical estimates to a linear elliptic PDE in which the log-diffusion coefficient is modeled as a random field, with a covariance similar to a Matérn model, whose realizations have spatial regularity determined by a scalar parameter. We conduct a complexity analysis based on a summability argument showing algebraic rates of convergence with respect to the overall computational work. The rate of convergence depends on the smoothness parameter, the physical dimensionality and the efficiency of the linear solver. Numerical experiments show the effectiveness of MISC in this infinite dimensional setting compared with the Multi-index Monte Carlo method and compare the convergence rate against the rates predicted in our theoretical analysis. © 2016 SFoCM
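
    For orientation, a schematic of the combination-technique structure named in the abstract is given below; the notation is generic and may differ from the paper's exact formulation.

```latex
% Schematic MISC estimator (generic combination-technique form).
% F_{\alpha,\beta} is the quantity of interest computed with spatial
% discretization levels \alpha and stochastic quadrature levels \beta.
\[
  \mathcal{M}_{\mathcal{I}}[F] \;=\; \sum_{(\alpha,\beta)\in\mathcal{I}}
      \Delta^{\mathrm{mix}} F_{\alpha,\beta},
  \qquad
  \Delta^{\mathrm{mix}} \;=\; \Big(\bigotimes_i \Delta_{\alpha_i}\Big)
      \otimes \Big(\bigotimes_j \Delta_{\beta_j}\Big),
\]
% where each \Delta is a first-order difference between consecutive
% levels and the index set \mathcal{I} is chosen greedily
% (knapsack-style) to maximize error reduction per unit work.
```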

  15. Multi-index Stochastic Collocation Convergence Rates for Random PDEs with Parametric Regularity

    KAUST Repository

    Haji Ali, Abdul Lateef

    2016-08-26

    We analyze the recent Multi-index Stochastic Collocation (MISC) method for computing statistics of the solution of a partial differential equation (PDE) with random data, where the random coefficient is parametrized by means of a countable sequence of terms in a suitable expansion. MISC is a combination technique based on mixed differences of spatial approximations and quadratures over the space of random data, and naturally, the error analysis uses the joint regularity of the solution with respect to both the variables in the physical domain and parametric variables. In MISC, the number of problem solutions performed at each discretization level is not determined by balancing the spatial and stochastic components of the error, but rather by suitably extending the knapsack-problem approach employed in the construction of the quasi-optimal sparse-grids and Multi-index Monte Carlo methods, i.e., we use a greedy optimization procedure to select the most effective mixed differences to include in the MISC estimator. We apply our theoretical estimates to a linear elliptic PDE in which the log-diffusion coefficient is modeled as a random field, with a covariance similar to a Matérn model, whose realizations have spatial regularity determined by a scalar parameter. We conduct a complexity analysis based on a summability argument showing algebraic rates of convergence with respect to the overall computational work. The rate of convergence depends on the smoothness parameter, the physical dimensionality and the efficiency of the linear solver. Numerical experiments show the effectiveness of MISC in this infinite dimensional setting compared with the Multi-index Monte Carlo method and compare the convergence rate against the rates predicted in our theoretical analysis. © 2016 SFoCM

  16. The experimental study of the effect of microwave on the physical properties of multi-walled carbon nanotubes

    Energy Technology Data Exchange (ETDEWEB)

    Haque, A.K.M. Mahmudul [Department of Ocean System Engineering, Gyeongsang National University, Cheondaegukchi-Gil 38, Tongyeong, Gyeongnam 650-160 (Korea, Republic of); Oh, Geum Seok; Kim, Taeoh [Department of Energy and Mechanical Engineering, Gyeongsang National University, Cheondaegukchi-Gil 38, Tongyeong, Gyeongnam 650-160 (Korea, Republic of); Kim, Junhyo [Department of Marine Engineering, Mokpo National Maritime University Haeyangdaehang-Ro 91, Mokpo-si, Jeollanam-do (Korea, Republic of); Noh, Jungpil; Huh, Sunchul; Chung, Hanshik [Department of Energy and Mechanical Engineering, Gyeongsang National University, Institute of Marine Industry, Cheondaegukchi-Gil 38, Tongyeong, Gyeongnam 650-160 (Korea, Republic of); Jeong, Hyomin, E-mail: hmjeong@gnu.ac.kr [Department of Energy and Mechanical Engineering, Gyeongsang National University, Institute of Marine Industry, Cheondaegukchi-Gil 38, Tongyeong, Gyeongnam 650-160 (Korea, Republic of)

    2016-01-15

    Highlights: • We study the microwave effect on multi-walled carbon nanotubes (MWCNTs). • We examine the non-uniform heating effect on the physical structure of MWCNTs. • We examine the purification of MWCNTs by microwave. • We analyze the thermal characteristics of microwave-treated MWCNTs. - Abstract: This paper reports the effect of microwave on the physical properties of multi-walled carbon nanotubes (MWCNTs), where different power levels of microwave were applied to MWCNTs in order to apprehend the effect of microwave on MWCNTs distinctly. Low-energy ball milling in an aqueous environment was also applied to both MWCNTs and microwave-treated MWCNTs. Temperature profile, morphological analysis by field emission scanning electron microscopy (FESEM), defect analysis by Raman spectroscopy, thermal conductivity, thermal diffusivity as well as heat transfer coefficient enhancement ratio were studied, which expose strong evidence of the effect of microwave on both the purification and dispersion properties of MWCNTs in the base fluid, distilled water. The highest thermal conductivity enhancement (6.06% at 40 °C) of the MWCNT-based nanofluid is achieved by five minutes of microwave treatment together with wet grinding at 500 rpm for two hours.

  17. The experimental study of the effect of microwave on the physical properties of multi-walled carbon nanotubes

    International Nuclear Information System (INIS)

    Haque, A.K.M. Mahmudul; Oh, Geum Seok; Kim, Taeoh; Kim, Junhyo; Noh, Jungpil; Huh, Sunchul; Chung, Hanshik; Jeong, Hyomin

    2016-01-01

    Highlights: • We study the microwave effect on multi-walled carbon nanotubes (MWCNTs). • We examine the non-uniform heating effect on the physical structure of MWCNTs. • We examine the purification of MWCNTs by microwave. • We analyze the thermal characteristics of microwave-treated MWCNTs. - Abstract: This paper reports the effect of microwave on the physical properties of multi-walled carbon nanotubes (MWCNTs), where different power levels of microwave were applied to MWCNTs in order to apprehend the effect of microwave on MWCNTs distinctly. Low-energy ball milling in an aqueous environment was also applied to both MWCNTs and microwave-treated MWCNTs. Temperature profile, morphological analysis by field emission scanning electron microscopy (FESEM), defect analysis by Raman spectroscopy, thermal conductivity, thermal diffusivity as well as heat transfer coefficient enhancement ratio were studied, which expose strong evidence of the effect of microwave on both the purification and dispersion properties of MWCNTs in the base fluid, distilled water. The highest thermal conductivity enhancement (6.06% at 40 °C) of the MWCNT-based nanofluid is achieved by five minutes of microwave treatment together with wet grinding at 500 rpm for two hours.

  18. Computational movement analysis

    CERN Document Server

    Laube, Patrick

    2014-01-01

    This SpringerBrief discusses the characteristics of spatiotemporal movement data, including uncertainty and scale. It investigates three core aspects of Computational Movement Analysis: conceptual modeling of movement and movement spaces, spatiotemporal analysis methods aiming at a better understanding of movement processes (with a focus on data mining for movement patterns), and using decentralized spatial computing methods in movement analysis. The author presents Computational Movement Analysis as an interdisciplinary umbrella for analyzing movement processes with methods from a range of fields.

  19. International MultiConference of Engineers and Computer Scientists 2013

    CERN Document Server

    Ao, Sio-Iong; Huang, Xu; Castillo, Oscar

    2014-01-01

    This volume contains revised and extended research articles written by prominent researchers who participated in the international conference on Advances in Engineering Technologies and Physical Science, held in Hong Kong, 13-15 March 2013. Topics covered include engineering physics, engineering mathematics, scientific computing, control theory, automation, artificial intelligence, electrical engineering, and industrial applications. The book offers the state of the art of tremendous advances in engineering technologies and physical science and applications, and also serves as an excellent reference work for researchers and graduate students working with/on engineering technologies and physical science and applications.

  20. Multi-Scale Factor Analysis of High-Dimensional Brain Signals

    KAUST Repository

    Ting, Chee-Ming

    2017-05-18

    In this paper, we develop an approach to modeling high-dimensional networks with a large number of nodes arranged in a hierarchical and modular structure. We propose a novel multi-scale factor analysis (MSFA) model which partitions the massive spatio-temporal data defined over the complex networks into a finite set of regional clusters. To achieve further dimension reduction, we represent the signals in each cluster by a small number of latent factors. The correlation matrix for all nodes in the network is approximated by lower-dimensional sub-structures derived from the cluster-specific factors. To estimate regional connectivity between numerous nodes (within each cluster), we apply principal components analysis (PCA) to produce factors which are derived as the optimal reconstruction of the observed signals under the squared loss. Then, we estimate global connectivity (between clusters or sub-networks) based on the factors across regions using the RV-coefficient as the cross-dependence measure. This gives a reliable and computationally efficient multi-scale analysis of both regional and global dependencies of the large networks. The proposed novel approach is applied to estimate brain connectivity networks using functional magnetic resonance imaging (fMRI) data. Results on resting-state fMRI reveal interesting modular and hierarchical organization of human brain networks during rest.
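
    A minimal numerical sketch of the two building blocks named above, PCA-derived cluster factors and the RV coefficient, is given below; the data are synthetic and all names are hypothetical, not from the paper's implementation.

```python
import numpy as np

def cluster_factors(X, n_factors):
    """PCA factors for one cluster; X has shape (time, nodes)."""
    Xc = X - X.mean(axis=0)
    U, S, _ = np.linalg.svd(Xc, full_matrices=False)  # SVD-based PCA
    return U[:, :n_factors] * S[:n_factors]           # leading component scores

def rv_coefficient(A, B):
    """RV coefficient between two factor sets of shape (time, factors)."""
    A = A - A.mean(axis=0)
    B = B - B.mean(axis=0)
    Sa, Sb = A @ A.T, B @ B.T
    return np.trace(Sa @ Sb) / np.sqrt(np.trace(Sa @ Sa) * np.trace(Sb @ Sb))

rng = np.random.default_rng(0)
region1 = rng.standard_normal((200, 50))   # 50 nodes in cluster 1 (synthetic)
region2 = rng.standard_normal((200, 40))   # 40 nodes in cluster 2 (synthetic)
f1 = cluster_factors(region1, 3)
f2 = cluster_factors(region2, 3)
print(f"between-cluster RV coefficient: {rv_coefficient(f1, f2):.3f}")
```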

  1. Progress in Computational Physics (PiCP) Volume 1 Wave Propagation in Periodic Media

    CERN Document Server

    Ehrhardt, Matthias

    2010-01-01

    Progress in Computational Physics is a new e-book series devoted to recent research trends in computational physics. It contains chapters contributed by outstanding experts of modeling of physical problems. The series focuses on interdisciplinary computational perspectives of current physical challenges, new numerical techniques for the solution of mathematical wave equations and describes certain real-world applications. With the help of powerful computers and sophisticated methods of numerical mathematics it is possible to simulate many ultramodern devices, e.g. photonic crystals structures,

  2. Multi-GPU parallel algorithm design and analysis for improved inversion of probability tomography with gravity gradiometry data

    Science.gov (United States)

    Hou, Zhenlong; Huang, Danian

    2017-09-01

    In this paper, we first study the inversion of probability tomography (IPT) with gravity gradiometry data. The spatial resolution of the results is improved by multi-tensor joint inversion, a depth-weighting matrix and other methods. To address the problems posed by big data in exploration, we present a parallel algorithm and a performance analysis combining Compute Unified Device Architecture (CUDA) with Open Multi-Processing (OpenMP) for Graphics Processing Unit (GPU) acceleration. In tests on a synthetic model and real data from Vinton Dome, we obtain improved results, and the improved inversion algorithm is shown to be effective and feasible. The performance of the parallel algorithm we designed is better than that of other CUDA implementations; the maximum speedup can be more than 200. In the performance analysis, multi-GPU speedup and multi-GPU efficiency are applied to analyze the scalability of the multi-GPU programs. The designed parallel algorithm is demonstrated to be able to process larger scales of data, and the new analysis method is practical.
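
    The scalability metrics mentioned above, speedup and multi-GPU efficiency, can be computed as follows; the timings are invented placeholders, not measurements from the paper.

```python
# Hypothetical wall-clock timings in seconds (replace with measured values)
t_serial = 820.0
t_gpu = {1: 9.6, 2: 5.1, 4: 2.8}           # times on 1, 2 and 4 GPUs (assumed)

for n, t in t_gpu.items():
    speedup = t_serial / t                  # speedup over the serial CPU run
    efficiency = (t_gpu[1] / t) / n         # multi-GPU efficiency relative to 1 GPU
    print(f"{n} GPU(s): speedup = {speedup:6.1f}, multi-GPU efficiency = {efficiency:.2f}")
```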

  3. Physics, Computer Science and Mathematics Division annual report, 1 January-31 December 1983

    Energy Technology Data Exchange (ETDEWEB)

    Jackson, J.D.

    1984-08-01

    This report summarizes the research performed in the Physics, Computer Science and Mathematics Division of the Lawrence Berkeley Laboratory during calendar year 1983. The major activity of the Division is research in high-energy physics, both experimental and theoretical, and research and development in associated technologies. A smaller, but still significant, program is in computer science and applied mathematics. During 1983 there were approximately 160 people in the Division active in or supporting high-energy physics research, including about 40 graduate students. In computer science and mathematics, the total staff, including students and faculty, was roughly 50. Because of the creation in late 1983 of a Computing Division at LBL and the transfer of the Computer Science activities to the new Division, this annual report is the last from the Physics, Computer Science and Mathematics Division. In December 1983 the Division reverted to its historic name, the Physics Division. Its future annual reports will document high energy physics activities and also those of its Mathematics Department.

  4. Physics, Computer Science and Mathematics Division annual report, 1 January-31 December 1983

    International Nuclear Information System (INIS)

    Jackson, J.D.

    1984-08-01

    This report summarizes the research performed in the Physics, Computer Science and Mathematics Division of the Lawrence Berkeley Laboratory during calendar year 1983. The major activity of the Division is research in high-energy physics, both experimental and theoretical, and research and development in associated technologies. A smaller, but still significant, program is in computer science and applied mathematics. During 1983 there were approximately 160 people in the Division active in or supporting high-energy physics research, including about 40 graduate students. In computer science and mathematics, the total staff, including students and faculty, was roughly 50. Because of the creation in late 1983 of a Computing Division at LBL and the transfer of the Computer Science activities to the new Division, this annual report is the last from the Physics, Computer Science and Mathematics Division. In December 1983 the Division reverted to its historic name, the Physics Division. Its future annual reports will document high energy physics activities and also those of its Mathematics Department

  5. Volumetric analysis of coronary plaque characterization in patients with metabolic syndrome using 64-slice multi-detector computed tomography

    International Nuclear Information System (INIS)

    Arai, Kosuke; Ishii, Hideki; Amano, Tetsuya

    2010-01-01

    Metabolic syndrome (MetS) is associated with adverse cardiovascular events and mortality, where acute coronary syndrome significantly impacts on mortality and morbidity. Moreover, evidence has accumulated that lipid-rich plaque might play a critical role in acute coronary syndrome. The study population consisted of 94 patients with suspected angina pectoris who underwent multi-detector computed tomography (MDCT). Of those, we identified 41 with MetS. In the MDCT analysis, low-density plaque volume (LDPV) (42±28 vs 24±18 mm³, P=0.0003), moderate-density plaque volume (105±41 vs 82±33 mm³, P=0.003), total plaque volume (164±70 vs 118±59 mm³, P=0.0008) and %LDPV (24.2±10.0 vs 18.3±7.1%, P=0.01) were significantly increased in the MetS group compared to the non-MetS group. Multivariate linear regression analysis after adjusting for confounding variables revealed that MetS was significantly correlated with an increase in %LDPV (β=0.48, P=0.0001). Multivariate logistic regression analysis for lipid-rich plaque after adjusting for confounding variables indicated that MetS was significantly associated with lipid-rich plaque (odds ratio: 5.99, 95% confidence intervals: 1.94-18.6, P=0.002). Patients with MetS were strongly associated with a lipid-rich composition in their coronary plaque, as detected by MDCT. (author)

  6. Computer algebra as a research tool in physics

    International Nuclear Information System (INIS)

    Drouffe, J.M.

    1985-04-01

    The progress of computer algebra observed during these last years has certainly had an impact on physics. I want to clarify the role of these new techniques in this application domain and to analyze their present limitations. In Section 1, I briefly describe the use of algebraic manipulation programs at the elementary level. The numerical and symbolic solutions of problems are compared in Section 2. Section 3 is devoted to a prospective view of the use of computer algebra at the highest level, as an "intelligent" system. I recall in Section 4 what is required from a system to be used in physics.

  7. Multi-scale analysis of nuclear reactor thermal-hydraulics-first applications using the NEPTUNE platform

    International Nuclear Information System (INIS)

    Guelfi, A.; Boucker, M.; Mimouni, S.; Bestion, D.; Boudier, P.

    2005-01-01

    The NEPTUNE project aims at building a new two-phase flow thermal-hydraulics platform for nuclear reactor simulation. EDF (Electricite de France) and CEA (Commissariat a l'Energie Atomique), with the co-sponsorship of IRSN (Institut de Radioprotection et Surete Nucleaire) and FRAMATOME-ANP, are jointly developing the NEPTUNE multi-scale platform, which includes new physical models and numerical methods for each of the computing scales. One usually distinguishes three different scales for industrial simulations: the 'system' scale, the 'component' scale (subchannel analysis) and CFD (Computational Fluid Dynamics). In addition, DNS (Direct Numerical Simulation) can provide information at a smaller scale that can be useful for the development of the averaged scales. The NEPTUNE project also includes work on software architecture and research on new numerical methods for coupling codes, since both are required to improve industrial calculations. All these R and D challenges have been defined in order to meet industrial needs and the underlying stakes (mainly the competitiveness and the safety of Nuclear Power Plants). This paper focuses on three high-priority needs: DNB (Departure from Nucleate Boiling) prediction, directly linked to fuel performance; PTS (Pressurized Thermal Shock), a key issue when studying the lifespan of critical components; and LBLOCA (Large Break Loss of Coolant Accident), a reference accident for safety studies. For each of these industrial applications, we provide a review of the latest developments within the NEPTUNE platform and we present the first results. Particular attention is also given to physical validation and the needs for further experimental data. (authors)

  8. ANALYSIS OF ENVIRONMENTAL FRAGILITY USING MULTI-CRITERIA ANALYSIS (MCE) FOR INTEGRATED LANDSCAPE ASSESSMENT

    Directory of Open Access Journals (Sweden)

    Abimael Cereda Junior

    2014-01-01

    Full Text Available Geographic Information Systems brought greater possibilities to the representation and interpretation of the landscape as well as to integrated analysis. However, this approach does not dispense with technical and methodological substantiation for reaching the computational universe. This work is grounded in ecodynamics and the empirical analysis of natural and anthropogenic environmental fragility, and aims to propose and present an integrated paradigm of Multi-criteria Analysis and a Fuzzy Logic Model of Environmental Fragility, taking as a case study the Basin of Monjolinho Stream in São Carlos-SP. The use of this methodology allowed for a reduction in the subjectivity of the decision criteria, whose factors might have their cartographic expression, respecting the complex integrated landscape.
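
    A minimal sketch of a fuzzy multi-criteria overlay of the general kind described above, assuming two synthetic criterion rasters and analyst-chosen weights (all names and values are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
slope = rng.random((4, 4))        # synthetic criterion raster 1, scaled 0-1
veg_loss = rng.random((4, 4))     # synthetic criterion raster 2, scaled 0-1

def fuzzy_linear(x, lo, hi):
    """Linear fuzzy membership: 0 below lo, 1 above hi."""
    return np.clip((x - lo) / (hi - lo), 0.0, 1.0)

# Fuzzify each criterion, then combine with analyst-chosen weights
m_slope = fuzzy_linear(slope, 0.2, 0.8)
m_veg = fuzzy_linear(veg_loss, 0.1, 0.9)
weights = {"slope": 0.6, "veg": 0.4}   # the weighting is a modelling choice
fragility = weights["slope"] * m_slope + weights["veg"] * m_veg
print(np.round(fragility, 2))          # composite fragility surface
```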

  9. Introduction to massively-parallel computing in high-energy physics

    CERN Document Server

    AUTHOR|(CDS)2083520

    1993-01-01

    Ever since computers were first used for scientific and numerical work, there has existed an "arms race" between the technical development of faster computing hardware and the desires of scientists to solve larger problems in shorter time-scales. However, the vast leaps in processor performance achieved through advances in semi-conductor science have reached a hiatus as the technology comes up against the physical limits of the speed of light and quantum effects. This has led all high-performance computer manufacturers to turn towards a parallel architecture for their new machines. In these lectures we will introduce the history and concepts behind parallel computing, and review the various parallel architectures and software environments currently available. We will then introduce programming methodologies that allow efficient exploitation of parallel machines, and present case studies of the parallelization of typical High Energy Physics codes for the two main classes of parallel computing architecture (S...

  10. A Collaborative Digital Pathology System for Multi-Touch Mobile and Desktop Computing Platforms

    KAUST Repository

    Jeong, W.

    2013-06-13

    Collaborative slide image viewing systems are becoming increasingly important in pathology applications such as telepathology and E-learning. Despite rapid advances in computing and imaging technology, current digital pathology systems have limited performance with respect to remote viewing of whole slide images on desktop or mobile computing devices. In this paper we present a novel digital pathology client-server system that supports collaborative viewing of multi-plane whole slide images over standard networks using multi-touch-enabled clients. Our system is built upon a standard HTTP web server and a MySQL database to allow multiple clients to exchange image and metadata concurrently. We introduce a domain-specific image-stack compression method that leverages real-time hardware decoding on mobile devices. It adaptively encodes image stacks in a decorrelated colour space to achieve extremely low bitrates (0.8 bpp) with very low loss of image quality. We evaluate the image quality of our compression method and the performance of our system for diagnosis with an in-depth user study. © 2013 The Eurographics Association and John Wiley & Sons Ltd.
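
    The paper's codec itself is not reproduced here; as a generic illustration of the colour-space decorrelation it mentions, the sketch below implements the reversible integer YCoCg-R transform, a standard lifting scheme of that kind.

```python
def ycocg_r_forward(r, g, b):
    """Reversible (lifting) YCoCg-R transform; exact for integers."""
    co = r - b
    t = b + (co >> 1)
    cg = g - t
    y = t + (cg >> 1)
    return y, co, cg

def ycocg_r_inverse(y, co, cg):
    """Exact inverse of the forward lifting steps."""
    t = y - (cg >> 1)
    g = cg + t
    b = t - (co >> 1)
    r = b + co
    return r, g, b

# Round-trip check on an arbitrary pixel
assert ycocg_r_inverse(*ycocg_r_forward(200, 120, 30)) == (200, 120, 30)
```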

  11. A Collaborative Digital Pathology System for Multi-Touch Mobile and Desktop Computing Platforms

    KAUST Repository

    Jeong, W.; Schneider, J.; Hansen, A.; Lee, M.; Turney, S. G.; Faulkner-Jones, B. E.; Hecht, J. L.; Najarian, R.; Yee, E.; Lichtman, J. W.; Pfister, H.

    2013-01-01

    Collaborative slide image viewing systems are becoming increasingly important in pathology applications such as telepathology and E-learning. Despite rapid advances in computing and imaging technology, current digital pathology systems have limited performance with respect to remote viewing of whole slide images on desktop or mobile computing devices. In this paper we present a novel digital pathology client-server system that supports collaborative viewing of multi-plane whole slide images over standard networks using multi-touch-enabled clients. Our system is built upon a standard HTTP web server and a MySQL database to allow multiple clients to exchange image and metadata concurrently. We introduce a domain-specific image-stack compression method that leverages real-time hardware decoding on mobile devices. It adaptively encodes image stacks in a decorrelated colour space to achieve extremely low bitrates (0.8 bpp) with very low loss of image quality. We evaluate the image quality of our compression method and the performance of our system for diagnosis with an in-depth user study. © 2013 The Eurographics Association and John Wiley & Sons Ltd.

  12. Introducing Enabling Computational Tools to the Climate Sciences: Multi-Resolution Climate Modeling with Adaptive Cubed-Sphere Grids

    Energy Technology Data Exchange (ETDEWEB)

    Jablonowski, Christiane [Univ. of Michigan, Ann Arbor, MI (United States)

    2015-07-14

    The research investigates and advances strategies for bridging the scale discrepancies between local, regional and global phenomena in climate models without the prohibitive computational costs of global cloud-resolving simulations. In particular, the research explores new frontiers in computational geoscience by introducing high-order Adaptive Mesh Refinement (AMR) techniques into climate research. AMR and statically-adapted variable-resolution approaches represent an emerging trend for atmospheric models and are likely to become the new norm in future-generation weather and climate models. The research advances the understanding of multi-scale interactions in the climate system and showcases a pathway to model these interactions effectively with advanced computational tools, like the Chombo AMR library developed at the Lawrence Berkeley National Laboratory. The research is interdisciplinary and combines applied mathematics, scientific computing and the atmospheric sciences. In this research project, a hierarchy of high-order atmospheric models on cubed-sphere computational grids has been developed that serves as an algorithmic prototype for the finite-volume solution-adaptive Chombo-AMR approach. The foci of the investigations have been the characteristics of both static mesh adaptations and dynamically-adaptive grids that can capture flow fields of interest, like tropical cyclones. Six research themes have been chosen. These are (1) the introduction of adaptive mesh refinement techniques into the climate sciences, (2) advanced algorithms for nonhydrostatic atmospheric dynamical cores, (3) an assessment of the interplay between resolved-scale dynamical motions and subgrid-scale physical parameterizations, (4) evaluation techniques for atmospheric model hierarchies, (5) the comparison of AMR refinement strategies and (6) tropical cyclone studies with a focus on multi-scale interactions and variable-resolution modeling. The results of this research project…

  13. Computational Approaches for Integrative Analysis of the Metabolome and Microbiome

    Directory of Open Access Journals (Sweden)

    Jasmine Chong

    2017-11-01

    Full Text Available The study of the microbiome, the totality of all microbes inhabiting the host or an environmental niche, has experienced exponential growth over the past few years. The microbiome contributes functional genes and metabolites, and is an important factor for maintaining health. In this context, metabolomics is increasingly applied to complement sequencing-based approaches (marker genes or shotgun metagenomics) to enable resolution of microbiome-conferred functionalities associated with health. However, analyzing the resulting multi-omics data remains a significant challenge in current microbiome studies. In this review, we provide an overview of different computational approaches that have been used in recent years for integrative analysis of metabolome and microbiome data, ranging from statistical correlation analysis to metabolic network-based modeling approaches. Throughout the process, we strive to present a unified conceptual framework for multi-omics integration and interpretation, as well as point out potential future directions.
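
    As a minimal sketch of the statistical-correlation end of the spectrum described above, the following computes pairwise Spearman rank correlations between synthetic taxon and metabolite abundances (assuming SciPy is available; a real analysis would also correct for multiple testing and compositionality):

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(42)
n_samples = 30
taxa = rng.lognormal(size=(n_samples, 5))          # 5 microbial taxa (synthetic)
metabolites = rng.lognormal(size=(n_samples, 3))   # 3 metabolites (synthetic)
metabolites[:, 0] += 0.5 * taxa[:, 2]              # plant one true association

# Pairwise rank correlations between every taxon and metabolite
for i in range(taxa.shape[1]):
    for j in range(metabolites.shape[1]):
        rho, p = spearmanr(taxa[:, i], metabolites[:, j])
        if p < 0.05:
            print(f"taxon {i} ~ metabolite {j}: rho={rho:.2f}, p={p:.3f}")
```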

  14. Parallel Computing:. Some Activities in High Energy Physics

    Science.gov (United States)

    Willers, Ian

    This paper examines some activities in High Energy Physics that utilise parallel computing. The topic includes all computing from the proposed SIMD front end detectors, the farming applications, high-powered RISC processors and the large machines in the computer centers. We start by looking at the motivation behind using parallelism for general purpose computing. The developments around farming are then described from its simplest form to the more complex system in Fermilab. Finally, there is a list of some developments that are happening close to the experiments.

  15. Unit physics performance of a mix model in Eulerian fluid computations

    Energy Technology Data Exchange (ETDEWEB)

    Vold, Erik [Los Alamos National Laboratory; Douglass, Rod [Los Alamos National Laboratory

    2011-01-25

    In this report, we evaluate the performance of a K-L drag-buoyancy mix model, described in a reference study by Dimonte-Tipton [1] hereafter denoted as [D-T]. The model was implemented in an Eulerian multi-material AMR code, and the results are discussed here for a series of unit physics tests. The tests were chosen to calibrate the model coefficients against empirical data, principally from RT (Rayleigh-Taylor) and RM (Richtmyer-Meshkov) experiments, and the present results are compared to experiments and to results reported in [D-T]. Results show the Eulerian implementation of the mix model agrees well with expectations for test problems in which there is no convective flow of the mass averaged fluid, i.e., in RT mix or in the decay of homogeneous isotropic turbulence (HIT). In RM shock-driven mix, the mix layer moves through the Eulerian computational grid, and there are differences with the previous results computed in a Lagrange frame [D-T]. The differences are attributed to the mass averaged fluid motion and examined in detail. Shock and re-shock mix are not well matched simultaneously. Results are also presented and discussed regarding model sensitivity to coefficient values and to initial conditions (IC), grid convergence, and the generation of atomically mixed volume fractions.

  16. apeNEXT: A Multi-TFlops LQCD Computing Project

    CERN Document Server

    Alfieri, R; Onofri, E.; Bartoloni, A.; Battista, C.; Cabibbo, N.; Cosimi, M.; Lonardo, A.; Michelotti, A.; Rapuano, F.; Proietti, B.; Rossetti, D.; Sacco, G.; Tassa, S.; Torelli, M.; Vicini, P.; Boucaud, Philippe; Pene, O.; Errico, W.; Magazzu, G.; Sartori, L.; Schifano, F.; Tripiccione, R.; De Riso, P.; Petronzio, R.; Destri, C.; Frezzotti, R.; Marchesini, G.; Gensch, U.; Kretzschmann, A.; Leich, H.; Paschedag, N.; Schwendicke, U.; Simma, H.; Sommer, R.; Sulanke, K.; Wegner, P.; Pleiter, D.; Jansen, K.; Fucci, A.; Martin, B.; Pech, J.; Panizzi, E.; Petricola, A.

    2001-01-01

    This paper is a slightly modified and reduced version of the proposal of the apeNEXT project, which was submitted to DESY and INFN in spring 2000. It presents the basic motivations and ideas of a next-generation lattice QCD (LQCD) computing project, whose goal is the construction and operation of several large scale Multi-TFlops LQCD engines, providing an integrated peak performance of tens of TFlops, and a sustained (double precision) performance on key LQCD kernels of about 50% of peak speed.

  17. Multi-objective and multi-physics optimization methodology for SFR core: application to CFV concept

    International Nuclear Information System (INIS)

    Fabbris, Olivier

    2014-01-01

    Nuclear reactor core design is a highly multidisciplinary task where neutronics, thermal-hydraulics, fuel thermo-mechanics and fuel cycle are involved. The problem is moreover multi-objective (several performance measures) and highly dimensional (several tens of design parameters). As the reference deterministic calculation codes for core characterization require substantial computing resources, the classical design method is not well suited to investigating and optimizing new innovative core concepts. To cope with these difficulties, a new methodology has been developed in this thesis. Our work is based on the development and validation of simplified neutronics and thermal-hydraulics calculation schemes allowing the full characterization of a Sodium-cooled Fast Reactor core regarding both neutronics performance and behavior during thermal-hydraulic dimensioning transients. The developed methodology uses surrogate models (or meta-models) able to replace the neutronics and thermal-hydraulics calculation chain. Advanced mathematical methods for the design of experiments, and for building and validating meta-models, allow this calculation chain to be substituted by regression models with high prediction capabilities. The methodology is applied over a very large design space to a challenging core called CFV (French acronym for low void effect core) with a large gain on the sodium void effect. Global sensitivity analysis identifies the design parameters significant for the core design and its behavior during unprotected transients, which can lead to severe accidents. Multi-objective optimization leads to alternative core configurations with significantly improved performance. Validation results demonstrate the relevance of the methodology at the pre-design stage of a Sodium-cooled Fast Reactor core. (author) [fr]
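
    A toy sketch of the surrogate-model workflow described above (design of experiments, meta-model fit, optimization on the meta-model, validation), using a one-parameter polynomial surrogate in place of the thesis's high-dimensional meta-models; everything here is illustrative.

```python
import numpy as np

def expensive_code(x):
    """Stand-in for a costly physics calculation chain (toy)."""
    return np.sin(3 * x) + 0.5 * x**2

# 1) Design of experiments: a few samples of the design space
x_train = np.linspace(0.0, 2.0, 8)
y_train = expensive_code(x_train)

# 2) Build a cheap surrogate (here: cubic polynomial regression)
surrogate = np.poly1d(np.polyfit(x_train, y_train, deg=3))

# 3) Optimize on the surrogate instead of the expensive code
x_dense = np.linspace(0.0, 2.0, 2001)
x_best = x_dense[np.argmin(surrogate(x_dense))]

# 4) Validate the surrogate optimum with one true evaluation
print(f"surrogate optimum x = {x_best:.3f}, true value = {expensive_code(x_best):.3f}")
```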

  18. Multi-parameter CAMAC compatible ADC scanner

    Energy Technology Data Exchange (ETDEWEB)

    Midttun, G J; Ingebretsen, F [Oslo Univ. (Norway). Fysisk Inst.]; Johnsen, P J [Norsk Data A.S., Box 163, Oekern, Oslo 5 (Norway)]

    1979-02-15

    A fast ADC scanner for multi-parameter nuclear physics experiments is described. The scanner is based on a standard CAMAC crate, and data from several different experiments can be handled simultaneously through a direct memory access (DMA) channel. The implementation on a PDP-7 computer is outlined.

  19. Computational cost of isogeometric multi-frontal solvers on parallel distributed memory machines

    KAUST Repository

    Woźniak, Maciej; Paszyński, Maciej R.; Pardo, D.; Dalcin, Lisandro; Calo, Victor M.

    2015-01-01

    This paper derives theoretical estimates of the computational cost for an isogeometric multi-frontal direct solver executed on parallel distributed-memory machines. We show theoretically that for C^{p-1} global continuity of the isogeometric solution…

  20. UWALK: the development of a multi-strategy, community-wide physical activity program.

    Science.gov (United States)

    Jennings, Cally A; Berry, Tanya R; Carson, Valerie; Culos-Reed, S Nicole; Duncan, Mitch J; Loitz, Christina C; McCormack, Gavin R; McHugh, Tara-Leigh F; Spence, John C; Vallance, Jeff K; Mummery, W Kerry

    2017-03-01

    UWALK is a multi-strategy, multi-sector, theory-informed, community-wide approach using e and mHealth to promote physical activity in Alberta, Canada. The aim of UWALK is to promote physical activity, primarily via the accumulation of steps and flights of stairs, through a single over-arching brand. This paper describes the development of the UWALK program. A social ecological model and the social cognitive theory guided the development of key strategies, including the marketing and communication activities, establishing partnerships with key stakeholders, and e and mHealth programs. The program promotes the use of physical activity monitoring devices to self-monitor physical activity. This includes pedometers, electronic devices, and smartphone applications. In addition to entering physical activity data manually, the e and mHealth program provides the function for objective data to be automatically uploaded from select electronic devices (Fitbit®, Garmin and the smartphone application Moves). The RE-AIM framework is used to guide the evaluation of UWALK. Funding for the program commenced in February 2013. The UWALK brand was introduced on April 12, 2013, with the official launch, including the UWALK website, on September 20, 2013. This paper describes the development and evaluation framework of a physical activity promotion program. This program has the potential for population-level dissemination and uptake of an ecologically valid physical activity promotion program that is evidence-based and theoretically framed.

  1. The challenge of quantum computer simulations of physical phenomena

    International Nuclear Information System (INIS)

    Ortiz, G.; Knill, E.; Gubernatis, J.E.

    2002-01-01

    The goal of physics simulation using controllable quantum systems ('physics imitation') is to exploit quantum laws to advantage, and thus accomplish efficient simulation of physical phenomena. In this Note, we discuss the fundamental concepts behind this paradigm of information processing, such as the connection between models of computation and physical systems. The experimental simulation of a toy quantum many-body problem is described

  2. Development of computer code models for analysis of subassembly voiding in the LMFBR

    International Nuclear Information System (INIS)

    Hinkle, W.

    1979-12-01

    The research program discussed in this report was started in FY1979 under the combined sponsorship of the US Department of Energy (DOE), General Electric (GE) and Hanford Engineering Development Laboratory (HEDL). The objective of the program is to develop multi-dimensional computer codes which can be used for the analysis of subassembly voiding incoherence under postulated accident conditions in the LMFBR. Two codes are being developed in parallel. The first will use a two fluid (6 equation) model which is more difficult to develop but has the potential for providing a code with the utmost in flexibility and physical consistency for use in the long term. The other will use a mixture (< 6 equation) model which is less general but may be more amenable to interpretation and use of experimental data and therefore, easier to develop for use in the near term. To assure that the models developed are not design dependent, geometries and transient conditions typical of both foreign and US designs are being considered

  3. Pervasive Brain Monitoring and Data Sharing based on Multi-tier Distributed Computing and Linked Data Technology

    Directory of Open Access Journals (Sweden)

    John Kar-Kin Zao

    2014-06-01

    Full Text Available EEG-based brain-computer interfaces (BCI) are facing grand challenges in their real-world applications. The technical difficulties in developing truly wearable multi-modal BCI systems that are capable of making reliable real-time predictions of users' cognitive states under dynamic real-life situations may appear at times almost insurmountable. Fortunately, recent advances in miniature sensors, wireless communication and distributed computing technologies have offered promising ways to bridge these chasms. In this paper, we report our attempt to develop a pervasive on-line BCI system by employing state-of-the-art technologies such as multi-tier fog and cloud computing, semantic Linked Data search and adaptive prediction/classification models. To verify our approach, we implement a pilot system using wireless dry-electrode EEG headsets and MEMS motion sensors as the front-end devices, Android mobile phones as the personal user interfaces, compact personal computers as the near-end fog servers and the computer clusters hosted by the Taiwan National Center for High-performance Computing (NCHC) as the far-end cloud servers. We succeeded in conducting synchronous multi-modal global data streaming in March 2013 and then running a multi-player on-line BCI game in September 2013. We are currently working with the ARL Translational Neuroscience Branch and the UCSD Movement Disorder Center to use our system in real-life personal stress and in-home Parkinson's disease patient monitoring experiments. We shall proceed to develop a necessary BCI ontology and add automatic semantic annotation and progressive model refinement capability to our system.

  4. Advanced computational workflow for the multi-scale modeling of the bone metabolic processes.

    Science.gov (United States)

    Dao, Tien Tuan

    2017-06-01

    Multi-scale modeling of the musculoskeletal system plays an essential role in the deep understanding of the complex mechanisms underlying biological phenomena and processes such as bone metabolic processes. Current multi-scale models suffer from the isolation of sub-models at each anatomical scale. The objective of the present work was to develop a new fully integrated computational workflow for simulating bone metabolic processes at multiple scales. The organ-level model employs multi-body dynamics to estimate body boundary and loading conditions from body kinematics. The tissue-level model uses the finite element method to estimate tissue deformation and mechanical loading under body loading conditions. Finally, the cell-level model includes a bone remodeling mechanism through an agent-based simulation under tissue loading. A case study on the bone remodeling process located in the human jaw was performed and presented. The developed multi-scale model of the human jaw was validated using literature-based data at each anatomical level. Simulation outcomes fall within literature-based ranges of values for estimated muscle force, tissue loading and cell dynamics during the bone remodeling process. This study opens perspectives for accurately simulating bone metabolic processes using a fully integrated computational workflow, leading to a better understanding of musculoskeletal system function across multiple length scales, as well as providing new informative data for clinical decision support and industrial applications.
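
    A structural sketch of chaining the three anatomical scales into one workflow, with deliberately toy models at each stage; all functions, parameters and the remodeling rule below are hypothetical, not the author's.

```python
def organ_level(kinematics):
    """Multi-body dynamics stage: body kinematics -> joint/muscle loads."""
    return {"joint_load_N": 450.0 * kinematics["activity_factor"]}

def tissue_level(loads):
    """Finite-element stage: boundary loads -> local tissue strain."""
    return {"strain_microstrain": loads["joint_load_N"] * 2.2}

def cell_level(strain, days=30):
    """Agent-based remodeling stage: strain history -> density change (%)."""
    stimulus = strain["strain_microstrain"]
    # Toy rule: remodeling equilibrium near 1000 microstrain
    return 0.001 * (stimulus - 1000.0) * days

kin = {"activity_factor": 1.2}
change = cell_level(tissue_level(organ_level(kin)))
print(f"relative bone density change over 30 days: {change:+.2f}%")
```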

  5. Retrospective Analysis of Communication Events - Understanding the Dynamics of Collaborative Multi-Party Discourse

    Energy Technology Data Exchange (ETDEWEB)

    Cowell, Andrew J.; Haack, Jereme N.; McColgin, Dave W.

    2006-06-08

    This research is aimed at understanding the dynamics of collaborative multi-party discourse across multiple communication modalities. Before we can truly make significant strides in devising collaborative communication systems, there is a need to understand how typical users utilize computationally supported communications mechanisms such as email, instant messaging, video conferencing, chat rooms, etc., both singularly and in conjunction with traditional means of communication such as face-to-face meetings, telephone calls and postal mail. Attempting to understand an individual's communications profile with access to only a single modality is challenging at best and often futile. Here, we discuss the development of RACE - Retrospective Analysis of Communications Events - a test-bed prototype to investigate issues relating to multi-modal multi-party discourse.

  6. Distributed analysis with PROOF in ATLAS collaboration

    International Nuclear Information System (INIS)

    Panitkin, S Y; Ernst, M; Ito, H; Maeno, T; Majewski, S; Rind, O; Tarrade, F; Wenaus, T; Ye, S; Benjamin, D; Montoya, G Carillo; Guan, W; Mellado, B; Xu, N; Cranmer, K; Shibata, A

    2010-01-01

    The Parallel ROOT Facility (PROOF) is a distributed analysis system which allows one to exploit the inherent event-level parallelism of high energy physics data. PROOF can be configured to work with centralized storage systems, but it is especially effective together with distributed local storage systems - like Xrootd - when data are distributed over computing nodes. It works efficiently on different types of hardware and scales well from a multi-core laptop to large computing farms. From that point of view it is well suited for both large central analysis facilities and Tier 3 type analysis farms. PROOF can be used in interactive or batch-like regimes. The interactive regime allows the user to work with typically distributed data from the ROOT command prompt and get real-time feedback on analysis progress and intermediate results. We will discuss our experience with PROOF in the context of ATLAS Collaboration distributed analysis. In particular we will discuss PROOF performance in various analysis scenarios and in multi-user, multi-session environments. We will also describe PROOF integration with the ATLAS distributed data management system and prospects of running PROOF on geographically distributed analysis farms.

  7. Distributed analysis with PROOF in ATLAS collaboration

    Energy Technology Data Exchange (ETDEWEB)

    Panitkin, S Y; Ernst, M; Ito, H; Maeno, T; Majewski, S; Rind, O; Tarrade, F; Wenaus, T; Ye, S [Brookhaven National Laboratory, Upton, NY 11973 (United States); Benjamin, D [Duke University, Durham, NC 27708 (United States); Montoya, G Carillo; Guan, W; Mellado, B; Xu, N [University of Wisconsin-Madison, Madison, WI 53706 (United States); Cranmer, K; Shibata, A [New York University, New York, NY 10003 (United States)

    2010-04-01

    The Parallel ROOT Facility (PROOF) is a distributed analysis system which allows one to exploit the inherent event-level parallelism of high energy physics data. PROOF can be configured to work with centralized storage systems, but it is especially effective together with distributed local storage systems - like Xrootd - when data are distributed over computing nodes. It works efficiently on different types of hardware and scales well from a multi-core laptop to large computing farms. From that point of view it is well suited for both large central analysis facilities and Tier 3 type analysis farms. PROOF can be used in interactive or batch-like regimes. The interactive regime allows the user to work with typically distributed data from the ROOT command prompt and get real-time feedback on analysis progress and intermediate results. We will discuss our experience with PROOF in the context of ATLAS Collaboration distributed analysis. In particular we will discuss PROOF performance in various analysis scenarios and in multi-user, multi-session environments. We will also describe PROOF integration with the ATLAS distributed data management system and prospects of running PROOF on geographically distributed analysis farms.

  8. Perceptually-Inspired Computing

    Directory of Open Access Journals (Sweden)

    Ming Lin

    2015-08-01

    Full Text Available Human sensory systems allow individuals to see, hear, touch, and interact with the surrounding physical environment. Understanding human perception and its limits enables us to better exploit the psychophysics of human perceptual systems to design more efficient, adaptive algorithms and develop perceptually-inspired computational models. In this talk, I will survey some recent efforts on perceptually-inspired computing with applications to crowd simulation and multimodal interaction. In particular, I will present data-driven personality modeling based on the results of user studies, example-guided physics-based sound synthesis using auditory perception, as well as perceptually-inspired simplification for multimodal interaction. These perceptually guided principles can be used to accelerate multi-modal interaction and visual computing, thereby creating more natural human-computer interaction and providing more immersive experiences. I will also present their use in interactive applications for entertainment, such as video games, computer animation, and shared social experience. I will conclude by discussing possible future research directions.

  9. Multi-Objective Optimal Design of a Building Envelope and Structural System Using Cyber-Physical Modeling in a Wind Tunnel

    Directory of Open Access Journals (Sweden)

    Michael L. Whiteman

    2018-03-01

    Full Text Available This paper explores the use of a cyber-physical systems (CPS) “loop-in-the-model” approach to optimally design the envelope and structural system of low-rise buildings subject to wind loads. Both the components and cladding (C&C) and the main wind force resisting system (MWFRS) are considered through multi-objective optimization. The CPS approach combines the physical accuracy of wind tunnel testing and the efficiency of numerical optimization algorithms to obtain an optimal design. The approach is autonomous: experiments are executed in a boundary layer wind tunnel (BLWT), sensor feedback is monitored and analyzed by a computer, and optimization algorithms dictate physical changes to the structural model in the BLWT through actuators. To explore a CPS approach to multi-objective optimization, a low-rise building with a parapet wall of variable height is considered. In the BLWT, servo-motors are used to adjust the parapet to a particular height. Parapet walls alter the location of the roof corner vortices, reducing suction loads on the windward-facing roof corners and edges, a C&C design load. At the same time, parapet walls increase the surface area of the building, leading to an increase in demand on the MWFRS. A combination of non-stochastic and stochastic optimization algorithms was implemented to minimize the magnitude of suction and positive pressures on the roof of a low-rise building model, followed by stochastic multi-objective optimization to simultaneously minimize the magnitude of suction pressures and base shear. Experiments were conducted at the University of Florida Experimental Facility (UFEF) of the National Science Foundation's (NSF) Natural Hazard Engineering Research Infrastructure (NHERI) program.
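
    The autonomous loop described above can be condensed into pseudocode. The Python sketch below replaces the wind tunnel with a synthetic response function and uses a simple stochastic search in place of the paper's algorithms; every number and function here is invented for illustration.

      import random

      def measure_worst_suction(h_mm):
          # Stand-in for a BLWT run at parapet height h_mm: a synthetic
          # response with a minimum near 60 mm plus turbulence-like noise.
          # Purely illustrative; not the UF experiment.
          return (h_mm - 60.0) ** 2 / 100.0 + 20.0 + random.gauss(0.0, 1.0)

      # Loop: command the actuator, "measure", and let the optimizer decide
      # the next physical change to the model.
      best_h, best_f = 30.0, float("inf")
      for _ in range(200):
          cand = max(0.0, best_h + random.gauss(0.0, 5.0))  # perturb height (mm)
          f = measure_worst_suction(cand)                   # run the "experiment"
          if f < best_f:
              best_h, best_f = cand, f
      print(f"near-optimal parapet height ~ {best_h:.1f} mm")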

  10. An Introduction to Quantum Computing, Without the Physics

    OpenAIRE

    Nannicini, Giacomo

    2017-01-01

    This paper is a gentle but rigorous introduction to quantum computing intended for discrete mathematicians. Starting from a small set of assumptions on the behavior of quantum computing devices, we analyze their main characteristics, stressing the differences with classical computers, and finally describe two well-known algorithms (Simon's algorithm and Grover's algorithm) using the formalism developed in previous sections. This paper does not touch on the physics of the devices, and therefor...

  11. Complex network problems in physics, computer science and biology

    Science.gov (United States)

    Cojocaru, Radu Ionut

    There is a close relation between physics and mathematics, and the exchange of ideas between these two sciences is well established. However, until a few years ago there was no such close relation between physics and computer science. Moreover, only recently have biologists started to use methods and tools from statistical physics in order to study the behavior of complex systems. In this thesis we concentrate on applying and analyzing several methods borrowed from computer science in biology, and we also use methods from statistical physics to solve hard problems from computer science. In recent years physicists have been interested in studying the behavior of complex networks. Physics is an experimental science in which theoretical predictions are compared to experiments. In this definition, the term prediction plays a very important role: although the system is complex, it is still possible to get predictions for its behavior, but these predictions are of a probabilistic nature. Spin glasses, lattice gases and the Potts model are a few examples of complex systems in physics. Spin glasses and many frustrated antiferromagnets map exactly to computer science problems in the NP-hard class defined in Chapter 1. In Chapter 1 we discuss a common result from artificial intelligence (AI) which shows that there are some problems which are NP-complete, with the implication that these problems are difficult to solve. We introduce a few well known hard problems from computer science (Satisfiability, Coloring, Vertex Cover together with Maximum Independent Set and Number Partitioning) and then discuss their mapping to problems from physics. In Chapter 2 we provide a short review of combinatorial optimization algorithms and their applications to ground state problems in disordered systems. We discuss the cavity method initially developed for studying the Sherrington-Kirkpatrick model of spin glasses. We extend this model to the study of a specific case of spin glass on the Bethe

  12. Machine learning, computer vision, and probabilistic models in jet physics

    CERN Multimedia

    CERN. Geneva; NACHMAN, Ben

    2015-01-01

    In this talk we present recent developments in the application of machine learning, computer vision, and probabilistic models to the analysis and interpretation of LHC events. First, we will introduce the concept of jet-images and computer vision techniques for jet tagging. Jet images enabled the connection between jet substructure and tagging with the fields of computer vision and image processing for the first time, improving the performance to identify highly boosted W bosons with respect to state-of-the-art methods, and providing a new way to visualize the discriminant features of different classes of jets, adding a new capability to understand the physics within jets and to design more powerful jet tagging methods. Second, we will present Fuzzy jets: a new paradigm for jet clustering using machine learning methods. Fuzzy jets view jet clustering as an unsupervised learning task and incorporate a probabilistic assignment of particles to jets to learn new features of the jet structure. In particular, we wi...

  13. Computation of multi-region relaxed magnetohydrodynamic equilibria

    Energy Technology Data Exchange (ETDEWEB)

    Hudson, S. R.; Lazerson, S. [Princeton Plasma Physics Laboratory, P.O. Box 451, Princeton, New Jersey 08543 (United States); Dewar, R. L.; Dennis, G.; Hole, M. J.; McGann, M.; Nessi, G. von [Plasma Research Laboratory, Research School of Physics and Engineering, Australian National University, Canberra ACT 0200 (Australia)

    2012-11-15

    We describe the construction of stepped-pressure equilibria as extrema of a multi-region, relaxed magnetohydrodynamic (MHD) energy functional that combines elements of ideal MHD and Taylor relaxation, and which we call MRXMHD. The model is compatible with Hamiltonian chaos theory and allows the three-dimensional MHD equilibrium problem to be formulated in a well-posed manner suitable for computation. The energy-functional is discretized using a mixed finite-element, Fourier representation for the magnetic vector potential and the equilibrium geometry; and numerical solutions are constructed using the stepped-pressure equilibrium code, SPEC. Convergence studies with respect to radial and Fourier resolution are presented.

  14. Development of computer-based analytical tool for assessing physical protection system

    Energy Technology Data Exchange (ETDEWEB)

    Mardhi, Alim, E-mail: alim-m@batan.go.id [National Nuclear Energy Agency Indonesia, (BATAN), PUSPIPTEK area, Building 80, Serpong, Tangerang Selatan, Banten (Indonesia); Chulalongkorn University, Faculty of Engineering, Nuclear Engineering Department, 254 Phayathai Road, Pathumwan, Bangkok Thailand. 10330 (Thailand); Pengvanich, Phongphaeth, E-mail: ppengvan@gmail.com [Chulalongkorn University, Faculty of Engineering, Nuclear Engineering Department, 254 Phayathai Road, Pathumwan, Bangkok Thailand. 10330 (Thailand)

    2016-01-22

    Assessment of physical protection system effectiveness is a priority for ensuring optimum protection against unlawful acts at a nuclear facility, such as unauthorized removal of nuclear materials and sabotage of the facility itself. Since an assessment based on real exercise scenarios is costly and time-consuming, a computer-based analytical tool offers a practical way to evaluate likely threat scenarios. Several ready-to-use tools are currently available, such as EASI and SAPE; however, for our research purposes it is more suitable to have a tool that can be customized and enhanced further. In this work, we have developed a computer-based analytical tool utilizing a network methodology for modelling adversary paths. The inputs are the multiple security elements used to evaluate the effectiveness of the system's detection, delay, and response. The tool can analyze the most critical path and quantify the probability of system effectiveness as a performance measure.
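
    A minimal sketch of such a network model, assuming each path segment carries a probability of detecting the adversary: the most critical path is the one minimizing the overall detection probability, which ordinary shortest-path search finds after a logarithmic transform. The facility graph and all probabilities below are invented, and response timeliness is ignored for brevity.

      import heapq
      import math

      # Toy facility graph: nodes are protection layers, edges carry the
      # probability of detecting the adversary on that segment (invented).
      edges = {
          "offsite":  [("fence", 0.10), ("gate", 0.30)],
          "fence":    [("building", 0.40)],
          "gate":     [("building", 0.20)],
          "building": [("vault", 0.60)],
          "vault":    [],
      }

      def most_critical_path(src, dst):
          # Minimizing sum of -ln(1 - p_det) maximizes the chance of the
          # adversary slipping through undetected, i.e. the weakest path.
          dist, prev, pq = {src: 0.0}, {}, [(0.0, src)]
          while pq:
              d, u = heapq.heappop(pq)
              if d > dist.get(u, math.inf):
                  continue
              for v, p_det in edges[u]:
                  nd = d - math.log(1.0 - p_det)
                  if nd < dist.get(v, math.inf):
                      dist[v], prev[v] = nd, u
                      heapq.heappush(pq, (nd, v))
          path = [dst]
          while path[-1] != src:
              path.append(prev[path[-1]])
          # Probability of at least one detection along the weakest path,
          # a simple effectiveness measure for the protection system.
          return path[::-1], 1.0 - math.exp(-dist[dst])

      print(most_critical_path("offsite", "vault"))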

  15. Development of computer-based analytical tool for assessing physical protection system

    International Nuclear Information System (INIS)

    Mardhi, Alim; Pengvanich, Phongphaeth

    2016-01-01

    Assessment of physical protection system effectiveness is a priority for ensuring optimum protection against unlawful acts at a nuclear facility, such as unauthorized removal of nuclear materials and sabotage of the facility itself. Since an assessment based on real exercise scenarios is costly and time-consuming, a computer-based analytical tool offers a practical way to evaluate likely threat scenarios. Several ready-to-use tools are currently available, such as EASI and SAPE; however, for our research purposes it is more suitable to have a tool that can be customized and enhanced further. In this work, we have developed a computer-based analytical tool utilizing a network methodology for modelling adversary paths. The inputs are the multiple security elements used to evaluate the effectiveness of the system's detection, delay, and response. The tool can analyze the most critical path and quantify the probability of system effectiveness as a performance measure.

  16. Computational physics of electric discharges in gas flows

    CERN Document Server

    Surzhikov, Sergey T

    2012-01-01

    Gas discharges are of interest for many processes in mechanics, manufacturing, materials science and aerophysics. To understand the physics behind the phenomena is of key importance for the effective use and development of gas discharge devices. This work treats methods of computational modeling of electrodischarge processes and dynamics of partially ionized gases. These methods are necessary to tackle problems of physical mechanics, physics of gas discharges and aerophysics. Particular attention is given to a solution of two-dimensional problems of physical mechanics of glow discharges. The use o

  17. Complexity vs energy: theory of computation and theoretical physics

    International Nuclear Information System (INIS)

    Manin, Y I

    2014-01-01

    This paper is a survey based upon the talk at the satellite QQQ conference to ECM6, 3Quantum: Algebra Geometry Information, Tallinn, July 2012. It is dedicated to the analogy between the notions of complexity in theoretical computer science and energy in physics. This analogy is not metaphorical: I describe three precise mathematical contexts, suggested recently, in which mathematics related to (un)computability is inspired by and to a degree reproduces formalisms of statistical physics and quantum field theory.

  18. Benchmark of physics design of a proposed 30 MW Multi Purpose Research Reactor using a Monte Carlo code MCNP

    International Nuclear Information System (INIS)

    Singh, Tej; Kumar, Jainendra; Sharma, Archana; Singh, Kanchhi; Raina, V.K.; Srinivasan, P.

    2009-01-01

    At present, the Dhruva and Cirus reactors provide the majority of research-reactor-based experimental/irradiation facilities catering to the various needs of the vast pool of researchers in the sciences, in research and development work for nuclear power plants, and in the production of radioisotopes. With a view to further consolidating and expanding the scope of research and development in nuclear and allied sciences, a new 30 MWt Multi Purpose Research Reactor is proposed to be constructed. This paper describes some of the physics design features of this reactor obtained using the MCNP code to validate the deterministic methods. Criticality calculations for the 100 MW material testing reactor (JHR) of France and the 610 MW SAVANNAH thermal reactor were performed using the MCNP computer code to boost the confidence level in the physics design of the reactor core. (author)

  19. Computational models in physics teaching: a framework

    Directory of Open Access Journals (Sweden)

    Marco Antonio Moreira

    2012-08-01

    Full Text Available The purpose of this paper is to propose a theoretical framework to promote and assist meaningful physics learning through computational models. Our proposal is based on the use of a tool, the AVM diagram, to design educational activities involving modeling and computer simulations. The idea is to provide a starting point for the construction and implementation of didactical approaches grounded in a coherent epistemological view about scientific modeling.

  20. Generalized modeling of multi-component vaporization/condensation phenomena for multi-phase-flow analysis

    International Nuclear Information System (INIS)

    Morita, K.; Fukuda, K.; Tobita, Y.; Kondo, Sa.; Suzuki, T.; Maschek, W.

    2003-01-01

    A new multi-component vaporization/condensation (V/C) model was developed to provide a generalized model for safety analysis codes of liquid metal cooled reactors (LMRs). These codes simulate thermal-hydraulic phenomena of multi-phase, multi-component flows, which is essential to investigate core disruptive accidents of LMRs such as fast breeder reactors and accelerator driven systems. The developed model characterizes the V/C processes associated with phase transition by employing heat transfer and mass-diffusion limited models for analyses of relatively short-time-scale multi-phase, multi-component hydraulic problems, among which vaporization and condensation, or simultaneous heat and mass transfer, play an important role. The heat transfer limited model describes the non-equilibrium phase transition processes occurring at interfaces, while the mass-diffusion limited model is employed to represent effects of non-condensable gases and multi-component mixture on V/C processes. Verification of the model and method employed in the multi-component V/C model of a multi-phase flow code was performed successfully by analyzing a series of multi-bubble condensation experiments. The applicability of the model to the accident analysis of LMRs is also discussed by comparison between steam and metallic vapor systems. (orig.)
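
    For orientation, the two limiting models can be written in textbook form (a hedged sketch of the general idea, not the paper's exact closure relations): a heat-transfer-limited net phase-change rate per unit volume driven by interfacial superheat, and a mass-diffusion-limited condensation flux through a film enriched in non-condensable gas,

      \Gamma_v = \frac{\sum_i h_i A_i \,(T_i - T_{\mathrm{sat}})}{\Delta h_{lg}},
      \qquad
      \dot{m}''_{\mathrm{cond}} \simeq \rho \,\frac{D_v}{\delta}\,
      \ln\!\left( \frac{1 - \omega_{v,i}}{1 - \omega_{v,\infty}} \right),

    where h_i and A_i are the interfacial heat transfer coefficients and areas, Δh_lg is the latent heat of vaporization, D_v the vapor diffusivity, δ the diffusion film thickness, and ω_{v,i}, ω_{v,∞} the vapor mass fractions at the interface and in the bulk.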

  1. School of Analytic Computing in Theoretical High-Energy Physics

    CERN Document Server

    2015-01-01

    In recent years, huge progress has been made on computing rates for production processes of direct relevance to experiments at the Large Hadron Collider (LHC). Crucial to that remarkable advance has been our understanding of, and ability to compute, scattering amplitudes and cross sections. The aim of the School is to bring together young theorists working on the phenomenology of LHC physics with those working in more formal areas, and to provide them with the analytic tools to compute amplitudes in gauge theories. The school is addressed to Ph.D. students and post-docs in Theoretical High-Energy Physics. 30 hours of lectures and 4 hours of tutorials will be delivered over the 6 days of the School.

  2. Computer methods for transient fluid-structure analysis of nuclear reactors

    International Nuclear Information System (INIS)

    Belytschko, T.; Liu, W.K.

    1985-01-01

    Fluid-structure interaction problems in nuclear engineering are categorized according to the dominant physical phenomena and the appropriate computational methods. Linear fluid models that are considered include acoustic fluids, incompressible fluids undergoing small disturbances, and small amplitude sloshing. Methods available in general-purpose codes for these linear fluid problems are described. For nonlinear fluid problems, the major features of alternative computational treatments are reviewed; some special-purpose and multipurpose computer codes applicable to these problems are then described. For illustration, some examples of nuclear reactor problems that entail coupled fluid-structure analysis are described along with computational results

  3. Symbolic Computation, Number Theory, Special Functions, Physics and Combinatorics

    CERN Document Server

    Ismail, Mourad

    2001-01-01

    These are the proceedings of the conference "Symbolic Computation, Number Theory, Special Functions, Physics and Combinatorics" held at the Department of Mathematics, University of Florida, Gainesville, from November 11 to 13, 1999. The main emphasis of the conference was Computer Algebra (i.e. symbolic computation) and how it related to the fields of Number Theory, Special Functions, Physics and Combinatorics. A subject that is common to all of these fields is q-series. We brought together those who do symbolic computation with q-series and those who need q-series, including workers in Physics and Combinatorics. The goal of the conference was to inform mathematicians and physicists who use q-series of the latest developments in the field of q-series and especially how symbolic computation has aided these developments. Over 60 people were invited to participate in the conference. We ended up having 45 participants at the conference, including six one hour plenary speakers and 28 half hour speakers. T...

  4. B-physics computations from Nf=2 tmQCD

    CERN Document Server

    Carrasco, N.; Dimopoulos, P.; Frezzotti, R.; Gimenez, V.; Herdoiza, G.; Lubicz, V.; Michael, C.; Picca, E.; Rossi, G.C.; Sanfilippo, F.; Shindler, A.; Silvestrini, L.; Simula, S.; Tarantino, C.

    2014-01-01

    We present an accurate lattice QCD computation of the b-quark mass, the B and Bs decay constants, the B-mixing bag parameters for the full four-fermion operator basis, as well as estimates for ξ and f_{Bq}√B_q extrapolated to the continuum limit and the physical pion mass. We have used Nf = 2 dynamical quark gauge configurations at four values of the lattice spacing generated by ETMC. Extrapolation in the heavy quark mass from the charm to the bottom quark region has been carried out using ratios of physical quantities computed at nearby quark masses, having an exactly known infinite mass limit.
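
    Schematically, the ratio method mentioned above proceeds as follows (a sketch of the general construction; the known perturbative matching factors are suppressed here). One forms a quantity with an exactly known static limit,

      y(\mu_h) \equiv f_{B}(\mu_h)\,\sqrt{\mu_h}, \qquad
      z(\mu_h; \lambda) \equiv \frac{y(\lambda \mu_h)}{y(\mu_h)}, \qquad
      \lim_{\mu_h \to \infty} z(\mu_h; \lambda) = 1,

    so that, choosing λ such that λ^K μ_c = μ_b, the bottom-scale value is reconstructed from charm-region lattice data as y(μ_b) = y(μ_c) ∏_{k=1}^{K} z(λ^{k-1} μ_c; λ), with every ratio computed at pairs of nearby, directly simulable heavy quark masses.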

  5. Multi-dimensional Analysis for SLB Transient in ATLAS Facility as Activity of DSP (Domestic Standard Problem)

    International Nuclear Information System (INIS)

    Bae, B. U.; Park, Y. S.; Kim, J. R.; Kang, K. H.; Choi, K. Y.; Sung, H. J.; Hwang, M. J.; Kang, D. H.; Lim, S. G.; Jun, S. S.

    2015-01-01

    Participants in DSP-03 were divided into three groups, and each group focused on a specific subject related to the enhancement of the code analysis. Group A investigated the scaling capability of the ATLAS test data by comparison with a code analysis of the prototype, and Group C investigated the effect of various models in the one-dimensional codes. This paper briefly summarizes the code analysis results from the Group B participants in DSP-03 for the ATLAS test facility. The code analysis by Group B focuses on investigating the multi-dimensional thermal hydraulic phenomena in the ATLAS facility during the SLB transient. Even though a one-dimensional system analysis code cannot simulate the whole ATLAS facility with a nodalization at the CFD (Computational Fluid Dynamics) scale, the reactor pressure vessel can be modeled with multi-dimensional components to reflect the thermal mixing phenomena inside the downcomer and the core. Also, CFD can give useful information for understanding complex phenomena in specific components such as the reactor pressure vessel. In this analysis activity of Group B in ATLAS DSP-03, participants adopted a multi-dimensional approach to the code analysis of the SLB transient in the ATLAS test facility. The main purpose of the analysis was to investigate the prediction capability of multi-dimensional analysis tools for the SLB experiment results. In particular, the asymmetric cooling and thermal mixing phenomena in the reactor pressure vessel were the main focus of the multi-dimensional component modeling.

  6. Discovering transcription factor binding sites in highly repetitive regions of genomes with multi-read analysis of ChIP-Seq data.

    Science.gov (United States)

    Chung, Dongjun; Kuan, Pei Fen; Li, Bo; Sanalkumar, Rajendran; Liang, Kun; Bresnick, Emery H; Dewey, Colin; Keleş, Sündüz

    2011-07-01

    Chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) is rapidly replacing chromatin immunoprecipitation combined with genome-wide tiling array analysis (ChIP-chip) as the preferred approach for mapping transcription-factor binding sites and chromatin modifications. The state of the art for analyzing ChIP-seq data relies on using only reads that map uniquely to a relevant reference genome (uni-reads). This can lead to the omission of up to 30% of alignable reads. We describe a general approach for utilizing reads that map to multiple locations on the reference genome (multi-reads). Our approach is based on allocating multi-reads as fractional counts using a weighted alignment scheme. Using human STAT1 and mouse GATA1 ChIP-seq datasets, we illustrate that incorporation of multi-reads significantly increases sequencing depths, leads to detection of novel peaks that are not otherwise identifiable with uni-reads, and improves detection of peaks in mappable regions. We investigate various genome-wide characteristics of peaks detected only by utilization of multi-reads via computational experiments. Overall, peaks from multi-read analysis have similar characteristics to peaks that are identified by uni-reads except that the majority of them reside in segmental duplications. We further validate a number of GATA1 multi-read only peaks by independent quantitative real-time ChIP analysis and identify novel target genes of GATA1. These computational and experimental results establish that multi-reads can be of critical importance for studying transcription factor binding in highly repetitive regions of genomes with ChIP-seq experiments.
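
    The flavor of the weighted allocation scheme can be conveyed by a toy EM-style iteration: each multi-read starts with uniform weights over its alignments and is then reweighted in proportion to the coverage accumulating near each candidate locus. The coordinates and counts below are invented, and this is a sketch of the idea rather than the authors' implementation.

      from collections import defaultdict

      # Toy data: uni-reads give an initial coverage profile; each multi-read
      # aligns equally well to several loci (all coordinates invented).
      uni_coverage = defaultdict(float, {100: 5.0, 104: 4.0, 5000: 1.0})
      multi_reads = [[100, 5000], [104, 5000, 9000]]

      def local_coverage(cov, pos, w=50):
          return sum(c for p, c in cov.items() if abs(p - pos) <= w)

      # EM-style reweighting: allocate each multi-read across its candidate
      # loci in proportion to the coverage already accumulating nearby.
      alloc = [[1.0 / len(r)] * len(r) for r in multi_reads]
      for _ in range(10):
          cov = defaultdict(float, uni_coverage)
          for read, ws in zip(multi_reads, alloc):
              for pos, wt in zip(read, ws):
                  cov[pos] += wt
          new_alloc = []
          for read, ws in zip(multi_reads, alloc):
              # Score each candidate locus by nearby coverage, excluding the
              # read's own current contribution at that locus.
              scores = [max(local_coverage(cov, pos) - wt, 1e-12)
                        for pos, wt in zip(read, ws)]
              total = sum(scores)
              new_alloc.append([s / total for s in scores])
          alloc = new_alloc
      # Read 1 ends up mostly assigned to the well-supported locus at 100.
      print([[round(w, 2) for w in ws] for ws in alloc])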

  7. Discovering transcription factor binding sites in highly repetitive regions of genomes with multi-read analysis of ChIP-Seq data.

    Directory of Open Access Journals (Sweden)

    Dongjun Chung

    2011-07-01

    Full Text Available Chromatin immunoprecipitation followed by high-throughput sequencing (ChIP-seq) is rapidly replacing chromatin immunoprecipitation combined with genome-wide tiling array analysis (ChIP-chip) as the preferred approach for mapping transcription-factor binding sites and chromatin modifications. The state of the art for analyzing ChIP-seq data relies on using only reads that map uniquely to a relevant reference genome (uni-reads). This can lead to the omission of up to 30% of alignable reads. We describe a general approach for utilizing reads that map to multiple locations on the reference genome (multi-reads). Our approach is based on allocating multi-reads as fractional counts using a weighted alignment scheme. Using human STAT1 and mouse GATA1 ChIP-seq datasets, we illustrate that incorporation of multi-reads significantly increases sequencing depths, leads to detection of novel peaks that are not otherwise identifiable with uni-reads, and improves detection of peaks in mappable regions. We investigate various genome-wide characteristics of peaks detected only by utilization of multi-reads via computational experiments. Overall, peaks from multi-read analysis have similar characteristics to peaks that are identified by uni-reads except that the majority of them reside in segmental duplications. We further validate a number of GATA1 multi-read only peaks by independent quantitative real-time ChIP analysis and identify novel target genes of GATA1. These computational and experimental results establish that multi-reads can be of critical importance for studying transcription factor binding in highly repetitive regions of genomes with ChIP-seq experiments.

  8. Application of the new GeN-Foam multi-physics solver to the European sodium fast reactor and verification against available codes - 15226

    International Nuclear Information System (INIS)

    Fiorina, C.; Mikityuk, K.

    2015-01-01

    A new multi-physics solver for nuclear reactor analysis, named GeN-Foam (Generalized Nuclear Foam), has been developed by the FAST group at the Paul Scherrer Institut. It is based on OpenFOAM and has been developed for the multi-physics transient analyses of pin-based (e.g., liquid metal Fast Reactors, Light Water Reactors) or homogeneous (e.g., fast spectrum Molten Salt Reactors) nuclear reactors. It includes solutions of coarse or fine mesh thermal-hydraulics, thermal-mechanics and neutron diffusion. In particular, thermal-hydraulics solution can combine on the same mesh both a traditional RANS model and a porous medium model, depending on the desired degree of approximation for each region. In case the active reactor core is modeled as a porous medium, a simple sub-solver computes the sub-scale radial temperature profiles in fuel and cladding. The mesh used for neutronics calculations is deformed according to the displacement field predicted by the thermal-mechanics solver, thus allowing for a direct prediction of expansion-related feedback effects in Fast Reactors. To limit computational requirements, GeN-Foam permits the use of three different unstructured meshes for thermal-hydraulics, thermal-mechanics and neutron diffusion. For the same reason, an adaptive time step is employed. The different equations can be solved altogether or selectively included. In this work, GeN-Foam is applied to the analysis of the European Sodium Fast Reactor (ESFR). In particular, a 3-D model of the ESFR core is set up employing a coarse-mesh porous-medium approach for the thermal-hydraulics. The reactor steady-state and different accidental transients are investigated to offer an overview of GeN-Foam use and capabilities, as well as to preliminarily investigate the impact of a relatively accurate thermal-mechanic treatment on the predicted ESFR behavior. A code-to-code benchmark against the TRACE system code is performed to verify the adequacy of the results provided by the new
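
    As an illustration of what such a sub-solver provides, the sketch below evaluates the textbook steady-state radial temperature profile of a cylindrical fuel pellet with uniform volumetric heat source and constant conductivity; all property values are assumed for illustration and are unrelated to the ESFR model.

      import numpy as np

      # Steady-state conduction in a cylindrical pellet with uniform
      # volumetric source q''' and constant conductivity k (both assumed):
      #   T(r) = T_s + q''' (R^2 - r^2) / (4 k)
      q3 = 3.0e8    # W/m^3, illustrative volumetric power density
      k = 3.0       # W/(m K), illustrative UO2-like conductivity
      R = 4.0e-3    # m, pellet radius
      T_s = 700.0   # K, surface temperature assumed from the coolant solution

      r = np.linspace(0.0, R, 6)
      T = T_s + q3 * (R**2 - r**2) / (4.0 * k)
      print(np.round(T, 1))  # centerline value is T_s + q''' R^2 / (4 k)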

  9. A systematic and efficient method to compute multi-loop master integrals

    Science.gov (United States)

    Liu, Xiao; Ma, Yan-Qing; Wang, Chen-Yu

    2018-04-01

    We propose a novel method to compute multi-loop master integrals by constructing and numerically solving a system of ordinary differential equations, with almost trivial boundary conditions. Thus it can be systematically applied to problems with arbitrary kinematic configurations. Numerical tests show that our method can not only achieve results with high precision, but can also be much faster than sector decomposition, the only other existing systematic method. As a by-product, we find a new strategy to compute scalar one-loop integrals without reducing them to master integrals.
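
    A one-dimensional caricature of this strategy (assuming nothing about the authors' actual multi-loop setup): treat a parametric integral as the unknown of an ordinary differential equation in its parameter, impose the almost trivial boundary value where the integral is simple, and evolve numerically to the point of interest.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Toy "master integral" I(a) = integral of dx / (x + a) over [0, 1],
      # which obeys dI/da = 1/(1 + a) - 1/a, with an almost trivial
      # boundary value at large a: I(a) = ln((1 + a)/a) ~ 1/a.
      a0, a1 = 100.0, 0.5
      sol = solve_ivp(lambda a, y: [1.0 / (1.0 + a) - 1.0 / a],
                      (a0, a1), [np.log((1.0 + a0) / a0)],
                      rtol=1e-10, atol=1e-12)
      # Compare the evolved value at a = 0.5 with the closed form.
      print(sol.y[0][-1], np.log((1.0 + a1) / a1))  # both ~ ln 3 = 1.0986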

  10. Effect of Physical Education Teachers' Computer Literacy on Technology Use in Physical Education

    Science.gov (United States)

    Kretschmann, Rolf

    2015-01-01

    Teachers' computer literacy has been identified as a factor that determines their technology use in class. The aim of this study was to investigate the relationship between physical education (PE) teachers' computer literacy and their technology use in PE. The study group consisted of 57 high school level in-service PE teachers. A survey was used…

  11. Analysis of multi cloud storage applications for resource constrained mobile devices

    Directory of Open Access Journals (Sweden)

    Rajeev Kumar Bedi

    2016-09-01

    Full Text Available Cloud storage, which can be a surrogate for all physical hardware storage devices, is a term reflecting an enormous advancement in engineering (Hung et al., 2012). However, there are many issues that need to be handled when accessing cloud storage on resource constrained mobile devices, due to inherent limitations of mobile devices such as limited storage capacity, processing power and battery backup (Yeo et al., 2014). There are many multi cloud storage applications available which handle the issues faced by single cloud storage applications. In this paper, we provide an analysis of different multi cloud storage applications developed for resource constrained mobile devices, checking their performance on the basis of parameters such as battery consumption, CPU usage, data usage and time consumed, using a Sony Xperia ZL smartphone on a WiFi network. Lastly, conclusions and open research challenges for these multi cloud storage apps are discussed.

  12. Secure Data Access Control for Fog Computing Based on Multi-Authority Attribute-Based Signcryption with Computation Outsourcing and Attribute Revocation.

    Science.gov (United States)

    Xu, Qian; Tan, Chengxiang; Fan, Zhijie; Zhu, Wenye; Xiao, Ya; Cheng, Fujia

    2018-05-17

    Nowadays, fog computing provides computation, storage, and application services to end users in the Internet of Things. One of the major concerns in fog computing systems is how fine-grained access control can be imposed. As a logical combination of attribute-based encryption and attribute-based signature, Attribute-based Signcryption (ABSC) can provide confidentiality and anonymous authentication for sensitive data and is more efficient than the traditional "encrypt-then-sign" or "sign-then-encrypt" strategies. Thus, ABSC is suitable for fine-grained access control in a semi-trusted cloud environment and has been gaining more and more attention recently. However, in many existing ABSC systems, the computation cost required of end users in signcryption and designcryption scales linearly with the complexity of the signing and encryption access policies. Moreover, only a single authority responsible for attribute management and key generation exists in previously proposed ABSC schemes, whereas in reality different attributes of a user are usually monitored by different authorities. In this paper, we propose OMDAC-ABSC, a novel data access control scheme based on Ciphertext-Policy ABSC, to provide data confidentiality, fine-grained control, and anonymous authentication in a multi-authority fog computing system. The signcryption and designcryption overhead for the user is significantly reduced by outsourcing the undesirable computation operations to fog nodes. The proposed scheme is proven to be secure in the standard model and can provide attribute revocation and public verifiability. The security analysis, asymptotic complexity comparison, and implementation results indicate that our construction can balance the security goals with practical efficiency in computation.

  13. Multi-representation ability of students on the problem solving physics

    Science.gov (United States)

    Theasy, Y.; Wiyanto; Sujarwata

    2018-03-01

    The accuracy with which students represent their knowledge indicates their level of understanding. The multi-representation ability of students in physics problem solving was investigated through a qualitative grounded-theory method, implemented with physics education students of Unnes in the 2016/2017 academic year. The forms of representation used are verbal (V), images/diagrams (D), graphs (G), and mathematical (M). Students in the high and low categories used graphical representation (G) accurately 83% and 77.78% of the time, respectively, while students in the medium category used image representation (D) accurately 66% of the time.

  14. Resonance self-shielding method using resonance interference factor library for practical lattice physics computations of LWRs

    International Nuclear Information System (INIS)

    Choi, Sooyoung; Khassenov, Azamat; Lee, Deokjung

    2016-01-01

    This paper presents a new method of resonance interference effect treatment using resonance interference factors for high fidelity analysis of light water reactors (LWRs). Although there have been significant improvements in lattice physics calculations over the past several decades, relatively large errors still exist in the resonance interference treatment, on the order of ∼300 pcm in the reactivity prediction of LWRs. In the newly developed method, the impact of resonance interference on the multi-group cross-sections has been quantified and tabulated in a library which can be used in lattice physics calculations as adjustment factors for the multi-group cross-sections. The verification of the new method has been performed with the Mosteller benchmark, UO_2 and MOX pin-cell depletion problems, and a 17×17 fuel assembly loaded with gadolinia burnable poison; significant improvements were demonstrated in the accuracy of reactivity and pin power predictions, with reactivity errors reduced to the order of ∼100 pcm. (author)
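
    Applying such a library amounts to a table lookup followed by a multiplicative correction of the self-shielded cross-section. The sketch below interpolates a hypothetical RIF table in the background cross-section; every number is invented.

      import numpy as np

      # Hypothetical RIF entries for one resonance group, tabulated against
      # the background cross-section (barns); all values are invented.
      sigma_b_grid = np.array([10.0, 50.0, 100.0, 500.0, 1000.0])
      rif_grid     = np.array([0.92, 0.95, 0.97, 0.99, 1.00])

      def interfered_xs(sigma_shielded, sigma_b):
          # Multiply the conventionally self-shielded multi-group
          # cross-section by the interpolated adjustment factor.
          return np.interp(sigma_b, sigma_b_grid, rif_grid) * sigma_shielded

      print(interfered_xs(18.7, 75.0))  # barns, illustrative only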

  15. Aerodynamic Analysis and Three-Dimensional Redesign of a Multi-Stage Axial Flow Compressor

    Directory of Open Access Journals (Sweden)

    Tao Ning

    2016-04-01

    Full Text Available This paper describes the introduction of three-dimensional (3-D) blade designs into a 5-stage axial compressor using multi-stage computational fluid dynamics (CFD) methods. Prior to the redesign, a validation study is conducted for the overall performance and flow details based on full-scale test data, showing that the multi-stage CFD applied is a relatively reliable tool for the analysis of the subsequent redesign. Furthermore, at the near-stall point, the aerodynamic analysis demonstrates that significant separation exists in the last stator, motivating an aerodynamic redesign focused on the last stator. Multi-stage CFD methods are applied throughout the three-dimensional redesign process for the last stator to explore their aerodynamic improvement potential. An unconventional asymmetric bow configuration, incorporating leading-edge re-camber and re-solidity, is employed to reduce the high-loss region dominated by the mainstream. The final redesigned version produces a 13% increase in the stall margin while maintaining the efficiency at the design point.

  16. Measurement of multi-bunch transfer functions using time-domain data and Fourier analysis

    International Nuclear Information System (INIS)

    Hindi, H.; Sapozhnikov, L.; Fox, J.; Prabhakar, S.; Oxoby, G.; Linscott, I.; Drago, A.

    1993-12-01

    Multi-bunch transfer functions are principal ingredients in understanding both the behavior of high-current storage rings and the control of their instabilities. The measurement of transfer functions on a bunch-by-bunch basis is particularly important in the design of active feedback systems. Traditional methods of network analysis that work well in the single bunch case become difficult to implement for many bunches. We have developed a method for obtaining empirical estimates of the multi-bunch longitudinal transfer functions from time-domain measurements of the bunches' phase oscillations. This method involves recording the response of the bunch of interest to a white-noise excitation. The transfer function can then be computed as the ratio of the fast Fourier transforms (FFTs) of the response and excitation sequences, averaged over several excitations. The calculation is performed off-line on bunch-phase data and is well-suited to the multi-bunch case. A description of this method and an analysis of its performance are presented with results obtained using the longitudinal quick prototype feedback system developed at SLAC.
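
    The estimator itself is compact. The Python sketch below drives a synthetic, lightly damped resonator (standing in for a bunch's synchrotron motion) with white noise and recovers its transfer function as the ratio of FFTs averaged over independent excitations; the resonator parameters are invented.

      import numpy as np

      rng = np.random.default_rng(0)
      n_turns, n_avg = 1024, 64

      def bunch_response(x):
          # Discrete-time resonator with fractional tune 0.05 and damping
          # r = 0.95: y[k] = 2 r cos(2*pi*0.05) y[k-1] - r^2 y[k-2] + x[k].
          y = np.zeros_like(x)
          for k in range(2, len(x)):
              y[k] = (1.9 * np.cos(2 * np.pi * 0.05) * y[k - 1]
                      - 0.9025 * y[k - 2] + x[k])
          return y

      # Empirical transfer function: ratio of the FFTs of response and
      # excitation, averaged over several white-noise records.
      H = np.zeros(n_turns // 2 + 1, dtype=complex)
      for _ in range(n_avg):
          x = rng.standard_normal(n_turns)  # white-noise excitation
          H += np.fft.rfft(bunch_response(x)) / np.fft.rfft(x)
      H /= n_avg
      print(f"resonance near fractional tune "
            f"{np.argmax(np.abs(H)) / n_turns:.3f}")  # ~0.050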

  17. Physics for computer science students with emphasis on atomic and semiconductor physics

    CERN Document Server

    Garcia, Narciso

    1991-01-01

    This text is the product of several years' effort to develop a course to fill a specific educational gap. It is our belief that computer science students should know how a computer works, particularly in light of rapidly changing technologies. The text was designed for computer science students who have a calculus background but have not necessarily taken prior physics courses. However, it is clearly not limited to these students. Anyone who has had first-year physics can start with Chapter 17. This includes all science and engineering students who would like a survey course of the ideas, theories, and experiments that made our modern electronics age possible. This textbook is meant to be used in a two-semester sequence. Chapters 1 through 16 can be covered during the first semester, and Chapters 17 through 28 in the second semester. At Queens College, where preliminary drafts have been used, the material is presented in three lecture periods (50 minutes each) and one recitation period per week, 15 weeks p...

  18. Neutron physics computation of CERCA fuel elements for Maria Reactor

    International Nuclear Information System (INIS)

    Andrzejewski, K.J.; Kulikowska, T.; Marcinkowska, Z.

    2008-01-01

    Neutron physics parameters of CERCA design fuel elements were calculated in the framework of the RERTR (Reduced Enrichment for Research and Test Reactors) program for the Maria reactor. The analysis comprises burnup of experimental CERCA design fuel elements over 4 cycles in the Maria reactor. To predict the behavior of the mixed core, the differences between the CERCA fuel (485 g U-235 as U3Si2, 5 fuel tubes, low enrichment 19.75% - LEU) and the presently used MR-6 fuel (430 g as UO2, 6 fuel tubes, high enrichment 36% - HEU) had to be taken into account. The basic tool used in the neutron-physics analysis of the Maria reactor is the program REBUS with its dedicated libraries of effective microscopic cross sections. The cross sections were prepared using the WIMS-ANL code; taking into account the actual structure, temperature and material composition of the fuel elements required the preparation of new libraries. The problem is described in the first part of the present paper. In the second part, the applicability of the new library is shown on the basis of a computational analysis of the fuel core. (author)

  19. Multi-fluid CFD analysis in Process Engineering

    Science.gov (United States)

    Hjertager, B. H.

    2017-12-01

    An overview of modelling and simulation of flow processes in gas/particle and gas/liquid systems are presented. Particular emphasis is given to computational fluid dynamics (CFD) models that use the multi-dimensional multi-fluid techniques. Turbulence modelling strategies for gas/particle flows based on the kinetic theory for granular flows are given. Sub models for the interfacial transfer processes and chemical kinetics modelling are presented. Examples are shown for some gas/particle systems including flow and chemical reaction in risers as well as gas/liquid systems including bubble columns and stirred tanks.

  20. Non-adaptive measurement-based quantum computation and multi-party Bell inequalities

    International Nuclear Information System (INIS)

    Hoban, Matty J; Campbell, Earl T; Browne, Dan E; Loukopoulos, Klearchos

    2011-01-01

    Quantum correlations exhibit behaviour that cannot be resolved with a local hidden variable picture of the world. In quantum information, they are also used as resources for information processing tasks, such as measurement-based quantum computation (MQC). In MQC, universal quantum computation can be achieved via adaptive measurements on a suitable entangled resource state. In this paper, we look at a version of MQC in which we remove the adaptivity of measurements and aim to understand what computational abilities remain in the resource. We show that there are explicit connections between this model of computation and the question of non-classicality in quantum correlations. We demonstrate this by focusing on deterministic computation of Boolean functions, in which natural generalizations of the Greenberger-Horne-Zeilinger paradox emerge; we then explore probabilistic computation, via which multipartite Bell inequalities can be defined. We use this correspondence to define families of multi-party Bell inequalities, which we show to have a number of interesting contrasting properties.
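
    The GHZ construction referred to above can be checked directly in a few lines of Python: on the three-qubit GHZ state, the product of the XYY, YXY and YYX outcomes forces any local hidden variable assignment to predict XXX = -1, while quantum mechanics yields +1.

      import numpy as np

      # Pauli matrices and the three-qubit GHZ state (|000> + |111>)/sqrt(2).
      X = np.array([[0, 1], [1, 0]], dtype=complex)
      Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
      ghz = np.zeros(8, dtype=complex)
      ghz[0] = ghz[7] = 1.0 / np.sqrt(2.0)

      def expval(a, b, c):
          return (ghz.conj() @ np.kron(np.kron(a, b), c) @ ghz).real

      print(expval(X, X, X))                                    # +1
      print(expval(X, Y, Y), expval(Y, X, Y), expval(Y, Y, X))  # -1, -1, -1
      # Locally, the product of the three -1 outcomes would force the XXX
      # outcome to be -1 as well, contradicting the measured +1.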