WorldWideScience

Sample records for high computational complexity

  1. Computational Complexity

    Directory of Open Access Journals (Sweden)

    J. A. Tenreiro Machado

    2017-02-01

    Full Text Available Complex systems (CS) involve many elements that interact at different scales in time and space. The challenges in modeling CS led to the development of novel computational tools with applications in a wide range of scientific areas. The computational problems posed by CS exhibit intrinsic difficulties that are a major concern in Computational Complexity Theory. [...]

  2. A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers

    Energy Technology Data Exchange (ETDEWEB)

    Potok, Thomas E [ORNL]; Schuman, Catherine D [ORNL]; Young, Steven R [ORNL]; Patton, Robert M [ORNL]; Spedalieri, Federico [University of Southern California, Information Sciences Institute]; Liu, Jeremy [University of Southern California, Information Sciences Institute]; Yao, Ke-Thia [University of Southern California, Information Sciences Institute]; Rose, Garrett [University of Tennessee (UT)]; Chakma, Gangotree [University of Tennessee (UT)]

    2016-01-01

    Current Deep Learning models use highly optimized convolutional neural networks (CNN) trained on large graphics processing unit (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers, without intra-layer connections. Complex topologies have been proposed, but are intractable to train on current systems. Building the topologies of the deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to input size limitations of current quantum computers, we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable on a von Neumann architecture. We show that a quantum computer can find high quality values of intra-layer connections and weights while remaining tractable as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low-power memristive hardware. This represents a new capability that is not feasible with current von Neumann architectures, and it could potentially enable the solution of very complicated problems unsolvable with current computing technologies.

  3. Study of application technology of ultra-high speed computer to the elucidation of complex phenomena

    Energy Technology Data Exchange (ETDEWEB)

    Sekiguchi, Tomotsugu [Electrotechnical Lab., Tsukuba, Ibaraki (Japan)

    1996-06-01

    As a first step toward applying ultra-high-speed computers to the elucidation of complex phenomena, the basic design of a numerical information library in a decentralized computer network is explained. Establishing the system makes it possible to construct an efficient application environment for ultra-high-speed computer systems that is scalable across different computing systems. We named the system Ninf (Network Information Library for High Performance Computing). The library's application technology is summarized as follows: application technology of the library under a distributed environment, numeric constants, retrieval of values, a library of special functions, a computing library, the Ninf library interface, the Ninf remote library, and registration. With this system, users can run programs that concentrate numerical-analysis technology with high precision, reliability and speed. (S.Y.)

  4. High performance parallel computing of flows in complex geometries: II. Applications

    Energy Technology Data Exchange (ETDEWEB)

    Gourdain, N; Gicquel, L; Staffelbach, G; Vermorel, O; Duchaine, F; Boussuge, J-F [Computational Fluid Dynamics Team, CERFACS, Toulouse, 31057 (France); Poinsot, T [Institut de Mecanique des Fluides de Toulouse, Toulouse, 31400 (France)], E-mail: Nicolas.gourdain@cerfacs.fr

    2009-01-01

    Present regulations on pollutant emissions and noise, together with economic constraints, require new approaches and designs in the fields of energy supply and transportation. It is now well established that the next breakthrough will come from a better understanding of unsteady flow effects and from considering the entire system rather than only isolated components. However, whatever the design stage considered, these aspects are still not well understood or taken into account by numerical approaches. The main challenge is essentially the computational requirements such complex systems impose if they are to be simulated on supercomputers. This paper shows how these new challenges can be addressed by using parallel computing platforms for distinct elements of a more complex system, as encountered in aeronautical applications. Based on numerical simulations performed with modern aerodynamic and reactive flow solvers, this work underlines the interest of high-performance computing for solving flows in complex industrial configurations such as aircraft, combustion chambers and turbomachines. Performance indicators related to parallel computing efficiency are presented, showing that establishing fair criteria is a difficult task for complex industrial applications. Examples of numerical simulations performed in industrial systems are also described, with particular attention to the computational time and the potential design improvements obtained with high-fidelity and multi-physics computing methods. These simulations use either unsteady Reynolds-averaged Navier-Stokes methods or large eddy simulation and deal with turbulent unsteady flows, such as coupled flow phenomena (thermo-acoustic instabilities, buffet, etc.). Some examples of the difficulties with grid generation and data analysis are also presented when dealing with these complex industrial applications.

  5. Complex networks and computing

    Institute of Scientific and Technical Information of China (English)

    Shuigeng ZHOU; Zhongzhi ZHANG

    2009-01-01

    Nowadays complex networks are pervasive in various areas of science and technology. Popular examples of complex networks include the Internet, social networks of collaboration, citation and co-authoring, as well as biological networks such as gene and protein interactions and others. Complex networks research spans mathematics, computer science, engineering, biology and the social sciences. Even within computer science, an increasing number of problems are either found to be related to complex networks or studied from the perspective of complex networks, such as searching the Web and P2P networks, routing in sensor networks, language processing, and software engineering. The interaction and merging of complex networks and computing is inspiring new opportunities and challenges in computer science.

  6. High-level ab initio computations of the absorption spectra of organic iridium complexes.

    Science.gov (United States)

    Plasser, Felix; Dreuw, Andreas

    2015-02-12

    The excited states of fac-tris(phenylpyridinato)iridium [Ir(ppy)3] and the smaller model complex Ir(C3H4N)3 are computed using a number of high-level ab initio methods, including the recently implemented algebraic diagrammatic construction method to third order, ADC(3). A detailed description of the states is provided through advanced analysis methods, which allow a quantification of different charge transfer and orbital relaxation effects and give extended insight into the many-body wave functions. Compared to the ADC(3) benchmark, an unexpected, striking difference is found for ADC(2) on Ir(C3H4N)3, which derives from an overstabilization of charge transfer effects. Time-dependent density functional theory (TDDFT) using the B3LYP functional shows an analogous but less severe error for charge transfer states, whereas the ωB97 results are in good agreement with ADC(3). Multireference configuration interaction computations, which are in reasonable agreement with ADC(3), reveal that static correlation does not play a significant role. In the case of the larger Ir(ppy)3 complex, results at the TDDFT/B3LYP and TDDFT/ωB97 levels of theory are presented. Strong discrepancies between the two functionals, which are found with respect to the energies, characters, as well as the density of the low-lying states, are discussed in detail and compared to experiment.

  7. Computer-aided design-based high-frequency electromagnetic wave scattering from complex bodies

    Science.gov (United States)

    Baldauf, John Eric

    1991-02-01

    This work investigates the use of high frequency electromagnetic scattering techniques, such as the physical theory of diffraction (PTD), the geometrical theory of diffraction (GTD), and the shooting and bouncing rays (SBR) method, combined with computer-aided design (CAD) compatible geometries, to perform the electromagnetic scattering analysis of complex arbitrary bodies. The use of CAD formats such as solid modelled bodies and bodies modelled with triangular patch surface elements allows the scattering analysis of arbitrary bodies which can be constructed using CAD packages. The scattering analyses are applied to radar cross section (RCS) problems, cavity radiation problems, and antenna pattern predictions of complex electrically large structures, thereby showing that it is feasible to accurately approximate the electromagnetic wave scattering from general complex bodies using CAD techniques and high frequency scattering techniques. First, the RCS of large targets which involve multiple geometric optics (GO) interactions is investigated by comparing the RCS calculated using CAD-designed radar targets, the SBR method and PTD, for targets such as trihedral corner reflectors and an idealized military vehicle model, with the experimentally obtained RCS. The comparisons between the calculated and measured results demonstrate that SBR and PTD can provide accurate approximations of the RCS for targets which have complex multiple GO interactions. Second, the problem of interior cavity radiation for closed cavities is approached using a ray tracing and GO method based on the SBR method and triangular surface patch described geometries. Comparisons between the ray-based calculations and more exact techniques such as the method of moments (MM) for two-dimensional cavities demonstrate that ray-based methods can provide good approximations for the field behavior inside of nonresonant cavities. A three-dimensional case is shown to demonstrate that this technique can be

  8. Computational model explains high activity and rapid cycling of Rho GTPases within protein complexes.

    Directory of Open Access Journals (Sweden)

    Andrew B Goryachev

    2006-12-01

    Full Text Available Formation of multiprotein complexes on cellular membranes is critically dependent on the cyclic activation of small GTPases. FRAP-based analyses demonstrate that within protein complexes, some small GTPases cycle nearly three orders of magnitude faster than they would spontaneously cycle in vitro. At the same time, experiments report concomitant excess of the activated, GTP-bound form of GTPases over their inactive form. Intuitively, high activity and rapid turnover are contradictory requirements. How the cells manage to maximize both remains poorly understood. Here, using GTPases of the Rab and Rho families as a prototype, we introduce a computational model of the GTPase cycle. We quantitatively investigate several plausible layouts of the cycling control module that consist of GEFs, GAPs, and GTPase effectors. We explain the existing experimental data and predict how the cycling of GTPases is controlled by the regulatory proteins in vivo. Our model explains distinct and separable roles that the activating GEFs and deactivating GAPs play in the GTPase cycling control. While the activity of GTPase is mainly defined by GEF, the turnover rate is a sole function of GAP. Maximization of the GTPase activity and turnover rate places conflicting requirements on the concentration of GAP. Therefore, to achieve a high activity and turnover rate at once, cells must carefully maintain concentrations of GEFs and GAPs within the optimal range. The values of these optimal concentrations indicate that efficient cycling can be achieved only within dense protein complexes typically assembled on the membrane surfaces. We show that the concentration requirement for GEF can be dramatically reduced by a GEF-activating GTPase effector that can also significantly boost the cycling efficiency. Interestingly, we find that the cycling regimes are only weakly dependent on the concentration of GTPase itself.
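
    To make the stated trade-off concrete, here is a minimal two-state sketch of a GTPase cycle in Python (a toy stand-in, not the authors' model; the rate constants `k_gef` and `k_gap` are hypothetical effective rates lumping GEF and GAP action). At steady state the active fraction is set by the GEF/GAP balance while the cycling flux scales with GAP, so raising GAP boosts turnover but depresses activity, mirroring the conflicting requirements described above.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Toy two-state GTPase cycle: D (GDP-bound, inactive) <-> T (GTP-bound, active).
# GEF catalyses D -> T, GAP catalyses T -> D; both rates are made up.
def cycle(t, y, k_gef, k_gap):
    D, T = y
    return [k_gap * T - k_gef * D,    # dD/dt
            k_gef * D - k_gap * T]    # dT/dt

k_gef, k_gap = 5.0, 1.0               # hypothetical effective rates (1/s)
sol = solve_ivp(cycle, (0.0, 10.0), [1.0, 0.0], args=(k_gef, k_gap))
D, T = sol.y[:, -1]
print("active fraction:", T / (D + T))   # ~ k_gef / (k_gef + k_gap)
print("cycling flux:", k_gap * T)        # turnover rate scales with GAP
```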

  9. Theories of computational complexity

    CERN Document Server

    Calude, C

    1988-01-01

    This volume presents four machine-independent theories of computational complexity, which have been chosen for their intrinsic importance and practical relevance. The book includes a wealth of results - classical, recent, and others which have not been published before. In developing the mathematics underlying the size, dynamic and structural complexity measures, various connections with mathematical logic, constructive topology, probability and programming theories are established. The facts are presented in detail. Extensive examples are provided, to help clarify notions and constructions. The lists of exercises and problems include routine exercises, interesting results, as well as some open problems.

  10. A high performance, low power computational platform for complex sensing operations in smart cities

    KAUST Repository

    Jiang, Jiming

    2017-02-02

    This paper presents a new wireless platform designed for an integrated traffic/flash flood monitoring system. The sensor platform is built around a 32-bit ARM Cortex M4 microcontroller and a 2.4 GHz 802.15.4 ISM-compliant radio module. It can be interfaced with fixed traffic sensors, or receive data from vehicle transponders. This platform is specifically designed for solar-powered, low bandwidth, high computational performance wireless sensor network applications. A self-recovering unit is designed to increase reliability and allow periodic hard resets, an essential requirement for sensor networks. Radio monitoring circuitry is proposed to monitor incoming and outgoing transmissions, simplifying software debugging. We illustrate the performance of this wireless sensor platform on complex problems arising in smart cities, such as traffic flow monitoring, machine-learning-based flash flood monitoring and Kalman-filter-based vehicle trajectory estimation. All design files have been uploaded and shared in an open science framework, and can be accessed from [1]. The hardware design is under CERN Open Hardware License v1.2.

  11. Computability, complexity, logic

    CERN Document Server

    Börger, Egon

    1989-01-01

    The theme of this book is formed by a pair of concepts: the concept of formal language as carrier of the precise expression of meaning, facts and problems, and the concept of algorithm or calculus, i.e. a formally operating procedure for the solution of precisely described questions and problems. The book is a unified introduction to the modern theory of these concepts, to the way in which they developed first in mathematical logic and computability theory and later in automata theory, and to the theory of formal languages and complexity theory. Apart from considering the fundamental themes an

  12. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    Science.gov (United States)

    Hadjidoukas, P. E.; Angelikopoulos, P.; Papadimitriou, C.; Koumoutsakos, P.

    2015-03-01

    We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.
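
    As an illustration of the sampling strategy named above, the following is a minimal TMCMC-style sketch in Python (a simplified stand-in, not the Π4U implementation: the fixed tempering step, the proposal scaling, and the `log_like`/`log_prior`/`sample_prior` interface are all assumptions; production codes adapt the tempering schedule and parallelize the likelihood evaluations):

```python
import numpy as np

def tmcmc(log_like, log_prior, sample_prior, n=2000, seed=0):
    """Sketch of Transitional MCMC: anneal from the prior to the posterior
    through intermediate targets p_j ~ prior * likelihood^beta_j, with
    importance resampling and one Metropolis move per sample per stage."""
    rng = np.random.default_rng(seed)
    theta = sample_prior(n, rng)                    # (n, d) prior samples
    ll = np.array([log_like(t) for t in theta])
    beta = 0.0
    while beta < 1.0:
        beta_new = min(1.0, beta + 0.2)             # fixed step; TMCMC adapts it
        w = np.exp((beta_new - beta) * (ll - ll.max()))
        idx = rng.choice(n, size=n, p=w / w.sum())  # resample by importance weight
        theta, ll = theta[idx], ll[idx]
        cov = 0.2 * np.cov(theta.T) + 1e-10 * np.eye(theta.shape[1])
        for i in range(n):                          # one MH move per chain
            cand = rng.multivariate_normal(theta[i], cov)
            ll_c = log_like(cand)
            log_a = beta_new * (ll_c - ll[i]) + log_prior(cand) - log_prior(theta[i])
            if np.log(rng.uniform()) < log_a:
                theta[i], ll[i] = cand, ll_c
        beta = beta_new
    return theta                                    # approximate posterior samples
```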

  13. Π4U: A high performance computing framework for Bayesian uncertainty quantification of complex models

    Energy Technology Data Exchange (ETDEWEB)

    Hadjidoukas, P.E.; Angelikopoulos, P. [Computational Science and Engineering Laboratory, ETH Zürich, CH-8092 (Switzerland); Papadimitriou, C. [Department of Mechanical Engineering, University of Thessaly, GR-38334 Volos (Greece); Koumoutsakos, P., E-mail: petros@ethz.ch [Computational Science and Engineering Laboratory, ETH Zürich, CH-8092 (Switzerland)

    2015-03-01

    We present Π4U, an extensible framework for non-intrusive Bayesian Uncertainty Quantification and Propagation (UQ+P) of complex and computationally demanding physical models that can exploit massively parallel computer architectures. The framework incorporates Laplace asymptotic approximations as well as stochastic algorithms, along with distributed numerical differentiation and task-based parallelism for heterogeneous clusters. Sampling is based on the Transitional Markov Chain Monte Carlo (TMCMC) algorithm and its variants. The optimization tasks associated with the asymptotic approximations are treated via the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). A modified subset simulation method is used for posterior reliability measurements of rare events. The framework accommodates scheduling of multiple physical model evaluations based on an adaptive load balancing library and shows excellent scalability. In addition to the software framework, we also provide guidelines as to the applicability and efficiency of Bayesian tools when applied to computationally demanding physical models. Theoretical and computational developments are demonstrated with applications drawn from molecular dynamics, structural dynamics and granular flow.

  14. Using Complex Tasks for Practical Work in the Study of Computer Science in High School

    Directory of Open Access Journals (Sweden)

    Nelya V. Degtyareva

    2014-06-01

    Full Text Available This article discusses methods for diagnosing the learning results of high school students in the study of computer science. The author highlights the positive aspects of several diagnostic methods and discusses the need to combine different kinds of tasks in practical work: tests, integrative tasks and competency-based tasks. The author argues that testing helps students master the theoretical foundations of computer science, practical exercises promote the development of skills in using information technology, and creative tasks stimulate intellectual activity and unconventional approaches to problem solving. The author gives an example of practical work for the study of spreadsheets; the practical work demonstrates testing and the use of various types of tests in more detail. The article also describes the results of a survey of teachers and students about objective methods of diagnosing knowledge, and studies the formation of IT competencies and core competencies while performing such tasks.

  15. Designing Computer-Supported Complex Systems Curricula for the Next Generation Science Standards in High School Science Classrooms

    Directory of Open Access Journals (Sweden)

    Susan A. Yoon

    2016-12-01

    Full Text Available We present a curriculum and instruction framework for computer-supported teaching and learning about complex systems in high school science classrooms. This work responds to a need in K-12 science education research and practice for the articulation of design features for classroom instruction that can address the Next Generation Science Standards (NGSS recently launched in the USA. We outline the features of the framework, including curricular relevance, cognitively rich pedagogies, computational tools for teaching and learning, and the development of content expertise, and provide examples of how the framework is translated into practice. We follow this up with evidence from a preliminary study conducted with 10 teachers and 361 students, aimed at understanding the extent to which students learned from the activities. Results demonstrated gains in students’ complex systems understanding and biology content knowledge. In interviews, students identified influences of various aspects of the curriculum and instruction framework on their learning.

  16. Theory of computational complexity

    CERN Document Server

    Du, Ding-Zhu

    2011-01-01

    DING-ZHU DU, PhD, is a professor in the Department of Computer Science at the University of Minnesota. KER-I KO, PhD, is a professor in the Department of Computer Science at the State University of New York at Stony Brook.

  17. Complex matrix multiplication operations with data pre-conditioning in a high performance computing architecture

    Science.gov (United States)

    Eichenberger, Alexandre E; Gschwind, Michael K; Gunnels, John A

    2014-02-11

    Mechanisms for performing a complex matrix multiplication operation are provided. A vector load operation is performed to load a first vector operand of the complex matrix multiplication operation to a first target vector register. The first vector operand comprises a real and imaginary part of a first complex vector value. A complex load and splat operation is performed to load a second complex vector value of a second vector operand and replicate the second complex vector value within a second target vector register. The second complex vector value has a real and imaginary part. A cross multiply add operation is performed on elements of the first target vector register and elements of the second target vector register to generate a partial product of the complex matrix multiplication operation. The partial product is accumulated with other partial products and a resulting accumulated partial product is stored in a result vector register.
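
    The load-splat and cross-multiply-add pattern in this record can be sketched in plain NumPy (an illustration of the data movement only, not the patented vector instructions; the interleaved storage layout and function name are assumptions):

```python
import numpy as np

def complex_mul_splat(a_ri, b):
    """Multiply an interleaved complex vector a = [re0, im0, re1, im1, ...]
    by one complex scalar b: splat (replicate) b's real and imaginary parts,
    then combine a and its re/im-swapped copy with a +/- sign pattern."""
    br = np.full_like(a_ri, b.real)                # splat of the real part
    bi = np.full_like(a_ri, b.imag)                # splat of the imaginary part
    a_swap = a_ri.reshape(-1, 2)[:, ::-1].ravel()  # [im0, re0, im1, re1, ...]
    sign = np.tile([-1.0, 1.0], a_ri.size // 2)    # re: -im*bi, im: +re*bi
    return a_ri * br + sign * a_swap * bi          # cross multiply-add

a = np.array([1.0, 2.0])             # the complex number 1+2j, interleaved
print(complex_mul_splat(a, 3 + 4j))  # [-5. 10.] == (1+2j) * (3+4j)
```

    A full matrix-multiply kernel would accumulate many such partial products into a result register, as the record describes.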

  18. Progress in Computational Complexity Theory

    Institute of Scientific and Technical Information of China (English)

    Jin-Yi Cai; Hong Zhu

    2005-01-01

    We briefly survey a number of important recent achievements in Theoretical Computer Science (TCS), especially Computational Complexity Theory. We discuss the PCP Theorem and its implications for inapproximability of combinatorial optimization problems; space-bounded computations, especially the deterministic logspace algorithm for the undirected graph connectivity problem; the deterministic polynomial-time primality test; lattice complexity and worst-case to average-case reductions; pseudorandomness and extractor constructions; and Valiant's new theory of holographic algorithms and reductions.

  19. The Effects of Limiters on High Resolution Computations of Hypersonic Flows over Bodies with Complex Shapes

    Institute of Scientific and Technical Information of China (English)

    Bo ZHENG; Chun-Hian LEE

    1998-01-01

    The effects of certain limiters used in TVD-type schemes on the resolution of numerical computations of hypersonic flows are investigated. An explicit TVD scheme of Harten-Yee type with a velocity-dependent entropy correction function is employed in the computations. Numerical experiments for hypersonic inviscid as well as viscous flows over a double ellipsoid show that the limiters affect the numerical results substantially, and may even cause the solutions to diverge.
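
    The role of a limiter is easiest to see in a one-dimensional toy. Below is a minimal MUSCL/TVD advection scheme in Python with a minmod limiter (an illustrative stand-in, not the Harten-Yee scheme of the paper; the grid, CFL number and initial data are made up). Setting `s = 0` recovers first-order upwind, while an unlimited slope reintroduces oscillations at the discontinuities, which is the kind of limiter sensitivity the paper studies.

```python
import numpy as np

def minmod(a, b):
    """Minmod limiter: the smaller-magnitude slope, or zero at extrema."""
    return np.where(a * b > 0, np.where(np.abs(a) < np.abs(b), a, b), 0.0)

# 1D linear advection u_t + c u_x = 0 on a periodic domain, second-order
# MUSCL reconstruction with limited slopes (TVD for 0 <= nu <= 1).
c, nx, cfl = 1.0, 200, 0.5
x = np.linspace(0.0, 1.0, nx, endpoint=False)
dx = x[1] - x[0]
dt = cfl * dx / c
nu = c * dt / dx
u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)  # square wave stresses the limiter

for _ in range(200):
    s = minmod(np.roll(u, -1) - u, u - np.roll(u, 1))  # limited slopes
    u_face = u + 0.5 * (1.0 - nu) * s                  # right-face value (c > 0)
    flux = c * u_face
    u -= dt / dx * (flux - np.roll(flux, 1))           # conservative update
```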

  20. Advances in computational complexity theory

    CERN Document Server

    Cai, Jin-Yi

    1993-01-01

    This collection of recent papers on computational complexity theory grew out of activities during a special year at DIMACS. With contributions by some of the leading experts in the field, this book is of lasting value in this fast-moving field, providing expositions not found elsewhere. Although aimed primarily at researchers in complexity theory and graduate students in mathematics or computer science, the book is accessible to anyone with an undergraduate education in mathematics or computer science. By touching on some of the major topics in complexity theory, this book sheds light on this burgeoning area of research.

  21. Computational Complexity in Electronic Structure

    CERN Document Server

    Whitfield, James D; Aspuru-Guzik, Alan

    2012-01-01

    In quantum chemistry, the price paid by all known efficient model chemistries is either the truncation of the Hilbert space or uncontrolled approximations. Theoretical computer science suggests that these restrictions are not mere shortcomings of the algorithm designers and programmers but could stem from the inherent difficulty of simulating quantum systems. Extensions of computer science and information processing exploiting quantum mechanics have led to new ways of understanding the ultimate limitations of computational power. Interestingly, this perspective helps us understand widely used model chemistries in a new light. In this article, the fundamentals of computational complexity are reviewed and motivated from the vantage point of chemistry. Then recent results from the computational complexity literature regarding common model chemistries, including Hartree-Fock and density functional theory, are discussed.

  22. Software Accelerates Computing Time for Complex Math

    Science.gov (United States)

    2014-01-01

    Ames Research Center awarded Newark, Delaware-based EM Photonics Inc. SBIR funding to utilize graphics processing unit (GPU) technology - traditionally used for computer video games - to develop high-performance computing software called CULA. The software gives users the ability to run complex algorithms on personal computers with greater speed. As a result of the NASA collaboration, the number of employees at the company has increased 10 percent.

  23. Computational models of complex systems

    CERN Document Server

    Dabbaghian, Vahid

    2014-01-01

    Computational and mathematical models provide us with the opportunities to investigate the complexities of real world problems. They allow us to apply our best analytical methods to define problems in a clearly mathematical manner and exhaustively test our solutions before committing expensive resources. This is made possible by assuming parameter(s) in a bounded environment, allowing for controllable experimentation, not always possible in live scenarios. For example, simulation of computational models allows the testing of theories in a manner that is both fundamentally deductive and experimental in nature. The main ingredients for such research ideas come from multiple disciplines and the importance of interdisciplinary research is well recognized by the scientific community. This book provides a window to the novel endeavours of the research communities to present their works by highlighting the value of computational modelling as a research tool when investigating complex systems. We hope that the reader...

  24. Complex computation in the retina

    Science.gov (United States)

    Deshmukh, Nikhil Rajiv

    Elucidating the general principles of computation in neural circuits is a difficult problem requiring both a tractable model circuit as well as sophisticated measurement tools. This thesis advances our understanding of complex computation in the salamander retina and its underlying circuitry and furthers the development of advanced tools to enable detailed study of neural circuits. The retina provides an ideal model system for neural circuits in general because it is capable of producing complex representations of the visual scene, and both its inputs and outputs are accessible to the experimenter. Chapter 2 describes the biophysical mechanisms that give rise to the omitted stimulus response in retinal ganglion cells described in Schwartz et al., (2007) and Schwartz and Berry, (2008). The extra response to omitted flashes is generated at the input to bipolar cells, and is separable from the characteristic latency shift of the OSR apparent in ganglion cells, which must occur downstream in the circuit. Chapter 3 characterizes the nonlinearities at the first synapse of the ON pathway in response to high contrast flashes and develops a phenomenological model that captures the effect of synaptic activation and intracellular signaling dynamics on flash responses. This work is the first attempt to model the dynamics of the poorly characterized mGluR6 transduction cascade unique to ON bipolar cells, and explains the second lobe of the biphasic flash response. Complementary to the study of neural circuits, recent advances in wafer-scale photolithography have made possible new devices to measure the electrical and mechanical properties of neurons. Chapter 4 reports a novel piezoelectric sensor that facilitates the simultaneous measurement of electrical and mechanical signals in neural tissue. This technology could reveal the relationship between the electrical activity of neurons and their local mechanical environment, which is critical to the study of mechanoreceptors

  25. Computational complexity of topological invariants

    CERN Document Server

    Amann, Manuel

    2011-01-01

    We answer the following question posed by Lechuga: Given a simply-connected space $X$ with both $H_*(X,\mathbb{Q})$ and $\pi_*(X)\otimes\mathbb{Q}$ being finite-dimensional, what is the computational complexity of an algorithm computing the cup-length and the rational Lusternik--Schnirelmann category of $X$? Basically, by a reduction from the decision problem of whether a given graph is $k$-colourable (for $k\geq 3$), we show that (even stricter versions of the) problems above are $\mathbf{NP}$-hard.

  26. Complex three dimensional modelling of porous media using high performance computing and multi-scale incompressible approach

    Science.gov (United States)

    Martin, R.; Orgogozo, L.; Noiriel, C. N.; Guibert, R.; Golfier, F.; Debenest, G.; Quintard, M.

    2013-05-01

    In the context of biofilm growth in porous media, we developed high performance computing tools to study the impact of biofilms on fluid transport through the pores of a solid matrix. Biofilms are consortia of micro-organisms that develop in polymeric extracellular substances generally located at fluid-solid interfaces, such as pore interfaces in a water-saturated porous medium. Several applications of biofilms in porous media are encountered, for instance in bio-remediation methods, where they allow the dissolution of organic pollutants. Many theoretical studies have been done on the resulting effective properties of these modified media ([1], [2], [3]), but the bio-colonized porous media under consideration are mainly described by simplified theoretical media (stratified media, cubic networks of spheres, ...). Recent experimental advances have provided tomography images of bio-colonized porous media which allow us to observe realistic biofilm micro-structures inside porous media [4]. To solve the closure systems of equations related to upscaling procedures in realistic porous media, we compute the velocity field of fluids through pores on complex geometries that are described with a huge number of cells (up to billions). Calculations are made on a realistic 3D sample geometry obtained by X-ray micro-tomography. Cell volumes come from a percolation experiment performed to estimate the impact of precipitation processes on fluid transport properties in porous media [5]. Average permeabilities of the sample are obtained from the velocities by using MPI-based high performance computing on up to 1000 processors. The steady-state Stokes equations are solved using a finite volume approach. Relaxation pre-conditioning is introduced to accelerate the code further. Good weak and strong scaling is reached, with results obtained in hours instead of weeks. Acceleration factors of 20 up to 40 can be reached. Tens of geometries can now be

  27. Computational complexity of Boolean functions

    Energy Technology Data Exchange (ETDEWEB)

    Korshunov, Aleksei D [Sobolev Institute of Mathematics, Siberian Branch of the Russian Academy of Sciences, Novosibirsk (Russian Federation)

    2012-02-28

    Boolean functions are among the fundamental objects of discrete mathematics, especially in those of its subdisciplines which fall under mathematical logic and mathematical cybernetics. The language of Boolean functions is convenient for describing the operation of many discrete systems such as contact networks, Boolean circuits, branching programs, and some others. An important parameter of discrete systems of this kind is their complexity. This characteristic has been actively investigated starting from Shannon's works. There is a large body of scientific literature presenting many fundamental results. The purpose of this survey is to give an account of the main results over the last sixty years related to the complexity of computation (realization) of Boolean functions by contact networks, Boolean circuits, and Boolean circuits without branching. Bibliography: 165 titles.

  28. Bioinspired computation in combinatorial optimization: algorithms and their computational complexity

    DEFF Research Database (Denmark)

    Neumann, Frank; Witt, Carsten

    2012-01-01

    Bioinspired computation methods, such as evolutionary algorithms and ant colony optimization, are being applied successfully to complex engineering and combinatorial optimization problems, and it is very important that we understand the computational complexity of these algorithms. This tutorial...

  29. Computability-theoretic learning complexity.

    Science.gov (United States)

    Case, John; Kötzing, Timo

    2012-07-28

    Initially discussed are some of Alan Turing's wonderfully profound and influential ideas about mind and mechanism, including their connection to the main topic of the present study, which is within the field of computability-theoretic learning theory. Herein is investigated the part of this field concerned with the algorithmic, trial-and-error inference of eventually correct programs for functions from their data points. As to the main content of this study: in prior papers, beginning with the seminal work by Freivalds et al. in 1995, the notion of intrinsic complexity is used to analyse the learning complexity of sets of functions in a Gold-style learning setting. Herein are pointed out some weaknesses of this notion. Offered is an alternative based on epitomizing sets of functions - sets that are learnable under a given learning criterion, but not under other criteria that are not at least as powerful. To capture the idea of epitomizing sets, new reducibility notions are given based on robust learning (closure of learning under certain sets of computable operators). Various degrees of epitomizing sets are characterized as the sets complete with respect to corresponding reducibility notions. These characterizations also provide an easy method for showing sets to be epitomizers, and they are then employed to prove several sets to be epitomizing. Furthermore, a scheme is provided to generate easily very strong epitomizers for a multitude of learning criteria. These strong epitomizers are the so-called self-learning sets, previously applied by Case & Kötzing in 2010. They can be easily generated and employed in a myriad of settings to witness with certainty the strict separation in learning power between the criteria so epitomized and other, not as powerful criteria.

  30. A fully-coupled upwind discontinuous Galerkin method for incompressible porous media flows: High-order computations of viscous fingering instabilities in complex geometry

    Science.gov (United States)

    Scovazzi, G.; Huang, H.; Collis, S. S.; Yin, J.

    2013-11-01

    We present a new approach to the simulation of viscous fingering instabilities in incompressible, miscible displacement flows in porous media. In the past, high resolution computational simulations of viscous fingering instabilities have always been performed using high-order finite difference or Fourier-spectral methods, which do not possess the flexibility to compute very complex subsurface geometries. Our approach instead, by means of a fully-coupled nonlinear implementation of the discontinuous Galerkin method, possesses a fundamental differentiating feature, in that it maintains high-order accuracy on fully unstructured meshes. In addition, the proposed method shows very low sensitivity to mesh orientation, in contrast with the classical finite volume approximations used in porous media flow simulations. The robustness and accuracy of the method are demonstrated in a number of challenging computational problems.

  31. Computability, complexity, and languages: fundamentals of theoretical computer science

    CERN Document Server

    Davis, Martin D; Rheinboldt, Werner

    1983-01-01

    Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science provides an introduction to the various aspects of theoretical computer science. Theoretical computer science is the mathematical study of models of computation. This text is composed of five parts encompassing 17 chapters, and begins with an introduction to the use of proofs in mathematics and the development of computability theory in the context of an extremely simple abstract programming language. The succeeding parts demonstrate the performance of abstract programming language using a macro expa

  32. Structures, bonding and reactivity of iron and manganese high-valent metal-oxo complexes: A computational investigation

    Indian Academy of Sciences (India)

    Bhawana Pandey; Azaj Ansari; Nidhi Vyas; Gopalan Rajaraman

    2015-02-01

    Iron and manganese ions with terminal oxo and hydroxo ligands have been discovered as key intermediates in several synthetic and biochemical catalytic cycles. Because many of these species possess vigorous catalytic abilities, they are extremely transient in nature, and experiments probing the structure and bonding of such elusive species are still rare. We present here comprehensive computational studies on eight iron and manganese oxo and hydroxo species (Fe(III/IV/V)-O, Fe(III)-OH and Mn(III/IV/V)-O, Mn(III)-OH) using a dispersion-corrected (B3LYP-D2) density functional method. By computing all the possible spin states for these eight species, we set out to determine their ground-state S values; later, employing MO analysis, we analyse the bonding aspects which contribute to the high reactivity of these species. Direct structural comparisons between the iron- and manganese-oxo species are made, and the observed similarities and differences among them are attributed to the intricate metal-oxygen bonding. By thoroughly probing the bonding in all these species, their reactivity towards common chemical reactions such as C-H activation and oxygen atom transfer is discussed.

  33. Computational Steering with Reduced Complexity

    OpenAIRE

    Butnaru, Daniel

    2014-01-01

    Computational Steering increases the understanding of relationships between the output of a simulation and its parametrized input, such as boundary conditions, physical parameters, or domain geometry. Steering relies on a running simulation which delivers results to a visualization system. However, many simulation codes cannot deliver the required interactive results. For the first time, this work investigates the use of surrogate models to augment computational steering approaches. Based...

  34. Minimising Computational Complexity of the RRT Algorithm

    DEFF Research Database (Denmark)

    Svenstrup, Mikael; Bak, Thomas; Andersen, Hans Jørgen

    2011-01-01

    Sampling-based techniques for robot motion planning have become more widespread during the last decade. The algorithms, however, still struggle with, for example, narrow passages in the configuration space, and suffer from a high number of necessary samples, especially in higher dimensions. A widely used ... of time to generate better trajectories. The algorithm is based on subdividing the configuration space into boxes, where only specific boxes need to be searched to find the nearest neighbour. It is shown that the computational complexity is lowered from a theoretical point of view. The result...
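
    A minimal sketch of the box-subdivision idea in Python (one reading of the abstract, with an illustrative uniform grid; the paper's actual box structure and search order may differ): points are hashed into boxes of side `h`, and a nearest-neighbour query scans boxes in expanding rings around the query box, stopping once no unscanned box can contain a closer point.

```python
import numpy as np
from collections import defaultdict

class GridNN:
    """Nearest neighbour over uniform boxes of side h (assumes >= 1 point)."""
    def __init__(self, h):
        self.h, self.boxes = h, defaultdict(list)

    def insert(self, p):
        self.boxes[tuple((p // self.h).astype(int))].append(p)

    def nearest(self, q):
        cq = (q // self.h).astype(int)
        best, best_d, r = None, np.inf, 0
        # points in ring r lie at distance >= (r - 1) * h from q, so once
        # (r - 1) * h >= best_d no unscanned box can hold a closer point
        while best is None or (r - 1) * self.h < best_d:
            for off in np.ndindex(*(2 * r + 1,) * len(cq)):
                off = np.array(off) - r
                if np.max(np.abs(off)) != r:
                    continue                  # interior boxes: already scanned
                for p in self.boxes.get(tuple(cq + off), []):
                    d = np.linalg.norm(p - q)
                    if d < best_d:
                        best, best_d = p, d
            r += 1
        return best

rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, (1000, 2))
index = GridNN(h=0.1)
for p in pts:
    index.insert(p)
q = np.array([0.5, 0.5])
print(index.nearest(q))  # matches the brute-force nearest point
```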

  35. Ubiquitous Computing, Complexity and Culture

    DEFF Research Database (Denmark)

    The ubiquitous nature of mobile and pervasive computing has begun to reshape and complicate our notions of space, time, and identity. In this collection, over thirty internationally recognized contributors reflect on ubiquitous computing’s implications for the ways in which we interact with our e...

  36. Using an adaptive expertise lens to understand the quality of teachers' classroom implementation of computer-supported complex systems curricula in high school science

    Science.gov (United States)

    Yoon, Susan A.; Koehler-Yom, Jessica; Anderson, Emma; Lin, Joyce; Klopfer, Eric

    2015-05-01

    Background: This exploratory study is part of a larger-scale research project aimed at building theoretical and practical knowledge of complex systems in students and teachers with the goal of improving high school biology learning through professional development and a classroom intervention. Purpose: We propose a model of adaptive expertise to better understand teachers' classroom practices as they attempt to navigate myriad variables in the implementation of biology units that include working with computer simulations, and learning about and teaching through complex systems ideas. Sample: Research participants were three high school biology teachers, two females and one male, ranging in teaching experience from six to 16 years. Their teaching contexts also ranged in student achievement from 14-47% advanced science proficiency. Design and methods: We used a holistic multiple case study methodology and collected data during the 2011-2012 school year. Data sources include classroom observations, teacher and student surveys, and interviews. Data analyses and trustworthiness measures were conducted through qualitative mining of data sources and triangulation of findings. Results: We illustrate the characteristics of adaptive expertise of more or less successful teaching and learning when implementing complex systems curricula. We also demonstrate differences between case study teachers in terms of particular variables associated with adaptive expertise. Conclusions: This research contributes to scholarship on practices and professional development needed to better support teachers to teach through a complex systems pedagogical and curricular approach.

  37. Cloud Computing for Complex Performance Codes.

    Energy Technology Data Exchange (ETDEWEB)

    Appel, Gordon John [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Hadgu, Teklu [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Klein, Brandon Thorin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Miner, John Gifford [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    This report describes the use of cloud computing services for running complex public domain performance assessment problems. The work consisted of two phases: Phase 1 demonstrated that complex codes, on several differently configured servers, could run and compute trivial small-scale problems in a commercial cloud infrastructure. Phase 2 focused on proving that non-trivial large-scale problems could be computed in the commercial cloud environment. The cloud computing effort was successfully applied using codes of interest to the geohydrology and nuclear waste disposal modeling community.

  38. Complexity-aware high efficiency video coding

    CERN Document Server

    Correa, Guilherme; Agostini, Luciano; Cruz, Luis A da Silva

    2016-01-01

    This book discusses the computational complexity of High Efficiency Video Coding (HEVC) encoders, with coverage extending from the analysis of HEVC compression efficiency and computational complexity to the reduction and scaling of its encoding complexity. After an introduction to the topic and a review of the state-of-the-art research in the field, the authors provide a detailed analysis of the HEVC encoding tools' compression efficiency and computational complexity. Readers will benefit from a set of algorithms for scaling the computational complexity of HEVC encoders, all of which take advantage of the flexibility of the frame partitioning structures allowed by the standard. The authors also provide a set of early termination methods based on data mining and machine learning techniques, which are able to reduce the computational complexity required to find the best frame partitioning structures. The applicability of the proposed methods is finally exemplified with an encoding time control system that emplo...

  39. Metasynthetic computing and engineering of complex systems

    CERN Document Server

    Cao, Longbing

    2015-01-01

    Provides a comprehensive overview and introduction to the concepts, methodologies, analysis, design and applications of metasynthetic computing and engineering. The author: Presents an overview of complex systems, especially open complex giant systems such as the Internet, complex behavioural and social problems, and actionable knowledge discovery and delivery in the big data era. Discusses ubiquitous intelligence in complex systems, including human intelligence, domain intelligence, social intelligence, network intelligence, data intelligence and machine intelligence, and their synergy thro

  40. Computing the Exit Complexity of Knowledge in Distributed Quantum Computers

    Directory of Open Access Journals (Sweden)

    M.A.Abbas

    2013-01-01

    Full Text Available Distributed quantum computers suffer from the exit complexity of the knowledge. The exit complexity is the accrual of the nodal information needed to clarify the total egress system with respect to a distinguished exit node. The core objective of this paper is to compile a methodology for assessing the exit complexity of the knowledge in distributed quantum computers. The proposed methodology is based on contouring the knowledge using unlabeled binary trees, hence building a benchmarked, computer-based model that automatically calculates the exit complexity. The methodology consists of several stages, starting with detecting the dominant aspect of the tree, termed express knowledge, and then measuring the volume of information and the complexity of behavior arising from that information. The egress resulting from episodes that do not lead to the withdrawal of the information is then calculated. Finally, the total egress complexity is calculated and the total exit complexity of the system is appraised. Given the complexity of the operations within distributed quantum computing, this research addresses effective transactions that could affect the three-dimensional behavior of knowledge. The results show that the best case, in which the total exit complexity is as small as possible, is a binary tree whose positive and negative cardinal points take medium values. It could be argued that these cardinal points should not reach the upper bound apex or the minimum.

  41. Quantum Computational Complexity of Spin Glasses

    Science.gov (United States)

    2011-03-19

    We have studied the quantum computational complexity of the canonical problem of classical statistical mechanics: computation of the classical partition function Z. We have approached this problem using the Potts model, a connection between the enumerator polynomial from coding theory and Z, and the fact that there exists a quantum algorithm for efficiently estimating Gauss sums.

  42. Unified Computational Intelligence for Complex Systems

    CERN Document Server

    Seiffertt, John

    2010-01-01

    Computational intelligence encompasses a wide variety of techniques that allow computation to learn, to adapt, and to seek. That is, they may be designed to learn information without explicit programming regarding the nature of the content to be retained, they may be imbued with the functionality to adapt to maintain their course within a complex and unpredictably changing environment, and they may help us seek out truths about our own dynamics and lives through their inclusion in complex system modeling. These capabilities place our ability to compute in a category apart from our ability to e

  43. Computer Simulation of Interactions between High-Power Electromagnetic Fields and Electronic Systems in a Complex Environment.

    Science.gov (United States)

    1997-05-01

    protrusion. The radius of the circular cylinder is 5λ and the protrusion is 1λ wide and 1λ high. The monostatic radar cross section (RCS) is given in... [15] J. Baldauf, S. W. Lee, L. Lin, S. K. Jeng, S. M. Scarborough, and C. L. Yu, "High frequency scattering from trihedral corner reflectors and..." ...cylindrically conformal waveguide-fed slot arrays, such as the effects of curvature, slot thickness, and waveguide termination on the radar cross section.

  44. Why Philosophers Should Care About Computational Complexity

    CERN Document Server

    Aaronson, Scott

    2011-01-01

    One might think that, once we know something is computable, how efficiently it can be computed is a practical question with little further philosophical importance. In this essay, I offer a detailed case that one would be wrong. In particular, I argue that computational complexity theory---the field that studies the resources (such as time, space, and randomness) needed to solve computational problems---leads to new perspectives on the nature of mathematical knowledge, the strong AI debate, computationalism, the problem of logical omniscience, Hume's problem of induction and Goodman's grue riddle, the foundations of quantum mechanics, economic rationality, closed timelike curves, and several other topics of philosophical interest. I end by discussing aspects of complexity theory itself that could benefit from philosophical analysis.

  45. Computational Simulation of Complex Structure Fancy Yarns

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    A study is reported on the mathematical modelling and simulation of complex structure fancy yarns. The investigated complex structure fancy yarns have a multithread structure composed of three components - core, effect, and binder yarns. In the current research the precondition was accepted that the cross-sections of both yarns of the effect intermediate product in the complex structure fancy yarn remain circular, and that this shape does not change during manufacturing of the fancy yarn. A mathematical model of the complex structure fancy yarn is established based on the parametric equation of a space helix line, and computer simulation is further carried out using the computational mathematical tool Matlab 6.5. The theoretical structure of the fancy yarn is compared with an experimental sample. The simulation system would help to extend the set of information available when designing new assortments of complex structure fancy yarns and predicting the visual effects of fancy yarns in end-use fabrics.
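
    In the spirit of the model described (the paper uses Matlab 6.5; the sketch below is an equivalent in Python, with made-up radii and pitch), the centre line of the effect yarn wound around a straight core follows the parametric equations of a space helix:

```python
import numpy as np

# Parametric space-helix centre line of an effect yarn wound around a
# straight core yarn lying along the z axis (all dimensions illustrative).
r_core, r_effect, pitch = 0.5, 0.2, 2.0     # mm; hypothetical yarn dimensions
t = np.linspace(0.0, 10 * 2 * np.pi, 2000)  # parameter over 10 wraps
helix = np.column_stack([
    (r_core + r_effect) * np.cos(t),        # x(t)
    (r_core + r_effect) * np.sin(t),        # y(t)
    pitch * t / (2 * np.pi),                # z(t): advance one pitch per turn
])
# The yarn body is the circle of radius r_effect swept along this centre
# line; the cross-section is assumed circular, as in the paper's precondition.
```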

  46. DNA computing, computation complexity and problem of biological evolution rate.

    Science.gov (United States)

    Melkikh, Alexey V

    2008-12-01

    An analogy between the evolution of organisms and some complex computational problems (cryptosystem cracking, determination of the shortest path in a graph) is considered. It is shown that in the absence of a priori information about possible species of organisms such a problem is complex (is rated in the class NP) and cannot be solved in a polynomial number of steps. This conclusion suggests the need for re-examination of evolution mechanisms. Ideas of a deterministic approach to the evolution are discussed.

  47. Computational error and complexity in science and engineering

    CERN Document Server

    Lakshmikantham, Vangipuram; Chui, Charles K

    2005-01-01

    The book "Computational Error and Complexity in Science and Engineering” pervades all the science and engineering disciplines where computation occurs. Scientific and engineering computation happens to be the interface between the mathematical model/problem and the real world application. One needs to obtain good quality numerical values for any real-world implementation. Just mathematical quantities symbols are of no use to engineers/technologists. Computational complexity of the numerical method to solve the mathematical model, also computed along with the solution, on the other hand, will tell us how much computation/computational effort has been spent to achieve that quality of result. Anyone who wants the specified physical problem to be solved has every right to know the quality of the solution as well as the resources spent for the solution. The computed error as well as the complexity provide the scientific convincing answer to these questions. Specifically some of the disciplines in which the book w...

  48. Tuning complex computer code to data

    Energy Technology Data Exchange (ETDEWEB)

    Cox, D.; Park, J.S.; Sacks, J.; Singer, C.

    1992-01-01

    The problem of estimating parameters in a complex computer simulator of a nuclear fusion reactor from an experimental database is treated. Practical limitations do not permit a standard statistical analysis using nonlinear regression methodology. The assumption that the function giving the true theoretical predictions is a realization of a Gaussian stochastic process provides a statistical method for combining information from relatively few computer runs with information from the experimental database and making inferences on the parameters.
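
    A minimal sketch of the approach in Python (the fusion-reactor simulator is beyond a snippet, so a hypothetical cheap `simulator` stands in; the Gaussian-process surrogate and grid search below are illustrative, not the paper's estimator): fit a GP to a few simulator runs over inputs and parameters, then pick the parameter whose surrogate predictions best match the experimental database.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def simulator(x, theta):            # hypothetical expensive code
    return np.sin(3 * x) * theta

rng = np.random.default_rng(0)
X_design = rng.uniform(0, 1, (30, 2))                     # columns: (x, theta)
y_design = np.array([simulator(x, t) for x, t in X_design])
surrogate = GaussianProcessRegressor(kernel=RBF(0.3)).fit(X_design, y_design)

x_exp = rng.uniform(0, 1, 20)                             # experimental inputs
y_exp = simulator(x_exp, 0.7) + rng.normal(0, 0.02, 20)   # "true" theta = 0.7

thetas = np.linspace(0, 1, 101)
sse = [np.sum((surrogate.predict(np.column_stack([x_exp, np.full(20, t)])) - y_exp) ** 2)
       for t in thetas]
print("estimated theta:", thetas[np.argmin(sse)])         # close to 0.7
```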

  49. Computation of the Complex Probability Function

    Energy Technology Data Exchange (ETDEWEB)

    Trainer, Amelia Jo [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ledwith, Patrick John [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-22

    The complex probability function is important in many areas of physics, and many techniques have been developed in an attempt to compute it for some z quickly and efficiently. Most prominent are the methods that use Gauss-Hermite quadrature, which uses the roots of the nth-degree Hermite polynomial and corresponding weights to approximate the complex probability function. This document serves as an overview and discussion of the use, shortcomings, and potential improvements of the Gauss-Hermite quadrature for the complex probability function.
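
    For illustration, here is a minimal Gauss-Hermite evaluation of the complex probability (Faddeeva) function in Python, checked against SciPy's `wofz` (the node count and test point are arbitrary; this simple pole sum is the textbook quadrature and inherits the accuracy limitations discussed in the document, particularly near the real axis):

```python
import numpy as np
from scipy.special import wofz     # reference implementation of w(z)

def w_gauss_hermite(z, n=40):
    """Approximate w(z) = (i/pi) * integral of exp(-t^2) / (z - t) dt
    (valid for Im z > 0) with n-point Gauss-Hermite quadrature."""
    t, wt = np.polynomial.hermite.hermgauss(n)  # roots of H_n and weights
    return 1j / np.pi * np.sum(wt / (z - t))

z = 1.5 + 0.8j
print(w_gauss_hermite(z))  # quadrature approximation
print(wofz(z))             # SciPy reference value
```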

  50. Computational Biology and High Performance Computing 2000

    Energy Technology Data Exchange (ETDEWEB)

    Simon, Horst D.; Zorn, Manfred D.; Spengler, Sylvia J.; Shoichet, Brian K.; Stewart, Craig; Dubchak, Inna L.; Arkin, Adam P.

    2000-10-19

    The pace of extraordinary advances in molecular biology has accelerated in the past decade due in large part to discoveries coming from genome projects on human and model organisms. The advances in the genome project so far, happening well ahead of schedule and under budget, have exceeded any dreams by its protagonists, let alone formal expectations. Biologists expect the next phase of the genome project to be even more startling in terms of dramatic breakthroughs in our understanding of human biology, the biology of health and of disease. Only today can biologists begin to envision the necessary experimental, computational and theoretical steps necessary to exploit genome sequence information for its medical impact, its contribution to biotechnology and economic competitiveness, and its ultimate contribution to environmental quality. High performance computing has become one of the critical enabling technologies, which will help to translate this vision of future advances in biology into reality. Biologists are increasingly becoming aware of the potential of high performance computing. The goal of this tutorial is to introduce the exciting new developments in computational biology and genomics to the high performance computing community.

  11. High Performance Computing Today

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, Jack; Meuer,Hans; Simon,Horst D.; Strohmaier,Erich

    2000-04-01

    In the last 50 years, the field of scientific computing has seen rapid changes of vendors, architectures, technologies and usage of systems. Despite all these changes, the evolution of performance on a large scale seems to be a very steady and continuous process. Moore's Law is often cited in this context. If the authors plot the peak performance of the various computers of the last five decades that could have been called the supercomputers of their time (Figure 1), they indeed see how well this law holds for almost the complete lifespan of modern computing. On average, they see an increase in performance of two orders of magnitude every decade.

  12. Robust Multiparty Computation with Linear Communication Complexity

    DEFF Research Database (Denmark)

    Hirt, Martin; Nielsen, Jesper Buus

    2006-01-01

    We present a robust multiparty computation protocol. The protocol is for the cryptographic model with open channels and a poly-time adversary, and allows n parties to actively securely evaluate any poly-sized circuit with resilience t < n/2. The communication complexity in bits over the point-to-point channels is O(Snκ + nBC), where S is the size of the circuit being securely evaluated, κ is the security parameter and BC is the communication complexity of one broadcast of a κ-bit value. This means the average number of bits sent and received by a single party is O(Sκ + BC), which is almost independent of the number of participating parties. This is the first robust multiparty computation protocol with this property.

  13. Monotone Rank and Separations in Computational Complexity

    CERN Document Server

    Li, Yang D

    2011-01-01

    In the paper, we introduce the concept of monotone rank, and using it as a powerful tool, we obtain several important and strong separation results in computational complexity. We show a super-exponential separation between monotone and non-monotone computation in the non-commutative model, and thus give the answer to a longstanding open problem posed by Nisan \\cite{Nis1991} in algebraic complexity. More specifically, we exhibit a homogeneous algebraic function $f$ of degree $d$ ($d$ even) on $n$ variables with the monotone algebraic branching program (ABP) complexity $\\Omega(n^{d/2})$ and the non-monotone ABP complexity $O(d^2)$. We propose a relaxed version of the famous Bell's theorem\\cite{Bel1964}\\cite{CHSH1969}. Bell's theorem basically states that local hidden variable theory cannot predict the correlations produced by quantum mechanics, and therefore is an impossibility result. Bell's theorem heavily relies on the diversity of the measurements. We prove that even if we fix the measurement, infinite amo...

  14. High assurance services computing

    CERN Document Server

    2009-01-01

    Covers service-oriented technologies in different domains, including high assurance systems. Assists software engineers from industry and government laboratories who develop mission-critical software, and simultaneously provides academia with a practitioner's outlook on the problems of high-assurance software development.

  15. Integrated computational and conceptual solutions for complex environmental information management

    Science.gov (United States)

    Rückemann, Claus-Peter

    2016-06-01

    This paper presents the recent results of the integration of computational and conceptual solutions for the complex case of environmental information management. The major goal of creating and developing long-term multi-disciplinary knowledge resources with conceptual and computational support was achieved by implementing and integrating key components. These key components are: long-term knowledge resources providing the required structures for universal knowledge creation, documentation, and preservation; universal multi-disciplinary and multi-lingual conceptual knowledge and classification, especially references to the Universal Decimal Classification (UDC); sustainable workflows for environmental information management; and computational support for dynamical use, processing, and advanced scientific computing with Integrated Information and Computing System (IICS) components and High End Computing (HEC) resources.

  16. Is Computational Complexity a Barrier to Manipulation?

    CERN Document Server

    Walsh, Toby

    2010-01-01

    When agents are acting together, they may need a simple mechanism to decide on joint actions. One possibility is to have the agents express their preferences in the form of a ballot and use a voting rule to decide the winning action(s). Unfortunately, agents may try to manipulate such an election by misreporting their preferences. Fortunately, it has been shown that it is NP-hard to compute how to manipulate a number of different voting rules. However, NP-hardness only bounds the worst-case complexity. Recent theoretical results suggest that manipulation may often be easy in practice. To address this issue, I suggest studying empirically if computational complexity is in practice a barrier to manipulation. The basic tool used in my investigations is the identification of computational "phase transitions". Such an approach has been fruitful in identifying hard instances of propositional satisfiability and other NP-hard problems. I show that phase transition behaviour gives insight into the hardness of manipula...

  17. High-frequency power within the QRS complex in ischemic cardiomyopathy patients with ventricular arrhythmias: Insights from a clinical study and computer simulation of cardiac fibrous tissue.

    Science.gov (United States)

    Tsutsumi, Takeshi; Okamoto, Yoshiwo; Takano, Nami; Wakatsuki, Daisuke; Tomaru, Takanobu; Nakajima, Toshiaki

    2017-08-01

    The distribution of frequency power (DFP) within the QRS complex (QRS) is unclear. This study aimed to investigate the DFP within the QRS in ischemic cardiomyopathy (ICM) with lethal ventricular arrhythmias (L-VA). A computer simulation was performed to explore the mechanism of abnormal frequency power. The study included 31 ICM patients with and without L-VA (n = 10 and 21, respectively). We applied the continuous wavelet transform to measure the time-frequency power within the QRS. Integrated time-frequency power (ITFP) was measured within the frequency range of 5-300 Hz. The simulation model consisted of two-dimensional myocardial tissue intermingled with fibroblasts. We examined the relation between the frequency power calculated from the simulated QRS and the fibroblast-to-myocyte ratio (r) of the model. The frequency powers significantly increased from 180 to 300 Hz and from 5 to 15 Hz, and decreased from 45 to 80 Hz, in patients with ICM and L-VA compared with normal individuals. They increased from 110 Hz to 250 Hz in ICM alone. In the simulation, the high-frequency power increased when the ratio (r) was 2.0-2.5. Functional reentry was initiated when the ratio (r) increased to 2.0. Abnormal higher-frequency power (180-300 Hz) may provide arrhythmogenic signals in ICM with L-VA that may be associated with fibrous tissue proliferation. Copyright © 2017 Elsevier Ltd. All rights reserved.

  18. Computing the complexity for Schelling segregation models

    Science.gov (United States)

    Gerhold, Stefan; Glebsky, Lev; Schneider, Carsten; Weiss, Howard; Zimmermann, Burkhard

    2008-12-01

    The Schelling segregation models are "agent based" population models, where individual members of the population (agents) interact directly with other agents and move in space and time. In this note we study one-dimensional Schelling population models as finite dynamical systems. We define a natural notion of entropy which measures the complexity of the family of these dynamical systems. The entropy counts the asymptotic growth rate of the number of limit states. We find formulas and deduce precise asymptotics for the number of limit states, which enable us to explicitly compute the entropy.
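
    A toy version of such a computation (under an assumed simple contentment rule, not necessarily the exact model or entropy definition of the paper): enumerate binary configurations on a ring and count those in which every agent has at least half of its 2w nearest neighbours of its own type; the growth rate of this count with N is an entropy-like quantity.

      # Count "stable" configurations on a ring of N agents of two types, where an
      # agent is content if at least half of its 2*w nearest neighbours share its type.
      # The contentment rule and the notion of stability here are illustrative only.
      import itertools
      import math

      def is_stable(config, w=1):
          n = len(config)
          for i, t in enumerate(config):
              neigh = [config[(i + d) % n] for d in range(-w, w + 1) if d != 0]
              if 2 * sum(1 for x in neigh if x == t) < len(neigh):
                  return False
          return True

      for n in range(6, 15, 2):
          count = sum(is_stable(c) for c in itertools.product((0, 1), repeat=n))
          print(f"N={n:2d}  stable states={count:5d}  log(count)/N={math.log(count) / n:.3f}")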

  19. Factors Affecting Computer Anxiety in High School Computer Science Students.

    Science.gov (United States)

    Hayek, Linda M.; Stephens, Larry

    1989-01-01

    Examines factors related to computer anxiety measured by the Computer Anxiety Index (CAIN). Achievement in two programing courses was inversely related to computer anxiety. Students who had a home computer and had computer experience before high school had lower computer anxiety than those who had not. Lists 14 references. (YP)

  20. Grid computing in high energy physics

    CERN Document Server

    Avery, P

    2004-01-01

    Over the next two decades, major high energy physics (HEP) experiments, particularly at the Large Hadron Collider, will face unprecedented challenges to achieving their scientific potential. These challenges arise primarily from the rapidly increasing size and complexity of HEP datasets that will be collected and the enormous computational, storage and networking resources that will be deployed by global collaborations in order to process, distribute and analyze them. Coupling such vast information technology resources to globally distributed collaborations of several thousand physicists requires extremely capable computing infrastructures supporting several key areas: (1) computing (providing sufficient computational and storage resources for all processing, simulation and analysis tasks undertaken by the collaborations); (2) networking (deploying high speed networks to transport data quickly between institutions around the world); (3) software (supporting simple and transparent access to data and software r...

  1. Universality of Entanglement and Quantum Computation Complexity

    CERN Document Server

    Orus, R; Orus, Roman; Latorre, Jose I.

    2004-01-01

    We study the universality of scaling of entanglement in Shor's factoring algorithm and in adiabatic quantum algorithms across a quantum phase transition, for both the NP-complete Exact Cover problem and Grover's problem. The analytic result for Shor's algorithm shows a linear scaling of the entropy in terms of the number of qubits, thereby hindering the possibility of an efficient classical simulation protocol. A similar result is obtained numerically for the quantum adiabatic evolution Exact Cover algorithm, which also shows universality of the quantum phase transition that the system evolves near. On the other hand, entanglement in Grover's adiabatic algorithm remains a bounded quantity even at the critical point. A classification of the scaling of entanglement appears as a natural grading of the computational complexity of simulating quantum phase transitions.

  2. Zero-field splitting in pseudotetrahedral Co(II) complexes: a magnetic, high-frequency and -field EPR, and computational study.

    Science.gov (United States)

    Idešicová, Monika; Titiš, Ján; Krzystek, J; Boča, Roman

    2013-08-19

    Six pseudotetrahedral cobalt(II) complexes of the type [CoL2Cl2], with L = heterocyclic N-donor ligand, have been studied in parallel by magnetometry and high-frequency and -field electron paramagnetic resonance (HFEPR). HFEPR powder spectra were recorded in the 50 GHz < ν < 700 GHz range in a 17 T superconducting and a 25 T resistive magnet, which allowed construction of resonance field vs. frequency diagrams from which the fitting procedure yielded the S = 3/2 spin ground state Hamiltonian parameters. The sign of the axial anisotropy parameter D was determined unambiguously; the values range between -8 and +11 cm(-1) for the given series of complexes. These data agree well with the magnetometric analysis. Finally, quantum chemical ab initio calculations were performed on the whole series of complexes to probe the relationship between the magnetic anisotropy and the electronic and geometric structure.

  3. A complex network approach to cloud computing

    CERN Document Server

    Travieso, Gonzalo; Bruno, Odemir Martinez; Costa, Luciano da Fontoura

    2015-01-01

    Cloud computing has become an important means to speed up computing. One problem heavily influencing the performance of such systems is the choice of nodes as servers responsible for executing the users' tasks. In this article we report how complex networks can be used to model such a problem. More specifically, we investigate the processing performance of cloud systems underlain by Erdos-Renyi (ER) and Barabasi-Albert (BA) topologies containing two servers. Cloud networks involving two communities, not necessarily of the same size, are also considered in our analysis. The performance of each configuration is quantified in terms of two indices: the cost of communication between the user and the nearest server, and the balance of the distribution of tasks between the two servers. Regarding the latter index, the ER topology provides better performance than the BA case for smaller average degrees, and the opposite behavior for larger average degrees. With respect to the cost, smaller values are found in the BA ...

  4. Condor-COPASI: high-throughput computing for biochemical networks

    OpenAIRE

    Kent Edward; Hoops Stefan; Mendes Pedro

    2012-01-01

    Background: Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary experti...

  5. Complex optimization for big computational and experimental neutron datasets

    Science.gov (United States)

    Bao, Feng; Archibald, Richard; Niedziela, Jennifer; Bansal, Dipanshu; Delaire, Olivier

    2016-12-01

    We present a framework to use high performance computing to determine accurate solutions to the inverse optimization problem of big experimental data against computational models. We demonstrate how image processing, mathematical regularization, and hierarchical modeling can be used to solve complex optimization problems on big data. We also demonstrate how both model and data information can be used to further increase solution accuracy of optimization by providing confidence regions for the processing and regularization algorithms. We use the framework in conjunction with the software package SIMPHONIES to analyze results from neutron scattering experiments on silicon single crystals, and refine first principles calculations to better describe the experimental data.

  6. High-frequency complex pitch

    DEFF Research Database (Denmark)

    Santurette, Sébastien; Dau, Torsten

    2012-01-01

    Harmonics in a complex tone are typically considered unresolved when they interact with neighboring harmonics in the cochlea and cannot be heard out separately. Recent studies have suggested that the low pitch evoked by unresolved high-frequency harmonics may be coded via temporal fine-structure ...

  7. On the complexity of computing two nonlinearity measures

    DEFF Research Database (Denmark)

    Find, Magnus Gausdal

    2014-01-01

    We study the computational complexity of two Boolean nonlinearity measures: the nonlinearity and the multiplicative complexity. We show that if one-way functions exist, no algorithm can compute the multiplicative complexity in time 2^O(n) given the truth table of length 2^n; in fact, under the same...

  8. Thermodynamic cost of computation, algorithmic complexity and the information metric

    Science.gov (United States)

    Zurek, W. H.

    1989-01-01

    Algorithmic complexity is discussed as a computational counterpart to the second law of thermodynamics. It is shown that algorithmic complexity, which is a measure of randomness, sets limits on the thermodynamic cost of computations and casts a new light on the limitations of Maxwell's demon. Algorithmic complexity can also be used to define distance between binary strings.

  9. Low Computational Complexity Network Coding For Mobile Networks

    DEFF Research Database (Denmark)

    Heide, Janus

    2012-01-01

    Network Coding (NC) is a technique that can provide benefits in many types of networks. Some examples from wireless networks are: in relay networks, at either the physical or the data link layer, to reduce the number of transmissions; in reliable multicast, to reduce the amount of signaling and enable cooperation among receivers; in meshed networks, to simplify routing schemes and to increase robustness toward node failures. This thesis deals with implementation issues of one NC technique, namely Random Linear Network Coding (RLNC), which can be described as a highly decentralized, non-deterministic intra-flow coding technique. One of the key challenges of this technique is its inherent computational complexity, which can lead to high computational load and energy consumption, in particular on the mobile platforms that are the target platform in this work. To increase the coding throughput several...
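
    A minimal sketch of the coding operation behind RLNC and of the source of its computational cost, decoding by Gaussian elimination (field size, generation size and packet length below are arbitrary toy choices; practical codecs use larger fields and optimized finite-field arithmetic):

      # Toy Random Linear Network Coding over GF(2): encode a generation of packets
      # as random linear combinations, decode by Gaussian elimination.
      import numpy as np

      rng = np.random.default_rng(1)
      GEN_SIZE, PKT_LEN = 8, 16
      packets = rng.integers(0, 2, size=(GEN_SIZE, PKT_LEN), dtype=np.uint8)

      def encode(packets, n_coded):
          """Produce n_coded random GF(2) combinations and their coding vectors."""
          coeffs = rng.integers(0, 2, size=(n_coded, len(packets)), dtype=np.uint8)
          return coeffs, (coeffs @ packets) % 2

      def decode(coeffs, coded):
          """Gaussian elimination over GF(2); None if the coding vectors are singular."""
          a = np.concatenate([coeffs, coded], axis=1).astype(np.uint8)
          n = coeffs.shape[1]
          row = 0
          for col in range(n):
              pivot = next((r for r in range(row, len(a)) if a[r, col]), None)
              if pivot is None:
                  return None
              a[[row, pivot]] = a[[pivot, row]]
              for r in range(len(a)):
                  if r != row and a[r, col]:
                      a[r] ^= a[row]
              row += 1
          return a[:n, n:]

      # A couple of extra coded packets make full rank of the random coefficients likely.
      coeffs, coded = encode(packets, GEN_SIZE + 2)
      recovered = decode(coeffs, coded)
      print("decoded correctly:", recovered is not None and np.array_equal(recovered, packets))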

  10. The Computational Complexity of Evolving Systems

    NARCIS (Netherlands)

    Verbaan, P.R.A.

    2006-01-01

    Evolving systems are systems that change over time. Examples of evolving systems are computers with soft-and hardware upgrades and dynamic networks of computers that communicate with each other, but also colonies of cooperating organisms or cells within a single organism. In this research, several m

  11. Complex Data Modeling and Computationally Intensive Statistical Methods

    CERN Document Server

    Mantovan, Pietro

    2010-01-01

    Recent years have seen the advent and development of many devices able to record and store an ever-increasing amount of complex and high-dimensional data: 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real-time financial data, system control datasets. The analysis of this data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast-growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statistici...

  12. Multiscale modeling of complex materials phenomenological, theoretical and computational aspects

    CERN Document Server

    Trovalusci, Patrizia

    2014-01-01

    The papers in this volume deal with materials science, theoretical mechanics and experimental and computational techniques at multiple scales, providing a sound base and a framework for many applications which are hitherto treated in a phenomenological sense. The basic principles of multiscale modeling strategies for modern complex multiphase materials subjected to various types of mechanical and thermal loading and environmental effects are formulated. The focus is on problems where mechanics is highly coupled with other concurrent physical phenomena. Attention is also given to the historical origins of multiscale modeling and to the foundations of continuum mechanics currently adopted to model non-classical continua with substructure, for which internal length scales play a crucial role.

  13. Variable-Complexity Multidisciplinary Optimization on Parallel Computers

    Science.gov (United States)

    Grossman, Bernard; Mason, William H.; Watson, Layne T.; Haftka, Raphael T.

    1998-01-01

    This report covers work conducted under grant NAG1-1562 for the NASA High Performance Computing and Communications Program (HPCCP) from December 7, 1993, to December 31, 1997. The objective of the research was to develop new multidisciplinary design optimization (MDO) techniques which exploit parallel computing to reduce the computational burden of aircraft MDO. The design of the High-Speed Civil Transport (HSCT) aircraft was selected as a test case to demonstrate the utility of our MDO methods. The three major tasks of this research grant included: (1) development of parallel multipoint approximation methods for the aerodynamic design of the HSCT, (2) use of parallel multipoint approximation methods for structural optimization of the HSCT, and (3) mathematical and algorithmic development, including support in the integration of parallel computation for items (1) and (2). These tasks have been accomplished with the development of a response surface methodology that incorporates multi-fidelity models. For the aerodynamic design we were able to optimize with up to 20 design variables using hundreds of expensive Euler analyses together with thousands of inexpensive linear theory simulations. We have thereby demonstrated the application of CFD to a large aerodynamic design problem. For predicting the structural weight we were able to combine hundreds of structural optimizations of refined finite element models with thousands of optimizations based on coarse models. Computations have been carried out on the Intel Paragon with up to 128 nodes. The parallel computation allowed us to perform combined aerodynamic-structural optimization using state-of-the-art models of a complex aircraft configuration.

  14. Computer Controlled High Precise, High Voltage Pulse Generator

    Institute of Scientific and Technical Information of China (English)

    但果; 邹积岩; 丛吉远; 董恩源

    2003-01-01

    A high-precision, high-voltage pulse generator made up of high-power IGBTs and pulse transformers and controlled by a computer is described. The simple main circuit topology employed in this pulse generator reduces cost while still meeting the special requirements for pulsed electric fields (PEF) in food processing. The pulse generator utilizes a complex programmable logic device (CPLD) to generate trigger signals. Pulse frequency, pulse width and pulse number are controlled by a computer via the RS232 bus. The high voltage pulse generator is well suited to applications of the non-thermal effect of pulsed electric fields on fluid food, as its parameters can be increased and decreased with a step length of 1.

  15. International Symposium on Complex Computing-Networks

    CERN Document Server

    Sevgi, L; CCN2005; Complex computing networks: Brain-like and wave-oriented electrodynamic algorithms

    2006-01-01

    This book uniquely combines new advances in electromagnetic theory and circuits & systems theory. It integrates both fields with regard to computational aspects of common interest. Emphasized subjects are those methods which mimic brain-like and electrodynamic behaviour; among these are cellular neural networks, chaos and chaotic dynamics, attractor-based computation and stream ciphers. The book contains carefully selected contributions from the Symposium CCN2005. Pictures from the bestowal of Honorary Doctorate degrees to Leon O. Chua and Leopold B. Felsen are included.

  16. Perspectives in computational complexity the Somenath Biswas anniversary volume

    CERN Document Server

    Arvind, Vikraman

    2014-01-01

    This book brings together contributions by leading researchers in computational complexity theory written in honor of Somenath Biswas on the occasion of his sixtieth birthday. They discuss current trends and exciting developments in this flourishing area of research and offer fresh perspectives on various aspects of complexity theory. The topics covered include arithmetic circuit complexity, lower bounds and polynomial identity testing, the isomorphism conjecture, space-bounded computation, graph isomorphism, resolution and proof complexity, entropy and randomness. Several chapters have a tutorial flavor. The aim is to make recent research in these topics accessible to graduate students and senior undergraduates in computer science and mathematics. It can also be useful as a resource for teaching advanced level courses in computational complexity.

  17. Computational and analytical modeling of cationic lipid-DNA complexes.

    Science.gov (United States)

    Farago, Oded; Grønbech-Jensen, Niels

    2007-05-01

    We present a theoretical study of the physical properties of cationic lipid-DNA (CL-DNA) complexes--a promising synthetically based nonviral carrier of DNA for gene therapy. The study is based on a coarse-grained molecular model, which is used in Monte Carlo simulations of mesoscopically large systems over timescales long enough to address experimental reality. In the present work, we focus on the statistical-mechanical behavior of lamellar complexes, which in Monte Carlo simulations self-assemble spontaneously from a disordered random initial state. We measure the DNA-interaxial spacing, d(DNA), and the local cationic area charge density, sigma(M), for a wide range of values of the parameter (c) representing the fraction of cationic lipids. For weakly charged complexes (low values of (c)), we find that d(DNA) has a linear dependence on (c)(-1), which is in excellent agreement with x-ray diffraction experimental data. We also observe, in qualitative agreement with previous Poisson-Boltzmann calculations of the system, large fluctuations in the local area charge density with a pronounced minimum of sigma(M) halfway between adjacent DNA molecules. For highly-charged complexes (large (c)), we find moderate charge density fluctuations and observe deviations from linear dependence of d(DNA) on (c)(-1). This last result, together with other findings such as the decrease in the effective stretching modulus of the complex and the increased rate at which pores are formed in the complex membranes, are indicative of the gradual loss of mechanical stability of the complex, which occurs when (c) becomes large. We suggest that this may be the origin of the recently observed enhanced transfection efficiency of lamellar CL-DNA complexes at high charge densities, because the completion of the transfection process requires the disassembly of the complex and the release of the DNA into the cytoplasm. Some of the structural properties of the system are also predicted by a continuum

  18. Large-scale computing techniques for complex system simulations

    CERN Document Server

    Dubitzky, Werner; Schott, Bernard

    2012-01-01

    Complex systems modeling and simulation approaches are being adopted in a growing number of sectors, including finance, economics, biology, astronomy, and many more. Technologies ranging from distributed computing to specialized hardware are explored and developed to address the computational requirements arising in complex systems simulations. The aim of this book is to present a representative overview of contemporary large-scale computing technologies in the context of complex systems simulations applications. The intention is to identify new research directions in this field and

  19. Computing Hypercrossed Complex Pairings in Digital Images

    Directory of Open Access Journals (Sweden)

    Simge Öztunç

    2013-01-01

    We consider an additive group structure in digital images and introduce the commutator in digital images. Then we calculate the hypercrossed complex pairings which generate a normal subgroup in dimension 2 and in dimension 3 by using 8-adjacency and 26-adjacency.

  20. Computer simulation of complexity in plasmas

    Energy Technology Data Exchange (ETDEWEB)

    Hayashi, Takaya; Sato, Tetsuya [National Inst. for Fusion Science, Toki, Gifu (Japan)

    1998-08-01

    By making a comprehensive comparative study of many self-organizing phenomena occurring in magnetohydrodynamics and kinetic plasmas, we came up with a hypothetical grand view of self-organization. This assertion is confirmed by a recent computer simulation for a broader science field, specifically, the structure formation of short polymer chains, where the nature of the interaction is completely different from that of plasmas. It is found that the formation of the global orientation order proceeds stepwise. (author)

  1. Pentacoordinated organoaluminum complexes: A computational insight

    KAUST Repository

    Milione, Stefano

    2012-12-24

    The geometry and the electronic structure of a series of organometallic pentacoordinated aluminum complexes bearing tri- or tetradentate N,O-based ligands have been investigated with theoretical methods. The BP86, B3LYP, and M06 functionals reproduce with low accuracy the geometry of the selected complexes. The worst result was obtained for the complex bearing a Schiff base ligand with a pendant donor arm, aeimpAlMe2 (aeimp = N-2-(dimethylamino)ethyl-(3,5-di-tert-butyl)salicylaldimine). In particular, the Al-N(amine) bond distance was unacceptably overestimated. This failure suggests a reasonably flat potential energy surface with respect to Al-N elongation, indicating a weak interaction with probably a strong component of dispersion forces. MP2 and M06-2X methods led to an acceptable value for the same Al-N distance. Better results were obtained with the addition of the dispersion correction to the hybrid B3LYP functional (B3LYP-D). Natural bond orbital analysis revealed that the contribution of the d orbital to the bonding is very small, in agreement with several previous studies of hypervalent molecules. The donation of electronic charge from the ligand to the metal mainly consists of the interactions of the lone pairs on the donor atoms of the ligands with the s and p valence orbitals of the aluminum. The covalent bonding of the Al with the coordinated ligand is weak, and the interactions between Al and the coordinated ligands are largely ionic. To further explore the geometrical and electronic factors affecting the formation of these pentacoordinated aluminum complexes, we considered the tetracoordinated complex impAlMe2 (imp = N-isopropyl-(3,5-di-tert-butyl)salicylaldimine), analogous to aeimpAlMe2, and we investigated the potential energy surface around the aluminum atom corresponding to the approach of NMe3 to the metal center. At the MP2/6-31G(d) level of theory, a weak attraction was revealed only when NMe3 heads toward the metal center through the directions

  2. High Energy Computed Tomographic Inspection of Munitions

    Science.gov (United States)

    2016-11-01

    Technical Report AREIS-TR-16006 (AD-E403 815), final report, November 2016. High energy computed tomography (CT) is used to inspect munitions in ways that could not otherwise be accomplished by other nondestructive testing methods. Subject terms: radiography, high energy, computed tomography (CT).

  3. Reducing computational complexity of quantum correlations

    Science.gov (United States)

    Chanda, Titas; Das, Tamoghna; Sadhukhan, Debasis; Pal, Amit Kumar; SenDe, Aditi; Sen, Ujjwal

    2015-12-01

    We address the issue of reducing the resource required to compute information-theoretic quantum correlation measures such as quantum discord and quantum work deficit in two qubits and higher-dimensional systems. We show that determination of the quantum correlation measure is possible even if we utilize a restricted set of local measurements. We find that the determination allows us to obtain a closed form of quantum discord and quantum work deficit for several classes of states, with a low error. We show that the computational error caused by the constraint over the complete set of local measurements reduces fast with an increase in the size of the restricted set, implying usefulness of constrained optimization, especially with the increase of dimensions. We perform quantitative analysis to investigate how the error scales with the system size, taking into account a set of plausible constructions of the constrained set. Carrying out a comparative study, we show that the resource required to optimize quantum work deficit is usually higher than that required for quantum discord. We also demonstrate that minimization of quantum discord and quantum work deficit is easier in the case of two-qubit mixed states of fixed ranks and with positive partial transpose in comparison to the corresponding states having nonpositive partial transpose. Applying the methodology to quantum spin models, we show that the constrained optimization can be used with advantage in analyzing such systems in quantum information-theoretic language. For bound entangled states, we show that the error is significantly low when the measurements correspond to the spin observables along the three Cartesian coordinates, and thereby we obtain expressions of quantum discord and quantum work deficit for these bound entangled states.

  4. Computer science of the high performance; Informatica del alto rendimiento

    Energy Technology Data Exchange (ETDEWEB)

    Moraleda, A.

    2008-07-01

    High performance computing is taking shape as a powerful accelerator of the innovation process, drastically reducing the waiting times for access to results and findings in a growing number of processes and activities as complex and important as medicine, genetics, pharmacology, the environment, natural resources management or the simulation of complex processes in a wide variety of industries. (Author)

  5. High Performance Networks From Supercomputing to Cloud Computing

    CERN Document Server

    Abts, Dennis

    2011-01-01

    Datacenter networks provide the communication substrate for large parallel computer systems that form the ecosystem for high performance computing (HPC) systems and modern Internet applications. The design of new datacenter networks is motivated by an array of applications ranging from communication intensive climatology, complex material simulations and molecular dynamics to such Internet applications as Web search, language translation, collaborative Internet applications, streaming video and voice-over-IP. For both Supercomputing and Cloud Computing the network enables distributed applicati

  6. Computing support for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Avery, P.; Yelton, J. [Univ. of Florida, Gainesville, FL (United States)

    1996-12-01

    This computing proposal (Task S) is submitted separately but in support of the High Energy Experiment (CLEO, Fermilab, CMS) and Theory tasks. The authors have built a very strong computing base at Florida over the past 8 years. In fact, computing has been one of the main contributions to their experimental collaborations, involving not just computing capacity for running Monte Carlos and data reduction, but participation in many computing initiatives, industrial partnerships, computing committees and collaborations. These facts justify the submission of a separate computing proposal.

  7. Computational complexity for the two-point block method

    Science.gov (United States)

    See, Phang Pei; Majid, Zanariah Abdul

    2014-12-01

    In this paper, we discuss and compare the computational complexity of the two-point block method and the one-point method of Adams type. The computational complexity of both methods is determined based on the number of arithmetic operations performed and is expressed in O(n). These two methods are used to solve two-point second order boundary value problems directly and are implemented using a variable step size strategy adapted with the multiple shooting technique via a three-step iterative method. Two numerical examples are tested. The results show that the computational complexity of these methods is a reliable estimate of their cost in terms of execution time. We conclude that the two-point block method has better computational performance compared to the one-point method as the total number of steps grows larger.

  8. Statistical Mechanics of Classical and Quantum Computational Complexity

    Science.gov (United States)

    Laumann, C. R.; Moessner, R.; Scardicchio, A.; Sondhi, S. L.

    The quest for quantum computers is motivated by their potential for solving problems that defy existing, classical, computers. The theory of computational complexity, one of the crown jewels of computer science, provides a rigorous framework for classifying the hardness of problems according to the computational resources, most notably time, needed to solve them. Its extension to quantum computers allows the relative power of quantum computers to be analyzed. This framework identifies families of problems which are likely hard for classical computers ("NP-complete") and those which are likely hard for quantum computers ("QMA-complete") by indirect methods. That is, they identify problems of comparable worst-case difficulty without directly determining the individual hardness of any given instance. Statistical mechanical methods can be used to complement this classification by directly extracting information about particular families of instances—typically those that involve optimization—by studying random ensembles of them. These pose unusual and interesting (quantum) statistical mechanical questions and the results shed light on the difficulty of problems for large classes of algorithms as well as providing a window on the contrast between typical and worst case complexity. In these lecture notes we present an introduction to this set of ideas with older work on classical satisfiability and recent work on quantum satisfiability as primary examples. We also touch on the connection of computational hardness with the physical notion of glassiness.

  9. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware, in the form of Field Programmable Gate Arrays (FPGAs), in High Performance Computing (HPC) applications. It presents the latest developments in this field from the applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field, and entice new researchers and developers to join the HPRC community. The book includes: thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g. computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation; seven architecture chapters which...

  10. High performance computing for beam physics applications

    Science.gov (United States)

    Ryne, R. D.; Habib, S.

    Several countries are now involved in efforts aimed at utilizing accelerator-driven technologies to solve problems of national and international importance. These technologies have both economic and environmental implications. The technologies include waste transmutation, plutonium conversion, neutron production for materials science and biological science research, neutron production for fusion materials testing, fission energy production systems, and tritium production. All of these projects require a high-intensity linear accelerator that operates with extremely low beam loss. This presents a formidable computational challenge: One must design and optimize over a kilometer of complex accelerating structures while taking into account beam loss to an accuracy of 10 parts per billion per meter. Such modeling is essential if one is to have confidence that the accelerator will meet its beam loss requirement, which ultimately affects system reliability, safety and cost. At Los Alamos, the authors are developing a capability to model ultra-low loss accelerators using the CM-5 at the Advanced Computing Laboratory. They are developing PIC, Vlasov/Poisson, and Langevin/Fokker-Planck codes for this purpose. With slight modification, they have also applied their codes to modeling mesoscopic systems and astrophysical systems. In this paper, they will first describe HPC activities in the accelerator community. Then they will discuss the tools they have developed to model classical and quantum evolution equations. Lastly they will describe how these tools have been used to study beam halo in high current, mismatched charged particle beams.

  11. High-level language computer architecture

    CERN Document Server

    Chu, Yaohan

    1975-01-01

    High-Level Language Computer Architecture offers a tutorial on high-level language computer architecture, including von Neumann architecture and syntax-oriented architecture as well as direct and indirect execution architecture. Design concepts of Japanese-language data processing systems are discussed, along with the architecture of stack machines and the SYMBOL computer system. The conceptual design of a direct high-level language processor is also described.Comprised of seven chapters, this book first presents a classification of high-level language computer architecture according to the pr

  12. High-Productivity Computing in Computational Physics Education

    Science.gov (United States)

    Tel-Zur, Guy

    2011-03-01

    We describe the development of a new course in Computational Physics at Ben-Gurion University. This elective course for 3rd year undergraduates and MSc students is taught during one semester. Computational Physics is by now well accepted as the Third Pillar of Science. This paper's claim is that modern Computational Physics education should also deal with High-Productivity Computing. The traditional approach to teaching Computational Physics emphasizes "Correctness" and then "Accuracy", and we add also "Performance". Along with topics in Mathematical Methods and case studies in Physics, the course devotes a significant amount of time to "Mini-Courses" on topics such as: High-Throughput Computing - Condor, Parallel Programming - MPI and OpenMP, How to Build a Beowulf, Visualization, and Grid and Cloud Computing. The course intends to teach neither new physics nor new mathematics; it is focused on an integrated approach for solving problems, starting from the physics problem, the corresponding mathematical solution, the numerical scheme, writing an efficient computer code, and finally analysis and visualization.

  13. Quantifying uncertainty and computational complexity for pore-scale simulations

    Science.gov (United States)

    Chen, C.; Yuan, Z.; Wang, P.; Yang, X.; Zhenyan, L.

    2016-12-01

    Pore-scale simulation is an essential tool to understand the complex physical processes in many environmental problems, from multi-phase flow in the subsurface to fuel cells. In practice, however, factors such as sample heterogeneity, data sparsity and, in general, our insufficient knowledge of the underlying process render many simulation parameters, and hence the prediction results, uncertain. Meanwhile, most pore-scale simulations (in particular, direct numerical simulation) incur high computational cost due to finely-resolved spatio-temporal scales, which further limits our data/sample collection. To address those challenges, we propose a novel framework based on the generalized polynomial chaos (gPC) and build a surrogate model representing the essential features of the underlying system. Specifically, we apply the novel framework to analyze the uncertainties of the system behavior based on a series of pore-scale numerical experiments, such as flow and reactive transport in 2D heterogeneous porous media and 3D packed beds. Compared with recent pore-scale uncertainty quantification studies using Monte Carlo techniques, our new framework requires a smaller number of realizations and hence considerably reduces the overall computational cost, while maintaining the desired accuracy.
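
    A one-dimensional illustration of the gPC idea (a hedged sketch, not the authors' framework): project a toy model with a single standard-normal uncertain input onto probabilists' Hermite polynomials via Gauss-Hermite quadrature, giving a cheap surrogate together with its mean and variance.

      # One-dimensional gPC surrogate: project a toy model with a single standard
      # normal input onto probabilists' Hermite polynomials via Gauss-Hermite
      # quadrature. The "model", order and quadrature size are illustrative only.
      import numpy as np
      from numpy.polynomial.hermite import hermgauss        # weight exp(-t^2)
      from numpy.polynomial.hermite_e import hermeval       # probabilists' He_n
      from math import factorial, sqrt, pi

      def model(xi):
          """Stand-in for an expensive simulation with one uncertain input xi ~ N(0,1)."""
          return np.exp(0.3 * xi) + 0.1 * xi ** 2

      ORDER, NQUAD = 6, 40
      t, w = hermgauss(NQUAD)
      x = sqrt(2.0) * t                  # map quadrature nodes to N(0,1) samples
      fx = model(x)

      # Projection coefficients c_n = E[f(xi) He_n(xi)] / n!
      coeffs = np.array([
          np.sum(w * fx * hermeval(x, [0] * n + [1])) / sqrt(pi) / factorial(n)
          for n in range(ORDER + 1)
      ])

      def surrogate(xi):
          return hermeval(xi, coeffs)

      print("gPC mean    :", coeffs[0])
      print("gPC variance:", sum(coeffs[n] ** 2 * factorial(n) for n in range(1, ORDER + 1)))
      print("model vs surrogate at xi=0.7:", model(0.7), surrogate(0.7))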

  14. Deterministic Computational Complexity of the Quantum Separability Problem

    CERN Document Server

    Ioannou, L M

    2006-01-01

    Ever since entanglement was identified as a computational and cryptographic resource, effort has been made to find an efficient way to tell whether a given density matrix represents an unentangled, or separable, state. Essentially, this is the quantum separability problem. In Section 1, I begin with a brief introduction to bipartite separability and entanglement, and a basic formal definition of the quantum separability problem. I conclude with a summary of one-sided tests for separability, including those involving semidefinite programming. In Section 2, I treat the separability problem as a computational decision problem and motivate its approximate formulations. After a review of basic complexity-theoretic notions, I discuss the computational complexity of the separability problem (including a Turing-NP-complete formulation of the problem and a proof of "strong NP-hardness" (based on a new NP-hardness proof by Gurvits)). In Section 3, I give a comprehensive survey and complexity analysis of deterministic a...

  15. Evolution and development of complex computational systems using the paradigm of metabolic computing in Epigenetic Tracking

    Directory of Open Access Journals (Sweden)

    Alessandro Fontana

    2013-09-01

    Full Text Available Epigenetic Tracking (ET is an Artificial Embryology system which allows for the evolution and development of large complex structures built from artificial cells. In terms of the number of cells, the complexity of the bodies generated with ET is comparable with the complexity of biological organisms. We have previously used ET to simulate the growth of multicellular bodies with arbitrary 3-dimensional shapes which perform computation using the paradigm of ``metabolic computing''. In this paper we investigate the memory capacity of such computational structures and analyse the trade-off between shape and computation. We now plan to build on these foundations to create a biologically-inspired model in which the encoding of the phenotype is efficient (in terms of the compactness of the genome and evolvable in tasks involving non-trivial computation, robust to damage and capable of self-maintenance and self-repair.

  16. High-performance scientific computing

    CERN Document Server

    Berry, Michael W; Gallopoulos, Efstratios

    2012-01-01

    This book presents the state of the art in parallel numerical algorithms, applications, architectures, and system software. The book examines various solutions for issues of concurrency, scale, energy efficiency, and programmability, which are discussed in the context of a diverse range of applications. Features: includes contributions from an international selection of world-class authorities; examines parallel algorithm-architecture interaction through issues of computational capacity-based codesign and automatic restructuring of programs using compilation techniques; reviews emerging applic

  17. Propagation of computer virus both across the Internet and external computers: A complex-network approach

    Science.gov (United States)

    Gan, Chenquan; Yang, Xiaofan; Liu, Wanping; Zhu, Qingyi; Jin, Jian; He, Li

    2014-08-01

    Based on the assumption that external computers (particularly, infected external computers) are connected to the Internet, and by considering the influence of the Internet topology on computer virus spreading, this paper establishes a novel computer virus propagation model with a complex-network approach. This model possesses a unique (viral) equilibrium which is globally attractive. Some numerical simulations are also given to illustrate this result. Further study shows that the computers with higher node degrees are more susceptible to infection than those with lower node degrees. In this regard, some appropriate protective measures are suggested.
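
    The qualitative conclusion that nodes with higher degrees are more susceptible can be checked with a simple discrete-time SIS-style simulation on a scale-free graph (a toy sketch; the rates, network size and degree buckets below are arbitrary and the paper's actual model differs):

      # Toy discrete-time SIS-type "virus" process on a Barabasi-Albert network:
      # check that high-degree nodes spend more time infected than low-degree nodes.
      import networkx as nx
      import numpy as np

      rng = np.random.default_rng(0)
      G = nx.barabasi_albert_graph(1000, 3, seed=0)
      beta, delta, steps = 0.08, 0.2, 200      # infection rate, cure rate, time steps

      infected = rng.random(G.number_of_nodes()) < 0.05
      time_infected = np.zeros(G.number_of_nodes())

      for _ in range(steps):
          new_state = infected.copy()
          for node in G:
              if infected[node]:
                  if rng.random() < delta:                     # cured, back to susceptible
                      new_state[node] = False
              else:
                  k_inf = sum(infected[nb] for nb in G.neighbors(node))
                  if rng.random() < 1 - (1 - beta) ** k_inf:   # infected by a neighbor
                      new_state[node] = True
          infected = new_state
          time_infected += infected

      degrees = np.array([G.degree(n) for n in G])
      for lo, hi in [(3, 5), (6, 15), (16, 10**6)]:
          mask = (degrees >= lo) & (degrees <= hi)
          print(f"degree {lo}-{hi}: mean fraction of time infected "
                f"= {time_infected[mask].mean() / steps:.3f}")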

  18. Exponential rise of dynamical complexity in quantum computing through projections.

    Science.gov (United States)

    Burgarth, Daniel Klaus; Facchi, Paolo; Giovannetti, Vittorio; Nakazato, Hiromichi; Pascazio, Saverio; Yuasa, Kazuya

    2014-10-10

    The ability of quantum systems to host exponentially complex dynamics has the potential to revolutionize science and technology. Therefore, much effort has been devoted to developing protocols for computation, communication and metrology which exploit this scaling, despite formidable technical difficulties. Here we show that the mere frequent observation of a small part of a quantum system can turn its dynamics from a very simple one into an exponentially complex one, capable of universal quantum computation. After discussing examples, we go on to show that this effect is generally to be expected: almost any quantum dynamics becomes universal once 'observed' as outlined above. Conversely, we show that any complex quantum dynamics can be 'purified' into a simpler one in larger dimensions. We conclude by demonstrating that even local noise can lead to an exponentially complex dynamics.

  19. Settling the Complexity of Computing Two-Player Nash Equilibria

    CERN Document Server

    Chen, Xi; Teng, Shang-Hua

    2007-01-01

    We settle a long-standing open question in algorithmic game theory. We prove that Bimatrix, the problem of finding a Nash equilibrium in a two-player game, is complete for the complexity class PPAD (Polynomial Parity Argument, Directed version) introduced by Papadimitriou in 1991. This is the first of a series of results concerning the complexity of Nash equilibria. In particular, we prove the following theorems: Bimatrix does not have a fully polynomial-time approximation scheme unless every problem in PPAD is solvable in polynomial time. The smoothed complexity of the classic Lemke-Howson algorithm and, in fact, of any algorithm for Bimatrix is not polynomial unless every problem in PPAD is solvable in randomized polynomial time. Our results demonstrate that, even in the simplest form of non-cooperative games, equilibrium computation and approximation are polynomial-time equivalent to fixed point computation. Our results also have two broad complexity implications in mathematical economics and operations res...
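
    To make the object of these results concrete, a small exponential-time support-enumeration solver (an illustrative sketch; real solvers and the Lemke-Howson algorithm are considerably more involved) finds the mixed equilibrium of Matching Pennies:

      # Toy support-enumeration solver for a bimatrix game (exponential in the number
      # of strategies, in line with the hardness results above). Equal-size supports
      # only, which suffices for nondegenerate games; matrices are Matching Pennies.
      import itertools
      import numpy as np

      def solve_support(payoff, rows, cols):
          """Mix over `cols` giving every row in `rows` the same payoff v, with no row exceeding v."""
          k = len(cols)
          M = np.zeros((k + 1, k + 1))
          rhs = np.zeros(k + 1)
          M[:k, :k] = payoff[np.ix_(rows, cols)]
          M[:k, k] = -1.0                    # common payoff value v
          M[k, :k] = 1.0                     # probabilities sum to one
          rhs[k] = 1.0
          try:
              sol = np.linalg.solve(M, rhs)
          except np.linalg.LinAlgError:
              return None
          probs, v = sol[:k], sol[k]
          if np.any(probs < -1e-9):
              return None
          full = np.zeros(payoff.shape[1])
          full[list(cols)] = probs
          if np.any(payoff @ full > v + 1e-9):   # nothing outside the support beats v
              return None
          return full

      def nash_equilibria(A, B):
          m, n = A.shape
          for k in range(1, min(m, n) + 1):
              for rows in itertools.combinations(range(m), k):
                  for cols in itertools.combinations(range(n), k):
                      y = solve_support(A, rows, cols)       # column player's mix
                      x = solve_support(B.T, cols, rows)     # row player's mix
                      if x is not None and y is not None:
                          yield x, y

      A = np.array([[1.0, -1.0], [-1.0, 1.0]])   # Matching Pennies payoffs (row player)
      B = -A
      for x, y in nash_equilibria(A, B):
          print("row mix:", x, "column mix:", y)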

  20. Computational Biomathematics: Toward Optimal Control of Complex Biological Systems

    Science.gov (United States)

    2016-09-26

    Computational Biomathematics: Toward Optimal Control of Complex Biological Systems (final report; see attached). "... substantially lowered. Since the equations depend on what information we are interested in, automatic conversion of agent-based models to systems of ..."

  1. Numerical Methods of Computational Electromagnetics for Complex Inhomogeneous Systems

    Energy Technology Data Exchange (ETDEWEB)

    Cai, Wei

    2014-05-15

    Understanding electromagnetic phenomena is the key in many scientific investigations and engineering designs, such as solar cell design, the study of biological ion channels for diseases, and the creation of clean fusion energy, among other things. The objectives of the project are to develop high order numerical methods to simulate evanescent electromagnetic waves occurring in plasmon solar cells and biological ion channels, where local field enhancement within random media in the former and long range electrostatic interactions in the latter are among the major challenges for accurate and efficient numerical computations. We have accomplished these objectives by developing high order numerical methods for solving Maxwell equations, such as high order finite element bases for discontinuous Galerkin methods, a well-conditioned Nedelec edge element method, divergence free finite element bases for MHD, and fast integral equation methods for layered media. These methods can be used to model the complex local field enhancement in plasmon solar cells. On the other hand, to treat long range electrostatic interactions in ion channels, we have developed an image charge based method for a hybrid model combining atomistic electrostatics and continuum Poisson-Boltzmann electrostatics. Such a hybrid model will speed up the molecular dynamics simulation of transport in biological ion channels.

  2. Introduction to High Performance Scientific Computing

    OpenAIRE

    2016-01-01

    The field of high performance scientific computing lies at the crossroads of a number of disciplines and skill sets, and correspondingly, for someone to be successful at using high performance computing in science requires at least elementary knowledge of and skills in all these areas. Computations stem from an application context, so some acquaintance with physics and engineering sciences is desirable. Then, problems in these application areas are typically translated into linear algebraic, ...

  3. Complex system modelling and control through intelligent soft computations

    CERN Document Server

    Azar, Ahmad

    2015-01-01

    The book offers a snapshot of the theories and applications of soft computing in the area of complex systems modeling and control. It presents the most important findings discussed during the 5th International Conference on Modelling, Identification and Control, held in Cairo, from August 31-September 2, 2013. The book consists of twenty-nine selected contributions, which have been thoroughly reviewed and extended before their inclusion in the volume. The different chapters, written by active researchers in the field, report on both current theories and important applications of soft-computing. Besides providing the readers with soft-computing fundamentals, and soft-computing based inductive methodologies/algorithms, the book also discusses key industrial soft-computing applications, as well as multidisciplinary solutions developed for a variety of purposes, like windup control, waste management, security issues, biomedical applications and many others. It is a perfect reference guide for graduate students, r...

  4. Computational capacity and energy consumption of complex resistive switch networks

    Directory of Open Access Journals (Sweden)

    Jens Bürger

    2015-12-01

    Resistive switches are a class of emerging nanoelectronic devices that exhibit a wide variety of switching characteristics closely resembling behaviors of biological synapses. Assembled into random networks, such resistive switches produce emerging behaviors far more complex than those of individual devices. This was previously demonstrated in simulations that exploit information processing within these random networks to solve tasks that require nonlinear computation as well as memory. Physical assemblies of such networks manifest complex spatial structures and basic processing capabilities often related to biologically-inspired computing. We model and simulate random resistive switch networks and analyze their computational capacities. We provide a detailed discussion of the relevant design parameters and establish the link to the physical assemblies by relating the modeling parameters to physical parameters. More globally connected networks and an increased network switching activity are means to increase the computational capacity linearly at the expense of exponentially growing energy consumption. We discuss a new modular approach that exhibits higher computational capacities, and energy consumption growing linearly with the number of networks used. The results show how to optimize the trade-off between computational capacity and energy efficiency and are relevant for the design and fabrication of novel computing architectures that harness random assemblies of emerging nanodevices.

  5. China's High Performance Computer Standard Commission Established

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    China's High Performance Computer Standard Commission was established on March 28, 2007, under the guidance of the Science and Technology Bureau of the Ministry of Information Industry. It will prepare relevant professional standards on high performance computers to break the monopoly held in this field by foreign manufacturers and vendors.

  6. NINJA: Java for High Performance Numerical Computing

    Directory of Open Access Journals (Sweden)

    José E. Moreira

    2002-01-01

    Full Text Available When Java was first introduced, there was a perception that its many benefits came at a significant performance cost. In the particularly performance-sensitive field of numerical computing, initial measurements indicated a hundred-fold performance disadvantage between Java and more established languages such as Fortran and C. Although much progress has been made, and Java now can be competitive with C/C++ in many important situations, significant performance challenges remain. Existing Java virtual machines are not yet capable of performing the advanced loop transformations and automatic parallelization that are now common in state-of-the-art Fortran compilers. Java also has difficulties in implementing complex arithmetic efficiently. These performance deficiencies can be attacked with a combination of class libraries (packages, in Java) that implement truly multidimensional arrays and complex numbers, and new compiler techniques that exploit the properties of these class libraries to enable other, more conventional, optimizations. Two compiler techniques, versioning and semantic expansion, can be leveraged to allow fully automatic optimization and parallelization of Java code. Our measurements with the NINJA prototype Java environment show that Java can be competitive in performance with highly optimized and tuned Fortran code.

  7. High-throughput computing in the sciences.

    Science.gov (United States)

    Morgan, Mark; Grimshaw, Andrew

    2009-01-01

    While it is true that the modern computer is many orders of magnitude faster than that of yesteryear, this tremendous growth in CPU clock rates is now over. Unfortunately, however, the growth in demand for computational power has not abated; whereas researchers a decade ago could simply wait for computers to get faster, today the only solution to the growing need for more powerful computational resources lies in the exploitation of parallelism. Software parallelization falls generally into two broad categories--"true parallel" and high-throughput computing. This chapter focuses on the latter of these two types of parallelism. With high-throughput computing, users can run many copies of their software at the same time across many different computers. This technique for achieving parallelism is powerful in its ability to provide high degrees of parallelism, yet simple in its conceptual implementation. This chapter covers various patterns of high-throughput computing usage and the skills and techniques necessary to take full advantage of them. By utilizing numerous examples and sample codes and scripts, we hope to provide the reader not only with a deeper understanding of the principles behind high-throughput computing, but also with a set of tools and references that will prove invaluable as she explores software parallelism with her own software applications and research.
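
    The pattern described here is easy to prototype. The sketch below uses Python's standard library as a single-machine stand-in for a real high-throughput scheduler such as Condor; the analyse function and its inputs are hypothetical placeholders for a user's program and parameter sweep.

```python
# Minimal sketch of the high-throughput pattern described above: many
# independent copies of the same analysis run in parallel, one per input.
# A real HTC system would farm these tasks out to many machines; here
# ProcessPoolExecutor stands in for the scheduler on a single machine.
from concurrent.futures import ProcessPoolExecutor
import random

def analyse(seed: int) -> float:
    """Placeholder for one independent copy of the user's program."""
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(100_000)) / 100_000

if __name__ == "__main__":
    inputs = range(32)                   # one task per parameter value / seed
    with ProcessPoolExecutor() as pool:  # defaults to one worker per core
        results = list(pool.map(analyse, inputs))
    print(f"{len(results)} independent runs completed; "
          f"mean of means = {sum(results) / len(results):.4f}")
```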

  8. Computational complexity of ecological and evolutionary spatial dynamics.

    Science.gov (United States)

    Ibsen-Jensen, Rasmus; Chatterjee, Krishnendu; Nowak, Martin A

    2015-12-22

    There are deep, yet largely unexplored, connections between computer science and biology. Both disciplines examine how information proliferates in time and space. Central results in computer science describe the complexity of algorithms that solve certain classes of problems. An algorithm is deemed efficient if it can solve a problem in polynomial time, which means the running time of the algorithm is a polynomial function of the length of the input. There are classes of harder problems for which the fastest possible algorithm requires exponential time. Another criterion is the space requirement of the algorithm. There is a crucial distinction between algorithms that can find a solution, verify a solution, or list several distinct solutions in given time and space. The complexity hierarchy that is generated in this way is the foundation of theoretical computer science. Precise complexity results can be notoriously difficult. The famous question whether polynomial time equals nondeterministic polynomial time (i.e., P = NP) is one of the hardest open problems in computer science and all of mathematics. Here, we consider simple processes of ecological and evolutionary spatial dynamics. The basic question is: What is the probability that a new invader (or a new mutant) will take over a resident population? We derive precise complexity results for a variety of scenarios. We therefore show that some fundamental questions in this area cannot be answered by simple equations (assuming that P is not equal to NP).
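
    The basic question quoted above is easy to explore numerically even where exact analysis is hard. The sketch below is a plain Monte Carlo estimate of the fixation probability of a single mutant of relative fitness r under Moran birth-death updating on a cycle graph; the graph, fitness value and update rule are illustrative assumptions, not constructions from the paper.

```python
# Monte Carlo estimate of the probability that one mutant of relative
# fitness r takes over a resident population arranged on a cycle graph,
# under Moran birth-death updating.
import random

def fixation_probability(n=20, r=1.5, trials=1000, rng=random.Random(1)):
    neighbours = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}  # cycle graph
    fixed = 0
    for _ in range(trials):
        mutant = {rng.randrange(n)}                    # one random invader
        while 0 < len(mutant) < n:
            # reproducing node chosen proportionally to fitness
            weights = [r if i in mutant else 1.0 for i in range(n)]
            parent = rng.choices(range(n), weights=weights)[0]
            # offspring replaces a uniformly random neighbour
            child = rng.choice(neighbours[parent])
            if parent in mutant:
                mutant.add(child)
            else:
                mutant.discard(child)
        fixed += len(mutant) == n
    return fixed / trials

print(f"estimated fixation probability: {fixation_probability():.3f}")
```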

  9. The Computational Complexity of the Parallel Knock-Out Problem

    NARCIS (Netherlands)

    Broersma, H.J.; Johnson, M.; Paulusma, D.; Stewart, I.A.; Correa, J.R.; Hevia, A.; Kiwi, M.

    2006-01-01

    We consider computational complexity questions related to parallel knock-out schemes for graphs. In such schemes, in each round, each remaining vertex of a given graph eliminates exactly one of its neighbours. We show that the problem of whether, for a given graph, such a scheme can be found that el
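
    A single round of such a scheme is easy to simulate; the hard question studied in the paper is whether some sequence of choices eventually eliminates every vertex. The sketch below plays random rounds on a small example graph (a 4-cycle) purely to illustrate the mechanics; the random choice of targets is an assumption for illustration.

```python
# One parallel knock-out round: every surviving vertex simultaneously
# "fires at" exactly one surviving neighbour, and all targeted vertices
# are removed together at the end of the round.
import random

def knock_out_round(adj, alive, rng):
    targets = set()
    for v in alive:
        nbrs = [u for u in adj[v] if u in alive]
        if nbrs:                          # a vertex with no neighbours left fires at no one
            targets.add(rng.choice(nbrs))
    return alive - targets

# 4-cycle: 0-1-2-3-0
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
alive, rng = set(adj), random.Random(0)
for rnd in range(1, 5):
    alive = knock_out_round(adj, alive, rng)
    print(f"after round {rnd}: surviving vertices = {sorted(alive)}")
    if not alive:
        break
```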

  10. Development of Onboard Computer Complex for Russian Segment of ISS

    Science.gov (United States)

    Branets, V.; Brand, G.; Vlasov, R.; Graf, I.; Clubb, J.; Mikrin, E.; Samitov, R.

    1998-01-01

    The report presents a description of the Onboard Computer Complex (CC) that was developed during the period of 1994-1998 for the Russian Segment of ISS. The system was developed in co-operation with NASA and ESA. ESA developed a new computation system under the RSC Energia Technical Assignment, called DMS-R. The CC also includes elements developed by Russian experts and organizations. A general architecture of the computer system and the characteristics of primary elements of this system are described. The system was integrated at RSC Energia with the participation of American and European specialists. The report contains information on software simulators and on verification and debugging facilities which were developed for both stand-alone and integrated tests and verification. This CC serves as the basis for the Russian Segment Onboard Control Complex on ISS.

  11. Computer proficiency questionnaire: assessing low and high computer proficient seniors.

    Science.gov (United States)

    Boot, Walter R; Charness, Neil; Czaja, Sara J; Sharit, Joseph; Rogers, Wendy A; Fisk, Arthur D; Mitzner, Tracy; Lee, Chin Chin; Nair, Sankaran

    2015-06-01

    Computers and the Internet have the potential to enrich the lives of seniors and aid in the performance of important tasks required for independent living. A prerequisite for reaping these benefits is having the skills needed to use these systems, which is highly dependent on proper training. One prerequisite for efficient and effective training is being able to gauge current levels of proficiency. We developed a new measure (the Computer Proficiency Questionnaire, or CPQ) to measure computer proficiency in the domains of computer basics, printing, communication, Internet, calendaring software, and multimedia use. Our aim was to develop a measure appropriate for individuals with a wide range of proficiencies from noncomputer users to extremely skilled users. To assess the reliability and validity of the CPQ, a diverse sample of older adults, including 276 older adults with no or minimal computer experience, was recruited and asked to complete the CPQ. The CPQ demonstrated excellent reliability (Cronbach's α = .98), with subscale reliabilities ranging from .86 to .97. Age, computer use, and general technology use all predicted CPQ scores. Factor analysis revealed three main factors of proficiency related to Internet and e-mail use; communication and calendaring; and computer basics. Based on our findings, we also developed a short-form CPQ (CPQ-12) with similar properties but 21 fewer questions. The CPQ and CPQ-12 are useful tools to gauge computer proficiency for training and research purposes, even among low computer proficient older adults.
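
    The headline reliability figure above (Cronbach's α = .98) comes from a standard formula that is straightforward to compute. The sketch below evaluates it on made-up item scores; the synthetic data and the 10-item layout are illustrative assumptions, not the CPQ itself.

```python
# Cronbach's alpha for k items:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total score))
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: respondents x items matrix of item scores."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
ability = rng.normal(size=(200, 1))                  # latent proficiency of 200 respondents
items = ability + 0.5 * rng.normal(size=(200, 10))   # 10 correlated, noisy items
print(f"alpha = {cronbach_alpha(items):.2f}")
```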

  12. PRCA:A highly efficient computing architecture

    Institute of Scientific and Technical Information of China (English)

    Luo Xingguo

    2014-01-01

    Applications can only reach 8%-15% of utilization on modern computer systems. There are many obstacles to improving system efficiency. The root cause is the conflict between the fixed general computer architecture and the variable requirements of applications. Proactive reconfigurable computing architecture (PRCA) is proposed to improve computing efficiency. PRCA dynamically constructs an efficient computing architecture for a specific application via reconfigurable technology by perceiving the requirements, workload and utilization of computing resources. Proactive decision support system (PDSS), hybrid reconfigurable computing array (HRCA) and reconfigurable interconnect (RIC) are intensively researched as the key technologies. The principles of PRCA have been verified with four applications on a test bed. It is shown that PRCA is feasible and highly efficient.

  13. Taming Dynamical Complexity and Managing High Technology

    Institute of Scientific and Technical Information of China (English)

    FANGJin-qing; CHENGuan-rong; ZHAOGeng

    2003-01-01

    Variability is one of the most important features of complexity in complex networks and systems, which usually depends sensitively on small perturbations. Various possible competing behaviours in a system may provide great flexibility in regulating or taming dynamical complexity, through which the designer may be able to better select and manage a desired behaviour for a specific application. In many high-tech fields, how to regulate or manage complexity is a very important but challenging issue.

  14. NASA High-End Computing Program Website

    Science.gov (United States)

    Cohen, Jarrett S.

    2008-01-01

    If you are a NASA-sponsored scientist or engineer, computing time is available to you at the High-End Computing (HEC) Program's NASA Advanced Supercomputing (NAS) Facility and NASA Center for Computational Sciences (NCCS). The Science Mission Directorate will select from requests submitted to the e-Books online system for awards beginning on May 1. Current projects set to expire on April 30 must have a request in e-Books to be considered for renewal.

  16. Fast algorithm for computing complex number-theoretic transforms

    Science.gov (United States)

    Reed, I. S.; Liu, K. Y.; Truong, T. K.

    1977-01-01

    A high-radix FFT algorithm for computing transforms over GF(q^2), where q is a Mersenne prime, is developed to implement fast circular convolutions. This new algorithm requires substantially fewer multiplications than the conventional FFT.
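
    The idea behind transform-domain convolution can be illustrated in a few lines. The sketch below uses a naive O(N^2) number-theoretic transform over the small prime 257 to compute a circular convolution through pointwise products; it shows only the general principle, not the paper's high-radix algorithm over GF(q^2) with a Mersenne prime.

```python
# Circular convolution via a number-theoretic transform (NTT): transform,
# multiply pointwise, transform back. The transform here is a plain O(N^2)
# sum, chosen for clarity rather than speed.
p, N, w = 257, 8, 4                  # 4 has multiplicative order 8 modulo 257
w_inv, N_inv = pow(w, -1, p), pow(N, -1, p)

def ntt(a, root):
    return [sum(a[n] * pow(root, k * n, p) for n in range(N)) % p for k in range(N)]

def circular_convolution(a, b):
    A, B = ntt(a, w), ntt(b, w)
    C = [(x * y) % p for x, y in zip(A, B)]
    return [(c * N_inv) % p for c in ntt(C, w_inv)]   # inverse transform

a = [1, 2, 3, 4, 0, 0, 0, 0]
b = [5, 6, 7, 0, 0, 0, 0, 0]
direct = [sum(a[i] * b[(k - i) % N] for i in range(N)) % p for k in range(N)]
assert circular_convolution(a, b) == direct
print(circular_convolution(a, b))
```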

  17. Local algorithm for computing complex travel time based on the complex eikonal equation.

    Science.gov (United States)

    Huang, Xingguo; Sun, Jianguo; Sun, Zhangqing

    2016-04-01

    The traditional algorithm for computing the complex travel time, e.g., dynamic ray tracing method, is based on the paraxial ray approximation, which exploits the second-order Taylor expansion. Consequently, the computed results are strongly dependent on the width of the ray tube and, in regions with dramatic velocity variations, it is difficult for the method to account for the velocity variations. When solving the complex eikonal equation, the paraxial ray approximation can be avoided and no second-order Taylor expansion is required. However, this process is time consuming. In this case, we may replace the global computation of the whole model with local computation by taking both sides of the ray as curved boundaries of the evanescent wave. For a given ray, the imaginary part of the complex travel time should be zero on the central ray. To satisfy this condition, the central ray should be taken as a curved boundary. We propose a nonuniform grid-based finite difference scheme to solve the curved boundary problem. In addition, we apply the limited-memory Broyden-Fletcher-Goldfarb-Shanno technology for obtaining the imaginary slowness used to compute the complex travel time. The numerical experiments show that the proposed method is accurate. We examine the effectiveness of the algorithm for the complex travel time by comparing the results with those from the dynamic ray tracing method and the Gauss-Newton Conjugate Gradient fast marching method.

  19. Debugging a high performance computing program

    Science.gov (United States)

    Gooding, Thomas M.

    2013-08-20

    Methods, apparatus, and computer program products are disclosed for debugging a high performance computing program by gathering lists of addresses of calling instructions for a plurality of threads of execution of the program, assigning the threads to groups in dependence upon the addresses, and displaying the groups to identify defective threads.

  20. Component-based software for high-performance scientific computing

    Science.gov (United States)

    Alexeev, Yuri; Allan, Benjamin A.; Armstrong, Robert C.; Bernholdt, David E.; Dahlgren, Tamara L.; Gannon, Dennis; Janssen, Curtis L.; Kenny, Joseph P.; Krishnan, Manojkumar; Kohl, James A.; Kumfert, Gary; Curfman McInnes, Lois; Nieplocha, Jarek; Parker, Steven G.; Rasmussen, Craig; Windus, Theresa L.

    2005-01-01

    Recent advances in both computational hardware and multidisciplinary science have given rise to an unprecedented level of complexity in scientific simulation software. This paper describes an ongoing grass roots effort aimed at addressing complexity in high-performance computing through the use of Component-Based Software Engineering (CBSE). Highlights of the benefits and accomplishments of the Common Component Architecture (CCA) Forum and SciDAC ISIC are given, followed by an illustrative example of how the CCA has been applied to drive scientific discovery in quantum chemistry. Thrusts for future research are also described briefly.

  1. Molecular computing towards a novel computing architecture for complex problem solving

    CERN Document Server

    Chang, Weng-Long

    2014-01-01

    This textbook introduces a concise approach to the design of molecular algorithms for students or researchers who are interested in dealing with complex problems. Through numerous examples and exercises, you will understand the main difference between molecular circuits and traditional digital circuits when manipulating the same problem, and you will also learn how to design a molecular algorithm for solving a problem from start to finish. The book starts with an introduction to computational aspects of digital computers and molecular computing, data representation of molecular computing, molecular operations of molecular computing and number representation of molecular computing, and provides many molecular algorithms to construct the parity generator and the parity checker of error-detection codes on digital communication, to encode integers of different formats, single precision and double precision of floating-point numbers, to implement addition and subtraction of unsigned integers, to construct logic operations...

  2. Complex systems relationships between control, communications and computing

    CERN Document Server

    2016-01-01

    This book gives a wide-ranging description of the many facets of complex dynamic networks and systems within an infrastructure provided by integrated control and supervision: envisioning, design, experimental exploration, and implementation. The theoretical contributions and the case studies presented can reach control goals beyond those of stabilization and output regulation or even of adaptive control. Reporting on work of the Control of Complex Systems (COSY) research program, Complex Systems follows from and expands upon an earlier collection: Control of Complex Systems by introducing novel theoretical techniques for hard-to-control networks and systems. The major common feature of all the superficially diverse contributions encompassed by this book is that of spotting and exploiting possible areas of mutual reinforcement between control, computing and communications. These help readers to achieve not only robust stable plant system operation but also properties such as collective adaptivity, integrity an...

  3. Computational molecular basis for improved silica surface complexation models

    Energy Technology Data Exchange (ETDEWEB)

    Sahai, Nita; Rosso, Kevin M.

    2006-06-06

    The acidity and reactivity of surface sites on amorphous and crystalline polymorphs of silica and other oxides control their thermodynamic stability and kinetic reactivity towards reactants in surface-controlled processes of environmental, industrial, biomedical and technological relevance. Recent advances in computational methodologies such as CPMD and increasing computer power, combined with spectroscopic measurements, are now making it possible to link, with an impressive degree of accuracy, the molecular-level description of these processes to phenomenological surface complexation models. The future challenge now lies in linking mesoscale properties at the nanometer scale to phenomenological models that will afford a more intuitive understanding of the systems under consideration.

  4. Computer modeling of properties of complex molecular systems

    Energy Technology Data Exchange (ETDEWEB)

    Kulkova, E.Yu. [Moscow State University of Technology “STANKIN”, Vadkovsky per., 1, Moscow 101472 (Russian Federation); Khrenova, M.G.; Polyakov, I.V. [Lomonosov Moscow State University, Chemistry Department, Leninskie Gory 1/3, Moscow 119991 (Russian Federation); Nemukhin, A.V. [Lomonosov Moscow State University, Chemistry Department, Leninskie Gory 1/3, Moscow 119991 (Russian Federation); N.M. Emanuel Institute of Biochemical Physics, Russian Academy of Sciences, Kosygina 4, Moscow 119334 (Russian Federation)

    2015-03-10

    Large molecular aggregates present important examples of strongly nonhomogeneous systems. We apply combined quantum mechanics / molecular mechanics approaches that assume treatment of a part of the system by quantum-based methods and the rest of the system with conventional force fields. Herein we illustrate these computational approaches with two different examples: (1) large-scale molecular systems mimicking natural photosynthetic centers, and (2) components of prospective solar cells containing titanium dioxide and organic dye molecules. We demonstrate that modern computational tools are capable of predicting structures and spectra of such complex molecular aggregates.

  5. Federal High End Computing (HEC) Information Portal

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — This portal provides information about opportunities to engage in U.S. Federal government high performance computing activities, including supercomputer use,...

  6. High School Physics and the Affordable Computer.

    Science.gov (United States)

    Harvey, Norman L.

    1978-01-01

    Explains how the computer was used in a high school physics course; Project Physics program and individualized study PSSC physics program. Evaluates the capabilities and limitations of a $600 microcomputer system. (GA)

  7. Addendum to Computational Complexity and Black Hole Horizons

    CERN Document Server

    Susskind, Leonard

    2014-01-01

    In this addendum to [arXiv:1402.5674] two points are discussed. In the first, additional evidence is provided for a dual connection between the geometric length of an Einstein-Rosen bridge and the computational complexity of the quantum state of the dual CFT's. The relation between growth of complexity and Page's "Extreme Cosmic Censorship" principle is also remarked on. The second point involves a gedanken experiment in which Alice measures a complete set of commuting observables at her end of an Einstein-Rosen bridge. An apparent paradox is resolved by appealing to the properties of GHZ tripartite entanglement.

  8. Computational Complexity of Decoding Orthogonal Space-Time Block Codes

    CERN Document Server

    Ayanoglu, Ender; Karipidis, Eleftherios

    2009-01-01

    The computational complexity of optimum decoding is quantified for an orthogonal space-time block code G satisfying the orthogonality property that the Hermitian transpose of G multiplied by G equals a constant c times the sum of the squared symbols of the code times an identity matrix, where c is a positive integer. Four equivalent techniques of optimum decoding which have the same computational complexity are specified. Modifications to the basic formulation in special cases are calculated and illustrated by means of examples. This paper corrects and extends [1], [2], and unifies them with the results from the literature. In addition, a number of results from the literature are extended to the case c > 1.
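
    The orthogonality property in question is easy to verify numerically for the simplest such code, the 2x2 Alamouti code, for which c = 1. The sketch below checks that G^H G = (|s1|^2 + |s2|^2) I for arbitrary test symbols; it is this structure that lets optimum decoding decouple into per-symbol decisions.

```python
# Numerical check of G^H G = c * (sum of squared symbol magnitudes) * I
# for the 2x2 Alamouti orthogonal space-time block code (c = 1).
import numpy as np

def alamouti(s1: complex, s2: complex) -> np.ndarray:
    return np.array([[s1, s2],
                     [-np.conj(s2), np.conj(s1)]])

rng = np.random.default_rng(0)
s1, s2 = rng.normal(size=2) + 1j * rng.normal(size=2)   # arbitrary test symbols
G = alamouti(s1, s2)
gram = G.conj().T @ G
expected = (abs(s1) ** 2 + abs(s2) ** 2) * np.eye(2)
print(np.allclose(gram, expected))   # True: the code is orthogonal with c = 1
```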

  9. Computer models of complex multiloop branched pipeline systems

    Science.gov (United States)

    Kudinov, I. V.; Kolesnikov, S. V.; Eremin, A. V.; Branfileva, A. N.

    2013-11-01

    This paper describes the principal theoretical concepts of the method used for constructing computer models of complex multiloop branched pipeline networks; the method is based on the theory of graphs and Kirchhoff's two laws as applied to electrical circuits. The models make it possible to calculate velocities, flow rates, and pressures of a fluid medium in any section of pipeline networks, when the latter are considered as single hydraulic systems. On the basis of multivariant calculations the reasons for existing problems can be identified, the least costly methods of their elimination can be proposed, and recommendations for planning the modernization of pipeline systems and construction of their new sections can be made. The results obtained can be applied to complex pipeline systems intended for various purposes (water pipelines, petroleum pipelines, etc.). The operability of the model has been verified on an example of designing a unified computer model of the heat network for centralized heat supply of the city of Samara.
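
    The electrical-circuit analogy underlying such models can be shown in miniature. The sketch below assembles the flow-balance (Kirchhoff's first law) equations for a small toy network with made-up pipe conductances and boundary pressures, and solves for the unknown nodal pressures; real pipeline hydraulics is nonlinear, so this linearized example only illustrates the graph-and-Kirchhoff structure of the approach.

```python
# Treat each pipe as a conductance between two junctions, impose flow
# balance at every free node, and solve the resulting linear system.
import numpy as np

nodes = ["source", "A", "B", "sink"]
pipes = [("source", "A", 2.0), ("source", "B", 1.0),   # (from, to, conductance)
         ("A", "B", 0.5), ("A", "sink", 1.5), ("B", "sink", 2.5)]
fixed = {"source": 10.0, "sink": 0.0}                  # known boundary pressures

free = [n for n in nodes if n not in fixed]
idx = {n: i for i, n in enumerate(free)}
G = np.zeros((len(free), len(free)))
b = np.zeros(len(free))
for u, v, g in pipes:                                  # assemble flow balance
    for a, other in ((u, v), (v, u)):
        if a in idx:
            G[idx[a], idx[a]] += g
            if other in idx:
                G[idx[a], idx[other]] -= g
            else:
                b[idx[a]] += g * fixed[other]
pressures = dict(fixed, **dict(zip(free, np.linalg.solve(G, b))))
for u, v, g in pipes:
    print(f"flow {u}->{v}: {g * (pressures[u] - pressures[v]):+.2f}")
```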

  10. A Component Architecture for High-Performance Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, D E; Allan, B A; Armstrong, R; Bertrand, F; Chiu, K; Dahlgren, T L; Damevski, K; Elwasif, W R; Epperly, T W; Govindaraju, M; Katz, D S; Kohl, J A; Krishnan, M; Kumfert, G; Larson, J W; Lefantzi, S; Lewis, M J; Malony, A D; McInnes, L C; Nieplocha, J; Norris, B; Parker, S G; Ray, J; Shende, S; Windus, T L; Zhou, S

    2004-12-14

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  11. A Component Architecture for High-Performance Scientific Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, David E; Allan, Benjamin A; Armstrong, Robert C; Bertrand, Felipe; Chiu, Kenneth; Dahlgren, Tamara L; Damevski, Kostadin; Elwasif, Wael R; Epperly, Thomas G; Govindaraju, Madhusudhan; Katz, Daniel S; Kohl, James A; Krishnan, Manoj Kumar; Kumfert, Gary K; Larson, J Walter; Lefantzi, Sophia; Lewis, Michael J; Malony, Allen D; McInnes, Lois C; Nieplocha, Jarek; Norris, Boyana; Parker, Steven G; Ray, Jaideep; Shende, Sameer; Windus, Theresa L; Zhou, Shujia

    2006-07-03

    The Common Component Architecture (CCA) provides a means for software developers to manage the complexity of large-scale scientific simulations and to move toward a plug-and-play environment for high-performance computing. In the scientific computing context, component models also promote collaboration using independently developed software, thereby allowing particular individuals or groups to focus on the aspects of greatest interest to them. The CCA supports parallel and distributed computing as well as local high-performance connections between components in a language-independent manner. The design places minimal requirements on components and thus facilitates the integration of existing code into the CCA environment. The CCA model imposes minimal overhead to minimize the impact on application performance. The focus on high performance distinguishes the CCA from most other component models. The CCA is being applied within an increasing range of disciplines, including combustion research, global climate simulation, and computational chemistry.

  12. COMPUTER DATA ANALYSIS AND MODELING: COMPLEX STOCHASTIC DATA AND SYSTEMS

    OpenAIRE

    2010-01-01

    This collection of papers includes proceedings of the Ninth International Conference “Computer Data Analysis and Modeling: Complex Stochastic Data and Systems” organized by the Belarusian State University and held in September 2010 in Minsk. The papers are devoted to the topical problems: robust and nonparametric data analysis; statistical analysis of time series and forecasting; multivariate data analysis; design of experiments; statistical signal and image processing...

  13. Computational complexity of symbolic dynamics at the onset of chaos

    Science.gov (United States)

    Lakdawala, Porus

    1996-05-01

    In a variety of studies of dynamical systems, the edge of order and chaos has been singled out as a region of complexity. It was suggested by Wolfram, on the basis of qualitative behavior of cellular automata, that the computational basis for modeling this region is the universal Turing machine. In this paper, following a suggestion of Crutchfield, we try to show that the Turing machine model may often be too powerful as a computational model to describe the boundary of order and chaos. In particular we study the region of the first accumulation of period doubling in unimodal and bimodal maps of the interval, from the point of view of language theory. We show that in relation to the ``extended'' Chomsky hierarchy, the relevant computational model in the unimodal case is the nested stack automaton or the related indexed languages, while the bimodal case is modeled by the linear bounded automaton or the related context-sensitive languages.
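
    The symbol sequences this abstract refers to are concrete and easy to generate. The sketch below produces the L/R itinerary of the logistic map at a standard numerical approximation of the first period-doubling accumulation point; the parameter value and coding convention are the usual textbook ones, assumed here rather than taken from the paper.

```python
# Symbolic dynamics of a unimodal map at the period-doubling accumulation
# point: each iterate of x -> r*x*(1-x) is coded 'L' or 'R' according to
# which side of the critical point x = 1/2 it lands on.
R_INF = 3.5699456718709449    # approximate onset of chaos for the logistic map

def itinerary(r: float, n: int, x: float = 0.5) -> str:
    symbols = []
    for _ in range(n):
        x = r * x * (1.0 - x)
        symbols.append('L' if x < 0.5 else 'R')
    return ''.join(symbols)

# The itinerary at r_inf is not eventually periodic; it is the limit of the
# period-2^k kneading sequences whose structure the paper relates to
# indexed (nested-stack-automaton) languages.
print(itinerary(R_INF, 32))
```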

  14. The engineering design integration (EDIN) system. [digital computer program complex

    Science.gov (United States)

    Glatt, C. R.; Hirsch, G. N.; Alford, G. E.; Colquitt, W. N.; Reiners, S. J.

    1974-01-01

    A digital computer program complex for the evaluation of aerospace vehicle preliminary designs is described. The system consists of a Univac 1100 series computer and peripherals using the Exec 8 operating system, a set of demand access terminals of the alphanumeric and graphics types, and a library of independent computer programs. Modification of the partial run streams, data base maintenance and construction, and control of program sequencing are provided by a data manipulation program called the DLG processor. The executive control of library program execution is performed by the Univac Exec 8 operating system through a user established run stream. A combination of demand and batch operations is employed in the evaluation of preliminary designs. Applications accomplished with the EDIN system are described.

  15. Team-computer interfaces in complex task environments

    Energy Technology Data Exchange (ETDEWEB)

    Terranova, M.

    1990-09-01

    This research focused on the interfaces (media of information exchange) teams use to interact about the task at hand. This report is among the first to study human-system interfaces in which the human component is a team, and the system functions as part of the team. Two operators dynamically shared a simulated fluid flow process, coordinating control and failure detection responsibilities through computer-mediated communication. Different computer interfaces representing the same system information were used to affect the individual operators' mental models of the process. Communication was identified as the most critical variable, consequently future research is being designed to test effective modes of communication. The results have relevance for the development of team-computer interfaces in complex systems in which responsibility must be shared dynamically among all members of the operation.

  16. Computational complexity of nonequilibrium steady states of quantum spin chains

    Science.gov (United States)

    Marzolino, Ugo; Prosen, Tomaž

    2016-03-01

    We study nonequilibrium steady states (NESS) of spin chains with boundary Markovian dissipation from the computational complexity point of view. We focus on XX chains whose NESS are matrix product operators, i.e., with coefficients of a tensor operator basis described by transition amplitudes in an auxiliary space. Encoding quantum algorithms in the auxiliary space, we show that estimating expectations of operators, being local in the sense that each acts on disjoint sets of few spins covering all the system, provides answers to problems at least as hard as, and believed by many computer scientists to be much harder than, those solved by quantum computers. We draw conclusions on the hardness of the above estimations.

  17. Condor-COPASI: high-throughput computing for biochemical networks

    Directory of Open Access Journals (Sweden)

    Kent Edward

    2012-07-01

    Full Text Available Background: Mathematical modelling has become a standard technique to improve our understanding of complex biological systems. As models become larger and more complex, simulations and analyses require increasing amounts of computational power. Clusters of computers in a high-throughput computing environment can help to provide the resources required for computationally expensive model analysis. However, exploiting such a system can be difficult for users without the necessary expertise. Results: We present Condor-COPASI, a server-based software tool that integrates COPASI, a biological pathway simulation tool, with Condor, a high-throughput computing environment. Condor-COPASI provides a web-based interface, which makes it extremely easy for a user to run a number of model simulation and analysis tasks in parallel. Tasks are transparently split into smaller parts, and submitted for execution on a Condor pool. Result output is presented to the user in a number of formats, including tables and interactive graphical displays. Conclusions: Condor-COPASI can effectively use a Condor high-throughput computing environment to provide significant gains in performance for a number of model simulation and analysis tasks. Condor-COPASI is free, open source software, released under the Artistic License 2.0, and is suitable for use by any institution with access to a Condor pool. Source code is freely available for download at http://code.google.com/p/condor-copasi/, along with full instructions on deployment and usage.

  18. 3rd International Conference on High Performance Scientific Computing

    CERN Document Server

    Kostina, Ekaterina; Phu, Hoang; Rannacher, Rolf

    2008-01-01

    This proceedings volume contains a selection of papers presented at the Third International Conference on High Performance Scientific Computing held at the Hanoi Institute of Mathematics, Vietnamese Academy of Science and Technology (VAST), March 6-10, 2006. The conference has been organized by the Hanoi Institute of Mathematics, Interdisciplinary Center for Scientific Computing (IWR), Heidelberg, and its International PhD Program ``Complex Processes: Modeling, Simulation and Optimization'', and Ho Chi Minh City University of Technology. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and applications in practice. Subjects covered are mathematical modelling, numerical simulation, methods for optimization and control, parallel computing, software development, applications of scientific computing in physics, chemistry, biology and mechanics, environmental and hydrology problems, transport, logistics and site loca...

  19. Biological Computation as the Revolution of Complex Engineered Systems

    CERN Document Server

    Gómez-Cruz, Nelson Alfonso

    2011-01-01

    Given that there is no theoretical frame for complex engineered systems (CES) as yet, this paper claims that bio-inspired engineering can help provide such a frame. Within CES bio-inspired systems play a key role. The disclosure from bio-inspired systems and biological computation has not been sufficiently worked out, however. Biological computation is to be taken as the processing of information by living systems that is carried out in polynomial time, i.e., efficiently; such processing however is grasped by current science and research as an intractable problem (for instance, the protein folding problem). A remark is needed here: P versus NP problems should be well defined and delimited but biological computation problems are not. The shift from conventional engineering to bio-inspired engineering needs to bring the subject (or problem) of computability to a new level. Within the frame of computation, so far, the prevailing paradigm is still the Turing-Church thesis. In other words, conventional engineering...

  20. Complex data modeling and computationally intensive methods for estimation and prediction

    CERN Document Server

    Secchi, Piercesare; Advances in Complex Data Modeling and Computational Methods in Statistics

    2015-01-01

    The book is addressed to statisticians working at the forefront of the statistical analysis of complex and high dimensional data and offers a wide variety of statistical models, computer intensive methods and applications: network inference from the analysis of high dimensional data; new developments for bootstrapping complex data; regression analysis for measuring the downsize reputational risk; statistical methods for research on the human genome dynamics; inference in non-euclidean settings and for shape data; Bayesian methods for reliability and the analysis of complex data; methodological issues in using administrative data for clinical and epidemiological research; regression models with differential regularization; geostatistical methods for mobility analysis through mobile phone data exploration. This volume is the result of a careful selection among the contributions presented at the conference "S.Co.2013: Complex data modeling and computationally intensive methods for estimation and prediction" held...

  1. Dawning4000A high performance computer

    Institute of Scientific and Technical Information of China (English)

    SUN Ninghui; MENG Dan

    2007-01-01

    Dawning4000A is an AMD Opteron-based Linux cluster with 11.2 Tflops peak performance and 8.06 Tflops Linpack performance. It was developed for the Shanghai Supercomputer Center (SSC) as one of the computing power stations of the China National Grid (CNGrid) project. The Massively Cluster Computer (MCC) architecture is proposed to put added value on the industry-standard system. Several grid-enabling components were developed to support the running environment of the CNGrid. It is an achievement in building a high performance computer with a low-cost approach.

  2. Modeling Cu2+-Aβ complexes from computational approaches

    Science.gov (United States)

    Alí-Torres, Jorge; Mirats, Andrea; Maréchal, Jean-Didier; Rodríguez-Santiago, Luis; Sodupe, Mariona

    2015-09-01

    Amyloid plaque formation and oxidative stress are two key events in the pathology of Alzheimer's disease (AD), in which metal cations have been shown to play an important role. In particular, the interaction of the redox active Cu2+ metal cation with Aβ has been found to interfere in amyloid aggregation and to lead to reactive oxygen species (ROS). A detailed knowledge of the electronic and molecular structure of Cu2+-Aβ complexes is thus important to get a better understanding of the role of these complexes in the development and progression of AD. The computational treatment of these systems requires a combination of several available computational methodologies, because two fundamental aspects have to be addressed: the metal coordination sphere and the conformation adopted by the peptide upon copper binding. In this paper we review the main computational strategies used to deal with the Cu2+-Aβ coordination and build plausible Cu2+-Aβ models that will afterwards allow determining physicochemical properties of interest, such as their redox potential.

  3. A computational model for cancer growth by using complex networks

    Science.gov (United States)

    Galvão, Viviane; Miranda, José G. V.

    2008-09-01

    In this work we propose a computational model to investigate the proliferation of cancerous cells by using complex networks. In our model the network represents the structure of available space in the cancer propagation. The computational scheme considers a cancerous cell randomly included in the complex network. As the system evolves, the cells can assume three states: proliferative, non-proliferative, and necrotic. Our results were compared with experimental data obtained from three human lung carcinoma cell lines. The computational simulations show that the cancerous cells have a Gompertzian growth. Also, our model simulates the formation of necrosis, increase of density, and resource diffusion to regions of lower nutrient concentration. We find that cancer growth is very similar in random and small-world networks. On the other hand, the topological structure of the small-world network is more affected. The scale-free network has the largest rates of cancer growth due to hub formation. Finally, our results indicate that for different average degrees the rate of cancer growth is related to the available space in the network.
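
    A stripped-down version of this kind of model is easy to experiment with. In the sketch below, the nodes of a random graph are units of available space, a single seeded cell spreads to free neighbouring nodes, and cells with no free neighbours become non-proliferative. These simplified rules and the graph construction are illustrative assumptions, not the authors' model, which also includes necrosis and nutrient diffusion.

```python
# Toy growth model on a random network: empty nodes are available space,
# proliferative cells copy themselves into free neighbouring nodes, and
# cells with no free neighbours switch to the non-proliferative state.
import random

def grow(n=200, k=4, steps=60, rng=random.Random(0)):
    adj = {i: set() for i in range(n)}        # random graph: k links per node
    for i in range(n):
        for j in rng.sample(range(n), k):
            if i != j:
                adj[i].add(j)
                adj[j].add(i)
    state = {i: 'empty' for i in range(n)}
    state[rng.randrange(n)] = 'proliferative'  # single seeded cancerous cell
    history = []
    for _ in range(steps):
        for v in [v for v, s in state.items() if s == 'proliferative']:
            free = [u for u in adj[v] if state[u] == 'empty']
            if free:
                state[rng.choice(free)] = 'proliferative'
            else:
                state[v] = 'non-proliferative'
        history.append(sum(s != 'empty' for s in state.values()))
    return history

print(grow())   # occupied-node counts; growth saturates as free space runs out
```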

  4. Modeling Cu2+-Aβ complexes from computational approaches

    Directory of Open Access Journals (Sweden)

    Jorge Alí-Torres

    2015-09-01

    Full Text Available Amyloid plaque formation and oxidative stress are two key events in the pathology of Alzheimer's disease (AD), in which metal cations have been shown to play an important role. In particular, the interaction of the redox active Cu2+ metal cation with Aβ has been found to interfere in amyloid aggregation and to lead to reactive oxygen species (ROS). A detailed knowledge of the electronic and molecular structure of Cu2+-Aβ complexes is thus important to get a better understanding of the role of these complexes in the development and progression of AD. The computational treatment of these systems requires a combination of several available computational methodologies, because two fundamental aspects have to be addressed: the metal coordination sphere and the conformation adopted by the peptide upon copper binding. In this paper we review the main computational strategies used to deal with the Cu2+-Aβ coordination and build plausible Cu2+-Aβ models that will afterwards allow determining physicochemical properties of interest, such as their redox potential.

  5. Modeling Cu{sup 2+}-Aβ complexes from computational approaches

    Energy Technology Data Exchange (ETDEWEB)

    Alí-Torres, Jorge [Departamento de Química, Universidad Nacional de Colombia- Sede Bogotá, 111321 (Colombia); Mirats, Andrea; Maréchal, Jean-Didier; Rodríguez-Santiago, Luis; Sodupe, Mariona, E-mail: Mariona.Sodupe@uab.cat [Departament de Química, Universitat Autònoma de Barcelona, 08193 Bellaterra, Barcelona (Spain)

    2015-09-15

    Amyloid plaque formation and oxidative stress are two key events in the pathology of Alzheimer's disease (AD), in which metal cations have been shown to play an important role. In particular, the interaction of the redox active Cu{sup 2+} metal cation with Aβ has been found to interfere in amyloid aggregation and to lead to reactive oxygen species (ROS). A detailed knowledge of the electronic and molecular structure of Cu{sup 2+}-Aβ complexes is thus important to get a better understanding of the role of these complexes in the development and progression of AD. The computational treatment of these systems requires a combination of several available computational methodologies, because two fundamental aspects have to be addressed: the metal coordination sphere and the conformation adopted by the peptide upon copper binding. In this paper we review the main computational strategies used to deal with the Cu{sup 2+}-Aβ coordination and build plausible Cu{sup 2+}-Aβ models that will afterwards allow determining physicochemical properties of interest, such as their redox potential.

  6. Real Computation with Few Discrete Advice: A Complexity Theory of Nonuniform Computability

    CERN Document Server

    Ziegler, Martin

    2008-01-01

    It is folklore, particularly in numerical and computer sciences, that, instead of solving some general problem f: A -> B, additional structural information about the input x in A (that is, any kind of promise that x belongs to a certain subset A' of A) should be taken advantage of. Some examples from real number computation show that such discrete advice can even make the difference between computability and uncomputability. We turn this into both a topological and a combinatorial complexity theory of information, investigating for several practical problems how much advice is necessary and sufficient to render them computable. Specifically, finding a nontrivial solution to a homogeneous equation A*x = 0 for a given singular real n x n matrix A is possible when knowing rank(A) = 0, 1, ..., n-1; and we show this to be best possible!
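
    The flavour of the rank-advice result can be seen in a purely numerical toy. Given a singular matrix and its rank as advice, the sketch below reads a nontrivial solution of A x = 0 off the trailing right singular vectors. The paper's setting is exact real computation, where the rank itself is not computable without such discrete advice, so this floating-point illustration is only a loose analogy.

```python
# With rank(A) = r supplied as advice, the last n - r right singular
# vectors span the null space, so any of them is a nontrivial solution.
import numpy as np

def nontrivial_kernel_vector(A: np.ndarray, rank: int) -> np.ndarray:
    _, _, vt = np.linalg.svd(A)
    return vt[rank]          # first right singular vector beyond the `rank` leading ones

rng = np.random.default_rng(0)
B = rng.normal(size=(4, 3))
A = B @ rng.normal(size=(3, 4))          # 4x4 matrix of rank (at most) 3
x = nontrivial_kernel_vector(A, rank=3)
print(np.linalg.norm(A @ x))             # ~0: x is a nontrivial null vector
```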

  7. Federal Plan for High-End Computing. Report of the High-End Computing Revitalization Task Force (HECRTF)

    Science.gov (United States)

    2004-07-01

    ... and other energy feedstocks more efficiently. Signal Transduction Pathways: develop atomic-level computational models and simulations of complex ... biomolecules to explain and predict cell signal pathways and their disrupters; yield understanding of the initiation of cancer and other diseases and their ... calculations also introduce a requirement for a high degree of internodal connectivity (high bisection bandwidth). These needs cannot be met simply by ...

  8. High-performance computing for airborne applications

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Heather M [Los Alamos National Laboratory; Manuzzato, Andrea [Los Alamos National Laboratory; Fairbanks, Tom [Los Alamos National Laboratory; Dallmann, Nicholas [Los Alamos National Laboratory; Desgeorges, Rose [Los Alamos National Laboratory

    2010-06-28

    Recently, there have been attempts to move common satellite tasks to unmanned aerial vehicles (UAVs). UAVs are significantly cheaper to buy than satellites and easier to deploy on an as-needed basis. The more benign radiation environment also allows for an aggressive adoption of state-of-the-art commercial computational devices, which increases the amount of data that can be collected. There are a number of commercial computing devices currently available that are well-suited to high-performance computing. These devices range from specialized computational devices, such as field-programmable gate arrays (FPGAs) and digital signal processors (DSPs), to traditional computing platforms, such as microprocessors. Even though the radiation environment is relatively benign, these devices could be susceptible to single-event effects. In this paper, we present radiation data for high-performance computing devices in an accelerated neutron environment. These devices include a multi-core digital signal processor, two field-programmable gate arrays, and a microprocessor. From these results, we found that all of these devices are suitable for many airplane environments without reliability problems.

  9. Reducing the quantum-computing overhead with complex gate distillation

    Science.gov (United States)

    Duclos-Cianci, Guillaume; Poulin, David

    2015-04-01

    In leading fault-tolerant quantum-computing schemes, accurate transformations are obtained by a two-stage process. In a first stage, a discrete universal set of fault-tolerant operations is obtained by error-correcting noisy transformations and distilling resource states. In a second stage, arbitrary transformations are synthesized to desired accuracy by combining elements of this set into a circuit. Here we present a scheme that merges these two stages into a single one, directly distilling complex transformations. We find that our scheme can reduce the total overhead to realize certain gates by up to a few orders of magnitude. In contrast to other schemes, this efficient gate synthesis does not require computationally intensive compilation algorithms and a straightforward generalization of our scheme circumvents compilation and synthesis altogether.

  10. Kolmogorov complexities Kmax, Kmin on computable partially ordered sets

    CERN Document Server

    Ferbus-Zanda, Marie

    2008-01-01

    We introduce a machine-free mathematical framework to get a natural formalization of some general notions of infinite computation in the context of Kolmogorov complexity. Namely, we consider the classes Max^{X\to D}_{PR} and Max^{X\to D}_{Rec} of functions X \to D which are pointwise maxima of partial or total computable sequences of functions, where D = (D, <) is a computable partially ordered set. [...] We characterize the orders leading to each case. We also show that K^D_{min} and K^D_{max} cannot both be much smaller than K^D at any point. These results are proved in a more general setting with two orders on D, one extending the other.

  11. Improved Method of Blind Speech Separation with Low Computational Complexity

    Directory of Open Access Journals (Sweden)

    Kazunobu Kondo

    2011-01-01

    a frame-wise spectral soft mask method based on an interchannel power ratio of tentative separated signals in the frequency domain. The soft mask cancels the transfer function between sources and separated signals. A theoretical analysis of selection criteria and the soft mask is given. Performance and effectiveness are evaluated via source separation simulations and a computational estimate, and experimental results show the significantly improved performance of the proposed method. The segmental signal-to-noise ratio achieves 7 [dB] and 3 [dB], and the cepstral distortion achieves 1 [dB] and 2.5 [dB], in anechoic and reverberant conditions, respectively. Moreover, computational complexity is reduced by more than 80% compared with unmodified FDICA.
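
    The core masking idea referred to here, weighting each time-frequency bin by one channel's share of the total power, can be written down in a few lines. The sketch below applies that generic power-ratio soft mask to made-up spectrograms; the paper's actual mask, its derivation from tentative FDICA outputs and the transfer-function cancellation are more involved.

```python
# Generic interchannel power-ratio soft mask: each time-frequency bin of a
# tentative separated signal is weighted by its share of the total power.
import numpy as np

def power_ratio_masks(Y1: np.ndarray, Y2: np.ndarray, eps: float = 1e-12):
    """Y1, Y2: complex spectrograms (frequency bins x frames) of tentative outputs."""
    p1, p2 = np.abs(Y1) ** 2, np.abs(Y2) ** 2
    total = p1 + p2 + eps
    return p1 / total, p2 / total        # soft masks in [0, 1] per bin

rng = np.random.default_rng(0)
Y1 = rng.normal(size=(257, 100)) + 1j * rng.normal(size=(257, 100))
Y2 = 0.3 * (rng.normal(size=(257, 100)) + 1j * rng.normal(size=(257, 100)))
M1, M2 = power_ratio_masks(Y1, Y2)
refined1, refined2 = M1 * Y1, M2 * Y2    # masked (refined) separated spectra
print(M1.mean(), M2.mean())
```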

  12. Effective Computational Strategy for Predicting the Response of Complex Systems

    Science.gov (United States)

    1990-10-01

    OCR fragments of the report's front matter and reference list; the recoverable citation is Noor, A. K., "Strategies for Large-Scale Structural Problems on High-Performance Computers," Communications in Applied Numerical Methods, October 1990 (to appear).

  13. Parallel Computation of Persistent Homology using the Blowup Complex

    Energy Technology Data Exchange (ETDEWEB)

    Lewis, Ryan [Stanford Univ., CA (United States); Morozov, Dmitriy [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2015-04-27

    We describe a parallel algorithm that computes persistent homology, an algebraic descriptor of a filtered topological space. Our algorithm is distinguished by operating on a spatial decomposition of the domain, as opposed to a decomposition with respect to the filtration. We rely on a classical construction, called the Mayer--Vietoris blowup complex, to glue global topological information about a space from its disjoint subsets. We introduce an efficient algorithm to perform this gluing operation, which may be of independent interest, and describe how to process the domain hierarchically. We report on a set of experiments that help assess the strengths and identify the limitations of our method.

  14. Exact complexity: The spectral decomposition of intrinsic computation

    Science.gov (United States)

    Crutchfield, James P.; Ellison, Christopher J.; Riechers, Paul M.

    2016-03-01

    We give exact formulae for a wide family of complexity measures that capture the organization of hidden nonlinear processes. The spectral decomposition of operator-valued functions leads to closed-form expressions involving the full eigenvalue spectrum of the mixed-state presentation of a process's ɛ-machine causal-state dynamic. Measures include correlation functions, power spectra, past-future mutual information, transient and synchronization informations, and many others. As a result, a direct and complete analysis of intrinsic computation is now available for the temporal organization of finitary hidden Markov models and nonlinear dynamical systems with generating partitions and for the spatial organization in one-dimensional systems, including spin systems, cellular automata, and complex materials via chaotic crystallography.

  15. Complex of computer models for cold stress evaluation in water

    OpenAIRE

    І. I. Ermakova; N. G. Ivanushkina; A. Yu. Nikolaenko; Yu. N. Solopchuk

    2015-01-01

    Introduction. Due to the high thermal conductivity of water compared to air, a person's stay in cold water (water temperature lower than 25 °C) is associated with a high hazard to life and health. One way to evaluate the survival time of a human in water is to use statistical data about survivors and water temperature organized as tables and curves. Another method to evaluate the survival time and physiological state of a person in water is computer modelling of the human thermoregulatory system. ...

  16. High-performance computing in seismology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-09-01

    The scientific, technical, and economic importance of the issues discussed here presents a clear agenda for future research in computational seismology. In this way these problems will drive advances in high-performance computing in the field of seismology. There is a broad community that will benefit from this work, including the petroleum industry, research geophysicists, engineers concerned with seismic hazard mitigation, and governments charged with enforcing a comprehensive test ban treaty. These advances may also lead to new applications for seismological research. The recent application of high-resolution seismic imaging of the shallow subsurface for the environmental remediation industry is an example of this activity. This report makes the following recommendations: (1) focused efforts to develop validated documented software for seismological computations should be supported, with special emphasis on scalable algorithms for parallel processors; (2) the education of seismologists in high-performance computing technologies and methodologies should be improved; (3) collaborations between seismologists and computational scientists and engineers should be increased; (4) the infrastructure for archiving, disseminating, and processing large volumes of seismological data should be improved.

  17. A New Low Computational Complexity Sphere Decoding Algorithm

    CERN Document Server

    Li, Boyu

    2009-01-01

    The complexity of sphere decoding (SD) has been widely studied because the algorithm is vital in providing optimal Maximum Likelihood performance with low complexity. In this paper, we propose a proper tree search technique that reduces overall SD computational complexity without sacrificing performance. We build a check-table to pre-calculate and store some terms, temporarily store some mid-stage terms, and take advantage of a new lattice representation from our previous work. This method allows a significant reduction in the number of operations required to decode the transmitted symbols. We consider 2x2 and 4x4 systems employing 4-QAM and 64-QAM, and show that this approach achieves large gains in the average number of real multiplications and real additions, which range from 70% to 90% and 40% to 75% respectively, depending on the number of antennas and the constellation size of the modulation schemes. We also show that these complexity gains become greater when the system dimension and the modulation levels bec...

  18. Linear algebra on high-performance computers

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, J.J.; Sorensen, D.C.

    1986-01-01

    This paper surveys work recently done at Argonne National Laboratory in an attempt to discover ways to construct numerical software for high-performance computers. The numerical algorithms are taken from several areas of numerical linear algebra. We discuss certain architectural features of advanced-computer architectures that will affect the design of algorithms. The technique of restructuring algorithms in terms of certain modules is reviewed. This technique has proved successful in obtaining a high level of transportability without severe loss of performance on a wide variety of both vector and parallel computers. The module technique is demonstrably effective for dense linear algebra problems. However, in the case of sparse and structured problems it may be difficult to identify general modules that will be as effective. New algorithms have been devised for certain problems in this category. We present examples in three important areas: banded systems, sparse QR factorization, and symmetric eigenvalue problems. 32 refs., 10 figs., 6 tabs.

  19. High-order hydrodynamic algorithms for exascale computing

    Energy Technology Data Exchange (ETDEWEB)

    Morgan, Nathaniel Ray [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-02-05

    Hydrodynamic algorithms are at the core of many laboratory missions ranging from simulating ICF implosions to climate modeling. The hydrodynamic algorithms commonly employed at the laboratory and in industry (1) typically lack requisite accuracy for complex multi-material vortical flows and (2) are not well suited for exascale computing due to poor data locality and poor FLOP/memory ratios. Exascale computing requires advances in both computer science and numerical algorithms. We propose to research the second requirement and create a new high-order hydrodynamic algorithm that has superior accuracy, excellent data locality, and excellent FLOP/memory ratios. This proposal will impact a broad range of research areas including numerical theory, discrete mathematics, vorticity evolution, gas dynamics, interface instability evolution, turbulent flows, fluid dynamics and shock driven flows. If successful, the proposed research has the potential to radically transform simulation capabilities and help position the laboratory for computing at the exascale.

  20. High performance computing on vector systems

    CERN Document Server

    Roller, Sabine

    2008-01-01

    Presents the developments in high-performance computing and simulation on modern supercomputer architectures. This book covers trends in hardware and software development in general and specifically the vector-based systems and heterogeneous architectures. It presents innovative fields like coupled multi-physics or multi-scale simulations.

  1. Exact complexity: The spectral decomposition of intrinsic computation

    Energy Technology Data Exchange (ETDEWEB)

    Crutchfield, James P., E-mail: chaos@ucdavis.edu [Complexity Sciences Center and Department of Physics, University of California at Davis, One Shields Avenue, Davis, CA 95616 (United States); Ellison, Christopher J., E-mail: cellison@wisc.edu [Center for Complexity and Collective Computation, University of Wisconsin-Madison, Madison, WI 53706 (United States); Riechers, Paul M., E-mail: pmriechers@ucdavis.edu [Complexity Sciences Center and Department of Physics, University of California at Davis, One Shields Avenue, Davis, CA 95616 (United States)

    2016-03-06

    We give exact formulae for a wide family of complexity measures that capture the organization of hidden nonlinear processes. The spectral decomposition of operator-valued functions leads to closed-form expressions involving the full eigenvalue spectrum of the mixed-state presentation of a process's ϵ-machine causal-state dynamic. Measures include correlation functions, power spectra, past-future mutual information, transient and synchronization informations, and many others. As a result, a direct and complete analysis of intrinsic computation is now available for the temporal organization of finitary hidden Markov models and nonlinear dynamical systems with generating partitions and for the spatial organization in one-dimensional systems, including spin systems, cellular automata, and complex materials via chaotic crystallography. - Highlights: • We provide exact, closed-form expressions for a hidden stationary process' intrinsic computation. • These include information measures such as the excess entropy, transient information, and synchronization information and the entropy-rate finite-length approximations. • The method uses an epsilon-machine's mixed-state presentation. • The spectral decomposition of the mixed-state presentation relies on the recent development of meromorphic functional calculus for nondiagonalizable operators.

  2. High-performance computers for unmanned vehicles

    Science.gov (United States)

    Toms, David; Ettinger, Gil J.

    2005-10-01

    The present trend of increasing functionality onboard unmanned vehicles is made possible by rapid advances in high-performance computers (HPCs). An HPC is characterized by very high computational capability (100s of billions of operations per second) contained in lightweight, rugged, low-power packages. HPCs are critical to the processing of sensor data onboard these vehicles. Operations such as radar image formation, target tracking, target recognition, signal intelligence signature collection and analysis, electro-optic image compression, and onboard data exploitation are provided by these machines. The net effect of an HPC is to minimize communication bandwidth requirements and maximize mission flexibility. This paper focuses on new and emerging technologies in the HPC market. Emerging capabilities include new lightweight, low-power computing systems: multi-mission computing (using a common computer to support several sensors); onboard data exploitation; and large image data storage capacities. These new capabilities will enable an entirely new generation of deployed capabilities at reduced cost. New software tools and architectures available to unmanned vehicle developers will enable them to rapidly develop optimum solutions with maximum productivity and return on investment. These new technologies effectively open the trade space for unmanned vehicle designers.

  3. Coherence and computational complexity of quantifier-free dependence logic formulas

    NARCIS (Netherlands)

    Kontinen, J.; Kontinen, J.; Väänänen, J.

    2010-01-01

    We study the computational complexity of model checking for quantifier-free dependence logic (D) formulas. We point out three thresholds in the computational complexity: logarithmic space, non-deterministic logarithmic space and non-deterministic polynomial time.

  4. Studying an Eulerian Computer Model on Different High-performance Computer Platforms and Some Applications

    Science.gov (United States)

    Georgiev, K.; Zlatev, Z.

    2010-11-01

    The Danish Eulerian Model (DEM) is an Eulerian model for studying the transport of air pollutants on a large scale. Originally, the model was developed at the National Environmental Research Institute of Denmark. The model's computational domain covers Europe and neighbouring parts of the Atlantic Ocean, Asia and Africa. If the DEM is applied on fine grids, its discretization leads to a huge computational problem, which implies that a model such as DEM can only be run on high-performance computer architectures. The implementation and tuning of such a complex large-scale model on each different computer is a non-trivial task. Here, we present comparative results of running this model on different kinds of vector computers (CRAY C92A, Fujitsu, etc.), distributed-memory parallel computers (IBM SP, CRAY T3E, Beowulf clusters, Macintosh G4 clusters, etc.), shared-memory parallel computers (SGI Origin, SUN, etc.) and parallel computers with two levels of parallelism (IBM SMP, IBM BlueGene/P, clusters of multiprocessor nodes, etc.). The main idea in the parallel version of DEM is a domain-partitioning approach. We discuss the effective use of the caches and hierarchical memories of modern computers, as well as the performance, speed-ups and efficiency achieved. The parallel code of DEM, created using the MPI standard library, appears to be highly portable and shows good efficiency and scalability on different kinds of vector and parallel computers. Some important applications of the model output are briefly presented.
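
    A minimal sketch of the domain-partitioning idea, assuming a 1-D decomposition of a toy field and the mpi4py bindings (the actual DEM code, grids and chemistry are far more involved):

      # Run with, e.g.:  mpiexec -n 4 python partition_demo.py
      import numpy as np
      from mpi4py import MPI

      comm = MPI.COMM_WORLD
      rank, size = comm.Get_rank(), comm.Get_size()

      NX = 1200                                   # assumed number of global grid columns
      counts = [NX // size + (1 if r < NX % size else 0) for r in range(size)]
      start = sum(counts[:rank])
      field = np.sin(np.linspace(0.0, 2.0 * np.pi, NX))
      local = field[start:start + counts[rank]].copy()    # this rank's slice of the domain

      # Exchange halo values with neighbouring subdomains (sendrecv avoids deadlock).
      halo_left = halo_right = 0.0
      if rank > 0:
          halo_left = comm.sendrecv(local[0], dest=rank - 1, sendtag=0,
                                    source=rank - 1, recvtag=1)
      if rank < size - 1:
          halo_right = comm.sendrecv(local[-1], dest=rank + 1, sendtag=1,
                                     source=rank + 1, recvtag=0)
      # halo_left / halo_right would feed the advection stencil at the slice edges.

      total = comm.allreduce(float(np.sum(local)), op=MPI.SUM)
      if rank == 0:
          print("global sum of the field:", total)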

  5. Computational expression deconvolution in a complex mammalian organ

    Directory of Open Access Journals (Sweden)

    Master Stephen R

    2006-07-01

    Full Text Available Abstract Background Microarray expression profiling has been widely used to identify differentially expressed genes in complex cellular systems. However, while such methods can be used to directly infer intracellular regulation within homogeneous cell populations, interpretation of in vivo gene expression data derived from complex organs composed of multiple cell types is more problematic. Specifically, observed changes in gene expression may be due either to changes in gene regulation within a given cell type or to changes in the relative abundance of expressing cell types. Consequently, bona fide changes in intrinsic gene regulation may be either mimicked or masked by changes in the relative proportion of different cell types. To date, few analytical approaches have addressed this problem. Results We have chosen to apply a computational method for deconvoluting gene expression profiles derived from intact tissues by using reference expression data for purified populations of the constituent cell types of the mammary gland. These data were used to estimate changes in the relative proportions of different cell types during murine mammary gland development and Ras-induced mammary tumorigenesis. These computational estimates of changing compartment sizes were then used to enrich lists of differentially expressed genes for transcripts that change as a function of intrinsic intracellular regulation rather than shifts in the relative abundance of expressing cell types. Using this approach, we have demonstrated that adjusting mammary gene expression profiles for changes in three principal compartments – epithelium, white adipose tissue, and brown adipose tissue – is sufficient both to reduce false-positive changes in gene expression due solely to changes in compartment sizes and to reduce false-negative changes by unmasking genuine alterations in gene expression that were otherwise obscured by changes in compartment sizes. Conclusion By adjusting
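
    The compartment-adjustment idea can be sketched with a simple reference-based deconvolution: given expression profiles of the purified cell types, estimate their proportions in a bulk sample by non-negative least squares. The example below uses synthetic data and SciPy's nnls solver; it is only illustrative and is not the study's actual model or normalization.

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(42)
      n_genes = 500
      cell_types = ["epithelium", "white_adipose", "brown_adipose"]

      # Reference expression profiles of purified cell populations (one column per type).
      R = rng.gamma(shape=2.0, scale=50.0, size=(n_genes, len(cell_types)))

      true_props = np.array([0.6, 0.3, 0.1])
      bulk = R @ true_props + rng.normal(scale=5.0, size=n_genes)   # mixed-tissue profile

      coef, _ = nnls(R, bulk)
      est_props = coef / coef.sum()            # renormalize to proportions
      for name, p in zip(cell_types, est_props):
          print(f"{name}: {p:.3f}")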

  6. High-Degree Neurons Feed Cortical Computations.

    Directory of Open Access Journals (Sweden)

    Nicholas M Timme

    2016-05-01

    Full Text Available Recent work has shown that functional connectivity among cortical neurons is highly varied, with a small percentage of neurons having many more connections than others. Also, recent theoretical developments now make it possible to quantify how neurons modify information from the connections they receive. Therefore, it is now possible to investigate how information modification, or computation, depends on the number of connections a neuron receives (in-degree) or sends out (out-degree). To do this, we recorded the simultaneous spiking activity of hundreds of neurons in cortico-hippocampal slice cultures using a high-density 512-electrode array. This preparation and recording method combination produced large numbers of neurons recorded at temporal and spatial resolutions that are not currently available in any in vivo recording system. We utilized transfer entropy (a well-established method for detecting linear and nonlinear interactions in time series) and the partial information decomposition (a powerful, recently developed tool for dissecting multivariate information processing into distinct parts) to quantify computation between neurons where information flows converged. We found that computations did not occur equally in all neurons throughout the networks. Surprisingly, neurons that computed large amounts of information tended to receive connections from high out-degree neurons. However, the in-degree of a neuron was not related to the amount of information it computed. To gain insight into these findings, we developed a simple feedforward network model. We found that a degree-modified Hebbian wiring rule best reproduced the pattern of computation and degree correlation results seen in the real data. Interestingly, this rule also maximized signal propagation in the presence of network-wide correlations, suggesting a mechanism by which cortex could deal with common random background input. These are the first results to show that the extent to

  7. High Performance Numerical Computing for High Energy Physics: A New Challenge for Big Data Science

    Directory of Open Access Journals (Sweden)

    Florin Pop

    2014-01-01

    Full Text Available Modern physics is based on both theoretical analysis and experimental validation. Complex scenarios like subatomic dimensions, high energy, and lower absolute temperature are frontiers for many theoretical models. Simulation with stable numerical methods represents an excellent instrument for high accuracy analysis, experimental validation, and visualization. High-performance computing support offers the possibility of running simulations at large scale and in parallel, but the volume of data generated by these experiments creates a new challenge for Big Data Science. This paper presents existing computational methods for high energy physics (HEP) analyzed from two perspectives: numerical methods and high performance computing. The computational methods presented are Monte Carlo methods and simulations of HEP processes, Markovian Monte Carlo, unfolding methods in particle physics, kernel estimation in HEP, and Random Matrix Theory used in the analysis of particle spectra. All of these methods produce data-intensive applications, which introduce new challenges and requirements for ICT systems architecture, programming paradigms, and storage capabilities.
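
    As a minimal illustration of the Monte Carlo methods listed above (not code from the paper), the sketch below estimates a one-dimensional integral together with its statistical uncertainty, showing the 1/sqrt(N) error scaling that drives the data volumes in HEP simulation:

      # Monte Carlo estimate of I = integral_0^1 exp(-x^2) dx with its standard error.
      import numpy as np

      rng = np.random.default_rng(7)
      for n in (10**3, 10**5, 10**7):
          f = np.exp(-rng.random(n) ** 2)
          estimate = f.mean()
          stderr = f.std(ddof=1) / np.sqrt(n)           # shrinks like 1/sqrt(n)
          print(f"N={n:>8}:  I ~ {estimate:.6f} +/- {stderr:.6f}")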

  8. High performance computing and communications panel report

    Energy Technology Data Exchange (ETDEWEB)

    1992-12-01

    In FY92, a presidential initiative entitled High Performance Computing and Communications (HPCC) was launched, aimed at securing U.S. preeminence in high performance computing and related communication technologies. The stated goal of the initiative is threefold: extend U.S. technological leadership in high performance computing and computer communications; provide wide dissemination and application of the technologies; and spur gains in U.S. productivity and industrial competitiveness, all within the context of the mission needs of federal agencies. Because of the importance of the HPCC program to the national well-being, especially its potential implications for industrial competitiveness, the Assistant to the President for Science and Technology has asked that the President's Council of Advisors on Science and Technology (PCAST) establish a panel to advise PCAST on the strengths and weaknesses of the HPCC program. The report presents a program analysis based on strategy, balance, management, and vision. Both constructive recommendations for program improvement and positive reinforcement of successful program elements are contained within the report.

  9. A High-Performance Communication Service for Parallel Servo Computing

    Directory of Open Access Journals (Sweden)

    Cheng Xin

    2010-11-01

    Full Text Available The complexity of algorithms for servo control in multi-dimensional, ultra-precise stage applications has made multi-processor parallel computing technology necessary. Considering the specific communication requirements of parallel servo computing, we propose a communication service scheme based on the VME bus, which provides high-performance data transmission and precise synchronization-trigger support for the processors involved. The communication service is implemented on both the standard VME bus and a user-defined Internal Bus (IB), and can be redefined online. This paper introduces the parallel servo computing architecture and communication service, describes the structure and implementation details of each module in the service, and finally provides a data transmission model and analysis. Experimental results show that the communication service can provide high-speed data transmission with sub-nanosecond-level transmission-latency error, and synchronous triggering with nanosecond-level synchronization error. Moreover, the performance of the communication service is not affected by an increasing number of processors.

  10. The complexity of class polynomial computation via floating point approximations

    Science.gov (United States)

    Enge, Andreas

    2009-06-01

    We analyse the complexity of computing class polynomials, that are an important ingredient for CM constructions of elliptic curves, via complex floating point approximations of their roots. The heart of the algorithm is the evaluation of modular functions in several arguments. The fastest one of the presented approaches uses a technique devised by Dupont to evaluate modular functions by Newton iterations on an expression involving the arithmetic-geometric mean. Under the heuristic assumption, justified by experiments, that the correctness of the result is not perturbed by rounding errors, the algorithm runs in time $O\left(\sqrt{|D|}\,\log^3|D|\; M(\ldots)\right) \subseteq O\left(h^{2+\varepsilon}\right)$ for any $\varepsilon > 0$, where $D$ is the CM discriminant, $h$ is the degree of the class polynomial and $M(n)$ is the time needed to multiply two $n$-bit numbers. Up to logarithmic factors, this running time matches the size of the constructed polynomials. The estimate also relies on a new result concerning the complexity of enumerating the class group of an imaginary quadratic order and on a rigorously proven upper bound for the height of class polynomials.

  11. High-Precision Computation and Mathematical Physics

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, David H.; Borwein, Jonathan M.

    2008-11-03

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, Ising theory, quantum field theory and experimental mathematics. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
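
    One way to see why such facilities matter is a quantity that suffers catastrophic cancellation in 64-bit arithmetic. The sketch below uses the mpmath package, chosen here only as a convenient example of a high-precision library, to evaluate it at 16 and then 50 significant digits:

      import mpmath

      def cancellation(x):
          # (exp(x) - 1 - x) / x^2  ->  1/2 + x/6 + ...  for small x
          return (mpmath.exp(x) - 1 - x) / x**2

      mpmath.mp.dps = 16                       # roughly IEEE double precision
      print(cancellation(mpmath.mpf("1e-9")))  # dominated by rounding error
      mpmath.mp.dps = 50                       # 50 significant digits
      print(cancellation(mpmath.mpf("1e-9")))  # close to 0.5000000001666...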

  12. Optical interconnection networks for high-performance computing systems.

    Science.gov (United States)

    Biberman, Aleksandr; Bergman, Keren

    2012-04-01

    Enabled by silicon photonic technology, optical interconnection networks have the potential to be a key disruptive technology in computing and communication industries. The enduring pursuit of performance gains in computing, combined with stringent power constraints, has fostered the ever-growing computational parallelism associated with chip multiprocessors, memory systems, high-performance computing systems and data centers. Sustaining these parallelism growths introduces unique challenges for on- and off-chip communications, shifting the focus toward novel and fundamentally different communication approaches. Chip-scale photonic interconnection networks, enabled by high-performance silicon photonic devices, offer unprecedented bandwidth scalability with reduced power consumption. We demonstrate that the silicon photonic platforms have already produced all the high-performance photonic devices required to realize these types of networks. Through extensive empirical characterization in much of our work, we demonstrate such feasibility of waveguides, modulators, switches and photodetectors. We also demonstrate systems that simultaneously combine many functionalities to achieve more complex building blocks. We propose novel silicon photonic devices, subsystems, network topologies and architectures to enable unprecedented performance of these photonic interconnection networks. Furthermore, the advantages of photonic interconnection networks extend far beyond the chip, offering advanced communication environments for memory systems, high-performance computing systems, and data centers.

  13. PREFACE: High Performance Computing Symposium 2011

    Science.gov (United States)

    Talon, Suzanne; Mousseau, Normand; Peslherbe, Gilles; Bertrand, François; Gauthier, Pierre; Kadem, Lyes; Moitessier, Nicolas; Rouleau, Guy; Wittig, Rod

    2012-02-01

    HPCS (High Performance Computing Symposium) is a multidisciplinary conference that focuses on research involving High Performance Computing and its application. Attended by Canadian and international experts and renowned researchers in the sciences, all areas of engineering, the applied sciences, medicine and life sciences, mathematics, the humanities and social sciences, it is Canada's pre-eminent forum for HPC. The 25th edition was held in Montréal, at the Université du Québec à Montréal, from 15-17 June and focused on HPC in Medical Science. The conference was preceded by tutorials held at Concordia University, where 56 participants learned about HPC best practices, GPU computing, parallel computing, debugging and a number of high-level languages. 274 participants from six countries attended the main conference, which involved 11 invited and 37 contributed oral presentations, 33 posters, and an exhibit hall with 16 booths from our sponsors. The work that follows is a collection of papers presented at the conference covering HPC topics ranging from computer science to bioinformatics. They are divided here into four sections: HPC in Engineering, Physics and Materials Science, HPC in Medical Science, HPC Enabling to Explore our World and New Algorithms for HPC. We would once more like to thank the participants and invited speakers, the members of the Scientific Committee, the referees who spent time reviewing the papers and our invaluable sponsors. To hear the invited talks and learn about 25 years of HPC development in Canada visit the Symposium website: http://2011.hpcs.ca/lang/en/conference/keynote-speakers/ Enjoy the excellent papers that follow, and we look forward to seeing you in Vancouver for HPCS 2012! Gilles Peslherbe Chair of the Scientific Committee Normand Mousseau Co-Chair of HPCS 2011 Suzanne Talon Chair of the Organizing Committee UQAM Sponsors The PDF also contains photographs from the conference banquet.

  14. High Performance Computing in Science and Engineering '15 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2015. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  15. HIGH AND LOW RESOLUTION TEXTURED MODELS OF COMPLEX ARCHITECTURAL SURFACES

    Directory of Open Access Journals (Sweden)

    E. K. Stathopoulou

    2012-09-01

    Full Text Available During recent years it has become obvious that 3D technology, applied mainly with the use of terrestrial laser scanners (TLS), is the most suitable technique for the complete geometric documentation of complex objects, whether they are monuments or architectural constructions in general. However, it is rather a challenging task to convert an acquired point cloud into a realistic 3D polygonal model that can simultaneously satisfy high-resolution modeling and visualization demands. The aim of the visualization of a simple or complex object is to create a 3D model that best describes reality within the computer environment. This paper focuses on the visualization of a complex object's 3D model through both high- and low-resolution textured models. The object of interest for this study was the Almoina (Romanesque) Door of the Cathedral of Valencia in Spain.

  16. High and Low Resolution Textured Models of Complex Architectural Surfaces

    Science.gov (United States)

    Stathopoulou, E. K.; Valanis, A.; Lerma, J. L.; Georgopoulos, A.

    2011-09-01

    During recent years it has become obvious that 3D technology, applied mainly with the use of terrestrial laser scanners (TLS), is the most suitable technique for the complete geometric documentation of complex objects, whether they are monuments or architectural constructions in general. However, it is rather a challenging task to convert an acquired point cloud into a realistic 3D polygonal model that can simultaneously satisfy high-resolution modeling and visualization demands. The aim of the visualization of a simple or complex object is to create a 3D model that best describes reality within the computer environment. This paper focuses on the visualization of a complex object's 3D model through both high- and low-resolution textured models. The object of interest for this study was the Almoina (Romanesque) Door of the Cathedral of Valencia in Spain.

  17. On the computational complexity of sequence design problems

    Energy Technology Data Exchange (ETDEWEB)

    Hart, W.E. [Sandia National Labs., Albuquerque, NM (United States). Algorithms and Discrete Mathematics Dept.

    1996-12-31

    Inverse protein folding concerns the identification of an amino acid sequence that folds to a given structure. Sequence design problems attempt to avoid the apparent difficulty of inverse protein folding by defining an energy that can be minimized to find protein-like sequences. The authors evaluate the practical relevance of two sequence design problems by analyzing their computational complexity. They show that the canonical method of sequence design is intractable, and describe approximation algorithms for this problem. The authors also describe an efficient algorithm that exactly solves the grand canonical method. The analysis shows how sequence design problems can fail to reduce the difficulty of the inverse protein folding problem, and highlights the need to analyze these problems to evaluate their practical relevance.

  18. Monitoring SLAC High Performance UNIX Computing Systems

    Energy Technology Data Exchange (ETDEWEB)

    Lettsome, Annette K.; /Bethune-Cookman Coll. /SLAC

    2005-12-15

    Knowledge of the effectiveness and efficiency of computers is important when working with high performance systems. The monitoring of such systems is advantageous in order to foresee possible misfortunes or system failures. Ganglia is a software system designed for high performance computing systems to retrieve specific monitoring information. An alternative storage facility for Ganglia's collected data is needed since its default storage system, the round-robin database (RRD), struggles with data integrity. The creation of a script-driven MySQL database solves this dilemma. This paper describes the process followed in creating and implementing the MySQL database for use by Ganglia. Comparisons between data storage by the two databases are made using gnuplot and Ganglia's real-time graphical user interface.

  19. Atomic switch networks—nanoarchitectonic design of a complex system for natural computing

    Science.gov (United States)

    Demis, E. C.; Aguilera, R.; Sillin, H. O.; Scharnhorst, K.; Sandouk, E. J.; Aono, M.; Stieg, A. Z.; Gimzewski, J. K.

    2015-05-01

    Self-organized complex systems are ubiquitous in nature, and the structural complexity of these natural systems can be used as a model to design new classes of functional nanotechnology based on highly interconnected networks of interacting units. Conventional fabrication methods for electronic computing devices are subject to known scaling limits, confining the diversity of possible architectures. This work explores methods of fabricating a self-organized complex device known as an atomic switch network and discusses its potential utility in computing. Through a merger of top-down and bottom-up techniques guided by mathematical and nanoarchitectonic design principles, we have produced functional devices comprising nanoscale elements whose intrinsic nonlinear dynamics and memorization capabilities produce robust patterns of distributed activity and a capacity for nonlinear transformation of input signals when configured in the appropriate network architecture. Their operational characteristics represent a unique potential for hardware implementation of natural computation, specifically in the area of reservoir computing—a burgeoning field that investigates the computational aptitude of complex biologically inspired systems.

  20. Recent Developments in Complex Analysis and Computer Algebra

    CERN Document Server

    Kajiwara, Joji; Xu, Yongzhi

    1999-01-01

    This volume consists of papers presented in the special sessions on "Complex and Numerical Analysis", "Value Distribution Theory and Complex Domains", and "Use of Symbolic Computation in Mathematics Education" of the ISAAC'97 Congress held at the University of Delaware, during June 2-7, 1997. The ISAAC Congress coincided with a U.S.-Japan Seminar also held at the University of Delaware. The latter was supported by the National Science Foundation through Grant INT-9603029 and the Japan Society for the Promotion of Science through Grant MTCS-134. It was natural that the participants of both meetings should interact and consequently several persons attending the Congress also presented papers in the Seminar. The success of the ISAAC Congress and the U.S.-Japan Seminar has led to the ISAAC'99 Congress being held in Fukuoka, Japan during August 1999. Many of the same participants will return to this Seminar. Indeed, it appears that the spirit of the U.S.-Japan Seminar will be continued every second year as part of...

  1. Turing’s algorithmic lens: From computability to complexity theory

    Directory of Open Access Journals (Sweden)

    Díaz, Josep

    2013-12-01

    Full Text Available The decidability question, i.e., whether any mathematical statement could be computationally proven true or false, was raised by Hilbert and remained open until Turing answered it in the negative. Then, most efforts in theoretical computer science turned to complexity theory and the need to classify decidable problems according to their difficulty. Among others, the classes P (problems solvable in polynomial time) and NP (problems solvable in non-deterministic polynomial time) were defined, and one of the most challenging scientific quests of our days arose: whether P = NP. This still open question has implications not only in computer science, mathematics and physics, but also in biology, sociology and economics, and it can be seen as a direct consequence of Turing's way of looking through the algorithmic lens at different disciplines to discover how pervasive computation is. [Spanish abstract, translated:] The decidability question, that is, whether it is possible to prove computationally that a mathematical statement is true or false, was raised by Hilbert and remained open until Turing answered it in the negative. Once the undecidability of mathematics was established, efforts in theoretical computer science focused on studying the computational complexity of decidable problems. In this article we present a brief introduction to the classes P (problems solvable in polynomial time) and NP (problems solvable non-deterministically in polynomial time), while setting out the difficulty of establishing whether P = NP and the consequences that would follow if the two classes were equal. This question has implications not only in the fields of computer science, mathematics and physics, but also for biology, sociology and economics. The seminal idea behind the study of computational complexity is a direct consequence of the way in which Turing approached problems in different fields through the

  2. Detecting highly cyclic structure with complex eigenpairs

    CERN Document Server

    Klymko, Christine

    2016-01-01

    Many large, real-world complex networks have rich community structure that a network scientist seeks to understand. These communities may overlap or have intricate internal structure. Extracting communities with particular topological structure, even when they overlap with other communities, is a powerful capability that would provide novel avenues of focusing in on structure of interest. In this work we consider extracting highly-cyclic regions of directed graphs (digraphs). We demonstrate that embeddings derived from complex-valued eigenvectors associated with stochastic propagator eigenvalues near roots of unity are well-suited for this purpose. We prove several fundamental theoretic results demonstrating the connection between these eigenpairs and the presence of highly-cyclic structure and we demonstrate the use of these vectors on a few real-world examples.
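
    A small numerical illustration of the underlying idea (an assumed toy construction, not the paper's extraction procedure): a strongly weighted directed 3-cycle embedded in a sparse random digraph produces propagator eigenvalues close to the cube roots of unity, which is exactly the signature that the complex eigenvector embeddings exploit.

      import numpy as np

      rng = np.random.default_rng(3)
      n = 30
      A = (rng.random((n, n)) < 0.05).astype(float)   # sparse random background digraph
      A[0, 1] = A[1, 2] = A[2, 0] = 5.0               # strongly weighted 3-cycle on nodes 0, 1, 2
      np.fill_diagonal(A, 0.0)
      A[A.sum(axis=1) == 0, 0] = 1.0                  # patch dangling nodes

      P = A / A.sum(axis=1, keepdims=True)            # row-stochastic propagator
      eigvals = np.linalg.eigvals(P)

      for r in np.exp(2j * np.pi * np.arange(3) / 3): # cube roots of unity
          k = np.argmin(np.abs(eigvals - r))
          print(f"root {r:.2f} -> nearest eigenvalue {eigvals[k]:.3f}")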

  3. Computer vision for high content screening.

    Science.gov (United States)

    Kraus, Oren Z; Frey, Brendan J

    2016-01-01

    High Content Screening (HCS) technologies that combine automated fluorescence microscopy with high throughput biotechnology have become powerful systems for studying cell biology and drug screening. These systems can produce more than 100 000 images per day, making their success dependent on automated image analysis. In this review, we describe the steps involved in quantifying microscopy images and different approaches for each step. Typically, individual cells are segmented from the background using a segmentation algorithm. Each cell is then quantified by extracting numerical features, such as area and intensity measurements. As these feature representations are typically high dimensional (>500), modern machine learning algorithms are used to classify, cluster and visualize cells in HCS experiments. Machine learning algorithms that learn feature representations, in addition to the classification or clustering task, have recently advanced the state of the art on several benchmarking tasks in the computer vision community. These techniques have also recently been applied to HCS image analysis.
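
    A toy version of the quantification pipeline described above, assuming a synthetic fluorescence image, a global threshold and scipy.ndimage for labelling (real HCS pipelines use far more robust segmentation and hundreds of features per cell):

      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(5)
      img = rng.normal(0.1, 0.02, size=(256, 256))              # background noise
      yy, xx = np.mgrid[0:256, 0:256]
      for cy, cx in [(60, 70), (150, 180), (200, 90)]:          # three synthetic "cells"
          img += 0.8 * np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 12.0 ** 2))

      mask = img > 0.4                                          # crude global threshold
      labels, n_cells = ndimage.label(mask)                     # connected-component segmentation

      areas = ndimage.sum(mask, labels, index=range(1, n_cells + 1))
      mean_int = ndimage.mean(img, labels, index=range(1, n_cells + 1))
      for i, (a, m) in enumerate(zip(areas, mean_int), start=1):
          print(f"cell {i}: area={int(a)} px, mean intensity={m:.3f}")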

  4. The path toward HEP High Performance Computing

    Science.gov (United States)

    Apostolakis, John; Brun, René; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-06-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts to optimise HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a "High Performance" implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the ROOT and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on the development of a high-performance prototype for particle transport. Achieving a good concurrency level on the emerging parallel architectures without a complete redesign of the framework can only be done by parallelizing at event level, or with a much larger effort at track level. Apart from the shareable data structures, this typically implies a multiplication factor in memory consumption compared to the single-threaded version, together with sub-optimal handling of event-processing tails. Besides this, the low-level instruction pipelining of modern processors cannot be used efficiently to speed up the program. We have implemented a framework that allows scheduling vectors of particles to an arbitrary number of computing resources in a fine-grained parallel approach. The talk will review the current optimisation activities within the SFT group with a particular emphasis on the development perspectives towards a simulation framework able to profit best from

  5. Computing High Accuracy Power Spectra with Pico

    CERN Document Server

    Fendt, William A

    2007-01-01

    This paper presents the second release of Pico (Parameters for the Impatient COsmologist). Pico is a general purpose machine learning code which we have applied to computing the CMB power spectra and the WMAP likelihood. For this release, we have made improvements to the algorithm as well as the data sets used to train Pico, leading to a significant improvement in accuracy. For the 9 parameter nonflat case presented here Pico can on average compute the TT, TE and EE spectra to better than 1% of cosmic standard deviation for nearly all $\\ell$ values over a large region of parameter space. Performing a cosmological parameter analysis of current CMB and large scale structure data, we show that these power spectra give very accurate 1 and 2 dimensional parameter posteriors. We have extended Pico to allow computation of the tensor power spectrum and the matter transfer function. Pico runs about 1500 times faster than CAMB at the default accuracy and about 250,000 times faster at high accuracy. Training Pico can be...

  6. Low Complexity Performance Effective Task Scheduling Algorithm for Heterogeneous Computing Environments

    Directory of Open Access Journals (Sweden)

    E. Ilavarasan

    2007-01-01

    Full Text Available A heterogeneous computing environment is a suite of heterogeneous processors interconnected by high-speed networks, thereby promising high-speed processing of computationally intensive applications with diverse computing needs. Scheduling of an application modeled by a Directed Acyclic Graph (DAG) is a key issue when aiming at high performance in this kind of environment. The problem is generally addressed in terms of task scheduling, where tasks are the schedulable units of a program. Task scheduling problems have been shown to be NP-complete in general, as well as in several restricted cases. In this study we present a simple scheduling algorithm based on list scheduling, namely the low-complexity Performance Effective Task Scheduling (PETS) algorithm for heterogeneous computing systems, with complexity $O(e(p + \log v))$, which provides effective results for applications represented by DAGs. The analysis and experiments, based on both randomly generated graphs and graphs of some real applications, show that the PETS algorithm substantially outperforms the existing scheduling algorithms such as Heterogeneous Earliest Finish Time (HEFT), Critical-Path-On-a-Processor (CPOP) and Levelized Min Time (LMT), in terms of schedule length ratio, speedup, efficiency, running time and frequency of best results.
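
    The flavour of such list-scheduling heuristics can be sketched in a few lines: order the DAG tasks, then map each task to the processor giving it the earliest finish time, accounting for communication when a predecessor sits on another processor. The task graph and costs below are hypothetical, and the sketch is in the spirit of PETS/HEFT rather than a faithful implementation of either.

      # cost[t][p]: execution time of task t on processor p;  comm[(u, v)]: edge cost.
      cost = {"A": [3, 4], "B": [5, 3], "C": [4, 4], "D": [2, 5]}
      deps = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
      comm = {("A", "B"): 2, ("A", "C"): 1, ("B", "D"): 2, ("C", "D"): 3}

      order = ["A", "B", "C", "D"]             # any topological order of the DAG
      n_proc = 2
      ready = [0.0] * n_proc                   # time at which each processor becomes free
      finish, placed = {}, {}

      for t in order:
          best = None
          for p in range(n_proc):
              # the task may start only after its predecessors' data have arrived
              arrival = max([finish[u] + (0 if placed[u] == p else comm[(u, t)])
                             for u in deps[t]] or [0.0])
              start = max(ready[p], arrival)
              if best is None or start + cost[t][p] < best[0]:
                  best = (start + cost[t][p], p, start)
          finish[t], placed[t] = best[0], best[1]
          ready[best[1]] = best[0]
          print(f"task {t} -> P{best[1]}  start={best[2]:.1f}  finish={best[0]:.1f}")
      print("makespan:", max(finish.values()))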

  7. A primer on high-throughput computing for genomic selection.

    Science.gov (United States)

    Wu, Xiao-Lin; Beissinger, Timothy M; Bauck, Stewart; Woodward, Brent; Rosa, Guilherme J M; Weigel, Kent A; Gatti, Natalia de Leon; Gianola, Daniel

    2011-01-01

    High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl, and R, are also very useful for devising pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on central processors, performing general-purpose computation on a graphics processing unit provides a new-generation approach to massively parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin-Madison, which can be leveraged for genomic selection, in terms of central processing unit capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general-purpose computing environments will further expand our capability to meet the increasing computing demands posed by the unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of marker panels to realized
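
    The batch-processing idea can be sketched with a worker pool that scores selection candidates in parallel. The example below uses Python's multiprocessing as a stand-in for the cluster middleware discussed in the paper, with entirely synthetic marker effects and genotypes.

      import numpy as np
      from multiprocessing import Pool

      N_MARKERS = 10_000
      rng = np.random.default_rng(11)
      marker_effects = rng.normal(scale=0.01, size=N_MARKERS)   # assumed, pre-trained model

      def genomic_value(seed):
          """Predict one candidate's genetic merit from its (simulated) genotypes."""
          genotypes = np.random.default_rng(seed).integers(0, 3, size=N_MARKERS)
          return float(genotypes @ marker_effects)

      if __name__ == "__main__":
          candidates = range(200)
          with Pool(processes=4) as pool:                       # 4 workers in parallel
              merits = pool.map(genomic_value, candidates, chunksize=25)
          top = sorted(zip(merits, candidates), reverse=True)[:5]
          print("top candidates:", [(c, round(m, 3)) for m, c in top])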

  8. Integrated modeling tool for performance engineering of complex computer systems

    Science.gov (United States)

    Wright, Gary; Ball, Duane; Hoyt, Susan; Steele, Oscar

    1989-01-01

    This report summarizes Advanced System Technologies' accomplishments on the Phase 2 SBIR contract NAS7-995. The technical objectives of the report are: (1) to develop an evaluation version of a graphical, integrated modeling language according to the specification resulting from the Phase 2 research; and (2) to determine the degree to which the language meets its objectives by evaluating ease of use, utility of two sets of performance predictions, and the power of the language constructs. The technical approach followed to meet these objectives was to design, develop, and test an evaluation prototype of a graphical, performance prediction tool. The utility of the prototype was then evaluated by applying it to a variety of test cases found in the literature and in AST case histories. Numerous models were constructed and successfully tested. The major conclusion of this Phase 2 SBIR research and development effort is that complex, real-time computer systems can be specified in a non-procedural manner using combinations of icons, windows, menus, and dialogs. Such a specification technique provides an interface that system designers and architects find natural and easy to use. In addition, PEDESTAL's multiview approach provides system engineers with the capability to perform the trade-offs necessary to produce a design that meets timing performance requirements. Sample system designs analyzed during the development effort showed that models could be constructed in a fraction of the time required by non-visual system design capture tools.

  9. Evaluation of high-performance computing software

    Energy Technology Data Exchange (ETDEWEB)

    Browne, S.; Dongarra, J. [Univ. of Tennessee, Knoxville, TN (United States); Rowan, T. [Oak Ridge National Lab., TN (United States)

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations difficult to find elsewhere.

  11. Ultra-high resolution computed tomography imaging

    Energy Technology Data Exchange (ETDEWEB)

    Paulus, Michael J. (Knoxville, TN); Sari-Sarraf, Hamed (Knoxville, TN); Tobin, Jr., Kenneth William (Harriman, TN); Gleason, Shaun S. (Knoxville, TN); Thomas, Jr., Clarence E. (Knoxville, TN)

    2002-01-01

    A method for ultra-high resolution computed tomography imaging, comprising the steps of: focusing a high energy particle beam, for example x-rays or gamma-rays, onto a target object; acquiring a 2-dimensional projection data set representative of the target object; generating a corrected projection data set by applying a deconvolution algorithm, having an experimentally determined transfer function, to the 2-dimensional data set; storing the corrected projection data set; incrementally rotating the target object through an angle of approximately 180 degrees, and after each incremental rotation, repeating the radiating, acquiring, generating and storing steps; and, after the rotating step, applying a cone-beam algorithm, for example a modified tomographic reconstruction algorithm, to the corrected projection data sets to generate a 3-dimensional image. The size of the spot focus of the beam is reduced to not greater than approximately 1 micron, and even to not greater than approximately 0.5 microns.
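
    The projection-correction step can be illustrated with a toy one-dimensional Wiener deconvolution, assuming a Gaussian blur as the transfer function and a hand-picked regularization constant (the patent's transfer function is determined experimentally, and its algorithm may differ):

      import numpy as np

      n = 256
      x = np.arange(n)
      truth = ((x > 100) & (x < 130)).astype(float)          # sharp feature in the projection

      psf = np.exp(-0.5 * ((x - n // 2) / 3.0) ** 2)         # assumed Gaussian transfer function
      psf /= psf.sum()
      H = np.fft.fft(np.fft.ifftshift(psf))
      blurred = np.real(np.fft.ifft(np.fft.fft(truth) * H))
      blurred += np.random.default_rng(9).normal(scale=1e-3, size=n)

      reg = 1e-4                                             # regularization ~ noise-to-signal ratio
      wiener = np.conj(H) / (np.abs(H) ** 2 + reg)
      restored = np.real(np.fft.ifft(np.fft.fft(blurred) * wiener))
      print("mean |error|, blurred :", np.mean(np.abs(blurred - truth)))
      print("mean |error|, restored:", np.mean(np.abs(restored - truth)))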

  12. Visualization of complex processes in lipid systems using computer simulations and molecular graphics.

    Science.gov (United States)

    Telenius, Jelena; Vattulainen, Ilpo; Monticelli, Luca

    2009-01-01

    Computer simulation has become an increasingly popular tool in the study of lipid membranes, complementing experimental techniques by providing information on structure and dynamics at high spatial and temporal resolution. Molecular visualization is the most powerful way to represent the results of molecular simulations, and can be used to illustrate complex transformations of lipid aggregates more easily and more effectively than written text. In this chapter, we review some basic aspects of simulation methodologies commonly employed in the study of lipid membranes and we describe a few examples of complex phenomena that have been recently investigated using molecular simulations. We then explain how molecular visualization provides added value to computational work in the field of biological membranes, and we conclude by listing a few molecular graphics packages widely used in scientific publications.

  13. Computational probes into the basis of silver ion chromatography. II. Silver(I)-olefin complexes

    NARCIS (Netherlands)

    Kaneti, J.; Smet, de L.C.P.M.; Boom, R.M.; Zuilhof, H.; Sudhölter, E.J.R.

    2002-01-01

    Alkene complexes of silver(I) are studied by four computational methodologies: ab initio RHF, MP2, and MP4 computations, and density functional B3LYP computations, with a variety of all-electron and effective core potential basis sets. Methodological studies indicate that MP2/SBK(d) computations can

  14. Characterization of a titanium(IV)-porphyrin complex as a highly sensitive and selective reagent for the determination of hydrogen peroxide: a computational chemistry approach and a critical review.

    Science.gov (United States)

    Takamura, Kiyoko; Matsumoto, Takatoshi

    2008-06-01

    The Ti-TPyP reagent, i.e. an acidic aqueous solution of the oxo[5,10,15,20-tetra(4-pyridyl)porphyrinato] titanium(IV) complex, TiO(tpyp), was developed as a highly sensitive and selective spectrophotometric reagent for determination of traces of hydrogen peroxide. Using this reagent, determination of hydrogen peroxide was performed by flow-injection analysis with a detection limit of 0.5 pmol per test. The method was actually applied to determination of several constituents of foods, human blood, and urine mediated by appropriate oxidase enzymes. The reaction specificity of the TiO(tpyp) complex for hydrogen peroxide was clarified from the viewpoint of the reaction mechanisms and molecular orbitals based on ab initio calculations. The results provided a well-grounded argument for determination of hydrogen peroxide using the Ti-TPyP reagent experimentally. This review deals with characterization of the high sensitivity and reaction specificity of the Ti-TPyP reagent for determination of hydrogen peroxide, to prove its reliability in analytical applications.

  15. Interactive Computational Algorithms for Acoustic Simulation in Complex Environments

    Science.gov (United States)

    2015-07-19

    Related publications include: Heo, Ruo-Feng Tong, et al., VolCCD, ACM Transactions on Graphics (2011), doi: 10.1145/2019627.2019630; Charlie C. L. Wang and Dinesh Manocha, "GPU-based ...", IEEE Transactions on Visualization and Computer Graphics (TVCG); Jia Pan and Dinesh Manocha, GPU-based Parallel Collision Detection for Real-Time Motion Planning (2011); Jie-yi Zhao, Min Tang, Ruo-feng Tong, and Dinesh Manocha, GPU accelerated convex hull computation, Computers & Graphics (2012).

  16. Fostering complex learning-task performance through scripting student use of computer supported representational tools

    NARCIS (Netherlands)

    Slof, Bert; Erkens, Gijs; Kirschner, Paul A.; Janssen, Jeroen; Phielix, Chris

    2010-01-01

    Slof, B., Erkens, G., Kirschner, P. A., Janssen, J., & Phielix, C. (2010). Fostering complex learning-task performance through scripting student use of computer supported representational tools. Computers & Education, 55(4), 1707-1720.

  17. On the Computational Complexity of Sphere Decoder for Lattice Space-Time Coded MIMO Channel

    CERN Document Server

    Abediseid, Walid

    2011-01-01

    The exact complexity analysis of the basic sphere decoder for general space-time codes applied to the multi-input multi-output (MIMO) wireless channel is known to be difficult. In this work, we shed light on the computational complexity of sphere decoding for the quasi-static, LAttice Space-Time (LAST) coded MIMO channel. Specifically, we derive the asymptotic tail distribution of the decoder's computational complexity in the high signal-to-noise ratio (SNR) regime. For the uncoded $M\times N$ MIMO channel (e.g., V-BLAST), the analysis in [6] revealed that the tail distribution of such a decoder is of a Pareto type with tail exponent equal to $N-M+1$. In our analysis, we show that the tail exponent of the sphere decoder's complexity distribution is equal to the diversity-multiplexing tradeoff achieved by LAST coding and lattice decoding schemes. This allows the channel's tradeoff to be extended to include the decoding complexity. Moreover, we show analytically how minimum-mean square-error decisio...

  18. Implementing an Affordable High-Performance Computing for Teaching-Oriented Computer Science Curriculum

    Science.gov (United States)

    Abuzaghleh, Omar; Goldschmidt, Kathleen; Elleithy, Yasser; Lee, Jeongkyu

    2013-01-01

    With the advances in computing power, high-performance computing (HPC) platforms have had an impact not only on scientific research in advanced organizations but also on the computer science curriculum in the educational community. For example, multicore programming and parallel systems are highly desired courses in the computer science major. However,…

  20. Computational Proteomics: High-throughput Analysis for Systems Biology

    Energy Technology Data Exchange (ETDEWEB)

    Cannon, William R.; Webb-Robertson, Bobbie-Jo M.

    2007-01-03

    High-throughput (HTP) proteomics is a rapidly developing field that offers the global profiling of proteins from a biological system. The HTP technological advances are fueling a revolution in biology, enabling analyses at the scales of entire systems (e.g., whole cells, tumors, or environmental communities). However, simply identifying the proteins in a cell is insufficient for understanding the underlying complexity and operating mechanisms of the overall system. Systems level investigations are relying more and more on computational analyses, especially in the field of proteomics generating large-scale global data.

  1. Scalable resource management in high performance computers.

    Energy Technology Data Exchange (ETDEWEB)

    Frachtenberg, E. (Eitan); Petrini, F. (Fabrizio); Fernandez Peinador, J. (Juan); Coll, S. (Salvador)

    2002-01-01

    Clusters of workstations have emerged as an important platform for building cost-effective, scalable and highly-available computers. Although many hardware solutions are available today, the largest challenge in making large-scale clusters usable lies in the system software. In this paper we present STORM, a resource management tool designed to provide scalability, low overhead and the flexibility necessary to efficiently support and analyze a wide range of job scheduling algorithms. STORM achieves these feats by closely integrating the management daemons with the low-level features that are common in state-of-the-art high-performance system area networks. The architecture of STORM is based on three main technical innovations. First, a sizable part of the scheduler runs in the thread processor located on the network interface. Second, we use hardware collectives that are highly scalable both for implementing control heartbeats and to distribute the binary of a parallel job in near-constant time, irrespective of job and machine sizes. Third, we use an I/O bypass protocol that allows fast data movements from the file system to the communication buffers in the network interface and vice versa. The experimental results show that STORM can launch a job with a binary of 12MB on a 64 processor/32 node cluster in less than 0.25 sec on an empty network, in less than 0.45 sec when all the processors are busy computing other jobs, and in less than 0.65 sec when the network is flooded with a background traffic. This paper provides experimental and analytical evidence that these results scale to a much larger number of nodes. To the best of our knowledge, STORM is at least two orders of magnitude faster than existing production schedulers in launching jobs, performing resource management tasks and gang scheduling.

  2. Study on High-Speed Magnitude Approximation for Complex Vectors

    Institute of Scientific and Technical Information of China (English)

    陈建春; 杨万海; 许少英

    2003-01-01

    High-speed magnitude approximation algorithms for complex vectors are discussed in detail. The performance and convergence speed of these approximation algorithms are analyzed. For the polygon fitting algorithms, the approximation formula under the least mean square error criterion is derived. For the iterative algorithms, a modified CORDIC (coordinate rotation digital computer) algorithm is developed. This modified CORDIC algorithm is proved to have a maximum relative error of about one half that of the original CORDIC algorithm. Finally, the effects of finite register length on these algorithms are considered, showing that 9- to 12-bit coefficients are sufficient for practical applications.
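
    A common polygon-fitting form is the "alpha-max-plus-beta-min" approximation |z| ~ alpha*max(|Re z|, |Im z|) + beta*min(|Re z|, |Im z|). The sketch below derives a least-mean-square coefficient pair numerically and measures the resulting error; it is only an illustration of the idea, not the paper's derivation or its CORDIC variants.

      import numpy as np

      # Fit alpha, beta over one octant of the unit circle (enough by symmetry).
      theta = np.linspace(0.0, np.pi / 4, 10_001)
      big, small = np.cos(theta), np.sin(theta)          # max and min components on |z| = 1
      A = np.column_stack([big, small])
      (alpha, beta), *_ = np.linalg.lstsq(A, np.ones_like(theta), rcond=None)
      print(f"alpha = {alpha:.4f}, beta = {beta:.4f}")

      def approx_mag(z):
          a, b = np.abs(z.real), np.abs(z.imag)
          return alpha * np.maximum(a, b) + beta * np.minimum(a, b)

      rng = np.random.default_rng(2)
      z = rng.standard_normal(100_000) + 1j * rng.standard_normal(100_000)
      rel_err = np.abs(approx_mag(z) - np.abs(z)) / np.abs(z)
      print(f"max relative error: {rel_err.max():.2%}")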

  3. Cooperative task-oriented computing algorithms and complexity

    CERN Document Server

    Georgiou, Chryssis

    2011-01-01

    Cooperative network supercomputing is becoming increasingly popular for harnessing the power of the global Internet computing platform. A typical Internet supercomputer consists of a master computer or server and a large number of computers called workers, performing computation on behalf of the master. Despite the simplicity and benefits of a single-master approach, as the scale of such computing environments grows, it becomes unrealistic to assume the existence of an infallible master that is able to coordinate the activities of multitudes of workers. Large-scale distributed systems are inh

  4. High Performance Computing in Science and Engineering '02 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2003-01-01

    This book presents the state-of-the-art in modeling and simulation on supercomputers. Leading German research groups present their results achieved on high-end systems of the High Performance Computing Center Stuttgart (HLRS) for the year 2002. Reports cover all fields of supercomputing simulation, ranging from computational fluid dynamics to computer science. Special emphasis is given to industrially relevant applications. Moreover, by presenting results for both vector systems and microprocessor-based systems, the book allows readers to compare the performance levels and usability of a variety of supercomputer architectures. It therefore becomes an indispensable guidebook for assessing the impact of the Japanese Earth Simulator project on supercomputing in the years to come.

  5. A computational approach to modeling cellular-scale blood flow in complex geometry

    Science.gov (United States)

    Balogh, Peter; Bagchi, Prosenjit

    2017-04-01

    We present a computational methodology for modeling cellular-scale blood flow in arbitrary and highly complex geometry. Our approach is based on immersed-boundary methods, which allow modeling flows in arbitrary geometry while resolving the large deformation and dynamics of every blood cell with high fidelity. The present methodology seamlessly integrates different modeling components dealing with stationary rigid boundaries of complex shape, moving rigid bodies, and highly deformable interfaces governed by nonlinear elasticity. Thus it enables us to simulate 'whole' blood suspensions flowing through physiologically realistic microvascular networks that are characterized by multiple bifurcating and merging vessels, as well as geometrically complex lab-on-chip devices. The focus of the present work is on the development of a versatile numerical technique that is able to consider deformable cells and rigid bodies flowing in three-dimensional arbitrarily complex geometries over a diverse range of scenarios. After describing the methodology, a series of validation studies are presented against analytical theory, experimental data, and previous numerical results. Then, the capability of the methodology is demonstrated by simulating flows of deformable blood cells and heterogeneous cell suspensions in both physiologically realistic microvascular networks and geometrically intricate microfluidic devices. It is shown that the methodology can predict several complex microhemodynamic phenomena observed in vascular networks and microfluidic devices. The present methodology is robust and versatile, and has the potential to scale up to very large microvascular networks at organ levels.

  6. The path toward HEP High Performance Computing

    CERN Document Server

    Apostolakis, John; Carminati, Federico; Gheata, Andrei; Wenzel, Sandro

    2014-01-01

    High Energy Physics code has been known for making poor use of high performance computing architectures. Efforts in optimising HEP code on vector and RISC architectures have yielded limited results, and recent studies have shown that, on modern architectures, it achieves between 10% and 50% of peak performance. Although several successful attempts have been made to port selected codes to GPUs, no major HEP code suite has a 'High Performance' implementation. With LHC undergoing a major upgrade and a number of challenging experiments on the drawing board, HEP can no longer neglect the less-than-optimal performance of its code and has to try to make the best use of the hardware. This activity is one of the foci of the SFT group at CERN, which hosts, among others, the Root and Geant4 projects. The activity of the experiments is shared and coordinated via a Concurrency Forum, where the experience in optimising HEP code is presented and discussed. Another activity is the Geant-V project, centred on th...

  7. Slovak High School Students' Attitudes toward Computers

    Science.gov (United States)

    Kubiatko, Milan; Halakova, Zuzana; Nagyova, Sona; Nagy, Tibor

    2011-01-01

    The pervasive involvement of information and communication technologies and computers in our daily lives influences changes of attitude toward computers. We focused on finding these ecological effects in the differences in computer attitudes as a function of gender and age. A questionnaire with 34 Likert-type items was used in our research. The…

  8. High speed and large scale scientific computing

    CERN Document Server

    Gentzsch, W; Joubert, GR

    2010-01-01

    Over the years parallel technologies have completely transformed mainstream computing. This book deals with the issues related to the area of cloud computing and discusses developments in grids, applications and information processing, as well as e-science. It is suitable for computer scientists, IT engineers and IT managers.

  9. Efficient Quantification of Uncertainties in Complex Computer Code Results Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Propagation of parameter uncertainties through large computer models can be very resource intensive. Frameworks and tools for uncertainty quantification are...

  10. Computational RNA secondary structure design: empirical complexity and improved methods

    Directory of Open Access Journals (Sweden)

    Condon Anne

    2007-01-01

    Full Text Available Abstract Background We investigate the empirical complexity of the RNA secondary structure design problem, that is, the scaling of the typical difficulty of the design task for various classes of RNA structures as the size of the target structure is increased. The purpose of this work is to understand better the factors that make RNA structures hard to design for existing, high-performance algorithms. Such understanding provides the basis for improving the performance of one of the best algorithms for this problem, RNA-SSD, and for characterising its limitations. Results To gain insights into the practical complexity of the problem, we present a scaling analysis on random and biologically motivated structures using an improved version of the RNA-SSD algorithm, and also the RNAinverse algorithm from the Vienna package. Since primary structure constraints are relevant for designing RNA structures, we also investigate the correlation between the number and the location of the primary structure constraints when designing structures and the performance of the RNA-SSD algorithm. The scaling analysis on random and biologically motivated structures supports the hypothesis that the running time of both algorithms scales polynomially with the size of the structure. We also found that the algorithms are in general faster when constraints are placed only on paired bases in the structure. Furthermore, we prove that, according to the standard thermodynamic model, for some structures that the RNA-SSD algorithm was unable to design, there exists no sequence whose minimum free energy structure is the target structure. Conclusion Our analysis helps to better understand the strengths and limitations of both the RNA-SSD and RNAinverse algorithms, and suggests ways in which the performance of these algorithms can be further improved.
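
    As a rough illustration of the design task itself (not of RNA-SSD or RNAinverse), the sketch below performs a naive local search over sequences; the `fold` callable, assumed to return a minimum-free-energy structure in dot-bracket notation, and the single-base mutation strategy are placeholders for illustration only.

        import random

        BASES = "ACGU"

        def design_sequence(target, fold, max_steps=10000):
            # Start from a random sequence and keep single-base mutations that do not
            # increase the distance between the predicted structure and the target.
            seq = [random.choice(BASES) for _ in target]

            def distance(s):
                folded = fold("".join(s))  # assumed external structure predictor
                return sum(1 for a, b in zip(folded, target) if a != b)

            best = distance(seq)
            for _ in range(max_steps):
                if best == 0:
                    break
                i = random.randrange(len(seq))
                old = seq[i]
                seq[i] = random.choice(BASES.replace(old, ""))
                d = distance(seq)
                if d <= best:
                    best = d
                else:
                    seq[i] = old
            return "".join(seq), best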

  11. Opportunities and challenges of high-performance computing in chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Guest, M.F.; Kendall, R.A.; Nichols, J.A. [eds.] [and others

    1995-06-01

    The field of high-performance computing is developing at an extremely rapid pace. Massively parallel computers offering orders of magnitude increase in performance are under development by all the major computer vendors. Many sites now have production facilities that include massively parallel hardware. Molecular modeling methodologies (both quantum and classical) are also advancing at a brisk pace. The transition of molecular modeling software to a massively parallel computing environment offers many exciting opportunities, such as the accurate treatment of larger, more complex molecular systems in routine fashion, and a viable, cost-effective route to study physical, biological, and chemical `grand challenge` problems that are impractical on traditional vector supercomputers. This will have a broad effect on all areas of basic chemical science at academic research institutions and chemical, petroleum, and pharmaceutical industries in the United States, as well as chemical waste and environmental remediation processes. But, this transition also poses significant challenges: architectural issues (SIMD, MIMD, local memory, global memory, etc.) remain poorly understood and software development tools (compilers, debuggers, performance monitors, etc.) are not well developed. In addition, researchers that understand and wish to pursue the benefits offered by massively parallel computing are often hindered by lack of expertise, hardware, and/or information at their site. A conference and workshop organized to focus on these issues was held at the National Institute of Health, Bethesda, Maryland (February 1993). This report is the culmination of the organized workshop. The main conclusion: a drastic acceleration in the present rate of progress is required for the chemistry community to be positioned to exploit fully the emerging class of Teraflop computers, even allowing for the significant work to date by the community in developing software for parallel architectures.

  12. High Performance Computing in Science and Engineering '98 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    1999-01-01

    The book contains reports about the most significant projects from science and industry that are using the supercomputers of the Federal High Performance Computing Center Stuttgart (HLRS). These projects are from different scientific disciplines, with a focus on engineering, physics and chemistry. They were carefully selected in a peer-review process and are showcases for an innovative combination of state-of-the-art physical modeling, novel algorithms and the use of leading-edge parallel computer technology. As HLRS is in close cooperation with industrial companies, special emphasis has been put on the industrial relevance of results and methods.

  13. High Performance Computing in Science and Engineering '99 : Transactions of the High Performance Computing Center

    CERN Document Server

    Jäger, Willi

    2000-01-01

    The book contains reports about the most significant projects from science and engineering of the Federal High Performance Computing Center Stuttgart (HLRS). They were carefully selected in a peer-review process and are showcases of an innovative combination of state-of-the-art modeling, novel algorithms and the use of leading-edge parallel computer technology. The projects of HLRS are using supercomputer systems operated jointly by university and industry and therefore a special emphasis has been put on the industrial relevance of results and methods.

  14. High-Throughput Neuroimaging-Genetics Computational Infrastructure

    Directory of Open Access Journals (Sweden)

    Ivo D Dinov

    2014-04-01

    Full Text Available Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate and disseminate novel scientific methods, computational resources and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval and aggregation. Computational processing involves the necessary software, hardware and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical and phenotypic data and meta-data. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. A unique feature of this architecture is the Pipeline environment, which integrates the data management, processing, transfer and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize diverse suites of software tools and web-services. These pipeline workflows are represented as portable XML objects which transfer the execution instructions and user specifications from the client user machine to remote pipeline servers for distributed computing. Using Alzheimer’s and Parkinson’s data, we provide several examples of translational applications using this infrastructure.

  15. High-throughput neuroimaging-genetics computational infrastructure.

    Science.gov (United States)

    Dinov, Ivo D; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Hobel, Sam; Vespa, Paul; Woo Moon, Seok; Van Horn, John D; Franco, Joseph; Toga, Arthur W

    2014-01-01

    Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining, and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate, and disseminate novel scientific methods, computational resources, and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval, and aggregation. Computational processing involves the necessary software, hardware, and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical, and phenotypic data and meta-data. Data mining refers to the process of automatically extracting data features, characteristics and associations, which are not readily visible by human exploration of the raw dataset. Result interpretation includes scientific visualization, community validation of findings and reproducible findings. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. In addition, the institute provides a large number of software tools for image and shape analysis, mathematical modeling, genomic sequence processing, and scientific visualization. A unique feature of this architecture is the Pipeline environment, which integrates the data management, processing, transfer, and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize

  16. Computational Analyses of Complex Flows with Chemical Reactions

    Science.gov (United States)

    Bae, Kang-Sik

    Heat and mass transfer phenomena at the micro-scale have been studied numerically for three problems: drug mass transfer in a cylindrical matrix system, the simulation of oxygen/drug diffusion in a three-dimensional capillary network, and reduced chemical kinetic modeling of gas turbine combustion for Jet Propellant-10. For the numerical analysis of drug mass transfer in the cylindrical matrix system, the governing equations are derived from the Krogh cylinder model, in which a capillary supplies a surrounding cylinder of tissue along the distance from artery to vein. The ADI (Alternating Direction Implicit) scheme and the Thomas algorithm are applied to solve the nonlinear partial differential equations (PDEs). This study shows that the important factors affecting the drug penetration depth into the tissue are the mass diffusivity and the consumption of relevant species during the time allowed for diffusion into the brain tissue. Also, a computational fluid dynamics (CFD) model has been developed to simulate blood flow and oxygen/drug diffusion in a three-dimensional capillary network within the physiological range of a typical capillary. A three-dimensional geometry has been constructed to replicate the one studied by Secomb et al. (2000), and the computational framework features a non-Newtonian viscosity model for blood, an oxygen transport model including oxygen-hemoglobin dissociation and wall flux due to tissue absorption, as well as an ability to study the diffusion of drugs and other materials in the capillary streams. Finally, a chemical kinetic mechanism of JP-10 has been compiled and validated for a wide range of combustion regimes, covering pressures of 1 atm to 40 atm and temperatures of 1,200 K to 1,700 K; JP-10 is being studied as a possible jet propellant for the Pulse Detonation Engine (PDE) and other high-speed flight applications such as hypersonic
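
    The Thomas algorithm referred to above is the standard O(n) direct solver for the tridiagonal systems produced by each ADI half-step; the sketch below is a generic Python version under that assumption, not the specific discretization used in the study.

        def thomas_solve(a, b, c, d):
            # Solve a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i] for i = 0..n-1
            # (a[0] and c[n-1] are ignored) by forward elimination and back substitution.
            n = len(d)
            cp, dp = [0.0] * n, [0.0] * n
            cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
            for i in range(1, n):
                denom = b[i] - a[i] * cp[i - 1]
                cp[i] = c[i] / denom
                dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
            x = [0.0] * n
            x[-1] = dp[-1]
            for i in range(n - 2, -1, -1):
                x[i] = dp[i] - cp[i] * x[i + 1]
            return x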

  17. Scout: high-performance heterogeneous computing made simple

    Energy Technology Data Exchange (ETDEWEB)

    Jablin, James [Los Alamos National Laboratory; Mc Cormick, Patrick [Los Alamos National Laboratory; Herlihy, Maurice [BROWN UNIV.

    2011-01-26

    Researchers must often write their own simulation and analysis software. During this process they simultaneously confront both computational and scientific problems. Current strategies for aiding the generation of performance-oriented programs do not abstract the software development from the science. Furthermore, the problem is becoming increasingly complex and pressing with the continued development of many-core and heterogeneous (CPU-GPU) architectures. To achieve high performance, scientists must expertly navigate both software and hardware. Co-design between computer scientists and research scientists can alleviate but not solve this problem. The science community requires better tools for developing, optimizing, and future-proofing codes, allowing scientists to focus on their research while still achieving high computational performance. Scout is a parallel programming language and extensible compiler framework targeting heterogeneous architectures. It provides the abstraction required to buffer scientists from the constantly-shifting details of hardware while still realizing high performance by encapsulating software and hardware optimization within a compiler framework.

  18. Computational Redox Potential Predictions: Applications to Inorganic and Organic Aqueous Complexes, and Complexes Adsorbed to Mineral Surfaces

    Directory of Open Access Journals (Sweden)

    Krishnamoorthy Arumugam

    2014-04-01

    Full Text Available Applications of redox processes range over a number of scientific fields. This review article summarizes the theory behind the calculation of redox potentials in solution for species such as organic compounds, inorganic complexes, actinides, battery materials, and mineral surface-bound species. Different computational approaches to predict and determine redox potentials of electron transitions are discussed along with their respective pros and cons for the prediction of redox potentials. Subsequently, recommendations are made for certain necessary computational settings required for accurate calculation of redox potentials. This article reviews the importance of computational parameters, such as basis sets, density functional theory (DFT) functionals, and relativistic approaches and the role that physicochemical processes play in the shift of redox potentials, such as hydration or spin orbit coupling, and will aid in finding suitable combinations of approaches for different chemical and geochemical applications. Identifying cost-effective and credible computational approaches is essential to benchmark redox potential calculations against experiments. Once a good theoretical approach is found to model the chemistry and thermodynamics of the redox and electron transfer process, this knowledge can be incorporated into models of more complex reaction mechanisms that include diffusion in the solute, surface diffusion, and dehydration, to name a few. This knowledge is important to fully understand the nature of redox processes, be it a geochemical process that dictates natural redox reactions or one that is being used for the optimization of a chemical process in industry. In addition, it will help identify materials that will be useful to design catalytic redox agents, to come up with materials to be used for batteries and photovoltaic processes, and to identify new and improved remediation strategies in environmental engineering, for example the

  19. 77 FR 50726 - Software Requirement Specifications for Digital Computer Software and Complex Electronics Used in...

    Science.gov (United States)

    2012-08-22

    ... COMMISSION Software Requirement Specifications for Digital Computer Software and Complex Electronics Used in... Digital Computer Software and Complex Electronics used in Safety Systems of Nuclear Power Plants.'' The DG... National Standards Institute and Institute of Electrical and Electronics Engineers (ANSI/IEEE) Standard...

  20. Comparing Virtual and Physical Robotics Environments for Supporting Complex Systems and Computational Thinking

    Science.gov (United States)

    Berland, Matthew; Wilensky, Uri

    2015-01-01

    Both complex systems methods (such as agent-based modeling) and computational methods (such as programming) provide powerful ways for students to understand new phenomena. To understand how to effectively teach complex systems and computational content to younger students, we conducted a study in four urban middle school classrooms comparing…

  1. Automation of multi-agent control for complex dynamic systems in heterogeneous computational network

    Science.gov (United States)

    Oparin, Gennady; Feoktistov, Alexander; Bogdanova, Vera; Sidorov, Ivan

    2017-01-01

    The rapid progress of high-performance computing entails new challenges related to solving large scientific problems for various subject domains in a heterogeneous distributed computing environment (e.g., a network, Grid system, or Cloud infrastructure). Specialists in parallel and distributed computing pay special attention to the scalability of applications for problem solving. Effective management of scalable applications in a heterogeneous distributed computing environment is still a non-trivial issue, and control systems that operate in networks are especially affected by it. We propose a new approach to multi-agent management of scalable applications in a heterogeneous computational network. The fundamentals of our approach are the integrated use of conceptual programming, simulation modeling, network monitoring, multi-agent management, and service-oriented programming. We developed a special framework for automating problem solving. The advantages of the proposed approach are demonstrated on the example of parametric synthesis of a static linear regulator for complex dynamic systems. Benefits of the scalable application for this problem include automation of multi-agent control of the systems in parallel mode with various degrees of detail.

  2. Complexity of images: experimental and computational estimates compared.

    Science.gov (United States)

    Chikhman, Valeriy; Bondarko, Valeriya; Danilova, Marina; Goluzina, Anna; Shelepin, Yuri

    2012-01-01

    We tested whether visual complexity can be modeled through the use of parameters relevant to known mechanisms of visual processing. In psychophysical experiments observers ranked the complexity of two groups of stimuli: 15 unfamiliar Chinese hieroglyphs and 24 outline images of well-known common objects. To predict image complexity, we considered: (i) spatial characteristics of the images, (ii) spatial-frequency characteristics, (iii) a combination of spatial and Fourier properties, and (iv) the size of the image encoded as a JPEG file. For hieroglyphs the highest correlation was obtained when complexity was calculated as the product of the squared spatial-frequency median and the image area. This measure accounts for the larger number of lines, strokes, and local periodic patterns in the hieroglyphs. For outline objects the best predictor of the experimental data was complexity estimated as the number of turns in the image, as Attneave (1957 Journal of Experimental Psychology 53 221-227) obtained for his abstract outlined images. Other predictors of complexity gave significant but lower correlations with the experimental ranking. We conclude that our modeling measures can be used to estimate the complexity of visual images but for different classes of images different measures of complexity may be required.
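
    A sketch of the measure that best predicted the hieroglyph rankings (the squared spatial-frequency median multiplied by the image area); the power-weighted median and the lack of windowing below are assumptions about details the abstract does not specify.

        import numpy as np

        def hieroglyph_complexity(image):
            # image: 2-D grayscale array. Weight each spatial frequency by its spectral
            # power, take the median radial frequency, square it, and scale by the area.
            power = np.abs(np.fft.fft2(image)) ** 2
            h, w = image.shape
            fy = np.fft.fftfreq(h)[:, None]
            fx = np.fft.fftfreq(w)[None, :]
            radial = np.sqrt(fx ** 2 + fy ** 2).ravel()
            weights = power.ravel()
            order = np.argsort(radial)
            cumulative = np.cumsum(weights[order])
            median_freq = radial[order][np.searchsorted(cumulative, 0.5 * cumulative[-1])]
            return (median_freq ** 2) * (h * w)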

  3. Complex fluids in biological systems experiment, theory, and computation

    CERN Document Server

    2015-01-01

    This book serves as an introduction to the continuum mechanics and mathematical modeling of complex fluids in living systems. The form and function of living systems are intimately tied to the nature of surrounding fluid environments, which commonly exhibit nonlinear and history dependent responses to forces and displacements. With ever-increasing capabilities in the visualization and manipulation of biological systems, research on the fundamental phenomena, models, measurements, and analysis of complex fluids has taken a number of exciting directions. In this book, many of the world’s foremost experts explore key topics such as: Macro- and micro-rheological techniques for measuring the material properties of complex biofluids and the subtleties of data interpretation Experimental observations and rheology of complex biological materials, including mucus, cell membranes, the cytoskeleton, and blood The motility of microorganisms in complex fluids and the dynamics of active suspensions Challenges and solut...

  4. Efficient High Performance Computing on Heterogeneous Platforms

    NARCIS (Netherlands)

    Shen, J.

    2015-01-01

    Heterogeneous platforms are mixes of different processing units in a compute node (e.g., CPUs+GPUs, CPU+MICs) or a chip package (e.g., APUs). This type of platform keeps gaining popularity in various computer systems ranging from supercomputers to mobile devices. In this context, improving their

  5. Efficient High Performance Computing on Heterogeneous Platforms

    NARCIS (Netherlands)

    Shen, J.

    2015-01-01

    Heterogeneous platforms are mixes of different processing units in a compute node (e.g., CPUs+GPUs, CPU+MICs) or a chip package (e.g., APUs). This type of platform keeps gaining popularity in various computer systems ranging from supercomputers to mobile devices. In this context, improving their ef

  6. Software Synthesis for High Productivity Exascale Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bodik, Rastislav [Univ. of Washington, Seattle, WA (United States)

    2010-09-01

    Over the three years of our project, we accomplished three key milestones: We demonstrated how ideas from generative programming and software synthesis can help support the development of bulk-synchronous distributed memory kernels. These ideas are realized in a new language called MSL, a C-like language that combines synthesis features with high level notations for array manipulation and bulk-synchronous parallelism to simplify the semantic analysis required for synthesis. We also demonstrated that these high level notations map easily to low level C code and show that the performance of this generated code matches that of handwritten Fortran. Second, we introduced the idea of solver-aided domain-specific languages (SDSLs), which are an emerging class of computer-aided programming systems. SDSLs ease the construction of programs by automating tasks such as verification, debugging, synthesis, and non-deterministic execution. SDSLs are implemented by translating the DSL program into logical constraints. Next, we developed a symbolic virtual machine called Rosette, which simplifies the construction of such SDSLs and their compilers. We have used Rosette to build SynthCL, a subset of OpenCL that supports synthesis. Third, we developed novel numeric algorithms that move as little data as possible, either between levels of a memory hierarchy or between parallel processors over a network. We achieved progress in three aspects of this problem. First we determined lower bounds on communication. Second, we compared these lower bounds to widely used versions of these algorithms, and noted that these widely used algorithms usually communicate asymptotically more than is necessary. Third, we identified or invented new algorithms for most linear algebra problems that do attain these lower bounds, and demonstrated large speed-ups in theory and practice.

  7. Computer/information security design approaches for Complex 21/Reconfiguration facilities

    Energy Technology Data Exchange (ETDEWEB)

    Hunteman, W.J.; Zack, N.R. [Los Alamos National Lab., NM (United States). Safeguards Systems Group; Jaeger, C.D. [Sandia National Labs., Albuquerque, NM (United States). Surety/Dismantlement Dept.

    1993-12-31

    Los Alamos National Laboratory and Sandia National Laboratories have been designated the technical lead laboratories to develop the design of the computer/information security, safeguards, and physical security systems for all of the DOE Complex 21/Reconfiguration facilities. All of the automated information processing systems and networks in these facilities will be required to implement the new DOE orders on computer and information security. The planned approach for a highly integrated information processing capability in each of the facilities will require careful consideration of the requirements in DOE Orders 5639.6 and 1360.2A. The various information protection requirements and user clearances within the facilities will also have a significant effect on the design of the systems and networks. Fulfilling the requirements for proper protection of the information and compliance with DOE orders will be possible because the computer and information security concerns are being incorporated in the early design activities. This paper will discuss the computer and information security issues being addressed in the integrated design effort for the tritium, uranium/lithium, plutonium, plutonium storage, and high explosive/assembly facilities.

  8. Computer/information security design approaches for Complex 21/Reconfiguration facilities

    Energy Technology Data Exchange (ETDEWEB)

    Hunteman, W.J.; Zack, N.R. [Los Alamos National Lab., NM (United States); Jaeger, C.D. [Sandia National Labs., Albuquerque, NM (United States)

    1993-08-01

    Los Alamos National Laboratory and Sandia National Laboratories have been designated the technical lead laboratories to develop the design of the computer/information security, safeguards, and physical security systems for all of the DOE Complex 21/Reconfiguration facilities. All of the automated information processing systems and networks in these facilities will be required to implement the new DOE orders on computer and information security. The planned approach for a highly integrated information processing capability in each of the facilities will require careful consideration of the requirements in DOE Orders 5639.6 and 1360.2A. The various information protection requirements and user clearances within the facilities will also have a significant effect on the design of the systems and networks. Fulfilling the requirements for proper protection of the information and compliance with DOE orders will be possible because the computer and information security concerns are being incorporated in the early design activities. This paper will discuss the computer and information security issues being addressed in the integrated design effort for the tritium, uranium/lithium, plutonium, plutonium storage, and high explosive/assembly facilities.

  9. Field programmable gate array-assigned complex-valued computation and its limits

    Energy Technology Data Exchange (ETDEWEB)

    Bernard-Schwarz, Maria, E-mail: maria.bernardschwarz@ni.com [National Instruments, Ganghoferstrasse 70b, 80339 Munich (Germany); Institute of Applied Physics, TU Wien, Wiedner Hauptstrasse 8, 1040 Wien (Austria); Zwick, Wolfgang; Klier, Jochen [National Instruments, Ganghoferstrasse 70b, 80339 Munich (Germany); Wenzel, Lothar [National Instruments, 11500 N MOPac Expy, Austin, Texas 78759 (United States); Gröschl, Martin [Institute of Applied Physics, TU Wien, Wiedner Hauptstrasse 8, 1040 Wien (Austria)

    2014-09-15

    We discuss how leveraging Field Programmable Gate Array (FPGA) technology as part of a high performance computing platform reduces latency to meet the demanding real time constraints of a quantum optics simulation. Implementations of complex-valued operations using fixed-point numerics on a Virtex-5 FPGA compare favorably to more conventional solutions on a central processing unit. Our investigation explores the performance of multiple fixed-point options along with a traditional 64-bit floating-point version. With this information, the lowest execution times can be estimated. Relative error is examined to ensure simulation accuracy is maintained.
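
    For illustration, a small software model of the kind of fixed-point complex-valued operation that such an FPGA design maps to hardware; the Q-format with 12 fractional bits is an arbitrary assumption, not the configuration reported in the paper.

        def to_fixed(value, frac_bits=12):
            # Quantize a real number to a signed integer in Q-format.
            return int(round(value * (1 << frac_bits)))

        def fixed_complex_mul(ar, ai, br, bi, frac_bits=12):
            # (ar + j*ai) * (br + j*bi) on Q-format integers; the product is shifted
            # back so the result keeps the same number of fractional bits as the inputs.
            real = (ar * br - ai * bi) >> frac_bits
            imag = (ar * bi + ai * br) >> frac_bits
            return real, imag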

  10. High-performance Scientific Computing using Parallel Computing to Improve Performance Optimization Problems

    Directory of Open Access Journals (Sweden)

    Florica Novăcescu

    2011-10-01

    Full Text Available HPC (High Performance Computing) has become essential for accelerating innovation and assisting companies in creating new inventions, better models, and more reliable products, as well as obtaining processes and services at low cost. This paper focuses in particular on describing the field of high-performance scientific computing, parallel computing, scientific computing, parallel computers, and trends in the HPC field; the material presented here reveals important new directions toward the realization of a high-performance computational society. The practical part of the work is an example of using HPC tools to accelerate the solution of an electrostatic optimization problem with the Parallel Computing Toolbox, which allows solving computational and data-intensive problems using MATLAB and Simulink on multicore and multiprocessor computers.

  11. Computational Thinking and Practice - A Generic Approach to Computing in Danish High Schools

    DEFF Research Database (Denmark)

    Caspersen, Michael E.; Nowack, Palle

    2014-01-01

    Internationally, there is a growing awareness on the necessity of providing relevant computing education in schools, particularly high schools. We present a new and generic approach to Computing in Danish High Schools based on a conceptual framework derived from ideas related to computational thi...

  12. High Fidelity Adiabatic Quantum Computation via Dynamical Decoupling

    CERN Document Server

    Quiroz, Gregory

    2012-01-01

    We introduce high-order dynamical decoupling strategies for open system adiabatic quantum computation. Our numerical results demonstrate that a judicious choice of high-order dynamical decoupling method, in conjunction with an encoding which allows computation to proceed alongside decoupling, can dramatically enhance the fidelity of adiabatic quantum computation in spite of decoherence.

  13. Controlling Combinatorial Complexity in Software and Malware Behavior Computation

    Energy Technology Data Exchange (ETDEWEB)

    Pleszkoch, Mark G [ORNL; Linger, Richard C [ORNL

    2015-01-01

    Virtually all software is out of intellectual control in that no one knows its full behavior. Software Behavior Computation (SBC) is a new technology for understanding everything software does. SBC applies the mathematics of denotational semantics, implemented by function composition in Functional Trace Tables (FTTs), to compute the behavior of programs, expressed as disjoint cases of conditional concurrent assignments. In some circumstances, combinatorial explosions in the number of cases can occur when calculating the behavior of sequences of multiple branching structures. This paper describes computational methods that avoid combinatorial explosions. The predicates that control branching structures such as if-then-elses can be organized into three categories: 1) Independent, resulting in no behavior case explosion, 2) Coordinated, resulting in two behavior cases, or 3) Goal-oriented, with potential exponential growth in the number of cases. Traditional FTT-based behavior computation can be augmented by two additional computational methods, namely, Single-Value Function Abstractions (SVFAs) and, introduced in this paper, Relational Trace Tables (RTTs). These methods can be applied to the three predicate categories to avoid combinatorial growth in behavior cases while maintaining mathematical correctness.

  14. Computational protocol for predicting the binding affinities of zinc containing metalloprotein-ligand complexes.

    Science.gov (United States)

    Jain, Tarun; Jayaram, B

    2007-06-01

    Zinc is one of the most important metal ions found in proteins performing specific functions associated with life processes. Coordination geometry of the zinc ion in the active site of the metalloprotein-ligand complexes poses a challenge in determining ligand binding affinities accurately in structure-based drug design. We report here an all atom force field based computational protocol for estimating rapidly the binding affinities of zinc containing metalloprotein-ligand complexes, considering electrostatics, van der Waals, hydrophobicity, and loss in conformational entropy of protein side chains upon ligand binding along with a nonbonded approach to model the interactions of the zinc ion with all the other atoms of the complex. We examined the sensitivity of the binding affinity predictions to the choice of Lennard-Jones parameters, partial atomic charges, and dielectric treatments adopted for system preparation and scoring. The highest correlation obtained was R2 = 0.77 (r = 0.88) for the predicted binding affinity against the experiment on a heterogenous dataset of 90 zinc containing metalloprotein-ligand complexes consisting of five unique protein targets. Model validation and parameter analysis studies underscore the robustness and predictive ability of the scoring function. The high correlation obtained suggests the potential applicability of the methodology in designing novel ligands for zinc-metalloproteins. The scoring function has been web enabled for free access at www.scfbio-iitd.res.in/software/drugdesign/bapplz.jsp as BAPPL-Z server (Binding Affinity Prediction of Protein-Ligand complexes containing Zinc metal ions).

  15. A computational approach to handle complex microstructure geometries

    OpenAIRE

    Moës, Nicolas; Cloirec, Mathieu; Cartraud, Patrice; Remacle, Jean-François

    2003-01-01

    International audience; In multiscale analysis of components, there is usually a need to solve micro-structures with complex geometries. In this paper, we use the extended finite element method (X-FEM) to solve scales involving complex geometries. The X-FEM allows one to use meshes not necessarily matching the physical surface of the problem while retaining the accuracy of the classical finite element approach. For material interfaces, this is achieved by introducing a new enrichment strategy....

  16. Optimizing high performance computing workflow for protein functional annotation.

    Science.gov (United States)

    Stanberry, Larissa; Rekepalli, Bhanu; Liu, Yuan; Giblock, Paul; Higdon, Roger; Montague, Elizabeth; Broomall, William; Kolker, Natali; Kolker, Eugene

    2014-09-10

    Functional annotation of newly sequenced genomes is one of the major challenges in modern biology. With modern sequencing technologies, the protein sequence universe is rapidly expanding. Newly sequenced bacterial genomes alone contain over 7.5 million proteins. The rate of data generation has far surpassed that of protein annotation. The volume of protein data makes manual curation infeasible, whereas a high compute cost limits the utility of existing automated approaches. In this work, we present an improved and optimized automated workflow to enable large-scale protein annotation. The workflow uses high performance computing architectures and a low-complexity classification algorithm to assign proteins into existing clusters of orthologous groups of proteins. On the basis of the Position-Specific Iterative Basic Local Alignment Search Tool, the algorithm ensures at least 80% specificity and sensitivity of the resulting classifications. The workflow utilizes highly scalable parallel applications for classification and sequence alignment. Using Extreme Science and Engineering Discovery Environment supercomputers, the workflow processed 1,200,000 newly sequenced bacterial proteins. With the rapid expansion of the protein sequence universe, the proposed workflow will enable scientists to annotate big genome data.

  17. Parameterized Computation and Complexity: A New Approach Dealing with NP-Hardness

    Institute of Scientific and Technical Information of China (English)

    Jian-Er Chen

    2005-01-01

    The theory of parameterized computation and complexity is a recently developed subarea in theoretical computer science. The theory is aimed at practically solving a large number of computational problems that are theoretically intractable. The theory is based on the observation that many intractable computational problems in practice are associated with a parameter that varies within a small or moderate range. Therefore, by taking advantage of the small parameters, many theoretically intractable problems can be solved effectively and practically. On the other hand, the theory of parameterized computation and complexity has also offered powerful techniques that enable us to derive strong computational lower bounds for many computational problems, thus explaining why certain theoretically tractable problems cannot be solved effectively and practically. The theory of parameterized computation and complexity has found wide applications in areas such as database systems, programming languages, networks, VLSI design, parallel and distributed computing, computational biology, and robotics. This survey gives an overview of the fundamentals, algorithms, techniques, and applications developed in the research of parameterized computation and complexity. We will also report the most recent advances and excitements, and discuss further research directions in the area.
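
    The textbook illustration of this idea (not taken from the survey itself) is the bounded search tree for k-Vertex Cover, which runs in O(2^k · |E|) time and is therefore fixed-parameter tractable when the parameter k is small.

        def vertex_cover(edges, k):
            # Return a vertex cover of size at most k, or None if none exists. Every
            # cover must contain an endpoint of each edge, so branch on the two
            # endpoints of an arbitrary edge; the recursion depth is bounded by k.
            if not edges:
                return set()
            if k == 0:
                return None
            u, v = edges[0]
            for pick in (u, v):
                remaining = [(a, b) for (a, b) in edges if a != pick and b != pick]
                cover = vertex_cover(remaining, k - 1)
                if cover is not None:
                    return cover | {pick}
            return None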

  18. Analog computation through high-dimensional physical chaotic neuro-dynamics

    Science.gov (United States)

    Horio, Yoshihiko; Aihara, Kazuyuki

    2008-07-01

    Conventional von Neumann computers have difficulty in solving complex and ill-posed real-world problems. However, living organisms often face such problems in real life, and must quickly obtain suitable solutions through physical, dynamical, and collective computations involving vast assemblies of neurons. These highly parallel computations through high-dimensional dynamics (computation through dynamics) are completely different from the numerical computations on von Neumann computers (computation through algorithms). In this paper, we explore a novel computational mechanism with high-dimensional physical chaotic neuro-dynamics. We physically constructed two hardware prototypes using analog chaotic-neuron integrated circuits. These systems combine analog computations with chaotic neuro-dynamics and digital computation through algorithms. We used quadratic assignment problems (QAPs) as benchmarks. The first prototype utilizes an analog chaotic neural network with 800-dimensional dynamics. An external algorithm constructs a solution for a QAP using the internal dynamics of the network. In the second system, 300-dimensional analog chaotic neuro-dynamics drive a tabu-search algorithm. We demonstrate experimentally that both systems efficiently solve QAPs through physical chaotic dynamics. We also qualitatively analyze the underlying mechanism of the highly parallel and collective analog computations by observing global and local dynamics. Furthermore, we introduce spatial and temporal mutual information to quantitatively evaluate the system dynamics. The experimental results confirm the validity and efficiency of the proposed computational paradigm with the physical analog chaotic neuro-dynamics.
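
    For reference, the quadratic assignment objective that both prototypes minimize can be written out directly as below; the chaotic neuro-dynamics and tabu-search machinery of the paper is, of course, not captured by this sketch.

        def qap_cost(flow, dist, assignment):
            # Total cost of assigning facility i to location assignment[i]: the sum of
            # flow[i][j] * dist[assignment[i]][assignment[j]] over all facility pairs.
            n = len(assignment)
            return sum(flow[i][j] * dist[assignment[i]][assignment[j]]
                       for i in range(n) for j in range(n))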

  19. Computational neural networks driving complex analytical problem solving.

    Science.gov (United States)

    Hanrahan, Grady

    2010-06-01

    Neural network computing demonstrates advanced analytical problem solving abilities to meet the demands of modern chemical research. (To listen to a podcast about this article, please go to the Analytical Chemistry multimedia page at pubs.acs.org/page/ancham/audio/index.html .).

  20. Low-complexity computer simulation of multichannel room impulse responses

    NARCIS (Netherlands)

    Martínez Castañeda, J.A.

    2013-01-01

    The "telephone'' model has been, for the last one hundred thirty years, the base of modern telecommunications with virtually no changes in its fundamental concept. The arise of smaller and more powerful computing devices have opened new possibilities. For example, to build systems able to give to th

  1. Low-complexity computer simulation of multichannel room impulse responses

    NARCIS (Netherlands)

    Martínez Castañeda, J.A.

    2013-01-01

    The "telephone'' model has been, for the last one hundred thirty years, the base of modern telecommunications with virtually no changes in its fundamental concept. The arise of smaller and more powerful computing devices have opened new possibilities. For example, to build systems able to give to

  2. Synthesis and spectroscopic characterization of high-spin mononuclear iron(II) p-semiquinonate complexes.

    Science.gov (United States)

    Baum, Amanda E; Park, Heaweon; Lindeman, Sergey V; Fiedler, Adam T

    2014-12-01

    Two mononuclear iron(II) p-semiquinonate (pSQ) complexes have been generated via one-electron reduction of precursor complexes containing a substituted 1,4-naphthoquinone ligand. Detailed spectroscopic and computational analysis confirmed the presence of a coordinated pSQ radical ferromagnetically coupled to the high-spin Fe(II) center. The complexes are intended to model electronic interactions between (semi)quinone and iron cofactors in biology.

  3. Synthesis and Spectroscopic Characterization of High-Spin Mononuclear Iron(II) p-Semiquinonate Complexes

    OpenAIRE

    Baum, Amanda E.; Park, Heaweon; Lindeman, Sergey V.; Fiedler, Adam T.

    2014-01-01

    Two mononuclear iron(II) p-semiquinonate (pSQ) complexes have been generated via one-electron reduction of precursor complexes containing a substituted 1,4-naphthoquinone ligand. Detailed spectroscopic and computational analysis confirmed the presence of a coordinated pSQ radical ferromagnetically coupled to the high-spin FeII center. The complexes are intended to model electronic interactions between (semi)quinone and iron cofactors in biology.

  4. A new entropy based method for computing software structural complexity

    CERN Document Server

    Roca, J L

    2002-01-01

    In this paper a new methodology for the evaluation of software structural complexity is described. It is based on the entropy evaluation of the random uniform response function associated with the so-called software characteristic function SCF. The behavior of the SCF for different software structures and its relationship with the number of inherent errors is investigated. It is also investigated how the entropy concept can be used to evaluate the complexity of a software structure, considering the SCF as a canonical representation of the graph associated with the control flow diagram. The functions, parameters and algorithms that allow one to carry out this evaluation are also introduced. After this analytic phase follows the experimental phase, verifying the consistency of the proposed metric and its boundary conditions. The conclusion is that the degree of software structural complexity can be measured as the entropy of the random uniform response function of the SCF. That entropy is in direct relation...
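
    The entropy at the core of the proposed metric is the ordinary Shannon entropy of the response-function distribution; a minimal computation, with the distribution itself taken as given, is shown below.

        import math

        def shannon_entropy(probabilities):
            # Shannon entropy (in bits) of a discrete distribution; zero-probability
            # outcomes contribute nothing and are skipped.
            return -sum(p * math.log2(p) for p in probabilities if p > 0)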

  5. Federal Plan for High-End Computing

    Data.gov (United States)

    Networking and Information Technology Research and Development, Executive Office of the President — Since the World War II era, when scientists, mathematicians, and engineers began using revolutionary electronic machinery that could rapidly perform complex...

  6. A Component Architecture for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Bernholdt, D E; Elwasif, W R; Kohl, J A; Epperly, T G W

    2003-01-21

    The Common Component Architecture (CCA) provides a means for developers to manage the complexity of large-scale scientific software systems and to move toward a "plug and play" environment for high-performance computing. The CCA model allows for a direct connection between components within the same process to maintain performance on inter-component calls. It is neutral with respect to parallelism, allowing components to use whatever means they desire to communicate within their parallel "cohort." We will discuss in detail the importance of performance in the design of the CCA and will analyze the performance costs associated with features of the CCA.

  7. Towards the global complexity, topology and chaos in modelling, simulation and computation

    CERN Document Server

    Meyer, D A

    1997-01-01

    Topological effects produce chaos in multiagent simulation and distributed computation. We explain this result by developing three themes concerning complex systems in the natural and social sciences: (i) Pragmatically, a system is complex when it is represented efficiently by different models at different scales. (ii) Nontrivial topology, identifiable as we scale towards the global, induces complexity in this sense. (iii) Complex systems with nontrivial topology are typically chaotic.

  8. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  9. Proceedings CSR 2010 Workshop on High Productivity Computations

    CERN Document Server

    Ablayev, Farid; Vasiliev, Alexander; 10.4204/EPTCS.52

    2011-01-01

    This volume contains the proceedings of the Workshop on High Productivity Computations (HPC 2010) which took place on June 21-22 in Kazan, Russia. This workshop was held as a satellite workshop of the 5th International Computer Science Symposium in Russia (CSR 2010). HPC 2010 was intended to organize the discussions about high productivity computing means and models, including but not limited to high performance and quantum information processing.

  10. HIGH PERFORMANCE PHOTOGRAMMETRIC PROCESSING ON COMPUTER CLUSTERS

    Directory of Open Access Journals (Sweden)

    V. N. Adrov

    2012-07-01

    Full Text Available Most CPU-consuming tasks in photogrammetric processing can be done in parallel. The algorithms take independent bits as input and produce independent bits as output. The independence of bits comes from the nature of such algorithms, since images, stereopairs, or small image block parts can be processed independently. Many photogrammetric algorithms are fully automatic and do not require human interference. Photogrammetric workstations can perform tie point measurements, DTM calculations, orthophoto construction, mosaicing, and many other service operations in parallel using distributed calculations. Distributed calculations save time, reducing calculations that would take several days to several hours. Modern trends in computer technology show an increase in the number of CPU cores in workstations, higher speeds in local networks, and, as a result, a drop in the price of supercomputers or computer clusters that can contain hundreds or even thousands of computing nodes. Common distributed processing in DPW is usually targeted at interactive work with a limited number of CPU cores and is not optimized for centralized administration. The bottleneck of common distributed computing in photogrammetry can be the limited LAN throughput and storage performance, since the processing of huge amounts of large raster images is needed.

  11. High-Performance Cloud Computing: A View of Scientific Applications

    CERN Document Server

    Vecchiola, Christian; Buyya, Rajkumar

    2009-01-01

    Scientific computing often requires the availability of a massive number of computers for performing large scale experiments. Traditionally, these needs have been addressed by using high-performance computing solutions and installed facilities such as clusters and supercomputers, which are difficult to set up, maintain, and operate. Cloud computing provides scientists with a completely new model of utilizing the computing infrastructure. Compute resources, storage resources, as well as applications, can be dynamically provisioned (and integrated within the existing infrastructure) on a pay-per-use basis. These resources can be released when they are no longer needed. Such services are often offered within the context of a Service Level Agreement (SLA), which ensures the desired Quality of Service (QoS). Aneka, an enterprise Cloud computing solution, harnesses the power of compute resources by relying on private and public Clouds and delivers to users the desired QoS. Its flexible and service-based infrastructure...

  12. Design of magnetic coordination complexes for quantum computing.

    Science.gov (United States)

    Aromí, Guillem; Aguilà, David; Gamez, Patrick; Luis, Fernando; Roubeau, Olivier

    2012-01-21

    A very exciting prospect in coordination chemistry is to manipulate spins within magnetic complexes for the realization of quantum logic operations. An introduction to the requirements for a paramagnetic molecule to act as a 2-qubit quantum gate is provided in this tutorial review. We propose synthetic methods aimed at accessing such type of functional molecules, based on ligand design and inorganic synthesis. Two strategies are presented: (i) the first consists in targeting molecules containing a pair of well-defined and weakly coupled paramagnetic metal aggregates, each acting as a carrier of one potential qubit, (ii) the second is the design of dinuclear complexes of anisotropic metal ions, exhibiting dissimilar environments and feeble magnetic coupling. The first systems obtained from this synthetic program are presented here and their properties are discussed.

  13. Resampling Algorithms for Particle Filters: A Computational Complexity Perspective

    Directory of Open Access Journals (Sweden)

    Miodrag Bolić

    2004-11-01

    Full Text Available Newly developed resampling algorithms for particle filters suitable for real-time implementation are described and their analysis is presented. The new algorithms reduce the complexity of both hardware and DSP realization by addressing common issues such as reducing the number of operations and memory accesses. Moreover, the algorithms allow for the use of higher sampling frequencies by overlapping the resampling step in time with the other particle filtering steps. Since resampling is not dependent on any particular application, the analysis is appropriate for all types of particle filters that use resampling. The performance of the algorithms is evaluated on particle filters applied to bearings-only tracking and to joint detection and estimation in wireless communications. We have demonstrated that the proposed algorithms reduce the complexity without performance degradation.
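
    For reference, the baseline systematic resampler on which such low-complexity variants build can be written in a few lines; this is a minimal NumPy sketch, not the paper's hardware-oriented algorithms:

        import numpy as np

        def systematic_resample(weights: np.ndarray) -> np.ndarray:
            # One uniform draw, then a deterministic stride of 1/N:
            # O(N), single pass, and well suited to streaming hardware.
            n = len(weights)
            positions = (np.random.uniform() + np.arange(n)) / n
            cumulative = np.cumsum(weights)
            cumulative[-1] = 1.0  # guard against floating-point round-off
            return np.searchsorted(cumulative, positions)

        weights = np.random.rand(1000)
        weights /= weights.sum()
        indices = systematic_resample(weights)  # particles[indices] is the new set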

  14. A computational framework for modeling targets as complex adaptive systems

    Science.gov (United States)

    Santos, Eugene; Santos, Eunice E.; Korah, John; Murugappan, Vairavan; Subramanian, Suresh

    2017-05-01

    Modeling large military targets is a challenge as they can be complex systems encompassing myriad combinations of human, technological, and social elements that interact, leading to complex behaviors. Moreover, such targets have multiple components and structures, extending across multiple spatial and temporal scales, and are in a state of change, either in response to events in the environment or changes within the system. Complex adaptive system (CAS) theory can help in capturing the dynamism, interactions, and more importantly various emergent behaviors, displayed by the targets. However, a key stumbling block is incorporating information from various intelligence, surveillance and reconnaissance (ISR) sources, while dealing with the inherent uncertainty, incompleteness and time criticality of real world information. To overcome these challenges, we present a probabilistic reasoning network based framework called complex adaptive Bayesian Knowledge Base (caBKB). caBKB is a rigorous, overarching and axiomatic framework that models two key processes, namely information aggregation and information composition. While information aggregation deals with the union, merger and concatenation of information and takes into account issues such as source reliability and information inconsistencies, information composition focuses on combining information components where such components may have well defined operations. Since caBKBs can explicitly model the relationships between information pieces at various scales, it provides unique capabilities such as the ability to de-aggregate and de-compose information for detailed analysis. Using a scenario from the Network Centric Operations (NCO) domain, we will describe how our framework can be used for modeling targets with a focus on methodologies for quantifying NCO performance metrics.

  15. Design, synthesis and study of coordination complexes for quantum computing

    OpenAIRE

    Aguilà Avilés, David

    2013-01-01

    This thesis presents different strategies for the design of molecular complexes that meet the requirements for use as two-qubit quantum gates. The approaches followed towards the preparation of potential qubit systems focus on the synthesis of ligands with β-diketone coordination units, which are very versatile for the design of metallocluster assemblies. One of the main advantages of using this kind of ligand is that they can be easily prepared through simple Claisen ...

  16. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2013-01-01

    Contemporary High Performance Computing: From Petascale toward Exascale focuses on the ecosystems surrounding the world's leading centers for high performance computing (HPC). It covers many of the important factors involved in each ecosystem: computer architectures, software, applications, facilities, and sponsors. The first part of the book examines significant trends in HPC systems, including computer architectures, applications, performance, and software. It discusses the growth from terascale to petascale computing and the influence of the TOP500 and Green500 lists. The second part of the

  17. Multiple spectral kernel learning and a gaussian complexity computation.

    Science.gov (United States)

    Reyhani, Nima

    2013-07-01

    Multiple kernel learning (MKL) partially solves the kernel selection problem in support vector machines and similar classifiers by minimizing the empirical risk over a subset of the linear combinations of given kernel matrices. For large sample sets, the size of the kernel matrices becomes a numerical issue. In many cases, the kernel matrix is of low effective rank; however, the low-rank property is not efficiently utilized in MKL algorithms. Here, we suggest multiple spectral kernel learning, which efficiently uses the low-rank property by finding a kernel matrix from a set of Gram matrices of a few eigenvectors from all given kernel matrices, called a spectral kernel set. We provide a new bound for the Gaussian complexity of the proposed kernel set, which depends on both the geometry of the kernel set and the number of Gram matrices. This characterization of the complexity implies that in an MKL setting, adding more kernels may not monotonically increase the complexity, while previous bounds suggest otherwise.
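
    A minimal sketch of how such a spectral kernel set could be assembled with NumPy, under the assumption that rank-one Gram matrices of the leading eigenvectors are the set members (the MKL weight optimization itself is not shown):

        import numpy as np

        def spectral_kernel_set(kernels, m=3):
            # Rank-one Gram matrices u u^T built from the m leading
            # eigenvectors of each given kernel matrix.
            basis = []
            for K in kernels:
                _, vecs = np.linalg.eigh(K)   # eigenvalues in ascending order
                for u in vecs[:, -m:].T:      # the m leading eigenvectors
                    basis.append(np.outer(u, u))
            return basis

        X = np.random.rand(50, 4)
        k_lin = X @ X.T                                           # linear kernel
        k_rbf = np.exp(-np.sum((X[:, None] - X[None]) ** 2, -1))  # RBF kernel
        basis = spectral_kernel_set([k_lin, k_rbf])
        K = sum(basis) / len(basis)   # uniform weights in place of learned ones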

  18. Intro - High Performance Computing for 2015 HPC Annual Report

    Energy Technology Data Exchange (ETDEWEB)

    Klitsner, Tom [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    The recent Executive Order creating the National Strategic Computing Initiative (NSCI) recognizes the value of high performance computing for economic competitiveness and scientific discovery and commits to accelerated delivery of exascale computing. The HPC programs at Sandia (the NNSA ASC program and Sandia's Institutional HPC Program) are focused on ensuring that Sandia has the resources necessary to deliver computation in the national interest.

  19. A high-throughput bioinformatics distributed computing platform

    OpenAIRE

    Keane, Thomas M; Page, Andrew J.; McInerney, James O; Naughton, Thomas J.

    2005-01-01

    In the past several years, the demand for high performance computing has greatly increased in the area of bioinformatics. The huge increase in the size of many genomic databases means that many common tasks in bioinformatics cannot be completed in a reasonable amount of time on a single processor. Recently, distributed computing has emerged as an inexpensive alternative to dedicated parallel computing. We have developed a general-purpose distributed computing platform ...

  20. Fusion in computer vision: understanding complex visual content

    CERN Document Server

    Ionescu, Bogdan; Piatrik, Tomas

    2014-01-01

    This book presents a thorough overview of fusion in computer vision, from an interdisciplinary and multi-application viewpoint, describing successful approaches, evaluated in the context of international benchmarks that model realistic use cases. Features: examines late fusion approaches for concept recognition in images and videos; describes the interpretation of visual content by incorporating models of the human visual system with content understanding methods; investigates the fusion of multi-modal features of different semantic levels, as well as results of semantic concept detections, fo

  1. Progress and Challenges in High Performance Computer Technology

    Institute of Scientific and Technical Information of China (English)

    Xue-Jun Yang; Yong Dou; Qing-Feng Hu

    2006-01-01

    High performance computers provide strategic computing power in the construction of the national economy and defense, and have become one of the symbols of a country's overall strength. Over the past 30 years, with government support, high performance computer technology has developed rapidly: computing performance has increased nearly 3 million times and the number of processors has grown over a million times. To solve the critical issues related to parallel efficiency and scalability, scientific researchers have pursued extensive theoretical studies and technical innovations. The paper briefly looks back at the course of building high performance computer systems both at home and abroad, and summarizes the significant breakthroughs in international high performance computer technology. We also review China's technological progress in parallel computer architecture, parallel operating systems and resource management, parallel compilers and performance optimization, and environments for parallel programming and network computing. Finally, we examine the challenging issues of the "memory wall", system scalability, and the "power wall", and discuss high productivity computers, the trend in building next-generation high performance computers.

  2. Infeasible-interior-point algorithm for a class of nonmonotone complementarity problems and its computational complexity

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper presents an infeasible-interior-point algorithm for a class of nonmonotone complementarity problems, and analyses its convergence and computational complexity. The results indicate that the proposed algorithm is a polynomial-time one.

  3. Prediction and Uncertainty in Computational Modeling of Complex Phenomena: A Whitepaper

    Energy Technology Data Exchange (ETDEWEB)

    Trucano, T.G.

    1999-01-20

    This report summarizes some challenges associated with the use of computational science to predict the behavior of complex phenomena. As such, the document is a compendium of ideas that have been generated by various staff at Sandia. The report emphasizes key components of the use of computational science to predict complex phenomena, including computational complexity and correctness of implementations, the nature of the comparison with data, the importance of uncertainty quantification in comprehending what the prediction is telling us, and the role of risk in making and using computational predictions. Both broad and more narrowly focused technical recommendations for research are given. Several computational problems are summarized to help illustrate the issues emphasized. The tone of the report is informal, with virtually no mathematics. However, we have attempted to provide a useful bibliography that will assist the interested reader in pursuing the content of this report in greater depth.

  4. Research Activity in Computational Physics utilizing High Performance Computing: Co-authorship Network Analysis

    Science.gov (United States)

    Ahn, Sul-Ah; Jung, Youngim

    2016-10-01

    The research activities of computational physicists utilizing high performance computing are analyzed by bibliometric approaches. This study aims at providing computational physicists utilizing high-performance computing, as well as policy planners, with useful bibliometric results for assessing research activities. To this end, we carried out a co-authorship network analysis of journal articles as a case study of research activity in high-performance computational physics. We used journal articles from Elsevier's Scopus database covering the period 2004-2013, and ranked authors in the field by the number of papers published during those ten years. Finally, we drew the co-authorship network for the 45 top authors and their coauthors, and described some features of the network in relation to author rank. Suggestions for further studies are discussed.
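
    The core construction, a weighted co-authorship graph built from paper author lists with authors ranked by weighted degree, is compact; a minimal sketch assuming the networkx library, with hypothetical author lists:

        import itertools
        import networkx as nx

        papers = [                      # hypothetical author lists
            ["Kim", "Lee", "Park"],
            ["Kim", "Ahn"],
            ["Lee", "Park"],
        ]
        G = nx.Graph()
        for authors in papers:
            for a, b in itertools.combinations(authors, 2):
                # accumulate a co-authorship weight per author pair
                w = G.get_edge_data(a, b, {"weight": 0})["weight"]
                G.add_edge(a, b, weight=w + 1)

        rank = sorted(G.degree(weight="weight"), key=lambda kv: -kv[1])
        print(rank)   # authors ordered by weighted co-authorship degree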

  5. Cognitive engineering models: A prerequisite to the design of human-computer interaction in complex dynamic systems

    Science.gov (United States)

    Mitchell, Christine M.

    1993-01-01

    This chapter examines a class of human-computer interaction applications, specifically the design of human-computer interaction for the operators of complex systems. Such systems include space systems (e.g., manned systems such as the Shuttle or space station, and unmanned systems such as NASA scientific satellites), aviation systems (e.g., the flight deck of 'glass cockpit' airplanes or air traffic control) and industrial systems (e.g., power plants, telephone networks, and sophisticated, e.g., 'lights out,' manufacturing facilities). The main body of human-computer interaction (HCI) research complements but does not directly address the primary issues involved in human-computer interaction design for operators of complex systems. Interfaces to complex systems are somewhat special. The 'user' in such systems - i.e., the human operator responsible for safe and effective system operation - is highly skilled, someone who in human-machine systems engineering is sometimes characterized as 'well trained, well motivated'. The 'job' or task context is paramount and, thus, human-computer interaction is subordinate to human job interaction. The design of human interaction with complex systems, i.e., the design of human job interaction, is sometimes called cognitive engineering.

  6. Stochastic Computational Approach for Complex Nonlinear Ordinary Differential Equations

    Institute of Scientific and Technical Information of China (English)

    Junaid Ali Khan; Muhammad Asif Zahoor Raja; Ijaz Mansoor Qureshi

    2011-01-01

    We present an evolutionary computational approach for the solution of nonlinear ordinary differential equations (NLODEs). The mathematical modeling is performed by a feed-forward artificial neural network that defines an unsupervised error. The training of these networks is achieved by a hybrid intelligent algorithm, a combination of global search with a genetic algorithm and local search by a pattern search technique. The applicability of this approach ranges from single-order NLODEs to systems of coupled differential equations. We illustrate the method by solving a variety of model problems and present comparisons with solutions obtained by exact methods and classical numerical methods. The solution is provided on a continuous finite time interval, unlike other numerical techniques of comparable accuracy. With the advent of neuroprocessors and digital signal processors, the method becomes particularly interesting due to the expected essential gains in execution speed.
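
    A toy version of the idea, for y' = -y with y(0) = 1: the trial solution satisfies the initial condition by construction, the unsupervised error is the mean squared ODE residual at collocation points, and plain random search stands in for the paper's hybrid GA/pattern-search optimizer:

        import numpy as np

        t = np.linspace(0.0, 1.0, 20)   # collocation points
        dt = 1e-4                       # step for the finite-difference derivative

        def net(t, w):
            # Tiny one-hidden-layer network with 15 parameters.
            W1, b1, W2 = w[:5], w[5:10], w[10:15]
            return np.tanh(np.outer(t, W1) + b1) @ W2

        def trial(t, w):
            # y(t) = 1 + t * N(t) satisfies y(0) = 1 for any weights.
            return 1.0 + t * net(t, w)

        def residual(w):
            dy = (trial(t + dt, w) - trial(t - dt, w)) / (2 * dt)
            return np.mean((dy + trial(t, w)) ** 2)   # residual of y' = -y

        best_w, best_e = np.zeros(15), residual(np.zeros(15))
        for _ in range(20000):          # crude global random search
            cand = best_w + 0.1 * np.random.randn(15)
            e = residual(cand)
            if e < best_e:
                best_w, best_e = cand, e

        print(best_e, np.max(np.abs(trial(t, best_w) - np.exp(-t))))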

  7. Computational Thinking and Practice - A Generic Approach to Computing in Danish High Schools

    DEFF Research Database (Denmark)

    Caspersen, Michael E.; Nowack, Palle

    2014-01-01

    Internationally, there is a growing awareness of the necessity of providing relevant computing education in schools, particularly high schools. We present a new and generic approach to computing in Danish high schools based on a conceptual framework derived from ideas related to computational thinking. We present two main theses on which the subject is based, and we present the included knowledge areas and didactical design principles. Finally we summarize the status and future plans for the subject and related development projects....

  8. Accuracy, resolution, and computational complexity of a discontinuous Galerkin finite element method

    NARCIS (Netherlands)

    Ven, van der H.; Vegt, van der J.J.W.; Cockburn, B.; Karniadakis, G.E.; Shu, C.-W.

    2000-01-01

    This series contains monographs of lecture notes type, lecture course material, and high-quality proceedings on topics described by the term "computational science and engineering". This includes theoretical aspects of scientific computing such as mathematical modeling, optimization methods, discret...

  9. Software Systems for High-performance Quantum Computing

    Energy Technology Data Exchange (ETDEWEB)

    Humble, Travis S [ORNL; Britt, Keith A [ORNL

    2016-01-01

    Quantum computing promises new opportunities for solving hard computational problems, but harnessing this novelty requires breakthrough concepts in the design, operation, and application of computing systems. We define some of the challenges facing the development of quantum computing systems, as well as software-based approaches that can be used to overcome these challenges. Following a brief overview of the state of the art, we present models for quantum programming and execution, the development of architectures for hybrid high-performance computing systems, and the realization of software stacks for quantum networking. This leads to a discussion of the role that conventional computing plays in the quantum paradigm and how some of the current challenges for exascale computing overlap with those facing quantum computing.

  10. A Two Layer Approach to the Computability and Complexity of Real Functions

    DEFF Research Database (Denmark)

    Lambov, Branimir Zdravkov

    2003-01-01

    We present a new model for computability and complexity of real functions, together with an implementation that is based on it. The model uses a two-layer approach in which low-type basic objects perform the computation of a real function but, whenever needed, can be complemented with higher type ... in computable analysis, while the efficiency of the implementation is not compromised by the need to create and maintain higher-type objects....

  11. Copper (II) diamino acid complexes: Quantum chemical computations regarding diastereomeric effects on the energy of complexation

    NARCIS (Netherlands)

    Zuilhof, H.; Morokuma, K.

    2003-01-01

    Quantum chemical calculations were used to rationalize the observed enantiodifferentiation in the complexation of alpha-amino acids to chiral Cu(II) complexes. Apart from Cu(II)-pi interactions and steric repulsions between the anchoring cholesteryl-Glu moiety and an aromatic amino acid R group, hyd

  12. Computational Study on the Stacking Interaction in Catechol Complexes

    Science.gov (United States)

    Estévez, Laura; Otero, Nicolás; Mosquera, Ricardo A.

    2009-09-01

    The stability and electron density topology of catechol complexes (dimers and a tetramer) were studied using the MPW1B95 functional. The QTAIM analysis shows that the two dimers (face-to-face and C-H/π) have different electronic origins. The formation of the former is accompanied by a significant change in the values of atomic electron dipole and quadrupole components, flattening the most diffuse part of the electron density distribution toward the molecular plane. A small electron population transfer is observed between catechol monomers connected by C-H/π interactions, whose QTAIM characterization does not differ from that of a weak hydrogen bond. Cooperative effects in the tetramer on binding energies are small, and negligible for bond properties and charge transfer. Nevertheless, they are significant for atomic electron populations.

  13. Game-Theoretic Rate-Distortion-Complexity Optimization of High Efficiency Video Coding

    DEFF Research Database (Denmark)

    Ukhanova, Ann; Milani, Simone; Forchhammer, Søren

    2013-01-01

    This paper presents an algorithm for rate-distortion-complexity optimization for the emerging High Efficiency Video Coding (HEVC) standard, whose high computational requirements urge the need for low-complexity optimization algorithms. Optimization approaches need to specify different complexity profiles in order to tailor the computational load to the different hardware and power-supply resources of devices. In this work, we focus on optimizing the quantization parameter and partition depth in HEVC via a game-theoretic approach. The proposed rate control strategy alone provides 0.2 dB improvement...
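
    In sketch form, such an optimization reduces to minimizing a Lagrangian cost extended with a complexity term, J = D + λR + γC, over the candidate (QP, depth) settings; the tuples and multipliers below are purely illustrative, not values or the bargaining scheme from the paper:

        # Illustrative candidates: (qp, depth, distortion, rate_bits, time_s).
        candidates = [
            (32, 1, 4.1, 1200, 0.8),
            (32, 3, 3.2, 1350, 2.9),
            (27, 2, 2.5, 2100, 1.7),
        ]
        lam, gamma = 0.004, 0.3   # rate and complexity multipliers (made up)

        def cost(c):
            # J = D + lambda * R + gamma * C
            return c[2] + lam * c[3] + gamma * c[4]

        qp, depth = min(candidates, key=cost)[:2]
        print("chosen (qp, depth):", (qp, depth))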

  14. High Performance Computing in Science and Engineering '08 : Transactions of the High Performance Computing Center

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2009-01-01

    The discussions and plans on all scientific, advisory, and political levels to realize an even larger "European Supercomputer" in Germany, where the hardware costs alone will be hundreds of millions of euros, much more than in the past, are getting closer to realization. As part of the strategy, the three national supercomputing centres HLRS (Stuttgart), NIC/JSC (Jülich) and LRZ (Munich) have formed the Gauss Centre for Supercomputing (GCS) as a new virtual organization enabled by an agreement between the Federal Ministry of Education and Research (BMBF) and the state ministries for research of Baden-Württemberg, Bayern, and Nordrhein-Westfalen. Already today, the GCS provides the most powerful high-performance computing infrastructure in Europe. Through GCS, HLRS participates in the European project PRACE (Partnership for Advanced Computing in Europe) and extends its reach to all European member countries. These activities align well with the activities of HLRS in the European HPC infrastructur...

  15. Accurate Computed Enthalpies of Spin Crossover in Iron and Cobalt Complexes

    DEFF Research Database (Denmark)

    Kepp, Kasper Planeta; Cirera, J

    2009-01-01

    Despite their importance in many chemical processes, the relative energies of spin states of transition metal complexes have so far been haunted by large computational errors. By the use of six functionals, B3LYP, BP86, TPSS, TPSSh, M06L, and M06, this work studies nine complexes (seven with iron...

  16. Choice of velocity variables for complex flow computation

    Science.gov (United States)

    Shyy, W.; Chang, G. C.

    1991-01-01

    The issue of adopting velocity components as dependent velocity variables for Navier-Stokes flow computations is investigated. The viewpoint advocated is that a numerical algorithm should preferably honor both the physical conservation law in differential form and the geometric conservation law in discrete form. With the use of the Cartesian velocity vector, the momentum equations in curvilinear coordinates can retain the full conservation-law form and satisfy the physical conservation laws. With the curvilinear velocity components, source terms appear in the differential equations and hence the full conservation-law form cannot be retained. In discrete expressions, algorithms based on the Cartesian components can satisfy the geometric conservation-law form for convection terms but not for viscous terms; those based on the curvilinear components, on the other hand, cannot satisfy the geometric conservation-law form for either convection or viscous terms. Several flow solutions for domains with 90- and 360-degree turns are presented to illustrate the issues of using the Cartesian velocity components and the staggered grid arrangement.

  17. High Performance Spaceflight Computing (HPSC) Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In 2012, the NASA Game Changing Development Program (GCDP), residing in the NASA Space Technology Mission Directorate (STMD), commissioned a High Performance...

  18. 8th International Workshop on Parallel Tools for High Performance Computing

    CERN Document Server

    Gracia, José; Knüpfer, Andreas; Resch, Michael; Nagel, Wolfgang

    2015-01-01

    Numerical simulation and modelling using High Performance Computing has evolved into an established technique in academic and industrial research. At the same time, the High Performance Computing infrastructure is becoming ever more complex. For instance, most of the current top systems around the world use thousands of nodes in which classical CPUs are combined with accelerator cards in order to enhance their compute power and energy efficiency. This complexity can only be mastered with adequate development and optimization tools. Key topics addressed by these tools include parallelization on heterogeneous systems, performance optimization for CPUs and accelerators, debugging of increasingly complex scientific applications, and optimization of energy usage in the spirit of green IT. This book represents the proceedings of the 8th International Parallel Tools Workshop, held October 1-2, 2014 in Stuttgart, Germany – which is a forum to discuss the latest advancements in the parallel tools.

  19. Fast high-resolution computer-generated hologram computation using multiple graphics processing unit cluster system.

    Science.gov (United States)

    Takada, Naoki; Shimobaba, Tomoyoshi; Nakayama, Hirotaka; Shiraki, Atsushi; Okada, Naohisa; Oikawa, Minoru; Masuda, Nobuyuki; Ito, Tomoyoshi

    2012-10-20

    To overcome the computational complexity of a computer-generated hologram (CGH), we implement an optimized CGH computation in our multi-graphics processing unit cluster system. Our system can calculate a CGH of 6,400×3,072 pixels from a three-dimensional (3D) object composed of 2,048 points in 55 ms. Furthermore, in the case of a 3D object composed of 4096 points, our system is 553 times faster than a conventional central processing unit (using eight threads).
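
    A minimal sketch of the point-source kernel that dominates this workload, under the Fresnel approximation commonly used in GPU CGH work (sizes and object points here are toy values, not the paper's configuration):

        import numpy as np

        wavelength = 532e-9                       # metres
        pitch = 8e-6                              # hologram pixel pitch
        H, W = 512, 512
        ys, xs = np.meshgrid(np.arange(H) * pitch, np.arange(W) * pitch,
                             indexing="ij")

        # Random 3D object points: (x, y) over the aperture, z in 0.2-0.3 m.
        points = np.random.rand(256, 3) * [W * pitch, H * pitch, 0.1] + [0, 0, 0.2]

        field = np.zeros((H, W))
        for px, py, pz in points:
            r2 = (xs - px) ** 2 + (ys - py) ** 2
            field += np.cos(np.pi * r2 / (wavelength * pz))  # per-point phase term

        hologram = (field > 0).astype(np.uint8)   # binarized amplitude hologram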

  20. High Available COTS Based Computer for Space

    Science.gov (United States)

    Hartmann, J.; Magistrati, Giorgio

    2015-09-01

    The availability and reliability factors of a system are central requirements of a target application. From a simple fuel-injection system used in cars up to the flight control system of an autonomously navigating spacecraft, each application defines its specific availability factor under the target application's boundary conditions. Increasing quality requirements on data processing systems used in space flight applications call for new architectures to fulfill the availability and reliability demands as well as the increase in required data processing power. Alongside the increased quality requirements, simplification and the use of COTS components to decrease costs, while keeping interface compatibility with currently used system standards, are clear customer needs. Data processing system design is mostly dominated by strict fulfillment of the customer requirements, and reuse of available computer systems has not always been possible owing to the obsolescence of EEE parts, insufficient I/O capabilities, or the fact that available data processing systems did not provide the required scalability and performance.

  1. Combinatorial computational chemistry approach for materials design: applications in deNOx catalysis, Fischer-Tropsch synthesis, lanthanoid complex, and lithium ion secondary battery.

    Science.gov (United States)

    Koyama, Michihisa; Tsuboi, Hideyuki; Endou, Akira; Takaba, Hiromitsu; Kubo, Momoji; Del Carpio, Carlos A; Miyamoto, Akira

    2007-02-01

    Computational chemistry can provide fundamental knowledge regarding various aspects of materials. While its impact in scientific research is greatly increasing, its contributions to industrially important issues are far from satisfactory. In order to realize industrial innovation by computational chemistry, a new concept "combinatorial computational chemistry" has been proposed by introducing the concept of combinatorial chemistry to computational chemistry. This combinatorial computational chemistry approach enables theoretical high-throughput screening for materials design. In this manuscript, we review the successful applications of combinatorial computational chemistry to deNO(x) catalysts, Fischer-Tropsch catalysts, lanthanoid complex catalysts, and cathodes of the lithium ion secondary battery.

  2. COMPLEXITY ANALYSIS AND COMPUTATION OF THE OPTIMAL HARVESTING FOR ONE-SPECIES

    Institute of Scientific and Technical Information of China (English)

    Haiying JING; Zhaoyu YANG

    2006-01-01

    The exploitation of renewable resources creates many complex problems for culture, ecology, and economics alike. Ascertaining the essentials behind these complex problems is very important. In this paper, we mainly study the various complex relations appearing in the optimal exploitation process for renewable resources. First, we derive a sufficient condition for the existence of optimal harvesting policies for one-species population resources. Then we present every possible optimal harvesting pattern for such a model. On this basis, we give a computing formula for estimating the optimal harvesting period, optimal transitional period, and optimal recruitment period. The main difference with respect to previous works in the literature is that our optimal harvesting policy is a piecewise-continuous function of time t; at the piecewise point tc, called the switching time, we switch the harvesting rate from h to some transitional control u*, and then to 0. Clearly, this kind of harvesting policy is easier to carry out than those proposed by others, provided that there exists a managing department that can closely supervise the resources.
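
    As a worked illustration of the policy structure just described (the abstract does not specify the paper's growth law, so a generic growth function F is used; the logistic form F(x) = rx(1 - x/K) is a common choice), the harvested dynamics and the piecewise policy can be written as:

        \dot{x}(t) = F(x(t)) - u(t), \qquad
        u(t) =
        \begin{cases}
        h, & 0 \le t < t_c, \\
        u^{*}, & t_c \le t < t_c + \tau, \\
        0, & t \ge t_c + \tau,
        \end{cases}

    where t_c is the switching time and τ the length of the transitional period; the final zero-harvest phase corresponds to the recruitment period.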

  3. CRPC research into linear algebra software for high performance computers

    Energy Technology Data Exchange (ETDEWEB)

    Choi, J.; Walker, D.W. [Oak Ridge National Lab., TN (United States). Mathematical Sciences Section; Dongarra, J.J. [Univ. of Tennessee, Knoxville, TN (United States). Dept. of Computer Science] [Oak Ridge National Lab., TN (United States). Mathematical Sciences Section; Pozo, R. [Univ. of Tennessee, Knoxville, TN (United States). Dept. of Computer Science; Sorensen, D.C. [Rice Univ., Houston, TX (United States). Dept. of Computational and Applied Mathematics

    1994-12-31

    In this paper the authors look at a number of approaches being investigated in the Center for Research on Parallel Computation (CRPC) to develop linear algebra software for high-performance computers. These approaches are exemplified by the LAPACK, templates, and ARPACK projects. LAPACK is a software library for performing dense and banded linear algebra computations, and was designed to run efficiently on high-performance computers. The authors focus on the design of the distributed-memory version of LAPACK, and on an object-oriented interface to LAPACK.

  4. High throughput computing: a solution for scientific analysis

    Science.gov (United States)

    O'Donnell, M.

    2011-01-01

    Public land management agencies continually face resource management problems that are exacerbated by climate warming, land-use change, and other human activities. As the U.S. Geological Survey (USGS) Fort Collins Science Center (FORT) works with managers in U.S. Department of the Interior (DOI) agencies and other federal, state, and private entities, researchers are finding that the science needed to address these complex ecological questions across time and space produces substantial amounts of data. The additional data and the volume of computations needed to analyze it require expanded computing resources well beyond single- or even multiple-computer workstations. To meet this need for greater computational capacity, FORT investigated how to resolve the many computational shortfalls previously encountered when analyzing data for such projects. Our objectives included finding a solution that would:

  5. An accurate and efficient computation method of the hydration free energy of a large, complex molecule.

    Science.gov (United States)

    Yoshidome, Takashi; Ekimoto, Toru; Matubayasi, Nobuyuki; Harano, Yuichi; Kinoshita, Masahiro; Ikeguchi, Mitsunori

    2015-05-07

    The hydration free energy (HFE) is a crucially important physical quantity for discussing various chemical processes in aqueous solutions. Although an explicit-solvent computation with molecular dynamics (MD) simulations is a preferable treatment of the HFE, a huge computational load has been inevitable for large, complex solutes like proteins. In the present paper, we propose an efficient computation method for the HFE. In our method, the HFE is computed as the sum of 〈UUV〉/2 (where 〈UUV〉 is the ensemble average of the sum of the pair interaction energies between the solute and the water molecules) and the water reorganization term, which mainly reflects the excluded volume effect. Since 〈UUV〉 can readily be computed through an MD simulation of the system composed of solute and water, an efficient computation of the latter term leads to a reduction of the computational load. We demonstrate that the water reorganization term can quantitatively be calculated using the morphometric approach (MA), which expresses the term as a linear combination of four geometric measures of a solute and the corresponding coefficients determined with the energy representation (ER) method. Since the MA enables us to finish the computation of the solvent reorganization term in less than 0.1 s once the coefficients are determined, the use of the MA provides an efficient computation of the HFE even for large, complex solutes. Through the applications, we find that our method has almost the same quantitative performance as the ER method, with a substantial reduction of the computational load.
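
    In symbols, with the morphometric form written out (the four geometric measures are the standard morphometric set; the notation below is ours, not the paper's):

        \Delta\mu_{\mathrm{hyd}} = \tfrac{1}{2}\langle U_{UV}\rangle + \mu_{\mathrm{reorg}},
        \qquad
        \mu_{\mathrm{reorg}} \approx c_1 V + c_2 A + c_3 C + c_4 X,

    where V is the excluded volume, A the water-accessible surface area, C and X the integrated mean and Gaussian curvatures of the solute surface, and the coefficients c_i are the ones determined with the ER method.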

  6. Benchmarking: More Aspects of High Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Ravindrudu, Rahul [Iowa State Univ., Ames, IA (United States)

    2004-01-01

    pattern for the left-looking factorization. The right-looking algorithm performs better for in-core data, but the left-looking algorithm will perform better for out-of-core data due to the reduced I/O operations; hence the conclusion that out-of-core algorithms perform better when designed as out-of-core from the start. The out-of-core and thread-based computations do not interact in this case, since I/O is not done by the threads. The performance of the thread-based computation does not depend on I/O, as the algorithms are BLAS routines that assume all data to be in memory. This is the reason the out-of-core results and the OpenMP thread results were presented separately, and no attempt to combine them was made. In general, the modified HPL performs better with larger block sizes, due to less I/O for the out-of-core part and better cache utilization for the thread-based computation.

  7. Biomedical Requirements for High Productivity Computing Systems

    Science.gov (United States)

    2005-04-01

    Virtually all high-level programming is now done in Python, with numerically intensive operations performed by embedded C++ libraries. While Python is not currently used directly for numerically intensive work, a higher-performance Python solution, i.e., a Python compiler or a better JIT, would be quite desirable.

  8. Cationic complexation with dissolved organic matter: Insights from molecular dynamics computer simulations and NMR spectroscopy

    Science.gov (United States)

    Kalinichev, A. G.; Xu, X.; Kirkpatrick, R.

    2006-12-01

    Dissolved organic matter (DOM) is ubiquitous in soil and surface water and plays many important geochemical and environmental roles, acting as a proton donor/acceptor and pH buffer and interacting with metal ions, minerals and organic species to form water-soluble and water-insoluble complexes of widely differing chemical and biological stabilities. There are strong correlations among the concentration of DOM and the speciation, solubility and toxicity of many trace metals in soil and water due to metal-DOM interactions. DOM can also significantly degrade the performance of nanofiltration and reverse osmosis membranes used industrially for water purification and desalination, being one of the major causes of so-called membrane biofouling. The molecular-scale mechanisms and dynamics of the DOM interactions with metals and membranes are, however, quite poorly understood. Methods of computational molecular modeling, combined with element-specific nuclear magnetic resonance (NMR) spectroscopy, can serve as highly effective tools to probe and quantify, on a fundamental molecular level, the DOM interactions with metal cations in aqueous solutions, and to develop predictive models of the molecular mechanisms responsible for metal-DOM complexation in the environment. This paper presents the results of molecular dynamics (MD) computer simulations of the interaction of DOM with dissolved Na+, Cs+, Mg2+, and Ca2+. Na+ forms only very weak outer-sphere complexes with DOM. These results, and the results of other recent molecular modeling efforts (e.g., Sutton et al., Environmental Toxicology and Chemistry, 24, 1902-1911, 2005), clearly indicate that both the structural and dynamic aspects of cation-DOM complexation follow a simple trend in terms of the charge/size ratio of the ions. Due to the competition between ion hydration in bulk aqueous solution and adsorption of these cations by the negatively charged DOM functional groups (primarily carboxylate

  9. A high performance scientific cloud computing environment for materials simulations

    CERN Document Server

    Jorissen, Kevin; Rehr, John J

    2011-01-01

    We describe the development of a scientific cloud computing (SCC) platform that offers high performance computation capability. The platform consists of a scientific virtual machine prototype containing a UNIX operating system and several materials science codes, together with essential interface tools (an SCC toolset) that offers functionality comparable to local compute clusters. In particular, our SCC toolset provides automatic creation of virtual clusters for parallel computing, including tools for execution and monitoring performance, as well as efficient I/O utilities that enable seamless connections to and from the cloud. Our SCC platform is optimized for the Amazon Elastic Compute Cloud (EC2). We present benchmarks for prototypical scientific applications and demonstrate performance comparable to local compute clusters. To facilitate code execution and provide user-friendly access, we have also integrated cloud computing capability in a JAVA-based GUI. Our SCC platform may be an alternative to traditi...

  10. Computer simulations reveal complex distribution of haemodynamic forces in a mouse retina model of angiogenesis

    CERN Document Server

    Bernabeu, Miguel O; Jones, Martin; Nielsen, Jens H; Krüger, Timm; Nash, Rupert W; Groen, Derek; Hetherington, James; Gerhardt, Holger; Coveney, Peter V

    2013-01-01

    There is currently limited understanding of the role played by haemodynamic forces on the processes governing vascular development. One of many obstacles to be overcome is being able to measure those forces, at the required resolution level, on vessels only a few micrometres thick. In the current paper, we present an in silico method for the computation of the haemodynamic forces experienced by murine retinal vasculature (a widely used vascular development animal model) beyond what is measurable experimentally. Our results show that it is possible to reconstruct high-resolution three-dimensional geometrical models directly from samples of retinal vasculature and that the lattice-Boltzmann algorithm can be used to obtain accurate estimates of the haemodynamics in these domains. Our findings show that the flow patterns recovered are complex, that branches of predominant flow exist from early development stages, and that the pruning process tends to make the wall shear stress experienced by the capillaries incre...

  11. Computational Complexities and Breaches in Authentication Frameworks of Broadband Wireless Access

    CERN Document Server

    Hashmi, Raheel Maqsood; Jabeen, Memoona; Alimgeer, Khurram S; Khan, Shahid A

    2009-01-01

    Secure access to communication networks has become an increasingly important consideration for present-day communication service providers. Broadband Wireless Access (BWA) networks are proving to be an efficient and cost-effective solution for provisioning high-rate wireless traffic links in static and mobile domains. Secure access to these networks is necessary to ensure their superior operation and revenue efficacy. Although the authentication process is a key to secure access in BWA networks, the breaches present in it limit network performance. In this paper, the vulnerabilities in the authentication frameworks of BWA networks are unveiled. The paper also describes the limitations of these protocols, and of the solutions proposed for them, arising from the computational complexities and overheads involved. The possible attacks on the privacy and performance of BWA networks are discussed and explained in detail.

  12. Experiment and computation: a combined approach to study the van der Waals complexes

    Directory of Open Access Journals (Sweden)

    Surin L.A.

    2017-01-01

    Full Text Available A review of recent results on the millimetre-wave spectroscopy of weakly bound van der Waals complexes, mostly those containing H2 and He, is presented. In our work, we compared the experimental spectra to theoretical bound-state results, thus providing a critical test of the quality of the M–H2 and M–He potential energy surfaces (PESs), which are a key issue for reliable computations of the collisional excitation and de-excitation of molecules (M = CO, NH3, H2O) in the dense interstellar medium. The intermolecular interactions with He and H2 also play an important role in high resolution spectroscopy of helium or para-hydrogen clusters doped by a probe molecule (CO, HCN). Such experiments are directed at detecting the superfluid response of molecular rotation in the He and p-H2 clusters.

  13. The Principles and Practice of Distributed High Throughput Computing

    CERN Document Server

    CERN. Geneva

    2016-01-01

    The potential of Distributed Processing Systems to deliver computing capabilities with qualities ranging from high availability and reliability to easy expansion in functionality and capacity was recognized and formalized in the 1970s. For more than three decades these principles of Distributed Computing have guided the development of the HTCondor resource and job management system. The widely adopted suite of software tools offered by HTCondor is based on novel distributed computing technologies and is driven by the evolving needs of High Throughput scientific applications. We will review the principles that underpin our work, the distributed computing frameworks and technologies we developed, and the lessons we learned from delivering effective and dependable software tools in an ever-changing landscape of computing technologies and needs that range today from a desktop computer to the tens of thousands of cores offered by commercial clouds. About the speaker: Miron Livny received a B.Sc. degree in Physics and Mat...

  14. Computer-Aided Design of Drugs on Emerging Hybrid High Performance Computers

    Science.gov (United States)

    2013-09-01

    (Indexed fragments: scalable clustering of high-throughput molecular datasets using MapReduce, Workshop on Trends in High-Performance Distributed Computing, Vrije Universiteit, Amsterdam, NL; middleware packages for polarizable force fields on multi-core and GPU systems, supported by the MapReduce paradigm; NSF MRI #0922657, $451,051.)

  15. Comparing computer experiments for fitting high-order polynomial metamodels

    OpenAIRE

    Johnson, Rachel T.; Montgomery, Douglas C.; Jones, Bradley; Parker, Peter T.

    2010-01-01

    The use of simulation as a modeling and analysis tool is widespread. Simulation is an enabling tool for experimenting virtually in a validated computer environment. Often the underlying function for a computer experiment result has too much curvature to be adequately modeled by a low-order polynomial. In such cases, finding an appropriate experimental design is not easy. We evaluate several computer experiments assuming the modeler is interested in fitting a high-order polynomial to th...

  16. Nuclear Forces and High-Performance Computing: The Perfect Match

    Energy Technology Data Exchange (ETDEWEB)

    Luu, T; Walker-Loud, A

    2009-06-12

    High-performance computing is now enabling the calculation of certain nuclear interaction parameters directly from Quantum Chromodynamics, the quantum field theory that governs the behavior of quarks and gluons and is ultimately responsible for the nuclear strong force. We briefly describe the state of the field and describe how progress in this field will impact the greater nuclear physics community. We give estimates of computational requirements needed to obtain certain milestones and describe the scientific and computational challenges of this field.

  17. Efficient physical embedding of topologically complex information processing networks in brains and computer circuits.

    Directory of Open Access Journals (Sweden)

    Danielle S Bassett

    2010-04-01

    Full Text Available Nervous systems are information processing networks that evolved by natural selection, whereas very large scale integrated (VLSI computer circuits have evolved by commercially driven technology development. Here we follow historic intuition that all physical information processing systems will share key organizational properties, such as modularity, that generally confer adaptivity of function. It has long been observed that modular VLSI circuits demonstrate an isometric scaling relationship between the number of processing elements and the number of connections, known as Rent's rule, which is related to the dimensionality of the circuit's interconnect topology and its logical capacity. We show that human brain structural networks, and the nervous system of the nematode C. elegans, also obey Rent's rule, and exhibit some degree of hierarchical modularity. We further show that the estimated Rent exponent of human brain networks, derived from MRI data, can explain the allometric scaling relations between gray and white matter volumes across a wide range of mammalian species, again suggesting that these principles of nervous system design are highly conserved. For each of these fractal modular networks, the dimensionality of the interconnect topology was greater than the 2 or 3 Euclidean dimensions of the space in which it was embedded. This relatively high complexity entailed extra cost in physical wiring: although all networks were economically or cost-efficiently wired they did not strictly minimize wiring costs. Artificial and biological information processing systems both may evolve to optimize a trade-off between physical cost and topological complexity, resulting in the emergence of homologous principles of economical, fractal and modular design across many different kinds of nervous and computational networks.
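
    Estimating a Rent exponent from partition statistics is essentially a log-log fit of connections against processing elements; a minimal NumPy sketch with hypothetical (element count, external connection) pairs:

        import numpy as np

        # Hypothetical counts for nested partitions of a network;
        # Rent's rule posits T = t * N**p.
        N = np.array([16, 32, 64, 128, 256, 512])   # elements per partition
        T = np.array([34, 55, 93, 154, 252, 417])   # external connections

        p, log_t = np.polyfit(np.log(N), np.log(T), 1)  # slope = Rent exponent
        print(f"Rent exponent p ~= {p:.2f}, prefactor t ~= {np.exp(log_t):.1f}")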

  18. Linear Transceiver Design for Interference Alignment: Complexity and Computation

    CERN Document Server

    Razaviyayn, Meisam; Luo, Zhi-Quan

    2010-01-01

    Consider a MIMO interference channel whereby each transmitter and receiver are equipped with multiple antennas. The basic problem is to design optimal linear transceivers (or beamformers) that can maximize system throughput. The recent work [1] suggests that optimal beamformers should maximize the total degrees of freedom and achieve interference alignment at high SNR. In this paper we first consider the interference alignment problem in the spatial domain and prove that the problem of maximizing the total degrees of freedom for a given MIMO interference channel is NP-hard. Furthermore, we show that even checking the achievability of a given tuple of degrees of freedom for all receivers is NP-hard when each receiver is equipped with at least three antennas. Interestingly, the same problem becomes polynomial-time solvable when each transmit/receive node is equipped with no more than two antennas. Finally, we propose a distributed algorithm for transmit covariance matrix design, while assuming each receiver uses a ...
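
    For concreteness, the spatial-domain alignment problem referred to here asks for transmit beamformers V_j and receive filters U_k satisfying (standard formulation; the notation is ours, not the paper's):

        U_k^{\mathsf{H}} H_{kj} V_j = 0 \quad \text{for all } j \neq k,
        \qquad
        \operatorname{rank}\left( U_k^{\mathsf{H}} H_{kk} V_k \right) = d_k,

    so that receiver k sees its d_k desired streams free of interference; the NP-hardness results above concern deciding whether a tuple (d_1, ..., d_K) of degrees of freedom is achievable.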

  19. Low latency, high bandwidth data communications between compute nodes in a parallel computer

    Science.gov (United States)

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2010-11-02

    Methods, parallel computers, and computer program products are disclosed for low latency, high bandwidth data communications between compute nodes in a parallel computer. Embodiments include receiving, by an origin direct memory access (`DMA`) engine of an origin compute node, data for transfer to a target compute node; sending, by the origin DMA engine of the origin compute node to a target DMA engine on the target compute node, a request to send (`RTS`) message; transferring, by the origin DMA engine, a predetermined portion of the data to the target compute node using a memory FIFO operation; determining, by the origin DMA engine, whether an acknowledgement of the RTS message has been received from the target DMA engine; if an acknowledgement of the RTS message has not been received, transferring, by the origin DMA engine, another predetermined portion of the data to the target compute node using a memory FIFO operation; and if the acknowledgement of the RTS message has been received by the origin DMA engine, transferring, by the origin DMA engine, any remaining portion of the data to the target compute node using a direct put operation.
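
    Stripped of the DMA specifics, the control flow described is an eager-then-rendezvous loop: stream fixed-size FIFO chunks until the RTS acknowledgement arrives, then move the remainder with a single direct put. A self-contained sketch with stub transport functions (all names hypothetical):

        CHUNK = 64 * 1024
        log = []

        def send_rts(n):        log.append(("RTS", n))
        def fifo_put(chunk):    log.append(("FIFO", len(chunk)))
        def direct_put(chunk):  log.append(("PUT", len(chunk)))

        def ack_after(k):
            # Stub: the acknowledgement becomes visible after k polls.
            state = {"polls": 0}
            def ack():
                state["polls"] += 1
                return state["polls"] > k
            return ack

        def send(data: bytes, ack_received) -> None:
            send_rts(len(data))
            offset = 0
            while offset < len(data) and not ack_received():
                fifo_put(data[offset:offset + CHUNK])   # eager FIFO copy
                offset += CHUNK
            if offset < len(data):
                direct_put(data[offset:])               # remainder after ACK

        send(b"x" * (300 * 1024), ack_after(2))
        print(log)  # [('RTS', 307200), ('FIFO', 65536), ('FIFO', 65536), ('PUT', 176128)]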

  20. High resolution pollutant measurements in complex urban ...

    Science.gov (United States)

    Measuring air pollution in real time using an instrumented vehicle platform has been an emerging strategy to resolve air pollution trends at a very fine spatial scale (tens of meters). Achieving second-by-second data representative of urban air quality trends requires advanced instrumentation, such as a quantum cascade laser utilized to resolve carbon monoxide and real-time optical detection of black carbon. An equally challenging area of development is the processing and visualization of complex geospatial air monitoring data to decipher key trends of interest. EPA's Office of Research and Development staff have applied air monitoring to evaluate community air quality in a variety of environments, including assessing air quality surrounding rail yards, evaluating noise wall or tree stand effects on roadside and on-road air quality, and surveying traffic-related exposure zones for comparison with land-use regression estimates. ORD has ongoing efforts to improve mobile monitoring data collection and interpretation, including instrumentation testing, evaluating the effect of post-processing algorithms on derived trends, and developing a web-based tool called Real-Time Geospatial Data Viewer (RETIGO) allowing for a simple plug-and-play of mobile monitoring data. Example findings from mobile data sets include an estimated 50% reduction in roadside ultrafine particle levels when immediately downwind of a noise barrier, increases in neighborhood-wide black carbon levels (3

  1. Transforming High School Physics with Modeling and Computation

    CERN Document Server

    Aiken, John M

    2013-01-01

    The Engage to Excel (PCAST) report, the National Research Council's Framework for K-12 Science Education, and the Next Generation Science Standards all call for transforming the physics classroom into an environment that teaches students real scientific practices. This work describes the early stages of one such attempt to transform a high school physics classroom. Specifically, a series of model-building and computational modeling exercises were piloted in a ninth grade Physics First classroom. Student use of computation was assessed using a proctored programming assignment, where the students produced and discussed a computational model of a baseball in motion via a high-level programming environment (VPython). Student views on computation and its link to mechanics were assessed with a written essay and a series of think-aloud interviews. This pilot study shows computation's ability to connect scientific practice to the high school science classroom.

  2. High Performance Computing in Science and Engineering '14

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2015-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS). The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and   engineers. The book comes with a wealth of color illustrations and tables of results.  

  3. ATLAS FTK a – very complex – custom super computer

    CERN Document Server

    Kimura, Naoki; The ATLAS collaboration

    2016-01-01

    In the ever-increasing pile-up environment of the LHC, advanced data-analysis techniques are implemented in order to increase the rate of relevant physics processes with respect to background processes. The Fast TracKer (FTK) is a hardware-level track-finding implementation designed to deliver full-scan tracks with pT above 1 GeV to the ATLAS trigger system for every L1 accept (at a maximum rate of 100 kHz). In order to achieve this performance a highly parallel system was designed, and it is now under installation in ATLAS. In the beginning of 2016 it will provide tracks for the trigger system in a region covering the central part of the ATLAS detector, and during the year its coverage will be extended to the full detector. The system relies on matching hits coming from the silicon tracking detectors against 1 billion patterns stored in specially designed ASIC chips (Associative Memory, AM06). In a first stage, coarse-resolution hits are matched against the patterns and the accepted hits u...

  4. Domain Decomposition Based High Performance Parallel Computing

    CERN Document Server

    Raju, Mandhapati P

    2009-01-01

    The study deals with the parallelization of finite element based Navier-Stokes codes using domain decomposition and state-of-the-art sparse direct solvers. There has been significant improvement in the performance of sparse direct solvers, but parallel sparse direct solvers are not found to exhibit good scalability. Hence, the parallelization of the sparse direct solver is done using domain decomposition techniques. A highly efficient sparse direct solver, PARDISO, is used in this study. The scalability of both Newton and modified Newton algorithms is tested.

  5. High-Speed Computer-Controlled Switch-Matrix System

    Science.gov (United States)

    Spisz, E.; Cory, B.; Ho, P.; Hoffman, M.

    1985-01-01

    A high-speed computer-controlled switch-matrix system was developed for communication satellites. The satellite system is controlled by an onboard computer, and all message-routing functions between uplink and downlink beams are handled by the newly developed switch-matrix system. A message requires only a 2-microsecond interconnect period, repeated every millisecond.

  6. High Throughput Screening for Neurodegeneration and Complex Disease Phenotypes

    OpenAIRE

    Varma, Hemant; Lo, Donald C.; Stockwell, Brent R.

    2008-01-01

    High throughput screening (HTS) for complex diseases is challenging. This stems from the fact that complex phenotypes are difficult to adapt to rapid, high throughput assays. We describe the recent development of high throughput and high-content screens (HCS) for neurodegenerative diseases, with a focus on inherited neurodegenerative disorders, such as Huntington's disease. We describe, among others, HTS assays based on protein aggregation, neuronal death, caspase activation and mutant protei...

  7. Scientific and high-performance computing at FAIR

    Directory of Open Access Journals (Sweden)

    Kisel Ivan

    2015-01-01

    Full Text Available Future FAIR experiments have to deal with very high input rates and large track multiplicities, and must perform full event reconstruction and selection on-line on a large dedicated computer farm equipped with heterogeneous many-core CPU/GPU compute nodes. Developing efficient and fast algorithms optimized for parallel computation is a challenge for the groups of experts dealing with HPC computing. Here we present and discuss the status and perspectives of the data reconstruction and physics analysis software of one of the future FAIR experiments, namely the CBM experiment.

  8. Multicore Challenges and Benefits for High Performance Scientific Computing

    Directory of Open Access Journals (Sweden)

    Ida M.B. Nielsen

    2008-01-01

    Full Text Available Until recently, performance gains in processors were achieved largely by improvements in clock speeds and instruction level parallelism. Thus, applications could obtain performance increases with relatively minor changes by upgrading to the latest generation of computing hardware. Currently, however, processor performance improvements are realized by using multicore technology and hardware support for multiple threads within each core, and taking full advantage of this technology to improve the performance of applications requires exposure of extreme levels of software parallelism. We will here discuss the architecture of parallel computers constructed from many multicore chips as well as techniques for managing the complexity of programming such computers, including the hybrid message-passing/multi-threading programming model. We will illustrate these ideas with a hybrid distributed memory matrix multiply and a quantum chemistry algorithm for energy computation using Møller–Plesset perturbation theory.
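
    A minimal sketch of the hybrid message-passing/multi-threading model discussed above, assuming mpi4py and a multithreaded BLAS behind NumPy: MPI scatters row blocks of one matrix across processes, and each process's local multiply is threaded inside the BLAS call. The matrix size, and the assumption that it divides evenly across ranks, are illustrative only (run with, e.g., mpiexec -n 4 python hybrid_matmul.py).

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, size = comm.Get_rank(), comm.Get_size()

        n = 1024                                # global dimension, assumed divisible by size
        rows = n // size

        A_block = np.empty((rows, n))
        B = np.empty((n, n))
        A = None
        if rank == 0:
            A = np.random.rand(n, n)
            B[:] = np.random.rand(n, n)

        comm.Scatter(A, A_block, root=0)        # message passing: distribute row blocks of A
        comm.Bcast(B, root=0)                   # every process needs all of B

        C_block = A_block @ B                   # local multiply, threaded inside BLAS

        C = np.empty((n, n)) if rank == 0 else None
        comm.Gather(C_block, C, root=0)         # reassemble the product on rank 0
        if rank == 0:
            print("C[0, :3] =", C[0, :3])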

  9. Resource estimation in high performance medical image computing.

    Science.gov (United States)

    Banalagay, Rueben; Covington, Kelsie Jade; Wilkes, D M; Landman, Bennett A

    2014-10-01

    Medical imaging analysis processes often involve the concatenation of many steps (e.g., multi-stage scripts) to integrate and realize advancements from image acquisition, image processing, and computational analysis. With the dramatic increase in data size for medical imaging studies (e.g., improved resolution, higher throughput acquisition, shared databases), interesting study designs are becoming intractable or impractical on individual workstations and servers. Modern pipeline environments provide control structures to distribute computational load in high performance computing (HPC) environments. However, high performance computing environments are often shared resources, and scheduling computation across these resources necessitates higher level modeling of resource utilization. Submission of 'jobs' requires an estimate of the CPU runtime and memory usage. The resource requirements for medical image processing algorithms are difficult to predict since the requirements can vary greatly between different machines, different execution instances, and different data inputs. Poor resource estimates can lead to wasted resources in high performance environments due to incomplete executions and extended queue wait times. Hence, resource estimation is becoming a major hurdle for medical image processing algorithms to efficiently leverage high performance computing environments. Herein, we present our implementation of a resource estimation system to overcome these difficulties and ultimately provide users with the ability to more efficiently utilize high performance computing resources.
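
    The estimation system itself is not detailed in this record, but the core idea, predicting a job's CPU time and memory from features of its input before submission, can be sketched as a simple per-algorithm regression. The historical numbers, the linear model, and the 20% safety margin below are assumptions for illustration.

        import numpy as np

        # Past runs of one algorithm: (input size MB, peak memory GB, runtime minutes).
        history = np.array([[100, 1.2,  8.0],
                            [250, 2.9, 21.0],
                            [500, 5.8, 44.5],
                            [750, 8.6, 69.0]])
        sizes, mem, runtime = history[:, 0], history[:, 1], history[:, 2]

        # Fit simple linear models; the right model order varies per algorithm.
        mem_fit = np.polyfit(sizes, mem, 1)
        time_fit = np.polyfit(sizes, runtime, 1)

        new_size = 600.0
        est_mem = np.polyval(mem_fit, new_size)
        est_time = np.polyval(time_fit, new_size)

        # Pad the request so the scheduler neither kills the job nor queues it forever.
        print(f"request {1.2 * est_mem:.1f} GB and {1.2 * est_time:.0f} min "
              f"for a {new_size:.0f} MB input")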

  10. High Energy Physics Experiments In Grid Computing Networks

    Directory of Open Access Journals (Sweden)

    Andrzej Olszewski

    2008-01-01

    Full Text Available The demand for computing resources used for detector simulations and data analysis in High Energy Physics (HEP) experiments is constantly increasing due to the development of studies of rare physics processes in particle interactions. The latest generation of experiments at the newly built LHC accelerator at CERN in Geneva is planning to use computing networks for their data processing needs. A Worldwide LHC Computing Grid (WLCG) organization has been created to develop a Grid with properties matching the needs of these experiments. In this paper we present the use of Grid computing by HEP experiments and describe activities at the participating computing centers with the case of the Academic Computing Center, ACK Cyfronet AGH, Kraków, Poland.

  11. GPU-based high-performance computing for radiation therapy.

    Science.gov (United States)

    Jia, Xun; Ziegenhein, Peter; Jiang, Steve B

    2014-02-21

    Recent developments in radiation therapy demand high computational power to solve challenging problems in a timely fashion in a clinical environment. The graphics processing unit (GPU), as an emerging high-performance computing platform, has been introduced to radiotherapy. It is particularly attractive due to its high computational power, small size, and low cost for facility deployment and maintenance. Over the past few years, GPU-based high-performance computing in radiotherapy has experienced rapid development. A tremendous amount of study has been conducted, in which large acceleration factors compared with the conventional CPU platform have been observed. In this paper, we will first give a brief introduction to the GPU hardware structure and programming model. We will then review the current applications of GPU in major imaging-related and therapy-related problems encountered in radiotherapy. A comparison of GPU with other platforms will also be presented.

  12. High performance computing: Clusters, constellations, MPPs, and future directions

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, Jack; Sterling, Thomas; Simon, Horst; Strohmaier, Erich

    2003-06-10

    Last year's paper by Bell and Gray [1] examined past trends in high performance computing and asserted likely future directions based on market forces. While many of the insights drawn from this perspective have merit and suggest elements governing likely future directions for HPC, there are a number of points put forth that we feel require further discussion and, in certain cases, suggest alternative, more likely views. One area of concern relates to the nature and use of key terms to describe and distinguish among classes of high end computing systems, in particular the authors' use of "cluster" to relate to essentially all parallel computers derived through the integration of replicated components. The taxonomy implicit in their previous paper, while arguable and supported by some elements of our community, fails to provide the essential semantic discrimination critical to the effectiveness of descriptive terms as tools in managing the conceptual space of consideration. In this paper, we present a perspective that retains the descriptive richness while providing a unifying framework. A second area of discourse that calls for additional commentary is the likely future path of system evolution that will lead to effective and affordable Petaflops-scale computing, including the future role of computer centers as facilities for supporting high performance computing environments. This paper addresses the key issues of taxonomy, future directions towards Petaflops computing, and the important role of computer centers in the 21st century.

  13. High Performance Computing in Science and Engineering '16 : Transactions of the High Performance Computing Center, Stuttgart (HLRS) 2016

    CERN Document Server

    Kröner, Dietmar; Resch, Michael

    2016-01-01

    This book presents the state-of-the-art in supercomputer simulation. It includes the latest findings from leading researchers using systems from the High Performance Computing Center Stuttgart (HLRS) in 2016. The reports cover all fields of computational science and engineering ranging from CFD to computational physics and from chemistry to computer science with a special emphasis on industrially relevant applications. Presenting findings of one of Europe’s leading systems, this volume covers a wide variety of applications that deliver a high level of sustained performance. The book covers the main methods in high-performance computing. Its outstanding results in achieving the best performance for production codes are of particular interest for both scientists and engineers. The book comes with a wealth of color illustrations and tables of results.

  14. Speeding up ecological and evolutionary computations in R; essentials of high performance computing for biologists.

    Science.gov (United States)

    Visser, Marco D; McMahon, Sean M; Merow, Cory; Dixon, Philip M; Record, Sydne; Jongejans, Eelke

    2015-03-01

    Computation has become a critical component of research in biology. A risk has emerged that computational and programming challenges may limit research scope, depth, and quality. We review various solutions to common computational efficiency problems in ecological and evolutionary research. Our review pulls together material that is currently scattered across many sources and emphasizes those techniques that are especially effective for typical ecological and environmental problems. We demonstrate how straightforward it can be to write efficient code and implement techniques such as profiling or parallel computing. We supply a newly developed R package (aprof) that helps to identify computational bottlenecks in R code and determine whether optimization can be effective. Our review is complemented by a practical set of examples and detailed Supporting Information material (S1-S3 Texts) that demonstrate large improvements in computational speed (ranging from 10.5 times to 14,000 times faster). By improving computational efficiency, biologists can feasibly solve more complex tasks, ask more ambitious questions, and include more sophisticated analyses in their research.
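
    The review's examples are written in R (including the authors' aprof package); the same profile-first, parallelize-second workflow can be sketched with Python's standard library, the language used for the examples in this compilation. The growth_rate function and its inputs are hypothetical stand-ins for an expensive ecological computation.

        import cProfile
        import math
        from multiprocessing import Pool

        def growth_rate(params):
            """Hypothetical stand-in for an expensive per-population computation."""
            r, k = params
            return sum(math.log1p(r * (1 - i / k)) for i in range(200_000))

        populations = [(0.1 * i, 1e6) for i in range(1, 17)]

        if __name__ == "__main__":
            # Step 1: profile first to confirm where the time actually goes.
            cProfile.run("[growth_rate(p) for p in populations[:2]]", sort="cumtime")

            # Step 2: the iterations are independent, so farm them out to worker processes.
            with Pool() as pool:
                rates = pool.map(growth_rate, populations)
            print(rates[:3])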

  15. High Performance Computing Assets for Ocean Acoustics Research

    Science.gov (United States)

    2016-11-18

    ...that make them easily parallelizable in the manner that, for example, atmospheric or ocean general circulation models (GCMs) are parallel. Many GCMs... Enclosed is the Final Report for ONR Grant No. N00014-15-1-2840, entitled "High Performance Computing Assets for Ocean Acoustics Research" (Timothy F. Duda, Applied Ocean...); distribution is unlimited.

  16. Dynamic Resource Management and Job Scheduling for High Performance Computing

    OpenAIRE

    2016-01-01

    Job scheduling and resource management play an essential role in high-performance computing. Supercomputing resources are usually managed by a batch system, which is responsible for the effective mapping of jobs onto resources (i.e., compute nodes). From the system perspective, a batch system must ensure high system utilization and throughput, while from the user perspective it must ensure fast response times and fairness when allocating resources across jobs. Parallel jobs can be divide...

  17. Compact high performance spectrometers using computational imaging

    Science.gov (United States)

    Morton, Kenneth; Weisberg, Arel

    2016-05-01

    Compressive sensing technology can theoretically be used to develop low cost compact spectrometers with the performance of larger and more expensive systems. Indeed, compressive sensing for spectroscopic systems has been previously demonstrated using coded aperture techniques, wherein a mask is placed between the grating and a charge coupled device (CCD) and multiple measurements are collected with different masks. Although proven effective for some spectroscopic sensing paradigms (e.g. Raman), this approach requires that the signal being measured is static between shots (low noise and minimal signal fluctuation). Many spectroscopic techniques applicable to remote sensing are inherently noisy and thus coded aperture compressed sensing will likely not be effective. This work explores an alternative approach to compressed sensing that allows for reconstruction of a high resolution spectrum in sensing paradigms featuring significant signal fluctuations between measurements. This is accomplished through relatively minor changes to the spectrometer hardware together with custom super-resolution algorithms. Current results indicate that a potential overall reduction in CCD size of up to a factor of 4 can be attained without a loss of resolution. This reduction can result in significant improvements in cost, size, and weight of spectrometers incorporating the technology.

  18. Memory Benchmarks for SMP-Based High Performance Parallel Computers

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, A B; de Supinski, B; Mueller, F; Mckee, S A

    2001-11-20

    As the speed gap between CPU and main memory continues to grow, memory accesses increasingly dominate the performance of many applications. The problem is particularly acute for symmetric multiprocessor (SMP) systems, where the shared memory may be accessed concurrently by a group of threads running on separate CPUs. Unfortunately, several key issues governing memory system performance in current systems are not well understood. Complex interactions between the levels of the memory hierarchy, buses or switches, DRAM back-ends, system software, and application access patterns can make it difficult to pinpoint bottlenecks and determine appropriate optimizations, and the situation is even more complex for SMP systems. To partially address this problem, we formulated a set of multi-threaded microbenchmarks for characterizing and measuring the performance of the underlying memory system in SMP-based high-performance computers. We report our use of these microbenchmarks on two important SMP-based machines. This paper has four primary contributions. First, we introduce a microbenchmark suite to systematically assess and compare the performance of different levels in SMP memory hierarchies. Second, we present a new tool based on hardware performance monitors to determine a wide array of memory system characteristics, such as cache sizes, quickly and easily; by using this tool, memory performance studies can be targeted to the full spectrum of performance regimes with many fewer data points than is otherwise required. Third, we present experimental results indicating that the performance of applications with large memory footprints remains largely constrained by memory. Fourth, we demonstrate that thread-level parallelism further degrades memory performance, even for the latest SMPs with hardware prefetching and switch-based memory interconnects.
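
    The microbenchmark suite itself is not reproduced here, but the flavour of such measurements, driving the same number of memory accesses at increasing strides to expose cache-line and hierarchy effects, can be sketched as follows. The array size and strides are arbitrary choices, and NumPy overhead blurs the picture relative to the C-level microbenchmarks such suites use.

        import time
        import numpy as np

        data = np.ones(1 << 24)              # 16M float64 values (~128 MB), larger than cache
        n_touch = 1 << 18                    # touch the same number of elements at every stride

        for stride in (1, 2, 4, 8, 16, 32, 64):
            view = data[: n_touch * stride : stride]
            t0 = time.perf_counter()
            total = view.sum()               # sequential reads at the given stride
            dt = time.perf_counter() - t0
            print(f"stride {stride:2d}: {dt * 1e6:9.1f} us for {view.size} reads")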

  19. After Installation: Ubiquitous Computing and High School Science in Three Experienced, High-Technology Schools

    Science.gov (United States)

    Drayton, Brian; Falk, Joni K.; Stroud, Rena; Hobbs, Kathryn; Hammerman, James

    2010-01-01

    There are few studies of the impact of ubiquitous computing on high school science, and the majority of studies of ubiquitous computing report only on the early stages of implementation. The present study presents data on 3 high schools with carefully elaborated ubiquitous computing systems that have gone through at least one "obsolescence cycle"…

  20. DEVELOPING THE INFORMATION SECURITY SYSTEM MODEL FOR A COMPUTER TRAINING COMPLEX

    Directory of Open Access Journals (Sweden)

    Viktoriia N. Kovalchuk

    2010-08-01

    Full Text Available This paper analyzes the regulatory documents regarding computer training rooms and information and communication technologies with respect to information security. A model of the information security system for a computer training complex is developed. In particular, the requirements for constructing the security system, its functioning, and the stages of its lifecycle are considered. An analysis of typical risks to the information resources is conducted, and the main methods for their protection are proposed.

  1. VBOT: Motivating computational and complex systems fluencies with constructionist virtual/physical robotics

    Science.gov (United States)

    Berland, Matthew W.

    As scientists use the tools of computational and complex systems theory to broaden science perspectives (e.g., Bar-Yam, 1997; Holland, 1995; Wolfram, 2002), so can middle-school students broaden their perspectives using appropriate tools. The goals of this dissertation project are to build, study, evaluate, and compare activities designed to foster both computational and complex systems fluencies through collaborative constructionist virtual and physical robotics. In these activities, each student builds an agent (e.g., a robot-bird) that must interact with fellow students' agents to generate a complex aggregate (e.g., a flock of robot-birds) in a participatory simulation environment (Wilensky & Stroup, 1999a). In a participatory simulation, students collaborate by acting in a common space, teaching each other, and discussing content with one another. As a result, the students improve both their computational fluency and their complex systems fluency, where fluency is defined as the ability to both consume and produce relevant content (DiSessa, 2000). To date, several systems have been designed to foster computational and complex systems fluencies through computer programming and collaborative play (e.g., Hancock, 2003; Wilensky & Stroup, 1999b); this study suggests that, by supporting the relevant fluencies through collaborative play, they become mutually reinforcing. In this work, I will present both the design of the VBOT virtual/physical constructionist robotics learning environment and a comparative study of student interaction with the virtual and physical environments across four middle-school classrooms, focusing on the contrast in systems perspectives differently afforded by the two environments. In particular, I found that while performance gains were similar overall, the physical environment supported agent perspectives on aggregate behavior, and the virtual environment supported aggregate perspectives on agent behavior. The primary research questions

  2. Calculation of Computational Complexity for Radix-2^p Fast Fourier Transform Algorithms for Medical Signals.

    Science.gov (United States)

    Amirfattahi, Rassoul

    2013-10-01

    Owing to its simplicity, radix-2 is a popular algorithm for implementing the fast Fourier transform. Radix-2^p algorithms have the same order of computational complexity as higher-radix algorithms, but still retain the simplicity of radix-2. By defining a new concept, the twiddle factor template, in this paper we propose a method for exact calculation of the multiplicative complexity of radix-2^p algorithms. The methodology is described for the radix-2, radix-2^2 and radix-2^3 algorithms. Results show that radix-2^2 and radix-2^3 have significantly lower computational complexity than radix-2. Another interesting result is that while the number of complex multiplications in the radix-2^3 algorithm is slightly higher than in radix-2^2, the number of real multiplications for radix-2^3 is lower. This is because certain twiddle factors, which occur more frequently in the radix-2^3 algorithm, require fewer real multiplications.
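
    For orientation, the baseline that the paper's twiddle factor template refines is the standard radix-2 count: a length-$N = 2^m$ FFT has $\log_2 N$ stages of $N/2$ butterflies, each using one complex twiddle multiplication, so at most

        $M_{\mathrm{radix-2}}(N) \le \frac{N}{2}\,\log_2 N$

    complex multiplications before the trivial twiddle factors ($\pm 1$, $\pm j$) are discounted; for $N = 1024$ the bound is 5120. The template counts how often each class of twiddle factor occurs, which is what makes the discounting exact for the radix-$2^p$ algorithms.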

  3. A Primer on High-Throughput Computing for Genomic Selection

    Directory of Open Access Journals (Sweden)

    Xiao-Lin Wu

    2011-02-01

    Full Text Available High-throughput computing (HTC) uses computer clusters to solve advanced computational problems, with the goal of accomplishing high throughput over relatively long periods of time. In genomic selection, for example, a set of markers covering the entire genome is used to train a model based on known data, and the resulting model is used to predict the genetic merit of selection candidates. Sophisticated models are very computationally demanding and, with several traits to be evaluated sequentially, computing time is long and output is low. In this paper, we present scenarios and basic principles of how HTC can be used in genomic selection, implemented using various techniques from simple batch processing to pipelining in distributed computer clusters. Various scripting languages, such as shell scripting, Perl and R, are also very useful for devising pipelines. By pipelining, we can reduce total computing time and consequently increase throughput. In comparison to the traditional data processing pipeline residing on central processors, performing general-purpose computation on a graphics processing unit (GPU) provides a new-generation approach to massively parallel computing in genomic selection. While the concept of HTC may still be new to many researchers in animal breeding, plant breeding, and genetics, HTC infrastructures have already been built in many institutions, such as the University of Wisconsin - Madison, which can be leveraged for genomic selection, in terms of central processing unit (CPU) capacity, network connectivity, storage availability, and middleware connectivity. Exploring existing HTC infrastructures as well as general purpose computing environments will further expand our capability to meet the increasing computing demands posed by the unprecedented genomic data that we have today. We anticipate that HTC will impact genomic selection via better statistical models, faster solutions, and more competitive products (e.g., from design of
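
    The batch-and-pipeline idea described above can be sketched with Python's standard library: each trait evaluation is an independent job and a process pool keeps the node saturated, trading single-job latency for throughput. The trait names and scoring function below are hypothetical placeholders for genomic-prediction model fits.

        from concurrent.futures import ProcessPoolExecutor, as_completed

        def evaluate_trait(trait):
            """Hypothetical stand-in for one genomic-prediction model fit."""
            total = sum(hash((trait, i)) % 97 for i in range(1_000_000))
            return trait, total % 1000

        traits = ["milk_yield", "fertility", "stature", "udder_depth"]

        if __name__ == "__main__":
            # Run trait evaluations concurrently rather than sequentially: the goal is
            # throughput over a long period, not the latency of any single job.
            with ProcessPoolExecutor() as pool:
                futures = [pool.submit(evaluate_trait, t) for t in traits]
                for fut in as_completed(futures):
                    trait, score = fut.result()
                    print(f"{trait}: merit score {score}")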

  4. Conceptual Considerations for Reducing the Computational Complexity in Software Defined Radio using Cooperative Wireless Networks

    DEFF Research Database (Denmark)

    Kristensen, Jesper Michael; Fitzek, Frank H. P.; Koch, Peter

    2005-01-01

    This paper motivates the application of software defined radio (SDR) as the enabling technology in the implementation of future wireless terminals for 4G. It outlines the advantages and disadvantages of SDR in terms of flexibility and reconfigurability versus computational complexity. To mitigate the expected increase in complexity, which leads to a decrease in energy efficiency, cooperative wireless networks are introduced. Cooperative wireless networks enable the concept of resource sharing. Resource sharing is interpreted as collaborative signal processing. This interpretation leads to the concept...

  5. On the Computational Complexity of L1-Approximation

    DEFF Research Database (Denmark)

    Oliva, Paulo Borges

    2002-01-01

    It is well known that for a given continuous function $f : [0,1] \to \mathbb{R}$ and a number $n$ there exists a unique polynomial $p_n \in P_n$ (polynomials of degree at most $n$) which best $L_1$-approximates $f$. We establish the first upper bound on the complexity of the sequence $(p_n)_n$, assuming $f$ is polynomial-time computable. Our complexity analysis makes essential use of the modulus of uniqueness for $L_1$-approximation presented in [13].

  6. Programs EMCUPL and SCHCOPL: computation of electromagnetic coupling on a layered halfspace with complex conductivities

    Science.gov (United States)

    Kauahikaua, James P.; Anderson, Walter L.

    1979-01-01

    A number of efficient numerical computer algorithms are incorporated into a general program called EMCUPL, which calculates the electromagnetic (EM) coupling between two straight wires on the surface of a multilayered half space. Each layer has an isotropic conductivity which may be either real or complex. A second computer program, called SCHCOPL, is described, which calculates the coupling for the special case of a Schlumberger or Wenner array, also on a multilayered half space. Comparison with other programs shows that EMCUPL is at least as accurate, more generally applicable, and computationally more efficient. FORTRAN listings of all subprograms and example calculations are given in the Appendix.

  7. CRITICAL ISSUES IN HIGH END COMPUTING - FINAL REPORT

    Energy Technology Data Exchange (ETDEWEB)

    Corones, James [Krell Institute

    2013-09-23

    High-End computing (HEC) has been a driver for advances in science and engineering for the past four decades. Increasingly, HEC has become a significant element in the national security, economic vitality, and competitiveness of the United States. Advances in HEC provide results that cut across traditional disciplinary and organizational boundaries. This program provides opportunities to share information about HEC systems and computational techniques across multiple disciplines and organizations through conferences and exhibitions of HEC advances held in Washington DC, so that mission agency staff, scientists, and industry can come together with White House, Congressional and Legislative staff in an environment conducive to the sharing of technical information, accomplishments, goals, and plans. A common thread across this series of conferences is the understanding of computational science and applied mathematics techniques across a diverse set of application areas of interest to the Nation. The specific objectives of this program are: Program Objective 1. To provide opportunities to share information about advances in high-end computing systems and computational techniques between mission critical agencies, agency laboratories, academics, and industry. Program Objective 2. To gather pertinent data, address specific topics of wide interest to mission critical agencies. Program Objective 3. To promote a continuing discussion of critical issues in high-end computing. Program Objective 4. To provide a venue where a multidisciplinary scientific audience can discuss the difficulties applying computational science techniques to specific problems and can specify future research that, if successful, will eliminate these problems.

  8. Nonlinear dynamics of high-power ultrashort laser pulses: exaflop computations on a laboratory computer station and subcycle light bullets

    Science.gov (United States)

    Voronin, A. A.; Zheltikov, A. M.

    2016-09-01

    The propagation of high-power ultrashort light pulses involves intricate nonlinear spatio-temporal dynamics where various spectral-temporal field transformation effects are strongly coupled to the beam dynamics, which, in turn, varies from the leading to the trailing edge of the pulse. Analysis of this nonlinear dynamics, accompanied by spatial instabilities, beam breakup into multiple filaments, and unique phenomena leading to the generation of extremely short optical field waveforms, is equivalent in its computational complexity to a simulation of the time evolution of a few billion-dimensional physical system. Such an analysis requires exaflops of computational operations and is usually performed on high-performance supercomputers. Here, we present methods of physical modeling and numerical analysis that allow problems of this class to be solved on a laboratory computer boosted by a cluster of graphic accelerators. Exaflop computations performed with the application of these methods reveal new unique phenomena in the spatio-temporal dynamics of high-power ultrashort laser pulses. We demonstrate that unprecedentedly short light bullets can be generated as a part of that dynamics, providing optical field localization in both space and time through a delicate balance between dispersion and nonlinearity with simultaneous suppression of diffraction-induced beam divergence due to the joint effect of Kerr and ionization nonlinearities.

  9. Computation for High Excited Stark Levels of hydrogen Atoms in Uniform Electric Fields

    Institute of Scientific and Technical Information of China (English)

    田人和

    2003-01-01

    We present a new method for the numerical calculation of exact complex eigenvalues of the Schrödinger equation for a hydrogen atom in a uniform electric field. This method allows a direct calculation of complex eigenvalues without using any auxiliary treatment, such as the Breit-Wigner parametrization or the complex scaling transformation. The characteristics of highly excited atoms in electric fields have attracted extensive experimental interest; however, existing theoretical calculations only reach n = 40. Here we present computational results ranging from n = 1 to 100. The data for n ≤ 40 are in agreement with the results of other researchers.

  10. Distances to galactic high-velocity clouds : Complex C

    NARCIS (Netherlands)

    Wakker, B. P.; York, D. G.; Howk, J. C.; Barentine, J. C.; Wilhelm, R.; Peletier, R. F.; van Woerden, H.; Beers, T. C.; Ivezic, Z.; Richter, P.; Schwarz, U. J.

    2007-01-01

    We report the first determination of a distance bracket for the high-velocity cloud (HVC) complex C. Combined with previous measurements showing that this cloud has a metallicity of 0.15 times solar, these results provide ample evidence that complex C traces the continuing accretion of intergalactic gas.

  12. Computationally efficient algorithm for high sampling-frequency operation of active noise control

    Science.gov (United States)

    Rout, Nirmal Kumar; Das, Debi Prasad; Panda, Ganapati

    2015-05-01

    In high sampling-frequency operation of an active noise control (ANC) system, the secondary path estimate and the ANC filter are very long. This increases the computational complexity of the conventional filtered-x least mean square (FXLMS) algorithm. To reduce the computational complexity of long-order ANC systems using the FXLMS algorithm, frequency-domain block ANC algorithms have been proposed in the past. These full-block frequency-domain ANC algorithms suffer from disadvantages such as large block delay, quantization error due to the computation of large-size transforms, and implementation difficulties on existing low-end DSP hardware. To overcome these shortcomings, a partitioned block ANC algorithm is newly proposed in which the long filters in ANC are divided into a number of equal partitions and suitably assembled to perform the FXLMS algorithm in the frequency domain. The complexity of this proposed frequency-domain partitioned block FXLMS (FPBFXLMS) algorithm is much lower than that of the conventional FXLMS algorithm. It is further reduced by merging one fast Fourier transform (FFT)-inverse fast Fourier transform (IFFT) combination to derive the reduced-structure FPBFXLMS (RFPBFXLMS) algorithm. Computational complexity analyses for different filter orders and partition sizes are presented. Systematic computer simulations are carried out for both proposed partitioned block ANC algorithms to show their accuracy compared to the time-domain FXLMS algorithm.
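
    For reference, a compact time-domain FXLMS loop is sketched below; its per-sample cost grows with the filter length L, which is exactly what motivates the frequency-domain partitioned-block variants proposed in the paper. The primary and secondary paths and the step size are placeholder values, and the sketch uses the common slow-adaptation approximation of applying the adaptive filter directly to the filtered reference.

        import numpy as np

        def fxlms(x, d, s_hat, L=128, mu=1e-3):
            """Baseline time-domain FXLMS; cost per sample is O(L + len(s_hat))."""
            w = np.zeros(L)                        # adaptive control filter
            xbuf = np.zeros(L)                     # recent reference samples
            fxbuf = np.zeros(L)                    # recent filtered-reference samples
            fx = np.convolve(x, s_hat)[: len(x)]   # reference through secondary-path estimate
            e = np.zeros(len(x))
            for n in range(len(x)):
                xbuf = np.roll(xbuf, 1); xbuf[0] = x[n]
                fxbuf = np.roll(fxbuf, 1); fxbuf[0] = fx[n]
                y = w @ xbuf                       # anti-noise sample (drives the loudspeaker)
                e[n] = d[n] - w @ fxbuf            # residual, via the filtered-x approximation
                w = w + mu * e[n] * fxbuf          # LMS update with the filtered reference
            return w, e

        rng = np.random.default_rng(0)
        x = rng.standard_normal(20_000)                            # reference noise
        s_hat = np.array([0.6, 0.3, 0.1])                          # placeholder secondary path
        d = np.convolve(x, np.array([0.8, -0.4, 0.2]))[: len(x)]   # noise at the error mic
        w, e = fxlms(x, d, s_hat)
        print("residual power:", np.mean(d[:2000] ** 2).round(3),
              "->", np.mean(e[-2000:] ** 2).round(5))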

  13. A review of High Performance Computing foundations for scientists

    CERN Document Server

    García-Risueño, Pablo; Ibáñez, Pablo E.

    2012-01-01

    The increase of existing computational capabilities has made simulation emerge as a third discipline of Science, lying midway between experimental and purely theoretical branches [1, 2]. Simulation enables the evaluation of quantities which otherwise would not be accessible, helps to improve experiments and provides new insights on systems which are analysed [3-6]. Knowing the fundamentals of computation can be very useful for scientists, for it can help them to improve the performance of their theoretical models and simulations. This review includes some technical essentials that can be useful to this end, and it is devised as a complement for researchers whose education is focused on scientific issues and not on technological respects. In this document we attempt to discuss the fundamentals of High Performance Computing (HPC) [7] in a way which is easy to understand without much previous background. We sketch the way standard computers and supercomputers work, as well as discuss distributed computing and di...

  14. 5th International Conference on High Performance Scientific Computing

    CERN Document Server

    Hoang, Xuan; Rannacher, Rolf; Schlöder, Johannes

    2014-01-01

    This proceedings volume gathers a selection of papers presented at the Fifth International Conference on High Performance Scientific Computing, which took place in Hanoi on March 5-9, 2012. The conference was organized by the Institute of Mathematics of the Vietnam Academy of Science and Technology (VAST), the Interdisciplinary Center for Scientific Computing (IWR) of Heidelberg University, Ho Chi Minh City University of Technology, and the Vietnam Institute for Advanced Study in Mathematics. The contributions cover the broad interdisciplinary spectrum of scientific computing and present recent advances in theory, development of methods, and practical applications. Subjects covered include mathematical modeling; numerical simulation; methods for optimization and control; parallel computing; software development; and applications of scientific computing in physics, mechanics and biomechanics, material science, hydrology, chemistry, biology, biotechnology, medicine, sports, psychology, transport, logistics, com...

  15. Computational titration analysis of a multiprotic HIV-1 protease-ligand complex.

    Science.gov (United States)

    Spyrakis, Francesca; Fornabaio, Micaela; Cozzini, Pietro; Mozzarelli, Andrea; Abraham, Donald J; Kellogg, Glen E

    2004-09-29

    A new computational method for analyzing the protonation states of protein-ligand complexes with multiple ionizable groups is applied to the structurally characterized complex between the peptide Glu-Asp-Leu and HIV-1 protease. This complex has eight ionizable groups at the active site: four from the ligand and four Asp residues on the protein. The calculated titration curve correlates with the experimental titration curve, with an error of ca. 0.6 kcal/mol. The analysis suggests that between four and five of the eight ionizable groups are protonated at the pH of crystallization.

  16. Mathematics of complexity in experimental high energy physics

    CERN Document Server

    Eggers, H C

    2004-01-01

    Mathematical ideas and approaches common in complexity-related fields have been fruitfully applied in experimental high energy physics also. We briefly review some of the cross-pollination that is occurring.

  17. Noise effects in the quantum search algorithm from the computational complexity point of view

    OpenAIRE

    Gawron, Piotr; Klamka, Jerzy; Winiarczyk, Ryszard

    2011-01-01

    We analyse the resilience of the quantum search algorithm in the presence of quantum noise modelled as trace-preserving completely positive maps. We study the influence of noise on the computational complexity of the quantum search algorithm. We show that only for small amounts of noise is the quantum search algorithm still more efficient than any classical algorithm.
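
    The trade-off studied here can be illustrated with a toy depolarizing model, which is an assumption for illustration rather than the paper's noise channel: if each Grover iteration survives uncorrupted with probability (1 - p), the success probability interpolates between the ideal sin^2 curve and the classical 1/N floor, and already modest p destroys the quantum advantage.

        import math

        def grover_success(N, p):
            """Toy model: ideal Grover success damped toward the uniform 1/N baseline."""
            theta = math.asin(1 / math.sqrt(N))
            k = round(math.pi / (4 * theta))         # near-optimal iteration count
            ideal = math.sin((2 * k + 1) * theta) ** 2
            survive = (1 - p) ** k                   # chance that no iteration was corrupted
            return survive * ideal + (1 - survive) / N, k

        N = 1 << 20
        for p in (0.0, 1e-4, 1e-3, 1e-2):
            prob, k = grover_success(N, p)
            print(f"noise {p:7.0e}: success {prob:.3f} after {k} iterations "
                  f"(one classical query: {1 / N:.1e})")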

  18. On computational complexity and Gallai-type theorems involving graph domination parameters

    DEFF Research Database (Denmark)

    Pedersen, Anders Sune

    2008-01-01

    In this paper we consider the problem of characterizing graphs with $\mu + \Delta = n$ or $\mu + \delta = n$, where $n$ denotes the number of vertices, $\delta$ the minimum degree, $\Delta$ the maximum degree, and $\mu$ a domination parameter. The computational complexity of the corresponding...

  19. Complexity and Intensionality in a Type-1 Framework for Computable Analysis

    DEFF Research Database (Denmark)

    Lambov, Branimir Zdravkov

    2005-01-01

    This paper describes a type-1 framework for computable analysis designed to facilitate efficient implementations and discusses properties that have not been well studied before for type-1 approaches: the introduction of complexity measures for type-1 representations of real functions, and ways...

  20. Greedy Algorithm Computing Minkowski Reduced Lattice Bases with Quadratic Bit Complexity of Input Vectors

    Institute of Scientific and Technical Information of China (English)

    Hao CHEN; Liqing XU

    2011-01-01

    The authors present a modification of the greedy reduction algorithm due to Nguyen and Stehlé (2009). The algorithm can be used to compute Minkowski reduced lattice bases for arbitrary-rank lattices with quadratic bit complexity in the size of the input vectors. The total bit complexity of the algorithm is $O(n^2 \cdot (4n!)^n \cdot (n!/2^n)^{n/2} \cdot (4/3)^{n(n-1)/2} \cdot \log^2 A)$, where $n$ is the rank of the lattice and $A$ is the maximal norm of the input basis vectors. This is an $O(\log^2 A)$ algorithm for lattices of fixed rank. A time complexity $n! \cdot 3^n (\log A)^{O(1)}$ algorithm for computing the successive minima with the help of the dual Hermite-Korkine-Zolotarev basis was given by Blömer in 2000 and improved to time complexity $n! \cdot (\log A)^{O(1)}$ by Micciancio in 2008. The algorithm in this paper is better suited to computing the Minkowski reduced bases of low-rank lattices with very large basis vectors.
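
    In rank 2, where Minkowski reduction coincides with classical Lagrange-Gauss reduction, the greedy idea underlying such algorithms fits in a few lines; this sketch is not the paper's arbitrary-rank algorithm, and it uses 64-bit integers rather than the big integers that very large input vectors would require.

        import numpy as np

        def lagrange_reduce(b1, b2):
            """Lagrange-Gauss reduction: Minkowski-reduced basis of a rank-2 lattice."""
            b1 = np.asarray(b1, dtype=np.int64)
            b2 = np.asarray(b2, dtype=np.int64)
            if b1 @ b1 > b2 @ b2:
                b1, b2 = b2, b1
            while True:
                mu = int(round(float(b1 @ b2) / float(b1 @ b1)))  # nearest-integer projection
                b2 = b2 - mu * b1                 # greedy size-reduction step
                if b2 @ b2 >= b1 @ b1:
                    return b1, b2                 # b1 is a shortest nonzero lattice vector
                b1, b2 = b2, b1                   # a shorter vector appeared: swap and repeat

        v1, v2 = lagrange_reduce([105, 2], [104, 3])
        print(v1, v2)                             # -> [ 1 -1] [54 53]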

  1. Numerical sensitivity computation for discontinuous gradient-only optimization problems using the complex-step method

    CSIR Research Space (South Africa)

    Wilke, DN

    2012-07-01

    Full Text Available The method is based on a Taylor series expansion using a pure imaginary step. The complex-step method is not subject to the subtraction errors that affect finite difference approaches when computing first-order sensitivities, and it therefore allows for much smaller step sizes...
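
    The trick referenced in this record is compact enough to state directly: for an analytic f, f(x + ih) = f(x) + ih f'(x) + O(h^2), so Im f(x + ih)/h approximates f'(x) without any subtraction. A minimal sketch, with an arbitrary test function:

        import numpy as np

        def complex_step(f, x, h=1e-30):
            """First-order sensitivity via a pure imaginary step; no subtraction occurs."""
            return np.imag(f(x + 1j * h)) / h

        def forward_diff(f, x, h):
            return (f(x + h) - f(x)) / h          # suffers cancellation as h shrinks

        f = lambda x: np.exp(x) * np.sin(x)       # arbitrary smooth test function
        exact = np.exp(1.0) * (np.sin(1.0) + np.cos(1.0))

        print("complex step:", complex_step(f, 1.0))         # accurate to machine precision
        print("forward diff:", forward_diff(f, 1.0, 1e-12))  # visibly corrupted by round-off
        print("exact       :", exact)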

  2. Environmental Factors Affecting Computer Assisted Language Learning Success: A Complex Dynamic Systems Conceptual Model

    Science.gov (United States)

    Marek, Michael W.; Wu, Wen-Chi Vivian

    2014-01-01

    This conceptual, interdisciplinary inquiry explores Complex Dynamic Systems as the concept relates to the internal and external environmental factors affecting computer assisted language learning (CALL). Based on the results obtained by de Rosnay ["World Futures: The Journal of General Evolution", 67(4/5), 304-315 (2011)], who observed…

  3. Reducing the Computational Complexity of Reconstruction in Compressed Sensing Nonuniform Sampling

    DEFF Research Database (Denmark)

    Grigoryan, Ruben; Jensen, Tobias Lindstrøm; Arildsen, Thomas

    2013-01-01

    This paper proposes a method that reduces the computational complexity of signal reconstruction in single-channel nonuniform sampling while acquiring frequency sparse multi-band signals. Generally, this compressed sensing based signal acquisition allows a decrease in the sampling rate of frequency...
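
    The reconstruction step whose cost is at issue is typically a sparse-recovery solve. A standard greedy baseline, orthogonal matching pursuit, is sketched below as a generic point of comparison; it is not the paper's reduced-complexity method, and the Gaussian measurement matrix stands in for the actual nonuniform-sampling operator.

        import numpy as np

        def omp(A, y, k):
            """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
            residual, support = y.copy(), []
            for _ in range(k):
                j = int(np.argmax(np.abs(A.T @ residual)))  # atom most correlated with residual
                support.append(j)
                coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ coef         # re-fit jointly on the support
            x = np.zeros(A.shape[1])
            x[support] = coef
            return x

        rng = np.random.default_rng(1)
        m, n, k = 60, 200, 4                          # 60 samples of a 200-bin sparse spectrum
        A = rng.standard_normal((m, n)) / np.sqrt(m)  # stand-in measurement matrix
        x_true = np.zeros(n)
        x_true[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)
        x_hat = omp(A, A @ x_true, k)
        print("max reconstruction error:", np.max(np.abs(x_hat - x_true)))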

  4. Factors Affecting Learning of Vector Math from Computer-Based Practice: Feedback Complexity and Prior Knowledge

    Science.gov (United States)

    Heckler, Andrew F.; Mikula, Brendon D.

    2016-01-01

    In experiments including over 450 university-level students, we studied the effectiveness and time efficiency of several levels of feedback complexity in simple, computer-based training utilizing static question sequences. The learning domain was simple vector math, an essential skill in introductory physics. In a unique full factorial design, we…

  5. Challenges in Integrating a Complex Systems Computer Simulation in Class: An Educational Design Research

    Science.gov (United States)

    Loke, Swee-Kin; Al-Sallami, Hesham S.; Wright, Daniel F. B.; McDonald, Jenny; Jadhav, Sheetal; Duffull, Stephen B.

    2012-01-01

    Complex systems are typically difficult for students to understand and computer simulations offer a promising way forward. However, integrating such simulations into conventional classes presents numerous challenges. Framed within an educational design research, we studied the use of an in-house built simulation of the coagulation network in four…

  6. COMPUTATIONAL COMPLEXITY IN WORST, STOCHASTIC AND AVERAGE CASE SETTING ON FUNCTIONAL APPROXIMATION PROBLEM OF MULTIVARIATE

    Institute of Scientific and Technical Information of China (English)

    Fang Gensun; Ye Peixin

    2005-01-01

    The order of computational complexity of the bounded linear functional approximation problem is determined for the generalized Sobolev class $W_p^\Lambda(I_d)$ and the Nikolskii class $H_\infty^k(I_d)$ in the worst-case (deterministic), stochastic, and average-case settings, from which conclusions are drawn comparing the problem's tractability in the stochastic and average-case settings.

  7. Magnetic, structural and computational studies on transition metal complexes of a neurotransmitter, histamine

    Science.gov (United States)

    Kaştaş, Gökhan; Paşaoğlu, Hümeyra; Karabulut, Bünyamin

    2011-08-01

    In this study, the transition metal complexes of histamine (His) prepared with oxalate (Ox), that is, [Cu(His)(Ox)(H2O)], [Zn(His)(Ox)(H2O)] (or [Zn(His)(Ox)]·(H2O)), [Cd(His)(Ox)(H2O)2] and [Co(His)(Ox)(H2O)], are investigated experimentally and computationally as part of ongoing studies on the mode of complexation, the tautomeric form and the non-covalent interactions of histamine in supramolecular structures. The structural properties of the prepared complexes are studied experimentally by the X-ray diffraction (XRD) technique and Fourier transform infrared (FT-IR) spectroscopy, and computationally by density functional theory (DFT). The magnetic properties of the complexes are investigated by the electron paramagnetic resonance (EPR) technique. The [Cu(His)(Ox)(H2O)] complex has a supramolecular structure built from two different non-covalent interactions, hydrogen bonds and C-H⋯π interactions. EPR studies on the [Cu(His)(Ox)(H2O)], Cu2+-doped [Zn(His)(Ox)(H2O)] and [Cd(His)(Ox)(H2O)2] complexes show that the paramagnetic centers have axially symmetric g values. It is also found that the ground state of the unpaired electrons in the complexes is dominantly a d orbital, over which the unpaired electrons spend most of their time.

  8. Computer Literacy and the Construct Validity of a High-Stakes Computer-Based Writing Assessment

    Science.gov (United States)

    Jin, Yan; Yan, Ming

    2017-01-01

    One major threat to validity in high-stakes testing is construct-irrelevant variance. In this study we explored whether the transition from a paper-and-pencil to a computer-based test mode in a high-stakes test in China, the College English Test, has brought about variance irrelevant to the construct being assessed in this test. Analyses of the…

  9. Radio Synthesis Imaging - A High Performance Computing and Communications Project

    Science.gov (United States)

    Crutcher, Richard M.

    The National Science Foundation has funded a five-year High Performance Computing and Communications project at the National Center for Supercomputing Applications (NCSA) for the direct implementation of several of the computing recommendations of the Astronomy and Astrophysics Survey Committee (the "Bahcall report"). This paper is a summary of the project goals and a progress report. The project will implement a prototype of the next generation of astronomical telescope systems - remotely located telescopes connected by high-speed networks to very high performance, scalable architecture computers and on-line data archives, which are accessed by astronomers over Gbit/sec networks. Specifically, a data link has been installed between the BIMA millimeter-wave synthesis array at Hat Creek, California and NCSA at Urbana, Illinois for real-time transmission of data to NCSA. Data are automatically archived, and may be browsed and retrieved by astronomers using the NCSA Mosaic software. In addition, an on-line digital library of processed images will be established. BIMA data will be processed on a very high performance distributed computing system, with I/O, user interface, and most of the software system running on the NCSA Convex C3880 supercomputer or Silicon Graphics Onyx workstations connected by HiPPI to the high performance, massively parallel Thinking Machines Corporation CM-5. The very computationally intensive algorithms for calibration and imaging of radio synthesis array observations will be optimized for the CM-5 and new algorithms which utilize the massively parallel architecture will be developed. Code running simultaneously on the distributed computers will communicate using the Data Transport Mechanism developed by NCSA. The project will also use the BLANCA Gbit/s testbed network between Urbana and Madison, Wisconsin to connect an Onyx workstation in the University of Wisconsin Astronomy Department to the NCSA CM-5, for development of long

  10. The Role of Computing in High-Energy Physics.

    Science.gov (United States)

    Metcalf, Michael

    1983-01-01

    Examines present and future applications of computers in high-energy physics. Areas considered include high-energy physics laboratories, accelerators, detectors, networking, off-line analysis, software guidelines, event sizes and volumes, graphics applications, event simulation, theoretical studies, and future trends. (JN)

  11. Average-Case Complexity Versus Approximate Simulation of Commuting Quantum Computations

    Science.gov (United States)

    Bremner, Michael J.; Montanaro, Ashley; Shepherd, Dan J.

    2016-08-01

    We use the class of commuting quantum computations known as IQP (instantaneous quantum polynomial time) to strengthen the conjecture that quantum computers are hard to simulate classically. We show that, if either of two plausible average-case hardness conjectures holds, then IQP computations are hard to simulate classically up to constant additive error. One conjecture relates to the hardness of estimating the complex-temperature partition function for random instances of the Ising model; the other concerns approximating the number of zeroes of random low-degree polynomials. We observe that both conjectures can be shown to be valid in the setting of worst-case complexity. We arrive at these conjectures by deriving spin-based generalizations of the boson sampling problem that avoid the so-called permanent anticoncentration conjecture.

  12. Unique optimal solution instance and computational complexity of backbone in the graph bi-partitioning problem

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    As an important tool for the heuristic design of NP-hard problems, backbone analysis has become a hot topic in theoretical computer science in recent years. Because of the difficulty of research on the computational complexity of the backbone, many researchers have analyzed the backbone by statistical means. Aiming to increase the backbone size, which is usually very small under existing methods, the unique optimal solution instance construction (UOSIC) is proposed for the graph bi-partitioning problem (GBP). Also, we prove using the UOSIC that it is NP-hard to obtain the backbone, i.e. no algorithm exists to obtain the backbone of a GBP in polynomial time under the assumption that P ≠ NP. Our work expands the research area of computational complexity of the backbone, and the UOSIC provides a new way for the heuristic design of NP-hard problems.

  13. Numerics of High Performance Computers and Benchmark Evaluation of Distributed Memory Computers

    Directory of Open Access Journals (Sweden)

    H. S. Krishna

    2004-07-01

    Full Text Available The internal representation of numerical data, the speed of their manipulation to generate the desired result, and the efficient utilisation of the central processing unit, memory, and communication links are essential aspects of all high-performance scientific computation. Machine parameters, in particular, reveal the accuracy and error bounds of computation required for performance tuning of codes. This paper reports the diagnosis of machine parameters, measurement of the computing power of several workstations and serial and parallel computers, and a component-wise test procedure for distributed memory computers. The hierarchical memory structure is illustrated by block copying and unrolling techniques. Locality of reference for cache reuse of data is amply demonstrated by fast Fourier transform codes. The cache and register-blocking technique results in their optimum utilisation, with a consequent gain in throughput during vector-matrix operations. Implementation of these memory management techniques reduces the cache inefficiency loss, which is known to be proportional to the number of processors. Of the Linux clusters ANUP16, HPC22 and HPC64, it has been found from the measurement of intrinsic parameters and from an application benchmark of a multi-block Euler code test run that ANUP16 is suitable for problems that exhibit fine-grained parallelism. The delivered performance of ANUP16 is of immense utility for developing high-end PC clusters like HPC64 and customised parallel computers, with the added advantages of speed and a high degree of parallelism.
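
    The cache and register-blocking technique mentioned above can be sketched in Python to show its structure (production benchmarks of this kind are written in C or Fortran, and in NumPy each tile multiply is delegated to BLAS, which already blocks internally). The block size is a per-machine tuning assumption.

        import numpy as np

        def blocked_matmul(A, B, bs=128):
            """Multiply in (bs x bs) tiles so each pair of tiles stays resident in cache."""
            n = A.shape[0]                       # square matrices assumed
            C = np.zeros((n, n))
            for i in range(0, n, bs):
                for j in range(0, n, bs):
                    for kk in range(0, n, bs):
                        C[i:i+bs, j:j+bs] += A[i:i+bs, kk:kk+bs] @ B[kk:kk+bs, j:j+bs]
            return C

        A = np.random.rand(512, 512)
        B = np.random.rand(512, 512)
        assert np.allclose(blocked_matmul(A, B), A @ B)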

  14. Remote control system for high-performance computer simulation of crystal growth by the PFC method

    Science.gov (United States)

    Pavlyuk, Evgeny; Starodumov, Ilya; Osipov, Sergei

    2017-04-01

    Modeling of the crystallization process by the phase-field crystal (PFC) method is one of the important directions of modern computational materials science. In this paper, the practical side of computer simulation of the crystallization process by the PFC method is investigated. To solve problems using this method, it is necessary to use high-performance computing clusters, data storage systems and other, often expensive, complex computer systems. Access to such resources is often limited, unstable and accompanied by various administrative problems. In addition, the variety of software and settings of different computing clusters sometimes does not allow researchers to use unified program code; there is a need to adapt the code to each configuration of the computing complex. The practical experience of the authors has shown that the creation of a special control system for computations, with the possibility of remote use, can greatly simplify simulations and increase the productivity of scientific research. In the current paper we show the principal idea of such a system and justify its efficiency.

  15. Proceedings from the conference on high speed computing: High speed computing and national security

    Energy Technology Data Exchange (ETDEWEB)

    Hirons, K.P.; Vigil, M.; Carlson, R. [comps.]

    1997-07-01

    This meeting covered the following topics: technologies/national needs/policies: past, present and future; information warfare; crisis management/massive data systems; risk assessment/vulnerabilities; Internet law/privacy and rights of society; challenges to effective ASCI programmatic use of 100 TFLOPs systems; and new computing technologies.

  16. An Approach to Experimental Design for the Computer Analysis of Complex Phenomenon

    Science.gov (United States)

    Rutherford, Brian

    2000-01-01

    The ability to make credible system assessments, predictions and design decisions related to engineered systems and other complex phenomena is key to a successful program for many large-scale investigations in government and industry. Recently, many of these large-scale analyses have turned to computational simulation to provide much of the required information. Addressing specific goals in the computer analysis of these complex phenomena is often accomplished through the use of performance measures that are based on system response models. The response models are constructed using computer-generated responses together with physical test results where possible. They are often based on probabilistically defined inputs and generally require estimation of a set of response modeling parameters. As a consequence, the performance measures are themselves distributed quantities reflecting these variabilities and uncertainties. Uncertainty in the values of the performance measures leads to uncertainties in predicted performance and can cloud the decisions required of the analysis. A specific goal of this research has been to develop methodology that will reduce this uncertainty in an analysis environment where limited resources and system complexity together restrict the number of simulations that can be performed. An approach has been developed that is based on evaluation of the potential information provided for each "intelligently selected" candidate set of computer runs. Each candidate is evaluated by partitioning the performance measure uncertainty into two components - one component that could be explained through the additional computational simulation runs and a second that would remain uncertain. The portion explained is estimated using a probabilistic evaluation of likely results for the additional computational analyses based on what is currently known about the system. The set of runs indicating the largest potential reduction in uncertainty is then selected.

  17. Parallel computing and first-principles calculations: Applications to complex ceramics and Vitamin B12

    Science.gov (United States)

    Ouyang, Lizhi

    A systematic improvement and extension of the orthogonalized linear combinations of atomic orbitals method was carried out using a combined computational and theoretical approach. For high-performance parallel computing, a Beowulf-class personal computer cluster was constructed. It also served as a parallel program development platform that helped us to port the programs of the method to the national supercomputer facilities. The program received a language upgrade from Fortran 77 to Fortran 90 and a dynamic memory allocation feature. A preliminary parallel High Performance Fortran version of the program has been developed as well, though scalability improvements are needed for it to be of more benefit. In order to circumvent the difficulties of the analytical force calculation in the method, we developed a geometry optimization scheme using the finite difference approximation based on the total energy calculation. The implementation of this scheme was facilitated by the powerful general utility lattice program, which offers many desired features such as multiple optimization schemes and use of space group symmetry. So far, many ceramic oxides have been tested with the geometry optimization program. Their optimized geometries were in excellent agreement with the experimental data. For nine ceramic oxide crystals, the optimized cell parameters differ from the experimental ones by less than 0.5%. Moreover, the geometry optimization was recently used to predict a new phase of TiNx. The method has also been used to investigate a complex Vitamin B12 derivative, the OHCbl crystals. In order to overcome the prohibitive disk I/O demand, an on-demand version of the method was developed. Based on the electronic structure calculation of the OHCbl crystal, a partial density of states analysis and a bond order analysis were carried out. The calculated bonding of the corrin ring of the OHCbl model was consistent with the large open-ring pi bond. One interesting finding of the calculation was

  18. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  19. Topic 14+16: High-performance and scientific applications and extreme-scale computing (Introduction)

    KAUST Repository

    Downes, Turlough P.

    2013-01-01

    As our understanding of the world around us increases, it becomes more challenging to make use of what we already know and to increase our understanding still further. Computational modeling and simulation have become critical tools in addressing this challenge. The requirements of high-resolution, accurate modeling have outstripped the ability of desktop computers and even small clusters to provide the necessary compute power. Many applications in the scientific and engineering domains now need very large amounts of compute time, while other applications, particularly in the life sciences, frequently have large data I/O requirements. There is thus a growing need for a range of high performance applications which can utilize parallel compute systems effectively, which have efficient data handling strategies, and which have the capacity to utilize current and future systems. The High Performance and Scientific Applications topic aims to highlight recent progress in the use of advanced computing and algorithms to address the varied, complex and increasing challenges of modern research throughout both the "hard" and "soft" sciences. This necessitates being able to use large numbers of compute nodes, many of which are equipped with accelerators, and to deal with difficult I/O requirements. © 2013 Springer-Verlag.

  20. Computer Security: SAHARA - Security As High As Reasonably Achievable

    CERN Multimedia

    Stefan Lueders, Computer Security Team

    2015-01-01

    History has shown us time and again that our computer systems, computing services and control systems have digital security deficiencies. Too often we deploy stop-gap solutions and improvised hacks, or we just accept that it is too late to change things.    In my opinion, this blatantly contradicts the professionalism we show in our daily work. Other priorities and time pressure force us to ignore security or to consider it too late to do anything… but we can do better. Just look at how “safety” is dealt with at CERN! “ALARA” (As Low As Reasonably Achievable) is the objective set by the CERN HSE group when considering our individual radiological exposure. Following this paradigm, and shifting it from CERN safety to CERN computer security, would give us “SAHARA”: “Security As High As Reasonably Achievable”. In other words, all possible computer security measures must be applied, so long as ...

  1. jGASW: A Service-Oriented Framework Supporting High Throughput Computing and Non-functional Concerns

    OpenAIRE

    Rojas Balderrama, Javier; Montagnat, Johan; Lingrand, Diane

    2016-01-01

    International audience; Although Service-Oriented principles have been widely adopted by High Throughput Computing infrastructure designers, the integration between SOA and HTC is made difficult by legacy. jGASW is a framework for wrapping legacy scientific applications as Web Services and integrating them into an intensive computing-aware SOA framework. It maps complex I/O data structures to command lines and enables dynamic allocation of computing resources, including execution on local h...

  2. Visualization and Data Analysis for High-Performance Computing

    Energy Technology Data Exchange (ETDEWEB)

    Sewell, Christopher Meyer [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-09-27

    This is a set of slides from a guest lecture on visualization and data analysis for high-performance computing, given for a class at the University of Texas at El Paso. The topics covered are the following: trends in high-performance computing; scientific visualization, including OpenGL, ray tracing and volume rendering, VTK, and ParaView; and data science at scale, including in-situ visualization, image databases, distributed memory parallelism, shared memory parallelism, VTK-m, and "big data"; followed by an analysis example.

  3. High performance computing and communications: FY 1997 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-12-01

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage, with bipartisan support, of the High-Performance Computing Act of 1991, signed on December 9, 1991. The original Program, in which eight Federal agencies participated, has now grown to twelve agencies. This Plan provides a detailed description of the agencies' FY 1996 HPCC accomplishments and FY 1997 HPCC plans. Section 3 of this Plan provides an overview of the HPCC Program. Section 4 contains more detailed definitions of the Program Component Areas, with an emphasis on the overall directions and milestones planned for each PCA. Appendix A provides a detailed look at HPCC Program activities within each agency.

  4. The Impact of High Speed Machining on Computing and Automation

    Institute of Scientific and Technical Information of China (English)

    KKB Hon; BT Hang Tuah Baharudin

    2006-01-01

    Machine tool technologies, especially Computer Numerical Control (CNC) High Speed Machining (HSM), have emerged as effective mechanisms for Rapid Tooling and Manufacturing applications. These new technologies are attractive for competitive manufacturing because of their technical advantages, i.e. a significant reduction in lead-time, high product accuracy, and good surface finish. However, HSM not only stimulates advancements in cutting tools and materials, it also demands increasingly sophisticated CAD/CAM software and powerful CNC controllers that require more support technologies. This paper explores the computational requirements and impact of HSM on CNC controllers, wear detection, look-ahead programming, simulation, and tool management.

  5. A Lanczos eigenvalue method on a parallel computer. [for large complex space structure free vibration analysis

    Science.gov (United States)

    Bostic, Susan W.; Fulton, Robert E.

    1987-01-01

    Eigenvalue analysis of complex structures is a computationally intensive task which can benefit significantly from new and impending parallel computers. This study reports on a parallel computer implementation of the Lanczos method for free vibration analysis. The approach used here subdivides the major Lanczos calculation tasks into subtasks and introduces parallelism down to the subtask level, in operations such as matrix decomposition and forward/backward substitution. The method was implemented on a commercial parallel computer and results were obtained for a long flexible space structure. While parallel computing efficiency is problem and computer dependent, the efficiency for the Lanczos method was good for a moderate number of processors on the test problem. The greatest reduction in time was realized for the decomposition of the stiffness matrix, a calculation which took 70 percent of the time in the sequential program and 25 percent of the time on eight processors. For a sample calculation of the twenty lowest frequencies of a 486-degree-of-freedom problem, the total sequential computing time was reduced by almost a factor of ten using 16 processors.
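
    A minimal serial sketch of the Lanczos tridiagonalization at the heart of such solvers (hypothetical NumPy code; the paper's parallel version distributes the stiffness-matrix decomposition and the forward/backward substitutions, and free-vibration analysis solves the generalized problem K x = λ M x rather than this simple form):

```python
import numpy as np

def lanczos(A, k, rng=np.random.default_rng(0)):
    """k-step Lanczos tridiagonalization of a symmetric matrix A; eigenvalues
    of the small tridiagonal matrix T approximate extremal eigenvalues of A."""
    n = A.shape[0]
    Q = np.zeros((n, k + 1))
    alpha, beta = np.zeros(k), np.zeros(k)
    q0 = rng.normal(size=n)
    Q[:, 0] = q0 / np.linalg.norm(q0)
    for j in range(k):
        w = A @ Q[:, j] - (beta[j - 1] * Q[:, j - 1] if j > 0 else 0.0)
        alpha[j] = Q[:, j] @ w
        w -= alpha[j] * Q[:, j]
        w -= Q[:, : j + 1] @ (Q[:, : j + 1].T @ w)  # full reorthogonalization
        beta[j] = np.linalg.norm(w)
        Q[:, j + 1] = w / beta[j]
    T = np.diag(alpha) + np.diag(beta[:-1], 1) + np.diag(beta[:-1], -1)
    return np.linalg.eigvalsh(T)  # Ritz values

A = np.diag(np.arange(1.0, 101.0))  # stand-in for a stiffness-like matrix
ritz_values = lanczos(A, 30)
```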

  6. A computational approach to achieve situational awareness from limited observations of a complex system

    Science.gov (United States)

    Sherwin, Jason

    human activities. Nevertheless, since it is not constrained by computational details, the study of situational awareness provides a unique opportunity to approach complex tasks of operation from an analytical perspective. In other words, with SA, we get to see how humans observe, recognize and react to complex systems on which they exert some control. Reconciling this perspective on complexity with complex systems research, it might be possible to further our understanding of complex phenomena if we can probe the anatomical mechanisms by which we, as humans, handle them naturally. At this unique intersection of two disciplines, a hybrid approach is needed, and in this work we propose just such an approach: a computational approach to the situational awareness (SA) of complex systems. In particular, we propose to implement certain aspects of situational awareness via a biologically-inspired machine-learning technique called Hierarchical Temporal Memory (HTM). In doing so, we will use either simulated or actual data to create and to test computational implementations of situational awareness. These will be tested in two example contexts, one more complex than the other. The ultimate goal of this research is to demonstrate a possible approach to analyzing and understanding complex systems. By using HTM and carefully developing techniques to analyze the SA formed from data, it is believed that this goal can be attained.

  7. Challenges of high dam construction to computational mechanics

    Institute of Scientific and Technical Information of China (English)

    ZHANG Chuhan

    2007-01-01

    The current situation and growing prospects of China's hydro-power development and high dam construction are reviewed, with emphasis on key issues for the safety evaluation of large dams and hydro-power plants, especially those associated with the application of state-of-the-art computational mechanics. These include, but are not limited to: stress and stability analysis of dam foundations under external loads; earthquake behavior of dam-foundation-reservoir systems; mechanical properties of mass concrete for dams; high-velocity flow and energy dissipation for high dams; scientific and technical problems of hydro-power plants and underground structures; and newly developed dam types, Roller Compacted Concrete (RCC) dams and Concrete Face Rock-fill (CFR) dams. Some examples demonstrating successful use of computational mechanics in high dam engineering are given, including seismic nonlinear analysis of arch dam foundations, nonlinear fracture analysis of arch dams under reservoir loads, and failure analysis of arch dam foundations. To make more use of computational mechanics in high dam engineering, much future research is necessary, including work on different computational methods, numerical models and solution schemes, and verification through experimental tests and field measurements.

  8. Computational modelling of the complex dynamics of chemically blown polyurethane foam

    Science.gov (United States)

    Ireka, I. E.; Niedziela, D.; Schäfer, K.; Tröltzsch, J.; Steiner, K.; Helbig, F.; Chinyoka, T.; Kroll, L.

    2015-11-01

    This study presents a computational analysis of the complex dynamics observed in chemically blown polyurethane foams during the reaction injection molding process. The mathematical formulation introduces an experimentally motivated non-divergence-free setup for the continuity equations, which reflects the self-expanding behaviour observed in the physical system. The foam growth phenomenon, which is normally initiated by adequate pre-mixing of the necessary reactant polymers and leads to an exothermic polymerization reaction, bubble nucleation, and gas formation, is captured numerically. We assume the dependence of material viscosity on the degree of cure/polymerization, gas volume fraction, and temperature, as well as non-dependence of the mixture density on pressure. The set of unsteady nonlinear coupled partial differential equations describing the dynamics of the system is solved numerically for the state variables using finite volume techniques, such that the front of the flow is tracked with high-resolution interface capturing schemes. Graphical representations of the foam volume fraction, the evolution of foam heights, and temperature distributions are presented. Results from our simulations are validated against experimental data and show good quantitative agreement with observations from experiments.
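
    As a hedged illustration of the kind of viscosity closure the abstract describes (an invented functional form for illustration, not the authors' model), one can combine an Arrhenius temperature factor, a Castro-Macosko-type divergence near the gel point of the cure, and a dilute-limit correction for the dispersed gas phase:

```python
import numpy as np

def foam_viscosity(T, cure, gas_frac, eta0=1.0, E_over_R=2500.0, T_ref=300.0,
                   cure_gel=0.85, a=1.5, b=1.0):
    """Illustrative closure eta(T, cure, gas fraction) for a reacting foam:
    Arrhenius in temperature, diverging as the degree of cure nears the gel
    point, and increasing with the volume fraction of dispersed gas."""
    arrhenius = np.exp(E_over_R * (1.0 / T - 1.0 / T_ref))
    curing = (cure_gel / np.maximum(cure_gel - cure, 1e-6)) ** (a + b * cure)
    gas = 1.0 + 2.5 * gas_frac  # Einstein/Taylor dilute-suspension correction
    return eta0 * arrhenius * curing * gas

eta = foam_viscosity(T=320.0, cure=0.4, gas_frac=0.3)
```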

  9. Complex Odontoma: A Case Report with Micro-Computed Tomography Findings

    Directory of Open Access Journals (Sweden)

    L. A. N. Santos

    2016-01-01

    Full Text Available Odontomas are the most common benign tumors of odontogenic origin. They are normally diagnosed on routine radiographs, due to the absence of symptoms. Histopathologic evaluation confirms the diagnosis, especially in cases of complex odontoma, which may be confused during radiographic examination with an osteoma or other highly calcified bone lesions. Micro-CT is a new technology that enables three-dimensional analysis with better spatial resolution than cone beam computed tomography. Another great advantage of this technology is that the sample needs no special preparation and is not destroyed by sectioning, as in histopathologic evaluation. A case of a complex odontoma in a 26-year-old man is presented with CBCT and microtomography images. It was first observed on panoramic radiographs and then by CBCT. The lesion and the impacted third molar were surgically excised using a modified Neumann approach. After removal, the specimen was evaluated by histopathology and microtomography to confirm the diagnostic hypothesis. According to the results, micro-CT enabled assessment of the sample similar to histopathology, without destruction of the sample. With further development, micro-CT could be a powerful diagnostic tool in future research.

  10. Complex Odontoma: A Case Report with Micro-Computed Tomography Findings

    Science.gov (United States)

    Santos, L. A. N.; Roque-Torres, G. D.; Oliveira, V. F.; Freitas, D. Q.

    2016-01-01

    Odontomas are the most common benign tumors of odontogenic origin. They are normally diagnosed on routine radiographs, due to the absence of symptoms. Histopathologic evaluation confirms the diagnosis, especially in cases of complex odontoma, which may be confused during radiographic examination with an osteoma or other highly calcified bone lesions. Micro-CT is a new technology that enables three-dimensional analysis with better spatial resolution than cone beam computed tomography. Another great advantage of this technology is that the sample needs no special preparation and is not destroyed by sectioning, as in histopathologic evaluation. A case of a complex odontoma in a 26-year-old man is presented with CBCT and microtomography images. It was first observed on panoramic radiographs and then by CBCT. The lesion and the impacted third molar were surgically excised using a modified Neumann approach. After removal, the specimen was evaluated by histopathology and microtomography to confirm the diagnostic hypothesis. According to the results, micro-CT enabled assessment of the sample similar to histopathology, without destruction of the sample. With further development, micro-CT could be a powerful diagnostic tool in future research. PMID:27279746

  11. Complex Odontoma: A Case Report with Micro-Computed Tomography Findings.

    Science.gov (United States)

    Santos, L A N; Lopes, L J; Roque-Torres, G D; Oliveira, V F; Freitas, D Q

    2016-01-01

    Odontomas are the most common benign tumors of odontogenic origin. They are normally diagnosed on routine radiographs, due to the absence of symptoms. Histopathologic evaluation confirms the diagnosis, especially in cases of complex odontoma, which may be confused during radiographic examination with an osteoma or other highly calcified bone lesions. Micro-CT is a new technology that enables three-dimensional analysis with better spatial resolution than cone beam computed tomography. Another great advantage of this technology is that the sample needs no special preparation and is not destroyed by sectioning, as in histopathologic evaluation. A case of a complex odontoma in a 26-year-old man is presented with CBCT and microtomography images. It was first observed on panoramic radiographs and then by CBCT. The lesion and the impacted third molar were surgically excised using a modified Neumann approach. After removal, the specimen was evaluated by histopathology and microtomography to confirm the diagnostic hypothesis. According to the results, micro-CT enabled assessment of the sample similar to histopathology, without destruction of the sample. With further development, micro-CT could be a powerful diagnostic tool in future research.

  12. Platinum Group Thiophenoxyimine Complexes: Syntheses, Crystallographic and Computational Studies of Structural Properties

    Energy Technology Data Exchange (ETDEWEB)

    Krinsky, Jamin L.; Arnold, John; Bergman, Robert G.

    2006-10-03

    Monomeric thiosalicylaldiminate complexes of rhodium(I) and iridium(I) were prepared by ligand transfer from the homoleptic zinc(II) species. In the presence of strongly donating ligands, the iridium complexes undergo insertion of the metal into the imine carbon-hydrogen bond. Thiophenoxyketimines were prepared by non-templated reaction of o-mercaptoacetophenone with anilines, and were complexed with rhodium(I), iridium(I), nickel(II) and platinum(II). X-ray crystallographic studies showed that while the thiosalicylaldiminate complexes display planar ligand conformations, those of the thiophenoxyketiminates are strongly distorted. Results of a computational study were consistent with a steric-strain interpretation of the difference in preferred ligand geometries.

  13. Large Scale Computing and Storage Requirements for High Energy Physics

    Energy Technology Data Exchange (ETDEWEB)

    Gerber, Richard A.; Wasserman, Harvey

    2010-11-24

    The National Energy Research Scientific Computing Center (NERSC) is the leading scientific computing facility for the Department of Energy's Office of Science, providing high-performance computing (HPC) resources to more than 3,000 researchers working on about 400 projects. NERSC provides large-scale computing resources and, crucially, the support and expertise needed for scientists to make effective use of them. In November 2009, NERSC, DOE's Office of Advanced Scientific Computing Research (ASCR), and DOE's Office of High Energy Physics (HEP) held a workshop to characterize the HPC resources needed at NERSC to support HEP research through the next three to five years. The effort is part of NERSC's legacy of anticipating users' needs and deploying resources to meet those demands. The workshop revealed several key points, in addition to achieving its goal of collecting and characterizing computing requirements. The chief findings: (1) Science teams need access to a significant increase in computational resources to meet their research goals; (2) Research teams need to be able to read, write, transfer, store online, archive, analyze, and share huge volumes of data; (3) Science teams need guidance and support to implement their codes on future architectures; and (4) Projects need predictable, rapid turnaround of their computational jobs to meet mission-critical time constraints. This report expands upon these key points and includes others. It also presents a number of case studies as representative of the research conducted within HEP. Workshop participants were asked to codify their requirements in this case study format, summarizing their science goals, methods of solution, current and three-to-five-year computing requirements, and software and support needs. Participants were also asked to describe their strategy for computing in the highly parallel, multi-core environment that is expected to dominate HPC architectures over the next few years.

  14. High performance computing and communications: FY 1996 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1995-05-16

    The High Performance Computing and Communications (HPCC) Program was formally authorized by passage of the High Performance Computing Act of 1991, signed on December 9, 1991. Twelve federal agencies, in collaboration with scientists and managers from US industry, universities, and research laboratories, have developed the Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1995 and FY 1996. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency.

  15. ABOUT THE SUITABILITY OF CLOUDS IN HIGH-PERFORMANCE COMPUTING

    Directory of Open Access Journals (Sweden)

    Harald Richter

    2016-01-01

    Full Text Available Cloud computing has become the ubiquitous computing and storage paradigm. It is also attractive for scientists, because they no longer have to maintain their own IT infrastructure, but can outsource it to a Cloud Service Provider of their choice. However, in the case of High-Performance Computing (HPC) in a cloud, as needed in simulations or for Big Data analysis, things become more intricate, because HPC codes must stay highly efficient even when executed by many virtual cores (vCPUs). Older clouds or new standard clouds can fulfil this only under special precautions, which are given in this article. The results can be extrapolated to cloud OSes other than OpenStack and to codes other than OpenFOAM, which were used as examples.

  16. High-performance computing, high-speed networks, and configurable computing environments: progress toward fully distributed computing.

    Science.gov (United States)

    Johnston, W E; Jacobson, V L; Loken, S C; Robertson, D W; Tierney, B L

    1992-01-01

    The next several years will see the maturing of a collection of technologies that will enable fully and transparently distributed computing environments. Networks will be used to configure independent computing, storage, and I/O elements into "virtual systems" that are optimal for solving a particular problem. This environment will make the most powerful computing systems those that are logically assembled from network-based components and will also make those systems available to a widespread audience. Anticipating that the necessary technology and communications infrastructure will be available in the next 3 to 5 years, we are developing and demonstrating prototype applications that test and exercise the currently available elements of this configurable environment. The Lawrence Berkeley Laboratory (LBL) Information and Computing Sciences and Research Medicine Divisions have collaborated with the Pittsburgh Supercomputer Center to demonstrate one distributed application that illuminates the issues and potential of using networks to configure virtual systems. This application allows the interactive visualization of large three-dimensional (3D) scalar fields (voxel data sets) by using a network-based configuration of heterogeneous supercomputers and workstations. The specific test case is visualization of 3D magnetic resonance imaging (MRI) data. The virtual system architecture consists of a Connection Machine-2 (CM-2) that performs surface reconstruction from the voxel data, a Cray Y-MP that renders the resulting geometric data into an image, and a workstation that provides the display of the image and the user interface for specifying the parameters for the geometry generation and 3D viewing. These three elements are configured into a virtual system by using several different network technologies. This paper reviews the current status of the software, hardware, and communications technologies that are needed to enable this configurable environment. These

  17. Highly Luminescent Lanthanide Complexes of 1-Hydroxy-2-pyridinones

    Energy Technology Data Exchange (ETDEWEB)

    University of California, Berkeley; Lawrence National Laboratory; Raymond, Kenneth; Moore, Evan G.; Xu, Jide; Jocher, Christoph J.; Castro-Rodriguez, Ingrid; Raymond, Kenneth N.

    2007-11-01

    The synthesis, X-ray structure, stability, and photophysical properties of several trivalent lanthanide complexes formed from two differing bis-bidentate ligands incorporating either alkyl or alkyl ether linkages and featuring the 1-hydroxy-2-pyridinone (1,2-HOPO) chelate group in complex with Eu(III), Sm(III) and Gd(III) are reported. The Eu(III) complexes are among some of the best examples, pairing highly efficient emission ({Phi}{sub tot}{sup Eu} {approx} 21.5%) with high stability (pEu {approx} 18.6) in aqueous solution, and are excellent candidates for use in biological assays. A comparison of the observed behavior of the complexes with differing backbone linkages shows remarkable similarities, both in stability and photophysical properties. Low temperature photophysical measurements for a Gd(III) complex were also used to gain insight into the electronic structure, and were found to agree with corresponding TD-DFT calculations for a model complex. A comparison of the high resolution Eu(III) emission spectra in solution and from single crystals also revealed a more symmetric coordination geometry about the metal ion in solution due to dynamic rotation of the observed solid state structure.

  18. Highly Luminescent Lanthanide Complexes of 1-Hydroxy-2-pyridinones

    Energy Technology Data Exchange (ETDEWEB)

    University of California, Berkeley; Lawrence National Laboratory; Raymond, Kenneth; Moore, Evan G.; Xu, Jide; Jocher, Christoph J.; Castro-Rodriguez, Ingrid; Raymond, Kenneth N.

    2007-11-01

    The synthesis, X-ray structure, stability, and photophysical properties of several trivalent lanthanide complexes formed from two differing bis-bidentate ligands incorporating either alkyl or alkyl ether linkages and featuring the 1-hydroxy-2-pyridinone (1,2-HOPO) chelate group in complex with Eu(III), Sm(III) and Gd(III) are reported. The Eu(III) complexes are among some of the best examples, pairing highly efficient emission ({Phi}{sub tot}{sup Eu} {approx} 21.5%) with high stability (pEu {approx} 18.6) in aqueous solution, and are excellent candidates for use in biological assays. A comparison of the observed behavior of the complexes with differing backbone linkages shows remarkable similarities, both in stability and photophysical properties. Low temperature photophysical measurements for a Gd(III) complex were also used to gain insight into the electronic structure, and were found to agree with corresponding TD-DFT calculations for a model complex. A comparison of the high resolution Eu(III) emission spectra in solution and from single crystals also revealed a more symmetric coordination geometry about the metal ion in solution due to dynamic rotation of the observed solid state structure.

  19. Heads in the Cloud: A Primer on Neuroimaging Applications of High Performance Computing.

    Science.gov (United States)

    Shatil, Anwar S; Younas, Sohail; Pourreza, Hossein; Figley, Chase R

    2015-01-01

    With larger data sets and more sophisticated analyses, it is becoming increasingly common for neuroimaging researchers to push (or exceed) the limitations of standalone computer workstations. Nonetheless, although high-performance computing platforms such as clusters, grids and clouds are already in routine use by a small handful of neuroimaging researchers to increase their storage and/or computational power, the adoption of such resources by the broader neuroimaging community remains relatively uncommon. Therefore, the goal of the current manuscript is to: 1) inform prospective users about the similarities and differences between computing clusters, grids and clouds; 2) highlight their main advantages; 3) discuss when it may (and may not) be advisable to use them; 4) review some of their potential problems and barriers to access; and finally 5) give a few practical suggestions for how interested new users can start analyzing their neuroimaging data using cloud resources. Although the aim of cloud computing is to hide most of the complexity of the infrastructure management from end-users, we recognize that this can still be an intimidating area for cognitive neuroscientists, psychologists, neurologists, radiologists, and other neuroimaging researchers lacking a strong computational background. Therefore, with this in mind, we have aimed to provide a basic introduction to cloud computing in general (including some of the basic terminology, computer architectures, infrastructure and service models, etc.), a practical overview of the benefits and drawbacks, and a specific focus on how cloud resources can be used for various neuroimaging applications.

  20. Heads in the Cloud: A Primer on Neuroimaging Applications of High Performance Computing

    Science.gov (United States)

    Shatil, Anwar S.; Younas, Sohail; Pourreza, Hossein; Figley, Chase R.

    2015-01-01

    With larger data sets and more sophisticated analyses, it is becoming increasingly common for neuroimaging researchers to push (or exceed) the limitations of standalone computer workstations. Nonetheless, although high-performance computing platforms such as clusters, grids and clouds are already in routine use by a small handful of neuroimaging researchers to increase their storage and/or computational power, the adoption of such resources by the broader neuroimaging community remains relatively uncommon. Therefore, the goal of the current manuscript is to: 1) inform prospective users about the similarities and differences between computing clusters, grids and clouds; 2) highlight their main advantages; 3) discuss when it may (and may not) be advisable to use them; 4) review some of their potential problems and barriers to access; and finally 5) give a few practical suggestions for how interested new users can start analyzing their neuroimaging data using cloud resources. Although the aim of cloud computing is to hide most of the complexity of the infrastructure management from end-users, we recognize that this can still be an intimidating area for cognitive neuroscientists, psychologists, neurologists, radiologists, and other neuroimaging researchers lacking a strong computational background. Therefore, with this in mind, we have aimed to provide a basic introduction to cloud computing in general (including some of the basic terminology, computer architectures, infrastructure and service models, etc.), a practical overview of the benefits and drawbacks, and a specific focus on how cloud resources can be used for various neuroimaging applications. PMID:27279746

  1. Development of a Computational Steering Framework for High Performance Computing Environments on Blue Gene/P Systems

    KAUST Repository

    Danani, Bob K.

    2012-07-01

    Computational steering has revolutionized the traditional workflow in high performance computing (HPC) applications. The standard workflow that consists of preparation of an application’s input, running of a simulation, and visualization of simulation results in a post-processing step is now transformed into a real-time interactive workflow that significantly reduces development and testing time. Computational steering provides the capability to direct or re-direct the progress of a simulation application at run-time. It allows modification of application-defined control parameters at run-time using various user-steering applications. In this project, we propose a computational steering framework for HPC environments that provides an innovative solution and easy-to-use platform, which allows users to connect and interact with running application(s) in real-time. This framework uses RealityGrid as the underlying steering library and adds several enhancements to the library to enable steering support for Blue Gene systems. Included in the scope of this project is the development of a scalable and efficient steering relay server that supports many-to-many connectivity between multiple steered applications and multiple steering clients. Steered applications can range from intermediate simulation and physical modeling applications to complex computational fluid dynamics (CFD) applications or advanced visualization applications. The Blue Gene supercomputer presents special challenges for remote access because the compute nodes reside on private networks. This thesis presents an implemented solution and demonstrates it on representative applications. Thorough implementation details and application enablement steps are also presented in this thesis to encourage direct usage of this framework.

  2. Computational study of developing high-quality decision trees

    Science.gov (United States)

    Fu, Zhiwei

    2002-03-01

    Recently, decision tree algorithms have been widely used in dealing with data mining problems to find out valuable rules and patterns. However, scalability, accuracy and efficiency are significant concerns regarding how to effectively deal with large and complex data sets in the implementation. In this paper, we propose an innovative machine learning approach (which we call GAIT), combining genetic algorithms, statistical sampling, and decision trees, to develop intelligent decision trees that can alleviate some of these problems. We design our computational experiments and run GAIT on three different data sets (namely Socio-Olympic data, Westinghouse data, and FAA data) to test its performance against the standard decision tree algorithm, a neural network classifier, and a statistical discriminant technique, respectively. The computational results show that our approach profoundly outperforms the standard decision tree algorithm at lower sampling levels, and achieves significantly better results with less effort than both the neural network and discriminant classifiers.
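
    A rough sketch of the GAIT idea as summarized above (not the authors' implementation; crossover between trees is omitted for brevity): evolve a population of decision trees, each fit on a random statistical sample, and keep the most accurate on a validation set. The use of scikit-learn, and all names and parameters here, are assumptions of the sketch.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

def random_tree():
    """One 'individual': a tree fit on a random subsample with a random depth."""
    idx = rng.choice(len(X_tr), size=len(X_tr) // 4, replace=False)
    tree = DecisionTreeClassifier(max_depth=int(rng.integers(2, 10)),
                                  random_state=int(rng.integers(1_000_000)))
    return tree.fit(X_tr[idx], y_tr[idx])

population = [random_tree() for _ in range(30)]
for generation in range(10):
    ranked = sorted(population, key=lambda t: t.score(X_val, y_val), reverse=True)
    population = ranked[:10] + [random_tree() for _ in range(20)]  # select + refresh

best = max(population, key=lambda t: t.score(X_val, y_val))
```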

  3. Computing pK(A) values of hexa-aqua transition metal complexes.

    Science.gov (United States)

    Galstyan, Gegham; Knapp, Ernst-Walter

    2015-01-15

    Aqueous pKa values for 15 hexa-aqua transition metal complexes were computed using a combination of quantum chemical and electrostatic methods. Two different structure models were considered, optimizing the isolated complexes either in vacuum or in the presence of explicit solvent using a QM/MM approach. Both yield very good agreement with experimentally measured pKa values, with an overall root mean square deviation of about 1 pH unit, excluding a single (but different) outlier for each of the two structure models. These outliers are hexa-aqua Cr(III) for the vacuum model and hexa-aqua Mn(III) for the QM/MM structure model. The reasons for the deviations of the outlier complexes are partially explained. Compared to previous approaches from the same lab, the precision of the method was systematically improved, as discussed in this study. The refined methods for obtaining appropriate geometries of the complexes, developed in this work, may also allow the computation of accurate pKa values for multicore transition metal complexes in different oxidation states.
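
    The textbook relation underlying such calculations (standard thermodynamics, not a formula quoted from this paper) converts the computed aqueous deprotonation free energy of the complex into a pKa:

```latex
\mathrm{p}K_a = \frac{\Delta G^{\circ}_{\mathrm{aq}}}{RT \ln 10},
\qquad
\Delta G^{\circ}_{\mathrm{aq}} = \Delta G^{\circ}_{\mathrm{gas}}
  + \Delta G^{\circ}_{\mathrm{solv}}(\text{products})
  - \Delta G^{\circ}_{\mathrm{solv}}(\text{reactants})
```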

  4. Simple boron removal from seawater by using polyols as complexing agents: A computational mechanistic study

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Min-Kyung; Eom, Ki Heon; Lim, Jun-Heok; Lee, Jea-Keun; Lee, Ju Dong; Won, Yong Sun [Pukyong National University, Busan (Korea, Republic of)

    2015-11-15

    The complexation of boric acid (B(OH){sub 3}), the primary form of aqueous boron at moderate pH, with polyols is proposed and mechanistically studied as an efficient way to improve membrane processes such as reverse osmosis (RO) for removing boron from seawater by increasing the size of aqueous boron compounds. Computational chemistry based on density functional theory (DFT) was used to map out the reaction pathways of the complexation of B(OH){sub 3} with various polyols such as glycerol, xylitol, and mannitol. The reaction energies were calculated as −80.6, −98.1, and −87.2 kcal/mol for glycerol, xylitol, and mannitol, respectively, indicating that xylitol is the most thermodynamically favorable for complexation with B(OH){sub 3}. Moreover, a 1 : 2 molar ratio of B(OH){sub 3} to polyol was found to be more favorable for complexation than a 1 : 1 ratio. Meanwhile, recent lab-scale RO experiments successfully supported our computational prediction that 2 moles of xylitol are the most effective complexing agent for 1 mole of B(OH){sub 3} in aqueous solution.

  5. Democratizing Computer Science Knowledge: Transforming the Face of Computer Science through Public High School Education

    Science.gov (United States)

    Ryoo, Jean J.; Margolis, Jane; Lee, Clifford H.; Sandoval, Cueponcaxochitl D. M.; Goode, Joanna

    2013-01-01

    Despite the fact that computer science (CS) is the driver of technological innovations across all disciplines and aspects of our lives, including participatory media, high school CS too commonly fails to incorporate the perspectives and concerns of low-income students of color. This article describes a partnership program -- Exploring Computer…

  6. Enabling High-Performance Computing as a Service

    KAUST Repository

    AbdelBaky, Moustafa

    2012-10-01

    With the right software infrastructure, clouds can provide scientists with as-a-service access to high-performance computing resources. An award-winning prototype framework transforms the Blue Gene/P system into an elastic cloud to run a representative HPC application. © 2012 IEEE.

  7. High Performance Computing tools for the Integrated Tokamak Modelling project

    Energy Technology Data Exchange (ETDEWEB)

    Guillerminet, B., E-mail: bernard.guillerminet@cea.f [Association Euratom-CEA sur la Fusion, IRFM, DSM, CEA Cadarache (France); Plasencia, I. Campos [Instituto de Fisica de Cantabria (IFCA), CSIC, Santander (Spain); Haefele, M. [Universite Louis Pasteur, Strasbourg (France); Iannone, F. [EURATOM/ENEA Fusion Association, Frascati (Italy); Jackson, A. [University of Edinburgh (EPCC) (United Kingdom); Manduchi, G. [EURATOM/ENEA Fusion Association, Padova (Italy); Plociennik, M. [Poznan Supercomputing and Networking Center (PSNC) (Poland); Sonnendrucker, E. [Universite Louis Pasteur, Strasbourg (France); Strand, P. [Chalmers University of Technology (Sweden); Owsiak, M. [Poznan Supercomputing and Networking Center (PSNC) (Poland)

    2010-07-15

    Fusion modelling and simulation are very challenging, and the High Performance Computing issues are addressed here. Toolsets for job launching and scheduling, data communication and visualization have been developed by the EUFORIA project and used with a plasma edge simulation code.

  8. Artificial Intelligence and the High School Computer Curriculum.

    Science.gov (United States)

    Dillon, Richard W.

    1993-01-01

    Describes a four-part curriculum that can serve as a model for incorporating artificial intelligence (AI) into the high school computer curriculum. The model includes examining questions fundamental to AI, creating and designing an expert system, language processing, and creating programs that integrate machine vision with robotics and…

  9. Seeking Solution: High-Performance Computing for Science. Background Paper.

    Science.gov (United States)

    Congress of the U.S., Washington, DC. Office of Technology Assessment.

    This is the second publication from the Office of Technology Assessment's assessment on information technology and research, which was requested by the House Committee on Science and Technology and the Senate Committee on Commerce, Science, and Transportation. The first background paper, "High Performance Computing & Networking for…

  10. Replica-Based High-Performance Tuple Space Computing

    DEFF Research Database (Denmark)

    Andric, Marina; De Nicola, Rocco; Lluch Lafuente, Alberto

    2015-01-01

    We present the tuple-based coordination language RepliKlaim, which enriches Klaim with primitives for replica-aware coordination. Our overall goal is to offer suitable solutions to the challenging problems of data distribution and locality in large-scale high performance computing. In particular,...

  11. Contemporary high performance computing from petascale toward exascale

    CERN Document Server

    Vetter, Jeffrey S

    2015-01-01

    A continuation of Contemporary High Performance Computing: From Petascale toward Exascale, this second volume continues the discussion of HPC flagship systems, major application workloads, facilities, and sponsors. The book includes figures and pictures that capture the state of existing systems: pictures of buildings, systems in production, floorplans, and many block diagrams and charts to illustrate system design and performance.

  12. High Performance Distributed Computing in a Supercomputer Environment: Computational Services and Applications Issues

    Science.gov (United States)

    Kramer, Williams T. C.; Simon, Horst D.

    1994-01-01

    This tutorial is intended as a practical guide for the uninitiated to the main topics and themes of high-performance computing (HPC), with particular emphasis on distributed computing. The intent is first to provide some guidance and direction in the rapidly growing field of scientific computing using both massively parallel and traditional supercomputers. Because of their considerable potential computational power, loosely or tightly coupled clusters of workstations are increasingly considered as a third alternative to both the more conventional supercomputers based on a small number of powerful vector processors and massively parallel processors. Even though many research issues concerning the effective use of workstation clusters and their integration into a large scale production facility are still unresolved, such clusters are already used for production computing. In this tutorial we will draw on the unique experience gained at the NAS facility at NASA Ames Research Center. Over the last five years at NAS, massively parallel supercomputers such as the Connection Machines CM-2 and CM-5 from Thinking Machines Corporation and the iPSC/860 (Touchstone Gamma Machine) and Paragon machines from Intel were used in a production supercomputer center alongside traditional vector supercomputers such as the Cray Y-MP and C90.

  13. Hot Chips and Hot Interconnects for High End Computing Systems

    Science.gov (United States)

    Saini, Subhash

    2005-01-01

    I will discuss several processors: 1. The Cray proprietary processor used in the Cray X1; 2. The IBM Power 3 and Power 4 used in an IBM SP 3 and IBM SP 4 systems; 3. The Intel Itanium and Xeon, used in the SGI Altix systems and clusters respectively; 4. IBM System-on-a-Chip used in IBM BlueGene/L; 5. HP Alpha EV68 processor used in DOE ASCI Q cluster; 6. SPARC64 V processor, which is used in the Fujitsu PRIMEPOWER HPC2500; 7. An NEC proprietary processor, which is used in NEC SX-6/7; 8. Power 4+ processor, which is used in Hitachi SR11000; 9. NEC proprietary processor, which is used in Earth Simulator. The IBM POWER5 and Red Storm Computing Systems will also be discussed. The architectures of these processors will first be presented, followed by interconnection networks and a description of high-end computer systems based on these processors and networks. The performance of various hardware/programming model combinations will then be compared, based on latest NAS Parallel Benchmark results (MPI, OpenMP/HPF and hybrid (MPI + OpenMP). The tutorial will conclude with a discussion of general trends in the field of high performance computing, (quantum computing, DNA computing, cellular engineering, and neural networks).

  14. Using High Performance Computing to Support Water Resource Planning

    Energy Technology Data Exchange (ETDEWEB)

    Groves, David G. [RAND Corporation, Santa Monica, CA (United States); Lembert, Robert J. [RAND Corporation, Santa Monica, CA (United States); May, Deborah W. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Leek, James R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Syme, James [RAND Corporation, Santa Monica, CA (United States)

    2015-10-22

    In recent years, decision support modeling has embraced deliberation-with-analysis, an iterative process in which decisionmakers come together with experts to evaluate a complex problem and alternative solutions in a scientifically rigorous and transparent manner. Simulation modeling supports decisionmaking throughout this process; visualizations enable decisionmakers to assess how proposed strategies stand up over time in uncertain conditions. But running these simulation models on standard computers can be slow. This, in turn, can slow the entire decisionmaking process, interrupting valuable interaction between decisionmakers and analytics.

  15. On the Computational Complexity of Degenerate Unit Distance Representations of Graphs

    Science.gov (United States)

    Horvat, Boris; Kratochvíl, Jan; Pisanski, Tomaž

    Some graphs admit drawings in the Euclidean k-space in such a (natural) way, that edges are represented as line segments of unit length. Such embeddings are called k-dimensional unit distance representations. The embedding is strict if the distances of points representing nonadjacent pairs of vertices are different than 1. When two non-adjacent vertices are drawn in the same point, we say that the representation is degenerate. Computational complexity of nondegenerate embeddings has been studied before. We initiate the study of the computational complexity of (possibly) degenerate embeddings. In particular we prove that for every k ≥ 2, deciding if an input graph has a (possibly) degenerate k-dimensional unit distance representation is NP-hard.

  16. High resolution computed tomography for peripheral facial nerve paralysis

    Energy Technology Data Exchange (ETDEWEB)

    Koester, O.; Straehler-Pohl, H.J.

    1987-01-01

    High resolution computed tomographic examinations of the petrous bones were performed on 19 patients with confirmed peripheral facial nerve paralysis. High resolution CT provides accurate information regarding the extent, and usually the type, of the pathological process; this can be accurately localised with a view to possible surgical treatment. The examination also differentiates these lesions from idiopathic paresis, which shows no radiological changes. Destruction of the petrous bone without facial nerve symptoms makes early, suitable treatment mandatory.

  17. CLiBE: a database of computed ligand binding energy for ligand-receptor complexes.

    Science.gov (United States)

    Chen, X; Ji, Z L; Zhi, D G; Chen, Y Z

    2002-11-01

    Consideration of the binding competitiveness of a drug candidate against natural ligands and other drugs that bind to the same receptor site may facilitate the rational development of a candidate into a potent drug. A strategy that can be applied in computer-aided drug design is to evaluate the ligand-receptor interaction energy or other scoring functions of a designed drug against those of the relevant ligands known to bind to the same binding site. As a tool to facilitate such a strategy, a database of ligand-receptor interaction energies was developed from known ligand-receptor 3D structural entries in the Protein Data Bank (PDB). The energy is computed based on a molecular mechanics force field that has been used in the prediction of therapeutic and toxicity targets of drugs. This database also contains information about ligand function and other properties, and it can be accessed at http://xin.cz3.nus.edu.sg/group/CLiBE.asp. The computed energy components may facilitate the probing of the mode of action and other binding profiles. A number of computed energies of PDB ligand-receptor complexes in this database were studied and compared to experimental binding affinities. A certain degree of correlation between the computed energy and experimental binding affinity was found, which suggests that the computed energy may be useful in facilitating a qualitative analysis of drug binding competitiveness.
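
    A generic non-bonded molecular-mechanics interaction energy of the kind used in such scoring (the standard Lennard-Jones-plus-Coulomb form; the exact force field used by CLiBE is not reproduced here) is:

```latex
E_{\mathrm{inter}} = \sum_{i \in \mathrm{ligand}} \; \sum_{j \in \mathrm{receptor}}
\left[ \frac{A_{ij}}{r_{ij}^{12}} - \frac{B_{ij}}{r_{ij}^{6}}
      + \frac{q_i q_j}{4\pi\varepsilon_0 \varepsilon_r r_{ij}} \right]
```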

  18. High-performance computing and networking as tools for accurate emission computed tomography reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Passeri, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Formiconi, A.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); De Cristofaro, M.T.E.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Pupi, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Meldolesi, U. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)

    1997-04-01

    It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms which are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction, taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported onto the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64 x 64) slices could be reconstructed from a set of 90 (64 x 64) projections with ten iterations of the conjugate gradients algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols, without particular problems for routine use. The achievable speed-up factors allow the assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden for the compensation of effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods. (orig.). With 4 figs., 1 tab.
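
    For orientation, iterative reconstruction of this kind solves a large linear system A x ≈ b, where A models the acquisition (including the variable system response) and b holds the measured projections; a minimal conjugate-gradient sketch on the normal equations (hypothetical code, not the authors' implementation):

```python
import numpy as np

def cg_normal_equations(A, b, n_iter=10):
    """Conjugate gradients applied to A^T A x = A^T b, a standard workhorse
    of iterative emission-tomography reconstruction."""
    x = np.zeros(A.shape[1])
    r = A.T @ b            # residual of the normal equations at x = 0
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = A.T @ (A @ p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(0)
A = rng.random((90, 64))          # toy system matrix: 90 projection bins, 64 pixels
x_true = rng.random(64)
recon = cg_normal_equations(A, A @ x_true)
```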

  19. A comparison of computational methods and algorithms for the complex gamma function

    Science.gov (United States)

    Ng, E. W.

    1974-01-01

    A survey and comparison of some computational methods and algorithms for gamma and log-gamma functions of complex arguments are presented. Methods and algorithms reported include Chebyshev approximations, Padé expansions and Stirling's asymptotic series. The comparison leads to the conclusion that Algorithm 421, published in the Communications of the ACM by H. Kuki, is the best program either for individual applications or for inclusion in subroutine libraries.
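
    The Stirling-series approach compared in the survey can be sketched as follows (a textbook recurrence-plus-asymptotic-series scheme, not Kuki's Algorithm 421; the coefficients are the leading Bernoulli-number terms B_{2n}/(2n(2n-1))):

```python
import cmath

_STIRLING = [1 / 12, -1 / 360, 1 / 1260, -1 / 1680]  # B_{2n} / (2n (2n - 1))

def log_gamma(z, shift=10.0):
    """log Gamma(z) for complex z with Re(z) > 0: shift the argument up by the
    recurrence Gamma(z) = Gamma(z + 1) / z, then apply Stirling's series."""
    correction = 0.0
    while z.real < shift:
        correction += cmath.log(z)
        z += 1
    s = (z - 0.5) * cmath.log(z) - z + 0.5 * cmath.log(2 * cmath.pi)
    for n, c in enumerate(_STIRLING, start=1):
        s += c / z ** (2 * n - 1)
    return s - correction

print(cmath.exp(log_gamma(0.5 + 0j)))  # ~ sqrt(pi) = 1.77245...
```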

  20. Computational Complexity Comparison Of Multi-Sensor Single Target Data Fusion Methods By Matlab

    OpenAIRE

    Hoseini, Sayed Amir; Ashraf, Mohammad Reza

    2013-01-01

    Target tracking using observations from multiple sensors can achieve better estimation performance than using a single sensor. The most famous estimation tool in target tracking is the Kalman filter. There are several mathematical approaches to combining the observations of multiple sensors by use of the Kalman filter. An important issue in selecting a proper approach is computational complexity. In this paper, four data fusion algorithms based on the Kalman filter are considered, including three centralized and o...
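
    For context, a minimal centralized measurement-fusion update for two sensors observing the same scalar state (a generic textbook construction, not one of the four algorithms compared in the paper):

```python
def kalman_fuse(x, P, zs, Rs, H=1.0, F=1.0, Q=0.01):
    """One predict step followed by sequential updates with each sensor's
    measurement z (noise variance R): centralized measurement fusion."""
    x, P = F * x, F * P * F + Q                # predict
    for z, R in zip(zs, Rs):                   # fold in each sensor in turn
        K = P * H / (H * P * H + R)            # Kalman gain
        x = x + K * (z - H * x)                # state update
        P = (1.0 - K * H) * P                  # covariance update
    return x, P

x, P = 0.0, 1.0
x, P = kalman_fuse(x, P, zs=[1.02, 0.97], Rs=[0.1, 0.2])
```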

  1. Affect in Complex Decision-Making Systems: From Psychology to Computer Science Perspectives

    OpenAIRE

    Chohra, Amine; Chohra, Aziza; Madani, Kurosh

    2014-01-01

    Part 2: MHDW Workshop; International audience; The increasing progress in both psychology and computer science makes it possible to deal with ever more complex systems, closer to real-world applications, and in particular to address decision-making under uncertain and incomplete information. The aim of this research work is to highlight the role of irrationality (affect) and to understand some of the different ways in which irrationality enters into decision-making, from psycholog...

  2. Theoretical and Computational Studies of Stability, Transition and Flow Control in High-Speed Flows

    Science.gov (United States)

    2011-02-22

    ...identified reveals the pervasive importance of several basic fluid dynamic phenomena. One of these, and possibly the least understood, is that of high... The progress being made in computational fluid dynamics provides an opportunity for reliable simulations of such complex phenomena as laminar

  3. The computational complexity of symbolic dynamics at the edge of order and chaos

    CERN Document Server

    Lakdawala, P

    1995-01-01

    In a variety of studies of dynamical systems, the edge of order and chaos has been singled out as a region of complexity. It was suggested by Wolfram, on the basis of qualitative behaviour of cellular automata, that the computational basis for modelling this region is the Universal Turing Machine. In this paper, following a suggestion of Crutchfield, we try to show that the Turing machine model may often be too powerful as a computational model to describe the boundary of order and chaos. In particular we study the region of the first accumulation of period doubling in unimodal and bimodal maps of the interval, from the point of view of language theory. We show that in relation to the "extended" Chomsky hierarchy, the relevant computational model in the unimodal case is the nested stack automaton or the related indexed languages, while the bimodal case is modeled by the linear bounded automaton or the related context-sensitive languages.

  4. Complexation of DNA with ruthenium organometallic compounds: the high complexation ratio limit.

    Science.gov (United States)

    Despax, Stéphane; Jia, Fuchao; Pfeffer, Michel; Hébraud, Pascal

    2014-06-14

    Interactions between DNA and ruthenium organometallic compounds are studied by using visible light absorption and circular dichroism measurements. A titration technique allowing for the absolute determination of the degree of advancement of the complexation, without any assumption about the number of complexation modes, is developed. When DNA is in excess, complexation involves intercalation of one of the organometallic compound's ligands between DNA base pairs. But in the high complexation ratio limit, where organometallic compounds are in excess relative to the DNA base pairs, a new mode of interaction is observed, in which the organometallic compound interacts weakly with DNA. The weak interaction mode, moreover, develops when all the DNA intercalation sites are occupied. A regime is reached in which one DNA base pair is linked to more than one organometallic compound.

  5. Adaptive MIMO-OFDM Scheme with Reduced Computational Complexity and Improved Capacity

    Directory of Open Access Journals (Sweden)

    L. C. Siddanna Gowd

    2011-03-01

    Full Text Available The general multidimensional linear channel model adequately represents a plethora of communication system models which utilize multidimensional transmit-receive signals for attaining increased rates and reliability in the presence of fading. The logarithmic dependence of the spectral efficiency on the transmitted power makes it extremely expensive to increase capacity solely by radiating more power. Also, increasing the transmitted power in a mobile terminal is not advisable due to possible violation of regulatory power masks and possible electromagnetic radiation effects. Alternatively, MIMO schemes, if properly exploited, can exhibit a linearly increasing capacity, owing to a rich scattering environment that provides independent transmission paths from each transmit to each receive antenna. An idealized practical communication system assumes perfect channel state information (CSI) and uses a linear transmitter to maximize the reliability of the wireless multi-antenna link. However, in actual practice the CSI is incomplete. As a result, there is a need to deal with ergodic and compound capacity formulations, and these are strongly dependent on the model used to characterize the channel. Practical system models include quasi-static multiple-input multiple-output (MIMO), MIMO-OFDM, ISI, amplify-and-forward (AF), decode-and-forward (DF), and MIMO automatic repeat request (ARQ) models. Each of the above models introduces its own structure, its own error performance limits, and its own requirements on coding and decoding schemes. Finding general purpose transceiver structures with (provably) good performance in these scenarios, and with a reasonable computational complexity, is challenging. Existing MIMO systems are able to provide either high spectral efficiency (spatial multiplexing) or low error rate (high diversity) via exploiting the multiple degrees of freedom available in the channel, but not both simultaneously as
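
    The linearly increasing capacity mentioned above follows from the classical MIMO capacity expression C = log2 det(I + (SNR/Nt) H H^H); the quick numerical check below (a generic illustration, not code from the paper) shows capacity growing roughly linearly with the number of antennas at fixed SNR:

```python
import numpy as np

rng = np.random.default_rng(0)

def ergodic_capacity(n_tx, n_rx, snr, trials=2000):
    """Average capacity (bits/s/Hz) of an i.i.d. Rayleigh MIMO channel with
    equal power allocation and no channel knowledge at the transmitter."""
    total = 0.0
    for _ in range(trials):
        H = (rng.normal(size=(n_rx, n_tx))
             + 1j * rng.normal(size=(n_rx, n_tx))) / np.sqrt(2.0)
        M = np.eye(n_rx) + (snr / n_tx) * (H @ H.conj().T)
        total += np.log2(np.linalg.det(M).real)
    return total / trials

for n in (1, 2, 4, 8):  # capacity scales roughly with min(n_tx, n_rx)
    print(n, round(ergodic_capacity(n, n, snr=10.0), 2))
```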

  6. Overview of Parallel Platforms for Common High Performance Computing

    Directory of Open Access Journals (Sweden)

    T. Fryza

    2012-04-01

    Full Text Available The paper deals with various parallel platforms used for high performance computing in the signal processing domain. More precisely, methods exploiting multicore central processing units, such as the message passing interface and OpenMP, are taken into account. The properties of the programming methods are experimentally verified on a fast Fourier transform and a discrete cosine transform, and they are compared with the possibilities of MATLAB's built-in functions and of Texas Instruments digital signal processors with very long instruction word architectures. New FFT and DCT implementations were proposed and tested. The implementations were compared with CPU-based computing methods and with the possibilities of the Texas Instruments digital signal processing library on C6747 floating-point DSPs. The optimal combination of computing methods in the signal processing domain, together with the implementation of new, fast routines, is proposed as well.
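
    In the same spirit (shared-memory workers versus a serial loop), the toy benchmark below times a batch of FFTs serially and with a small process pool; it is a schematic stand-in for the paper's MPI/OpenMP experiments, with arbitrary block size and worker count.

        import time
        import numpy as np
        from multiprocessing import Pool

        def fft_block(seed):
            """One unit of work: FFT of a pseudo-random block."""
            x = np.random.default_rng(seed).standard_normal(1 << 18)
            return float(np.abs(np.fft.fft(x)).max())

        if __name__ == "__main__":
            seeds = list(range(32))

            t0 = time.perf_counter()
            serial = [fft_block(s) for s in seeds]
            t1 = time.perf_counter()

            with Pool(4) as pool:            # 4 workers, analogous to 4 cores
                parallel = pool.map(fft_block, seeds)
            t2 = time.perf_counter()

            assert serial == parallel        # same work, same results
            print(f"serial:   {t1 - t0:.2f} s")
            print(f"parallel: {t2 - t1:.2f} s")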

  7. School of Analytic Computing in Theoretical High-Energy Physics

    CERN Document Server

    2013-01-01

    In recent years, huge progress has been made on computing rates for production processes of direct relevance to experiments at the Large Hadron Collider (LHC). Crucial to that remarkable advance has been our understanding of, and ability to compute, scattering amplitudes. The aim of the School is to bring together young theorists working on the phenomenology of LHC physics with those working in more formal areas, and to provide them with the analytic tools to compute amplitudes in gauge theories. The school is addressed to Ph.D. students and post-docs in Theoretical High-Energy Physics. 30 hours of lectures will be delivered over the 5 days of the School. A Poster Session will be held, at which students are welcome to present their research topics.

  8. School of Analytic Computing in Theoretical High-Energy Physics

    CERN Document Server

    2015-01-01

    In recent years, huge progress has been made on computing rates for production processes of direct relevance to experiments at the Large Hadron Collider (LHC). Crucial to that remarkable advance has been our understanding of, and ability to compute, scattering amplitudes and cross sections. The aim of the School is to bring together young theorists working on the phenomenology of LHC physics with those working in more formal areas, and to provide them with the analytic tools to compute amplitudes in gauge theories. The school is addressed to Ph.D. students and post-docs in Theoretical High-Energy Physics. 30 hours of lectures and 4 hours of tutorials will be delivered over the 6 days of the School.

  9. Parallel computation of seismic analysis of high arch dam

    Institute of Scientific and Technical Information of China (English)

    Chen Houqun; Ma Huaifa; Tu Jin; Cheng Guangqing; Tang Juzhen

    2008-01-01

    Parallel computation programs are developed for three-dimensional meso-mechanics analysis of fully-graded dam concrete and for seismic response analysis of high arch dams (ADs), based on the Parallel Finite Element Program Generator (PFEPG). The computational algorithms for numerical simulation of the meso-structure of concrete specimens were studied. Taking into account damage evolution, static preload, strain rate effect, and the heterogeneity of the meso-structure of dam concrete, the fracture processes of damage evolution and the configuration of the cracks can be simulated directly. The seismic response analysis of ADs involves all of the following factors: the nonlinear contact due to the opening and slipping of the contraction joints, energy dispersion of the far-field foundation, dynamic interactions of the dam-foundation-reservoir system, and the combination of seismic action with all static loads. The correctness, reliability and efficiency of the two parallel computational programs are verified with practical illustrations.

  10. Computing trends using graphic processor in high energy physics

    CERN Document Server

    Niculescu, Mihai

    2011-01-01

    One of the main challenges in High Energy Physics is to make fast analyses of large amounts of experimental and simulated data. At LHC-CERN one p-p event is approximately 1 MB in size. The time taken to analyze the data and obtain fast results depends on high computational power. The main advantage of using GPU (Graphics Processing Unit) programming over traditional CPU programming is that graphics cards bring a lot of computing power at a very low price. Today a huge number of applications (scientific, financial, etc.) have begun to be ported to or developed for GPUs, including Monte Carlo tools and data analysis tools for High Energy Physics. In this paper, we present the current status and trends in HEP computing using GPUs.

  11. Computing n-dimensional volumes of complexes: Application to constructive entropy bounds

    Energy Technology Data Exchange (ETDEWEB)

    Beiu, V.; Makaruk, H.E.

    1997-11-01

    The constructive bounds on the needed number-of-bits (entropy) for solving a dichotomy (i.e., classification of a given data-set into two distinct classes) can be represented by the quotient of two multidimensional solid volumes. Exact methods for the calculation of the volume of the solids lead to a tighter lower bound on the needed number-of-bits than the ones previously known. Establishing such bounds is very important for engineering applications, as they can improve certain constructive neural learning algorithms, while also reducing the area of future VLSI implementations of neural networks. The paper will present an effective method for the exact calculation of the volume of any n-dimensional complex. The method uses a divide-and-conquer approach by: (i) partitioning (i.e., slicing) a complex into simplices; and (ii) computing the volumes of these simplices. The slicing of any complex into a sum of simplices always exists, but it is not unique. This non-uniqueness gives us the freedom to choose that specific partitioning which is convenient for a particular case. It will be shown that this optimal choice is related to the symmetries of the complex, and can significantly reduce the computations involved.
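
    Step (ii) reduces to a standard determinant formula: the volume of an n-simplex with vertices v0, ..., vn is |det[v1-v0, ..., vn-v0]| / n!. The sketch below (a minimal illustration, not the authors' optimized method) applies it to a unit cube sliced into six tetrahedra by the Kuhn triangulation.

        import math
        from itertools import permutations
        import numpy as np

        def simplex_volume(vertices):
            """Volume of an n-simplex from its n+1 vertices (rows): |det| / n!."""
            v = np.asarray(vertices, dtype=float)
            edges = v[1:] - v[0]          # n edge vectors spanning the simplex
            return abs(np.linalg.det(edges)) / math.factorial(edges.shape[0])

        def complex_volume(simplices):
            """Total volume of a complex already sliced into simplices."""
            return sum(simplex_volume(s) for s in simplices)

        if __name__ == "__main__":
            # Kuhn triangulation of the unit cube: one tetrahedron per
            # permutation of the axes, each of volume 1/6.
            e = np.eye(3)
            tets = []
            for perm in permutations(range(3)):
                pts, acc = [np.zeros(3)], np.zeros(3)
                for i in perm:
                    acc = acc + e[i]
                    pts.append(acc.copy())
                tets.append(pts)
            print(complex_volume(tets))   # ~1.0, the volume of the cube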

  12. The design of linear algebra libraries for high performance computers

    Energy Technology Data Exchange (ETDEWEB)

    Dongarra, J.J. [Tennessee Univ., Knoxville, TN (United States). Dept. of Computer Science; Oak Ridge National Lab., TN (United States)]; Walker, D.W. [Oak Ridge National Lab., TN (United States)]

    1993-08-01

    This paper discusses the design of linear algebra libraries for high performance computers. Particular emphasis is placed on the development of scalable algorithms for MIMD distributed memory concurrent computers. A brief description of the EISPACK, LINPACK, and LAPACK libraries is given, followed by an outline of ScaLAPACK, which is a distributed memory version of LAPACK currently under development. The importance of block-partitioned algorithms in reducing the frequency of data movement between different levels of hierarchical memory is stressed. The use of such algorithms helps reduce the message startup costs on distributed memory concurrent computers. Other key ideas in our approach are the use of distributed versions of the Level 3 Basic Linear Algebra Subprograms (BLAS) as computational building blocks, and the use of Basic Linear Algebra Communication Subprograms (BLACS) as communication building blocks. Together the distributed BLAS and the BLACS can be used to construct higher-level algorithms, and hide many details of the parallelism from the application developer. The block-cyclic data distribution is described, and adopted as a good way of distributing block-partitioned matrices. Block-partitioned versions of the Cholesky and LU factorizations are presented, and optimization issues associated with the implementation of the LU factorization algorithm on distributed memory concurrent computers are discussed, together with its performance on the Intel Delta system. Finally, approaches to the design of library interfaces are reviewed.
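
    As a concrete picture of the block-cyclic distribution (a schematic sketch, not ScaLAPACK code), the fragment below maps every entry of a small matrix to the coordinates of its owning process in a 2-D process grid; the matrix size, block size, and grid shape are arbitrary.

        def block_cyclic_owner(i, j, nb, p_rows, p_cols):
            """Process-grid coordinates owning global entry (i, j) under a
            2-D block-cyclic distribution with square blocks of size nb."""
            return ((i // nb) % p_rows, (j // nb) % p_cols)

        if __name__ == "__main__":
            # 8x8 matrix, 2x2 blocks, 2x2 process grid: blocks cycle over the
            # processes, which keeps all of them busy as the active submatrix
            # of a blocked LU or Cholesky factorization shrinks.
            for i in range(8):
                row = [block_cyclic_owner(i, j, 2, 2, 2) for j in range(8)]
                print(" ".join(f"{r}{c}" for r, c in row))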

  13. Towards electromechanical computation: An alternative approach to realize complex logic circuits

    KAUST Repository

    Hafiz, M. A. A.

    2016-08-18

    Electromechanical computing based on micro/nano resonators has recently attracted significant attention. However, full implementation of this technology has been hindered by the difficulty in realizing complex logic circuits. We report here an alternative approach to realize complex logic circuits based on multiple MEMS resonators. As case studies, we report the construction of a single-bit binary comparator, a single-bit 4-to-2 encoder, and parallel XOR/XNOR and AND/NOT logic gates. Toward this, several microresonators are electrically connected and their resonance frequencies are tuned through an electrothermal modulation scheme. The microresonators operating in the linear regime do not require large excitation forces, and work at room temperature and at modest air pressure. This study demonstrates that by reconfiguring the same basic building block, tunable resonator, several essential complex logic functions can be achieved.

  14. Towards electromechanical computation: An alternative approach to realize complex logic circuits

    Science.gov (United States)

    Hafiz, M. A. A.; Kosuru, L.; Younis, M. I.

    2016-08-01

    Electromechanical computing based on micro/nano resonators has recently attracted significant attention. However, full implementation of this technology has been hindered by the difficulty in realizing complex logic circuits. We report here an alternative approach to realize complex logic circuits based on multiple MEMS resonators. As case studies, we report the construction of a single-bit binary comparator, a single-bit 4-to-2 encoder, and parallel XOR/XNOR and AND/NOT logic gates. Toward this, several microresonators are electrically connected and their resonance frequencies are tuned through an electrothermal modulation scheme. The microresonators operating in the linear regime do not require large excitation forces, and work at room temperature and at modest air pressure. This study demonstrates that by reconfiguring the same basic building block, tunable resonator, several essential complex logic functions can be achieved.

  15. High energy density battery based on complex hydrides

    Science.gov (United States)

    Zidan, Ragaiy

    2016-04-26

    A battery and process of operating a battery system is provided using high hydrogen capacity complex hydrides in an organic non-aqueous solvent that allows the transport of hydride ions such as AlH4^- and metal ions during respective discharging and charging steps.

  16. High energy density battery based on complex hydrides

    Energy Technology Data Exchange (ETDEWEB)

    Zidan, Ragaiy

    2016-04-26

    A battery and process of operating a battery system is provided using high hydrogen capacity complex hydrides in an organic non-aqueous solvent that allows the transport of hydride ions such as AlH4^- and metal ions during respective discharging and charging steps.

  17. Computational high-resolution optical imaging of the living human retina

    Science.gov (United States)

    Shemonski, Nathan D.; South, Fredrick A.; Liu, Yuan-Zhi; Adie, Steven G.; Scott Carney, P.; Boppart, Stephen A.

    2015-07-01

    High-resolution in vivo imaging is of great importance for the fields of biology and medicine. The introduction of hardware-based adaptive optics (HAO) has pushed the limits of optical imaging, enabling high-resolution near diffraction-limited imaging of previously unresolvable structures. In ophthalmology, when combined with optical coherence tomography, HAO has enabled a detailed three-dimensional visualization of photoreceptor distributions and individual nerve fibre bundles in the living human retina. However, the introduction of HAO hardware and supporting software adds considerable complexity and cost to an imaging system, limiting the number of researchers and medical professionals who could benefit from the technology. Here we demonstrate a fully automated computational approach that enables high-resolution in vivo ophthalmic imaging without the need for HAO. The results demonstrate that computational methods in coherent microscopy are applicable in highly dynamic living systems.

  18. Mechanistic insights on the ortho-hydroxylation of aromatic compounds by non-heme iron complex: a computational case study on the comparative oxidative ability of ferric-hydroperoxo and high-valent Fe(IV)═O and Fe(V)═O intermediates.

    Science.gov (United States)

    Ansari, Azaj; Kaushik, Abhishek; Rajaraman, Gopalan

    2013-03-20

    ortho-Hydroxylation of aromatic compounds by non-heme Fe complexes has been extensively studied in recent years by several research groups. The nature of the proposed oxidant varies from Fe(III)-OOH to high-valent Fe(IV)=O and Fe(V)=O species, and no definitive consensus has emerged. In this comprehensive study, we have investigated the ortho-hydroxylation of aromatic compounds by an iron complex using hybrid density functional theory incorporating dispersion effects. Three different oxidants, Fe(III)-OOH, Fe(IV)=O, and Fe(V)=O, and two different pathways, H-abstraction and electrophilic attack, have been considered to test the oxidative ability of the different oxidants and to pin down the exact mechanism of this regiospecific reaction. By mapping the potential energy surface of each oxidant, our calculations categorize Fe(III)-OOH as a sluggish oxidant, as both the proximal and distal oxygen atoms of this species have prohibitively high barriers to carry out the aromatic hydroxylation. This is in agreement with the experimental observation that Fe(III)-OOH does not directly attack the aromatic ring. A novel mechanism for the explicit generation of non-heme Fe(IV)=O and Fe(V)=O from isomeric forms of Fe(III)-OOH has been proposed, where the O···O bond is found to cleave in an exclusively homolytic (Fe(IV)=O) or heterolytic (Fe(V)=O) fashion. Apart from having favorable formation energies, the Fe(V)=O species also has a lower barrier height than the corresponding Fe(IV)=O species for the aromatic ortho-hydroxylation reaction. The transient Fe(V)=O prefers electrophilic attack on the benzene ring rather than the usual aromatic C-H activation step. A large thermodynamic drive for the formation of a radical intermediate is encountered in the mechanistic scene, and this intermediate substantially diminishes the energy barrier required for C-H activation by the Fe(V)=O species. Further spin density distribution and the frontier orbitals of

  19. High-throughput all-atom molecular dynamics simulations using distributed computing.

    Science.gov (United States)

    Buch, I; Harvey, M J; Giorgino, T; Anderson, D P; De Fabritiis, G

    2010-03-22

    Although molecular dynamics simulation methods are useful in the modeling of macromolecular systems, they remain computationally expensive, with production work requiring costly high-performance computing (HPC) resources. We review recent innovations in accelerating molecular dynamics on graphics processing units (GPUs), and we describe GPUGRID, a volunteer computing project that uses the GPU resources of nondedicated desktop and workstation computers. In particular, we demonstrate the capability of simulating thousands of all-atom molecular trajectories generated at an average of 20 ns/day each (for systems of approximately 30 000-80 000 atoms). In conjunction with a potential of mean force (PMF) protocol for computing binding free energies, we demonstrate the use of GPUGRID in the computation of accurate binding affinities of the Src SH2 domain/pYEEI ligand complex by reconstructing the PMF over 373 umbrella sampling windows of 55 ns each (20.5 μs of total data). We obtain a standard free energy of binding of -8.7 +/- 0.4 kcal/mol, within 0.7 kcal/mol of experimental results. This infrastructure will provide the basis for a robust system for high-throughput accurate binding affinity prediction.
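
    The last step of such a protocol, from a converged PMF to a standard free energy of binding, can be written in a few lines. The sketch below integrates a radial PMF against the 1 M standard-state volume (V0, about 1661 cubic angstroms); it is the generic textbook formula applied to a synthetic toy well, not the authors' GPUGRID pipeline.

        import numpy as np

        KT = 0.593    # kcal/mol at ~298 K
        V0 = 1661.0   # standard-state volume per molecule at 1 M, in cubic angstroms

        def trapezoid(y, x):
            # Plain trapezoidal rule (avoids NumPy version differences).
            return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

        def binding_dg(r, w):
            """Standard binding free energy (kcal/mol) from a radial PMF w(r)
            decaying to 0 in bulk: dG0 = -kT ln((1/V0) Int 4 pi r^2 e^(-w/kT) dr)."""
            integrand = 4.0 * np.pi * r**2 * np.exp(-w / KT)
            return -KT * np.log(trapezoid(integrand, r) / V0)

        if __name__ == "__main__":
            r = np.linspace(2.0, 20.0, 2000)                 # angstroms
            w = -10.0 * np.exp(-((r - 5.0) / 1.5) ** 2)      # toy 10 kcal/mol well
            print(f"dG0 ~ {binding_dg(r, w):.1f} kcal/mol")  # about -9 for this well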

  20. Application of the Decomposition Method to the Design Complexity of Computer-based Display

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hyoung Ju; Lee, Seung Woo; Seong, Poong Hyun [Korea Advanced Institute of Science and Technology, Daejeon (Korea, Republic of); Park, Jin Kyun [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2012-05-15

    The importance of the design of human machine interfaces (HMIs) for human performance and safety has long been recognized in the process industries. In the case of nuclear power plants (NPPs), HMIs have significant implications for safety, since poor implementation of HMIs can impair the operators' information searching ability, which is considered one of the important aspects of human behavior. To support and increase the efficiency of the operators' information searching behavior, advanced HMIs based on computer technology are provided. Operators in an advanced main control room (MCR) acquire information through the video display units (VDUs) and the large display panel (LDP) required for the operation of NPPs. These computer-based displays contain a very large quantity of information and present it in a greater variety of formats than a conventional MCR. For example, these displays contain more elements such as abbreviations, labels, icons, symbols, coding, and highlighting than conventional ones. As computer-based displays contain more information, the complexity of the elements becomes greater due to the reduced distinctiveness of each element. A greater understanding is emerging about the effectiveness of designs of computer-based displays, including how distinctively display elements should be designed. According to Gestalt theory, people tend to group similar elements based on attributes such as shape, color or pattern, following the principle of similarity. Therefore, it is necessary to consider not only the human operator's perception but also the number of elements making up a computer-based display.

  1. Computational Chemistry to the Rescue: Modern Toolboxes for the Assignment of Complex Molecules by GIAO NMR Calculations.

    Science.gov (United States)

    Grimblat, Nicolas; Sarotti, Ariel M

    2016-08-22

    The calculations of NMR properties of molecules using quantum chemical methods have deeply impacted several branches of organic chemistry. They are particularly important in structural or stereochemical assignments of organic compounds, with implications in total synthesis, stereoselective reactions, and natural products chemistry. In studying the evolution of the strategies developed to support (or reject) a structural proposal, it becomes clear that the most effective and accurate ones involve sophisticated procedures to correlate experimental and computational data. Owing to their relatively high mathematical complexity, such calculations (CP3, DP4, ANN-PRA) are often carried out using additional computational resources provided by the authors (such as applets or Excel files). This Minireview will cover the state-of-the-art of these toolboxes in the assignment of organic molecules, including mathematical definitions, updates, and discussion of relevant examples.

  2. Is Model-Based Development a Favorable Approach for Complex and Safety-Critical Computer Systems on Commercial Aircraft?

    Science.gov (United States)

    Torres-Pomales, Wilfredo

    2014-01-01

    A system is safety-critical if its failure can endanger human life or cause significant damage to property or the environment. State-of-the-art computer systems on commercial aircraft are highly complex, software-intensive, functionally integrated, and network-centric systems of systems. Ensuring that such systems are safe and comply with existing safety regulations is costly and time-consuming as the level of rigor in the development process, especially the validation and verification activities, is determined by considerations of system complexity and safety criticality. A significant degree of care and deep insight into the operational principles of these systems is required to ensure adequate coverage of all design implications relevant to system safety. Model-based development methodologies, methods, tools, and techniques facilitate collaboration and enable the use of common design artifacts among groups dealing with different aspects of the development of a system. This paper examines the application of model-based development to complex and safety-critical aircraft computer systems. Benefits and detriments are identified and an overall assessment of the approach is given.

  3. LT^2C^2: A language of thought with Turing-computable Kolmogorov complexity

    Directory of Open Access Journals (Sweden)

    Santiago Figueira

    2013-03-01

    Full Text Available In this paper, we present a theoretical effort to connect the theory of program size to psychology by implementing a concrete language of thought with Turing-computable Kolmogorov complexity (LT^2C^2) satisfying the following requirements: (1) to be simple enough that the complexity of any given finite binary sequence can be computed; (2) to be based on tangible operations of human reasoning (printing, repeating, ...); (3) to be sufficiently powerful to generate all possible sequences, but not so powerful as to identify regularities which would be invisible to humans. We first formalize LT^2C^2, giving its syntax and semantics and defining an adequate notion of program size. Our setting leads to a Kolmogorov complexity function relative to LT^2C^2 which is computable in polynomial time, and it also induces a prediction algorithm in the spirit of Solomonoff's inductive inference theory. We then prove the efficacy of this language by investigating regularities in strings produced by participants attempting to generate random strings. Participants had a profound understanding of randomness and hence avoided typical misconceptions such as exaggerating the number of alternations. We reasoned that the remaining regularities would express the algorithmic nature of human thoughts, revealed in the form of specific patterns. Kolmogorov complexity relative to LT^2C^2 passed the three tests examined here: (1) human sequences were less complex than control PRNG sequences; (2) human sequences were not stationary, showing decreasing values of complexity resulting from fatigue; (3) each individual showed traces of algorithmic stability, since fitting of partial data was more effective for predicting subsequent data than average fits. This work extends previous efforts to connect Kolmogorov complexity theory and algorithmic information theory to psychology, by explicitly proposing a language which may describe the patterns of human thoughts.
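
    The idea of a deliberately weakened description language whose Kolmogorov complexity is computable can be made concrete in a few lines. The toy language below (an invented illustration, far simpler than LT^2C^2) has only literal printing, repetition, and concatenation, and its shortest-description length is computable by memoized recursion.

        from functools import lru_cache

        @lru_cache(maxsize=None)
        def cost(s):
            """Length of the shortest expression generating s in a toy language
            with literals, k-fold repetition, and concatenation."""
            best = len(s) + 1                 # 'print' plus the literal itself
            n = len(s)
            for d in range(1, n):             # s as a block repeated n // d times
                if n % d == 0 and s[:d] * (n // d) == s:
                    best = min(best, cost(s[:d]) + len(str(n // d)) + 1)
            for cut in range(1, n):           # s as a concatenation of two parts
                best = min(best, cost(s[:cut]) + cost(s[cut:]))
            return best

        if __name__ == "__main__":
            for s in ("00000000", "01010101", "01101001"):
                print(s, cost(s))   # regular strings cost less than irregular ones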

  4. Computational Cellular Dynamics Based on the Chemical Master Equation: A Challenge for Understanding Complexity.

    Science.gov (United States)

    Liang, Jie; Qian, Hong

    2010-01-01

    Modern molecular biology has always been a great source of inspiration for computational science. Half a century ago, the challenge of understanding macromolecular dynamics led the way for computations to be part of the tool set to study molecular biology. Twenty-five years ago, the demand from genome science inspired an entire generation of computer scientists with an interest in discrete mathematics to join the field that is now called bioinformatics. In this paper, we shall lay out a new mathematical theory for dynamics of biochemical reaction systems in a small volume (i.e., mesoscopic) in terms of a stochastic, discrete-state, continuous-time formulation, called the chemical master equation (CME). Similar to the wavefunction in quantum mechanics, the dynamically changing probability landscape associated with the state space provides a fundamental characterization of the biochemical reaction system. The stochastic trajectories of the dynamics are best known through the simulations using the Gillespie algorithm. In contrast to the Metropolis algorithm, this Monte Carlo sampling technique does not follow a process with detailed balance. We shall show several examples of how CMEs are used to model cellular biochemical systems. We shall also illustrate the computational challenges involved: multiscale phenomena, the interplay between stochasticity and nonlinearity, and how macroscopic determinism arises from mesoscopic dynamics. We point out recent advances in computing solutions to the CME, including exact solution of the steady state landscape and stochastic differential equations that offer alternatives to the Gillespie algorithm. We argue that the CME is an ideal system from which one can learn to understand "complex behavior" and complexity theory, and from which important biological insight can be gained.
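
    For readers unfamiliar with the Gillespie algorithm mentioned above, the sketch below simulates the simplest CME example, a birth-death process (synthesis at rate k_on, degradation at rate k_off per copy); the rate values are arbitrary, and the stationary distribution is Poisson with mean k_on/k_off.

        import math
        import random

        def gillespie_birth_death(k_on=1.0, k_off=0.1, x0=0, t_end=50.0, seed=1):
            """Exact stochastic simulation of 0 -> X (rate k_on) and
            X -> 0 (rate k_off * x), following Gillespie's direct method."""
            rng = random.Random(seed)
            t, x = 0.0, x0
            path = [(t, x)]
            while t < t_end:
                a1, a2 = k_on, k_off * x                  # reaction propensities
                a0 = a1 + a2
                t += -math.log(1.0 - rng.random()) / a0   # exponential waiting time
                x += 1 if rng.random() * a0 < a1 else -1  # pick a reaction
                path.append((t, x))
            return path

        if __name__ == "__main__":
            final = gillespie_birth_death()[-1][1]
            print("final copy number:", final, "(stationary mean is k_on/k_off = 10)")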

  5. High Resolution Muon Computed Tomography at Neutrino Beam Facilities

    CERN Document Server

    Suerfu, Burkhant

    2015-01-01

    X-ray computed tomography (CT) has an indispensable role in constructing 3D images of objects made from light materials. However, limited by absorption coefficients, X-rays cannot deeply penetrate materials such as copper and lead. Here we show via simulation that muon beams can provide high resolution tomographic images of dense objects and of structures within the interior of dense objects. The effects of resolution broadening from multiple scattering diminish with increasing muon momentum. As the momentum of the muon increases, the contrast of the image goes down and therefore requires higher resolution in the muon spectrometer to resolve the image. The variance of the measured muon momentum reaches a minimum and then increases with increasing muon momentum. The impact of the increase in variance is to require a higher integrated muon flux to reduce fluctuations. The flux requirements and level of contrast needed for high resolution muon computed tomography are well matched to the muons produced in the pio...

  6. High performance computing for classic gravitational N-body systems

    CERN Document Server

    Capuzzo-Dolcetta, Roberto

    2009-01-01

    The role of gravity is crucial in astrophysics. It determines the evolution of any system, over an enormous range of time and space scales. Astronomical stellar systems, as composed of N interacting bodies, represent examples of self-gravitating systems, usually treatable with the aid of Newtonian gravity except in particular cases. In this note I briefly discuss some of the open problems in the dynamical study of classic self-gravitating N-body systems, over the astronomical range of N. I also point out how modern research in this field necessarily requires heavy use of large scale computations, due to the simultaneous requirements of high precision and high computational speed.

  7. High Performance Computing Facility Operational Assessment 2015: Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Barker, Ashley D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bernholdt, David E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Bland, Arthur S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Gary, Jeff D. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Hack, James J. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; McNally, Stephen T. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Rogers, James H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Smith, Brian E. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Straatsma, T. P. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Sukumar, Sreenivas Rangan [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Thach, Kevin G. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Tichenor, Suzy [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility; Wells, Jack C. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility

    2016-03-01

    Oak Ridge National Laboratory’s (ORNL’s) Leadership Computing Facility (OLCF) continues to surpass its operational target goals: supporting users; delivering fast, reliable systems; creating innovative solutions for high-performance computing (HPC) needs; and managing risks, safety, and security aspects associated with operating one of the most powerful computers in the world. The results can be seen in the cutting-edge science delivered by users and the praise from the research community. Calendar year (CY) 2015 was filled with outstanding operational results and accomplishments: a very high rating from users on overall satisfaction that ties the highest-ever mark set in CY 2014; the greatest number of core-hours delivered to research projects; the largest percentage of capability usage since the OLCF began tracking the metric in 2009; and success in delivering on the allocation of 60, 30, and 10% of core hours offered for the INCITE (Innovative and Novel Computational Impact on Theory and Experiment), ALCC (Advanced Scientific Computing Research Leadership Computing Challenge), and Director’s Discretionary programs, respectively. These accomplishments, coupled with the extremely high utilization rate, represent the fulfillment of the promise of Titan: maximum use by maximum-size simulations. The impact of all of these successes and more is reflected in the accomplishments of OLCF users, with publications this year in notable journals Nature, Nature Materials, Nature Chemistry, Nature Physics, Nature Climate Change, ACS Nano, Journal of the American Chemical Society, and Physical Review Letters, as well as many others. The achievements included in the 2015 OLCF Operational Assessment Report reflect first-ever or largest simulations in their communities; for example Titan enabled engineers in Los Angeles and the surrounding region to design and begin building improved critical infrastructure by enabling the highest-resolution Cybershake map for Southern

  8. Intel: High Throughput Computing Collaboration: A CERN openlab / Intel collaboration

    CERN Document Server

    CERN. Geneva

    2015-01-01

    The Intel/CERN High Throughput Computing Collaboration studies the application of upcoming Intel technologies to the very challenging environment of the LHC trigger and data-acquisition systems. These systems will need to transport and process many terabits of data every second, in some cases with tight latency constraints. Parallelisation and tight integration of accelerators and classical CPU via Intel's OmniPath fabric are the key elements in this project.

  9. The role of interpreters in high performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Naumann, Axel; /CERN; Canal, Philippe; /Fermilab

    2008-01-01

    Compiled code is fast, interpreted code is slow. There is not much we can do about it, and it is the reason why the use of interpreters in high performance computing is usually restricted to job submission. We show where interpreters make sense even in the context of analysis code, and what aspects have to be taken into account to make this combination a success.

  10. Energy conserving numerical methods for the computation of complex vortical flows

    Science.gov (United States)

    Allaneau, Yves

    One of the original goals of this thesis was to develop numerical tools to help with the design of micro air vehicles. Micro Air Vehicles (MAVs) are small flying devices of only a few inches in wing span. Some people consider that, as their size becomes smaller and smaller, it becomes increasingly difficult to keep all the classical control surfaces such as the rudders, the ailerons and the usual propellers. Over the years, scientists have taken inspiration from nature. Birds, by flapping and deforming their wings, are capable of accurate attitude control and are able to generate propulsion. However, biomimetic design has its own limitations, and it is difficult to place a hummingbird in a wind tunnel to study the motion of its wings precisely. Our approach was to use numerical methods to tackle this challenging problem. In order to precisely evaluate the lift and drag generated by the wings, one needs to be able to capture with high fidelity the extremely complex vortical flow produced in the wake. This requires a numerical method that is stable yet not too dissipative, so that the vortices do not get diffused in an unphysical way. We solved this problem by developing a new Discontinuous Galerkin scheme that, in addition to conserving mass, momentum and total energy locally, also preserves kinetic energy globally. This property greatly improves the stability of the simulations, especially in the special case p=0 when the approximation polynomials are taken to be piecewise constant (we recover a finite volume scheme). In addition to needing an adequate numerical scheme, a high fidelity solution requires many degrees of freedom in the computations to represent the flow field. The size of the smallest eddies in the flow is given by the Kolmogorov scale. Capturing these eddies requires a mesh counting on the order of Re³ cells, where Re is the Reynolds number of the flow. We show that under-resolving the system, to a certain extent, is acceptable. However our

  11. High Performance Computing Facility Operational Assessment, FY 2010 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Bland, Arthur S Buddy [ORNL; Hack, James J [ORNL; Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Boudwin, Kathlyn J. [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; White, Julia C [ORNL

    2010-08-01

    Oak Ridge National Laboratory's (ORNL's) Cray XT5 supercomputer, Jaguar, kicked off the era of petascale scientific computing in 2008 with applications that sustained more than a thousand trillion floating point calculations per second - or 1 petaflop. Jaguar continues to grow even more powerful as it helps researchers broaden the boundaries of knowledge in virtually every domain of computational science, including weather and climate, nuclear energy, geosciences, combustion, bioenergy, fusion, and materials science. Their insights promise to broaden our knowledge in areas that are vitally important to the Department of Energy (DOE) and the nation as a whole, particularly energy assurance and climate change. The science of the 21st century, however, will demand further revolutions in computing, supercomputers capable of a million trillion calculations a second - 1 exaflop - and beyond. These systems will allow investigators to continue attacking global challenges through modeling and simulation and to unravel longstanding scientific questions. Creating such systems will also require new approaches to daunting challenges. High-performance systems of the future will need to be codesigned for scientific and engineering applications with best-in-class communications networks and data-management infrastructures and teams of skilled researchers able to take full advantage of these new resources. The Oak Ridge Leadership Computing Facility (OLCF) provides the nation's most powerful open resource for capability computing, with a sustainable path that will maintain and extend national leadership for DOE's Office of Science (SC). The OLCF has engaged a world-class team to support petascale science and to take a dramatic step forward, fielding new capabilities for high-end science. This report highlights the successful delivery and operation of a petascale system and shows how the OLCF fosters application development teams, developing cutting-edge tools

  12. High performance parallel computers for science: New developments at the Fermilab advanced computer program

    Energy Technology Data Exchange (ETDEWEB)

    Nash, T.; Areti, H.; Atac, R.; Biel, J.; Cook, A.; Deppe, J.; Edel, M.; Fischler, M.; Gaines, I.; Hance, R.

    1988-08-01

    Fermilab's Advanced Computer Program (ACP) has been developing highly cost effective, yet practical, parallel computers for high energy physics since 1984. The ACP's latest developments are proceeding in two directions. A Second Generation ACP Multiprocessor System for experiments will include $3500 RISC processors each with performance over 15 VAX MIPS. To support such high performance, the new system allows parallel I/O, parallel interprocess communication, and parallel host processes. The ACP Multi-Array Processor has been developed for theoretical physics. Each $4000 node is a FORTRAN or C programmable pipelined 20 MFlops (peak), 10 MByte single board computer. These are plugged into a 16 port crossbar switch crate which handles both inter- and intra-crate communication. The crates are connected in a hypercube. Site-oriented applications like lattice gauge theory are supported by system software called CANOPY, which makes the hardware virtually transparent to users. A 256 node, 5 GFlop, system is under construction. 10 refs., 7 figs.

  13. Rapid identifying high-influence nodes in complex networks

    Institute of Scientific and Technical Information of China (English)

    宋波; 蒋国平; 宋玉蓉; 夏玲玲

    2015-01-01

    A tiny fraction of influential individuals plays a critical role in the dynamics of complex systems. Identifying the influential nodes in complex networks therefore has theoretical and practical significance. Considering the uncertainties of network scale and topology, and the timeliness of dynamic behaviors in real networks, we propose a rapid identifying method (RIM) to find the fraction of high-influential nodes. Instead of ranking all nodes, our method aims at ranking only a small number of nodes in the network. We set the high-influential nodes as initial spreaders, and evaluate the performance of RIM by the susceptible–infected–recovered (SIR) model. The simulations show that in different networks, RIM performs well on rapidly identifying high-influential nodes, which is verified by typical ranking methods, such as degree, closeness, betweenness, and eigenvector centrality methods.
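
    The evaluation protocol, seeding an SIR epidemic at the top-ranked nodes and measuring the final outbreak size, is easy to reproduce. The sketch below uses degree centrality as the simplest stand-in ranking (not RIM itself) on a toy hub-and-ring graph; the infection probability, graph, and unit infectious period are assumptions.

        import random

        def sir_spread(adj, seeds, beta=0.2, rng=None):
            """One SIR run on adjacency dict adj; each node is infectious for
            one step (unit recovery). Returns the final number of recovered nodes."""
            rng = rng or random.Random(0)
            infected, recovered = set(seeds), set()
            while infected:
                new = set()
                for u in infected:
                    for v in adj[u]:
                        if v not in infected and v not in recovered and rng.random() < beta:
                            new.add(v)
                recovered |= infected
                infected = new - recovered
            return len(recovered)

        def top_degree_seeds(adj, k):
            """Degree centrality as the simplest stand-in for a ranking method."""
            return sorted(adj, key=lambda u: len(adj[u]), reverse=True)[:k]

        if __name__ == "__main__":
            adj = {i: set() for i in range(21)}   # hub 0 plus a 20-node ring
            for i in range(1, 21):
                adj[0].add(i); adj[i].add(0)
                j = 1 + (i % 20)
                adj[i].add(j); adj[j].add(i)
            seeds = top_degree_seeds(adj, 1)
            sizes = [sir_spread(adj, seeds, rng=random.Random(s)) for s in range(200)]
            print("seeds:", seeds, "mean outbreak size:", sum(sizes) / len(sizes))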

  14. A Framework for the Interactive Handling of High-Dimensional Simulation Data in Complex Geometries

    KAUST Repository

    Benzina, Amal

    2013-01-01

    Flow simulations around building infrastructure models involve large scale complex geometries, which, when discretized in adequate detail, entail high computational cost. Moreover, tasks such as simulation insight by steering or optimization require many such costly simulations. In this paper, we illustrate the whole pipeline of an integrated solution for interactive computational steering, developed for complex flow simulation scenarios that depend on a moderate number of both geometric and physical parameters. A mesh generator takes building information model input data and outputs a valid Cartesian discretization. A sparse-grids-based surrogate model (a less costly substitute for the parameterized simulation) uses precomputed data to deliver approximated simulation results at interactive rates. Furthermore, a distributed multi-display visualization environment shows building infrastructure together with flow data. The focus is set on scalability and intuitive user interaction.

  15. High-Precision Computation: Mathematical Physics and Dynamics

    Energy Technology Data Exchange (ETDEWEB)

    Bailey, D. H.; Barrio, R.; Borwein, J. M.

    2010-04-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, studies of the fine structure constant, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, experimental mathematics, evaluation of orthogonal polynomials, numerical integration of ODEs, computation of periodic orbits, studies of the splitting of separatrices, detection of strange nonchaotic attractors, Ising theory, quantum field theory, and discrete dynamical systems. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
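
    As a tiny illustration of why 64-bit arithmetic can be insufficient (assuming the freely available mpmath package), the fragment below evaluates exp(pi*sqrt(163)) to 50 digits; it agrees with an 18-digit integer through twelve decimal places, a distinction that 16-digit IEEE doubles cannot even represent.

        from mpmath import mp, exp, pi, sqrt, mpf

        mp.dps = 50                            # work with 50 significant digits
        x = exp(pi * sqrt(163))                # the "Ramanujan constant"
        print(x)                               # 262537412640768743.99999999999925...
        print(x - mpf("262537412640768744"))   # about -7.5e-13: close to, but not, an integer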

  16. Integrating Reconfigurable Hardware-Based Grid for High Performance Computing

    Directory of Open Access Journals (Sweden)

    Julio Dondo Gazzano

    2015-01-01

    Full Text Available FPGAs have shown several characteristics that make them very attractive for high performance computing (HPC). The impressive speed-up factors that they are able to achieve, the reduced power consumption, and the ease and flexibility of the design process, with fast iterations between consecutive versions, are examples of the benefits obtained with their use. However, there are still some difficulties when using reconfigurable platforms as accelerators that need to be addressed: the need for an in-depth application study to identify potential acceleration, the lack of tools for the deployment of computational problems on distributed hardware platforms, and the low portability of components, among others. This work proposes a complete grid infrastructure for distributed high performance computing based on dynamically reconfigurable FPGAs. Besides, a set of services designed to facilitate application deployment is described. An example application and a comparison with other hardware and software implementations are shown. Experimental results show that the proposed architecture offers encouraging advantages for the deployment of high performance distributed applications, simplifying the development process.

  17. High-Performance Java Codes for Computational Fluid Dynamics

    Science.gov (United States)

    Riley, Christopher; Chatterjee, Siddhartha; Biswas, Rupak; Biegel, Bryan (Technical Monitor)

    2001-01-01

    The computational science community is reluctant to write large-scale, computationally intensive applications in Java due to concerns over Java's poor performance, despite the claimed software engineering advantages of its object-oriented features. Naive Java implementations of numerical algorithms can perform poorly compared to corresponding Fortran or C implementations. To achieve high performance, Java applications must be designed with good performance as a primary goal. This paper presents the object-oriented design and implementation of two real-world applications from the field of Computational Fluid Dynamics (CFD): a finite-volume fluid flow solver (LAURA, from NASA Langley Research Center), and an unstructured mesh adaptation algorithm (2D_TAG, from NASA Ames Research Center). This work builds on our previous experience with the design of high-performance numerical libraries in Java. We examine the performance of the applications using the currently available Java infrastructure and show that the Java version of the flow solver LAURA performs almost within a factor of 2 of the original procedural version. Our Java version of the mesh adaptation algorithm 2D_TAG performs within a factor of 1.5 of its original procedural version on certain platforms. Our results demonstrate that object-oriented software design principles are not necessarily inimical to high performance.

  18. Aging, High Altitude, and Blood Pressure: A Complex Relationship.

    Science.gov (United States)

    Parati, Gianfranco; Ochoa, Juan Eugenio; Torlasco, Camilla; Salvi, Paolo; Lombardi, Carolina; Bilo, Grzegorz

    2015-06-01

    Parati, Gianfranco, Juan Eugenio Ochoa, Camilla Torlasco, Paolo Salvi, Carolina Lombardi, and Grzegorz Bilo. Aging, high altitude, and blood pressure: A complex relationship. High Alt Biol Med 16:97-109, 2015.--Both aging and high altitude exposure may induce important changes in BP regulation, leading to significant increases in BP levels. By inducing atherosclerotic changes, stiffening of large arteries, renal dysfunction, and arterial baroreflex impairment, advancing age may induce progressive increases in systolic BP levels, promoting development and progression of arterial hypertension. It is also known, although mainly from studies in young or middle-aged subjects, that exposure to high altitude may influence different mechanisms involved in BP regulation (i.e., neural central and reflex control of sympathetic activity), leading to important increases in BP levels. The evidence is less clear, however, on whether and to what extent advancing age may influence the BP response to acute or chronic high altitude exposure. This is a question not only of scientific interest but also of practical relevance given the consistent number of elderly individuals who are exposed for short time periods (either for leisure or work) or live permanently at high altitude, in whom arterial hypertension is frequently observed. This article will review the evidence available on the relationship between aging and blood pressure levels at high altitude, the pathophysiological mechanisms behind this complex association, as well as some questions of practical interest regarding antihypertensive treatment in elderly subjects, and the effects of antihypertensive drugs on blood pressure response during high altitude exposure.

  19. Computational Environments and Analysis methods available on the NCI High Performance Computing (HPC) and High Performance Data (HPD) Platform

    Science.gov (United States)

    Evans, B. J. K.; Foster, C.; Minchin, S. A.; Pugh, T.; Lewis, A.; Wyborn, L. A.; Evans, B. J.; Uhlherr, A.

    2014-12-01

    The National Computational Infrastructure (NCI) has established a powerful in-situ computational environment to enable both high performance computing and data-intensive science across a wide spectrum of national environmental data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that supports the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress in addressing harmonisation of the underlying data collections for future transdisciplinary research that enable accurate climate projections. NCI makes available 10+ PB major data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the national scientific records), major research communities, and collaborating overseas organisations. The data is accessible within an integrated HPC-HPD environment - a 1.2 PFlop supercomputer (Raijin), a HPC class 3000 core OpenStack cloud system and several highly connected large scale and high-bandwidth Lustre filesystems. This computational environment supports a catalogue of integrated reusable software and workflows from earth system and ecosystem modelling, weather research, satellite and other observed data processing and analysis. To enable transdisciplinary research on this scale, data needs to be harmonised so that researchers can readily apply techniques and software across the corpus of data available and not be constrained to work within artificial disciplinary boundaries. Future challenges will

  20. Toward a theory of leadership in complex systems: computational modeling explorations.

    Science.gov (United States)

    Hazy, James K

    2008-07-01

    I propose a new theory of leadership in complex systems based upon the computational modeling approaches that have appeared to date. It is new in that it promises an approach that is well specified, coherent across levels of analysis, transparent to the outside observer, and amenable to computational modeling. Although many of its independent components have been modeled, the underlying theory connecting these models is articulated here for the first time. Leadership is defined as those aspects of agent interactions which catalyze changes to the local rules defining other agents' interactions. There are five distinct aspects of leadership to be observed. Leadership involves actions among agents that: (a) identify or espouse a cooperation strategy or program, (b) catalyze conditions where other agents choose to participate in the program, (c) organize choices and actions in other agents to navigate complexity and avoid interaction catastrophe (sometimes called 'complexity catastrophe'), (d) form a distinct output layer that expresses the system as a unity in its environment, and (e) translate feedback into structural changes in the influence network among agents. The contribution of this approach is discussed.

  1. Composition of complex numbers: Delineating the computational role of the left anterior temporal lobe.

    Science.gov (United States)

    Blanco-Elorrieta, Esti; Pylkkänen, Liina

    2016-01-01

    What is the neurobiological basis of our ability to create complex messages with language? Results from multiple methodologies have converged on a set of brain regions as relevant for this general process, but the computational details of these areas remain to be characterized. The left anterior temporal lobe (LATL) has been a consistent node within this network, with results suggesting that although it rather systematically shows increased activation for semantically complex structured stimuli, this effect does not extend to number phrases such as 'three books.' In the present work we used magnetoencephalography to investigate whether numbers in general are an invalid input to the combinatory operations housed in the LATL or whether the lack of LATL engagement for stimuli such as 'three books' is due to the quantificational nature of such phrases. As a relevant test case, we employed complex number terms such as 'twenty-three', where one number term is not a quantifier of the other but rather, the two terms form a type of complex concept. In a number naming paradigm, participants viewed rows of numbers and depending on task instruction, named them as complex number terms ('twenty-three'), numerical quantifications ('two threes'), adjectival modifications ('blue threes') or non-combinatory lists (e.g., 'two, three'). While quantificational phrases failed to engage the LATL as compared to non-combinatory controls, both complex number terms and adjectival modifications elicited a reliable activity increase in the LATL. Our results show that while the LATL does not participate in the enumeration of tokens within a set, exemplified by the quantificational phrases, it does support conceptual combination, including the composition of complex number concepts.

  2. Luminescent rhenium(I) tricarbonyl complexes with pyrazolylamidino ligands: photophysical, electrochemical, and computational studies.

    Science.gov (United States)

    Gómez-Iglesias, Patricia; Guyon, Fabrice; Khatyr, Abderrahim; Ulrich, Gilles; Knorr, Michael; Martín-Alvarez, Jose Miguel; Miguel, Daniel; Villafañe, Fernando

    2015-10-28

    New pyrazolylamidino complexes fac-[ReCl(CO)3(NH=C(Me)pz*-κ(2)N,N)] (pz*H = pyrazole, pzH; 3,5-dimethylpyrazole, dmpzH; indazole, indzH) and fac-[ReBr(CO)3(NH=C(Ph)pz*-κ(2)N,N)] are synthesized via base-catalyzed coupling of the appropriate nitrile with pyrazole, or via metathesis by halide abstraction with AgBF4 from a bromido pyrazolylamidino complex and the subsequent addition of LiCl. In order to study both the influence of the substituents present at the pyrazolylamidino ligand and that of the "sixth" ligand in the complex, photophysical, electrochemical, and computational studies have been carried out on this series and on other complexes previously described by us, of the general formula fac-[ReL(CO)3(NH=C(R')pz*-κ(2)N,N)](n+) (L = Cl, Br; R' = Me, Ph, n = 0; or L = NCMe, dmpzH, indzH, R' = Me, n = 1). All complexes exhibit phosphorescent decays from a prevalently (3)MLCT excited state with quantum yields (Φ) in the range between 0.007 and 0.039, and long lifetimes (τ ∼ 8-1900 ns). The electrochemical study reveals irreversible reduction for all complexes. The oxidation of the neutral complexes was found to be irreversible due to halide dissociation, whereas the cationic species display a reversible process involving the ReI/ReII couple. Density functional and time-dependent density functional theory (TD-DFT) calculations provide a reasonable trend for the values of the emission energies, in line with the experimental photophysical data, supporting the (3)MLCT-based character of the emissions.

  3. Users matter : multi-agent systems model of high performance computing cluster users.

    Energy Technology Data Exchange (ETDEWEB)

    North, M. J.; Hood, C. S.; Decision and Information Sciences; IIT

    2005-01-01

    High performance computing clusters have been a critical resource for computational science for over a decade and have more recently become integral to large-scale industrial analysis. Despite their well-specified components, the aggregate behavior of clusters is poorly understood. The difficulties arise from complicated interactions between cluster components during operation. These interactions have been studied by many researchers, some of whom have identified the need for holistic multi-scale modeling that simultaneously includes network level, operating system level, process level, and user level behaviors. Each of these levels presents its own modeling challenges, but the user level is the most complex due to the adaptability of human beings. In this vein, there are several major user modeling goals, namely descriptive modeling, predictive modeling and automated weakness discovery. This study shows how multi-agent techniques were used to simulate a large-scale computing cluster at each of these levels.

  4. Fault Tolerance and COTS: Next Generation of High Performance Satellite Computers

    Science.gov (United States)

    Behr, P.; Bärwald, W.; Brieß, K.; Montenegro, S.

    The increasing complexity of future satellite missions requires adequately powerful on-board computer systems. The obvious performance gap between state-of-the-art microprocessor technology ("commercial-off-the-shelf", COTS) and available radiation-hard components already impedes the realization of innovative satellite applications requiring high performance on-board data processing. In the paper we emphasize the advantages of the COTS approach for future on-board computer systems (OBCS) and we show why we are convinced that this approach is feasible. We present the architecture of the fault tolerant control computer of the BIRD satellite and finally we show some results of the BIRD mission after 20 months in orbit, especially the experience with its COTS-based control computer.

  5. Unravelling the complexity of signalling networks in cancer: A review of the increasing role for computational modelling.

    Science.gov (United States)

    Garland, John

    2017-09-01

    Cancer induction is a highly complex process involving hundreds of different inducers but whose eventual outcome is the same. Clearly, it is essential to understand how signalling pathways and networks generated by these inducers interact to regulate cell behaviour and create the cancer phenotype. While enormous strides have been made in identifying key networking profiles, the amount of data generated far exceeds our ability to understand how it all "fits together". The number of potential interactions is astronomically large and requires novel approaches and extreme computation methods to dissect them out. However, such methodologies have high intrinsic mathematical and conceptual content which is difficult to follow. This review explains how computational modelling is progressively finding solutions and also revealing unexpected and unpredictable nano-scale molecular behaviours extremely relevant to how signalling and networking are coherently integrated. It is divided into linked sections illustrated by numerous figures from the literature describing different approaches and offering visual portrayals of networking and major conceptual advances in the field. First, the problem of signalling complexity and data collection is illustrated for only a small selection of known oncogenes. Next, new concepts from biophysics, molecular behaviours, kinetics, organisation at the nano level and predictive models are presented. These areas include: visual representations of networking, Energy Landscapes and energy transfer/dissemination (entropy); diffusion, percolation; molecular crowding; protein allostery; quinary structure and fractal distributions; energy management, metabolism and re-examination of the Warburg effect. The importance of unravelling complex network interactions is then illustrated for some widely-used drugs in cancer therapy whose interactions are very extensive. Finally, use of computational modelling to develop micro- and nano-functional models

  6. A new massively parallel version of CRYSTAL for large systems on high performance computing architectures.

    Science.gov (United States)

    Orlando, Roberto; Delle Piane, Massimo; Bush, Ian J; Ugliengo, Piero; Ferrabone, Matteo; Dovesi, Roberto

    2012-10-30

    Fully ab initio treatment of complex solid systems needs computational software which is able to efficiently take advantage of the growing power of high performance computing (HPC) architectures. Recent improvements in CRYSTAL, a periodic ab initio code that uses a Gaussian basis set, allow treatment of very large unit cells for crystalline systems on HPC architectures with high parallel efficiency in terms of running time and memory requirements. The latter is a crucial point, due to the trend toward architectures relying on a very high number of cores with associated relatively low memory availability. An exhaustive performance analysis shows that density functional calculations, based on a hybrid functional, of low-symmetry systems containing up to 100,000 atomic orbitals and 8000 atoms are feasible on the most advanced HPC architectures available to European researchers today, using thousands of processors.

  7. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  8. Implementation and complexity of the watershed-from-markers algorithm computed as a minimal cost forest

    CERN Document Server

    Felkel, Petr; Bruckschwaiger, Mario; Wegenkittl, Rainer

    2001-01-01

    The watershed algorithm is a classical algorithm in mathematical morphology. Lotufo et al. published a principle of watershed computation by means of an iterative forest transform (IFT), which computes a shortest-path forest from given markers. The algorithm was described for the 2D case (images) without a detailed discussion of its computational and memory demands for real datasets. Because the IFT cleverly solves the problem of plateaus and gives precise results when thin objects have to be segmented, it is natural to apply it to 3D datasets, bearing in mind the need to minimize the higher memory consumption of the 3D case without losing the low asymptotic time complexity of O(m+C) (or the real computation speed). The main goal of this paper is an implementation of the IFT algorithm with a bucket priority queue, carefully tuned to reach minimal memory consumption. The paper presents five possible modifications and methods of implementation of...
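
    The core of the minimal-cost-forest computation is compact enough to sketch. The toy below (not the paper's tuned implementation; 4-connectivity and a max-arc path cost are assumed here) grows a shortest-path forest from the markers using a bucket priority queue over the C possible costs, which is what yields the O(m+C) behaviour:

        import numpy as np
        from collections import deque

        def watershed_ift(image, markers):
            """Watershed-from-markers as a minimax shortest-path forest,
            using a bucket queue over the C possible costs -> O(m + C)."""
            C = int(image.max()) + 1
            labels = markers.copy()
            cost = np.full(image.shape, C, dtype=int)     # C acts as +infinity
            buckets = [deque() for _ in range(C + 1)]
            for y, x in zip(*np.nonzero(markers)):
                cost[y, x] = 0
                buckets[0].append((y, x))
            for c in range(C + 1):                        # monotone bucket scan
                q = buckets[c]
                while q:
                    y, x = q.popleft()
                    if cost[y, x] != c:                   # stale queue entry
                        continue
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < image.shape[0] and 0 <= nx < image.shape[1]:
                            nc = max(c, int(image[ny, nx]))   # path cost = max altitude
                            if nc < cost[ny, nx]:
                                cost[ny, nx] = nc
                                labels[ny, nx] = labels[y, x]
                                buckets[nc].append((ny, nx))
            return labels

        img = np.array([[0, 0, 3, 0, 0],
                        [0, 0, 3, 0, 0],
                        [0, 0, 3, 0, 0]])
        mks = np.zeros_like(img)
        mks[1, 0], mks[1, 4] = 1, 2
        print(watershed_ift(img, mks))   # ridge pixels go to whichever marker arrives first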

  9. Fast computation of scattering from 3D complex structures by MLFMA

    Institute of Scientific and Technical Information of China (English)

    Hu Jun; Nie Zaiping; Que Xiaofeng; Meng Min

    2008-01-01

    This paper introduces research on the extension of the multilevel fast multipole algorithm (MLFMA) to 3D complex structures, including coated objects, thin dielectric sheets, composite dielectric-conductor bodies, and cavities. The impedance boundary condition is used for scattering from objects coated with thin lossy material. Instead of a volume integral equation, a surface integral equation is applied to thin dielectric sheets through a resistive-sheet boundary condition. To realize fast computation of scattering from composite homogeneous dielectric and conducting bodies, a surface integral equation based on the equivalence principle is used. Compared with the traditional volume integral equation, the surface integral equation greatly reduces the number of unknowns. To compute a conducting cavity with an electrically large aperture, an electric field integral equation is applied. Some numerical results are given to demonstrate the validity and accuracy of the present methods.

  10. A method to compute derivatives of functions of large complex matrices

    CERN Document Server

    Puhr, M

    2016-01-01

    A recently developed numerical method for the calculation of derivatives of functions of general complex matrices, which can also be combined with implicit matrix function approximations such as Krylov-Ritz type algorithms, is presented. An important use case for the method in the context of lattice gauge theory is the overlap Dirac operator at finite quark chemical potential. Derivatives of the lattice Dirac operator are necessary for the computation of conserved lattice currents or the fermionic force in Hybrid Monte-Carlo and Langevin simulations. To calculate the overlap Dirac operator at finite chemical potential the product of the sign function of a non-Hermitian matrix with a vector has to be computed. For non-Hermitian matrices it is not possible to efficiently approximate the sign function with polynomials or rational functions. Implicit approximation algorithms, like Krylov-Ritz methods, that depend on the source vector have to be used instead. Our method can also provide derivatives of such implici...
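
    For matrices small enough to handle densely, one standard route to such derivatives (shown here as background; it is not the implicit Krylov-Ritz approach this paper targets) is the block-triangular identity, by which the matrix function of [[A, E], [0, A]] carries the Fréchet derivative L(A, E) in its upper-right block. A sketch for the sign function, using the Newton iteration S <- (S + S^-1)/2:

        import numpy as np

        def matrix_sign(M, iters=60, tol=1e-12):
            """Newton iteration for sign(M); converges provided M has
            no eigenvalues on the imaginary axis."""
            S = M.astype(complex)
            for _ in range(iters):
                S_next = 0.5 * (S + np.linalg.inv(S))
                if np.linalg.norm(S_next - S, 1) < tol * np.linalg.norm(S, 1):
                    return S_next
                S = S_next
            return S

        def sign_and_frechet(A, E):
            """Block identity: f([[A,E],[0,A]]) = [[f(A), L(A,E)], [0, f(A)]]."""
            n = A.shape[0]
            Z = np.zeros_like(A)
            S = matrix_sign(np.block([[A, E], [Z, A]]))
            return S[:n, :n], S[:n, n:]

        # test matrix with known eigenvalues off the imaginary axis (invented example)
        rng = np.random.default_rng(0)
        V = rng.standard_normal((4, 4))
        A = V @ np.diag([1.5, -2.0, 0.7, -0.4]) @ np.linalg.inv(V)
        E = rng.standard_normal((4, 4))
        S, L = sign_and_frechet(A, E)
        h = 1e-7                                     # finite-difference check
        fd = (matrix_sign(A + h * E) - matrix_sign(A)) / h
        print(np.linalg.norm(L - fd) / np.linalg.norm(fd))   # small relative error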

  11. Phase transition and computational complexity in a stochastic prime number generator

    Energy Technology Data Exchange (ETDEWEB)

    Lacasa, L; Luque, B [Departamento de Matematica Aplicada y EstadIstica, ETSI Aeronauticos, Universidad Politecnica de Madrid, Plaza Cardenal Cisneros 3, Madrid 28040 (Spain); Miramontes, O [Departamento de Sistemas Complejos, Instituto de FIsica, Universidad Nacional Autonoma de Mexico, Mexico 01415 DF (Mexico)], E-mail: lucas@dmae.upm.es

    2008-02-15

    We introduce a prime number generator in the form of a stochastic algorithm. The character of this algorithm gives rise to a continuous phase transition which distinguishes a phase where the algorithm is able to reduce the whole system of numbers into primes and a phase where the system reaches a frozen state with low prime density. In this paper, we first present a broader characterization of this phase transition, in both analytical and numerical terms. Critical exponents are calculated, and data collapse is provided. Further on, we redefine the model as a search problem, placing it within the framework of computational complexity theory. We suggest that the system belongs to the class NP. The computational cost is maximal around the threshold, as is common in many algorithmic phase transitions, revealing the presence of an easy-hard-easy pattern. We finally relate the nature of the phase transition to an average-case classification of the problem.

  12. Large-scale lattice Boltzmann simulations of complex fluids: advances through the advent of computational Grids.

    Science.gov (United States)

    Harting, Jens; Chin, Jonathan; Venturoli, Maddalena; Coveney, Peter V

    2005-08-15

    During the last 2.5 years, the RealityGrid project has allowed us to be one of the few scientific groups involved in the development of computational Grids. Since smoothly working production Grids are not yet available, we have been able to substantially influence the direction of software and Grid deployment within the project. In this paper, we review our results from large-scale three-dimensional lattice Boltzmann simulations performed over the last 2.5 years. We describe how the proactive use of computational steering, and advanced job migration and visualization techniques enabled us to do our scientific work more efficiently. The projects reported on in this paper are studies of complex fluid flows under shear or in porous media, as well as large-scale parameter searches, and studies of the self-organization of liquid cubic mesophases.

  13. Resilient and Robust High Performance Computing Platforms for Scientific Computing Integrity

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Yier [Univ. of Central Florida, Orlando, FL (United States)

    2017-07-14

    As technology advances, computer systems are subject to increasingly sophisticated cyber-attacks that compromise both their security and integrity. High performance computing platforms used in commercial and scientific applications involving sensitive, or even classified, data are frequently targeted by powerful adversaries. This situation is made worse by a lack of fundamental security solutions that both perform efficiently and are effective at preventing threats. Current security solutions fail to address the threat landscape and ensure the integrity of sensitive data. As challenges rise, both the private and public sectors will require robust technologies to protect their computing infrastructure. The research outcomes from this project address these challenges. For example, we present LAZARUS, a novel technique to harden kernel Address Space Layout Randomization (KASLR) against paging-based side-channel attacks. In particular, our scheme allows for fine-grained protection of the virtual memory mappings that implement the randomization. We demonstrate the effectiveness of our approach by hardening a recent Linux kernel with LAZARUS, mitigating all of the previously presented side-channel attacks on KASLR. Our extensive evaluation shows that LAZARUS incurs only 0.943% overhead for standard benchmarks, and is therefore highly practical. We also introduce HA2lloc, a hardware-assisted allocator that is capable of leveraging an extended memory management unit to detect memory errors in the heap. We also perform testing using HA2lloc in a simulation environment and find that the approach is capable of preventing common memory vulnerabilities.

  14. FPGAs in High Performance Computing: Results from Two LDRD Projects.

    Energy Technology Data Exchange (ETDEWEB)

    Underwood, Keith D; Ulmer, Craig D.; Thompson, David; Hemmert, Karl Scott

    2006-11-01

    Field programmable gate arrays (FPGAs) have been used as alternative computational devices for over a decade; however, they have not been used for traditional scientific computing due to their perceived lack of floating-point performance. In recent years, there has been a surge of interest in alternatives to traditional microprocessors for high performance computing. Sandia National Labs began two projects to determine whether FPGAs would be a suitable alternative to microprocessors for high performance scientific computing and, if so, how they should be integrated into the system. We present results that indicate that FPGAs could have a significant impact on future systems. FPGAs have the potential to deliver order-of-magnitude performance wins on several key algorithms; however, there are serious questions as to whether the system integration challenge can be met. Furthermore, there remain challenges in FPGA programming and system-level reliability when using FPGA devices. Acknowledgment: Arun Rodrigues provided valuable support and assistance in the use of the Structural Simulation Toolkit within an FPGA context. Curtis Janssen and Steve Plimpton provided valuable insights into the workings of two Sandia applications (MPQC and LAMMPS, respectively).

  15. High performance computing and communications: FY 1995 implementation plan

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1994-04-01

    The High Performance Computing and Communications (HPCC) Program was formally established following passage of the High Performance Computing Act of 1991 signed on December 9, 1991. Ten federal agencies in collaboration with scientists and managers from US industry, universities, and laboratories have developed the HPCC Program to meet the challenges of advancing computing and associated communications technologies and practices. This plan provides a detailed description of the agencies' HPCC implementation plans for FY 1994 and FY 1995. This Implementation Plan contains three additional sections. Section 3 provides an overview of the HPCC Program definition and organization. Section 4 contains a breakdown of the five major components of the HPCC Program, with an emphasis on the overall directions and milestones planned for each one. Section 5 provides a detailed look at HPCC Program activities within each agency. Although the Department of Education is an official HPCC agency, its current funding and reporting of crosscut activities goes through the Committee on Education and Health Resources, not the HPCC Program. For this reason the Implementation Plan covers nine HPCC agencies.

  16. A Computer Controlled Precision High Pressure Measuring System

    Science.gov (United States)

    Sadana, S.; Yadav, S.; Jha, N.; Gupta, V. K.; Agarwal, R.; Bandyopadhyay, A. K.; Saxena, T. K.

    2011-01-01

    A microcontroller (AT89C51)-based electronics package has been designed and developed for a high-precision calibrator based on a Digiquartz pressure transducer (DQPT) for the measurement of high hydrostatic pressure up to 275 MPa. The input signal from the DQPT is converted into a square waveform and its frequency multiplied by a factor of ten using a phase-locked loop. An octal buffer stores the calculated frequency, which in turn is fed to the AT89C51 microcontroller, interfaced with a liquid crystal display to show the frequency as well as the corresponding pressure in user-friendly units. The electronics is interfaced with a computer over RS232 for automatic data acquisition, computation and storage, with the acquisition software written in Visual Basic 6.0. The system is capable of measuring frequency up to 4 MHz with a resolution of 0.01 Hz and pressure up to 275 MPa with a resolution of 0.001 MPa, within a measurement uncertainty of 0.025%. Details of the hardware of the pressure measuring system, the associated electronics, the software and the calibration are discussed in this paper.
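
    As a flavour of the data-processing step, the snippet below converts a measured frequency to pressure with a quadratic calibration curve; the functional form and every coefficient are invented placeholders, not the instrument's actual calibration:

        def pressure_from_frequency(f_hz, a0=-275.0, a1=1.0e-4, a2=3.0e-12):
            """Hypothetical calibration P(f) = a0 + a1*f + a2*f**2, in MPa."""
            return a0 + a1 * f_hz + a2 * f_hz ** 2

        f = 2_750_123.45        # Hz, as read after the x10 frequency multiplication
        print(f"{pressure_from_frequency(f):.3f} MPa")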

  17. A new fast algorithm for computing complex number-theoretic transforms

    Science.gov (United States)

    Reed, I. S.; Liu, K. Y.; Truong, T. K.

    1977-01-01

    A high-radix fast Fourier transform (FFT) algorithm for computing transforms over GF(q²), where q is a Mersenne prime, is developed to implement fast circular convolutions. This new algorithm requires substantially fewer multiplications than the conventional FFT.
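
    The convolution property such transforms exploit is easy to demonstrate. The toy below (a direct O(n²) number-theoretic transform over GF(q) with q = 2¹³ - 1, not the paper's high-radix FFT over GF(q²)) computes a circular convolution by pointwise multiplication in the transform domain:

        Q = 2**13 - 1                      # q = 8191, a Mersenne prime

        def element_of_order(n):
            """Find an element of multiplicative order n in GF(Q); n must divide Q-1."""
            assert (Q - 1) % n == 0
            for g in range(2, Q):
                w = pow(g, (Q - 1) // n, Q)
                if all(pow(w, k, Q) != 1 for k in range(1, n)):
                    return w
            raise ValueError("no element found")

        def ntt(a, w):
            """Direct O(n^2) transform A_i = sum_j a_j * w^(i*j) mod Q."""
            n = len(a)
            return [sum(a[j] * pow(w, i * j, Q) for j in range(n)) % Q for i in range(n)]

        n = 14                             # transform length; divides Q-1 = 8190
        w = element_of_order(n)
        a = [3, 1, 4, 1, 5] + [0] * (n - 5)
        b = [2, 7, 1] + [0] * (n - 3)
        spec = [x * y % Q for x, y in zip(ntt(a, w), ntt(b, w))]
        inv_w, inv_n = pow(w, Q - 2, Q), pow(n, Q - 2, Q)
        c = [x * inv_n % Q for x in ntt(spec, inv_w)]
        print(c[:7])                       # [6, 23, 18, 31, 21, 36, 5]: conv of a and b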

  18. Multimodal Learning Analytics and Education Data Mining: Using Computational Technologies to Measure Complex Learning Tasks

    Science.gov (United States)

    Blikstein, Paulo; Worsley, Marcelo

    2016-01-01

    New high-frequency multimodal data collection technologies and machine learning analysis techniques could offer new insights into learning, especially when students have the opportunity to generate unique, personalized artifacts, such as computer programs, robots, and solutions to engineering challenges. To date most of the work on learning analytics…

  19. VinaMPI: facilitating multiple receptor high-throughput virtual docking on high-performance computers.

    Science.gov (United States)

    Ellingson, Sally R; Smith, Jeremy C; Baudry, Jerome

    2013-09-30

    The program VinaMPI has been developed to enable massively large virtual drug screens on leadership-class computing resources, using a large number of cores to decrease the time-to-completion of the screen. VinaMPI is a massively parallel Message Passing Interface (MPI) program based on the multithreaded virtual docking program AutodockVina, and is used to distribute tasks while multithreading is used to speed-up individual docking tasks. VinaMPI uses a distribution scheme in which tasks are evenly distributed to the workers based on the complexity of each task, as defined by the number of rotatable bonds in each chemical compound investigated. VinaMPI efficiently handles multiple proteins in a ligand screen, allowing for high-throughput inverse docking that presents new opportunities for improving the efficiency of the drug discovery pipeline. VinaMPI successfully ran on 84,672 cores with a continual decrease in job completion time with increasing core count. The ratio of the number of tasks in a screening to the number of workers should be at least around 100 in order to have a good load balance and an optimal job completion time. The code is freely available and downloadable. Instructions for downloading and using the code are provided in the Supporting Information.
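
    The distribution scheme can be pictured with a few lines of scheduling code. The sketch below (an illustration of weighted static load balancing, not necessarily VinaMPI's exact logic; the ligand list and bond counts are fabricated) deals tasks, heaviest first, to the currently least-loaded worker:

        import heapq

        def distribute(tasks, n_workers):
            """tasks: (name, rotatable_bonds) pairs; greedy heaviest-first dealing."""
            plan = [[] for _ in range(n_workers)]
            load = [(0, w) for w in range(n_workers)]      # heap of (weight, worker)
            heapq.heapify(load)
            for name, bonds in sorted(tasks, key=lambda t: -t[1]):
                total, w = heapq.heappop(load)             # least-loaded worker so far
                plan[w].append(name)
                heapq.heappush(load, (total + bonds, w))
            return plan

        ligands = [(f"lig{i:03d}", (7 * i) % 12 + 1) for i in range(40)]
        for w, jobs in enumerate(distribute(ligands, 4)):
            print(f"worker {w}: {len(jobs)} ligands, e.g. {jobs[:3]}")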

  20. Department of Energy Mathematical, Information, and Computational Sciences Division: High Performance Computing and Communications Program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-11-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, The DOE Program in HPCC), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW).

  1. Department of Energy: MICS (Mathematical Information, and Computational Sciences Division). High performance computing and communications program

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1996-06-01

    This document is intended to serve two purposes. Its first purpose is that of a program status report of the considerable progress that the Department of Energy (DOE) has made since 1993, the time of the last such report (DOE/ER-0536, "The DOE Program in HPCC"), toward achieving the goals of the High Performance Computing and Communications (HPCC) Program. The second purpose is that of a summary report of the many research programs administered by the Mathematical, Information, and Computational Sciences (MICS) Division of the Office of Energy Research under the auspices of the HPCC Program and to provide, wherever relevant, easy access to pertinent information about MICS-Division activities via universal resource locators (URLs) on the World Wide Web (WWW). The information pointed to by the URL is updated frequently, and the interested reader is urged to access the WWW for the latest information.

  2. The Computational Complexity of Portal and Other 3D Video Games

    OpenAIRE

    Demaine, Erik D.; Lockhart, Joshua; Lynch, Jayson

    2016-01-01

    We classify the computational complexity of the popular video games Portal and Portal 2. We isolate individual mechanics of the game and prove NP-hardness, PSPACE-completeness, or (pseudo)polynomiality depending on the specific game mechanics allowed. One of our proofs generalizes to prove NP-hardness of many other video games such as Half-Life 2, Halo, Doom, Elder Scrolls, Fallout, Grand Theft Auto, Left 4 Dead, Mass Effect, Deus Ex, Metal Gear Solid, and Resident Evil. These results build o...

  3. FORTRAN 4 computer program for calculation of thermodynamic and transport properties of complex chemical systems

    Science.gov (United States)

    Svehla, R. A.; Mcbride, B. J.

    1973-01-01

    A FORTRAN IV computer program for the calculation of the thermodynamic and transport properties of complex mixtures is described. The program has the capability of performing calculations such as: (1) chemical equilibrium for assigned thermodynamic states, (2) theoretical rocket performance for both equilibrium and frozen compositions during expansion, (3) incident and reflected shock properties, and (4) Chapman-Jouguet detonation properties. Condensed species, as well as gaseous species, are considered in the thermodynamic calculation; but only the gaseous species are considered in the transport calculations.

  4. High Performance Computing Facility Operational Assessment, CY 2011 Oak Ridge Leadership Computing Facility

    Energy Technology Data Exchange (ETDEWEB)

    Baker, Ann E [ORNL; Barker, Ashley D [ORNL; Bland, Arthur S Buddy [ORNL; Boudwin, Kathlyn J. [ORNL; Hack, James J [ORNL; Kendall, Ricky A [ORNL; Messer, Bronson [ORNL; Rogers, James H [ORNL; Shipman, Galen M [ORNL; Wells, Jack C [ORNL; White, Julia C [ORNL; Hudson, Douglas L [ORNL

    2012-02-01

    Oak Ridge National Laboratory's Leadership Computing Facility (OLCF) continues to deliver the most powerful resources in the U.S. for open science. At 2.33 petaflops peak performance, the Cray XT Jaguar delivered more than 1.4 billion core hours in calendar year (CY) 2011 to researchers around the world for computational simulations relevant to national and energy security; advancing the frontiers of knowledge in physical sciences and areas of biological, medical, environmental, and computer sciences; and providing world-class research facilities for the nation's science enterprise. Users reported more than 670 publications this year arising from their use of OLCF resources. Of these, we report in this review the 300 that are consistent with the guidance provided. Scientific achievements by OLCF users cut across all scales, from atomic to molecular to large-scale structures. At the atomic scale, researchers discovered that the anomalously long half-life of Carbon-14 can be explained by calculating, for the first time, the very complex three-body interactions between all the neutrons and protons in the nucleus. At the molecular scale, researchers combined experimental results from LBL's light source and simulations on Jaguar to discover how DNA replication continues past a damaged site so a mutation can be repaired later. Other researchers combined experimental results from ORNL's Spallation Neutron Source and simulations on Jaguar to reveal the molecular structure of ligno-cellulosic material used in bioethanol production. This year, Jaguar has been used to do billion-cell CFD calculations to develop shock wave compression turbo machinery as a means to meet DOE goals for reducing carbon sequestration costs. General Electric used Jaguar to calculate the unsteady flow through turbo machinery to learn what efficiencies the traditional steady flow assumption is hiding from designers. Even a 1% improvement in turbine design can save the nation

  5. TOWARD HIGHLY SECURE AND AUTONOMIC COMPUTING SYSTEMS: A HIERARCHICAL APPROACH

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Hsien-Hsin S

    2010-05-11

    The overall objective of this research project is to develop novel architectural techniques as well as system software to achieve a highly secure and intrusion-tolerant computing system. Such a system will be autonomous, self-adapting, introspective, with self-healing capability under the circumstances of improper operations, abnormal workloads, and malicious attacks. The scope of this research includes: (1) system-wide, unified introspection techniques for autonomic systems, (2) secure information-flow microarchitecture, (3) memory-centric security architecture, (4) authentication control and its implications for security, (5) digital rights management, and (6) microarchitectural denial-of-service attacks on shared resources. During the period of the project, we developed several architectural techniques and system software for achieving a robust, secure, and reliable computing system toward our goal.

  6. Iterative coupling reservoir simulation on high performance computers

    Institute of Scientific and Technical Information of China (English)

    Lu Bo; Wheeler Mary F

    2009-01-01

    In this paper, the iterative coupling approach is proposed for applications to solving multiphase flow equation systems in reservoir simulation, as it provides a more flexible time-stepping strategy than existing approaches. The iterative method decouples the whole equation systems into pressure and saturation/concentration equations, and then solves them in sequence, implicitly and semi-implicitly. At each time step, a series of iterations are computed, which involve solving linearized equations using specific tolerances that are iteration dependent. Following convergence of subproblems, material balance is checked. Convergence of time steps is based on material balance errors. Key components of the iterative method include phase scaling for deriving a pressure equation and use of several advanced numerical techniques. The iterative model is implemented for parallel computing platforms and shows high parallel efficiency and scalability.
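
    Schematically, the outer loop looks like the toy below; the single-cell "pressure" and "saturation" updates are caricatures invented for this sketch, but the control flow (decoupled sequential solves iterated to a tolerance, with convergence gating acceptance of the time step) follows the description above:

        def advance(p, s, dt, tol=1e-10, max_iters=50):
            """One time step: iterate decoupled pressure/saturation solves."""
            p_k, s_k = p, s
            for it in range(1, max_iters + 1):
                p_new = (p + dt * s_k) / (1.0 + dt)                # implicit toy pressure solve
                s_new = (s + 0.5 * dt * p_new) / (1.0 + 0.5 * dt)  # semi-implicit toy transport
                drift = max(abs(p_new - p_k), abs(s_new - s_k))    # stand-in for the
                p_k, s_k = p_new, s_new                            # material-balance check
                if drift < tol:
                    return p_k, s_k, it                            # step accepted
            raise RuntimeError("not converged: cut the time step and retry")

        p, s = 1.0, 0.3
        for step in range(5):
            p, s, iters = advance(p, s, dt=0.1)
            print(f"step {step}: p={p:.4f} s={s:.4f} ({iters} coupling iterations)")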

  7. High Performance Computing - Power Application Programming Interface Specification.

    Energy Technology Data Exchange (ETDEWEB)

    Laros, James H.,; Kelly, Suzanne M.; Pedretti, Kevin; Grant, Ryan; Olivier, Stephen Lecler; Levenhagen, Michael J.; DeBonis, David

    2014-08-01

    Measuring and controlling the power and energy consumption of high performance computing systems by various components in the software stack is an active research area [13, 3, 5, 10, 4, 21, 19, 16, 7, 17, 20, 18, 11, 1, 6, 14, 12]. Implementations in lower level software layers are beginning to emerge in some production systems, which is very welcome. To be most effective, a portable interface to measurement and control features would significantly facilitate participation by all levels of the software stack. We present a proposal for a standard power Application Programming Interface (API) that endeavors to cover the entire software space, from generic hardware interfaces to the input from the computer facility manager.

  8. Comparative computational study of model halogen-bonded complexes of FKrCl.

    Science.gov (United States)

    Joseph, Jerelle A; McDowell, Sean A C

    2015-03-19

    Quantum chemical calculations for the FKrCl molecule at various levels of theory were performed and suggest that this molecule is metastable and may be amenable to experimental synthesis under cryogenic conditions. The FKrCl molecule forms weak halogen-bonded complexes FKrCl···Y with small molecules like FH and H2O and its computed properties were compared with those for analogous complexes of its precursor, FCl, and its rare gas hydride counterpart, FKrH. The cooperative effect of additional noncovalent interactions introduced at the F atom in the FKrCl···Y dimer (to give Z···FKrCl···Y trimers) showed a general strengthening of the intermolecular interactions in the order halogen bond < hydrogen bond < beryllium bond < lithium bond.

  9. The effects of syntactic complexity on the human-computer interaction

    Science.gov (United States)

    Chechile, R. A.; Fleischman, R. N.; Sadoski, D. M.

    1986-01-01

    Three divided-attention experiments were performed to evaluate the effectiveness of a syntactic analysis of the primary task of editing flight route-way-point information. For all editing conditions, a formal syntactic expression was developed for the operator's interaction with the computer. In terms of the syntactic expression, four measures of syntactic complexity were examined. Increased syntactic complexity did increase the time needed to train operators, but once the operators were trained, syntactic complexity did not influence divided-attention performance. However, the number of memory retrievals required of the operator significantly accounted for the variation in accuracy, workload, and task completion time found on the different editing tasks under attention-sharing conditions.

  10. A computational method for planning complex compound distributions under container, liquid handler, and assay constraints.

    Science.gov (United States)

    Russo, Mark F; Wild, Daniel; Hoffman, Steve; Paulson, James; Neil, William; Nirschl, David S

    2013-10-01

    A systematic method for assembling and solving complex compound distribution problems is presented in detail. The method is based on a model problem that enumerates the mathematical equations and constraints describing a source container, liquid handler, and three types of destination containers involved in a set of compound distributions. One source container and one liquid handler are permitted in any given problem formulation, although any number of compound distributions may be specified. The relative importance of all distributions is expressed by assigning weights, which are factored into the final mathematical problem specification. A computer program was created that automatically assembles and solves a complete compound distribution problem given the parameters that describe the source container, liquid handler, and any number and type of compound distributions. Business rules are accommodated by adjusting weighting factors assigned to each distribution. An example problem, presented and explored in detail, demonstrates complex and nonintuitive solution behavior.

  11. Solving A Kind of High Complexity Multi-Objective Problems by A Fast Algorithm

    Institute of Scientific and Technical Information of China (English)

    Zeng San-you; Ding Li-xin; Kang Li-shan

    2003-01-01

    A fast algorithm is proposed in this paper to solve a kind of high-complexity multi-objective problem. It takes advantage of both the orthogonal design method, to search evenly, and the statistical optimal method, to speed up the computation. It is very suitable for solving high-complexity problems, and quickly yields solutions which converge to the Pareto-optimal set with high precision and uniform distribution. Some complicated multi-objective problems are solved by the algorithm, and the results show that the algorithm is not only fast but also superior to other MCGAs and MOEAs, such as the currently efficient algorithm SPEA, in terms of the precision, quantity and distribution of solutions.

  12. Stochastic Homology. Reduction Formulas for Computing Stochastic Betti Numbers of Maximal Random Complexes with Discrete Probabilities. Computation and Applications

    CERN Document Server

    Todorov, Todor

    2011-01-01

    Consider a chain complex with the single modification that each cell of the complex has an assigned probability distribution. We call this complex a random complex; what should be understood in practice is that we have a classical chain complex whose cells appear and disappear according to some probability distributions. In this paper, we find the stochastic homology of a random complex whose simplices have independent discrete distributions.

  13. Next Generation Seismic Imaging; High Fidelity Algorithms and High-End Computing

    Science.gov (United States)

    Bevc, D.; Ortigosa, F.; Guitton, A.; Kaelin, B.

    2007-05-01

    The rich oil reserves of the Gulf of Mexico are buried in deep and ultra-deep waters up to 30,000 feet from the surface. Minerals Management Service (MMS), the federal agency in the U.S. Department of the Interior that manages the nation's oil, natural gas and other mineral resources on the outer continental shelf in federal offshore waters, estimates that the Gulf of Mexico holds 37 billion barrels of "undiscovered, conventionally recoverable" oil, which, at $50/barrel, would be worth approximately $1.85 trillion. These reserves are very difficult to find and reach due to the extreme depths. Technological advances in seismic imaging represent an opportunity to overcome this obstacle by providing more accurate models of the subsurface. Among these technological advances, Reverse Time Migration (RTM) yields the best possible images. RTM is based on the solution of the two-way acoustic wave-equation. This technique relies on the velocity model to image turning waves. These turning waves are particularly important to unravel subsalt reservoirs and delineate salt-flanks, a natural trap for oil and gas. Because it relies on an accurate velocity model, RTM opens new frontier in designing better velocity estimation algorithms. RTM has been widely recognized as the next chapter in seismic exploration, as it can overcome the limitations of current migration methods in imaging complex geologic structures that exist in the Gulf of Mexico. The chief impediment to the large-scale, routine deployment of RTM has been a lack of sufficient computer power. RTM needs thirty times the computing power used in exploration today to be commercially viable and widely usable. Therefore, advancing seismic imaging to the next level of precision poses a multi-disciplinary challenge. To overcome these challenges, the Kaleidoscope project, a partnership between Repsol YPF, Barcelona Supercomputing Center, 3DGeo Inc., and IBM brings together the necessary components of modeling, algorithms and the
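
    At the heart of RTM is the two-way acoustic wave equation p_tt = c² (p_xx + p_zz), stepped forward (and, for imaging, backward) in time. A minimal 2D second-order finite-difference kernel, with an invented velocity model and source, looks like this:

        import numpy as np

        nx = nz = 201
        dx, dt, nt, f0 = 10.0, 1e-3, 500, 25.0
        c = np.full((nz, nx), 2000.0)          # m/s; made-up two-layer velocity model
        c[120:, :] = 3000.0                    # a single deep reflector

        def ricker(t):
            a = (np.pi * f0 * (t - 0.04)) ** 2
            return (1.0 - 2.0 * a) * np.exp(-a)

        p_prev = np.zeros((nz, nx))
        p = np.zeros((nz, nx))
        for it in range(nt):
            lap = np.zeros((nz, nx))           # 5-point Laplacian on the interior
            lap[1:-1, 1:-1] = (p[2:, 1:-1] + p[:-2, 1:-1] + p[1:-1, 2:]
                               + p[1:-1, :-2] - 4.0 * p[1:-1, 1:-1]) / dx**2
            p_next = 2.0 * p - p_prev + (c * dt) ** 2 * lap   # leapfrog time step
            p_next[2, nx // 2] += ricker(it * dt)             # near-surface point source
            p_prev, p = p, p_next

        print("peak wavefield amplitude:", float(np.abs(p).max()))

    The chosen grid keeps the scheme stable (c_max * dt / dx = 0.3, well under the 2D limit of 1/sqrt(2)); in production RTM the same kernel is run once forward from the source and once in reverse time from the receivers, and the two wavefields are cross-correlated to form the image.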

  14. High-reliability computing for the smarter planet

    Energy Technology Data Exchange (ETDEWEB)

    Quinn, Heather M [Los Alamos National Laboratory; Graham, Paul [Los Alamos National Laboratory; Manuzzato, Andrea [UNIV OF PADOVA; Dehon, Andre [UNIV OF PENN; Carter, Nicholas [INTEL CORPORATION

    2010-01-01

    The geometric rate of improvement of transistor size and integrated circuit performance, known as Moore's Law, has been an engine of growth for our economy, enabling new products and services, creating new value and wealth, increasing safety, and removing menial tasks from our daily lives. Affordable, highly integrated components have enabled both life-saving technologies and rich entertainment applications. Anti-lock brakes, insulin monitors, and GPS-enabled emergency response systems save lives. Cell phones, internet appliances, virtual worlds, realistic video games, and mp3 players enrich our lives and connect us together. Over the past 40 years of silicon scaling, the increasing capabilities of inexpensive computation have transformed our society through automation and ubiquitous communications. In this paper, we will present the concept of the smarter planet, how reliability failures affect current systems, and methods that can be used to increase the reliable adoption of new automation in the future. We will illustrate these issues using a number of different electronic devices in a couple of different scenarios. Recently IBM has been presenting the idea of a 'smarter planet.' In smarter planet documents, IBM discusses increased computer automation of roadways, banking, healthcare, and infrastructure, as automation could create more efficient systems. A necessary component of the smarter planet concept is to ensure that these new systems have very high reliability. Even extremely rare reliability problems can easily escalate to problematic scenarios when implemented at very large scales. For life-critical systems, such as automobiles, infrastructure, medical implantables, and avionic systems, unmitigated failures could be dangerous. As more automation moves into these types of critical systems, reliability failures will need to be managed. As computer automation continues to increase in our society, the need for greater radiation reliability is

  15. High-valent imido complexes of manganese and chromium corroles.

    Science.gov (United States)

    Edwards, Nicola Y; Eikey, Rebecca A; Loring, Megan I; Abu-Omar, Mahdi M

    2005-05-16

    The oxidation reaction of M(tpfc) [M = Mn or Cr and tpfc = tris(pentafluorophenyl)corrole] with aryl azides under photolytic or thermal conditions gives the first examples of mononuclear imido complexes of manganese(V) and chromium(V). These complexes have been characterized by NMR, mass spectrometry, UV-vis, EPR, elemental analysis, and cyclic voltammetry. Two X-ray structures have been obtained for Mn(tpfc)(NMes) and Cr(tpfc)(NMes) [Mes = 2,4,6-(CH(3))(3)C(6)H(2)]. Short metal-imido bonds (1.610 and 1.635 Angstroms) as well as nearly linear M-N-C angles are consistent with M≡NR triple-bond formation. The kinetics of nitrene [NR] group transfer from manganese(V) corroles to various organic phosphines have been defined. Reduction of the manganese(V) corrolato complex affords phosphine imine and Mn(III) with reaction rates that are sensitive to steric and electronic elements of the phosphine substrate. An analogous manganese complex with a variant corrole ligand containing bromine atoms in the beta-pyrrole positions, Mn(Br(8)tpfc)(NAr), has been prepared and studied. Its reaction with PEt(3) is 250x faster than that of the parent tpfc complex, and its Mn(V/IV) couple is shifted by 370 mV to a more positive potential. The EPR spectra of chromium(V) imido corroles reveal a rich signal at ambient temperature consistent with Cr(V)≡NR (d(1), S = 1/2) containing a localized spin density in the d(xy) orbital, and an anisotropic signal at liquid nitrogen temperature. Our results demonstrate the synthetic utility of organic aryl azides in the preparation of mononuclear metal imido complexes previously considered elusive, and suggest strong sigma-donation as the underlying factor in stabilizing high-valent metals by corrole ligands.

  16. A Crafts-Oriented Approach to Computing in High School: Introducing Computational Concepts, Practices, and Perspectives with Electronic Textiles

    Science.gov (United States)

    Kafai, Yasmin B.; Lee, Eunkyoung; Searle, Kristin; Fields, Deborah; Kaplan, Eliot; Lui, Debora

    2014-01-01

    In this article, we examine the use of electronic textiles (e-textiles) for introducing key computational concepts and practices while broadening perceptions about computing. The starting point of our work was the design and implementation of a curriculum module using the LilyPad Arduino in a pre-AP high school computer science class. To…

  17. Synthetic, crystallographic, and computational study of copper(II) complexes of ethylenediaminetetracarboxylate ligands.

    Science.gov (United States)

    Matović, Zoran D; Miletić, Vesna D; Ćendić, Marina; Meetsma, Auke; van Koningsbruggen, Petra J; Deeth, Robert J

    2013-02-04

    Copper(II) complexes of hexadentate ethylenediaminetetracarboxylic acid type ligands H(4)eda3p and H(4)eddadp (H(4)eda3p = ethylenediamine-N-acetic-N,N',N'-tri-3-propionic acid; H(4)eddadp = ethylenediamine-N,N'-diacetic-N,N'-di-3-propionic acid) have been prepared. An octahedral trans(O(6)) geometry (two propionate ligands coordinated in axial positions) has been established crystallographically for the Ba[Cu(eda3p)]·8H(2)O compound, while Ba[Cu(eddadp)]·8H(2)O is proposed to adopt a trans(O(5)) geometry (two axial acetates) on the basis of density functional theory calculations and comparisons of IR and UV-vis spectral data. Experimental and computed structural data correlating similar copper(II) chelate complexes have been used to better understand the isomerism and departure from regular octahedral geometry within the series. The in-plane O-Cu-N chelate angles show the smallest deviation from the ideal octahedral value of 90°, and hence the lowest strain, for the eddadp complex with two equatorial β-propionate rings. A linear dependence between tetragonality and the number of five-membered rings has been established. A natural bonding orbital analysis of the series of complexes is also presented.

  18. Computational Confirmation of the Carrier for the "XCN" Interstellar Ice Bank: OCN(-) Charge Transfer Complexes

    Science.gov (United States)

    Park, J.-Y.; Woon, D. E.

    2004-01-01

    Recent experimental studies provide evidence that the carrier for the so-called XCN feature at 2165 cm(exp -1) (4.62 micron) in young stellar objects is an OCN(-)/NH4(+) charge transfer (CT) complex that forms in energetically processed interstellar icy grain mantles. Although other RCN nitriles and RNC isonitriles have been considered, Greenberg's conjecture that OCN(-) is associated with the XCN feature has persisted for over 15 years. In this work we report a computational investigation that thoroughly confirms the hypothesis that the XCN feature observed in laboratory studies can result from OCN(-)/NH4(+) CT complexes arising from HNCO and NH3 in a water ice environment. Density functional theory calculations with HNCO, NH3, and up to 12 waters reproduce seven spectroscopic measurements associated with XCN: the band origin of the asymmetric stretching mode of OCN(-), shifts due to isotopic substitutions of C, N, O, and H, and two weak features. However, very similar values are also found for the OCN(-)/NH4(+) CT complex arising from HOCN and NH3. In both cases, the complex forms by barrierless proton transfer from HNCO or HOCN to NH3 during the optimization of the solvated system. Scaled B3LYP/6-31+G** harmonic frequencies for the HNCO and HOCN cases are 2181 and 2202 cm(exp -1), respectively.

  19. Impact of familiarity on information complexity in human-computer interfaces

    Directory of Open Access Journals (Sweden)

    Bakaev Maxim

    2016-01-01

    A quantitative measure of information complexity remains very much desirable in the HCI field, since it may aid in the optimization of user interfaces, especially in human-computer systems for controlling complex objects. Our paper is dedicated to exploration of the subjective (subject-dependent) aspect of this complexity, conceptualized as information familiarity. Although research on familiarity in human cognition and behaviour is done in several fields, the accepted models in HCI, such as the Human Processor or the Hick-Hyman law, do not generally consider this issue. In our experimental study the subjects performed search and selection of digits and letters, whose familiarity was conceptualized as frequency of occurrence in numbers and texts. The analysis showed a significant effect of information familiarity on selection time and throughput in regression models, although the R2 values were somewhat low. Still, we hope that our results might aid in the quantification of information complexity and its further application for optimizing interaction in human-machine systems.

  20. NCI's High Performance Computing (HPC) and High Performance Data (HPD) Computing Platform for Environmental and Earth System Data Science

    Science.gov (United States)

    Evans, Ben; Allen, Chris; Antony, Joseph; Bastrakova, Irina; Gohar, Kashif; Porter, David; Pugh, Tim; Santana, Fabiana; Smillie, Jon; Trenham, Claire; Wang, Jingbo; Wyborn, Lesley

    2015-04-01

    The National Computational Infrastructure (NCI) has established a powerful and flexible in-situ petascale computational environment to enable both high performance computing and Data-intensive Science across a wide spectrum of national environmental and earth science data collections - in particular climate, observational data and geoscientific assets. This paper examines 1) the computational environments that supports the modelling and data processing pipelines, 2) the analysis environments and methods to support data analysis, and 3) the progress so far to harmonise the underlying data collections for future interdisciplinary research across these large volume data collections. NCI has established 10+ PBytes of major national and international data collections from both the government and research sectors based on six themes: 1) weather, climate, and earth system science model simulations, 2) marine and earth observations, 3) geosciences, 4) terrestrial ecosystems, 5) water and hydrology, and 6) astronomy, social and biosciences. Collectively they span the lithosphere, crust, biosphere, hydrosphere, troposphere, and stratosphere. The data is largely sourced from NCI's partners (which include the custodians of many of the major Australian national-scale scientific collections), leading research communities, and collaborating overseas organisations. New infrastructures created at NCI mean the data collections are now accessible within an integrated High Performance Computing and Data (HPC-HPD) environment - a 1.2 PFlop supercomputer (Raijin), a HPC class 3000 core OpenStack cloud system and several highly connected large-scale high-bandwidth Lustre filesystems. The hardware was designed at inception to ensure that it would allow the layered software environment to flexibly accommodate the advancement of future data science. New approaches to software technology and data models have also had to be developed to enable access to these large and exponentially

  1. High performance computing and communications: Advancing the frontiers of information technology

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-12-31

    This report, which supplements the President's Fiscal Year 1997 Budget, describes the interagency High Performance Computing and Communications (HPCC) Program. The HPCC Program will celebrate its fifth anniversary in October 1996 with an impressive array of accomplishments to its credit. Over its five-year history, the HPCC Program has focused on developing high performance computing and communications technologies that can be applied to computation-intensive applications. Major highlights for FY 1996: (1) High performance computing systems enable practical solutions to complex problems with accuracies not possible five years ago; (2) HPCC-funded research in very large scale networking techniques has been instrumental in the evolution of the Internet, which continues exponential growth in size, speed, and availability of information; (3) The combination of hardware capability measured in gigaflop/s, networking technology measured in gigabit/s, and new computational science techniques for modeling phenomena has demonstrated that very large scale accurate scientific calculations can be executed across heterogeneous parallel processing systems located thousands of miles apart; (4) Federal investments in HPCC software R and D support researchers who pioneered the development of parallel languages and compilers, high performance mathematical, engineering, and scientific libraries, and software tools--technologies that allow scientists to use powerful parallel systems to focus on Federal agency mission applications; and (5) HPCC support for virtual environments has enabled the development of immersive technologies, where researchers can explore and manipulate multi-dimensional scientific and engineering problems. Educational programs fostered by the HPCC Program have brought into classrooms new science and engineering curricula designed to teach computational science. This document contains a small sample of the significant HPCC Program accomplishments in FY 1996.

  2. Modeling, Simulation and Analysis of Complex Networked Systems: A Program Plan for DOE Office of Advanced Scientific Computing Research

    Energy Technology Data Exchange (ETDEWEB)

    Brown, D L

    2009-05-01

    Many complex systems of importance to the U.S. Department of Energy consist of networks of discrete components. Examples are cyber networks, such as the internet and local area networks over which nearly all DOE scientific, technical and administrative data must travel, the electric power grid, social networks whose behavior can drive energy demand, and biological networks such as genetic regulatory networks and metabolic networks. In spite of the importance of these complex networked systems to all aspects of DOE's operations, the scientific basis for understanding these systems lags seriously behind the strong foundations that exist for the 'physically-based' systems usually associated with DOE research programs that focus on such areas as climate modeling, fusion energy, high-energy and nuclear physics, nano-science, combustion, and astrophysics. DOE has a clear opportunity to develop a similarly strong scientific basis for understanding the structure and dynamics of networked systems by supporting a strong basic research program in this area. Such knowledge will provide a broad basis for, e.g., understanding and quantifying the efficacy of new security approaches for computer networks, improving the design of computer or communication networks to be more robust against failures or attacks, detecting potential catastrophic failure on the power grid and preventing or mitigating its effects, understanding how populations will respond to the availability of new energy sources or changes in energy policy, and detecting subtle vulnerabilities in large software systems to intentional attack. This white paper outlines plans for an aggressive new research program designed to accelerate the advancement of the scientific basis for complex networked systems of importance to the DOE. It will focus principally on four research areas: (1) understanding network structure, (2) understanding network dynamics, (3) predictive modeling and simulation for complex

  3. Next-generation sequencing: big data meets high performance computing.

    Science.gov (United States)

    Schmidt, Bertil; Hildebrandt, Andreas

    2017-02-02

    The progress of next-generation sequencing has a major impact on medical and genomic research. This high-throughput technology can now produce billions of short DNA or RNA fragments in excess of a few terabytes of data in a single run. This leads to massive datasets used by a wide range of applications including personalized cancer treatment and precision medicine. In addition to the hugely increased throughput, the cost of using high-throughput technologies has been dramatically decreasing. A low sequencing cost of around US$1000 per genome has now rendered large population-scale projects feasible. However, to make effective use of the produced data, the design of big data algorithms and their efficient implementation on modern high performance computing systems is required.

  4. Towards robust dynamical decoupling and high fidelity adiabatic quantum computation

    Science.gov (United States)

    Quiroz, Gregory

    Quantum computation (QC) relies on the ability to implement high-fidelity quantum gate operations and successfully preserve quantum state coherence. One of the most challenging obstacles for reliable QC is overcoming the inevitable interaction between a quantum system and its environment. Unwanted interactions result in decoherence processes that cause quantum states to deviate from a desired evolution, consequently leading to computational errors and loss of coherence. Dynamical decoupling (DD) is one method that seeks to attenuate the effects of decoherence by applying strong and expeditious control pulses solely to the system. Provided the pulses are applied over a time duration sufficiently shorter than the correlation time associated with the environment dynamics, DD effectively averages out undesirable interactions and preserves quantum states with a low probability of error, or fidelity loss. In this work, various aspects of this approach are examined, from sequence construction to applications of DD to protecting QC. First, a comprehensive examination of the error suppression properties of a near-optimal DD approach is given to understand the relationship between error suppression capabilities and the number of required DD control pulses in the case of ideal, instantaneous pulses. While such considerations are instructive for examining DD efficiency, i.e., performance vs the number of control pulses, high-fidelity DD in realizable systems is difficult to achieve due to intrinsic pulse imperfections which further contribute to decoherence. As a second consideration, it is shown how one can overcome this hurdle and achieve robustness and recover high-fidelity DD in the presence of faulty control pulses using Genetic Algorithm optimization and sequence symmetrization. Thirdly, to illustrate the implementation of DD in conjunction with QC, the utilization of DD and quantum error correction codes (QECCs) as a protection method for adiabatic quantum
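
    The basic mechanism is easy to demonstrate numerically. In the toy below (ideal, instantaneous pulses and an invented slowly varying dephasing noise model, with no pulse imperfections), a CPMG-style train of pi pulses toggles the sign of the accumulated noise phase, and the average coherence <cos(phi)> improves with the number of pulses:

        import numpy as np

        rng = np.random.default_rng(2)
        T, steps, runs = 1.0, 1000, 400
        dt = T / steps
        t = np.arange(steps) * dt

        def coherence(n_pulses, tau=0.5, sigma=5.0):
            """Average <cos(phi)>, phi = integral of noise times the toggling sign."""
            phis = []
            for _ in range(runs):
                beta = np.empty(steps)             # slow Ornstein-Uhlenbeck-like noise
                beta[0] = rng.normal(0.0, sigma)
                for i in range(1, steps):
                    kick = rng.normal(0.0, sigma * np.sqrt(2 * dt / tau))
                    beta[i] = beta[i - 1] * (1 - dt / tau) + kick
                if n_pulses:                       # ideal pi pulses at (k - 1/2) T / n
                    flips = np.floor(t * n_pulses / T + 0.5).astype(int)
                    sign = 1.0 - 2.0 * (flips % 2) # toggling frame flips the noise sign
                else:
                    sign = np.ones(steps)
                phis.append(np.sum(beta * sign) * dt)
            return float(np.mean(np.cos(phis)))

        for n in (0, 1, 4, 16):                    # free decay, Hahn echo, CPMG-4/16
            print(f"{n:2d} pulses: coherence {coherence(n):.3f}")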

  5. Convergent evolution of complex brains and high intelligence.

    Science.gov (United States)

    Roth, Gerhard

    2015-12-19

    Within the animal kingdom, complex brains and high intelligence have evolved several to many times independently, e.g. among ecdysozoans in some groups of insects (e.g. blattoid, dipteran, hymenopteran taxa), among lophotrochozoans in octopodid molluscs, among vertebrates in teleosts (e.g. cichlids), corvid and psittacid birds, and cetaceans, elephants and primates. High levels of intelligence are invariantly bound to multimodal centres such as the mushroom bodies in insects, the vertical lobe in octopodids, the pallium in birds and the cerebral cortex in primates, all of which contain highly ordered associative neuronal networks. The driving forces for high intelligence may vary among the mentioned taxa, e.g. needs for spatial learning and foraging strategies in insects and cephalopods, for social learning in cichlids, instrumental learning and spatial orientation in birds and social as well as instrumental learning in primates.

  6. Chip-to-board interconnects for high-performance computing

    Science.gov (United States)

    Riester, Markus B. K.; Houbertz-Krauss, Ruth; Steenhusen, Sönke

    2013-02-01

    Supercomputing is reaching toward ExaFLOP processing speeds, creating fundamental challenges for the way that computing systems are designed and built. One governing topic is reducing the power used to operate the system and eliminating the excess heat it generates. Current thinking sees optical interconnects on most interconnect levels as a feasible solution to many of the challenges, although there are still limitations to the technical solutions, in particular with regard to manufacturability. This paper explores drivers for enabling optical interconnect technologies to advance into the module and chip level. The introduction of optical links into High Performance Computing (HPC) could be an option that allows scaling the manufacturing technology to large volume manufacturing. This will drive the need for manufacturability of optical interconnects, giving rise to other challenges that add to the realization of this type of interconnection. This paper describes a solution that allows the creation of optical components on the module level, integrating optical chips, laser diodes or PIN diodes as components, much like the well-known SMD components used for electrical components. The paper shows the main challenges and potential solutions, and proposes a fundamental paradigm shift in the manufacturing of 3-dimensional optical links for the level 1 interconnect (chip package).

  7. Molecular Dynamics Simulations on High-Performance Reconfigurable Computing Systems.

    Science.gov (United States)

    Chiu, Matt; Herbordt, Martin C

    2010-11-01

    The acceleration of molecular dynamics (MD) simulations using high-performance reconfigurable computing (HPRC) has been much studied. Given the intense competition from multicore and GPUs, there is now a question whether MD on HPRC can be competitive. We concentrate here on the MD kernel computation: determining the short-range force between particle pairs. In one part of the study, we systematically explore the design space of the force pipeline with respect to arithmetic algorithm, arithmetic mode, precision, and various other optimizations. We examine simplifications and find that some have little effect on simulation quality. In the other part, we present the first FPGA study of the filtering of particle pairs with nearly zero mutual force, a standard optimization in MD codes. There are several innovations, including a novel partitioning of the particle space, and new methods for filtering and mapping work onto the pipelines. As a consequence, highly efficient filtering can be implemented with only a small fraction of the FPGA's resources. Overall, we find that, for an Altera Stratix-III EP3ES260, 8 force pipelines running at nearly 200 MHz can fit on the FPGA, and that they can perform at 95% efficiency. This results in an 80-fold per core speed-up for the short-range force, which is likely to make FPGAs highly competitive for MD.
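
    A host-side sketch of the filtering idea (a conventional cell-list construction in Python, not the FPGA pipeline mapping the paper describes; positions are random): particles are binned into cells of the cutoff size so only the 27 neighbouring cells need be searched, and pairs beyond the cutoff, whose short-range force is essentially zero, are dropped:

        import numpy as np
        from collections import defaultdict

        rng = np.random.default_rng(0)
        N, box, rc = 2000, 10.0, 1.0
        pos = rng.uniform(0.0, box, (N, 3))
        ncell = int(box / rc)

        cells = defaultdict(list)                  # bin particles into rc-sized cells
        for idx, p in enumerate(pos):
            cells[tuple((p // rc).astype(int))].append(idx)

        offsets = [(dx, dy, dz) for dx in (-1, 0, 1)
                   for dy in (-1, 0, 1) for dz in (-1, 0, 1)]
        pairs = 0
        for (cx, cy, cz), members in cells.items():
            for dx, dy, dz in offsets:             # search only neighbouring cells
                key = ((cx + dx) % ncell, (cy + dy) % ncell, (cz + dz) % ncell)
                for j in cells.get(key, ()):
                    for i in members:
                        if i < j:                  # count each unordered pair once
                            d = pos[i] - pos[j]
                            d -= box * np.round(d / box)   # minimum-image convention
                            if float(d @ d) < rc * rc:
                                pairs += 1
        print(f"{pairs} pairs within cutoff, of {N * (N - 1) // 2} candidates")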

  8. Addressing capability computing challenges of high-resolution global climate modelling at the Oak Ridge Leadership Computing Facility

    Science.gov (United States)

    Anantharaj, Valentine; Norman, Matthew; Evans, Katherine; Taylor, Mark; Worley, Patrick; Hack, James; Mayer, Benjamin

    2014-05-01

    During 2013, high-resolution climate model simulations accounted for over 100 million "core hours" on Titan at the Oak Ridge Leadership Computing Facility (OLCF). The suite of climate modeling experiments, primarily using the Community Earth System Model (CESM) at nearly 0.25 degree horizontal resolution, generated over a petabyte of data in nearly 100,000 files, ranging in size from 20 MB to over 100 GB. Effective utilization of leadership-class resources requires careful planning and preparation. The application software, such as CESM, needs to be ported, optimized and benchmarked for the target platform in order to meet the computational readiness requirements. The model configuration needs to be "tuned and balanced" for the experiments. This can be a complicated and resource-intensive process, especially for high-resolution configurations using complex physics. The volume of I/O also increases with resolution, and new strategies may be required to manage I/O, especially for large checkpoint and restart files that may require more frequent output for resiliency. It is also essential to monitor the application performance during the course of the simulation exercises. Finally, the large volume of data needs to be analyzed to derive the scientific results, and appropriate data and information delivered to the stakeholders. Titan is currently the largest supercomputer available for open science. Its computational resources, in terms of "titan core hours", are allocated primarily via the Innovative and Novel Computational Impact on Theory and Experiment (INCITE) and ASCR Leadership Computing Challenge (ALCC) programs, both sponsored by the U.S. Department of Energy (DOE) Office of Science. Titan, a Cray XK7 system capable of a theoretical peak performance of over 27 PFlop/s, consists of 18,688 compute nodes, each with an NVIDIA Kepler K20 GPU and a 16-core AMD Opteron CPU, for a total of 299,008 Opteron cores and 18,688 GPUs offering a cumulative 560

  9. Estimating wildlife disease dynamics in complex systems using an Approximate Bayesian Computation framework.

    Science.gov (United States)

    Kosmala, Margaret; Miller, Philip; Ferreira, Sam; Funston, Paul; Keet, Dewald; Packer, Craig

    2016-01-01

    Emerging infectious diseases of wildlife are of increasing concern to managers and conservation policy makers, but are often difficult to study and predict due to the complexity of host-disease systems and a paucity of empirical data. We demonstrate the use of an Approximate Bayesian Computation statistical framework to reconstruct the disease dynamics of bovine tuberculosis in Kruger National Park's lion population, despite limited empirical data on the disease's effects in lions. The modeling results suggest that, while a large proportion of the lion population will become infected with bovine tuberculosis, lions are a spillover host and long disease latency is common. In the absence of future aggravating factors, bovine tuberculosis is projected to cause a lion population decline of ~3% over the next 50 years, with the population stabilizing at this new equilibrium. The Approximate Bayesian Computation framework is a new tool for wildlife managers. It allows emerging infectious diseases to be modeled in complex systems by incorporating disparate knowledge about host demographics, behavior, and heterogeneous disease transmission, while allowing inference of unknown system parameters.
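
    The core of the Approximate Bayesian Computation framework is simple enough to sketch: draw candidate parameters from a prior, simulate the system forward, and keep only the draws whose simulated summary statistics land close to the observations. The toy stochastic infection model below is a hypothetical stand-in (not the authors' lion/bovine-tuberculosis model) used to show rejection-sampling ABC end to end.

```python
# Toy illustration of Approximate Bayesian Computation by rejection sampling.
# The SI-type model, prior, and tolerance are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(42)

def simulate_outbreak(beta, n_hosts=200, n_steps=50):
    """Very simple stochastic SI model; returns the final number infected."""
    infected = 1
    for _ in range(n_steps):
        susceptible = n_hosts - infected
        p_inf = 1.0 - np.exp(-beta * infected / n_hosts)
        infected += rng.binomial(susceptible, p_inf)
    return infected

observed = simulate_outbreak(beta=0.15)   # pretend this is the field data

# ABC rejection: draw beta from the prior, keep it if the simulated summary
# statistic falls within a tolerance of the observation.
accepted = []
tolerance = 10
for _ in range(20000):
    beta = rng.uniform(0.0, 0.5)          # prior on the transmission rate
    if abs(simulate_outbreak(beta) - observed) <= tolerance:
        accepted.append(beta)

print(f"posterior mean beta ~ {np.mean(accepted):.3f} "
      f"({len(accepted)} accepted draws)")
```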

  10. Computationally efficient multidimensional analysis of complex flow cytometry data using second order polynomial histograms.

    Science.gov (United States)

    Zaunders, John; Jing, Junmei; Leipold, Michael; Maecker, Holden; Kelleher, Anthony D; Koch, Inge

    2016-01-01

    Many methods have been described for automated clustering analysis of complex flow cytometry data, but so far the goal of efficiently estimating multivariate densities and their modes for a moderate number of dimensions and potentially millions of data points has not been attained. We have devised a novel approach to describing modes using second order polynomial histogram estimators (SOPHE). The method divides the data into multivariate bins and determines the shape of the data in each bin based on second order polynomials, which is an efficient computation. These calculations yield local maxima and allow joining of adjacent bins to identify clusters. The use of second order polynomials also makes optimal use of wide bins, such that in most cases each parameter (dimension) need only be divided into 4-8 bins, again reducing computational load. We have validated this method using defined mixtures of up to 17 fluorescent beads in 16 dimensions, correctly identifying all populations in data files of 100,000 beads, and up to 65 subpopulations of PBMC in 33-dimensional CyTOF data, showing its usefulness in discovery research. SOPHE has the potential to greatly increase the efficiency of analysing complex mixtures of cells in higher dimensions.
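
    A one-dimensional simplification conveys the binned-polynomial idea: divide the range into a few wide bins, fit a quadratic around each bin, and report bins whose quadratic attains a local maximum inside them as candidate modes. The sketch below is illustrative only; it omits the multivariate binning and bin-joining steps of the published SOPHE algorithm.

```python
# 1-D simplification of the binned second-order-polynomial idea: wide bins,
# a quadratic fit per bin, and a mode wherever a concave fit peaks inside
# its own bin. Not the published multivariate SOPHE algorithm.
import numpy as np

rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(-2, 0.5, 5000), rng.normal(3, 0.8, 5000)])

n_bins = 8                                   # wide bins, as in the paper
counts, edges = np.histogram(data, bins=n_bins)
centers = 0.5 * (edges[:-1] + edges[1:])

modes = []
for i in range(1, n_bins - 1):
    # Fit a quadratic through each bin and its two neighbours.
    a, b, c = np.polyfit(centers[i-1:i+2], counts[i-1:i+2], deg=2)
    if a < 0:                                # concave: candidate mode
        peak = -b / (2.0 * a)                # vertex of the parabola
        if edges[i] <= peak < edges[i+1]:    # maximum lies inside this bin
            modes.append(peak)

print("estimated modes:", [f"{m:.2f}" for m in modes])
```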

  11. Physiological Dynamics in Demyelinating Diseases: Unraveling Complex Relationships through Computer Modeling

    Directory of Open Access Journals (Sweden)

    Jay S. Coggan

    2015-09-01

    Full Text Available Despite intense research, few treatments are available for most neurological disorders. Demyelinating diseases are no exception. This is perhaps not surprising considering the multifactorial nature of these diseases, which involve complex interactions between immune system cells, glia and neurons. In the case of multiple sclerosis, for example, there is no unanimity among researchers about the cause or even which system or cell type could be ground zero. This situation precludes the development and strategic application of mechanism-based therapies. We will discuss how computational modeling applied to questions at different biological levels can help link together disparate observations and decipher complex mechanisms whose solutions are not amenable to simple reductionism. By making testable predictions and revealing critical gaps in existing knowledge, such models can help direct research and will provide a rigorous framework in which to integrate new data as they are collected. Nowadays, there is no shortage of data; the challenge is to make sense of it all. In that respect, computational modeling is an invaluable tool that could, ultimately, transform how we understand, diagnose, and treat demyelinating diseases.

  12. Computational characterization of high temperature composites via METCAN

    Science.gov (United States)

    Brown, H. C.; Chamis, Christos C.

    1991-01-01

    The computer code METCAN (METal matrix Composite ANalyzer), developed at NASA Lewis Research Center, can be used to predict the high-temperature behavior of metal matrix composites from the room-temperature properties of their constituents. A reference manual that characterizes some common composites is being developed from METCAN-generated data. Typical plots found in the manual are shown for graphite/copper, including plots of stress-strain, elastic and shear moduli, Poisson's ratio, thermal expansion, and thermal conductivity. This manual can be used in the preliminary design of structures and as a guideline for the behavior of other composite systems.
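
    As a rough illustration of predicting composite behavior from constituent properties, the sketch below combines the textbook rule of mixtures with an assumed power-law thermal-degradation factor. The exponent and all property values are placeholders; METCAN's actual micromechanics equations are considerably more elaborate.

```python
# Minimal sketch of estimating a composite property at temperature from
# room-temperature constituent data. The rule of mixtures and the power-law
# degradation factor are textbook simplifications, not METCAN's equations.

def degraded_modulus(e_room, t, t_room=21.0, t_melt=1085.0, n=0.5):
    """Scale a room-temperature modulus by a power-law thermal factor.
    The exponent n = 0.5 is an assumed placeholder value."""
    return e_room * ((t_melt - t) / (t_melt - t_room)) ** n

def rule_of_mixtures(e_fiber, e_matrix, v_fiber):
    """Longitudinal modulus of a unidirectional composite (Voigt bound)."""
    return v_fiber * e_fiber + (1.0 - v_fiber) * e_matrix

# Graphite/copper example with illustrative property values (GPa).
e_f, e_m, v_f, temp = 230.0, 120.0, 0.45, 500.0
e_long = rule_of_mixtures(e_f, degraded_modulus(e_m, temp), v_f)
print(f"estimated longitudinal modulus at {temp} C: {e_long:.1f} GPa")
```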

  13. PRaVDA: High Energy Physics towards proton Computed Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Price, T., E-mail: t.price@bham.ac.uk

    2016-07-11

    Proton radiotherapy is an increasingly popular modality for treating cancers of the head and neck, and in paediatrics. To maximise the potential of proton radiotherapy it is essential to know the distribution, and more importantly the proton stopping powers, of the body tissues between the proton beam and the tumour. A stopping-power map could be measured directly, and uncertainties in the treatment vastly reduced, if the patient were imaged with protons instead of conventional x-rays. Here we outline the application of technologies developed for High Energy Physics to provide clinical-quality proton Computed Tomography, thereby reducing range uncertainties and enhancing the treatment of cancer.

  14. Using the High-Level Based Program Interface to Facilitate the Large Scale Scientific Computing

    Directory of Open Access Journals (Sweden)

    Yizi Shang

    2014-01-01

    Full Text Available This paper furthers research on facilitating large-scale scientific computing on grid and desktop-grid platforms. The issues addressed include the programming method, the overhead of middleware based on a high-level program interface, and anticipatory data migration. The block-based Gauss-Jordan algorithm, a representative large-scale scientific computation, is used to evaluate these issues. The results show that the high-level program interface makes complex scientific applications on large-scale platforms easier to build, though a little overhead is unavoidable. The anticipatory data-migration mechanism can also improve the efficiency of platforms that process big-data-based scientific applications.
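
    For reference, the elimination scheme underlying the benchmark looks as follows in serial form; the paper's block-based variant partitions the matrix into submatrices so that the same row operations can be distributed across grid nodes. This is a minimal sketch of plain Gauss-Jordan inversion, not the distributed implementation.

```python
# Serial Gauss-Jordan matrix inversion with partial pivoting. The paper's
# benchmark distributes a *block* variant of this elimination across nodes;
# only the underlying scheme is shown here.
import numpy as np

def gauss_jordan_inverse(a):
    """Invert a square matrix by Gauss-Jordan elimination."""
    n = a.shape[0]
    aug = np.hstack([a.astype(float), np.eye(n)])       # [A | I]
    for col in range(n):
        pivot = col + np.argmax(np.abs(aug[col:, col]))  # partial pivoting
        aug[[col, pivot]] = aug[[pivot, col]]            # swap rows
        aug[col] /= aug[col, col]                        # normalize pivot row
        for row in range(n):                             # clear the column
            if row != col:
                aug[row] -= aug[row, col] * aug[col]
    return aug[:, n:]                                    # [I | A^-1]

a = np.array([[4.0, 2.0, 1.0], [2.0, 5.0, 3.0], [1.0, 3.0, 6.0]])
print(np.allclose(gauss_jordan_inverse(a) @ a, np.eye(3)))  # True
```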

  15. Guest Editorial High Performance Computing (HPC) Applications for a More Resilient and Efficient Power Grid

    Energy Technology Data Exchange (ETDEWEB)

    Huang, Zhenyu Henry; Tate, Zeb; Abhyankar, Shrirang; Dong, Zhaoyang; Khaitan, Siddhartha; Min, Liang; Taylor, Gary

    2017-05-01

    The power grid has been evolving over the last 120 years, but it is seeing more changes in this decade and the next than it has seen over the past century. In particular, the widespread deployment of intermittent renewable generation, smart loads and devices, hierarchical and distributed control technologies, phasor measurement units, energy storage, and widespread usage of electric vehicles will require fundamental changes in methods and tools for the operation and planning of the power grid. The resulting new dynamic and stochastic behaviors will demand the inclusion of more complexity in modeling the power grid. Solving such complex models in the traditional computing environment will be a major challenge. Along with the increasing complexity of power system models, the increasing complexity of smart grid data further adds to the prevailing challenges. In this environment, as the myriad smart sensors and meters in the power grid increase by multiple orders of magnitude, so do the volume and speed of the data. The information infrastructure will need to change drastically to support the exchange of enormous amounts of data, as smart grid applications will need the capability to collect, assimilate, analyze and process the data to meet real-time grid functions. High performance computing (HPC) holds the promise to enhance these functions, but it remains a resource that has not been fully explored and adopted for the power grid domain.

  16. SCEC Earthquake System Science Using High Performance Computing

    Science.gov (United States)

    Maechling, P. J.; Jordan, T. H.; Archuleta, R.; Beroza, G.; Bielak, J.; Chen, P.; Cui, Y.; Day, S.; Deelman, E.; Graves, R. W.; Minster, J. B.; Olsen, K. B.

    2008-12-01

    The SCEC Community Modeling Environment (SCEC/CME) collaboration performs basic scientific research using high performance computing with the goal of developing a predictive understanding of earthquake processes and seismic hazards in California. SCEC/CME research areas include dynamic rupture modeling, wave propagation modeling, probabilistic seismic hazard analysis (PSHA), and full 3D tomography. SCEC/CME computational capabilities are organized around the development and application of robust, re-usable, well-validated simulation systems we call computational platforms. The SCEC earthquake system science research program includes a wide range of numerical modeling efforts, and we continue to extend our numerical modeling codes to include more realistic physics and to run at ever higher resolution. During this year, the SCEC/USGS OpenSHA PSHA computational platform was used to calculate PSHA hazard curves and hazard maps using the new UCERF2.0 ERF and new 2008 attenuation relationships. Three SCEC/CME modeling groups ran 1Hz ShakeOut simulations using different codes and computer systems and carefully compared the results. The DynaShake Platform was used to calculate several dynamic rupture-based source descriptions equivalent in magnitude and final surface slip to the ShakeOut 1.2 kinematic source description. A SCEC/CME modeler produced 10Hz synthetic seismograms for the ShakeOut 1.2 scenario rupture by combining 1Hz deterministic simulation results with 10Hz stochastic seismograms. SCEC/CME modelers ran an ensemble of seven ShakeOut-D simulations to investigate the variability of ground motions produced by dynamic rupture-based source descriptions. The CyberShake Platform was used to calculate more than 15 new probabilistic seismic hazard analysis (PSHA) hazard curves using full 3D waveform modeling and the new UCERF2.0 ERF. The SCEC/CME group has also produced significant computer science results this year. Large-scale SCEC/CME high performance codes
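
    The PSHA computation at the heart of platforms such as OpenSHA has a compact textbook form: the annual rate of exceeding a given ground-motion level is the sum, over all ruptures in the forecast, of each rupture's rate times the probability (from a ground-motion model) that it exceeds that level. The sketch below uses three hypothetical ruptures and an assumed lognormal ground-motion model, not UCERF2.0 data.

```python
# Textbook hazard-curve computation underlying PSHA platforms. The
# three-rupture "forecast" and lognormal parameters are illustrative only.
import numpy as np
from scipy.stats import norm

# (annual rate, median PGA in g, lognormal sigma) for hypothetical ruptures
ruptures = [(0.01, 0.30, 0.6), (0.002, 0.55, 0.6), (0.0005, 0.85, 0.6)]

def hazard_curve(x_levels):
    """Annual exceedance rate at each ground-motion level."""
    rates = np.zeros_like(x_levels)
    for nu, median, sigma in ruptures:
        # P(IM > x | rupture), with IM lognormally distributed
        p_exceed = 1.0 - norm.cdf(np.log(x_levels), np.log(median), sigma)
        rates += nu * p_exceed
    return rates

x = np.array([0.1, 0.2, 0.4, 0.8])
for lvl, rate in zip(x, hazard_curve(x)):
    print(f"P(PGA > {lvl:.1f} g): {rate:.2e} /yr")
```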

  17. Computer program for calculation of complex chemical equilibrium compositions and applications. Part 1: Analysis

    Science.gov (United States)

    Gordon, Sanford; Mcbride, Bonnie J.

    1994-01-01

    This report presents the latest in a number of versions of chemical equilibrium and applications programs developed at the NASA Lewis Research Center over more than 40 years. These programs have changed over the years to include additional features and improved calculation techniques and to take advantage of constantly improving computer capabilities. The minimization-of-free-energy approach to chemical equilibrium calculations has been used in all versions of the program since 1967. The two principal purposes of this report are presented in two parts. The first purpose, which is accomplished here in part 1, is to present in detail a number of topics of general interest in complex equilibrium calculations. These topics include mathematical analyses and techniques for obtaining chemical equilibrium; formulas for obtaining thermodynamic and transport mixture properties and thermodynamic derivatives; criteria for inclusion of condensed phases; calculations at a triple point; inclusion of ionized species; and various applications, such as constant-pressure or constant-volume combustion, rocket performance based on either a finite- or infinite-chamber-area model, shock wave calculations, and Chapman-Jouguet detonations. The second purpose of this report, to facilitate the use of the computer code, is accomplished in part 2, entitled 'Users Manual and Program Description'. Various aspects of the computer code are discussed, and a number of examples are given to illustrate its versatility.
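
    The minimization-of-free-energy approach can be sketched on a toy system: choose species mole numbers that minimize the mixture Gibbs energy subject to element-balance constraints. The three-species H/O system and the dimensionless standard-state potentials below are assumed placeholder values; the NASA program handles hundreds of species with tabulated thermodynamic data.

```python
# Toy Gibbs-energy minimization for chemical equilibrium: minimize G/RT over
# mole numbers subject to element balance. Species set and mu0 values are
# placeholders, not the NASA program's thermodynamic database.
import numpy as np
from scipy.optimize import minimize

species = ["H2", "O2", "H2O"]
mu0 = np.array([0.0, 0.0, -30.0])        # mu°/RT, assumed placeholder values
# element-composition matrix: rows = elements (H, O), cols = species
elem = np.array([[2, 0, 2],
                 [0, 2, 1]], dtype=float)
b = elem @ np.array([2.0, 1.0, 0.0])     # start from 2 mol H2 + 1 mol O2

def gibbs(n):
    n = np.maximum(n, 1e-12)             # guard the logarithm
    return np.sum(n * (mu0 + np.log(n / n.sum())))   # G/RT at 1 bar

res = minimize(
    gibbs,
    x0=np.array([0.5, 0.5, 0.5]),
    method="SLSQP",
    bounds=[(1e-10, None)] * 3,
    constraints={"type": "eq", "fun": lambda n: elem @ n - b},
)
for name, n in zip(species, res.x):
    print(f"{name}: {n:.4f} mol")        # nearly all H2O for these mu0 values
```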

  18. Minimally complex ion traps as modules for quantum communication and computing

    Science.gov (United States)

    Nigmatullin, Ramil; Ballance, Christopher J.; de Beaudrap, Niel; Benjamin, Simon C.

    2016-10-01

    Optically linked ion traps are promising as components of network-based quantum technologies, including communication systems and modular computers. Experimental results achieved to date indicate that the fidelity of operations within each ion trap module will be far higher than the fidelity of operations involving the links; fortunately internal storage and processing can effectively upgrade the links through the process of purification. Here we perform the most detailed analysis to date on this purification task, using a protocol which is balanced to maximise fidelity while minimising the device complexity and the time cost of the process. Moreover we ‘compile down’ the quantum circuit to device-level operations including cooling and shuttling events. We find that a linear trap with only five ions (two of one species, three of another) can support our protocol while incorporating desirable features such as global control, i.e. laser control pulses need only target an entire zone rather than differentiating one ion from its neighbour. To evaluate the capabilities of such a module we consider its use both as a universal communications node for quantum key distribution, and as the basic repeating unit of a quantum computer. For the latter case we evaluate the threshold for fault tolerant quantum computing using the surface code, finding acceptable fidelities for the ‘raw’ entangling link as low as 83% (or under 75% if an additional ion is available).
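
    The effect purification has on a noisy link can be illustrated with the textbook single-round recurrence for Werner-state inputs (the BBPSSW protocol), in which two noisy pairs are consumed to produce, probabilistically, one higher-fidelity pair. This simplified recurrence is a stand-in for the paper's balanced protocol, which is further compiled down to device-level operations.

```python
# Textbook single-round entanglement-purification recurrence (BBPSSW,
# Werner-state input): two noisy pairs yield, with some probability, one
# pair of higher fidelity. A simplified stand-in for the paper's protocol,
# shown to illustrate how internal processing "upgrades" a noisy link.
def purify(f):
    """Return (output fidelity, success probability) for input fidelity f."""
    p_success = f**2 + (2/3) * f * (1 - f) + (5/9) * (1 - f) ** 2
    f_out = (f**2 + (1/9) * (1 - f) ** 2) / p_success
    return f_out, p_success

f = 0.83   # 'raw' entangling-link fidelity from the paper's threshold estimate
for round_no in range(1, 4):
    f, p = purify(f)
    print(f"round {round_no}: fidelity {f:.4f} (success prob {p:.3f})")
```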

  19. A computational model of the integration of landmarks and motion in the insect central complex

    Science.gov (United States)

    Sabo, Chelsea; Vasilaki, Eleni; Barron, Andrew B.; Marshall, James A. R.

    2017-01-01

    The insect central complex (CX) is an enigmatic structure whose computational function has evaded inquiry, but has been implicated in a wide range of behaviours. Recent experimental evidence from the fruit fly (Drosophila melanogaster) and the cockroach (Blaberus discoidalis) has demonstrated the existence of neural activity corresponding to the animal’s orientation within a virtual arena (a neural ‘compass’), and this provides an insight into one component of the CX structure. There are two key features of the compass activity: an offset between the angle represented by the compass and the true angular position of visual features in the arena, and the remapping of the 270° visual arena onto an entire circle of neurons in the compass. Here we present a computational model which can reproduce this experimental evidence in detail, and predicts the computational mechanisms that underlie the data. We predict that both the offset and remapping of the fly’s orientation onto the neural compass can be explained by plasticity in the synaptic weights between segments of the visual field and the neurons representing orientation. Furthermore, we predict that this learning is reliant on the existence of neural pathways that detect rotational motion across the whole visual field and uses this rotation signal to drive the rotation of activity in a neural ring attractor. Our model also reproduces the ‘transitioning’ between visual landmarks seen when rotationally symmetric landmarks are presented. This model can provide the basis for further investigation into the role of the central complex, which promises to be a key structure for understanding insect behaviour, as well as suggesting approaches towards creating fully autonomous robotic agents. PMID:28241061
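
    The ring-attractor component of the model can be sketched with a minimal rate-based network: neurons arranged on a ring with local excitation and global inhibition sustain a bump of activity, and an asymmetric rotation-driven input shifts the bump around the ring. All parameters below are illustrative, and the sketch omits the visual-field plasticity that the published model predicts.

```python
# Minimal rate-based ring attractor whose activity bump is shifted by a
# rotation signal, in the spirit of the model's neural compass. Parameters
# are illustrative; the published model also learns the visual-field-to-
# compass mapping through synaptic plasticity.
import numpy as np

N = 32                                        # neurons around the ring
theta = 2 * np.pi * np.arange(N) / N
# local excitation + global inhibition between ring neurons
W = 1.2 * np.exp(np.cos(theta[:, None] - theta[None, :]) / 0.3) / N - 0.5

def step(rate, rotation, dt=0.05):
    """One Euler step; 'rotation' drives asymmetric input that moves the bump."""
    shift_drive = rotation * (np.roll(rate, -1) - np.roll(rate, 1))
    drive = W @ rate + shift_drive
    return rate + dt * (-rate + np.maximum(drive, 0.0))

rate = np.exp(np.cos(theta))                  # initial bump at angle 0
for t in range(400):
    rate = step(rate, rotation=0.5)           # constant rotational motion
    if t % 100 == 99:
        bump = np.angle(np.sum(rate * np.exp(1j * theta)))
        print(f"t={t+1}: bump at {np.degrees(bump):6.1f} deg")
```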

  20. Initial Inductive Learning in a Complex Computer Simulated Environment: The Role of Metacognitive Skills and Intellectual Ability.

    Science.gov (United States)

    Veenman, M. V. J.; Prins, F. J.; Elshout, J. J.

    2002-01-01

    Discusses conceptual knowledge acquisition using computer simulation and describes a study of undergraduates that examined the role of metacognitive skillfulness and intellectual ability during initial inductive learning with a complex computer simulation. Results showed that metacognitive skillfulness was positively related to learning behavior…